I'd like to set up a Keras layer in which each node simply computes the logarithm of the corresponding node in the preceding layer. I see from the Keras documentation that there is a "log" function in its backend module. But somehow I'm not understanding how to use this.
Thanks in advance for any hints you can offer!
You can use any backend function inside a Lambda layer:
from keras.layers import Lambda
import keras.backend as K
Define any function that takes the input tensor:
def logFunc(x):
    return K.log(x)
And create a lambda layer with it:
# add it to the model the way you're used to:
model.add(Lambda(logFunc, output_shape=necessaryWithTheano))  # output_shape is only needed with the Theano backend
And if a suitable function already exists, taking a single tensor and returning a tensor, you don't need to define your own; just pass it directly, e.g. Lambda(K.log).
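For example, here is a minimal sketch of how that could look in a small Sequential model, assuming the TensorFlow backend (so output_shape is inferred); the layer sizes, the input shape and the softplus activation are placeholders I picked so the inputs to the log stay positive:
from keras.models import Sequential
from keras.layers import Dense, Lambda
import keras.backend as K

model = Sequential()
# softplus keeps the outputs strictly positive, so the logarithm is well defined
model.add(Dense(32, activation='softplus', input_shape=(10,)))
# element-wise natural logarithm of the previous layer's output
model.add(Lambda(K.log))
model.compile(optimizer='adam', loss='mse')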
Related
I am using Keras's "ImageDataGenerator" class for data augmentation. Since each image comes with a bounding box for the relevant object, I want to crop the image to that region before augmenting it. The class has an argument named "preprocessing_function" that lets us apply a custom function, but it runs after augmentation and resizing. I want the opposite order: first my function runs, then the augmentation takes place. How can I implement that in the code?
tf.keras.preprocessing.image.ImageDataGenerator(
featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False,
zca_epsilon=1e-06,
rotation_range=0,
width_shift_range=0.0,
height_shift_range=0.0,
brightness_range=None,
shear_range=0.0,
zoom_range=0.0,
channel_shift_range=0.0,
fill_mode="nearest",
cval=0.0,
horizontal_flip=False,
vertical_flip=False,
rescale=None,
preprocessing_function=None,
data_format=None,
validation_split=0.0,
dtype=None,
)
preprocessing_function: a function that will be applied to each input. The function will run after the image is resized and augmented. The function should take one argument: one image (Numpy tensor with rank 3) and should output a Numpy tensor with the same shape.
Keras team members have said that the ImageDataGenerator class is legacy. They suggested that I use transformation layers instead; these can be applied at any point during training.
Example usage of transformation layers: Keras Transformation layers example page
Github Issue (Closed): GitHub Issues
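For illustration, one way to get the order you want is to skip preprocessing_function entirely and build the input pipeline with tf.data plus the transformation layers: crop first in a map step, then augment. This is only a sketch under my own assumptions -- `dataset` yields (image, bbox, label) tuples, the bounding box is (y, x, height, width) in integer pixel coordinates, and the 224x224 target size and the function name crop_then_resize are arbitrary:
import tensorflow as tf

# augmentation built from Keras transformation layers
# (tf.keras.layers.Random* in recent TF versions)
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

def crop_then_resize(image, bbox, label):
    # bbox is assumed to be (y, x, height, width) in integer pixels -- adapt to your format
    y, x, h, w = bbox[0], bbox[1], bbox[2], bbox[3]
    image = tf.image.crop_to_bounding_box(image, y, x, h, w)  # crop to the object first
    image = tf.image.resize(image, (224, 224))                # then resize
    return image, label

# `dataset` is assumed to yield (image, bbox, label) tuples
train_ds = (dataset
            .map(crop_then_resize, num_parallel_calls=tf.data.AUTOTUNE)
            .batch(32)
            .map(lambda img, lbl: (augment(img, training=True), lbl),
                 num_parallel_calls=tf.data.AUTOTUNE)
            .prefetch(tf.data.AUTOTUNE))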
I want to implement a custom layer in Keras. Unfortunately, one part of the calculation requires the pseudo-inverse (to solve x = (A'A)⁻¹A'b), and the backend has no K.inverse. Is there a way I can solve an OLS equation using the Keras backend?
def call(self, inputs, **kwargs):
    A = ...
    b = ...
    return K.inverse(K.transpose(A) @ A) @ K.transpose(A) @ b  # K.inverse does not exist -- this is what I'm missing
Maybe I could read the tensors back into numpy arrays, do the inverse there, and then feed the result back into a tensor, or something similar?
You can use TensorFlow's inverse (tf.linalg.inv) inside a Keras Lambda layer; that way you don't have to create a custom layer, just a custom function.
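For the least-squares expression in the question, such a custom function might look like the sketch below; the name ols_solve and the assumption that the layer receives A and b as a pair are mine, and tf.linalg.lstsq is noted as a ready-made alternative for the same problem:
import tensorflow as tf
from tensorflow.keras.layers import Lambda

def ols_solve(inputs):
    # assumes the layer receives a pair (A, b); adapt to however you build A and b
    A, b = inputs
    At = tf.linalg.matrix_transpose(A)  # transposes the last two axes, so batched inputs work too
    # normal-equation form: x = (A^T A)^-1 A^T b
    # (tf.linalg.lstsq(A, b) solves the same problem and is usually better conditioned)
    return tf.linalg.inv(At @ A) @ At @ b

ols_layer = Lambda(ols_solve)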
I am building a Keras RNN model and currently normalize my input (to between 0 and 1) as a preprocessing step.
I am wondering if there is a way to achieve the same thing through a first layer that is part of the model itself?
Since the model only has batch-wise information, it cannot do normalization with global max/min itself. However, if you can somehow pass your global max/min to the model, you might try this:
from keras.layers import Lambda
model.add(Lambda(lambda x: (x - x_min) / (x_max - x_min)))  # x_min/x_max are your global minimum and maximum
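Put together, that could look like the following sketch, where x_train, timesteps and features are placeholders and the global statistics are computed once from the whole training set:
import numpy as np
from keras.models import Sequential
from keras.layers import Lambda, LSTM, Dense

# global statistics, computed once over the whole training set (not per batch)
x_min = float(np.min(x_train))
x_max = float(np.max(x_train))

model = Sequential()
model.add(Lambda(lambda x: (x - x_min) / (x_max - x_min),
                 input_shape=(timesteps, features)))
model.add(LSTM(32))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')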
I want to compare two numbers in a Keras model. The input of this layer is a tensor variable, and the layer should compare it with a constant and then return 0 or 1.
Is there any method for this? I tried to find a function in Theano to do this job but failed.
You can find the functions you need in the Keras backend:
import keras.backend as K
What you need is one of these: K.equal, K.greater, K.greater_equal, etc.
You can use a Lambda layer for that:
Lambda(lambda x: K.cast(K.greater_equal(x, constant), 'float32'), output_shape=sameAsInputShape)
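Put together, a thresholding layer might look like this sketch (the threshold value 0.5 is a placeholder, and on the TensorFlow backend output_shape can be left out since it is inferred):
import keras.backend as K
from keras.layers import Lambda

threshold = 0.5  # the constant to compare against

# outputs 1.0 where the input is >= threshold, 0.0 elsewhere
compare_layer = Lambda(lambda x: K.cast(K.greater_equal(x, threshold), 'float32'))

model.add(compare_layer)  # add it to your existing model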
How is it possible to use leaky ReLUs in the newest version of keras?
The relu() function accepts an optional parameter 'alpha' that controls the negative slope, but I cannot figure out how to pass this parameter when constructing a layer.
This line is how I tried to do it,
model.add(Activation(relu(alpha=0.1)))
but then I get the error
TypeError: relu() missing 1 required positional argument: 'x'
How can I use a leaky ReLU, or any other activation function with some parameter?
relu is a function, not a class, and it takes the input to the activation as its parameter x. The Activation layer takes a function as its argument, so you can wrap relu in a lambda that forwards the input x, for example:
model.add(Activation(lambda x: relu(x, alpha=0.1)))
Well, based on this source (Keras doc) and this GitHub question, you use a linear activation and then put the leaky ReLU as another layer right after it.
from keras.layers.advanced_activations import LeakyReLU
model.add(Dense(512, 512, activation='linear')) # Add any layer, with the default of an identity/linear squashing function (no squashing)
model.add(LeakyReLU(alpha=.001)) # add an advanced activation
does that help?
You can build a wrapper for parameterized activation functions. I've found this useful and more intuitive.
class activation_wrapper(object):
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        def _func(x):
            return self.func(x, *args, **kwargs)
        return _func
Of course, I could have used a lambda expression inside __call__.
Then:
wrapped_relu = activation_wrapper(relu)
Then use it as you did above:
model.add(Activation(wrapped_relu(alpha=0.1)))
You can also use it as part of a layer:
model.add(Dense(64, activation=wrapped_relu(alpha=0.1)))
While this solution is a little more complicated than the one offered by @Thomas Jungblut, the wrapper class can be reused for any parameterized activation function. In fact, I use it whenever I have a family of parameterized activation functions.
Keras defines separate activation layers for the most common use cases, including LeakyReLU, ThresholdedReLU, and ReLU (a generic version that supports all ReLU parameters), among others. See the full documentation here: https://keras.io/api/layers/activation_layers
Example usage with the Sequential model:
import tensorflow as tf
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(10,)))
model.add(tf.keras.layers.Dense(16))
model.add(tf.keras.layers.LeakyReLU(0.2))
model.add(tf.keras.layers.Dense(1))
model.add(tf.keras.layers.Activation(tf.keras.activations.sigmoid))
model.compile('adam', 'binary_crossentropy')
If the activation parameter you want to use is unavailable as a predefined class, you could use a plain lambda expression as suggested by @Thomas Jungblut:
from tensorflow.keras.layers import Activation
model.add(Activation(lambda x: tf.keras.activations.relu(x, alpha=0.2)))
However, as noted by @leenremm in the comments, this fails when trying to save or load the model. As suggested, you could use the Lambda layer as follows:
from tensorflow.keras.layers import Activation, Lambda
model.add(Activation(Lambda(lambda x: tf.keras.activations.relu(x, alpha=0.2))))
However, the Lambda documentation includes the following warning:
WARNING: tf.keras.layers.Lambda layers have (de)serialization limitations!
The main reason to subclass tf.keras.layers.Layer instead of using a Lambda layer is saving and inspecting a Model. Lambda layers are saved by serializing the Python bytecode, which is fundamentally non-portable. They should only be loaded in the same environment where they were saved. Subclassed layers can be saved in a more portable way by overriding their get_config method. Models that rely on subclassed Layers are also often easier to visualize and reason about.
As such, the best method for activations not already provided by a layer is to subclass tf.keras.layers.Layer instead. This should not be confused with subclassing object and overriding __call__ as done in @Anonymous Geometer's answer, which is the same as using a lambda without the Lambda layer.
Since my use case is covered by the provided layer classes, I'll leave it up to the reader to implement this method. I am making this answer a community wiki in the event anyone would like to provide an example below.
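For completeness, here is one possible sketch of such a subclass for a parameterized leaky ReLU; the class name and default alpha are my own choices, not an official Keras implementation:
import tensorflow as tf

class ParametricLeakyReLU(tf.keras.layers.Layer):
    """Leaky ReLU with a configurable negative slope, serializable via get_config."""

    def __init__(self, alpha=0.2, **kwargs):
        super().__init__(**kwargs)
        self.alpha = alpha

    def call(self, inputs):
        # reuse the built-in relu with the stored negative slope
        return tf.keras.activations.relu(inputs, alpha=self.alpha)

    def get_config(self):
        # makes the layer saveable/loadable without bytecode serialization
        config = super().get_config()
        config.update({"alpha": self.alpha})
        return config

# usage:
# model.add(tf.keras.layers.Dense(16))
# model.add(ParametricLeakyReLU(alpha=0.2))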