I want to compare two numbers in a Keras model. The input to this layer is a tensor variable, and the layer compares that tensor variable with a constant, returning 0 or 1.
Is there a method for this? I tried to find a function in Theano to do this job but failed.
You can find the functions in the Keras backend:
import keras.backend as K
What you need is one of these: K.equal, K.greater, K.greater_equal, etc.
You can use a Lambda layer for that:
Lambda(lambda x: K.cast(K.greater_equal(x, constant), 'float32'), output_shape=sameAsInputShape)
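As a slightly fuller sketch, assuming a Sequential model, a placeholder threshold of 0.5, and the TensorFlow backend (so output_shape can be omitted):
import keras.backend as K
from keras.models import Sequential
from keras.layers import Lambda, Dense

threshold = 0.5  # placeholder constant to compare against

model = Sequential()
model.add(Dense(1, input_shape=(4,)))  # example preceding layer
# Element-wise comparison: 1.0 where the input is >= threshold, 0.0 otherwise.
# greater_equal returns booleans, so cast back to float for downstream layers.
model.add(Lambda(lambda x: K.cast(K.greater_equal(x, threshold), 'float32')))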
I want to implement a custom layer in Keras. Unfortunately, one part of the calculation requires the pseudo-inverse (to solve x = (A'A)⁻¹A'b). Now I am missing the functionality of K.inverse. Is there a way I can solve an OLS equation using the Keras backend?
def call(self, inputs, **kwargs):
    A = ...
    b = ...
    # K.inverse does not exist, which is exactly the missing piece
    return K.inverse(K.transpose(A) @ A) @ K.transpose(A) @ b
Maybe I can read it back into numpy arrays, do the inverse there, and then feed it back into a tensor, or something similar?
You can use TensorFlow's inverse (tf.linalg.inv) inside a Keras Lambda layer; that way, you don't have to create a custom layer, just a custom function.
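A rough sketch of that idea; the two-input wiring (the layer receiving the pair [A, b]) and the function name are assumptions, not part of the original answer:
import tensorflow as tf
from keras.layers import Lambda

def ols_solve(inputs):
    # inputs is assumed to be a pair [A, b] produced elsewhere in the model
    A, b = inputs
    At = tf.linalg.matrix_transpose(A)
    # Normal equations: x = (A'A)^-1 A'b
    return tf.linalg.inv(At @ A) @ At @ b

ols_layer = Lambda(ols_solve)
Note that tf.linalg.lstsq solves the same least-squares problem directly and tends to be more numerically stable than forming the inverse explicitly.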
I am building a Keras RNN model and preprocess my input to normalize it to the range 0 to 1.
I am wondering if there is a way to achieve the same through some first layer, as part of the model itself?
Since the model only has batch-wise information, it cannot do normalization with global max/min itself. However, if you can somehow pass your global max/min to the model, you might try this:
from keras.layers import Lambda
model.add(Lambda(lambda x: (x - min) / (max - min)))
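A slightly more complete sketch, with placeholder global statistics and input dimensions (and names that don't shadow the built-in min/max):
from keras.models import Sequential
from keras.layers import Lambda, SimpleRNN, Dense

global_min, global_max = 0.0, 255.0   # your precomputed global statistics (placeholders)
timesteps, features = 10, 4           # placeholder input dimensions

model = Sequential()
# First layer rescales every input to [0, 1] using the global min/max
model.add(Lambda(lambda x: (x - global_min) / (global_max - global_min),
                 input_shape=(timesteps, features)))
model.add(SimpleRNN(32))
model.add(Dense(1))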
I am currently working on a classification task with given class labels 0 and 1. For this I am using scikit-learn's MLPClassifier, which provides an output of either 0 or 1 for each training example. However, I cannot find any documentation on what the output layer of the MLPClassifier is actually doing (which activation function? which encoding?).
Since there is an output of only one class, I assume something like one-hot encoding is used. Is this assumption correct? Is there any documentation tackling this question for the MLPClassifier?
The out_activation_ attribute gives you the type of activation used in the output layer of your MLPClassifier.
From Documentation:
out_activation_ : string
Name of the output activation function.
The activation param just sets the hidden layer's activation function.
activation : {‘identity’, ‘logistic’, ‘tanh’, ‘relu’}, default ‘relu’
Activation function for the hidden layer.
The output layer's activation is decided internally in this piece of code:
# Output for regression
if not is_classifier(self):
    self.out_activation_ = 'identity'
# Output for multi class
elif self._label_binarizer.y_type_ == 'multiclass':
    self.out_activation_ = 'softmax'
# Output for binary class and multi-label
else:
    self.out_activation_ = 'logistic'
Hence, for binary classification it would be logistic and for multi-class it would be softmax.
To know more details about these activations, see here.
You have most of the information in the docs. The MLP is a simple neural network; it can use several activation functions, and the default is relu.
It doesn't use one-hot encoding; rather, you need to feed in a y (target) vector with class labels.
My understanding is that the last activation function is the logistic function, and the output is set to 1 if the probability is >0.5 and to 0 otherwise.
However, you can output the probability if you want.
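A quick way to check this yourself, on a toy dataset (make_classification is used here only for illustration):
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, random_state=0)  # toy binary problem
clf = MLPClassifier(max_iter=500, random_state=0).fit(X, y)

print(clf.out_activation_)       # 'logistic' for a binary target
print(clf.predict(X[:5]))        # hard 0/1 class labels
print(clf.predict_proba(X[:5]))  # per-class probabilities, if you want those instead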
I'd like to set up a Keras layer in which each node simply computes the logarithm of the corresponding node in the preceding layer. I see from the Keras documentation that there is a "log" function in its backend module. But somehow I'm not understanding how to use this.
Thanks in advance for any hints you can offer!
You can use any backend function inside a Lambda layer:
from keras.layers import Lambda
import keras.backend as K
Define just any function taking the input tensor:
def logFunc(x):
    return K.log(x)
And create a lambda layer with it:
# add to the model the way you're used to:
model.add(Lambda(logFunc, output_shape=(necessaryWithTheano)))
And if the function is already defined, takes only one argument, and returns a tensor, you don't need to create your own function; just use Lambda(K.log), for instance.
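Putting it together, a minimal sketch (the input shape is just an example, and the inputs must be strictly positive for the log to be defined):
from keras.models import Sequential
from keras.layers import Lambda
import keras.backend as K

model = Sequential()
# Element-wise natural log of the inputs
model.add(Lambda(K.log, input_shape=(10,)))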
Why does this code work fine for the loss function but the metrics fail after one iteration with "ValueError: operands could not be broadcast together with shapes (32,) (24,) (32,)"?
If I use "categorical_crossentropy" in quotes then it works. And my custom metric looks identical to the one in keras.losses.
import keras.backend as K
def categorical_crossentropy(y_true, y_pred):
    return K.categorical_crossentropy(y_pred, y_true)
fc.compile(optimizer=Adam(.01), loss=categorical_crossentropy, metrics=[categorical_crossentropy])
fc.fit(xtrain, ytrain, validation_data=(xvalid, yvalid), verbose=0,
       callbacks=[TQDMNotebookCallback(leave_inner=True, leave_outer=True)],
       nb_epoch=2)
It works if I import categorical_crossentropy from keras.metrics rather than using the K backend version. I still have no idea why the above doesn't work, but at least this is a solution.
Also, it looks like the loss function is not necessary in the metrics parameter anyway, as it is automatically calculated and shown for training and validation.
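As a concrete sketch of that fix, keeping the rest of the compile call from the question (fc and Adam are assumed to be the model and optimizer used above):
from keras.metrics import categorical_crossentropy
from keras.optimizers import Adam

# Use the built-in metric instead of the hand-rolled backend wrapper
fc.compile(optimizer=Adam(.01), loss='categorical_crossentropy',
           metrics=[categorical_crossentropy])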