How to implement Hamming loss as a custom metric in a Keras model?
I have a multilabel classification problem with 6 classes:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', hamming_loss])
I tried using
from sklearn.metrics import hamming_loss

def custom_hl(y_true, y_pred):
    return hamming_loss(y_true, y_pred)
which doesn't work, because y_true and y_pred look like this:
YTRUE
Tensor("Cast_10:0", shape=(None, 6), dtype=float32)
YPRED
Tensor("model_1/dense_1/Sigmoid:0", shape=(None, 6), dtype=float32)
I also tried the function from this question, and it doesn't work either:
Getting the accuracy for multi-label prediction in scikit-learn
Is there any way I can get the Hamming loss as a metric in Keras?
Thanks for any help.
So I found a way:

import keras.backend as K

def Custom_Hamming_Loss(y_true, y_pred):
    return K.mean(y_true * (1 - y_pred) + (1 - y_true) * y_pred)

def Custom_Hamming_Loss1(y_true, y_pred):
    tmp = K.abs(y_true - y_pred)
    return K.mean(K.cast(K.greater(tmp, 0.5), dtype=K.floatx()))

Source: https://groups.google.com/g/keras-users/c/_sjndHbejTY?pli=1
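With these defined, the metric can be passed to model.compile like any built-in one. A minimal usage sketch (the 6-unit sigmoid head is from the question; the 20-feature input shape is a placeholder for illustration):

import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense

def Custom_Hamming_Loss(y_true, y_pred):
    # expected fraction of mismatched labels, reading y_pred as probabilities
    return K.mean(y_true * (1 - y_pred) + (1 - y_true) * y_pred)

model = Sequential()
model.add(Dense(6, activation='sigmoid', input_shape=(20,)))  # 20 input features is a placeholder
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy', Custom_Hamming_Loss])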
I found this loss function in a paper
loss = Σᵢ [1 − exp(−(ŷᵢ − yᵢ)² / (2σ²))]
and I tried to implement it in Python as follows:

import keras.backend as K
import math

sigma = math.sqrt(2) / 2
s = 2 * sigma**2  # kernel width, 2*sigma^2

def kernel_MSE(actual, predicted):
    actual, predicted = K.flatten(actual), K.flatten(predicted)
    total = 0.0
    for i in range(len(actual)):
        total += 1 - K.exp(-((predicted[i] - actual[i]) ** 2) / s)
    return total
Then using this loss function in the model looks like this:

import tensorflow as tf
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(units=256, return_sequences=True, input_shape=(X_train.shape[1], X_train.shape[2]), activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(64))
model.add(Dense(32))
model.add(Dense(1))

optimizer = tf.keras.optimizers.Adam(lr=0.001, decay=0.00001)
model.compile(loss=kernel_MSE, optimizer=optimizer)

The code runs fine, but I am not sure whether my implementation is correct. Could anyone check it?
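For what it's worth, the per-element Python loop can be expressed as vectorized backend ops, which also works in graph mode (where len() on a symbolic tensor fails). A sketch of the equivalent computation, under the same sigma:

import keras.backend as K
import math

sigma = math.sqrt(2) / 2
s = 2 * sigma**2

def kernel_MSE_vectorized(actual, predicted):
    # elementwise 1 - exp(-(diff^2)/s), summed over all entries
    diff = K.flatten(predicted) - K.flatten(actual)
    return K.sum(1.0 - K.exp(-K.square(diff) / s))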
I am new to Keras. I want to know the loss of certain instances, so I got the y_true and y_pred of these data instances. I want to call the loss function to calculate the loss, but I only get Tensor("Mean_5:0", shape=(), dtype=float32). How can I evaluate the value of that tensor? Is it similar to TensorFlow, where I would call loss.eval()?
y_pred is calculated by:
y_pred = self.model.predict(x, batch_size=self.batch_size)
y_true is also available as a list.
How to use binary_crossentropy()?
You almost had the answer.
from keras import backend
from keras.losses import binary_crossentropy
y_true = backend.variable(y_true)
y_pred = backend.variable(y_pred)
# calculate the average cross-entropy
mean_ce = backend.eval(backend.mean(binary_crossentropy(y_true, y_pred)))
print('Average Cross Entropy: %.3f nats' % mean_ce)
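In TensorFlow 2.x with eager execution, no backend.eval is needed; assuming y_true and y_pred are plain NumPy float arrays, something like this works:

import tensorflow as tf

# returns an eager tensor of per-sample losses; .numpy() pulls out the values
per_sample = tf.keras.losses.binary_crossentropy(y_true, y_pred)
print('Average Cross Entropy: %.3f nats' % per_sample.numpy().mean())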
I am trying to approach a regression problem which is multi-label with 8 labels, and I am using mean squared error loss; but the dataset is imbalanced, and I want to pass weights to the loss function. Currently I am compiling the model this way:
model.compile(loss='mse', optimizer=Adam(lr=0.0001), metrics=['mse', 'acc'])
Could someone please suggest whether it is possible to add weights to mean squared error, and if so, how I could do it?
Thanks in advance.
The labels look like so:
[image of the label values]
model = Sequential()
model.add(effnet)
model.add(GlobalAveragePooling2D())
model.add(Dropout(0.5))
model.add(Dense(8, name='nelu', activation='elu'))

model.compile(loss=custom_mse(class_weights),
              optimizer=Adam(lr=0.0001), metrics=['mse', 'acc'])
import keras
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense, Conv1D, LSTM, TimeDistributed
import keras.backend as K

# custom loss function
def custom_mse(class_weights):
    def loss_fixed(y_true, y_pred):
        """
        :param y_true: A tensor of the same shape as `y_pred`
        :param y_pred: A tensor resulting from a sigmoid
        :return: Output tensor.
        """
        # print('y_pred:', K.int_shape(y_pred))
        # print('y_true:', K.int_shape(y_true))
        y_pred = K.reshape(y_pred, (8, 1))
        y_pred = K.dot(class_weights, y_pred)
        # calculating mean squared error
        mse = K.mean(K.square(y_pred - y_true), axis=-1)
        # print('mse:', K.int_shape(mse))
        return mse
    return loss_fixed  # return the inner function so compile() receives a loss

model = Sequential()
model.add(Conv1D(8, 1, input_shape=(28, 28)))
model.add(Flatten())
model.add(Dense(8))

# custom class weights
class_weights = K.variable([[0.25, 1., 2., 3., 2., 0.6, 0.5, 0.15]])
# print('class_weights:', K.int_shape(class_weights))

model.compile(optimizer='adam', loss=custom_mse(class_weights), metrics=['accuracy'])
Here is a small implementation of a custom loss function based on your problem statement.
You can find more information about Keras loss functions in losses.py and in the official documentation.
Keras does not handle low-level operations such as tensor products and convolutions itself. Instead, it relies on a specialized, well-optimized tensor manipulation library to do so, serving as the "backend engine" of Keras. More information about the Keras backend is in its official documentation.
Use K.int_shape(tensor_name) to find the dimensions of a tensor.
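Note that the reshape-and-dot above collapses the 8 outputs into a single weighted sum before comparing against y_true. If the goal is simply to weight the squared error of each of the 8 labels, per-output weighting is a common alternative; a minimal sketch, assuming class_weights is a length-8 vector:

import keras.backend as K

class_weights = K.constant([0.25, 1., 2., 3., 2., 0.6, 0.5, 0.15])

def weighted_mse(y_true, y_pred):
    # weight each label's squared error, then average over the 8 labels
    return K.mean(class_weights * K.square(y_pred - y_true), axis=-1)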
First create a dictionary of how much you want to weight each class, for example:
class_weights = {0: 1,
                 1: 1,
                 2: 1,
                 3: 9,
                 4: 1...}  # Do this for all eight classes
Then pass them into model.fit:
model.fit(X, y, class_weight=class_weights)
The following two models/compilations behave differently:
def custom_loss(y_true, y_pred):
    return keras.losses.binary_crossentropy(y_true, y_pred)

optimizer = Adam(lr=5e-3)
model.compile(loss=custom_loss, optimizer=optimizer, metrics=['accuracy'])
And:
optimizer = Adam(lr=5e-3)
model.compile(loss=keras.losses.binary_crossentropy, optimizer=optimizer, metrics=['accuracy'])
What can be the reason?
If you implement a custom binary cross-entropy loss, you should also specify the right accuracy metric. This is because if you use Keras' built-in binary cross-entropy, Keras automatically adjusts which accuracy metric 'accuracy' resolves to (binary versus categorical accuracy).
This doesn't happen when you use a custom loss: Keras then defaults to categorical accuracy, which is wrong here and produces incorrect accuracy values. Specify binary accuracy explicitly instead:
model.compile(loss=custom_loss, optimizer=optimizer, metrics=['binary_accuracy'])
I want to use a simple BiLSTM model with my own custom loss function in Keras. See below.
from keras.models import Sequential
from keras.layers import Bidirectional, LSTM, Dense

model = Sequential()
model.add(Bidirectional(LSTM(128, return_sequences=True), input_shape=(1, 8)))
model.add(Bidirectional(LSTM(128)))
model.add(Dense(64, activation='relu'))
model.add(Dense(20, activation='softmax'))
import numpy as np

def my_loss_np(y_true, y_pred):
    labels = [np.argmax(y_pred[i]) for i in range(y_pred.shape[1])]
    loss = np.mean(labels)
    return loss
import keras.backend as K

def my_loss(y_true, y_pred):
    loss = K.eval(my_loss_np(K.eval(y_true), K.eval(y_pred)))
    return loss
When I compile this model, I get an error:
model.compile(loss=my_loss, optimizer='adam')
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'dense_95_target' with dtype float and shape [?,?]
[[Node: dense_95_target = Placeholder[dtype=DT_FLOAT, shape=[?,?], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
There are several issues here with your loss function:
You are using NumPy on tensors; unfortunately, though it is intuitive, this doesn't work. You need to use tensor operators from the Keras backend instead; they are very similar.
To that end, you are calling K.eval, but at this stage you are still constructing a symbolic computation graph that will later be run in TensorFlow or Theano. The tensors don't have a value to compute yet, so you need to keep everything symbolic; you can't pull out values the way you do in NumPy.
Even if you fix the problems above, you are using the non-differentiable operation argmax, which will not work with gradient-descent algorithms.
Your model looks like a multi-class classification problem: your final layer is a 20-way softmax. For this case, the literature uses categorical cross-entropy loss to train the classifier network.
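Following that last point, the conventional setup would look something like this (a sketch; y_int is a hypothetical array of integer class labels that gets one-hot encoded to shape (num_samples, 20)):

from keras.utils import to_categorical

y_onehot = to_categorical(y_int, num_classes=20)  # y_int is hypothetical
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y_onehot)  # X is the (num_samples, 1, 8) input from the question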