Keras custom loss function without y_pred and y_true - keras

I have a problem at hand that optimizes a loss function which is not a function of y_pred and y_true. After going through the Keras documentation, I found that all custom loss functions must be a function of both y_pred and y_true.
Is there any alternate way of implementing my kind of loss function in Keras?

No. This is a limitation of Keras.
If you want to use that type of loss function, the basic stochastic gradient descent scheme won't work. Many concepts such as batch size would disappear, and that would be a substantial change, so Keras does not allow you to do it.
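For reference, here is a minimal sketch of the (y_true, y_pred) signature that Keras expects for anything passed as loss= to model.compile; the function name and the toy model are purely illustrative:

from tensorflow import keras
from tensorflow.keras import backend as K

def my_custom_loss(y_true, y_pred):
    # Keras always calls the loss with the batch targets and predictions,
    # even if your objective only uses one of them.
    return K.mean(K.square(y_pred - y_true), axis=-1)

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='sgd', loss=my_custom_loss)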

Related

Keras accuracy with a custom loss function

I'm using a custom loss function in Keras:
import tensorflow as tf
from tensorflow.keras import backend as K

def get_top_one_probability(vector):
    return K.exp(vector) / K.sum(K.exp(vector))

def listnet_loss(real_labels, predicted_labels):
    return -K.sum(get_top_one_probability(real_labels) * tf.math.log(get_top_one_probability(predicted_labels)))
How is the accuracy metric computed when a custom loss function is used?
Loss functions and accuracy functions are two different metrics.
Changing one won't change the other. So if your task is a regression problem, the accuracy function won't change and you will be fine (Keras uses its regression accuracy function for regression problems). The same goes for multiclass classification (Keras uses categorical_accuracy for multiclass problems).
But be aware that for binary classification, changing the loss function can change the accuracy Keras picks from binary_accuracy to categorical_accuracy, and thus you might end up with different results.
The solution for this is to specify binary_accuracy explicitly, as follows:
from tensorflow.keras import backend as K

def binary_accuracy(y_true, y_pred):
    return K.mean(K.equal(y_true, K.round(y_pred)), axis=-1)

model.compile(loss=custom_loss,
              metrics=[binary_accuracy])

Custom Loss Functions in Keras

I have tried to write my own custom loss function in Keras, but writing complex loss functions normally requires deep knowledge of TensorFlow and the Keras backend. Do I need to study them to write my own loss function, or is there an alternative way of writing custom loss functions for neural networks in Keras?
My loss function requires the predicted probabilities of all classes and some way to pick out the probability corresponding to the label class.
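For illustration, a loss of that shape could be sketched with the Keras backend as follows, assuming one-hot labels and softmax outputs; the function name and the exact formula are made up, not the asker's actual objective:

from tensorflow.keras import backend as K

def label_class_probability_loss(y_true, y_pred):
    # y_pred holds the predicted probabilities for all classes;
    # multiplying by the one-hot y_true and summing picks out the
    # probability assigned to the label class of each sample.
    p_label = K.sum(y_true * y_pred, axis=-1)
    return -K.mean(K.log(p_label + K.epsilon()))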

How to implement weighted cross entropy loss in Keras?

I would like to know how to add in custom weights for the loss function in a binary or multiclass classifier in Keras. I am using binary_crossentropy or sparse_categorical_crossentropy as the baseline and I want to be able to choose what weight to give incorrect predictions for each class.
For multiple classes, one should use categorical cross-entropy rather than binary cross-entropy.
Consider using custom loss function as described here: Custom loss function in Keras
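As a sketch of that approach for the binary case (the weight values, and the assumption that the positive class should be penalised more, are made up for illustration):

from tensorflow.keras import backend as K

# hypothetical per-class weights
zero_weight, one_weight = 1.0, 5.0

def weighted_binary_crossentropy(y_true, y_pred):
    # element-wise binary cross-entropy, re-weighted according to the true class
    bce = K.binary_crossentropy(y_true, y_pred)
    weights = y_true * one_weight + (1.0 - y_true) * zero_weight
    return K.mean(weights * bce, axis=-1)

model.compile(optimizer='adam', loss=weighted_binary_crossentropy)  # model built elsewhere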

Keras "acc" metrics - an algorithm

In Keras I often see people compile a model with mean squared error as the loss and "acc" as the metric:
model.compile(optimizer=opt, loss='mse', metrics=['acc'])
I have been reading about "acc" and I cannot find the algorithm behind it.
What if I changed my loss function to binary cross-entropy, for example, and kept 'acc' as the metric? Would it be the same metric as in the first case, or does Keras change what 'acc' means based on the loss function, i.e. binary cross-entropy in this case?
Check the source code from line 375. The metric_fn changes depending on the loss function, so it is handled automatically by Keras.
If you want to compare models that use different loss functions, it can in some cases be necessary to specify which accuracy metric you want to grade your models with, so that they are actually evaluated with the same test.
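For example, you could pin the metric down explicitly instead of relying on the 'acc' alias (the model names and optimizer here are placeholders):

# 'acc' is resolved to a concrete accuracy function based on the loss and
# output shape, so name the metric explicitly when comparing different losses.
model_a.compile(optimizer='adam', loss='mse', metrics=['binary_accuracy'])
model_b.compile(optimizer='adam', loss='binary_crossentropy', metrics=['binary_accuracy'])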

How to write a categorization accuracy loss function for keras (deep learning library)?

Categorization accuracy loss is the percentage of predictions that are wrong, i.e. #wrong/#data points.
Is it possible to write a custom loss function for that?
Thanks.
EDIT
Although Keras allows you to use custom loss function, I am not convinced anymore that using accuracy as loss makes sense. First, the network's last layer will typically be soft-max, so that you obtain a vector of class probabilities rather than the single most likely class. Second, I fear that there will be issues with gradient computation due to lack of smoothness of accuracy.
OLD POST
Keras offers you the possibility to use custom loss functions. To get the accuracy loss, you can take inspiration from the examples that are already implemented. For binary classification, I would suggest the following implementation
from tensorflow.keras import backend as K

def mean_accuracy_error(y_true, y_pred):
    return K.mean(K.abs(K.sign(y_true - y_pred)), axis=-1)
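If you did want to try it anyway, the compile call might look like this (optimizer and metric choices are just placeholders; note the EDIT above: K.sign has zero gradient almost everywhere, so training with this loss is unlikely to behave well):

model.compile(optimizer='sgd',
              loss=mean_accuracy_error,
              metrics=['binary_accuracy'])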
