Accuracy and Precision in Keras models - python-3.x

I'm trying to manually calculate the accuracy and precision of my Keras model. I looked at metrics.py, which has the code below for calculating precision.
def precision(y_true, y_pred):
    '''Calculates the precision, a metric for multi-label classification of
    how many selected items are relevant.
    '''
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision
What I don't understand is why we do y_true * y_pred to get the true positives. My y_pred is a vector of length 7 that has the probability for each pixel in my image, and my y_true is a one-hot encoded vector of length 7.
Can anyone please help me understand the role of y_true * y_pred in calculating true positives?
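To make this concrete, here is a small NumPy sketch (my own illustration, not from metrics.py) of what y_true * y_pred does for a single pixel with 7 classes:
import numpy as np

# hypothetical values for one pixel with 7 classes
y_true = np.array([0., 0., 1., 0., 0., 0., 0.])                # one-hot, true class is index 2
y_pred = np.array([0.05, 0.10, 0.70, 0.05, 0.03, 0.04, 0.03])  # predicted class probabilities

product = y_true * y_pred                                       # [0, 0, 0.7, 0, 0, 0, 0]; only the true class survives
true_positives = np.sum(np.round(np.clip(product, 0, 1)))       # 1.0: the prediction for the true class rounds to 1
predicted_positives = np.sum(np.round(np.clip(y_pred, 0, 1)))   # 1.0: only one probability rounds to 1
print(true_positives, predicted_positives)                      # 1.0 1.0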
Also using the above precision function as reference, I'm using the below custom function for accuracy.
def overall_acc(y_true, y_pred):
    y_true_2D = K.max(y_true, axis=1, keepdims=False)
    y_pred_2D = K.max(y_true * y_pred, axis=1, keepdims=False)
    y_true_f = K.sum(K.flatten(y_true_2D))
    y_pred_f = K.sum(K.flatten(y_pred_2D))
    acc = y_pred_f / y_true_f
    return acc
Is this the correct way to calculate accuracy?
Any help is greatly appreciated.

Related

Custom loss function in keras with class weights for each batch

I am new to deep learning and TensorFlow. I am working on a speech binary classification problem, trying to replicate a research paper. There are approximately 2700 samples in class 1 and approximately 1200 in class 2. The paper used MFCC features for binary classification and reports 88.3% accuracy, 88% F1-score, and 82.3% recall. They also used a custom loss function that takes a weighted average of specificity and recall, with the focus on specificity:
custom_loss = 1 - (0.85 * specificity + 0.15 * recall)
I have implemented all the parameters given by the paper. The only thing I have not yet addressed is the class imbalance (class 1: ~2700, class 2: ~1200). I tried oversampling class 2 with different oversampling methods, but nothing worked. The accuracy I achieve is at most 68%, with the algorithm just learning the majority class well and performing poorly on the minority class.
The class weight technique, shown below, did not work with the custom loss function:
from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight('balanced', classes = [0, 1], y = y_train)
weights = {i:w for i,w in enumerate(class_weights)}
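For reference, a dictionary like this is normally passed through the class_weight argument of model.fit, for example (a sketch; the model and data names are placeholders):
model.fit(X_train, y_train, epochs=50, batch_size=32,
          validation_data=(X_val, y_val), class_weight=weights)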
Hence I tried a weighted custom loss function, but the results were not good. Accuracy was still near 68%.
I am sharing the code as follows:
from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight('balanced', classes = [0, 1], y = y_train)
weights = {i:w for i,w in enumerate(class_weights)}
def binary_recall_specificity(y_true, y_pred, recall_weight, spec_weight):
    y_true = K.clip(y_true, K.epsilon(), 1)
    y_pred = K.clip(y_pred, K.epsilon(), 1)
    ground_positives = K.sum(y_true, axis=0) + K.epsilon()         # = TP + FN
    pred_positives = K.sum(y_pred, axis=0) + K.epsilon()           # = TP + FP
    true_positives = K.sum(y_true * y_pred, axis=0) + K.epsilon()  # = TP
    neg_y_true = 1 - y_true
    neg_y_pred = 1 - y_pred
    fp = K.sum(neg_y_true * y_pred)
    tn = K.sum(neg_y_true * neg_y_pred)
    specificity = tn / (tn + fp + K.epsilon())
    recall = true_positives / ground_positives
    loss1 = 1.0 - (recall_weight * recall + spec_weight * specificity)
    return loss1 * class_weights.tolist()

def custom_loss(recall_weight, spec_weight):
    def recall_spec_loss(y_true, y_pred):
        return binary_recall_specificity(y_true, y_pred, recall_weight, spec_weight)
    # Returns the (y_true, y_pred) loss function
    return recall_spec_loss
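For completeness, the factory is wired into compile roughly like this, using the paper's weights (a sketch; the optimizer is a placeholder):
model.compile(optimizer='adam',
              loss=custom_loss(recall_weight=0.15, spec_weight=0.85),
              metrics=['accuracy'])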
Am I doing something wrong here? Please guide me. I have read the answers to other similar questions too, but I have been unable to resolve my issue.

Why is the output of a convolutional network so exponentially large?

I am trying to reproduce a UNet result on the Carvana dataset using TernausNet in PyTorch with Lightning.
I am using DiceLoss for that with a sigmoid activation function. I think I am running into a vanishing gradient issue, because all the weight gradients are 0 and the network output has a minimum value on the order of 10^8.
What could be the issue here? How can I address the vanishing gradient? Also, if I use a different criterion, the loss goes into negative values without stopping (for BCE with logits, for instance).
Here is the code for my Dice loss:
import torch
from torch import nn

class DiceLoss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, logits, targets, eps=0, threshold=None):
        # comment out if your model contains a sigmoid or
        # equivalent activation layer
        proba = torch.sigmoid(logits)
        proba = proba.view(proba.shape[0], 1, -1)
        targets = targets.view(targets.shape[0], 1, -1)
        if threshold:
            proba = (proba > threshold).float()
        # flatten label and prediction tensors
        intersection = torch.sum(proba * targets, dim=1)
        summation = torch.sum(proba, dim=1) + torch.sum(targets, dim=1)
        dice = (2.0 * intersection + eps) / (summation + eps)
        # print(intersection, summation, dice)
        return (1 - dice).mean()
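As a side note, a minimal way to sanity-check gradients through the loss alone (a sketch with random tensors, not taken from the training code) is:
import torch

criterion = DiceLoss()
logits = torch.randn(2, 1, 8, 8, requires_grad=True)   # hypothetical batch of 2 single-channel masks
targets = torch.randint(0, 2, (2, 1, 8, 8)).float()    # random binary ground truth
loss = criterion(logits, targets)
loss.backward()
print(loss.item(), logits.grad.abs().max().item())     # loss value and largest gradient magnitude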

How to write a custom f1 loss function with weighted average for keras?

I am trying to do a multiclass classification in Keras. Until now I have been using categorical_crossentropy as the loss function. But since the required metric is weighted F1, I am not sure categorical_crossentropy is the best choice of loss. I was trying to implement a weighted F1 score in Keras using sklearn.metrics.f1_score, but because of the problems converting between a tensor and a scalar, I am running into errors.
Something like this:
def f1_loss(y_true, y_pred):
    return 1 - f1_score(np.argmax(y_true, axis=1), np.argmax(y_pred, axis=1), average='weighted')
Followed by
model.compile(loss=f1_loss, optimizer=opt)
How do I write this loss function in keras?
Edit:
Shape for y_true and y_pred is (n_samples, n_classes) in my case it is (n_samples, 4)
y_true and y_pred both are tensors so sklearn's f1_score cannot work directly on them. I need a function that calculates weighted f1 on tensors.
The variables are self explained:
def f1_weighted(true, pred):  # shapes (batch, 4)
    # for metrics include these two lines, for loss, don't include them
    # these are meant to round 'pred' to exactly zeros and ones
    # predLabels = K.argmax(pred, axis=-1)
    # pred = K.one_hot(predLabels, 4)

    ground_positives = K.sum(true, axis=0) + K.epsilon()         # = TP + FN
    pred_positives = K.sum(pred, axis=0) + K.epsilon()           # = TP + FP
    true_positives = K.sum(true * pred, axis=0) + K.epsilon()    # = TP
    # all with shape (4,)

    precision = true_positives / pred_positives
    recall = true_positives / ground_positives
    # both = 1 if ground_positives == 0 or pred_positives == 0
    # shape (4,)

    f1 = 2 * (precision * recall) / (precision + recall + K.epsilon())
    # still with shape (4,)

    weighted_f1 = f1 * ground_positives / K.sum(ground_positives)
    weighted_f1 = K.sum(weighted_f1)

    return 1 - weighted_f1  # for metrics, return only 'weighted_f1'
Important notes:
This loss will work batchwise (as any Keras loss).
So if you are working with small batch sizes, the results will be unstable between each batch, and you may get a bad result. Use big batch sizes, enough to include a significant number of samples for all classes.
Since this loss collapses the batch dimension, you will not be able to use some Keras features that depend on per-sample values, such as sample weights.
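A minimal sketch of plugging this in as the training loss (model and optimizer are placeholders; for a metric, use a copy of the function with the two rounding lines enabled, returning weighted_f1):
model.compile(loss=f1_weighted, optimizer='adam', metrics=['accuracy'])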

Keras metric equivalent to scikit learn's average precision score metric

I've had a look at the Keras metrics documentation and couldn't find an equivalent to scikit-learn's average precision score metric (which I think is the same as the area under the precision-recall curve, AUPRC). It isn't the same as average_precision_at_k, I believe, unless someone can correct me on that.
Late answer, but I was facing the same issue recently.
You can use the AUC metric with the curve parameter. Something like:
AUC(curve='PR')
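For example, with tf.keras it can be passed directly at compile time (a sketch; the loss and optimizer are placeholders):
from tensorflow.keras.metrics import AUC

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[AUC(curve='PR', name='auprc')])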
You can implement custom metrics for Keras to be passed at the compilation step (https://keras.io/metrics/). The function needs to take (y_true, y_pred) as arguments and return a single tensor value.
Here is an implementation of average_precision for keras:
import keras.backend as K

def average_precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

How to calculate F1 Macro in Keras?

I've tried to use the code provided by Keras before these metrics were removed. Here's the code:
def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def recall(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def fbeta_score(y_true, y_pred, beta=1):
    if beta < 0:
        raise ValueError('The lowest choosable beta is zero (only precision).')
    # If there are no true positives, fix the F score at 0 like sklearn.
    if K.sum(K.round(K.clip(y_true, 0, 1))) == 0:
        return 0
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    bb = beta ** 2
    fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
    return fbeta_score

def fmeasure(y_true, y_pred):
    return fbeta_score(y_true, y_pred, beta=1)
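For reference, the compile call looked roughly like this (a sketch; the loss and optimizer are placeholders):
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy', precision, recall, fmeasure])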
From what I saw, it seems they use the correct formula. But when I tried to use them as metrics in the training process, I got exactly equal outputs for val_accuracy, val_precision, val_recall, and val_fmeasure. I do believe this could happen even if the formula is correct, but I believe it is unlikely. Is there any explanation for this issue?
Since Keras 2.0, the f1, precision, and recall metrics have been removed. The solution is to use a custom metric function:
from keras import backend as K

def f1(y_true, y_pred):
    def recall(y_true, y_pred):
        """Recall metric.
        Only computes a batch-wise average of recall.
        Computes the recall, a metric for multi-label classification of
        how many relevant items are selected.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall

    def precision(y_true, y_pred):
        """Precision metric.
        Only computes a batch-wise average of precision.
        Computes the precision, a metric for multi-label classification of
        how many selected items are relevant.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision

    precision = precision(y_true, y_pred)
    recall = recall(y_true, y_pred)
    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))

model.compile(loss='binary_crossentropy',
              optimizer="adam",
              metrics=[f1])
The return line of this function
return 2*((precision*recall)/(precision+recall+K.epsilon()))
was modified by adding the constant epsilon, in order to avoid division by 0. Thus NaN will not be computed.
Using a Keras metric function is not the right way to calculate F1, AUC, or similar metrics.
The reason is that the metric function is called at every batch step during validation, and Keras then averages the per-batch results. That is not the correct F1 score.
That's why the F1 score was removed from Keras's metric functions. See here:
https://github.com/keras-team/keras/commit/a56b1a55182acf061b1eb2e2c86b48193a0e88f7
https://github.com/keras-team/keras/issues/5794
The right way to do this is to use a custom callback function in a way like this:
https://github.com/PhilipMay/mltb#module-keras
https://medium.com/@thongonary/how-to-compute-f1-score-for-each-epoch-in-keras-a1acd17715a2
This is a streaming custom f1_score metric that I made using subclassing. It works for TensorFlow 2.0 beta, but I haven't tried it on other versions. What it does is keep track of true positives, predicted positives, and all possible positives throughout the whole epoch and then calculate the F1 score at the end of the epoch. I think the other answers only give the F1 score for each batch, which isn't really the best metric when we really want the F1 score of all the data.
I got a raw, unedited copy of Aurélien Géron's new book Hands-On Machine Learning with Scikit-Learn & TensorFlow 2.0 and highly recommend it. This is how I learned to write this custom F1 metric using subclassing. It's hands down the most comprehensive TensorFlow book I've ever seen. TensorFlow is seriously a pain in the butt to learn, and this guy lays down the coding groundwork to learn a lot.
FYI: in the metrics list, I had to include the parentheses, F1_score(), or else it wouldn't work.
pip install tensorflow==2.0.0-beta1

from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow import keras
import numpy as np

def create_f1():
    def f1_function(y_true, y_pred):
        y_pred_binary = tf.where(y_pred >= 0.5, 1., 0.)
        tp = tf.reduce_sum(y_true * y_pred_binary)
        predicted_positives = tf.reduce_sum(y_pred_binary)
        possible_positives = tf.reduce_sum(y_true)
        return tp, predicted_positives, possible_positives
    return f1_function

class F1_score(keras.metrics.Metric):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)  # handles base args (e.g., dtype)
        self.f1_function = create_f1()
        self.tp_count = self.add_weight("tp_count", initializer="zeros")
        self.all_predicted_positives = self.add_weight('all_predicted_positives', initializer='zeros')
        self.all_possible_positives = self.add_weight('all_possible_positives', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        tp, predicted_positives, possible_positives = self.f1_function(y_true, y_pred)
        self.tp_count.assign_add(tp)
        self.all_predicted_positives.assign_add(predicted_positives)
        self.all_possible_positives.assign_add(possible_positives)

    def result(self):
        precision = self.tp_count / self.all_predicted_positives
        recall = self.tp_count / self.all_possible_positives
        f1 = 2 * (precision * recall) / (precision + recall)
        return f1

X = np.random.random(size=(1000, 10))
Y = np.random.randint(0, 2, size=(1000,))
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)

model = keras.models.Sequential([
    keras.layers.Dense(5, input_shape=[X.shape[1], ]),
    keras.layers.Dense(1, activation='sigmoid')
])

model.compile(loss='binary_crossentropy', optimizer='SGD', metrics=[F1_score()])
history = model.fit(X_train, y_train, epochs=5, validation_data=(X_test, y_test))
As @Diesche mentioned, the main problem with implementing f1_score this way is that it is called at every batch step and leads to confusing results more than anything else.
I've been struggling with this issue for some time, but eventually worked my way around the problem by using a callback: at the end of an epoch, the callback predicts on the data (in this case I chose to only apply it to my validation data) with the new model parameters and gives you coherent metrics evaluated over the whole epoch.
I'm using tensorflow-gpu (1.14.0) on python3
from tensorflow.python.keras.models import Sequential, Model
from sklearn.metrics import f1_score, precision_score, recall_score
from tensorflow.keras.callbacks import Callback
from tensorflow.python.keras import optimizers
import numpy as np

optimizer = optimizers.SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=['accuracy'])
model.summary()

class Metrics(Callback):
    def __init__(self, model, valid_data, true_outputs):
        super(Metrics, self).__init__()
        self.model = model
        self.valid_data = valid_data      # the validation data I'm getting metrics on
        self.true_outputs = true_outputs  # the ground truth of my validation data
        self.steps = len(self.valid_data)

    def on_epoch_end(self, epoch, logs=None):
        gen = generator(self.valid_data)  # generator yielding the validation data
        val_predict = np.asarray(self.model.predict(gen, batch_size=1, verbose=0, steps=self.steps))
        # The function from_proba_to_output is used to transform probabilities
        # into a format understandable by sklearn's f1_score function.
        val_predict = from_proba_to_output(val_predict, 0.5)
        _val_f1 = f1_score(self.true_outputs, val_predict)
        _val_precision = precision_score(self.true_outputs, val_predict)
        _val_recall = recall_score(self.true_outputs, val_predict)
        print("val_f1: ", _val_f1, " val_precision: ", _val_precision, " _val_recall: ", _val_recall)
The function from_proba_to_output goes as follows:
def from_proba_to_output(probabilities, threshold):
    outputs = np.copy(probabilities)
    for i in range(len(outputs)):
        if float(outputs[i]) > threshold:
            outputs[i] = int(1)
        else:
            outputs[i] = int(0)
    return np.array(outputs)
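As a side note, the same thresholding can be written as a single vectorized comparison (a sketch equivalent to the loop above, apart from the output dtype):
def from_proba_to_output_vectorized(probabilities, threshold):
    # 1 where the probability exceeds the threshold, 0 elsewhere
    return (np.asarray(probabilities) > threshold).astype(int)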
I then train my model by referencing this Metrics class in the callbacks argument of fit_generator. I did not detail the implementation of my train_generator and valid_generator, as these data generators are specific to the classification problem at hand and posting them would only bring confusion.
model.fit_generator(
    train_generator, epochs=nbr_epochs, verbose=1, validation_data=valid_generator,
    callbacks=[Metrics(model, valid_data, true_outputs)])
As @Pedia said in his comment above, on_epoch_end, as stated in github.com/fchollet/keras/issues/5400, is the best approach.
I also suggest this work-around:
- install the keras_metrics package by ybubnov
- call model.fit(nb_epoch=1, ...) inside a for loop, taking advantage of the precision/recall metrics output after every epoch
Something like this:
for mini_batch in range(epochs):
    model_hist = model.fit(X_train, Y_train, batch_size=batch_size, epochs=1,
                           verbose=2, validation_data=(X_val, Y_val))
    precision = model_hist.history['val_precision'][0]
    recall = model_hist.history['val_recall'][0]
    f_score = (2.0 * precision * recall) / (precision + recall)
    print('F1-SCORE {}'.format(f_score))
