In the Keras VAE implementation:
class Sampling(layers.Layer):
    """Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        batch = tf.shape(z_mean)[0]
        dim = tf.shape(z_mean)[1]
        epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon
My question is about the tf.exp(0.5 * z_log_var) part: why do we use the exponential instead of using the variance as it is? I mean, why not just:
return z_mean + z_log_var * epsilon
I want to know why it is tf.exp(0.5 * z_log_var) and not simply z_log_var.
I am using it for tabular data and not images. I mean, I am using dense layers and not Conv layers.
First, as you can guess from the name, the encoder outputs the log variance. This ensures that the variance is always positive (exp(z_log_var) > 0), because the exponential function is strictly positive.
The part done in this snippet is called the reparameterization trick. The key idea for normal distributions is that any normal distribution can be expressed by sampling from a standard Gaussian and shifting/scaling it: z = mu + sigma * epsilon with epsilon ~ N(0, 1). For this we need the standard deviation, and exp(0.5 * z_log_var) converts the log variance into the standard deviation: since z_log_var = log(sigma^2), we have exp(0.5 * z_log_var) = exp(log(sigma)) = sigma.
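To see the conversion numerically, here is a minimal sketch in plain NumPy (the concrete values are chosen just for illustration):

import numpy as np

sigma = 0.7
z_log_var = np.log(sigma ** 2)    # what the encoder learns: log(sigma^2)
std = np.exp(0.5 * z_log_var)     # 0.5 * log(sigma^2) = log(sigma), so exp(...) = sigma
print(np.isclose(std, sigma))     # True

# sampling z ~ N(mu, sigma^2) via the reparameterization trick
mu = 1.5
epsilon = np.random.randn(10000)
z = mu + std * epsilon
print(z.mean(), z.std())          # approximately 1.5 and 0.7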
As in the Keras VAE implementation (https://keras.io/examples/generative/vae/), we have to pass a mean and a log variance to define the distribution in the latent space.
class Sampling(layers.Layer):
    """Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        batch = tf.shape(z_mean)[0]
        dim = tf.shape(z_mean)[1]
        epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon
# flatten layer
x = layers.Flatten()(x)
x = layers.Dense(16, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z], name="encoder")
encoder.summary()
I don't understand how two dense layers can represent the mean and the log variance without doing any special calculation. From the code above, it looks like we simply create two dense layers that receive the output of the previous flatten layer.
The dense output layers are trained to output the mean and log variance for the input via the Kullback–Leibler divergence term in the loss function.
In the Keras example VAE model it is calculated in the custom train_step using the output of the dense layers:
kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
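In the same example, this per-dimension term is then summed over the latent dimensions, averaged over the batch, and added to the reconstruction loss:

kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
total_loss = reconstruction_loss + kl_loss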
I'm doing binary classification, hence I used a binary cross-entropy loss:
criterion = torch.nn.BCELoss()
However, I'm getting an error:
Using a target size (torch.Size([64, 1])) that is different to the input size (torch.Size([64, 2])) is deprecated. Please ensure they have the same size.
My model ends with:
x = self.wave_block6(x)
x = self.sigmoid(self.fc(x))
return x.squeeze()
I tried removing the squeeze, but to no avail. My batch size is 64. It seems like I'm doing something simple wrong here. Is my model giving 1 output and BCE loss expecting 2 inputs? Which loss should I use then?
Binary cross-entropy loss (BCELoss) is used for binary classification tasks. So if N is your batch size, your model output should have shape [N, 1] (a single value per sample) and your labels shape [N]. Therefore just squeeze your output along the 2nd dimension and pass it to the loss function.
Here is a minimal working example:
import torch

a = torch.randn(64, 1)             # model output, shape [64, 1]
b = torch.randn(64)                # raw scores, shape [64]
loss = torch.nn.BCELoss()
b = torch.round(torch.sigmoid(b))  # just to create some 0/1 labels
a = torch.sigmoid(a).squeeze(1)    # probabilities, shape [64]
l = loss(a, b)
Update: based on the conversation in the comments, focal loss can be defined as follows:
import torch
import torch.nn as nn
import torch.nn.functional as F

class focalLoss(nn.Module):
    def __init__(self, alpha=0.25, gamma=3):
        super(focalLoss, self).__init__()
        self.alpha = alpha
        self.gamma = gamma

    def forward(self, pred_logits: torch.Tensor, target: torch.Tensor):
        batch_size = pred_logits.shape[0]
        pred_logits = pred_logits.view(batch_size, -1)
        target = target.view(batch_size, -1)
        pred = pred_logits.sigmoid()  # convert logits to probabilities
        # per-element cross-entropy; binary_cross_entropy expects probabilities, not logits
        ce = F.binary_cross_entropy(pred, target, reduction='none')
        # alpha-balancing: weight positives by alpha, negatives by (1 - alpha)
        alpha = target * self.alpha + (1. - target) * (1. - self.alpha)
        # pt is the model's probability for the true class
        pt = torch.where(target == 1, pred, 1 - pred)
        # down-weight easy examples with the (1 - pt)^gamma modulating factor
        return (alpha * (1. - pt) ** self.gamma * ce).mean()
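A quick usage sketch (model, inputs, and labels are placeholders; the raw logits are passed in, since the loss applies the sigmoid itself):

criterion = focalLoss(alpha=0.25, gamma=3)
logits = model(inputs)                 # raw logits, no sigmoid at the output
l = criterion(logits, labels.float())  # labels: 0/1 floats of matching shape
l.backward()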
I am trying to do multiclass classification in Keras. So far I have been using categorical_crossentropy as the loss function. But since the required metric is weighted F1, I am not sure if categorical_crossentropy is the best loss choice. I was trying to implement a weighted F1 score in Keras using sklearn.metrics.f1_score, but due to the problems in conversion between a tensor and a scalar, I am running into errors.
Something like this:
def f1_loss(y_true, y_pred):
    return 1 - f1_score(np.argmax(y_true, axis=1), np.argmax(y_pred, axis=1), average='weighted')
Followed by
model.compile(loss=f1_loss, optimizer=opt)
How do I write this loss function in keras?
Edit:
The shape of y_true and y_pred is (n_samples, n_classes); in my case it is (n_samples, 4).
y_true and y_pred both are tensors so sklearn's f1_score cannot work directly on them. I need a function that calculates weighted f1 on tensors.
The variable names are self-explanatory:
from keras import backend as K

def f1_weighted(true, pred):  # shapes (batch, 4)

    # for a metric, include these two lines; for a loss, don't include them
    # (they are meant to round 'pred' to exact zeros and ones)
    #predLabels = K.argmax(pred, axis=-1)
    #pred = K.one_hot(predLabels, 4)

    ground_positives = K.sum(true, axis=0) + K.epsilon()       # = TP + FN
    pred_positives = K.sum(pred, axis=0) + K.epsilon()         # = TP + FP
    true_positives = K.sum(true * pred, axis=0) + K.epsilon()  # = TP
    # all with shape (4,)

    precision = true_positives / pred_positives
    recall = true_positives / ground_positives
    # both = 1 if ground_positives == 0 or pred_positives == 0
    # shape (4,)

    f1 = 2 * (precision * recall) / (precision + recall + K.epsilon())
    # still with shape (4,)

    weighted_f1 = f1 * ground_positives / K.sum(ground_positives)
    weighted_f1 = K.sum(weighted_f1)

    return 1 - weighted_f1  # for a metric, return only 'weighted_f1'
Important notes:
This loss will work batchwise (as any Keras loss).
So if you are working with small batch sizes, the results will be unstable between each batch, and you may get a bad result. Use big batch sizes, enough to include a significant number of samples for all classes.
Since this loss collapses the batch dimension, you will not be able to use some Keras features that depend on the batch size, such as sample weights.
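For completeness, a usage sketch for the function above (model, opt, X_train, and y_train are placeholders; y_train must be one-hot encoded with shape (n_samples, 4)):

model.compile(loss=f1_weighted, optimizer=opt)
# large batches, so every batch contains a significant number of samples per class
model.fit(X_train, y_train, batch_size=512, epochs=10)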
I have a task in which I input a 500x500x1 image and get out a 500x500x1 binary segmentation. Only a small fraction of the 500x500 pixels should be triggered (small "targets"). I'm using a sigmoid activation at the output. Since such a small fraction is desired to be positive, training tends to stall with all outputs at zero, or very close to it. I've written my own loss function that partially deals with this, but I'd like to use binary cross-entropy with class weighting if possible.
My question is in two parts:
If I naively apply binary_crossentropy as the loss to my 500x500x1 output, will it apply on a per pixel basis as desired?
Is there a way for Keras to apply class weighting with the single sigmoid output per pixel?
To answer your questions:
Yes, binary_crossentropy will work per pixel, provided you feed your image segmentation network pairs of the form (500x500x1 grayscale image, 500x500x1 corresponding mask).
By passing the class_weight parameter to model.fit().
Suppose you have 2 classes with a 90%-10% distribution. Then you may want to penalise your algorithm 9 times more when it makes a mistake on the under-represented class (the class with 10% in this case). Suppose you have 900 examples of class 0 and 100 examples of class 1.
Then your class weights dictionary (there are multiple ways to compute it; what matters is assigning a greater weight to the under-represented class):
class_weights = {0: 1000/900, 1: 1000/100}
Example: model.fit(X_train, Y_train, epochs=30, batch_size=32, class_weight=class_weights)
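As a side note, one common way to compute such a dictionary automatically is scikit-learn's compute_class_weight (a sketch; y_train is assumed to be the array of integer labels):

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

classes = np.unique(y_train)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_train)
class_weights = dict(zip(classes, weights))  # e.g. {0: 0.56, 1: 5.0} for a 900/100 split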
NOTE: class_weight is only available for 2D cases. For 3D or higher-dimensional outputs, one should use sample_weight instead; for segmentation purposes, you would rather use the sample_weight parameter.
The biggest gain, however, will come from other loss functions. Some losses, apart from binary_crossentropy and categorical_crossentropy, inherently perform better on unbalanced datasets; Dice loss is one such loss function.
Keras implementation:
from keras import backend as K

smooth = 1.

def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)
You can also use as a loss function the sum of binary_crossentropy and other losses, if it suits you, e.g. loss = dice_loss + bce.
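A minimal sketch of such a combined loss, built on the dice_coef_loss defined above (equal weighting of the two terms is an assumption; tune it for your data):

def bce_dice_loss(y_true, y_pred):
    # per-pixel binary cross-entropy, averaged, plus the Dice loss
    bce = K.mean(K.binary_crossentropy(y_true, y_pred))
    return bce + dice_coef_loss(y_true, y_pred)

# then compile as usual, e.g. model.compile(optimizer='adam', loss=bce_dice_loss)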
It's known that sparse_categorical_crossentropy in Keras averages the loss across all categories. But what if I am mostly concerned with one particular category? For instance, if I want to define precision (= TP/(TP+FP)) with respect to that category as the loss function, how can I write it? Thanks!
My code is:
from keras import backend as K
def my_loss(y_true, y_pred):
    y_true = K.cast(y_true, "float32")
    y_pred = K.cast(K.argmax(y_pred), "float32")
    numerator = K.sum(K.cast(K.equal(y_true, y_pred) & K.equal(y_true, 0), "float32"))
    denominator = K.sum(K.cast(K.equal(y_pred, 0), "float32"))
    return -(numerator + K.epsilon()) / (denominator + K.epsilon())
And the error is:
argmax is not differentiable
I don't recommend using precision as the loss function, for two reasons:
1. It is not differentiable, so it cannot be used as a loss function for a neural network.
2. You can maximize it trivially by predicting every instance as the negative class, which makes no sense.
One alternative is to use F1 as the loss function, then tune the probability cut-off manually to obtain a desirable level of precision while keeping recall from getting too low.
You can pass the fit method a class_weight parameter in which you determine which classes are more important.
It should be a dictionary:
class_weight = {
    0: 1,    # class 0 has weight 1
    1: 0.5,  # class 1 has half the importance of class 0
    2: 0.7,
    ...
}
Custom loss
If that is not exactly what you need, you can create loss functions like:
import keras.backend as K
def customLoss(yTrue, yPred):
    # create operations with yTrue and yPred
    #  - yTrue = the true output data (equal to y_train in most examples)
    #  - yPred = the model's calculated output
    #  - yTrue and yPred have exactly the same shape: (batch_size, output_dimensions, ...),
    #    according to the output shape of the last layer and the shape of y_train
    # all operations must be like +, -, *, / or operations from K (backend)
    return someResultingTensor
You cannot use argmax, as it is not differentiable; backpropagation will not work if the loss function cannot be differentiated.
Instead of using argmax, work with y_true * y_pred directly.
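To make that concrete, here is a sketch of a differentiable "soft precision" loss for one class of interest. It assumes y_true is one-hot and y_pred holds softmax probabilities of shape (batch, n_classes); the function name and class_index are illustrative, not from the original answer:

from keras import backend as K

def soft_precision_loss(y_true, y_pred, class_index=0):
    # soft TP: predicted probability mass on the class of interest, where it is the true class
    true_pos = K.sum(y_true[:, class_index] * y_pred[:, class_index])
    # soft TP + FP: total predicted probability mass assigned to that class
    pred_pos = K.sum(y_pred[:, class_index])
    precision = (true_pos + K.epsilon()) / (pred_pos + K.epsilon())
    return 1 - precision  # minimize 1 - precision instead of maximizing precision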