How to implement the loss function of the paper 'Semantic Image Inpainting with Deep Generative Models' in Keras

I have trained a GAN on the celebA dataset. After that I separated G and D. Then I pick one image from the celebA training set, say yTrue, and I want to find the closest image to yTrue that G can generate, say yPred. So the loss at the output of G is ||yTrue - yPred||_2^2, and I minimize it w.r.t. the generator input (a latent variable drawn from a normal distribution). Below is code that gives good results. Now the problem is that I also want to add the prior loss log(1 - D(G(z))) to the first line, but I don't see how to do it: D is not connected to G anymore, and if I directly add K.mean(K.log(1 - D.predict(G.output))) to the first line, D.predict returns a numpy array, not a tensor, which is not allowed.
loss = K.mean(K.square(yTrue - gf.output))
grad = K.gradients(loss, [gf.input])[0]
fn = K.function([gf.input], [grad])
generator_input = np.random.normal(0, 1, [1, 100])
for i in range(5000):
    grads = fn([generator_input])
    generator_input -= grads[0] * .01
recovered = gf.predict(generator_input)

In Keras, you work with the model's final output to create loss functions, and then you have to train the full network to achieve that loss (train G+D joined as a single model).
In the loss function you will have y_true and y_pred, and you use them to compare:
PS: if the MSE is not meant to take the output of the discriminator, please detail your question better.
import keras.backend as K

def customLoss(yTrue, yPred):
    mse = K.mean(K.square(yTrue - yPred))
    prior = K.mean(K.log(1 - yPred))
    return mse + prior
Pass this function when compiling the model
discriminator.compile(loss=customLoss,optimizer=.....)
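If, instead of retraining, you want to keep optimizing only the latent vector as in the question's snippet, one way to add the prior term is to call the discriminator model on the generator's symbolic output, so everything stays a tensor. A minimal sketch, assuming gf and D are the separated generator and discriminator from the question, yTrue is the target image, and the 0.01 prior weight is arbitrary:

import numpy as np
import keras.backend as K

# Call D on G's symbolic output so the prior term stays a Keras tensor
# (D.predict would run the graph eagerly and return a numpy array).
d_of_g = D(gf.output)

mse_loss = K.mean(K.square(yTrue - gf.output))
prior_loss = K.mean(K.log(1. - d_of_g))
total_loss = mse_loss + 0.01 * prior_loss  # 0.01 is an arbitrary prior weight

grad = K.gradients(total_loss, [gf.input])[0]
fn = K.function([gf.input], [grad])

generator_input = np.random.normal(0, 1, [1, 100])
for i in range(5000):
    grads = fn([generator_input])
    generator_input -= grads[0] * .01
recovered = gf.predict(generator_input)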

Related

Using weights in CrossEntropyLoss and BCELoss (PyTorch)

I am training a PyTorch model to perform binary classification. My minority class makes up about 10% of the data, so I want to use a weighted loss function. The docs for BCELoss and CrossEntropyLoss say that I can use a 'weight' for each sample.
However, when I declare CE_loss = nn.BCELoss() or nn.CrossEntropyLoss() and then do CE_Loss(output, target, weight=batch_weights), where output, target, and batch_weights are Tensors of batch_size, I get the following error message:
forward() got an unexpected keyword argument 'weight'
Another way you could accomplish your goal is to use reduction='none' when initializing the loss and then multiply the resulting tensor by your weights before computing the mean.
e.g.
import torch

loss = torch.nn.BCELoss(reduction='none')  # per-element losses, no reduction
model = torch.sigmoid                       # toy "model": just a sigmoid
weights = torch.rand(10, 1)
inputs = torch.rand(10, 1)
targets = torch.rand(10, 1)
intermediate_losses = loss(model(inputs), targets)
final_loss = torch.mean(weights * intermediate_losses)
Of course for your scenario you still would need to calculate the weights tensor. But hopefully this helps!
Could it be that you want to apply separate fixed weights to all elements of class 0 and class 1 in your dataset? (It is not clear what value you are passing for batch_weights here.) If so, that is not what the weight parameter in BCELoss does. The weight parameter expects you to pass a separate weight for every ELEMENT in the dataset, not for every CLASS. There are several ways around this: you could construct a weight table for every element, or you could use a custom loss function that does what you want:
def BCELoss_class_weighted(weights):

    def loss(input, target):
        input = torch.clamp(input, min=1e-7, max=1 - 1e-7)
        bce = - weights[1] * target * torch.log(input) \
              - (1 - target) * weights[0] * torch.log(1 - input)
        return torch.mean(bce)

    return loss
Note that it is important to add a clamp to avoid numerical instability.
HTH Jeroen
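For completeness, a small usage sketch of this function (the class weights and tensors here are made up for illustration):

import torch

criterion = BCELoss_class_weighted(weights=[1.0, 9.0])  # hypothetical weights for class 0 and class 1

logits = torch.randn(8, 1, requires_grad=True)
predictions = torch.sigmoid(logits)            # probabilities in (0, 1)
targets = torch.randint(0, 2, (8, 1)).float()  # binary labels

loss = criterion(predictions, targets)
loss.backward()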
The issue is where you are providing the weight parameter. As mentioned in the docs, here, the weight parameter should be provided during module instantiation.
For example, something like,
import torch
from torch import nn

weights = torch.FloatTensor([2.0, 1.2])
loss = nn.BCELoss(weight=weights)
You can find a more concrete example here or another helpful PT forum discussion here.
You need to pass the weights like below:
CE_loss = CrossEntropyLoss(weight=[…])
This is similar to the idea of @Jeroen Vuurens, but the class weights are determined by the target mean:
y_train_mean = y_train.mean()
bi_cls_w2 = 1/(1 - y_train_mean)
bi_cls_w1 = 1/y_train_mean - bi_cls_w2
bce_loss = nn.BCELoss(reduction='none')
loss_fun = lambda pred, target: ((bi_cls_w1*target + bi_cls_w2) * bce_loss(pred, target)).mean()
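To make the arithmetic concrete, using the roughly 10% minority share from the question:

y_train_mean = 0.1                        # fraction of positive targets
bi_cls_w2 = 1 / (1 - y_train_mean)        # ≈ 1.11, applied to every sample
bi_cls_w1 = 1 / y_train_mean - bi_cls_w2  # ≈ 8.89, added only where target == 1
# so positives end up weighted ≈ 10 and negatives ≈ 1.11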

Tensorflow 1.15 / Keras 2.3.1 Model.train_on_batch() returns more values than there are outputs/loss functions

I am trying to train a model that has more than one output and as a result, also has more than one loss function attached to it when I compile it.
I haven't done something similar in the past (not from scratch at least).
Here's some code I am using to figure out how this works.
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

batch_size = 50
input_size = 10
output_size = 20  # assumed value; output_size was not defined in the original snippet

i = Input(shape=(input_size,))
x = Dense(100)(i)
x_1 = Dense(output_size)(x)
x_2 = Dense(output_size)(x)
model = Model(i, [x_1, x_2])
model.compile(optimizer='adam', loss=['mse', 'mse'])

# Data creation
x = np.random.random_sample([batch_size, input_size]).astype('float32')
y = np.random.random_sample([batch_size, output_size]).astype('float32')

loss = model.train_on_batch(x, [y, y])
print(loss)  # sample output [0.8311912, 0.3519104, 0.47928077]
I would expect the variable loss to have two entries (one for each loss function), however, I get back three. I thought maybe one of them is the weighted average but that does not look to be the case.
Could anyone explain how passing in multiple loss functions works, because obviously, I am misunderstanding something.
I believe the three outputs are the sum of all the losses, followed by the individual losses on each output.
For example, if you look at the sample output you've printed there:
0.3519104 + 0.47928077 = 0.83119117 ≈ 0.8311912
Your assumption that there should be two losses is incorrect. You have a model with two outputs, and you specified one loss for each output, but the model has to be trained on a single loss, so Keras trains the model on a new loss that is the sum of the per-output losses.
You can control how these losses are mixed using the loss_weights parameter in model.compile. I think by default it takes weights values equal to 1.0.
So in the end, what train_on_batch returns is the total loss, the mse of output one, and the mse of output two. That is why you get three values.
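For example, to down-weight the second output's contribution to the total loss (the 0.5 here is just an illustrative value):

model.compile(optimizer='adam', loss=['mse', 'mse'], loss_weights=[1.0, 0.5])
# train_on_batch now returns [1.0 * mse_1 + 0.5 * mse_2, mse_1, mse_2]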

How to properly update the weights in PyTorch?

I'm trying to implement gradient descent with PyTorch according to this schema but can't figure out how to properly update the weights. It is just a toy example with 2 linear layers, 2 nodes in the hidden layer, and one output.
Learning rate = 0.05;
target output = 1
https://hmkcode.github.io/ai/backpropagation-step-by-step/
(Forward and backward pass diagrams from the linked tutorial omitted.)
My code is as follows:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.linear1 = nn.Linear(2, 2, bias=None)
        self.linear1.weight = torch.nn.Parameter(torch.tensor([[0.11, 0.21], [0.12, 0.08]]))
        self.linear2 = nn.Linear(2, 1, bias=None)
        self.linear2.weight = torch.nn.Parameter(torch.tensor([[0.14, 0.15]]))

    def forward(self, inputs):
        out = self.linear1(inputs)
        out = self.linear2(out)
        return out

losses = []
loss_function = nn.L1Loss()
model = MyNet()
optimizer = optim.SGD(model.parameters(), lr=0.05)
input = torch.tensor([2.0, 3.0])

print('weights before backpropagation = ', list(model.parameters()))

for epoch in range(1):
    result = model(input)
    loss = loss_function(result, torch.tensor([1.00], dtype=torch.float))
    print('result = ', result)
    print("loss = ", loss)
    model.zero_grad()
    loss.backward()
    print('gradients =', [x.grad.data for x in model.parameters()])
    optimizer.step()

print('weights after backpropagation = ', list(model.parameters()))
The result is the following:
weights before backpropagation = [Parameter containing:
tensor([[0.1100, 0.2100],
[0.1200, 0.0800]], requires_grad=True), Parameter containing:
tensor([[0.1400, 0.1500]], requires_grad=True)]
result = tensor([0.1910], grad_fn=<SqueezeBackward3>)
loss = tensor(0.8090, grad_fn=<L1LossBackward>)
gradients = [tensor([[-0.2800, -0.4200], [-0.3000, -0.4500]]),
tensor([[-0.8500, -0.4800]])]
weights after backpropagation = [Parameter containing:
tensor([[0.1240, 0.2310],
[0.1350, 0.1025]], requires_grad=True), Parameter containing:
tensor([[0.1825, 0.1740]], requires_grad=True)]
Forward pass values:
2*0.11 + 3*0.21 = 0.85 ->
2*0.12 + 3*0.08 = 0.48 -> 0.85*0.14 + 0.48*0.15 = 0.191 -> loss = 0.191 - 1 = -0.809
Backward pass: let's calculate w5 and w6 (the output node weights):
w = w - (prediction - target) * (gradient) * (output of previous node) * (learning rate)
w5 = 0.14 - (0.191 - 1) * 1 * 0.85 * 0.05 = 0.14 + 0.034 = 0.174
w6 = 0.15 - (0.191 - 1) * 1 * 0.48 * 0.05 = 0.15 + 0.019 = 0.169
In my example, Torch doesn't multiply the loss by the derivative, so we get wrong weights after updating. For the output node we got new weights w5, w6 = [0.1825, 0.1740], when it should be [0.174, 0.169].
Moving backward to update the first weight of the output node (w5), we need to calculate: (prediction - target) * (gradient) * (output of previous node) * (learning rate) = -0.809 * 1 * 0.85 * 0.05 = -0.034. The updated weight is w5 = 0.14 - (-0.034) = 0.174. But instead PyTorch calculated the new weight as 0.1825. It forgot to multiply by (prediction - target) = -0.809. For the output node we got gradients -0.8500 and -0.4800, but we still need to multiply them by the loss 0.809 and the learning rate 0.05 before we can update the weights.
What is the proper way of doing this?
Should we pass 'loss' as an argument to backward(), as in loss.backward(loss)?
That seems to fix it, but I couldn't find any example of this in the documentation.
You should use .zero_grad() with the optimizer, so optimizer.zero_grad(), not loss or model as suggested in the comments (though model is fine, it is just not as clear or readable IMO).
Other than that, your parameters are updated fine, so the error is not on PyTorch's side.
Based on gradient values you provided:
gradients = [tensor([[-0.2800, -0.4200], [-0.3000, -0.4500]]),
tensor([[-0.8500, -0.4800]])]
Let's multiply all of them by your learning rate (0.05):
gradients_times_lr = [tensor([[-0.014, -0.021], [-0.015, -0.0225]]),
tensor([[-0.0425, -0.024]])]
Finally, let's apply ordinary SGD (theta -= gradient * lr), to get exactly the same results as in PyTorch:
parameters = [tensor([[0.1240, 0.2310], [0.1350, 0.1025]]),
tensor([[0.1825, 0.1740]])]
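You can reproduce that update by hand with a few lines (values copied from the printouts above):

import torch

lr = 0.05
grads = [torch.tensor([[-0.2800, -0.4200], [-0.3000, -0.4500]]),
         torch.tensor([[-0.8500, -0.4800]])]
params = [torch.tensor([[0.1100, 0.2100], [0.1200, 0.0800]]),
          torch.tensor([[0.1400, 0.1500]])]

# Plain SGD: theta <- theta - lr * grad
updated = [p - lr * g for p, g in zip(params, grads)]
print(updated)  # matches the weights PyTorch reports after optimizer.step()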
What you have done is take the gradients calculated by PyTorch and multiply them by the output of the previous node, and that's not how it works!
What you've done:
w5= 0.14 -(0.191-1)*1*0.85*0.05= 0.14 + 0.034= 0.174
What should have been done (using PyTorch's results):
w5 = 0.14 - (-0.85*0.05) = 0.1825
No multiplication by the previous node is needed; it's done behind the scenes (that's what .backward() does: it calculates the correct gradients for all of the nodes), so there is no need to multiply them by the previous ones.
If you want to calculate them manually, you have to start at the loss (with delta being one) and backprop all the way down (do not use the learning rate here, it's a different story!).
After all of them are calculated, you can multiply each gradient by the optimizer's learning rate (or any other formula, e.g. momentum), and after this you have your correct update.
How to calculate backprop
The learning rate is not part of backpropagation; leave it alone until you have calculated all of the gradients (otherwise you mix up separate algorithms: optimization procedures and backpropagation).
1. Derivative of total error w.r.t. output
Well, I don't know why you are using Mean Absolute Error (while in the tutorial it is Mean Squared Error), and that's why both those results vary. But let's go with your choice.
The derivative of |y_true - y_pred| w.r.t. y_pred has magnitude 1 (it is just the sign of the difference), so it IS NOT the same as the loss value. Switch to MSE to get matching results: there the derivative is 2 * (y_pred - y_true), which is why MSE is usually written with a 1/2 factor, to cancel that factor of two.
In the MSE case you would indeed multiply by the error term (y_pred - y_true), but it depends entirely on the loss function (it was a bit unfortunate that the tutorial you were using didn't point this out).
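A tiny numerical check of that difference, using the question's numbers (y_pred = 0.191, y_true = 1):

import torch

y_pred = torch.tensor([0.191], requires_grad=True)
y_true = torch.tensor([1.0])

torch.nn.functional.l1_loss(y_pred, y_true).backward()
print(y_pred.grad)  # tensor([-1.]): magnitude 1, independent of the size of the error

y_pred.grad = None
(0.5 * (y_pred - y_true) ** 2).mean().backward()
print(y_pred.grad)  # tensor([-0.8090]): proportional to (y_pred - y_true)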
2. Derivative of total error w.r.t. w5
You could probably go from here, but... The derivative of the total error w.r.t. w5 is the output of h1 (0.85 in this case). We multiply it by the derivative of the total error w.r.t. the output (which has magnitude 1!) and obtain 0.85, as done in PyTorch. The same idea goes for w6.
I seriously advise you not to confuse the learning rate with backprop; you are making your life harder (and backprop is not easy IMO, quite counterintuitive as it is), and those are two separate things (I can't stress that enough).
This source is nice, more step-by-step, with a slightly more complicated network (activations included), so you can get a better grasp if you go through all of it.
Furthermore, if you are really keen (and you seem to be) to learn more of the ins and outs of this, calculate the weight corrections for other optimizers (say, Nesterov momentum), so you see why we should keep those ideas separate.

How to define custom cost function that depends on input when using ImageDataGenerator in Keras?

I would like to define a custom cost function
def custom_objective(y_true, y_pred):
    ....
    return L
that will depend not only on y_true and y_pred, but on some feature of the corresponding x that produced y_pred. The only way I can think of doing this is to "hide" the relevant features in y_true, so that y_true = [usual_y_true, relevant_x_features], or something like that.
There are two main problems I am having with implementing this:
1) Changing the shape of y_true means I need to pad y_pred with some garbage so that their shapes are the same. I can do this by modifying the last layer of my model
2) I used data augmentation like so:
datagen = ImageDataGenerator(preprocessing_function=my_augmenter)
where my_augmenter() is the function that should also give me the relevant x features to use in custom_objective() above. However, training with
model.fit_generator(datagen.flow(x_train, y_train, batch_size=1), ...)
doesn't seem to give me access to the features calculated with my_augmenter.
I suppose I could hide the features in the augmented x_train, copy them right away in my model setup, and then feed them directly into y_true or something like that, but surely there must be a better way to do this?
Maybe you could create a two part model with:
Inner model: original model that predicts desired outputs
Outer model:
Takes y_true data as inputs
Takes features as inputs
Outputs the loss itself (instead of predicted data)
So, suppose you already have the originalModel defined. Let's define the outer model.
#this model has three inputs:
originalInputs = originalModel.input
yTrueInputs = Input(shape_of_y_train)
featureInputs = Input(shape_of_features)
#the original outputs will become an input for a custom loss layer
originalOutputs = originalModel.output
#this layer contains our custom loss
loss = Lambda(innerLoss)([originalOutputs, yTrueInputs, featureInputs])
#outer model
outerModel = Model([originalInputs, yTrueInputs, featureInputs], loss)
Now, our custom inner loss:
def innerLoss(x):
    y_pred = x[0]
    y_true = x[1]
    features = x[2]
    .... calculate and return loss here ....
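For instance, a purely illustrative innerLoss (not necessarily your actual objective; it assumes features broadcasts against y_pred's shape) could be an MSE scaled by the feature values:

import keras.backend as K

def innerLoss(x):
    y_pred, y_true, features = x
    # illustrative only: per-sample MSE scaled by the corresponding feature values
    return K.mean(features * K.square(y_true - y_pred), axis=-1, keepdims=True)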
Now, for this model that already contains a custom loss "inside" it, we don't actually want a final loss function, but since keras demands it, we will use the final loss as just return y_pred:
def finalLoss(true, pred):
    return pred
This will allow us to train passing just a dummy y_true.
But of course, we also need a custom generator, otherwise we can't get the features.
Consider you already have originalGenerator =datagen.flow(x_train, y_train, batch_size=1) defined:
def customGenerator(originalGenerator):
    while True:  # keras needs infinite generators
        x, y = next(originalGenerator)
        features = ____extract features here____(x)
        yield (x, y, features), y
        # the last y will be a dummy output, necessary but not used
You could also, if you want the extra functionality of randomizing batch order and using multiprocessing, implement a class CustomGenerator(keras.utils.Sequence) following the same logic. The help page shows how.
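A rough Sequence-based sketch of the same idea (it assumes x_train, y_train and some extract_features helper exist; the second returned y is again just the dummy target):

import numpy as np
from keras.utils import Sequence

class CustomGenerator(Sequence):
    def __init__(self, x, y, batch_size=1):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        batch = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        x_batch, y_batch = self.x[batch], self.y[batch]
        features = extract_features(x_batch)  # hypothetical helper
        return [x_batch, y_batch, features], y_batch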
So, let's compile and train the outer model (this also trains the inner model so you can use it later for predicting):
outerModel.compile(optimizer=..., loss=finalLoss)
outerModel.fit_generator(customGenerator(originalGenerator), batchesInOriginalGenerator,
epochs=...)

Multi-label classification with class weights in Keras

I have 1000 classes in the network and they have multi-label outputs. For each training example, the number of positive outputs is the same (i.e. 10), but they can be assigned to any of the 1000 classes. So 10 classes have output 1 and the remaining 990 have output 0.
For the multi-label classification, I am using 'binary cross-entropy' as the cost function and 'sigmoid' as the activation function. When I tried the rule of 0.5 as the cut-off for 1 or 0, all of the predictions were 0. I understand this is a class imbalance problem. From this link, I understand that I might have to create extra output labels. Unfortunately, I haven't been able to figure out how to incorporate that into a simple neural network in Keras.
import keras
from keras.layers import Input, Dense
from keras.models import Model

nclasses = 1000

# if we wanted to maximize an imbalance problem!
#class_weight = {k: len(Y_train)/(nclasses*(Y_train==k).sum()) for k in range(nclasses)}

inp = Input(shape=[X_train.shape[1]])
x = Dense(5000, activation='relu')(inp)
x = Dense(4000, activation='relu')(x)
x = Dense(3000, activation='relu')(x)
x = Dense(2000, activation='relu')(x)
x = Dense(nclasses, activation='sigmoid')(x)
model = Model(inputs=[inp], outputs=[x])

adam = keras.optimizers.Adam(lr=0.00001)
model.compile(adam, 'binary_crossentropy')
history = model.fit(X_train, Y_train, batch_size=32, epochs=50, verbose=0, shuffle=False)
Could anyone help me with the code here and I would also highly appreciate if you could suggest a good 'accuracy' metric for this problem?
Thanks a lot :) :)
I have a similar problem and unfortunately have no answer for most of the questions. Especially the class imbalance problem.
In terms of metrics there are several possibilities: in my case I use the top 1/2/3/4/5 results and check whether one of them is right. Because in your case you always have the same number of labels equal to 1, you could take your top 10 results, see what percentage of them is right, and average this result over your batch size. I didn't find a way to include this algorithm as a Keras metric; instead, I wrote a callback that calculates the metric on epoch end on my validation data set.
Also, if you predict the top n results on a test dataset, see how many times each class is predicted. The Counter class is really convenient for this purpose.
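A rough sketch of that top-10 check (assuming y_true is the multi-hot label matrix and y_prob are the predicted per-class probabilities; the names are illustrative):

import numpy as np

def top_k_hit_rate(y_true, y_prob, k=10):
    top_k = np.argsort(y_prob, axis=1)[:, -k:]        # indices of the k highest scores per sample
    hits = np.take_along_axis(y_true, top_k, axis=1)  # 1 where a top-k class is truly positive
    return hits.mean()

# e.g. top_k_hit_rate(Y_val, model.predict(X_val), k=10)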
Edit: I found a method to include class weights without splitting the output.
You need a numpy 2d array containing weights with shape [number classes to predict, 2 (background and signal)].
Such an array could be calculated with this function:
def calculating_class_weights(y_true):
    from sklearn.utils.class_weight import compute_class_weight
    number_dim = np.shape(y_true)[1]
    weights = np.empty([number_dim, 2])
    for i in range(number_dim):
        weights[i] = compute_class_weight('balanced', [0., 1.], y_true[:, i])
    return weights
The solution is now to build your own binary crossentropy loss function in which you multiply your weights yourself:
def get_weighted_loss(weights):
    def weighted_loss(y_true, y_pred):
        return K.mean((weights[:, 0] ** (1 - y_true)) * (weights[:, 1] ** y_true) * K.binary_crossentropy(y_true, y_pred), axis=-1)
    return weighted_loss
weights[:,0] is an array with all the background weights and weights[:,1] contains all the signal weights.
All that is left is to include this loss into the compile function:
model.compile(optimizer=Adam(), loss=get_weighted_loss(class_weights))
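Tying it back to the question's model, usage would look roughly like this (assuming X_train and Y_train as in the question):

from keras.optimizers import Adam

class_weights = calculating_class_weights(Y_train)  # shape [nclasses, 2]
model.compile(optimizer=Adam(lr=0.00001), loss=get_weighted_loss(class_weights))
history = model.fit(X_train, Y_train, batch_size=32, epochs=50, verbose=0, shuffle=False)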
