Pass user specified parameters to DataLoader - pytorch

I am using U-Net and implementing the weighting technique described in the papers from 2015 (U-Net: Convolutional Networks for Biomedical Image Segmentation) and 2019 (U-Net – Deep Learning for Cell Counting, Detection, and Morphometry). In that technique there is a variance σ and a weight w_0. I would like σ in particular to be a learnable parameter instead of guessing which value is best from dataset to dataset.
From what I found, I can do this using nn.Parameter.
To use the learned σ from epoch to epoch, I somehow need to pass this new value to the __getitem__ method of the Dataset through the DataLoader.
My current take on this is to extend torch.utils.data.DataLoader so that the new __init__ has an extra parameter accepting the user-specified/learnable parameters. However, looking at the source code of torch.utils.data.DataLoader, I do not understand where and how the DataLoader calls the Dataset instance, and hence how to pass these parameters.
Code-wise, in the Dataset definition there is the method
def __getitem__(self, index):
that I can change as
def __getitem__(self, index, sigma):
so that it makes use of the updated, newly learned σ.
My problem is that during training I iterate through the training dataset as
for epoch in range(checkpoint['epoch'], num_epochs):
    ...
    for ii, (X, y, y_weight, fname) in enumerate(dataLoader[phase]):
In that enumeration of the DataLoader, how can I pass the new σ so that the DataLoader will pass it to the Dataset's __getitem__ method mentioned above?
EDIT
Currently, I define a parameter sigma inside the Dataset class:
class MedicalImageDataset(Dataset):
    def __init__(self, fname, img_transform=None, mask_transform=None, weight_transform=None, sigma=8):
        ...
        self.sigma = sigma

    def __getitem__(self, index):
        sigma = self.sigma
        ...
which I update through the DataLoader as
dataLoader['train'].dataset.sigma = model.sigma
where,
model.sigma
is a custom parameter defined as
model.register_parameter(name='sigma', param=torch.nn.Parameter(torch.tensor(16, dtype=torch.float16), requires_grad=True))
after creating the model.
My problem is that model.sigma doesn't seem to be updated from epoch to epoch; specifically, it stays at its initial value. Why is this?
Looking at optimizer.state_dict(), I couldn't find any parameter named 'sigma', whereas I can find one in model.named_parameters().
Finally, this parameter sigma is not attached to any layer; it's kind of "free".

What you need to do is to set sigma as an attribute of the Dataset and change it between epochs.
For the dataset definition
class UNetDataset(object):
    def __init__(self, ..., sigma=5):
        self.sigma = sigma
Now, within __getitem__, you can access the sigma value via self.sigma.
Then, within your training loop, after every epoch, you can change the sigma value by setting the sigma attribute of the Dataset:
for epoch in range(num_epochs):
    dataset.sigma = ...  # whatever value you want
    for i, (x, y) in enumerate(dataloader):
        ...
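As a rough sketch for the original use case, where σ lives in the model as an nn.Parameter (the names model, dataset and dataLoader below are placeholders taken from the question), the per-epoch update could look like this; .item() hands the dataset a plain Python float, detached from the graph:

for epoch in range(num_epochs):
    # copy the current value of the learnable sigma into the dataset,
    # so that __getitem__ sees the updated value during this epoch
    dataset.sigma = model.sigma.item()
    for ii, (X, y, y_weight, fname) in enumerate(dataLoader['train']):
        ...  # forward pass, loss, backward, optimizer step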

Related

How does keras.utils.Sequence work?

I am trying to create a data pipeline for U-Net for image segmentation. I came across the keras.utils.Sequence class, with which I can create a data pipeline, but I am unable to understand how it works.
Links for the code: Keras code, source code.
def __iter__(self):
    """Create a generator that iterate over the Sequence."""
    for item in (self[i] for i in range(len(self))):
        yield item
I would highly appreciate it if anyone could tell me how this works.
You don't need a generator; the Sequence class is there to manage that. You need to define a class that inherits from tensorflow.keras.utils.Sequence and defines the methods:
__init__, __getitem__, __len__. In addition, you can define the method on_epoch_end, which is called at the end of each epoch and is usually used to shuffle the sample indexes.
There is an example in the link you gave (TensorFlow Sequence).
Below is another example of Sequence.
Note that you can pass the data to the __init__ constructor, but you may as well read the data from files in the __getitem__ method, assuming you know where to read it, e.g. by passing the name of a directory or directories into the constructor. This is necessary if there is a lot of data.
from tensorflow import keras
import numpy as np

class SequenceExample(keras.utils.Sequence):
    def __init__(self, x_in, y_in, batch_size, shuffle=True):
        # Initialization
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.x = x_in
        self.y = y_in
        self.datalen = len(y_in)
        self.indexes = np.arange(self.datalen)
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __getitem__(self, index):
        # get batch indexes from shuffled indexes
        batch_indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        x_batch = self.x[batch_indexes]
        y_batch = self.y[batch_indexes]
        return x_batch, y_batch

    def __len__(self):
        # Denotes the number of batches per epoch
        return self.datalen // self.batch_size

    def on_epoch_end(self):
        # Updates indexes after each epoch
        self.indexes = np.arange(self.datalen)
        if self.shuffle:
            np.random.shuffle(self.indexes)
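As a usage sketch (assuming model is an already-built and compiled Keras model whose input matches x, and x/y are in-memory arrays), the Sequence instance is passed to fit directly; Keras calls __len__ and __getitem__ itself and invokes on_epoch_end after every epoch:

# usage sketch: model is an assumed, already-compiled Keras model
x = np.random.rand(1000, 20)
y = np.random.rand(1000, 1)
seq = SequenceExample(x, y, batch_size=32)
model.fit(seq, epochs=10)  # on_epoch_end is called automatically after each epoch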

Using weights in CrossEntropyLoss and BCELoss (PyTorch)

I am training a PyTorch model to perform binary classification. My minority class makes up about 10% of the data, so I want to use a weighted loss function. The docs for BCELoss and CrossEntropyLoss say that I can use a 'weight' for each sample.
However, when I declare CE_loss = nn.BCELoss() or nn.CrossEntropyLoss() and then do CE_Loss(output, target, weight=batch_weights), where output, target, and batch_weights are Tensors of batch_size, I get the following error message:
forward() got an unexpected keyword argument 'weight'
Another way you could accomplish your goal is to use reduction='none' when initializing the loss, and then multiply the resulting per-element loss tensor by your weights before computing the mean.
e.g.
import torch

loss = torch.nn.BCELoss(reduction='none')
model = torch.sigmoid
weights = torch.rand(10, 1)
inputs = torch.rand(10, 1)
targets = torch.rand(10, 1)
intermediate_losses = loss(model(inputs), targets)
final_loss = torch.mean(weights * intermediate_losses)
Of course for your scenario you still would need to calculate the weights tensor. But hopefully this helps!
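As a rough sketch of that remaining step, assuming binary 0/1 targets and placeholder per-class weights, the per-sample weights tensor can be derived directly from the targets:

# sketch: build per-sample weights from binary targets (placeholder class weights)
w_pos, w_neg = 9.0, 1.0   # e.g. when the positive class is ~10% of the data
weights = torch.where(targets == 1.0, torch.tensor(w_pos), torch.tensor(w_neg))
final_loss = torch.mean(weights * intermediate_losses)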
Could it be that you want to apply separate fixed weights to all elements of class 0 and class 1 in your dataset? (It is not clear what value you are passing for batch_weights here.) If so, that is not what the weight parameter in BCELoss does. The weight parameter expects you to pass a separate weight for every ELEMENT in the dataset, not for every CLASS. There are several ways around this: you could construct a weight table for every element, or you could use a custom loss function that does what you want:
import torch

def BCELoss_class_weighted(weights):
    def loss(input, target):
        input = torch.clamp(input, min=1e-7, max=1 - 1e-7)
        bce = - weights[1] * target * torch.log(input) \
              - (1 - target) * weights[0] * torch.log(1 - input)
        return torch.mean(bce)
    return loss
Note that it is important to add a clamp to avoid numerical instability.
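A brief usage sketch of the custom loss above (the weight values and the predictions/targets names are placeholders; the predictions must already be probabilities in (0, 1)):

# sketch: weights[0] applies to class 0, weights[1] to class 1 (placeholder values)
criterion = BCELoss_class_weighted(weights=[1.0, 9.0])
loss_value = criterion(predictions, targets)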
HTH Jeroen
The issue is where you are providing the weight parameter. As mentioned in the docs here, the weight parameter should be provided during module instantiation.
For example, something like,
import torch
from torch import nn

weights = torch.FloatTensor([2.0, 1.2])
loss = nn.BCELoss(weight=weights)
You can find a more concrete example here or another helpful PT forum discussion here.
You need to pass the weights when instantiating the loss, like below:
CE_loss = CrossEntropyLoss(weight=[…])
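For example (illustrative values only; logits and target_class_indices are assumed tensors), the weight is a tensor with one entry per class, passed at instantiation:

# illustrative class weights: one entry per class
class_weights = torch.tensor([0.1, 0.9])
CE_loss = nn.CrossEntropyLoss(weight=class_weights)
loss_value = CE_loss(logits, target_class_indices)  # logits: (N, 2), targets: (N,) with values in {0, 1}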
This is similar to the idea of @Jeroen Vuurens, but the class weights are determined by the target mean:
y_train_mean = y_train.mean()
bi_cls_w2 = 1 / (1 - y_train_mean)         # weight applied to class-0 samples
bi_cls_w1 = 1 / y_train_mean - bi_cls_w2   # so class-1 samples end up weighted 1 / y_train_mean
bce_loss = nn.BCELoss(reduction='none')
loss_fun = lambda pred, target: ((bi_cls_w1 * target + bi_cls_w2) * bce_loss(pred, target)).mean()
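For instance, if the positive class makes up 10% of y_train (mean 0.1), then bi_cls_w2 = 1/0.9 ≈ 1.11 and bi_cls_w1 = 10 - 1.11 ≈ 8.89, so negative samples are weighted ≈ 1.11 and positive samples ≈ 10, i.e. each class is weighted inversely to its frequency.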

How to compute gradient of the error with respect to the model input?

Given a simple 2-layer neural network, the traditional idea is to compute the gradient w.r.t. the weights/model parameters. For an experiment, I want to compute the gradient of the error w.r.t. the input. Are there existing PyTorch methods that allow me to do this?
More concretely, consider the following neural network:
import torch.nn as nn
import torch.nn.functional as F

class NeuralNet(nn.Module):
    def __init__(self, n_features, n_hidden, n_classes, dropout):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(n_features, n_hidden)
        self.sigmoid = nn.Sigmoid()
        self.fc2 = nn.Linear(n_hidden, n_classes)
        self.dropout = dropout

    def forward(self, x):
        x = self.sigmoid(self.fc1(x))
        x = F.dropout(x, self.dropout, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
I instantiate the model and an optimizer for the weights as follows:
import torch.optim as optim

model = NeuralNet(n_features=args.n_features,
                  n_hidden=args.n_hidden,
                  n_classes=args.n_classes,
                  dropout=args.dropout)
optimizer_w = optim.SGD(model.parameters(), lr=0.001)
While training, I update the weights as usual. Now, given that I have values for the weights, I should be able to use them to compute the gradient w.r.t. the input. I am unable to figure out how.
def train(epoch):
    t = time.time()
    model.train()
    optimizer_w.zero_grad()
    output = model(features)
    loss_train = F.nll_loss(output[idx_train], labels[idx_train])
    acc_train = accuracy(output[idx_train], labels[idx_train])
    loss_train.backward()
    optimizer_w.step()
    # grad_features = loss_train.backward() w.r.t. features
    # features -= 0.001 * grad_features

for epoch in range(args.epochs):
    train(epoch)
It is possible: just set input.requires_grad = True for each input batch you're feeding in, and then after loss.backward() you should see that input.grad holds the expected gradient. In other words, if your input to the model (which you call features in your code) is some M x N x ... tensor, features.grad will be a tensor of the same shape, where each element of grad holds the gradient with respect to the corresponding element of features. In my comments below, I use i as a generalized index - if your input has, for instance, 3 dimensions, replace it with features.grad[i, j, k], etc.
Regarding the error you're getting: PyTorch operations build a tree representing the mathematical operation they are describing, which is then used for differentiation. For instance c = a + b will create a tree where a and b are leaf nodes and c is not a leaf (since it results from other expressions). Your model is the expression, and its inputs as well as parameters are the leaves, whereas all intermediate and final outputs are not leaves. You can think of leaves as "constants" or "parameters" and of all other variables as of functions of those. This message tells you that you can only set requires_grad of leaf variables.
Your problem is that at the first iteration, features is random (or however else you initialized it) and is therefore a valid leaf. After your first iteration, features is no longer a leaf, since it becomes an expression calculated based on the previous ones. In pseudocode, you have
f_1 = initial_value # valid leaf
f_2 = f_1 + your_grad_stuff # not a leaf: f_2 is a function of f_1
To deal with that, you need to use detach, which breaks the links in the tree and makes autograd treat the tensor as if it were a constant, no matter how it was created. In particular, no gradient calculations will be backpropagated through detach. So you need something like
features = features.detach() - 0.01 * features.grad
Note: perhaps you need to sprinkle a couple more detaches here and there, which is hard to say without seeing your whole code and knowing the exact purpose.
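Putting the pieces together for the question's train() loop, a minimal sketch (names taken from the question; the 0.001 step size is the one from the question's comment) could look like this:

# sketch: enable gradients on the input, backprop, then update the input itself
features.requires_grad_(True)                 # features must be a leaf tensor here
optimizer_w.zero_grad()
output = model(features)
loss_train = F.nll_loss(output[idx_train], labels[idx_train])
loss_train.backward()
optimizer_w.step()                            # usual weight update
grad_features = features.grad                 # gradient of the loss w.r.t. the input
# detach so the updated input is a plain leaf tensor again for the next iteration
features = (features - 0.001 * grad_features).detach()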

keras: unsupervised learning with external constraint

I have to train a network on unlabelled data of binary type (True/False), which sounds like unsupervised learning. This is what the normalised data look like:
array([[-0.05744527, -1.03575495, -0.1940105 , -1.15348956, -0.62664491,
-0.98484037],
[-0.05497629, -0.50935675, -0.19396862, -0.68990988, -0.10551919,
-0.72375012],
[-0.03275552, 0.31480204, -0.1834951 , 0.23724946, 0.15504367,
0.29810553],
...,
[-0.05744527, -0.68482282, -0.1940105 , -0.87534175, -0.23580062,
-0.98484037],
[-0.05744527, -1.50366446, -0.1940105 , -1.52435329, -1.14777063,
-0.98484037],
[-0.05744527, -1.26970971, -0.1940105 , -1.33892142, -0.88720777,
-0.98484037]])
However, I do have a constraint on the total number of True labels in my data. This means I cannot simply build a classical custom loss function in Keras taking (y_true, y_pred) arguments as usual: my external constraint is only on the predicted total of True and False, not on the individual labels.
My question is whether there is a somewhat "standard" approach to this kind of problems, and how that is implementable in Keras.
POSSIBLE SOLUTION
Should I assign y_true randomly as 0/1, have a network return y_pred as 1/0 with a sigmoid activation function, and then define my loss function as
sum_y_true = 500  # arbitrary constant known a priori

def loss_function(y_true, y_pred):
    loss = np.abs(y_pred.sum() - sum_y_true)
    return loss
In the end, I went with the following solution, which worked.
1) Define batches in your dataframe df with a batch_id column, so that in each batch Y_train is your identical "batch ground truth" (in my case, the total number of True labels in the batch). You can then pass these instances together to the network. This can be done with a generator:
def grouper(g, x, y):
    while True:
        for gr in g.unique():
            # this assigns indices to the entire set of values in g,
            # then subsects to all the rows in which g == gr
            indices = g == gr
            yield (x[indices], y[indices])

# train set
train_generator = grouper(df.loc[df['set'] == 'train', 'batch_id'], X_train, Y_train)
# validation set
val_generator = grouper(df.loc[df['set'] == 'val', 'batch_id'], X_val, Y_val)
2) Define a custom loss function to track how closely the total number of instances predicted as True matches the ground truth:
from keras import backend as K

def custom_delta(y_true, y_pred):
    loss = K.abs(K.mean(y_true) - K.sum(y_pred))
    return loss

def custom_wrapper():
    def custom_loss_function(y_true, y_pred):
        return custom_delta(y_true, y_pred)
    return custom_loss_function
Note that here
a) each y_true label is already the sum of the ground truth in our batch (because we don't have individual values), which is why y_true is not summed over;
b) K.mean is actually a bit of overkill for extracting a single scalar from this uniform tensor, in which all y_true values in each batch are identical; K.min or K.max would also work, but I haven't tested whether they are faster.
3) Use fit_generator instead of fit:
fmodel = Sequential()
# ...your layers...

# Create the loss function object using the wrapper function above
loss_ = custom_wrapper()
fmodel.compile(loss=loss_, optimizer='adam')

history1 = fmodel.fit_generator(train_generator, steps_per_epoch=total_batches,
                                validation_data=val_generator,
                                validation_steps=df.loc[df['set'] == 'val', 'batch_id'].nunique(),
                                epochs=20, verbose=2)
This way the problem is basically addressed as one of supervised learning, although without individual labels, which means that notions like true/false positive are meaningless here.
This approach not only gives me a y_pred that closely matches the totals I know per batch; it actually finds two groups (True/False) that occupy the expected different portions of parameter space.

How to get confusion matrix when using model.fit_generator

I am using model.fit_generator to train my binary (two-class) model because I feed input images directly from a folder. How can I get the confusion matrix (TP, TN, FP, FN) in this case? Usually I would use the confusion_matrix command of sklearn.metrics, which requires the predicted and actual labels, but here I have neither. Maybe I can compute the predicted labels with predict = model.predict_generator(validation_generator), but I don't know how my model takes the input labels from my images. The general structure of my input folder is:
train/
    class1/
        img1.jpg
        img2.jpg
        ........
    class2/
        IMG1.jpg
        IMG2.jpg
test/
    class1/
        img1.jpg
        img2.jpg
        ........
    class2/
        IMG1.jpg
        IMG2.jpg
        ........
and some blocks of my code are:
train_generator = train_datagen.flow_from_directory('train',
                                                    target_size=(50, 50), batch_size=batch_size,
                                                    class_mode='binary', color_mode='grayscale')

validation_generator = test_datagen.flow_from_directory('test',
                                                        target_size=(50, 50), batch_size=batch_size,
                                                        class_mode='binary', color_mode='grayscale')

model.fit_generator(
    train_generator, steps_per_epoch=250, epochs=40,
    validation_data=validation_generator,
    validation_steps=21)
So the above code automatically takes inputs from the two classes, but I don't know which it considers class 0 and which class 1.
I've managed it in the following way, using keras.utils.Sequence.
import numpy as np
from sklearn.metrics import confusion_matrix
from keras.utils import Sequence

class MySequence(Sequence):
    def __init__(self, *args, **kwargs):
        # initialize
        # see manual on implementing methods
        ...

    def __len__(self):
        return self.length

    def __getitem__(self, index):
        # return index-th complete batch
        ...

# create data generator
data_gen = MySequence(evaluation_set, batch_size=10)
n_batches = len(data_gen)

confusion_matrix(
    np.concatenate([np.argmax(data_gen[i][1], axis=1) for i in range(n_batches)]),
    np.argmax(m.predict_generator(data_gen, steps=n_batches), axis=1)
)
The implemented class returns batches of data as tuples, which avoids holding all of them in RAM at once. Note that the batching must be implemented in __getitem__, and this method must return the same batch for the same argument.
Unfortunately, this code iterates over the data twice: first to build the array of true labels from the returned batches, and then again when calling the model's predict method.
probabilities = model.predict_generator(generator=test_generator)
will give us the set of predicted probabilities.
y_true = test_generator.classes
will give us the true labels (note that this assumes the generator was created with shuffle=False, so that the order of the labels matches the order of the predictions).
Because this is a binary classification problem, you have to threshold the probabilities to obtain the predicted labels. To do that you can use
y_pred = probabilities > 0.5
Then we have true labels and predicted labels on the test dataset. So, the confusion matrix is given by
import matplotlib
from sklearn.metrics import confusion_matrix
from mlxtend.plotting import plot_confusion_matrix  # plot_confusion_matrix is assumed to come from mlxtend

font = {
    'family': 'Times New Roman',
    'size': 12
}
matplotlib.rc('font', **font)
mat = confusion_matrix(y_true, y_pred)
plot_confusion_matrix(conf_mat=mat, figsize=(8, 8), show_normed=False)
You can view the mapping from class names to class indices by calling the attribute class_indices on your train_generator or validation_generator objects, as in
train_generator.class_indices
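With the folder layout from the question, this returns a dictionary mapping each class folder name (sorted alphanumerically) to its integer label, for example:

print(train_generator.class_indices)
# {'class1': 0, 'class2': 1}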
