Pytorch weights not updating.. sometimes - pytorch

Not sure what causes this, but sometimes I start training my neural net and none of my weights update. This happens maybe 4 out of 5 times when I run my script; the other time, everything updates, trains, and predicts as expected. Does anyone have any idea why this happens? It started when I changed my loss function, if that's relevant.
Here's the gross part of my training loop; let me know of any other relevant code I should include.
def train(model, train_loader, test_loader, test_data, full_test, args, epochs, early_stop=5):
    t0 = time()
    optimizer = Adam(model.parameters(), lr=args.lr)
    lr_decay = lr_scheduler.ExponentialLR(optimizer, gamma=args.lr_decay)
    best_val_acc, best_mae = 0, 500
    for epoch in range(epochs):
        model.train()
        ti = time()
        training_loss = 0.0
        for i, (x, y) in enumerate(train_loader):
            x, y = Variable(x.cuda()), Variable(y.cuda())
            y_pred = model(x, y)
            loss = mae_loss(y, y_pred) + rmse_loss(y, y_pred)
            loss.backward()
            training_loss += loss.detach() * x.size(0)
            optimizer.step()
            optimizer.zero_grad()
        lr_decay.step()

I believe by far the most likely issue is that your loss function is returning something incorrect. Try printing the first few losses to check that they are reasonable and have the correct datatype and shape. If the losses look fine, another possible reason for the weights not updating is that the learning rate is too low relative to your losses, so the weights change by such a small amount that the update is either rounded off or simply not noticeable.
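As a quick sanity check (a sketch, reusing the model, optimizer, and loss terms from your code), you can snapshot the parameters around a single training step and see whether anything actually changes:

before = {name: p.detach().clone() for name, p in model.named_parameters()}

# One training step, as in the loop above
loss = mae_loss(y, y_pred) + rmse_loss(y, y_pred)
print(loss.item(), loss.dtype, loss.shape)   # should be a finite scalar
loss.backward()
optimizer.step()
optimizer.zero_grad()

# Compare each parameter to its snapshot
for name, p in model.named_parameters():
    delta = (p.detach() - before[name]).abs().max().item()
    print(f"{name}: max abs change = {delta:.3e}")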

Related

I am kinda new to PyTorch and am now struggling with a classification problem.

I built a very simple structure
class classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.classify = nn.Sequential(
            nn.Linear(166, 80),
            nn.Tanh(),
            nn.Linear(80, 40),
            nn.Tanh(),
            nn.Linear(40, 1),
            nn.Softmax()
        )

    def forward(self, x):
        pred = self.classify(x)
        return pred

model = classifier()
The loss function and optimizer are defined as
criteria = nn.BCEWithLogitsLoss()
iteration = 1000
learning_rate = 0.1
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
and here is the training and evaluation section
for epoch in range(iteration):
    model.train()
    y_pred = model(x_train)
    loss = criteria(y_pred, y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.inference_mode():
        test_pred = model(x_test)
        test_loss = criteria(test_pred, y_test)

    if epoch % 100 == 0:
        print(loss)
        print(test_loss)
I received the same loss values, and by debugging, I found that the weights were not being updated.
The problem is in the network architecture: you are using a Softmax layer on a single-valued output at the end. By the definition of the softmax function, for an output vector x and index i:
softmax(x_i) = exp(x_i) / Σ_j exp(x_j)
Here you only have a single-valued output, so the output of your neural network is always 1, irrespective of the inputs or the weights. To fix this, remove the Softmax layer at the end. An activation function like Sigmoid would be more appropriate, and in fact you are already applying it implicitly by using BCEWithLogitsLoss.
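A minimal sketch of the corrected architecture (keeping the rest of your training code unchanged) could look like this:

class classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.classify = nn.Sequential(
            nn.Linear(166, 80),
            nn.Tanh(),
            nn.Linear(80, 40),
            nn.Tanh(),
            nn.Linear(40, 1),   # raw logit output; no Softmax here
        )

    def forward(self, x):
        # BCEWithLogitsLoss applies the sigmoid internally, so the model
        # returns an unnormalized logit per sample.
        return self.classify(x)

model = classifier()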
The problem lies here
y_pred = model(x_train)
loss = criteria(y_pred,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
After the loss is calculated, you are clearing the gradients by calling optimizer.zero_grad(). The usual ordering is:
optimizer.zero_grad()
y_pred = model(x_train)
loss = criteria(y_pred,y_train)
loss.backward()
optimizer.step()

Pytorch - Repeating Loss

I am new to PyTorch and I found a problem when displaying the loss of my model.
Pytorch Adam Optimizer - Model Loss Figure
Pytorch SGD Optimizer - Model Loss Figure
As you can see, the loss seems to go up and down multiple times with a recurrent pattern (the pattern starts to repeat at the beginning of every epoch).
The full code can be found at: https://github.com/19valentin99/Kaggle/tree/main/Iris%20Flowers
in main_test.py (the commented-out lines are the ones I used to debug the code; the answer should be below).
When we just take the loss of the last element (or the loss over the whole epoch) we will see a smooth decrease in loss.
The reason your loss is smooth is that you are looking at the loss of the exact same batch on every iteration. Indeed, your train data loader isn't shuffling your instances:
train2 = DataLoader(flowers_data_train, batch_size=BATCH_SIZE)
This means the same batch will appear last on every epoch. That's all there is to it: this doesn't mean the learning is any different, it means you are looking at the loss of only a part of the complete dataset.
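If you do want a different batch to land last each epoch, a minimal sketch (assuming the same flowers_data_train dataset and BATCH_SIZE) is to enable shuffling:

from torch.utils.data import DataLoader

# Shuffle the training instances so batches differ between epochs
train2 = DataLoader(flowers_data_train, batch_size=BATCH_SIZE, shuffle=True)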
The difference between "not working" and "working" comes down to when the loss is recorded.
The idea is that, overall, the loss converges, but until it converges it jumps up and down.
While it jumps up and down, we may see a pattern if we sample too often. The pattern comes from the training data, since the data we train on is the same every epoch (in batches).
As a result:
For the not-working version: I was recording the loss after every batch, in every epoch.
For the working version: I was recording only the last loss of each epoch.
Pytorch Adam Optimizer - Model Loss (working)
Pytorch SGD Optimizer - Model Loss (working)
Furthermore, I will attach the code which generates the non working version:
loss_list = []
for epoch in range(EPOCHS):
    for idx, (x, y) in enumerate(train_load):
        x, y = x.to(device), y.to(device)

        # Compute error
        prediction = model(x)
        # print(prediction, y)
        loss = loss_fn(prediction, y)

        # Debugging: record the loss after every batch
        loss_list.append(loss.item())

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

plt.plot(loss_list)
plt.show()
The working code:
loss_list2 = np.zeros((EPOCHS,))
for epoch in range(EPOCHS):
    for batch, (x, y) in enumerate(train_load):
        x = x.to(device=device)
        y = y.to(device=device)

        y_pred = model(x)
        loss = loss_fn(y_pred, y)
        loss_list2[epoch] = loss.item()

        # Zero gradients
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

plt.plot(loss_list2)
plt.show()
In the end, I would like to mention that I know there are a couple of other threads out there on how to solve this problem (clip the gradients, drop the last batch, the model is too simple to capture the data), but what I discovered is that it wasn't actually a problem at all; it was a matter of when the loss is recorded.
I hope that this will help other people as well.

Pytorch quickstart calls model.eval() but not model.train()

In the PyTorch quickstart tutorial, the code calls model.eval() during evaluation/test but never calls model.train() during training.
According to this and the source, some modules like BatchNorm and Dropout need to know whether the model is in training or evaluation mode. The model in the tutorial does not use any such module, so it runs to convergence. Am I missing something, or does PyTorch's very first tutorial actually have a logical bug?
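For context, here is a small standalone illustration (separate from the tutorial code) of why the mode flag matters once such a module is present:

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()
print(drop(x))   # roughly half the entries zeroed, the rest scaled by 1/(1-p) = 2

drop.eval()
print(drop(x))   # identity: all ones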
Training:
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")
You can see there is no model.train() in the above code.
Testing:
def test(dataloader, model):
    size = len(dataloader.dataset)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= size
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
On the second line of the function body, there is a model.eval() call.
Training loop:
epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model)
print("Done!")
This loop calls the train() and test() functions without ever calling model.train(). So after the first call to test(), the model is permanently in "evaluation" mode. If we add a BatchNorm layer to the model, we'll be on our way to a hard-to-find bug.
Main question:
Is it good practice to always call model.train() during training and model.eval() during evaluation/test?
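For illustration, adding the mode switch to the tutorial's train function would look roughly like this (a sketch, not the tutorial's actual code):

def train(dataloader, model, loss_fn, optimizer):
    model.train()   # undo a previous model.eval() so Dropout/BatchNorm behave as in training
    for X, y in dataloader:
        X, y = X.to(device), y.to(device)
        loss = loss_fn(model(X), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()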

Pytorch: Custom Loss involving Norm of End-to-End Jacobian

Cross posting from Pytorch discussion boards
I want to train a network using a modified loss function that has both a typical classification loss (e.g. nn.CrossEntropyLoss) and a penalty on the Frobenius norm of the end-to-end Jacobian (i.e., the norm of \nabla_x f(x), where f(x) is the output of the network).
I've implemented a model that learns successfully with nn.CrossEntropyLoss alone. However, when I try adding the second loss term (by doing two backward passes), my training loop runs but the model never learns. Furthermore, if I calculate the end-to-end Jacobian but don't include it in the loss function, the model also never learns. At a high level, my code does the following:
1. Forward pass to get predicted classes, yhat, from inputs x
2. Call yhat.backward(torch.ones(appropriate shape), retain_graph=True)
3. Jacobian norm = x.grad.data.norm(2)
4. Set loss equal to classification loss + scalar coefficient * Jacobian norm
5. Run loss.backward()
I suspect that I’m misunderstanding how backward() works when run twice, but I haven’t been able to find any good resources to clarify this.
Too much is required to produce a working example, so I’ve tried to extract the relevant code:
def train_model(model, train_dataloader, optimizer, loss_fn, device=None):
    if device is None:
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.train()
    train_loss = 0
    correct = 0
    for batch_idx, (batch_input, batch_target) in enumerate(train_dataloader):
        batch_input, batch_target = batch_input.to(device), batch_target.to(device)
        optimizer.zero_grad()
        batch_input.requires_grad_(True)
        model_batch_output = model(batch_input)
        loss = loss_fn(model_output=model_batch_output, model_input=batch_input, model=model, target=batch_target)
        train_loss += loss.item()  # sum up batch loss
        loss.backward()
        optimizer.step()
and
def end_to_end_jacobian_loss(model_output, model_input):
    model_output.backward(
        torch.ones(*model_output.shape),
        retain_graph=True)
    jacobian = model_input.grad.data
    jacobian_norm = jacobian.norm(2)
    return jacobian_norm
Edit 1: I swapped my previous .backward() implementation for autograd.grad, and it apparently works! What's the difference?
def end_to_end_jacobian_loss(model_output, model_input):
    jacobian = autograd.grad(
        outputs=model_output['penultimate_layer'],
        inputs=model_input,
        grad_outputs=torch.ones(*model_output['penultimate_layer'].shape),
        retain_graph=True,
        only_inputs=True)[0]
    jacobian_norm = jacobian.norm(2)
    return jacobian_norm
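For comparison, here is a self-contained sketch (not my actual model code) of a differentiable Jacobian-norm penalty; the key ingredient is create_graph=True, which keeps the vector-Jacobian product in the graph so the penalty term itself can be backpropagated into the weights:

import torch
import torch.nn as nn

# Toy model and batch, purely for illustration
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
ce_loss = nn.CrossEntropyLoss()
x = torch.randn(8, 10, requires_grad=True)
target = torch.randint(0, 3, (8,))

y = model(x)
grads = torch.autograd.grad(
    outputs=y,
    inputs=x,
    grad_outputs=torch.ones_like(y),   # vector-Jacobian product with an all-ones vector, as above
    create_graph=True,                 # keep the graph so gradients of the penalty reach the weights
)[0]
penalty = grads.norm(2)
loss = ce_loss(y, target) + 0.1 * penalty
loss.backward()                        # backpropagates both the classification loss and the penalty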

Val loss behaves strange while using custom training loop in tensorflow 2.0

I'm using a VGG16 model written in TF 2.0 to train on my own dataset. Some BatchNormalization layers are included in the model, and the "training" argument is set to True during training time and False during validation time, as described in many tutorials.
The train_loss decreases to a certain level during training, as expected. However, the val_loss behaves really strangely. I checked the output of the model after training and found that if I set the training argument to True, the output is quite correct, but if I set it to False, the result is not correct at all.
According to the tutorials on the TensorFlow website, when training is set to False, the model normalizes its inputs using the mean and variance of the moving statistics learned during training, but that does not seem to be happening here. Am I missing something?
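As a standalone illustration (separate from my model code), the two modes of a BatchNormalization layer can be compared directly:

import tensorflow as tf

bn = tf.keras.layers.BatchNormalization()
x = tf.random.normal((32, 4)) * 5.0 + 3.0

y_train = bn(x, training=True)    # normalizes with this batch's mean/variance and updates the moving statistics
y_infer = bn(x, training=False)   # normalizes with the (still mostly default) moving mean/variance

print(float(tf.math.reduce_std(y_train)))   # close to 1
print(float(tf.math.reduce_std(y_infer)))   # still close to the raw input's spread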
I've provided the training and validation code below.
def train():
    logging.basicConfig(level=logging.INFO)
    tdataset = tf.data.Dataset.from_tensor_slices((train_img_list[:200], train_label_list[:200]))
    tdataset = tdataset.map(parse_function, 3).shuffle(buffer_size=200).batch(batch_size).repeat(repeat_times)
    vdataset = tf.data.Dataset.from_tensor_slices((val_img_list[:100], val_label_list[:100]))
    vdataset = vdataset.map(parse_function, 3).batch(batch_size)

    ### Vgg model
    model = VGG_PR(num_classes=num_label)
    logging.info('Model loaded')

    start_epoch = 0
    latest_ckpt = tf.train.latest_checkpoint(os.path.dirname(ckpt_path))
    if latest_ckpt:
        start_epoch = int(latest_ckpt.split('-')[1].split('.')[0])
        model.load_weights(latest_ckpt)
        logging.info('model resumed from: {}, start at epoch: {}'.format(latest_ckpt, start_epoch))
    else:
        logging.info('training from scratch since weights no there')

    ######## training loop ########
    loss_object = tf.keras.losses.MeanSquaredError()
    val_loss_object = tf.keras.losses.MeanSquaredError()
    optimizer = tf.keras.optimizers.Adam(learning_rate=initial_lr)
    train_loss = tf.metrics.Mean(name='train_loss')
    val_loss = tf.metrics.Mean(name='val_loss')

    writer = tf.summary.create_file_writer(log_path.format(case_num))
    with writer.as_default():
        for epoch in range(start_epoch, total_epoch):
            print('start training')
            try:
                for batch, data in enumerate(tdataset):
                    images, labels = data
                    with tf.GradientTape() as tape:
                        pred = model(images, training=True)
                        if len(pred.shape) == 2:
                            pred = tf.reshape(pred, [-1, 1, 1, num_label])
                        loss = loss_object(pred, labels)
                    gradients = tape.gradient(loss, model.trainable_variables)
                    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
                    if batch % 20 == 0:
                        logging.info('Epoch: {}, iter: {}, loss:{}'.format(epoch, batch, loss.numpy()))
                        tf.summary.scalar('train_loss', loss.numpy(), step=epoch*1250*repeat_times+batch)  # the tdataset has been repeated 5 times..
                        tf.summary.text('Zernike_coe_pred', tf.as_string(tf.squeeze(pred)), step=epoch*1250*repeat_times+batch)
                        tf.summary.text('Zernike_coe_gt', tf.as_string(tf.squeeze(labels)), step=epoch*1250*repeat_times+batch)
                        writer.flush()
                    train_loss(loss)
                model.save_weights(ckpt_path.format(epoch=epoch))
            except KeyboardInterrupt:
                logging.info('interrupted.')
                model.save_weights(ckpt_path.format(epoch=epoch))
                logging.info('model saved into {}'.format(ckpt_path.format(epoch=epoch)))
                exit(0)

            # validation step
            for batch, data in enumerate(vdataset):
                images, labels = data
                val_pred = model(images, training=False)
                if len(val_pred.shape) == 2:
                    val_pred = tf.reshape(val_pred, [-1, 1, 1, num_label])
                v_loss = val_loss_object(val_pred, labels)
                val_loss(v_loss)
            logging.info('Epoch: {}, average train_loss:{}, val_loss: {}'.format(epoch, train_loss.result(), val_loss.result()))
            tf.summary.scalar('val_loss', val_loss.result(), step=epoch)
            writer.flush()
            train_loss.reset_states()
            val_loss.reset_states()
            model.save_weights(ckpt_path.format(epoch=epoch))
The train loss reduces to a very small value: the ground-truth labels are in the range [0, 1] and the average train loss can be around 0.007, but the val loss is much higher than this. The output of the model tends to be close to 0 if I set training to False.
Updated on Nov. 6th:
I have found an interesting thing: if I use tf.function to decorate my model in its call method, the val loss turns out to be correct, but I'm not sure what has happened.
Mentioning the answer for the benefit of the community: the issue is resolved, i.e., the val loss turns out to be correct if tf.function is used to decorate the model in its call method.
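A minimal sketch of that fix (the layer stack below is illustrative; only the @tf.function decoration on call reflects the actual resolution):

import tensorflow as tf

class VGG_PR(tf.keras.Model):
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = tf.keras.Sequential([
            tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu'),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(num_classes),
        ])

    @tf.function
    def call(self, x, training=False):
        # Forward the training flag so BatchNormalization switches between
        # batch statistics (training=True) and moving statistics (training=False)
        return self.backbone(x, training=training)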
