Precision, recall, F1 score with sklearn on PyTorch - scikit-learn

I've been looking through samples but am unable to understand how to integrate the precision, recall and f1 metrics for my model. My code is as follows:
for epoch in range(num_epochs):
    # Calculate Accuracy (stack tutorial no n_total)
    n_correct = 0
    n_total = 0
    for i, (words, labels) in enumerate(train_loader):
        words = words.to(device)
        labels = labels.to(dtype=torch.long).to(device)
        # Forward pass
        outputs = model(words)
        loss = criterion(outputs, labels)
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # feedforward tutorial solution
        _, predicted = torch.max(outputs, 1)
        n_correct += (predicted == labels).sum().item()
        n_total += labels.shape[0]
    accuracy = 100 * n_correct / n_total
    # Push to matplotlib
    train_losses.append(loss.item())
    train_epochs.append(epoch)
    train_acc.append(accuracy)
    # Loss and Accuracy
    if (epoch + 1) % 10 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.2f}, Acc: {accuracy:.2f}')

Since you have the predicted and the labels variables, you can aggregate them during the epoch loop and convert them to numpy arrays to calculate the required metrics.
At the beginning of each epoch, initialize two empty lists: one for predicted labels and one for ground-truth labels.
for epoch in range(num_epochs):
    predicted_labels, ground_truth_labels = [], []
    ...
Then, keep appending the respective entries to each list during the epoch:
    ...
    _, predicted = torch.max(outputs, 1)
    n_correct += (predicted == labels).sum().item()
    # appending
    predicted_labels.append(predicted.cpu().detach().numpy())
    ground_truth_labels.append(labels.cpu().detach().numpy())
    ...
Then, at the end of the epoch, you can use precision_recall_fscore_support with predicted_labels and ground_truth_labels as inputs (see the sketch after the notes below).
Notes:
You'll probably have to flatten the above two lists, since each entry is a per-batch array (np.concatenate handles this).
Read about torch.no_grad(); applying it while computing metrics is good practice.
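A minimal sketch of what that epoch-end calculation could look like, reusing the names from the loop above (the average='macro' choice is an assumption; pick whatever suits your task):

import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# At the end of each epoch, still inside `for epoch in range(num_epochs):`
# Flatten the per-batch arrays collected during the epoch.
y_pred = np.concatenate(predicted_labels)
y_true = np.concatenate(ground_truth_labels)

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average='macro')
print(f'Epoch [{epoch+1}/{num_epochs}], '
      f'Precision: {precision:.2f}, Recall: {recall:.2f}, F1: {f1:.2f}')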

Related

How to understand a periodicity in the training loss using a pre-trained model of PyTorch?

I'm using a pre-trained model from PyTorch (ResNet 18/34/50) in order to classify images. During training, a weird periodicity appears in the training loss, as you can see in the image below. Did somebody already have a similar issue? In order to deal with overfitting, I'm using data augmentation in the preprocessing.
When using SGD as an optimizer with the following parameters, we obtain this sort of graph:
criterion: NLLLoss()
learning rate: 0.0001
epoch: 40
print every 40 iterations
We also tried Adam and AdaBound as optimizers, but the same periodicity was observed.
Thanks in advance for your answer!
Here is the code:
def train_classifier():
    start = 0
    stop = 0
    start = timeit.default_timer()
    epochs = 40
    steps = 0
    print_every = 40
    model.to('cuda')
    epo = []
    train = []
    valid = []
    acc_valid = []
    for e in range(epochs):
        print('Currently running epoch', e, ':')
        model.train()
        running_loss = 0
        for images, labels in iter(train_loader):
            steps += 1
            images, labels = images.to('cuda'), labels.to('cuda')
            optimizer.zero_grad()
            output = model.forward(images)
            loss = criterion(output, labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            if steps % print_every == 0:
                model.eval()
                # Turn off gradients for validation, saves memory and computations
                with torch.no_grad():
                    validation_loss, accuracy = validation(model, val_loader, criterion)
                print("Epoch: {}/{}.. ".format(e+1, epochs),
                      "Training Loss: {:.3f}.. ".format(running_loss/print_every),
                      "Validation Loss: {:.3f}.. ".format(validation_loss/len(val_loader)),
                      "Validation Accuracy: {:.3f}".format(accuracy/len(val_loader)))
                stop = timeit.default_timer()
                print('Time: ', stop - start)
                acc_valid.append(accuracy/len(val_loader))
                train.append(running_loss/print_every)
                valid.append(validation_loss/len(val_loader))
                epo.append(e+1)
                running_loss = 0
                model.train()
    return train, epo, valid, acc_valid

CNN Training Runs without error, but it does not display the results

I am new to pytorch, and I am trying to train my model (CNN), using the following code:
The program runs fine, but it does not display the Epoch/Step/Loss/Accuracy part:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}, Accuracy: {:.2f}%'
as if the condition (i+1) % 100 == 0 is never true.
Training part:
iter = 0
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(dataloaders['train']):
        images = Variable(images)
        labels = Variable(labels)
        # Clear the gradients
        optimizer.zero_grad()
        # Forward propagation
        outputs = model(images)
        # Calculating loss with softmax to obtain cross entropy loss
        loss = criterion(outputs, labels)
        # Backward prop
        loss.backward()
        # Updating gradients
        optimizer.step()
        iter += 1
        # Total number of labels
        total = labels.size(0)
        # Obtaining predictions from max value
        _, predicted = torch.max(outputs.data, 1)
        # Calculate the number of correct answers
        correct = (predicted == labels).sum().item()
        # Print loss and accuracy
        if (i+1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}, Accuracy: {:.2f}%'
                  .format(epoch + 1, num_epochs, i + 1, len(dataloaders['train']), loss.item(),
                          (correct / total) * 100))
Full Code:
https://pastebin.com/dshNmhRL
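Not part of the original post, but one quick way to test that guess: print how many batches the train loader yields per epoch. If it is fewer than 100, (i+1) % 100 == 0 can never be true, so nothing is printed. A small diagnostic sketch, assuming the dataloaders dict from the snippet above:

# Diagnostic only: count the steps in one epoch.
num_batches = len(dataloaders['train'])
print(f'Batches per epoch: {num_batches}')

# If num_batches < 100, lower the printing interval, e.g. print on the
# last batch of every epoch instead:
# if (i + 1) == num_batches:
#     print(...)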

Implementing Batch for Image Segmentation

I wrote a Python 3.5 script for doing street segmentation. Since I'm new to image segmentation, I did not use the predefined dataloaders from PyTorch; instead I wrote them myself (for better understanding). Until now I have only used a batch size of 1. Now I want to generalize this for arbitrary batch sizes.
This is a snippet of my Dataloader:
def augment_data(batch_size):
    # [...] defining some paths and data transformation (including ToTensor() function)
    # The images are named by numbers (frame numbers), this allows me to find the correct label image for a given input image.
    all_input_image_paths = {int(elem.split('\\')[-1].split('.')[0]): elem for idx, elem in enumerate(glob.glob(input_dir + "*"))}
    all_label_image_paths = {int(elem.split('\\')[-1].split('.')[0]): elem for idx, elem in enumerate(glob.glob(label_dir + "*"))}
    dataloader = {"train": [], "val": []}
    all_samples = []
    img_counter = 0
    for key, value in all_input_image_paths.items():
        input_img = Image.open(all_input_image_paths[key])
        label_img = Image.open(all_label_image_paths[key])
        # Here I use my own augmentation function which crops the input and label at the same position and does other things.
        # We get a list of new augmented data
        augmented_images = generate_augmented_images(input_img, label_img)
        for elem in augmented_images:
            input_as_tensor = data_transforms['norm'](elem[0])
            label_as_tensor = data_transforms['val'](elem[1])
            input_as_tensor.unsqueeze_(0)
            label_as_tensor.unsqueeze_(0)
            is_training_data = random.uniform(0.0, 1.0)
            if is_training_data <= 0.7:
                dataloader["train"].append([input_as_tensor, label_as_tensor])
            else:
                dataloader["val"].append([input_as_tensor, label_as_tensor])
            img_counter += 1
    shuffle(dataloader["train"])
    shuffle(dataloader["val"])
    dataloader_batched = {"train": [], "val": []}
    # Here I group my data to a given batch size
    for elem in dataloader["train"]:
        batch = []
        for i in range(batch_size):
            batch.append(elem)
        dataloader_batched["train"].append(batch)
    for elem in dataloader["val"]:
        batch = []
        for i in range(batch_size):
            batch.append(elem)
        dataloader_batched["val"].append(batch)
    return dataloader_batched
This is a snippet of my training method with batch size 1:
while epoch <= num_epochs:
    # Each epoch has a training and validation phase
    for phase in ['train', 'val']:
        if phase == 'train':
            scheduler.step(3)
            model.train()  # Set model to training mode
        else:
            model.eval()   # Set model to evaluate mode
        running_loss = 0.0
        counter = 0
        # Iterate over data.
        for inputs, labels in dataloaders[phase]:
            counter += 1
            max_num = len(dataloaders[phase])
            inputs = inputs.to(device)
            labels = labels.to(device)
            # zero the parameter gradients
            optimizer.zero_grad()
            # forward
            # track history if only in train
            with torch.set_grad_enabled(phase == 'train'):
                outputs = model(inputs)
                loss = criterion(outputs, labels)
                # backward + optimize only if in training phase
                if phase == 'train':
                    loss.backward()
                    optimizer.step()
            # statistics
            running_loss += loss.item() * inputs.size(0)
        epoch_loss = running_loss / dataset_sizes[phase]
If I execute this, I of course get the error:
for inputs, labels in dataloaders[phase]:
ValueError: not enough values to unpack (expected 2, got 1)
I understand why, because now I have a list of images and not just a single input and label image as before. So I guessed I need a second for loop which iterates over these batches. So I tried this:
# Iterate over data.
for elem in dataloaders[phase]:
    for inputs, labels in elem:
        counter += 1
        max_num = len(dataloaders[phase])
        inputs = inputs.to(device)
        labels = labels.to(device)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward
        # track history if only in train
        with torch.set_grad_enabled(phase == 'train'):
            outputs = model(inputs)
            # _, preds = torch.max(outputs, 1)
            loss = criterion(outputs, labels)
            # backward + optimize only if in training phase
            if phase == 'train':
                loss.backward()
                optimizer.step()
But to me it looks like the optimization step (back-prop) is only applied to the last image of the batch. Is that true? And if so, how can I fix this? I guess if I indent the with-block, then I again get a batch-size-1 optimization.
Thanks in advance
But for me it looks like the optimization step (back-prop) is only applied on the last image of the batch.
It should not apply only to the last image; it should apply to the whole batch.
If you set bs=2, it should apply to the batch of two images.
The optimization step actually updates the params of your network. Backprop is a fancy name for the PyTorch autograd system, which computes the first-order gradients.
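To actually get a batch-sized update with the hand-rolled loader above, the per-sample tensors (which already carry a leading singleton dimension from unsqueeze_(0)) can be concatenated into one batch tensor before the forward pass. A rough sketch under that assumption, reusing the names from the training snippet, not the original poster's code:

import torch

for batch in dataloaders[phase]:
    # batch is a list of [input_as_tensor, label_as_tensor] pairs,
    # each with a leading batch dimension of 1 from unsqueeze_(0).
    inputs = torch.cat([pair[0] for pair in batch], dim=0).to(device)
    labels = torch.cat([pair[1] for pair in batch], dim=0).to(device)

    optimizer.zero_grad()
    with torch.set_grad_enabled(phase == 'train'):
        outputs = model(inputs)
        loss = criterion(outputs, labels)   # one loss over the whole batch
        if phase == 'train':
            loss.backward()                 # gradients cover all images in the batch
            optimizer.step()                # a single update per batch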

Loss Not Converging in SGD

I am using a PyTorch linear regression loss, and my SGD loss is not converging.
Use case -
I am using the MNIST dataset and implementing an image classifier to classify the handwritten digits 0 and 1.
Then I use a logistic regression model:
model = nn.Linear(input_size,num_classes)
I created a custom loss function.
While training the model, I convert the labels from 0,1 to -1,1, then determine the loss.
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Reshape images to (batch_size, input_size)
        images = images.reshape(-1, 28*28)
        # Convert labels from 0,1 to -1,1
        labels = Variable(2*(labels.float()-0.5))
        # Forward pass
        outputs = model(images)
        # we need maximum value of two class prediction
        oneout = torch.max(outputs.data, 1)[0]
        loss = loss_criteria(oneout, labels)
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if (i+1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch+1, num_epochs, i+1, total_step, loss.item()))
Loss output: 0.8445, 0.6883, 0.7976, 0.8133, 0.8289, 0.7195. As you can see, the loss is not converging.
Expected result:
0.8445, 0.8289, 0.8133, 0.7976, 0.7195, 0.6883, ... all the way to zero.

Why is my TensorBoard showing a discontinuous output?

I am running a neural network with logging of training accuracy, validation accuracy, and validation loss. Here is my code snippet.
def show_progress(epoch, feed_dict_train, feed_dict_validate, val_loss):
    acc = session.run(accuracy, feed_dict=feed_dict_train)
    val_acc = session.run(accuracy, feed_dict=feed_dict_validate)
    msg = "Training Epoch {0} --- Training Accuracy: {1:>6.1%}, Validation Accuracy: {2:>6.1%}, Validation Loss: {3:.3f}"
    print(msg.format(epoch + 1, acc, val_acc, val_loss))
    return acc, val_acc

total_iterations = 0
#writer = tf.summary.FileWriter(options.tensorboard, session)
saver = tf.train.Saver()

def train(num_iteration):
    global total_iterations
    writer = tf.summary.FileWriter(options.tensorboard, session.graph)
    #global writer
    for i in range(total_iterations,
                   total_iterations + num_iteration):
        x_batch, y_true_batch, _, cls_batch = data.train.next_batch(batch_size)
        x_valid_batch, y_valid_batch, _, valid_cls_batch = data.valid.next_batch(batch_size)
        feed_dict_tr = {x: x_batch,
                        y_true: y_true_batch}
        feed_dict_val = {x: x_valid_batch,
                         y_true: y_valid_batch}
        session.run(optimizer, feed_dict=feed_dict_tr)
        if i % 10 == 0:
            val_loss = session.run(cost, feed_dict=feed_dict_val)
            epoch = int(i / 10)
            accu, valid_accu = show_progress(epoch, feed_dict_tr, feed_dict_val, val_loss)
            # getting values for visualising inside the tensorboard
            tf.summary.scalar("training_accuracy", accu)
            tf.summary.scalar("Validation_accuracy", valid_accu)
            tf.summary.scalar("Validation_loss", val_loss)
            #tf.summary.scalar("epoch", epoch)
            # merging all the values (serializing)
            merged = tf.summary.merge_all()
            summary = session.run(merged)
            # adding them to the events directory
            writer.add_summary(summary, epoch)
            saver.save(session, options.save)
    total_iterations += num_iteration

train(num_iteration=10)
Now in the TensorBoard output, for each epoch I get the accuracy, validation accuracy, and validation loss as separate plots with a single point each.
For each epoch these three plots appear again with another point.
I want continuous points for these three plots so that they form line graphs.
Each of your calls to tf.summary.scalar() creates a node in the computation graph. Specifically, in your code, the calls are inside the training loop, and therefore the metrics at different epochs get written to different plots.
tf.summary.scalar("training_accuracy", accu)
tf.summary.scalar("Validation_accuracy", valid_accu)
tf.summary.scalar("Validation_loss", val_loss)
What you can do is define the summary ops before the loop with placeholders. Then, in the eval loop, you can feed these tensors with the real values.
# Define a placeholder and wire it to the summary op.
accu_tensor = tf.placeholder(tf.float32)
tf.summary.scalar("training_accuracy", accu_tensor)
summary_op = tf.summary.merge_all()

# Create a session after defining ops.
sess = tf.Session()
writer = tf.summary.FileWriter(<some-directory>, sess.graph)

for i in range(total_iterations,
               total_iterations + num_iteration):
    # run training ops to get values for accu
    # ...
    # run the summary op with a feed_dict to feed the value.
    summaries = sess.run(summary_op, feed_dict={accu_tensor: accu})
    writer.add_summary(summaries, epoch)
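Extending the answer's pattern to all three metrics from the question just means one placeholder per metric wired into the same merged op; a rough TF 1.x sketch, reusing the question's variable names (accu, valid_accu, val_loss) as assumptions:

import tensorflow as tf

# One placeholder per metric, defined once before the training loop.
accu_tensor = tf.placeholder(tf.float32)
valid_accu_tensor = tf.placeholder(tf.float32)
val_loss_tensor = tf.placeholder(tf.float32)
tf.summary.scalar("training_accuracy", accu_tensor)
tf.summary.scalar("Validation_accuracy", valid_accu_tensor)
tf.summary.scalar("Validation_loss", val_loss_tensor)
summary_op = tf.summary.merge_all()

# Inside the `if i % 10 == 0:` block, after computing accu, valid_accu, val_loss:
summaries = session.run(summary_op, feed_dict={
    accu_tensor: accu,
    valid_accu_tensor: valid_accu,
    val_loss_tensor: val_loss,
})
writer.add_summary(summaries, epoch)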
