I am training a U-Net model on 238 satellite images.
My val_loss does not decrease below 0.3, despite the different architectures I have tried:
Conv2D(8-16-32-64-128-64-32-16-8)
Conv2D(16-32-64-128-256-128-64-32-16)
Conv2D(32-64-128-256-512-256-128-64-32)
activation function = relu
sigmoid (output layer)
validation_split=0.10, batch_size=10, epochs=30
loss='binary_crossentropy'
optimizers.Adam(learning_rate=0.001) -- I also tried 0.01 and 0.0001
If you have a lead, I'm interested.
Update: I now have 968 images.
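For reference, here is a minimal sketch of the setup described above (Keras/TensorFlow assumed; the double-convolution blocks, input shape, and skip-connection wiring are my assumptions, not the exact code used):

from tensorflow.keras import layers, models, optimizers

def conv_block(x, filters):
    # two 3x3 convolutions with relu, as in a standard U-Net stage
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return x

def build_unet(input_shape=(256, 256, 3), filters=(16, 32, 64, 128, 256)):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    for f in filters[:-1]:                          # encoder: 16-32-64-128
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D()(x)
    x = conv_block(x, filters[-1])                  # bottleneck: 256
    for f, skip in zip(reversed(filters[:-1]), reversed(skips)):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding='same')(x)
        x = layers.Concatenate()([x, skip])         # skip connection
        x = conv_block(x, f)                        # decoder: 128-64-32-16
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(x)   # binary mask
    return models.Model(inputs, outputs)

model = build_unet()
model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss='binary_crossentropy')
# model.fit(images, masks, validation_split=0.10, batch_size=10, epochs=30)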
I've been training an image classification pipeline: first object detection, then image classification on the detected images. I have 87 custom classes in my data (not ImageNet classes) and just over 7000 images altogether (around 60 images per class). I am happy with my object detection code and I think it works quite well; however, for classification I have been using ResNet and AlexNet. I have tried AlexNet, ResNet18, ResNet50 and ResNet101 for training, but I am getting very low testing accuracies (around 10%), while my training accuracies are high for all models. I've also attempted regularisation and changing the learning rates, but I am not getting the higher accuracies (>80%) that I require. I wonder if there is a bug in my code, although I haven't been able to figure it out.
Here is my training code. I have also processed the images in the way that PyTorch pretrained models expect:
import torch
import torch.nn as nn
import torch.optim as optim
from typing import Callable
import numpy as np
EPOCHS=100
resnet = torch.hub.load('pytorch/vision:v0.10.0', 'resnet50', pretrained=True)  # load ImageNet-pretrained weights
resnet.eval()
resnet.fc = nn.Linear(2048, 87)
res_loss = nn.CrossEntropyLoss()
res_optimiser = optim.SGD(resnet.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-5)
def train_model(model, loss_fn, optimiser, modelsavepath):
    train_acc = 0
    for j in range(EPOCHS):
        running_loss = 0.0
        correct = 0
        total = 0
        for i, data in enumerate(training_generator, 0):
            model.train()
            inputs, labels, paths = data
            total += labels.size(0)  # count samples seen this epoch
            optimiser.zero_grad()    # clear gradients from the previous step
            outputs = model(inputs)
            _, predicted = torch.max(outputs, 1)
            correct += (predicted == labels).sum().item()  # count correct predictions in the batch
            loss = loss_fn(outputs, labels)
            loss.backward()
            optimiser.step()
            running_loss += loss.item()
        train_loss = running_loss / len(training_generator)
        train_acc = 100.0 * correct / total
        print("Epoch:{}/{} AVG Training Loss:{:.3f} AVG Training Acc {:.2f}% ".format(j + 1, EPOCHS, train_loss, train_acc))
    torch.save(model, modelsavepath)

train_model(resnet, res_loss, res_optimiser, 'resnet.pth')
Here is the testing code used for a single image; it is part of a class:
self.model.eval()
outputs = self.model(img[None, ...]) #models expect batches, so give it a singleton batch
scores, predictions = torch.max(outputs, 1)
predictions = predictions.numpy()[0]
possible_scores= np.argmax(scores.detach().numpy())
Is there a bug in my code, either testing or training, or is my model just overfitting? Additionally, is there a better image classification model that I could try?
Your dataset is very small, so you're most likely overfitting. Try:
- decrease learning rate (try 0.001, 0.0001, 0.00001)
- increase weight_decay (try 1e-4, 1e-3, 1e-2)
- if you don't already, use image augmentations (at least the default ones, like random crop and flip)
Watch train/test loss curves when finetuning your model and stop training as soon as you see test accuracy going down while train accuracy goes up.
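As an illustration, an augmentation pipeline along those lines could look like this (torchvision assumed; the normalization values are the standard ImageNet statistics, and this is a sketch, not the asker's code):

import torchvision.transforms as T

# Training transforms: random crop and flip as basic augmentations.
train_transforms = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics expected
                std=[0.229, 0.224, 0.225]),   # by pretrained torchvision models
])

# Validation/test transforms: deterministic resize and crop, no augmentation.
eval_transforms = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])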
I'm using the Kaggle Animal-10 dataset to experiment with transfer learning using FastAI and Keras.
The base model is ResNet-50.
With FastAI I'm able to get an accuracy of 95% in 3 epochs:
learn.fine_tune(3, base_lr=1e-2, cbs=[ShowGraphCallback()])
I believe it only trains the top layers.
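For context, the FastAI side is roughly the following (a sketch only; the data-loading call, path, image size, and batch size are my assumptions, not the code actually used):

from fastai.vision.all import *

# Assumed data loading from an image folder; path, split, and sizes are placeholders.
dls = ImageDataLoaders.from_folder('animals10/', valid_pct=0.2,
                                   item_tfms=Resize(224), bs=32)
learn = vision_learner(dls, resnet50, metrics=accuracy)
learn.fine_tune(3, base_lr=1e-2, cbs=[ShowGraphCallback()])  # the call quoted above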
With Keras
Only if I train the complete ResNet am I able to achieve an accuracy of 96%.
If I use the code below for transfer learning, I'm able to reach at most 40%:
from tensorflow.keras import Sequential, layers
from tensorflow.keras.applications import ResNet50

num_classes = 10
# number of layers to retrain from the previous model
fine_tune = 33  # conv5 block
model = Sequential()
base_layer = ResNet50(include_top=False, pooling='avg', weights="imagenet")
# base_layer.trainable = False
# make only the last few layers trainable; freeze everything before them
for layer in base_layer.layers[:-fine_tune]:
    layer.trainable = False
model.add(base_layer)
model.add(layers.Flatten())
# model.add(layers.BatchNormalization())
# model.add(layers.Dense(2048, activation='relu'))
# model.add(layers.Dropout(rate=0.2))
model.add(layers.Dense(1024, activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.Dense(num_classes, activation='softmax'))
I assume the cause is the one described in "Transfer learning with Keras, validation accuracy does not improve from outset (beyond naive baseline) while train accuracy improves",
and that is the reason I'm now re-training the complete conv5 block of ResNet; still, it doesn't add any value.
I am trying to create a model to predict the art style of a painting. To do so I am using the dataset that Kaggle provides for their competition named Painter by Numbers. Though there are 137 art styles in the dataset, I am using only three of them: Impressionism, Expressionism, and Surrealism. I have taken 3000 images from each class to train the model. Moreover, I have used 300 images from each class, totaling 900 images, to validate the training.
I have planned to use a pre-trained VGGNet as the bottom layer of my model. I have trained the model on Google Colab. Now the issue is that as the model starts to learn, the loss keeps increasing and the validation accuracy stays near 0.33, which is not pleasant; random guessing would also give this accuracy.
I created a model with a base layer of pre-trained VGGNet. I added some fully connected layers: 1024 neurons in each of the first two layers, 512 neurons in the third layer, and 3 neurons in the last layer. The optimizer I used was SGD with a learning rate of 0.01, decay of 1e-6, and momentum of 0.9. My loss function is "categorical_crossentropy". Moreover, the input image shape was (100, 100, 3).
For training, I set samples_per_epoch to 100 and the number of epochs to 30. Below I have provided all the code.
from keras.applications.vgg16 import VGG16
from keras.layers import Input, Flatten, Dense
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers

model_vgg16_conv = VGG16(weights='imagenet', include_top=False)
input = Input(shape=(100,100,3), name='image_input')
output_vgg16_conv = model_vgg16_conv(input)
x = Flatten(name='flatten')(output_vgg16_conv)
x = Dense( 1024, activation='relu', name='fc1')(x)
x = Dense( 1024, activation='relu', name='fc2')(x)
x = Dense( 512, activation='relu', name='fc3')(x)
x = Dense( 3, activation='softmax', name='predictions')(x)
my_model = Model(input=input, output=x)
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
my_model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.1, zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory(train_root, target_size=(100,100), batch_size=32, class_mode='categorical')
test_set = test_datagen.flow_from_directory(test_root, target_size=(100,100), batch_size=32, class_mode='categorical')
my_model.fit_generator(training_set, samples_per_epoch=100, nb_epoch=30, validation_data=test_set, nb_val_samples=300)
This generates a low validation accuracy and an ever-increasing loss value; the loss even rises to 10. What can I do to improve the situation?
I have pre-stored bottleneck features (.npy files) obtained from VGG16 for around 10k images. Training an SVM classifier (3-class classification) on these features gave me an accuracy of 90% on the test set. These images are obtained from videos. I want to train an LSTM in Keras on top of these features. My code snippet can be found below. The issue is that the training accuracy is not going above 43%, which is unexpected. Please help me debug the issue. I have tried different learning rates.
import numpy as np
from keras.models import Sequential
from keras.layers import TimeDistributed, Flatten, LSTM, Dense
from keras.optimizers import Adam
classes = 3
frames = 5
channels = 3
img_height = 224
img_width = 224
epochs = 20
#Model definition
model = Sequential()
model.add(TimeDistributed(Flatten(),input_shape=(frames,7,7,512)))
model.add(LSTM(256,return_sequences=False))
model.add(Dense(1024,activation="relu"))
model.add(Dense(3,activation="softmax"))
optimizer = Adam(lr=0.1,beta_1=0.9,beta_2=0.999,epsilon=None,decay=0.0)
model.compile (loss="categorical_crossentropy",optimizer=optimizer,metrics=["accuracy"])
model.summary()
train_data = np.load(open('bottleneck_features_train.npy','rb'))
#final_img_data shape --> 2342,5,7,7,512
#one_hot_labels shape --> 2342,3
model.fit(final_img_data,one_hot_labels,epochs=epochs,batch_size=2)
You are probably overshooting the local minimum because the learning rate is too high. Try decreasing the learning rate to 0.01 -- 0.001 and increasing the number of epochs. Also, reduce the Dense layer from 1024 neurons to half of that; otherwise you may overfit.
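A sketch of the model with those suggestions applied (the exact values are illustrative, not prescribed above):

from keras.models import Sequential
from keras.layers import TimeDistributed, Flatten, LSTM, Dense
from keras.optimizers import Adam

model = Sequential()
model.add(TimeDistributed(Flatten(), input_shape=(5, 7, 7, 512)))
model.add(LSTM(256, return_sequences=False))
model.add(Dense(512, activation="relu"))   # halved from 1024
model.add(Dense(3, activation="softmax"))

optimizer = Adam(lr=0.001)                 # reduced from 0.1
model.compile(loss="categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])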
I defined a convolutional layer and also use L2 weight decay in Keras.
When I define the loss and call model.fit(), is the weight decay loss included in the loss that is reported? If the weight decay loss is included in the total loss, how can I get the loss without this weight decay during training?
I want to investigate the loss without the weight decay, while keeping the weight decay active during training.
Yes, weight decay losses are included in the loss value printed on the screen.
The value you want to monitor is the total loss minus the sum of regularization losses.
The total loss is just model.total_loss.
The regularization losses are collected in the list model.losses.
The following lines can be found in the source code of model.compile():
# Add regularization penalties
# and other layer-specific losses.
for loss_tensor in self.losses:
    total_loss += loss_tensor
To get the loss without weight decay, you can reverse the above operations. I.e., the value to be monitored is model.total_loss - sum(model.losses).
Now, how to monitor this value is a bit tricky. Fortunately, the list of metrics used by a Keras model is not fixed until model.fit() is called. So you can append this value to the list, and it'll be printed on the screen during model fitting.
Here's a simple example:
from keras.layers import Input, Conv2D, GlobalAveragePooling2D, Dense
from keras.models import Model
from keras.regularizers import l2

input_tensor = Input(shape=(64, 64, 3))
hidden = Conv2D(32, 1, kernel_regularizer=l2(0.01))(input_tensor)
hidden = GlobalAveragePooling2D()(hidden)
out = Dense(1)(hidden)
model = Model(input_tensor, out)
model.compile(loss='mse', optimizer='adam')
loss_no_weight_decay = model.total_loss - sum(model.losses)
model.metrics_tensors.append(loss_no_weight_decay)
model.metrics_names.append('loss_no_weight_decay')
When you run model.fit(), something like this will be printed to the screen:
Epoch 1/1
100/100 [==================] - 0s - loss: 0.5764 - loss_no_weight_decay: 0.5178
You can also verify whether this value is correct by computing the L2 regularization manually:
import numpy as np

conv_kernel = model.layers[1].get_weights()[0]
print(np.sum(0.01 * np.square(conv_kernel)))
In my case, the printed value is 0.0585, which is indeed the difference between loss and loss_no_weight_decay (with some rounding error).