Keras ImageDataGenerator Low Validation Accuracy

I'm trying to perform transfer learning with MobileNetV2 to classify the 196 car classes of the Stanford Cars-196 dataset.
I'm working in a Google Colab notebook.
I use Keras's ImageDataGenerator to load the images for training and validation.
On the training images I also perform data augmentation.
The following code shows how I do it:
# To load the dataset from the drive
from google.colab import drive
drive.mount('/content/drive')
import math
from keras.models import Sequential, Model
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout, ReLU, GlobalAveragePooling2D
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
BATCH_SIZE = 196
train_datagen = ImageDataGenerator(
    rotation_range=20,  # Rotate the augmented image by 20 degrees
    zoom_range=0.2,  # Zoom by 20% more or less
    horizontal_flip=True,  # Allow for horizontal flips of augmented images
    brightness_range=[0.8, 1.2],  # Lighter and darker images by 20%
    width_shift_range=0.1,
    height_shift_range=0.1,
    preprocessing_function=preprocess_input
)
img_data_iterator = train_datagen.flow_from_directory(
    # Where to take the data from; the classes are the subfolder names
    '/content/drive/My Drive/Datasets/cars-196/car_data/train',
    class_mode="categorical",  # labels are 2D one-hot encoded; this is the default, stated explicitly
    shuffle=True,  # shuffle the data; this is the default, stated explicitly
    batch_size=BATCH_SIZE,
    target_size=(224, 224)  # the default input size of MobileNet
)
validation_img_data_iterator = ImageDataGenerator().flow_from_directory(
    '/content/drive/My Drive/Datasets/cars-196/car_data/test',
    class_mode="categorical",
    shuffle=True,
    batch_size=BATCH_SIZE,
    target_size=(224, 224)
)
base_model = MobileNetV2(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
preds = Dense(196, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=preds)
# Freeze the already-trained layers
for layer in model.layers[:-3]:
    layer.trainable = False
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
# define the checkpoint
from keras.callbacks import ModelCheckpoint
filepath = "/content/drive/My Drive/Datasets/cars-196/model.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(
    img_data_iterator,
    steps_per_epoch=math.ceil(8144/BATCH_SIZE),  # 8144 is the number of training images
    validation_steps=math.ceil(8062/BATCH_SIZE),  # 8062 is the number of validation images
    validation_data=validation_img_data_iterator,
    epochs=100,
    callbacks=callbacks_list
)
Regarding the batch size: following this Stack Overflow question, I set the batch size to the number of available labels, but it did not change anything in terms of val_accuracy.
I've added a dropout of 0.5 between the fully connected layers that I added, but again, no change in the validation accuracy.
My accuracy on the training set reaches about 92% while the validation accuracy stays at about 0.7%.
My guess is that the ImageDataGenerator is misbehaving and hurting the accuracy, but I have not found a solution, so at the moment I have no clue what the reason behind it is.
Does anyone have any guesses as to what might be the problem?
----- EDIT
The train and test folders each contain subfolders named after the labels (the different cars I want to identify), and each subfolder holds images of that car. This is simply how the Cars-196 dataset is organized. The ImageDataGenerator attaches the right label to each image based on the subfolder it came from.
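As a quick sanity check (a small sketch using the generators defined above, not part of the original fix), you can verify that both generators found all 196 classes and share the same label mapping:
print(img_data_iterator.num_classes)                # expected: 196
print(validation_img_data_iterator.num_classes)     # expected: 196
print(img_data_iterator.class_indices ==
      validation_img_data_iterator.class_indices)   # expected: True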

The problem was that I did not apply the preprocess_input function to the validation image generator. MobileNetV2's preprocess_input scales pixel values to the [-1, 1] range, so without it the validation images were on a completely different scale than the training images.
Instead of
validation_img_data_iterator = ImageDataGenerator().flow_from_directory(
    '/content/drive/My Drive/Datasets/cars-196/car_data/test',
    class_mode="categorical",
    shuffle=True,
    batch_size=BATCH_SIZE,
    target_size=(224, 224)
)
I changed it to:
validation_img_data_iterator = ImageDataGenerator(
    preprocessing_function=preprocess_input
).flow_from_directory(
    '/content/drive/My Drive/Datasets/cars-196/car_data/test',
    class_mode="categorical",
    shuffle=True,
    batch_size=BATCH_SIZE,
    target_size=(224, 224)
)

Related

ValueError: logits and labels must have the same shape

I have a multi-layer perceptron network in Keras with two hidden layers.
While trying to train the network I get the following error from fit_generator:
Error:
ValueError: logits and labels must have the same shape ((None, 2) vs (None, 1))
My code is:
import numpy as np
import keras
from keras import layers
from keras import Sequential
# Define Window size (color images)
img_window = (32,32,3)
# Flatten the Window shape
input_shape = np.prod(img_window)
print(input_shape)
# Define an MLP with two hidden layers
simpleMLP = Sequential(
    [layers.Input(shape=img_window),
     layers.Flatten(),  # Flattens the input to a 1D vector; does not affect the batch size.
     layers.Dense(input_shape//2, activation="relu"),
     layers.Dense(input_shape//2, activation="relu"),
     layers.Dense(2, activation="sigmoid"),
    ]
)
# After the model is built, call its summary() method to display its contents
simpleMLP.summary()
# Initialization
# Size of the batches of data, adjust it depends on RAM
batch_size = 128
epochs = 5
# Compile MLP model with 3 arguments: loss function, optimizer, and metrics function to judge model performance
simpleMLP.compile(loss="binary_crossentropy",optimizer="adam",metrics=["binary_accuracy"]) #BCE
# Create ImageDataGenerator objects for the training and validation datasets
from keras.preprocessing.image import ImageDataGenerator
train_dir = '/content/train'
train_datagen = ImageDataGenerator(
    rescale=1./255,  # rescaling factor
    shear_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
    fill_mode='nearest')
valid_dir = '/content/valid'
valid_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
    fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=img_window[:2],
    batch_size=batch_size,
    class_mode='binary',
    color_mode='rgb'
)
validation_generator = valid_datagen.flow_from_directory(
    valid_dir,
    target_size=img_window[:2],
    batch_size=batch_size,
    class_mode='binary',
    color_mode='rgb')
# Train the MLP model
simpleMLP.fit_generator(
    train_generator,
    steps_per_epoch=8271 // batch_size,
    epochs=5,
    validation_data=validation_generator,
    validation_steps=2072 // batch_size)
Can you please advise me on how to resolve this problem? Thanks in advance.
Your problem is simply that your labels have shape (N, 1) while the loss is binary_crossentropy. This means the last layer should have a single output node, but your model outputs two classes.
simpleMLP = Sequential(
    [...
     layers.Dense(2, activation="sigmoid"),
    ]
)
Simply change this to,
simpleMLP = Sequential(
    [...
     layers.Dense(1, activation="sigmoid"),
    ]
)
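Alternatively (a sketch that goes beyond the original answer, reusing the imports and variables from the question), you can keep two output units and instead make the labels match by switching to the categorical setup:
# Hypothetical alternative: two-class softmax output with one-hot labels.
simpleMLP = Sequential(
    [layers.Input(shape=img_window),
     layers.Flatten(),
     layers.Dense(input_shape//2, activation="relu"),
     layers.Dense(input_shape//2, activation="relu"),
     layers.Dense(2, activation="softmax"),  # two-class softmax head
    ]
)
simpleMLP.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])
# The generators must then yield one-hot labels of shape (N, 2):
train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=img_window[:2], batch_size=batch_size,
    class_mode='categorical', color_mode='rgb')
Either way, the shapes of the labels and of the model output must agree.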

How to extract features from an image for training a CNN model

I am working on a project to classify waste as plastic or non-plastic using only images to train the model. However, I still don't know what features the model takes into account when classifying them. I am using a CNN, but the prediction accuracy is still not up to the mark.
The reason I went with a CNN is that there is no specific feature that distinguishes plastic from other materials. Is there any other way to approach this problem?
For example, if I train on images of cats, my neural network learns what a cat is even though I do not explicitly give it features. Is the same valid here too?
Suppose you want to extract features from a pre-trained convolutional neural network, such as VGG16 (VGGNet).
The code to reuse the convolutional base is:
from keras.applications import VGG16
conv_base = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(150, 150, 3))  # This is the size of your images
The final feature map has shape (4, 4, 512). That’s the feature on top of which you’ll stick a densely connected classifier.
There are two ways to extract features:
FAST FEATURE EXTRACTION WITHOUT DATA AUGMENTATION: Run the convolutional base over your dataset, record its output to a NumPy array on disk, and then use this data as input to a standalone, densely connected classifier similar to those in part 1 of the book. This solution is fast and cheap to run, because it only requires running the convolutional base once for every input image, and the convolutional base is by far the most expensive part of the pipeline. But for the same reason, this technique won't allow you to use data augmentation.
Code for extracting Features using this method is shown below:
import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
base_dir = '/Users/fchollet/Downloads/cats_and_dogs_small'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
datagen = ImageDataGenerator(rescale=1./255)
batch_size = 20
def extract_features(directory, sample_count):
    features = np.zeros(shape=(sample_count, 4, 4, 512))
    labels = np.zeros(shape=(sample_count))
    generator = datagen.flow_from_directory(directory, target_size=(150, 150),
                                            batch_size=batch_size, class_mode='binary')
    i = 0
    for inputs_batch, labels_batch in generator:
        features_batch = conv_base.predict(inputs_batch)
        features[i * batch_size : (i + 1) * batch_size] = features_batch
        labels[i * batch_size : (i + 1) * batch_size] = labels_batch
        i += 1
        if i * batch_size >= sample_count:
            break
    return features, labels
train_features, train_labels = extract_features(train_dir, 2000)
validation_features, validation_labels = extract_features(validation_dir, 1000)
test_features, test_labels = extract_features(test_dir, 1000)
train_features = np.reshape(train_features, (2000, 4 * 4 * 512))
validation_features = np.reshape(validation_features, (1000, 4 * 4 * 512))
test_features = np.reshape(test_features, (1000, 4 * 4 * 512))
from keras import models
from keras import layers
from keras import optimizers
model = models.Sequential()
model.add(layers.Dense(256, activation='relu', input_dim=4 * 4 * 512))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=2e-5),
              loss='binary_crossentropy', metrics=['acc'])
history = model.fit(train_features, train_labels, epochs=30,
                    batch_size=20, validation_data=(validation_features, validation_labels))
Training is very fast, because you only have to deal with two Dense layers; an epoch takes less than one second even on CPU.
FEATURE EXTRACTION WITH DATA AUGMENTATION: Extend the model you have (conv_base) by adding Dense layers on top, and run the whole thing end to end on the input data. This allows you to use data augmentation, because every input image goes through the convolutional base every time it is seen by the model. But for the same reason, this technique is far more expensive than the first.
Code for the same is shown below:
from keras import models
from keras import layers
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
train_datagen = ImageDataGenerator(rescale=1./255, rotation_range=40,
                                   width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2,
                                   zoom_range=0.2, horizontal_flip=True, fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(train_dir, target_size=(150, 150), batch_size=20, class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=2e-5),
              metrics=['acc'])
history = model.fit_generator(train_generator, steps_per_epoch=100, epochs=30, validation_data=validation_generator, validation_steps=50)
For more details, please refer to Section 5.3.1 of the book Deep Learning with Python by the creator of Keras, François Chollet.

I want all the output of the pretrained VGG16 model as well as the new classes it is trained on

I have tried transfer learning using VGG16, but I only get results for the classes it was trained on. I want the output to consist of both the VGG16 classes and my newly trained classes. Is that possible?
I have attached my whole code.
import matplotlib.pyplot as plt
import os, PIL
import tensorflow as tf
import numpy as np
import keras
from keras.models import Sequential, Model
from keras.layers.core import Dense, Dropout, Flatten, Reshape, Activation
from keras.layers import Embedding, Input, merge, ELU
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.optimizers import SGD, Adam, RMSprop
from keras.regularizers import l2
from keras.utils.np_utils import to_categorical
import sklearn.metrics as metrics
from PIL import Image, ImageDraw
from keras.applications import VGG16
from keras.applications.vgg16 import preprocess_input, decode_predictions
from keras.preprocessing.image import ImageDataGenerator
def load_images(image_paths):
    # Load the images from disk.
    images = [plt.imread(path) for path in image_paths]
    # Convert to a numpy array and return it.
    return np.asarray(images)
def path_join(dirname, filenames):
    return [os.path.join(dirname, filename) for filename in filenames]
train_dir = "/home/priyank/Jupyter_notebook/plant_leaves_train_set"
test_dir = "/home/priyank/Jupyter_notebook/val_data_plant"
# # Pre-Trained Model: VGG16
# Downloading the pretrained model of imagenet dataset.
model = VGG16(include_top=True, weights='imagenet')
# # Input Pipeline
# First we need to know the shape of the tensors expected as input by the pre-trained VGG16 model. In this case it is images of shape 224 x 224 x 3.
input_shape = model.layers[0].output_shape[1:3] # the input shape of the vgg16 model
input_shape
# # ImageDataGenerator
# It will pick the image one-by-one and transform all the data each time the image is loaded in the training set.
datagen_train = ImageDataGenerator(
    rescale=1./255,
    rotation_range=180,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    zoom_range=[0.9, 1.5],
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode='nearest')
datagen_test = ImageDataGenerator(rescale=1./255)
#
# The data generator will return batches of images. The VGG16 model is large, so we can't make the batches too large or we will run out of RAM and GPU memory.
# Taking small batch size
batch_size = 20
#
# We can save the randomly transformed images during training in order to inspect whether they have been overly distorted; if so, we would adjust the parameters of the data generator above.
if True:
    save_to_dir = None
else:
    save_to_dir = 'augmented_images/'
generator_train = datagen_train.flow_from_directory(directory=train_dir,
                                                    target_size=input_shape,
                                                    batch_size=batch_size,
                                                    shuffle=True,
                                                    save_to_dir=save_to_dir)
generator_test = datagen_test.flow_from_directory(directory=test_dir,
                                                  target_size=input_shape,
                                                  batch_size=batch_size,
                                                  shuffle=False)
steps_test = generator_test.n / batch_size
steps_test
image_paths_train = path_join(train_dir, generator_train.filenames)
image_paths_test = path_join(test_dir, generator_test.filenames)
cls_train = generator_train.classes
cls_test = generator_test.classes
class_names = list(generator_train.class_indices.keys())
class_names
num_classes = generator_train.num_classes
num_classes
# The dataset is imbalanced, so we use class weights: a class weighted 9.01192 produces larger gradients than a class weighted 0.8080, so the model learns more from the under-represented classes.
#
from sklearn.utils.class_weight import compute_class_weight
class_weight = compute_class_weight(class_weight='balanced',
                                    classes=np.unique(cls_train),
                                    y=cls_train)
class_weight
#
# Predict on our own images with the already-trained VGG16 model, using a helper function that resizes an image so it can be fed to the VGG16 model.
def predict(image_path):
    # Load and resize the image using PIL.
    img = PIL.Image.open(image_path)
    img_resized = img.resize(input_shape, PIL.Image.LANCZOS)
    # Plot the image.
    plt.imshow(img_resized)
    plt.show()
    # Convert the PIL image to a numpy array with the proper shape.
    img_array = np.expand_dims(np.array(img_resized), axis=0)
    # Use the VGG16 model to make a prediction.
    # This outputs an array with 1000 numbers corresponding to
    # the classes of the ImageNet dataset.
    print(img_array.shape)
    pred = model.predict(img_array)
    # Decode the output of the VGG16 model.
    print(pred)
    print(pred.shape)
    pred_decoded = decode_predictions(pred)[0]
    # Print the predictions.
    for code, name, score in pred_decoded:
        print("{0:>6.2%} : {1}".format(score, name))
predict(image_path='/home/priyank/Pictures/people.jpg')
predict(image_path=image_paths_train[0])
# The pre-trained VGG16 model was unable to classify images from the plant disease dataset. The reason is perhaps that the VGG16 model was trained on the so-called ImageNet dataset which may not have contained many images of plant diseases.
#
# The lower layers of a convolutional neural network can recognize many different shapes or features in an image. It is the last few fully-connected layers that combine these features into a classification of the whole image. So we can try to re-route the output of the last convolutional layer of the VGG16 model to a new fully-connected neural network that we create for doing the classification.
# summary of VGG16 model.
model.summary()
# We can see that the last convolutional layer is called 'block5_pool' so we use Keras to get a reference to that layer.
transfer_layer = model.get_layer('block5_pool')
#
#
# We refer to this layer as the transfer layer because its output will be re-routed to our new fully-connected neural network, which will do the classification for our dataset.
#
# The output of the transfer layer has the following shape:
#
transfer_layer.output
# we take the part of the VGG16 model from its input-layer to the output of the transfer-layer. We may call this the convolutional model, because it consists of all the convolutional layers from the VGG16 model.
conv_model = Model(inputs=model.input,
                   outputs=transfer_layer.output)
# Start a new Keras Sequential model.
new_model = Sequential()
# Add the convolutional part of the VGG16 model from above.
new_model.add(conv_model)
# Flatten the output of the VGG16 model because it is from a
# convolutional layer.
new_model.add(Flatten())
# Add a dense (aka. fully-connected) layer.
# This is for combining features that the VGG16 model has
# recognized in the image.
new_model.add(Dense(1024, activation='relu'))
# Add a dropout-layer which may prevent overfitting and
# improve generalization ability to unseen data e.g. the test-set.
new_model.add(Dropout(0.5))
# Add the final layer for the actual classification.
new_model.add(Dense(num_classes, activation='softmax'))
optimizer = Adam(lr=1e-5)
loss = 'categorical_crossentropy'
metrics = ['categorical_accuracy']
# Helper function for printing whether a layer in the VGG16 model should be trained.
def print_layer_trainable():
    for layer in conv_model.layers:
        print("{0}:\t{1}".format(layer.trainable, layer.name))
print_layer_trainable()
#
#
# In Transfer Learning we are initially only interested in reusing the pre-trained VGG16 model as it is, so we will disable training for all its layers.
#
conv_model.trainable = False
for layer in conv_model.layers:
    layer.trainable = False
print_layer_trainable()
new_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
epochs = 15
steps_per_epoch = 100
# steps_per_epoch * batch_size = 100 * 20 = 2000, so 2000 randomly augmented images are seen per epoch.
history = new_model.fit_generator(generator=generator_train,
                                  epochs=epochs,
                                  steps_per_epoch=steps_per_epoch,
                                  class_weight=class_weight,
                                  validation_data=generator_test,
                                  validation_steps=steps_test)
new_model.save("trained_new.h5")
predict(image_path = "/home/priyank/Jupyter_notebook/pp.jpg")
**It only predicts the 38 classes it was trained on. What I want is: if a new image does not belong to these 38 classes, the model should return the VGG16 class, or "no match found". Please help.**
Thanks in advance.
Use the functional API instead of Sequential.
The official guide is here: https://keras.io/getting-started/functional-api-guide/
You can find a multi-input and multi-output model example there. What you want is very similar, but with only one input instead of several. A minimal sketch of that idea is shown below.
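Here is a rough sketch of such a single-input, two-output model: the pre-trained 1000-class ImageNet head is kept frozen, and a second head is added for your own classes. The layer names 'block5_pool' and 'predictions' come from the standard Keras VGG16 model; everything else (the 1024-unit head, the loss setup) is illustrative, not a definitive implementation.
from keras.applications import VGG16
from keras.layers import Dense, Dropout, Flatten
from keras.models import Model

num_classes = 38  # the number of new classes

# Full VGG16 including its 1000-way ImageNet classifier ('predictions' layer).
vgg = VGG16(include_top=True, weights='imagenet')
imagenet_output = vgg.output

# Re-route the last convolutional block ('block5_pool') into a new head.
transfer_layer = vgg.get_layer('block5_pool')
x = Flatten()(transfer_layer.output)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
new_output = Dense(num_classes, activation='softmax', name='new_classes')(x)

# One input, two outputs: the original ImageNet predictions and the new ones.
multi_model = Model(inputs=vgg.input, outputs=[imagenet_output, new_output])

# Freeze everything that came from VGG16 so only the new head is trained.
for layer in vgg.layers:
    layer.trainable = False

# Attach a loss only to the new head; the frozen ImageNet head is left as-is,
# so training targets are supplied only for 'new_classes'.
multi_model.compile(optimizer='adam',
                    loss={'new_classes': 'categorical_crossentropy'},
                    metrics=['categorical_accuracy'])
At inference time, multi_model.predict(img_array) returns two arrays: the 1000 ImageNet probabilities (which decode_predictions can interpret) and the probabilities over your 38 classes; you can then threshold the new-class confidence to decide when to fall back to the ImageNet prediction or report "no match found".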

Data Augmentation hurts accuracy Keras

I'm trying to adapt Deep Learning with Python, Section 5.3 "Feature extraction with data augmentation", to a 3-class problem with ResNet50 (ImageNet weights).
Full code at https://github.com/morenoh149/plantdisease
from keras import models
from keras import layers
from keras.applications.resnet50 import ResNet50
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
input_shape = (224, 224, 3)
target_size = (224, 224)
batch_size = 20
conv_base = ResNet50(weights='imagenet', input_shape=input_shape, include_top=False)
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(3, activation='softmax'))
conv_base.trainable = False
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    'input/train',
    target_size=target_size,
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    'input/validation',
    target_size=target_size,
    batch_size=batch_size,
    class_mode='categorical')
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=2e-5),
              metrics=['acc'])
history = model.fit_generator(
    train_generator,
    steps_per_epoch=96,
    epochs=30,
    verbose=2,
    validation_data=validation_generator,
    validation_steps=48)
Questions:
The book doesn't go into much detail on ImageDataGenerator or on selecting steps_per_epoch and validation_steps. What should these values be?
I have 3 classes with 1,000 images each, split 60/20/20 into train/validation/test.
I was able to get a validation accuracy of 60% without data augmentation. Above, I've simplified the ImageDataGenerator to only rescale; this model has a validation accuracy of 30%. Why?
What changes do I need to make to the data-augmented version of this script to match the accuracy of the version with no augmentation?
UPDATE:
This may be an issue with keras itself
https://github.com/keras-team/keras/issues/9214
https://github.com/keras-team/keras/pull/9965
To answer your first question: steps_per_epoch is the number of batches the training generator should yield before an epoch is considered finished. If you have 600 training images and a batch size of 20, that is 30 steps per epoch, and so on. validation_steps applies the same logic to the validation data generator, which is run at the end of each epoch.
In general, steps_per_epoch is the size of your dataset divided by the batch size.
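A small sketch (reusing the generators, model, and batch_size defined in the question) that derives these values from the generators instead of hard-coding them:
import math

# Derive the step counts from the generators themselves.
steps_per_epoch = math.ceil(train_generator.samples / batch_size)
validation_steps = math.ceil(validation_generator.samples / batch_size)

history = model.fit_generator(
    train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=30,
    validation_data=validation_generator,
    validation_steps=validation_steps)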

Using Model file of convolutional neural network

I have trained my model and saved it using model.save.
How can I use the model file to predict images?
I used this article, How to predict input image using trained model in Keras?, and used this code:
# Modify 'test1.jpg' and 'test2.jpg' to the images you want to predict on
from keras.models import load_model
from keras.preprocessing import image
import numpy as np
# dimensions of our images
img_width, img_height = 320, 240
# load the model we saved
model = load_model('model1.h5')
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
# predicting images
img = image.load_img('yes.jpeg', target_size=(img_width, img_height))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = model.predict_classes(images, batch_size=10)
print(classes)
# predicting multiple images at once
##img = image.load_img('yes.jpeg', target_size=(img_width, img_height))
##y = image.img_to_array(img)
##y = np.expand_dims(y, axis=0)
# pass the list of multiple images np.vstack()
##images = np.vstack([x, y])
##classes = model.predict_classes(images, batch_size=10)
# print the classes, the images belong to
print(classes)
print(classes[0])
print(classes[0][0])
but the result is
[[1]]
[[1]]
[1]
1
How can I convert it into class indices?
Do not recompile your model unless you want to train it again; a model loaded with load_model is already compiled. Simply load your model and then predict. Recompiling also resets the optimizer state.
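To turn the predicted index into a human-readable label, one common approach (sketched here, assuming you still have access to the training generator, called train_generator below, whose folder structure defined the classes) is to invert its class_indices mapping:
# Hypothetical: train_generator is the flow_from_directory iterator used for training.
index_to_class = {v: k for k, v in train_generator.class_indices.items()}

classes = model.predict_classes(images, batch_size=10)   # e.g. [[1]]
print(index_to_class[int(classes[0][0])])                # prints the class name for index 1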
