I have trained a basic CNN model for image classification.
While training the model I used ImageDataGenerator from the Keras API.
After the model was trained, I used a test data generator with the flow_from_directory method for testing, and everything went well.
Then I saved the model for future use.
Now I am loading the same model and using the predict method from the Keras API on a single image, but the prediction is wrong every time, whichever image I test with.
Could you please suggest a solution?
training_augmentation = ImageDataGenerator(rescale=1 / 255.0)
validation_testing_augmentation = ImageDataGenerator(rescale=1 / 255.0)
# Initialize training generator
training_generator = training_augmentation.flow_from_directory(
    JPG_TRAIN_IMAGE_DIR,
    class_mode="categorical",
    target_size=(32, 32),
    color_mode="rgb",
    shuffle=True,
    batch_size=batch_size
)
# initialize the validation generator
validation_generator = validation_testing_augmentation.flow_from_directory(
    JPG_VAL_IMAGE_DIR,
    class_mode="categorical",
    target_size=(32, 32),
    color_mode="rgb",
    shuffle=False,
    batch_size=batch_size
)
# initialize the testing generator
testing_generator = validation_testing_augmentation.flow_from_directory(
    JPG_TEST_IMAGE_DIR,
    class_mode="categorical",
    target_size=(32, 32),
    color_mode="rgb",
    shuffle=False,
    batch_size=batch_size
)
history = model.fit_generator(
    training_generator,
    steps_per_epoch=total_img_count_dict['train'] // batch_size,
    validation_data=validation_generator,
    validation_steps=total_img_count_dict['val'] // batch_size,
    epochs=epochs,
    callbacks=callbacks)
testing_generator.reset()
prediction_stats = model.predict_generator(testing_generator, steps=(total_img_count_dict['test'] // batch_size) + 1)
### Trying to use the predict method
img_file = '/content/drive/My Drive/Traffic_Sign_Recognition/to_display_output/Copy of 00003_00019.jpg'
img = cv2.imread(img_file)
img = cv2.resize(img, (32, 32))
img = img / 255.0
a = np.reshape(img, (1, 32, 32, 3))
model = load_model('/content/drive/My Drive/Traffic_Sign_Recognition/basic_cnn.h5')
prediction = model.predict(a)
Whenever I try to use predict, the prediction comes out wrong.
Any leads will be appreciated.
The Keras generator uses PIL for image reading, which loads images from disk as RGB.
You are using OpenCV for reading, which loads images as BGR. You have to convert your image from BGR to RGB:
img = cv2.imread(img_file)
img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
...
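Putting it all together, here is a minimal sketch of a single-image prediction path that mirrors the training preprocessing above (RGB channel order, 32x32 resize, 1/255 rescale; the paths are the ones from the question):

import cv2
import numpy as np
from keras.models import load_model

model = load_model('/content/drive/My Drive/Traffic_Sign_Recognition/basic_cnn.h5')

img = cv2.imread(img_file)                  # OpenCV loads BGR
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # match PIL's RGB channel order
img = cv2.resize(img, (32, 32))             # same target_size as the generators
img = img.astype('float32') / 255.0         # same rescale as ImageDataGenerator
img = np.expand_dims(img, axis=0)           # add the batch dimension: (1, 32, 32, 3)

prediction = model.predict(img)
class_index = int(np.argmax(prediction, axis=1)[0])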
I'm trying to build a CNN model to classify an image, but whenever training is done and I feed the model a single image (even one from the training dataset), it always misclassifies it.
Please take a look at the code I wrote below.
Thank you in advance.
First, I declared an Image Data Generator for both my training and testing sets:
train_datagen = ImageDataGenerator(rescale=1./255, rotation_range=20,
                                   horizontal_flip=True, validation_split=0.3)
test_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.3)
Then, I used the flow_from_directory() function to load the images:
train_generator = train_datagen.flow_from_directory(
    data_dir,
    shuffle=False,
    subset='training',
    target_size=(224, 224),
    class_mode='categorical'
)
test_generator = test_datagen.flow_from_directory(
    data_dir,
    shuffle=False,
    subset='validation',
    target_size=(224, 224),
    class_mode='categorical'
)
I then loaded a pretrained model and added a few layers to build my model:
pretrained_model = VGG16(weights="imagenet", include_top=False,
                         input_tensor=input_shape)
pretrained_model.trainable = False
model = tf.keras.Sequential([
    pretrained_model,
    Flatten(name="flatten"),
    Dense(3, activation="softmax")
])
I then trained the model:
INIT_LR = 3e-4
EPOCHS = 15
opt = Adam(lr=INIT_LR)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])  # pass opt, not the string 'Adam', so INIT_LR is actually used
H = model.fit(
    train_generator,
    validation_data=test_generator,
    epochs=EPOCHS,
    verbose=1)
Then came the part to predict a single image:
I chose an image that was part of the training set, and I even overfitted the model to make sure the predictions would be correct, but it gave me wrong results for every image I fed to the model.
I tried the following ways:
image = image.load_img(url, target_size=(224, 224))
img = tf.keras.preprocessing.image.img_to_array(image)
img = np.array([img])
img = img.astype('float32') / 255.
img = tf.keras.applications.vgg16.preprocess_input(img)
This didn't work.
image = cv2.imread(url)
image = cv2.normalize(image, None,beta=255, dtype=cv2.CV_32F)
image = cv2.resize(image, (224, 224))
image = np.expand_dims(image, axis=0)
This also didn't work; I tried many other ways to predict a single image, but none worked.
Finally, the only approach that worked was to create an ImageDataGenerator with flow_from_directory for this single image, but I believe that's not how it should be done.
The line img = tf.keras.applications.vgg16.preprocess_input(img) expects raw pixel values in the range 0 to 255; it converts the channels from RGB to BGR and subtracts the ImageNet channel means. In the previous line of code,
img = img.astype('float32') / 255.
you already rescaled the pixels, so preprocess_input no longer receives the input range it expects; remove that line. Now, to predict a single image, you need to expand the dimensions with
img = np.expand_dims(img, axis=0)
In your second attempt, be aware that cv2 reads images in as BGR. If your model was trained on RGB images, then your predictions will be wrong. Use the code below to convert the image to RGB.
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
As a side note, if your model was trained on images scaled to the range -1 to +1 (rather than with vgg16.preprocess_input, which is a different transformation), you can apply that scaling with the function below; whichever preprocessing you use at inference must match the one used during training.
def scalar(img):
    return img / 127.5 - 1
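For this particular model, the training generators used only rescale=1./255, so a minimal sketch of a matching single-image path would be (url and model are assumed to be in scope):

import numpy as np
import tensorflow as tf

pil_img = tf.keras.preprocessing.image.load_img(url, target_size=(224, 224))  # PIL loads RGB
img = tf.keras.preprocessing.image.img_to_array(pil_img)
img = img / 255.0                   # same rescale=1./255 as the training generator
img = np.expand_dims(img, axis=0)   # shape (1, 224, 224, 3)

probs = model.predict(img)
predicted_class = int(np.argmax(probs, axis=1)[0])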
This answer could be one starting point:
Resnet50 produces different prediction when image loading and resizing is done with OpenCV
These are possible differences (short gist):
RGB vs BGR (OpenCV loads BGR)
The interpolation method used (INTER_LINEAR vs INTER_NEAREST).
img_to_array() transforms the data type into float32 rather than uint8 which is obtained by default when loading with OpenCV.
tf.keras.applications.vgg16.preprocess_input(img): this preprocessing function can differ from the manual preprocessing you wrote above. Note also that if you did not preprocess the training images in this same way (with preprocess_input()), poor results on the test set are to be expected, since the two preprocessing pipelines differ.
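To illustrate the first three items, here is a sketch of an OpenCV loading path aligned with the Keras defaults (assuming load_img's default nearest-neighbor interpolation was used on the Keras side):

import cv2

img = cv2.imread(url)                       # uint8, BGR channel order
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # match PIL's RGB order
img = cv2.resize(img, (224, 224),
                 interpolation=cv2.INTER_NEAREST)  # match load_img's default interpolation
img = img.astype('float32')                 # match img_to_array's dtype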
Hope these observations shed some light.
I have made this network, which seems to work OK.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    # zoom_range=0.2,
    # shear_range=0.2,
    # rotation_range=10,
    rescale=1/255,
    validation_split=0.2,
    # other things...
)
train_ds = datagen.flow_from_directory(
    data_dir,
    subset="training",
    class_mode='binary',
    target_size=target_size,
    batch_size=batch_size,
)
val_ds = datagen.flow_from_directory(
    data_dir,
    subset="validation",
    class_mode='binary',
    target_size=target_size,
    batch_size=batch_size,
)
pre_trained_model = InceptionV3(input_shape=(128, 128, 3),
                                include_top=False,
                                weights='imagenet')
for layer in pre_trained_model.layers:
    layer.trainable = False
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
Dl_1 = tf.keras.layers.Dropout(rate=0.2)
prediction_layer = tf.keras.layers.Dense(1, activation='sigmoid')
model_V3 = tf.keras.Sequential([
    pre_trained_model,
    global_average_layer,
    Dl_1,
    prediction_layer
])
model_V3.compile(optimizer='adam',
                 loss='binary_crossentropy',
                 metrics=['accuracy'])
lr_reduce = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=3, verbose=2, mode='min')  # 'min', since val_loss improves by decreasing
hist = model_V3.fit(
    train_ds,
    epochs=5,
    steps_per_epoch=len(train_ds),
    validation_data=val_ds,
    validation_steps=len(val_ds),
    callbacks=[lr_reduce])
However, the issue is that this network is built for RGB images, while my data consists of grayscale images. Right now I am just replicating channels so that R, G and B get the same value for each pixel.
The problem is that this is extremely memory-consuming and slow.
Is there a way to make the network use grayscale images instead of RGB?
Or is there another pretrained network better suited to classifying grayscale images of elliptic, cell-like structures?
You can change the number of channels when importing InceptionV3, but note that the pretrained 'imagenet' weights are only available for 3-channel inputs; with a single channel you have to pass weights=None and train from scratch:
pre_trained_model = InceptionV3(input_shape=(128, 128, 1),
                                include_top=False,
                                weights=None)  # 'imagenet' weights require 3 input channels
128 and 128 are the spatial dimensions of your image, and the 3 (or 1 in this case) is the number of color channels.
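Alternatively, if you want to keep the ImageNet weights, a common workaround (a sketch, not part of the original answer) is to replicate the grayscale channel inside the model, so the expansion to three channels happens on the fly per batch rather than in your stored dataset:

import tensorflow as tf
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(input_shape=(128, 128, 3),
                   include_top=False,
                   weights='imagenet')
base.trainable = False

inputs = tf.keras.Input(shape=(128, 128, 1))                 # grayscale input
x = tf.keras.layers.Concatenate()([inputs, inputs, inputs])  # replicate to 3 channels on the fly
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model_V3 = tf.keras.Model(inputs, outputs)

This keeps memory usage down because the single-channel images are only expanded inside the forward pass.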
I am working on a project to classify waste as plastic or non-plastic, using only images to train the model. However, I still don't know what features the model takes into account while classifying. I am using a CNN, but the prediction accuracy is not yet up to the mark.
The reason I went with a CNN is that there is no specific hand-crafted feature that distinguishes plastics from other materials. Is there another way to approach this problem?
For example, if I train on images of cats, my neural network learns what a cat is without my explicitly providing features; is the same valid here too?
Suppose you want to extract features from a pretrained convolutional neural network such as VGG16 (from the VGGNet family).
The code to reuse the convolutional base is:
from keras.applications import VGG16

conv_base = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(150, 150, 3))  # this is the size of your images
The final feature map has shape (4, 4, 512). That’s the feature on top of which you’ll stick a densely connected classifier.
There are two ways to extract features:
FAST FEATURE EXTRACTION WITHOUT DATA AUGMENTATION: run the convolutional base over your dataset, record its output to a Numpy array on disk, and then use this data as input to a standalone, densely connected classifier. This solution is fast and cheap to run, because it only requires running the convolutional base once for every input image, and the convolutional base is by far the most expensive part of the pipeline. But for the same reason, this technique won't allow you to use data augmentation.
Code for extracting Features using this method is shown below:
import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
base_dir = '/Users/fchollet/Downloads/cats_and_dogs_small'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
datagen = ImageDataGenerator(rescale=1./255)
batch_size = 20
def extract_features(directory, sample_count):
    features = np.zeros(shape=(sample_count, 4, 4, 512))
    labels = np.zeros(shape=(sample_count))
    generator = datagen.flow_from_directory(directory, target_size=(150, 150),
                                            batch_size=batch_size, class_mode='binary')
    i = 0
    for inputs_batch, labels_batch in generator:
        features_batch = conv_base.predict(inputs_batch)
        features[i * batch_size : (i + 1) * batch_size] = features_batch
        labels[i * batch_size : (i + 1) * batch_size] = labels_batch
        i += 1
        if i * batch_size >= sample_count:
            break
    return features, labels
train_features, train_labels = extract_features(train_dir, 2000)
validation_features, validation_labels = extract_features(validation_dir, 1000)
test_features, test_labels = extract_features(test_dir, 1000)
train_features = np.reshape(train_features, (2000, 4 * 4 * 512))
validation_features = np.reshape(validation_features, (1000, 4 * 4 * 512))
test_features = np.reshape(test_features, (1000, 4 * 4 * 512))
from keras import models
from keras import layers
from keras import optimizers
model = models.Sequential()
model.add(layers.Dense(256, activation='relu', input_dim=4 * 4 * 512))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=2e-5),
              loss='binary_crossentropy', metrics=['acc'])
history = model.fit(train_features, train_labels, epochs=30,
                    batch_size=20, validation_data=(validation_features, validation_labels))
Training is very fast because you only have to deal with two Dense layers; an epoch takes less than one second, even on a CPU.
FEATURE EXTRACTION WITH DATA AUGMENTATION: extend the model you have (conv_base) by adding Dense layers on top, and run the whole thing end to end on the input data. This allows you to use data augmentation, because every input image goes through the convolutional base every time it's seen by the model. But for the same reason, this technique is far more expensive than the first.
Code for the same is shown below:
from keras import models
from keras import layers
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
train_datagen = ImageDataGenerator(rescale=1./255, rotation_range=40,
                                   width_shift_range=0.2, height_shift_range=0.2,
                                   shear_range=0.2, zoom_range=0.2,
                                   horizontal_flip=True, fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(train_dir, target_size=(150, 150),
                                                    batch_size=20, class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=2e-5),
              metrics=['acc'])
history = model.fit_generator(train_generator, steps_per_epoch=100, epochs=30,
                              validation_data=validation_generator, validation_steps=50)
For more details, please refer to Section 5.3.1 of the book Deep Learning with Python, by the creator of Keras, François Chollet.
I have trained my model and saved it using model.save.
How can I use the model file to predict images?
I followed this article, How to predict input image using trained model in Keras?, and used this code:
# Modify 'test1.jpg' and 'test2.jpg' to the images you want to predict on
from keras.models import load_model
from keras.preprocessing import image
import numpy as np
# dimensions of our images
img_width, img_height = 320, 240
# load the model we saved
model = load_model('model1.h5')
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
# predicting images
img = image.load_img('yes.jpeg', target_size=(img_width, img_height))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = model.predict_classes(images, batch_size=10)
print(classes)
# predicting multiple images at once
##img = image.load_img('yes.jpeg', target_size=(img_width, img_height))
##y = image.img_to_array(img)
##y = np.expand_dims(y, axis=0)
# pass the list of multiple images np.vstack()
##images = np.vstack([x, y])
##classes = model.predict_classes(images, batch_size=10)
# print the classes, the images belong to
print(classes)
print(classes[0])
print(classes[0][0])
but the result is:
[[1]]
[[1]]
[1]
1
How can I map these outputs back to the class names?
Do not recompile your model unless you want to train it further; simply load your model and then predict.
Compiling only configures the loss and optimizer for training, and recompiling resets the optimizer state (not the learned weights); it is not needed for predict.
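The [[1]] being printed is already the class index produced by predict_classes for a binary model; to turn an index into a class name, invert the class_indices dictionary of the generator used for training. A minimal sketch, assuming the training generator (here called train_generator) is still available and that any rescaling used in training is reapplied:

from keras.models import load_model
from keras.preprocessing import image
import numpy as np

model = load_model('model1.h5')  # no compile step needed for inference

img = image.load_img('yes.jpeg', target_size=(320, 240))
x = np.expand_dims(image.img_to_array(img), axis=0)
# If the model was trained on rescaled inputs (e.g. 1/255), apply the same scaling to x here.

probs = model.predict(x)
class_index = int(probs[0][0] > 0.5)  # threshold the binary sigmoid output to 0 or 1

# Invert the generator's mapping, e.g. {'no': 0, 'yes': 1}, to get a readable name
index_to_label = {v: k for k, v in train_generator.class_indices.items()}
print(index_to_label[class_index])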
I'm trying to do an image segmentation problem where I want to segment 5 objects in an image. I'm using a U-net architecture. My final layer looks like this:
conv_final = Conv2D(OUTPUT_MASK_CHANNELS, (1, 1))(up_conv_224)
conv_final = Activation('sigmoid')(conv_final)
model = Model(inputs, conv_final, name="ZF_UNET_224")
However I get an error saying:
ValueError: Error when checking target: expected conv2d_24 to have shape (224, 224, 5) but got array with shape (224, 224, 3)
This is the generator that I'm using
image_generator = train_datagen.flow_from_directory(
    'data/train',  # this is the target directory
    target_size=(224, 224),  # all images will be resized to 224x224
    color_mode='rgb',
    batch_size=batch_size,
    class_mode=None,
    seed=1)
# this is a similar generator, for the masks
mask_generator = mask_datagen.flow_from_directory(
    'data/train',
    target_size=(224, 224),
    color_mode='rgb',
    batch_size=batch_size,
    class_mode=None,
    seed=1)
train_generator = zip(image_generator, mask_generator)
What can I do to fix this? Any help appreciated!
You have to convert the masks into one-hot encoded format, with one channel per class.
Use from keras.utils import to_categorical.
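For instance, a sketch of wrapping the paired generators so each RGB mask batch becomes a (224, 224, 5) one-hot target; it assumes every mask pixel stores a class index 0-4 replicated across the RGB channels and that mask_datagen applies no rescaling (if your masks encode classes as colors, map colors to indices first):

import numpy as np
from keras.utils import to_categorical

NUM_CLASSES = 5  # OUTPUT_MASK_CHANNELS

def one_hot_masks(mask_batch):
    # take one channel as the per-pixel class-index map
    class_indices = mask_batch[..., 0].astype('int32')
    return to_categorical(class_indices, num_classes=NUM_CLASSES)  # (batch, 224, 224, 5)

def training_pairs(image_gen, mask_gen):
    # yield (image, one-hot mask) pairs so the targets match the (224, 224, 5) output
    for images, masks in zip(image_gen, mask_gen):
        yield images, one_hot_masks(masks)

train_generator = training_pairs(image_generator, mask_generator)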