How to label test data using trained model with keras? - python-3.x

I am working on the following keras convolutional neural network tutorial https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d
After training the model I want to test it on sample images and label them. I realize that I have to use the predict method, which generates an array showing what score each label gets for a particular image, but I am having trouble using it. If the images are in the folder test_images and there are 20 of them, how do I test these images and get the predictions?
This is how far I've gotten with one image (even though I want it for multiple images):
import cv2
import numpy as np

image = cv2.imread('test1.jpg')       # BGR image, shape (height, width, 3)
image = cv2.resize(image, (224, 224))
features = np.swapaxes(np.swapaxes(image, 1, 2), 0, 1)  # HWC -> CHW
predictions = model.predict(features)
This throws the following error:
ValueError: Error when checking : expected conv2d_1_input to have 4 dimensions, but got array with shape (3, 224, 224)
Thank you very much!
Some of the questions I consulted before:
Simple Neural Network in Python not displaying label for the test image
https://github.com/fchollet/keras/issues/315

model.predict processes an array of samples, not a single image, so you are missing the batch/samples dimension; in your case the batch holds just one image. You just have to reshape the array:
features = features.reshape((1, 3, 224, 224))
And then pass it to predict.
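For the original question of predicting all 20 images in test_images at once, a minimal sketch along the same lines (assuming, as above, that the model expects channels-first (3, 224, 224) input; the folder name is the one from the question):

import os
import cv2
import numpy as np

folder = 'test_images'
batch = []
for name in sorted(os.listdir(folder)):
    image = cv2.imread(os.path.join(folder, name))
    image = cv2.resize(image, (224, 224))
    batch.append(np.transpose(image, (2, 0, 1)))  # HWC -> CHW

batch = np.stack(batch)             # shape: (num_images, 3, 224, 224)
predictions = model.predict(batch)  # one row of scores per image
labels = np.argmax(predictions, axis=1)  # highest-scoring class per image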

Related

Multiple Images as one input in Keras CNN

I am trying to build an image classification model with Keras that should be able to classify cups as "good condition" or "defect". This was easy to do with a single image as input. However, I now want to try feeding 6 images, one from every angle (top, bottom, side, etc.), as input. What would be the best approach for this? My initial idea was a np array of shape (6, width, height, 3), but this proved unsuitable.
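One possible approach (a hedged sketch, not an established recipe): give the model six inputs and run each view through a shared CNN encoder, concatenating the per-view features before classification. The 128x128 image size and layer sizes are assumptions for illustration:

from tensorflow.keras import layers, models

def build_encoder():
    # small CNN applied to each view; reusing one instance shares the weights
    return models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.GlobalAveragePooling2D(),
    ])

encoder = build_encoder()
inputs = [layers.Input((128, 128, 3)) for _ in range(6)]  # one input per view
features = [encoder(inp) for inp in inputs]
merged = layers.concatenate(features)
output = layers.Dense(1, activation='sigmoid')(merged)    # good vs. defect
model = models.Model(inputs, output)

The model is then fed a list of six arrays, each of shape (batch, 128, 128, 3).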

How to use ImageDataGenerator with multi-label masks for multi-class image segmentation?

In order to do multi-class segmentation the masks need to be one-hot encoded. For example, if I have 100 images of shape 224x224x3 with 5 different classes, I would have a set of masks with shape (100, 224, 224, 5), i.e. the last dimension (the channel) refers to the class of the pixel. Given a grayscale mask that contains 6 classes, where each pixel has a label 1-6, I can easily convert this to the categorical mask I need using tf.keras.utils.to_categorical.
If I use the ImageDataGenerator provided with Keras, I know I can create a generator for both images and masks and then zip them together (as the code below shows), but where I'm confused is: how do I convert the masks into this categorical one-hot-encoded structure while using the ImageDataGenerator? The ImageDataGenerator only finds files in directories that are saved as images, so I can't convert the masks and then save them as numpy arrays (the one-hot-encoded masks) for the generator to pick up, as images can't have more than 4 channels, right? Is there some way of telling the generator to do this conversion? Or does this limit the number of classes I can have in my problem?
One solution is to write my own custom generator with the Sequence class, which I have done, but I'm keen to understand whether this is possible with the Keras built-in ImageDataGenerator. Could writing a Lambda layer on the network be the solution?
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator

# converts a 224x224 grayscale mask to its one-hot-encoded version
mask_categorical = tf.keras.utils.to_categorical(mask)

imgDataGen = ImageDataGenerator(rescale=1/255.)
maskDataGen = ImageDataGenerator()
imageGenerator = imgDataGen.flow_from_directory("dataset/image/",
                                                class_mode=None, seed=40)
maskGenerator = maskDataGen.flow_from_directory("dataset/mask/",
                                                class_mode=None, seed=40)
trainGenerator = zip(imageGenerator, maskGenerator)
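One workaround (a minimal sketch, not a built-in ImageDataGenerator feature): wrap the zipped generators and one-hot encode each mask batch on the fly. NUM_CLASSES and the assumption that the mask pixel labels are 0-based are both illustrative:

import tensorflow as tf

NUM_CLASSES = 6  # assumed; labels must be 0-based for to_categorical

def train_generator(image_gen, mask_gen):
    for images, masks in zip(image_gen, mask_gen):
        # masks arrive as image batches; take one channel and one-hot encode it
        masks = tf.keras.utils.to_categorical(masks[..., 0], NUM_CLASSES)
        yield images, masks

# model.fit_generator(train_generator(imageGenerator, maskGenerator), ...)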

Loading image with different input size than training in Keras

I am working on a CNN that deals with super-resolution. It is required that I extract patches from the image, then train on these small patches (i.e. 41x41).
However, when it comes to prediction, the input image is larger than the training patches, and Keras doesn't allow me to predict on an image larger than the training images.
I have read Can Keras deal with input images with different size?. I have tried the approach of putting None in my network input shape and then loading the weights. However, when it comes to this line: c1 = PReLU()(c1), I get the error: int() argument must be a string, a bytes-like object or a number, not 'NoneType'. The code is attached below.
How can I fix this problem? I am using Keras with the TensorFlow backend. I have no fully connected layers; all layers are Conv2D with relu, except for c1, which uses PReLU (see the snippet below).
Thanks.
import keras
from keras.layers import Input, Convolution2D, PReLU
from keras.models import Model

input_shape = (None, None, 1)
x = Input(shape=input_shape)
c1 = Convolution2D(64, (3, 3), kernel_initializer='he_normal',
                   padding='same', name='Conv1')(x)
c1 = PReLU()(c1)
#............................
output_img = keras.layers.add([x, finalconv])  # finalconv from the elided layers
model = Model(x, output_img)
"Keras doesn't allow me to predict an image of larger size than the training images"
This is wrong; Keras allows you to do so when your network is designed properly.
"However, when it comes to this line: c1 = PReLU()(c1), I get the error: int() argument must be a string, a bytes-like object or a number, not 'NoneType'."
This error is expected because your input shape contains None. Actually, if you had set shared_axes=[1,2] for PReLU (the default is shared_axes=None), you would not see this error.
Therefore, the real issue here is that PReLU's parameters were learned for a fixed 41x41 input, but are now asked to work for an arbitrary input size.
The best solution is to train a new model with input shape (None, None, 1) directly.
If you don't care about the possible degradation, you can load all layer weights of your pretrained model except for the PReLU layer, manually compute PReLU parameters that can be shared across shared_axes=[1,2], and use them as the new PReLU parameters.
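A minimal sketch of the retraining route, assuming the single-channel input from the question; with shared_axes=[1, 2] the PReLU slopes are shared over height and width, so they no longer depend on the undefined spatial dimensions:

from keras.layers import Input, Conv2D, PReLU
from keras.models import Model

x = Input(shape=(None, None, 1))
c1 = Conv2D(64, (3, 3), kernel_initializer='he_normal',
            padding='same', name='Conv1')(x)
c1 = PReLU(shared_axes=[1, 2])(c1)  # one learned slope per channel
# ... remaining Conv2D + relu layers, residual add, Model(...) as before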

Multiple predictions of multi-class image classification with Keras

I trained a CNN in Keras with images in a folder (two types of bees). I have a second folder with unlabeled bee images for prediction.
I'm able to predict a single image (as per the code below).
import numpy as np
from keras.preprocessing import image

test_image = image.load_img('data/test/20300.jpg')
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)  # add the batch dimension
prob = classifier.predict_proba(test_image)
Result:
prob
Out[214]: array([[1., 0.]], dtype=float32)
I would like to be able to predict all of the images (around 300).
Is there a way to load and predict all the images in a batch? And will predict() be able to handle it, as it expects an array to predict?
Model.predict_proba() (which is really a synonym of predict()) accepts batch input. From the documentation:
Generates class probability predictions for the input samples.
The input samples are processed batch by batch.
You just need to load several images and glue them together in a single numpy array. By expanding the 0th dimension, your code already uses a batch of size 1 in test_image. To complete the picture, there's also a Model.predict_on_batch() method.
To load a batch of test images you can use image.list_pictures or ImageDataGenerator.flow_from_directory() (which is compatible with Model.predict_generator() method, see the examples in the documentation).
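For instance, a minimal sketch using image.list_pictures mentioned above ('data/unlabeled' is an assumed path, and all images are assumed to share the size the model was trained on):

import numpy as np
from keras.preprocessing import image

paths = image.list_pictures('data/unlabeled')
batch = np.stack([image.img_to_array(image.load_img(p)) for p in paths])
probs = classifier.predict(batch)  # shape: (num_images, 2), one row per image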

Using matrices as input to convolutional neural network

I am trying to use a convolutional neural network to identify patterns in binary matrices and classify them into one of two classes. At the moment I have a bunch of 15x15 matrices in csv format.
In order to get a handle on how convolutional nets work I have been following sentdex's tutorials on YouTube. In these he uses a conv net to classify the MNIST dataset. The code he uses to specify the input is like this:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
x = tf.placeholder('float', [None, 784])  # 28*28 pixels, flattened
y = tf.placeholder('float')
My question is: how do I set up a file like 'input_data' from which the conv net can read my matrices and labels? Can I include ALL of my training data in one file, or do I need to split it into train/test files?
I have set up an Excel file in the following format, but I'm not sure whether it will work the same way MNIST does.
(input data example file: image omitted)
My favorite tutorials are from aymericdamien; below is a link to the convolutional tutorial in Jupyter (go up a few directories on GitHub for all of the tutorials).
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/convolutional_network_raw.ipynb
You'll notice that their input is the same as what you have posted:
X = tf.placeholder(tf.float32, [None, num_input])
Y = tf.placeholder(tf.float32, [None, num_classes])
And the first thing they do in the conv_net() function is reshape it to an image:
x = tf.reshape(x, shape=[-1, 28, 28, 1])
The shape arguments are understood as follows:
-1: variable batch size
28: height of the image (MNIST is 28x28 grayscale images)
28: width of the image
1: color channels; grayscale images have 1 channel, RGB images typically have 3
Try reshaping the image using numpy and displaying it yourself to check that you got it right:
import scipy.misc as misc
import numpy as np

img = np.reshape(flat_image, (28, 28, 1))
misc.imshow(img[:, :, 0])  # drop the channel dimension for display
As far as the train and test process goes, TensorFlow doesn't care about your file structure. I would generally separate the files to make sure you don't accidentally pass your test set to your training process, though. You will ultimately need to call sess.run separately on your training and test datasets. I think the tutorial I linked to provides a very good example of this process, so if you have more specific questions I'll leave them to a future post.
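To make the CSV idea concrete, here is a hedged sketch in the same TF1 style as the tutorial; the file names, the one-matrix-per-row layout, and the separate one-hot label file are all assumptions for illustration:

import numpy as np
import tensorflow as tf

# each row of matrices.csv holds one flattened 15x15 matrix (225 values)
data = np.loadtxt('matrices.csv', delimiter=',')   # shape: (num_samples, 225)
labels = np.loadtxt('labels.csv', delimiter=',')   # one-hot, shape: (num_samples, 2)

x = tf.placeholder(tf.float32, [None, 225])        # 15*15 = 225 features
y = tf.placeholder(tf.float32, [None, 2])          # two classes
x_image = tf.reshape(x, [-1, 15, 15, 1])           # batch of 15x15 "images"

# later: sess.run(train_op, feed_dict={x: data, y: labels})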
