I have finished the training process and got a model in .hdf5 format.
The neural network that I use is a siamese convolutional neural network.
When validating, the predicted image is a random image from my test folder.
I use this code for testing:
from glob import glob

test_alphabets = glob('{}/TEST/*'.format(dataset_dirname))
testset = {}
for alph in test_alphabets:
    dirs = glob('{}/*'.format(alph))
    alphabet = {}
    for dirname in dirs:
        alphabet[dirname] = glob('{}/*'.format(dirname))
    testset[alph] = alphabet
Then I display the result with
display_validation_test(siamese_model1, testset)
The result looks like this.
How do I run the test process on an image that I choose, and then display the matching image using the .h5 model from earlier?
First create your model (a keras.Model or keras.Sequential instance) with the same architecture as the one you trained.
Load the weights from the .h5 file: model.load_weights('your_weight_file.h5')
Read your image(s). For a single image, make sure to add a batch dimension of 1.
Call predict: prediction = model.predict(images)
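Putting those steps together, a minimal sketch (assuming the build_siamese_model() helper used for training, a 105x105 grayscale input shape, and placeholder file names; swap in whatever you actually trained with):

import numpy as np
from keras.preprocessing import image

def load_as_batch(path, size=(105, 105)):
    # load one image and add the batch dimension -> shape (1, 105, 105, 1); size is an assumption
    img = image.load_img(path, target_size=size, color_mode='grayscale')
    return np.expand_dims(image.img_to_array(img) / 255.0, axis=0)

model = build_siamese_model(input_shape=(105, 105, 1))   # rebuild the same architecture you trained
model.load_weights('your_weight_file.h5')

query = load_as_batch('my_query.png')        # the image you want to test (placeholder path)
candidate = load_as_batch('candidate.png')   # one image from your test set (placeholder path)
score = model.predict([query, candidate])    # similarity score for the pair
print(score)

Scoring the query against every image in testset and keeping the best-scoring one gives you the matching image to display.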
I have the following problem:
Input: a set of 6 images
Output: a probability for each image determining whether the image is the correct one out of the 6 images
I know how to create a CNN with keras, but not how to have multiple images as an input.
How would one solve this problem?
One way I can think of is to use a pre-trained model (VGG16 etc.), extract feature vectors from some intermediate layer for each of the 6 images, concatenate the 6 vectors, feed the result into a neural network (or some other classification model), and train it as a multiclass classification task.
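A rough sketch of that idea (the 224x224 input size, the frozen backbone, and the hidden-layer width are my assumptions, not part of the suggestion above):

from keras.applications import VGG16
from keras.layers import Concatenate, Dense, GlobalAveragePooling2D, Input
from keras.models import Model

# frozen VGG16 backbone shared by all 6 image inputs
backbone = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
backbone.trainable = False
pool = GlobalAveragePooling2D()

inputs = [Input(shape=(224, 224, 3)) for _ in range(6)]
features = [pool(backbone(x)) for x in inputs]    # one 512-d vector per image
merged = Concatenate()(features)                  # 6 * 512 = 3072-d vector
hidden = Dense(256, activation='relu')(merged)
output = Dense(6, activation='softmax')(hidden)   # probability that each position holds the correct image

model = Model(inputs=inputs, outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])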
You can also use an Autoencoder and take the anomaly detection approach.
# define the model (MaskRCNN comes from the matterport Mask R-CNN package)
from mrcnn.model import MaskRCNN

model = MaskRCNN(mode='training', model_dir='./', config=config)
# load weights (mscoco) and exclude the output layers
model.load_weights('mask_rcnn_coco.h5', by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"])
# train weights (output layers or 'heads')
model.train(train_set, test_set, learning_rate=config.LEARNING_RATE, epochs=2, layers='heads')
I have certain medical images containing fibroids.
I wish to apply instance segmentation or object detection.
I may have to use Mask R-CNN for instance segmentation and object detection. Is it possible to train the network from scratch instead of using transfer learning?
What I mean is random initialization of the weights for my data, instead of using weights derived from ImageNet or COCO.
From the command line, instead of training a model starting from pre-trained COCO weights like this
python my_model.py train --dataset=/path/dataset --weights=coco
execute the following line.
python my_model.py train --dataset=/path/dataset
And to start training from the first layer, execute the following code.
model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=10, layers='all')
Can't you just run the training without the model.load_weights() line? It seems to run fine for me when I do that; I assume it then starts from randomly initialized weights. It didn't give quite as good results as starting from COCO, but I'm sure that's expected behavior for some datasets.
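For completeness, a sketch of what that looks like with the matterport API, reusing config, train_set and test_set from the snippet above (the epoch count is arbitrary):

from mrcnn.model import MaskRCNN

# define the model exactly as before
model = MaskRCNN(mode='training', model_dir='./', config=config)
# no model.load_weights(...) call, so the network keeps its random initialization
model.train(train_set, test_set,
            learning_rate=config.LEARNING_RATE,
            epochs=10,
            layers='all')   # train all layers, not just the heads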
I trained a CNN in Keras on images in a folder (two types of bees). I have a second folder with unlabeled bee images for prediction.
I'm able to predict a single image (as per the code below).
import numpy as np
from keras.preprocessing import image

test_image = image.load_img('data/test/20300.jpg')
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)   # add the batch dimension
prob = classifier.predict_proba(test_image)
Result:
prob
Out[214]: array([[1., 0.]], dtype=float32)
I would like to be able to predict all of the images (around 300).
Is there a way to load and predict all the images in a batch? And will predict() be able to handle it, since it expects an array to predict on?
Model.predict_proba() (which is really a synonym of predict()) accepts batch input. From the documentation:
Generates class probability predictions for the input samples.
The input samples are processed batch by batch.
You just need to load several images and glue them together into a single numpy array. By expanding the 0 dimension, your code already uses a batch of 1 in test_image. To complete the picture, there's also the Model.predict_on_batch() method.
To load a batch of test images you can use image.list_pictures or ImageDataGenerator.flow_from_directory() (which is compatible with the Model.predict_generator() method; see the examples in the documentation).
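A minimal sketch of the numpy-array route, assuming the unlabeled images live in data/unlabeled/ and the model was trained on 150x150 inputs (both are assumptions; adjust to your setup):

import glob
import numpy as np
from keras.preprocessing import image

paths = sorted(glob.glob('data/unlabeled/*.jpg'))
batch = np.vstack([
    np.expand_dims(image.img_to_array(image.load_img(p, target_size=(150, 150))), axis=0)
    for p in paths
])                                   # shape: (n_images, 150, 150, 3)

probs = classifier.predict(batch)    # one row of class probabilities per image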
I just completed the implementation of A Guide to TF Layers: Building a Convolutional Neural Network for the MNIST data set. The training ran successfully and gave an accuracy of 97.3%.
However, the tutorial does not mention how to feed the trained model my own images and see the predictions. Does anyone know how to use the output of training to make predictions? In the tmp/mnist_convnet_model folder I see some output files such as .pbtxt, .meta, and .index files, but I can't find instructions on using them to make predictions on my own images.
y_pred = tf.nn.softmax(your_final_layer)
y_pred_cls = tf.argmax(y_pred, axis=1)
and for prediction
feed_dict = {x: [your_image]}                                 # wrap the image in a list to add the batch dimension
classification = sess.run(y_pred_cls, feed_dict=feed_dict)    # sess is your tf.Session with the trained graph
print(classification)
This applies to just about any model you create.
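Since the tutorial builds a tf.estimator.Estimator, another option (not part of the answer above; this sketch assumes the tutorial's cnn_model_fn, its 'x' feature key, and its 'classes'/'probabilities' prediction keys) is to point a fresh Estimator at the checkpoint directory and call predict():

import numpy as np
import tensorflow as tf

# re-create the estimator on top of the existing checkpoint directory
mnist_classifier = tf.estimator.Estimator(
    model_fn=cnn_model_fn, model_dir='/tmp/mnist_convnet_model')

# placeholder image: replace with your own 28x28 grayscale image scaled to [0, 1]
my_image = np.zeros((28, 28), dtype=np.float32)
pred_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'x': my_image.reshape(1, 784)}, num_epochs=1, shuffle=False)

for pred in mnist_classifier.predict(input_fn=pred_input_fn):
    print(pred['classes'], pred['probabilities'])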
I'm trying to modify the Keras Siamese Network example to get image features.
The problem is: how can I get the image features? The output of the last layer is only a number. What should I do to get the features before euclidean_distance?
You can try to first train the model on the entire dataset and save it.
Load the model back again, and now set the output layers to be processed_a and processed_b.
Now call model.predict() on the entire dataset once again and you'll have the features for each image in the dataset.
Hope this helps!
To get the embeddings from the Keras siamese network MNIST example after training:
# sub-model that outputs the embedding of the first branch instead of the distance
model_a = Model(inputs=model.input, outputs=processed_a)
# both pair inputs are still required; the result is the embedding of tr_pairs[:, 0]
embeddings = model_a.predict([tr_pairs[:, 0], tr_pairs[:, 1]])
I did it as follows (reference from my github post):
My trained siamese model looked like this:
siamese_model.summary()
Note that my newly redefined model is basically the same as the part highlighted in yellow.
I then redefined the model that I wanted to use for extracting embeddings (it should be the same model you defined, except without the multiple inputs of the siamese setup), which looked like this:
siamese_embeddings_model = build_siamese_model(input_shape)
siamese_embeddings_model.summary()
Then I just extracted the weights from my trained siamese model and set them in my new model:
embeddings_weights = siamese_model.layers[-3].get_weights()
siamese_embeddings_model.set_weights(embeddings_weights)
Then you can supply a new image to extract its embedding from the new model:
vector = siamese_embeddings_model.predict(image)
len(vector[0]) will print 150 because of my final dense layer (which produces the output vector).