How to predict on a new image using model.predict? - Keras

I have built a model and saved its weights as 'first_try.h5', but I am not able to load the model and run it on an arbitrary image. My problem is how to use the model.predict_generator, model.predict, or model.predict_on_batch methods. I have read the documentation, but I need one small example. I am new to this; it is a binary classifier.
I am using CNTK as my backend and am following this tutorial. I need a little help.
EDIT:
I am trying the following way, but the output is just [[0.]]:
from PIL import Image
from numpy import array

model.load_weights('first_try.h5')
img = Image.open("data/predict/T160305M-0222c_5.jpg")
a = array(img).reshape(1, 512, 512, 3)
score = model.predict(a, batch_size=32, verbose=0)
print(score)
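An output of [[0.]] usually means the input was not preprocessed the same way as during training. Below is a minimal sketch, assuming the model above is already built with its weights loaded, that it was trained with ImageDataGenerator(rescale=1./255) as in the tutorial, and that it expects 512x512 RGB input; adjust target_size to whatever input shape your model actually uses.

import numpy as np
from keras.preprocessing import image

# Load and preprocess the image the same way the training generator did.
img = image.load_img("data/predict/T160305M-0222c_5.jpg", target_size=(512, 512))
x = image.img_to_array(img) / 255.0      # same rescaling as ImageDataGenerator(rescale=1./255)
x = np.expand_dims(x, axis=0)            # shape (1, 512, 512, 3)

prob = model.predict(x)[0][0]            # sigmoid output of the binary classifier
print(prob, 1 if prob > 0.5 else 0)      # probability and predicted class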

Related

Using pretrained models in Pytorch for Semantic Segmentation, then training only the fully connected layers with our own dataset

I am learning Pytorch and trying to understand how the library works for semantic segmentation.
What I've understood so far is that we can use a pre-trained model in PyTorch. I've found an article that used such a model in .eval() mode, but I have not been able to find any tutorial on training one on my own dataset. I have a very small dataset and need transfer learning to get results. My goal is to train only the FC layers with my own data. How is that achievable in PyTorch without complicating the code with OOP or many .py files? I have had a hard time navigating such repos on GitHub, as I am not the most proficient person when it comes to OOP. I have been using Keras for deep learning until recently, and there everything is easy and straightforward. Do I have the same options in PyTorch?
I appreciate any guidance on this. I need to run a piece of code that does the semantic segmentation and I am really confused about many of the steps I need to take.
Assume you start with a pretrained model called model. All of this occurs before you pass the model any data.
First, find the layers you want to train by listing all of them with model.children(). Running the following will show you all of the blocks and layers:
list(model.children())
Suppose you have now found the layers that you want to fine-tune (your FC layers, as you describe). If the layers you want to train are the last 5, you can inspect just those with:
list(model.children())[-5:]
Pull those layers out into a list so you can put something in their place later:
layer_list = list(model.children())[-5:]
Rebuild the model without them using nn.Sequential:
model_small = nn.Sequential(*list(model.children())[:-5])
Set requires_grad to False on the remaining (frozen) parameters:
for param in model_small.parameters():
    param.requires_grad = False
Now you have a model called model_small that has all of the layers except the ones you want to train. You can now reattach the layers you removed (constructed fresh here), and their parameters will have requires_grad set to True by default. When you train the model, only the weights of those layers will be updated.
model_small.avgpool_1 = nn.AdaptiveAvgPool2d(output_size=(1, 1))
model_small.lin1 = nn.Linear(in_features, hidden_size)      # fill in the sizes your network needs
model_small.logits = nn.Linear(hidden_size, num_classes)
model_small.softmax = nn.Softmax(dim=1)
model = model_small.to(device)
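Putting the steps together, here is a minimal sketch of the same idea using a torchvision classifier; resnet18, the layer name fc, and the output size of 2 are illustrative assumptions, and a segmentation model will have different layer names and shapes.

import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = models.resnet18(pretrained=True)

# Freeze every pretrained parameter so the backbone does not train.
for param in model.parameters():
    param.requires_grad = False

# Replace the final FC layer; the new layer's parameters default to requires_grad=True.
num_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_classes)

model = model.to(device)

# Only pass the trainable (new) parameters to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)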

How to run predictions on an image using a pretrained TensorFlow model?

I have adapted this retrain.py script to use with several pretrained models.
After training is done, it generates a 'retrained_graph.pb', which I then read and try to use to run predictions on an image with this code:
def get_top_labels(image_data):
    '''
    Returns a list of labels and their probabilities
    image_data: content of image as string
    '''
    with tf.compat.v1.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
        predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})
    return predictions
This works fine for the inception_v3 model because it has a tensor called 'DecodeJpeg'; other models I'm using, such as inception_v4, mobilenet, and inception_resnet_v2, do not.
My question is: can I add an op to the graph, like the one used in add_jpeg_decoding in the retrain.py script, so that I can afterwards use it for prediction?
Would it be possible to do something like this:
predictions = sess.run(softmax_tensor, {image_data_tensor: image_data}), where image_data_tensor is a variable that depends on which model I'm using?
I looked through Stack Overflow and couldn't find a question that solves my problem; I'd really appreciate any help with this, thanks.
I need to at least know if it's possible.
Sorry for the repost; I got no views on my first one.
So after some research I figured out a way; leaving an answer here in case someone needs it. What you need to do is do the decoding yourself: get a tensor from the image using t = read_tensor_from_image_file found here, then run your predictions with this piece of code:
start = time.time()
results = sess.run(output_layer_name,
                   {input_layer_name: t})
end = time.time()
return results
Usually input_layer_name = 'input:0' and output_layer_name = 'final_result:0'.
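For reference, a minimal sketch of the whole flow. The read_tensor_from_image_file helper below is a simplified stand-in for the one in TensorFlow's label_image example, and the tensor names ('input:0', 'final_result:0'), the 299x299 input size, and the mean/std values are assumptions that depend on which model you retrained.

import numpy as np
import tensorflow as tf
from PIL import Image

tf.compat.v1.disable_eager_execution()   # graph-mode sketch for a frozen .pb graph

def load_graph(model_file):
    # Load the frozen retrained_graph.pb into its own Graph.
    graph = tf.Graph()
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(model_file, "rb") as f:
        graph_def.ParseFromString(f.read())
    with graph.as_default():
        tf.compat.v1.import_graph_def(graph_def, name="")
    return graph

def read_tensor_from_image_file(file_name, input_height=299, input_width=299,
                                input_mean=0.0, input_std=255.0):
    # Decode and resize the JPEG ourselves instead of relying on a
    # 'DecodeJpeg/contents' tensor that only the inception_v3 graph contains.
    img = Image.open(file_name).convert("RGB").resize((input_width, input_height))
    arr = np.asarray(img, dtype=np.float32)
    arr = (arr - input_mean) / input_std
    return np.expand_dims(arr, axis=0)    # shape (1, height, width, 3)

graph = load_graph("retrained_graph.pb")
t = read_tensor_from_image_file("some_image.jpg")

with tf.compat.v1.Session(graph=graph) as sess:
    results = sess.run("final_result:0", {"input:0": t})
print(np.squeeze(results))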

TensorFlow Estimator API: How to get the weights of each node of a trained model using the Estimator API

I want to get the weights of each node of every layer in a DNNClassifier trained using the TensorFlow Estimator API. I know it is possible to get the weights of each node in Keras; is it possible with the Estimator API as well? Thanks for your help.
input_func = tf.estimator.inputs.pandas_input_fn(x=X_train, y=y_train, batch_size=10, num_epochs=1000, shuffle=True)
dnn_model = tf.estimator.DNNClassifier(hidden_units=[10, 10, 10], feature_columns=feat_cols, n_classes=2)
dnn_model.train(input_fn=input_func, steps=6000)
I have used the above code to train the model. I now want to extract the weights of each node of the hidden layers.
Yes, it should be possible to do so.
You can extract the trainable variables names with:
train_var_names = [var.name for var in tf.trainable_variables()]
These are usually named 'layer-0/kernel' and 'layer-0/bias'. You can then access their values (after training your network) through your estimator (which I'll assume is named dnn_model from your question). As an example:
weights_0 = dnn_model.get_variable_value(train_var_names[0])
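Alternatively, a minimal sketch that relies only on the estimator object itself; the exact variable names depend on the TensorFlow version and layer configuration (for a canned DNNClassifier they tend to look like 'dnn/hiddenlayer_0/kernel'), so list them first.

# List every variable stored in the estimator's checkpoint.
for name in dnn_model.get_variable_names():
    print(name)

# Fetch the weight matrix and bias vector of the first hidden layer by name
# (replace the names below with whatever the listing above actually shows).
w0 = dnn_model.get_variable_value('dnn/hiddenlayer_0/kernel')
b0 = dnn_model.get_variable_value('dnn/hiddenlayer_0/bias')
print(w0.shape, b0.shape)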

Tensorflow - building a CNN model as described in the tutorial

I just completed the implementation of A Guide to TF Layers: Building a Convolutional Neural Network for the MNIST data set. The model trained successfully and reached an accuracy of 97.3%.
However, the tutorial does not mention how to use the newly trained model to feed in my own images and see the predictions. Does anyone know how to use the output of training to make predictions? I can see that the tmp/mnist_convnet_model folder contains output files such as .pbtxt, meta, and index files, but I can't find instructions for using them to make predictions on my own images.
y_pred = tf.nn.softmax(your_final_layer)
y_pred_cls = tf.argmax(y_pred, axis=1)
and for prediction (inside an open tf.Session named sess):
feed_dict = {x: [your_image]}
classification = sess.run(y_pred_cls, feed_dict=feed_dict)
print(classification)
This applies to just about any model you create.
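Since the tutorial builds the model with the Estimator API, you can also ask the trained estimator for predictions directly. A minimal sketch, assuming mnist_classifier is the estimator created in the tutorial with model_dir="/tmp/mnist_convnet_model" (so it restores the trained weights) and that my_images holds your own digits preprocessed like the MNIST inputs:

import numpy as np
import tensorflow as tf

# Your own digits, preprocessed like MNIST: 28x28 grayscale, flattened to 784,
# and scaled to [0, 1]. The zeros below are only a placeholder.
my_images = np.zeros((1, 784), dtype=np.float32)

predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": my_images},
    num_epochs=1,
    shuffle=False)

predictions = mnist_classifier.predict(input_fn=predict_input_fn)
for pred in predictions:
    print(pred["classes"], pred["probabilities"])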

Super Resolution implementation using Keras

In Keras, if I want to display the output for a given low-resolution input image, what code would I use?
I tried
predictions = model.predict_classes(c)
as well as
get_3rd_layer_output = K.function([model.layers[0].input], [model.layers[2].output])
layer_output = get_3rd_layer_output([c])[0]
However, neither of these works.
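For a super-resolution network the output is an image, not a class label, so model.predict_classes does not apply; model.predict returns the upscaled image directly. A minimal sketch, assuming the network maps a (1, h, w, 3) array scaled to [0, 1] to a higher-resolution array in the same range (the file name and scaling are assumptions about how the training data was prepared):

import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing import image

# Load and scale the low-resolution input the same way the training data was prepared.
lr = image.img_to_array(image.load_img("low_res.png")) / 255.0
c = np.expand_dims(lr, axis=0)        # shape (1, h, w, 3)

sr = model.predict(c)[0]              # predicted high-resolution image
sr = np.clip(sr, 0.0, 1.0)            # keep values in a displayable range

plt.imshow(sr)
plt.axis("off")
plt.show()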
