Output predicted image Tensorflow Lite - python-3.x

I am trying to figure out how I can save a predicted mask (the output) from a TensorFlow model that has been converted to a tf.lite model on my PC. Any tips or ideas on how I can visualize it or save the predicted mask as a .png image? I have tried using the TensorFlow Lite inference guide at https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python without success.
The output now is as follows:
[ 1 512 512 3]
[[[[9.7955531e-01 2.0444747e-02]
[9.9987805e-01 1.2197520e-04]
[9.9978799e-01 2.1196880e-04]
.......
.......
[9.9997246e-01 2.7536058e-05]
[9.9997437e-01 2.5645388e-05]
[1.9125430e-03 9.9808747e-01]]]]
Any help is greatly appreciated.
Many thanks
import numpy as np
import tensorflow as tf

## Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="tflite_model.tflite")
print(interpreter.get_input_details())
print(interpreter.get_output_details())
print(interpreter.get_tensor_details())
interpreter.allocate_tensors()
## Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
## Test the model on input data.
input_shape = input_details[0]['shape']
print(input_shape)
## Use same image as Keras model
input_data = np.array(Xall, dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
## The function `get_tensor()` returns a copy of the tensor data.
## Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
print(output_data.shape)

It depends on the meaning of your model output; once you know what each value represents, you can use an image library like cv2 or PIL to draw and save the mask.
For example, take the first row:
9.7955531e-01 2.0444747e-02
You need to figure out what these two values correspond to. With the limited information available, it is hard to guess from the context.
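If the two values per pixel are class probabilities (for example background vs. foreground), a minimal sketch for turning output_data into a .png mask with NumPy and PIL could look like this (the (1, 512, 512, 2) output shape is an assumption based on the printout above):
import numpy as np
from PIL import Image

# Assumption: output_data has shape (1, H, W, 2) with per-pixel
# softmax probabilities for two classes (e.g. background/foreground).
mask = np.argmax(output_data[0], axis=-1)                  # (H, W), values 0 or 1
mask_img = Image.fromarray((mask * 255).astype(np.uint8))  # 0 -> black, 255 -> white
mask_img.save("predicted_mask.png")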

Related

How to save TF2 trained model and use it again for inference?

I used the following tutorial program (Python 3) to train a model to classify images as cat or dog.
https://www.tensorflow.org/tutorials/images/classification
I could run this on my Ubuntu computer, but I want to save the trained model and load it again to test it with my own images.
Can you please point me to a code snippet to
1. save the trained model, and
2. run inference on an image.
Re #PSKP
I was able to save and load the model. Code is below.
import tensorflow as tf
dog = tf.keras.preprocessing.image.load_img(
    "mowgli.JPG", grayscale=False, color_mode='rgb', target_size=None,
    interpolation='nearest'
)
print(dog.size)
model = tf.keras.models.load_model('dog-cat.h5')
y_hat = model.predict(dog)
print(y_hat)
But I got this error at model.predict(...):
ValueError: Failed to find data adapter that can handle input: <class 'PIL.JpegImagePlugin.JpegImageFile'>, <class 'NoneType'>
Thank you
There are a number of ways of doing this, but I am showing you the easiest way.
Solution
import tensorflow as tf
# Train model
model.fit(...)
# Save Model
model.save("model_name.h5")
# Delete Model
del model
# Load Model
model = tf.keras.models.load_model('model_name.h5')
# Now you can use model for inference
y_hat = model.predict(test_X)
Edit
Solution to ValueError
The problem is that your dog variable is not a NumPy array or a TensorFlow tensor. Before using it, you should convert it into a NumPy array. Also, model.predict(...) does not accept a single image, so you should add one extra dimension.
import numpy as np
import tensorflow as tf

dog = tf.keras.preprocessing.image.load_img(
    "mowgli.JPG", grayscale=False, color_mode='rgb', target_size=None,
    interpolation='nearest'
)
# Convert to a numpy array
dog = np.asarray(dog)
model = tf.keras.models.load_model('dog-cat.h5')
# Add an extra dimension (it depends on your model):
# dog holds only one image, but predict takes a batch of images
dog = np.array([dog])
y_hat = model.predict(dog)
print(y_hat)
Find other solutions here.

Image_classification using resnet50 model with imagenet db with my custom labels

I am working on an image classification problem (multi-class).
I am using the ResNet50 model (https://keras.io/applications/#classify-imagenet-classes-with-resnet50) with the pretrained ImageNet weights in Keras.
I am getting output labels for the images I pass to the model.
But now I have image data and label data of my own dataset.
When I pass the images to the ResNet50 model, it gives back the ImageNet labels it was trained on. I want the output to be my own labels, which are already in my dataset, instead of the ImageNet labels.
How do I fine-tune the ResNet50 model with the pretrained ImageNet weights in Keras so that it outputs my labels?
I have tried the ResNet50 model alone and it works fine, but how do I change the output to my own labels instead of the ImageNet pretrained labels?
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
import os

model = ResNet50(weights='imagenet')
path = '/Users/resnet-sample/'
img_path = os.listdir(path)
count = 0
for i in img_path:
    img = image.load_img(path + i, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    preds = model.predict(x)
    print('Predicted:', decode_predictions(preds, top=1)[0], i)
    count = count + 1
    print(preds)
Example:
I have an elephant image in .jpg format, and I label it as 'elephant' in my dataset.
When I pass this image to the ResNet50 model, which uses the ImageNet pretrained weights, the output I receive is 'African-Elephant' (an ImageNet label).
So instead of getting an ImageNet label as output, I want to tune this so that the label is 'elephant', as in my dataset.
I am not sure how to fine-tune the last layers so that the output is my labels instead of the ImageNet labels.
Please help me with this.
Thanks,
Srknt73
The weights argument should be either None (random initialization), 'imagenet' (pretraining on ImageNet), or the path to a weights file to be loaded. So you can pass the path to a weights file trained on your own dataset.
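If the goal is to get predictions in your own labels, a common approach is to replace the ImageNet classification head with one sized for your dataset and fine-tune it. A minimal sketch (num_classes and the training data are assumptions, not taken from the question):
from keras.applications.resnet50 import ResNet50
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

num_classes = 10  # assumption: the number of labels in your own dataset

# Reuse the ImageNet-pretrained convolutional base and drop the 1000-class head
base = ResNet50(weights='imagenet', include_top=False)
x = GlobalAveragePooling2D()(base.output)
outputs = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base.input, outputs=outputs)

# Freeze the base first so only the new head is trained
for layer in base.layers:
    layer.trainable = False

model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(your_images, your_labels, ...)  # your own labeled data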

Is image needed to rescale before predicting with model that trained with ImageDataGenerator(1./255)?

After training a model with ImageDataGenerator(rescale=1./255), do I need to rescale images before predicting?
I thought it was necessary, but my experiment said no.
I trained a ResNet50 model that has 37 classes in its top layer.
The model was trained with ImageDataGenerator like this:
datagen = ImageDataGenerator(rescale=1./255)
generator = datagen.flow_from_directory(
    directory=os.path.join(os.getcwd(), data_folder),
    target_size=(224, 224),
    batch_size=256,
    classes=None,
    class_mode='categorical')
history = model.fit_generator(generator, steps_per_epoch=generator.n / 256, epochs=10)
Accuracy reached 98% after 10 epochs on my training dataset.
The problem is, when I tried to predict each image in the TRAIN dataset, the prediction was wrong (the result was 33 whatever the input image was):
img_p = './data/pets/shiba_inu/shiba_inu_27.jpg'
img = cv2.imread(img_p, cv2.IMREAD_COLOR)
img = cv2.resize(img, (224,224))
img_arr = np.zeros((1,224,224,3))
img_arr[0, :, :, :] = img / 255.
pred = model.predict(img_arr)
yhat = np.argmax(pred, axis=1)
yhat is 5, but y is 33
When I replace this line
img_arr[0, :, :, :] = img / 255.
by this
img_arr[0, :, :, :] = img
yhat is exactly 33.
Someone might suggest using predict_generator() instead of predict(), but I want to understand what I did wrong here.
I found out what was wrong here.
I was using an ImageNet-pretrained model, which does NOT rescale images by dividing by 255. I have to use resnet50.preprocess_input before train/test.
The preprocess_input function can be found here:
https://github.com/keras-team/keras-applications/blob/master/keras_applications/imagenet_utils.py
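A sketch of prediction code consistent with that preprocessing might look like the following (note that cv2 loads images as BGR while preprocess_input expects RGB input):
import cv2
import numpy as np
from keras.applications.resnet50 import preprocess_input

img = cv2.imread(img_p, cv2.IMREAD_COLOR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # cv2 loads BGR; preprocess_input expects RGB
img = cv2.resize(img, (224, 224))
img_arr = np.expand_dims(img.astype(np.float32), axis=0)
img_arr = preprocess_input(img_arr)          # instead of dividing by 255
pred = model.predict(img_arr)
yhat = np.argmax(pred, axis=1)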
You must apply every preprocessing step you used on the training data to every sample you feed to your trained network. When you rescale the training images and train a network, the network learns to take a matrix with entries between 0 and 1 and find the proper category. If, after the training phase, you feed it an image without rescaling, you present a matrix with entries between 0 and 255 to a network that never learned how to treat such a matrix.
If you are following exactly the same preprocessing as at training time, then you might look at the part of your code where you predict the class using yhat = np.argmax(pred, axis=1). My hunch is that there might be a class mismatch in the indexing. To check how your classes are indexed when you use flow_from_directory, use class_map = generator.class_indices; this returns a dictionary showing how your classes are mapped to indices.
Note: The reason I suggest this is that I've faced a similar problem: Keras's flow_from_directory doesn't sort classes the way you might expect, so it's quite possible that your class '1' lies at index 10, while np.argmax returns the index, not the class name.
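For instance, you can invert that dictionary to map a predicted index back to its class name (a small sketch reusing the generator and yhat from the snippets above; the example mapping is hypothetical):
# Invert flow_from_directory's class-to-index mapping
class_map = generator.class_indices              # e.g. {'shiba_inu': 33, ...}
idx_to_class = {v: k for k, v in class_map.items()}
print(idx_to_class[int(yhat[0])])                # human-readable predicted label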

Keras predict_generator and Image generator

How to use ImageDataGenerator and predict_generator on a single JPEG file in Keras?
I have a single JPEG and I want to predict the probability using a model trained with the model.fit_generator function.
If you just have a single .jpeg, you don't need to use the ImageDataGenerator. In the code below I'm assuming you trained your model with RGB images sized 150px x 150px.
from keras.preprocessing import image
import numpy as np

img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img)
img_tensor = np.expand_dims(img_tensor, axis=0)
img_tensor /= 255.
model.predict(img_tensor)
For more info, check out Francois Chollet's excellent IPython notebooks. Specifically, see cell In [2] of https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.4-visualizing-what-convnets-learn.ipynb
In that section, he looks at the intermediate activation layers for an image that wasn't in his train_generator. He loads a model he created in another IPython notebook: https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.2-using-convnets-with-small-datasets.ipynb

Keras loaded model input change

Do you have any idea of an easy way to modify the input image size of a saved model in Keras? For example, the training input image size is 32x32, but at test time I would like to input the full 180x180 image. The model has been saved and is loaded at test time as follows:
from keras.models import model_from_json

json_file = open('autoencoder64a.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("autoencoder64a.h5")
Many thanks,
Tina
Is this a fully convolutional net? Otherwise you will not be able to reuse it with a different input size, as this would change the number of weights in the non-convolutional layers.
If it is indeed an FCN, you only need to change the first and last lines of the code defining the model:
input_layer = Input((180, 180, channels))  # include the channel dimension your model expects
#All other layers copied here from your old model,
#ending with 'last_layer =...'
new_model = Model(input_layer, last_layer)
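Since convolutional weight shapes do not depend on the input size, you can then copy the trained weights into the rebuilt model (a sketch; full_image is a hypothetical (1, 180, 180, channels) array):
# Works for an FCN: the conv weights have the same shapes regardless of input size
new_model.set_weights(loaded_model.get_weights())
preds = new_model.predict(full_image)  # full_image: hypothetical (1, 180, 180, channels) batch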
