Finding input and output tensors from a .pb file - python-3.x

I created a .pb model thanks to roboflow.ai, and I'm now trying to convert the .pb file into .tflite so I can use it in an Android app I'm hoping to develop. I'm struggling to do the conversion because I have to supply my 'input' and 'output' tensors.
I found a script that gave me my input tensor as 'image_tensor', but it gives my output tensors as:
'Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Switch',
'raw_detection_boxes',
'MultipleGridAnchorGenerator/assert_equal_1/Assert/Assert',
'detection_boxes',
'detection_scores',
'Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/SortByField/TopKV2',
'detection_multiclass_scores',
'Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/SortByField/Assert/Assert',
'Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Switch_1',
'Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/SortByField_1/Assert/Assert',
'detection_classes',
'num_detections',
'Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/SortByField_1/TopKV2',
'Preprocessor/map/while/Switch_1',
'Preprocessor/map/while/Switch',
'raw_detection_scores'
I've tried all of these, and different combinations of them, but I'm unsure which I should be using (or whether this is even the right approach).
I'm trying to put this into the following code:
import tensorflow as tf

localpb = 'retrained_graph_eyes1za.pb'
tflite_file = 'retrained_graph_eyes1za.lite'
print("{} -> {}".format(localpb, tflite_file))

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    localpb,
    ['input'],
    ['final_result']
)
tflite_model = converter.convert()
open(tflite_file, 'wb').write(tflite_model)

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
I'm using TensorFlow 1.x, as this is what roboflow.ai recommends.
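For example, one of the combinations I tried, using the human-readable detection outputs from the list above, looked roughly like this (the input shape is a guess on my part, and whether these are the right arrays is exactly what I'm unsure about):
import tensorflow as tf

localpb = 'retrained_graph_eyes1za.pb'
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    localpb,
    input_arrays=['image_tensor'],
    output_arrays=['detection_boxes', 'detection_scores',
                   'detection_classes', 'num_detections'],
    input_shapes={'image_tensor': [1, 300, 300, 3]}  # guessed input size
)
tflite_model = converter.convert()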
Any help?

Related

Implementing chamfer distance as loss function - Keras symbolic inputs/outputs do not implement __len__ error

I have a VAE to encode and decode a point cloud of dimension (162,3), and I have 100K such samples. Below is my network architecture. To save space, I haven't written my decoder architecture, as it is the transpose of the encoder architecture.
import tensorflow
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, LeakyReLU, Dense
from tensorflow.keras.models import Model

input_image = Input(shape=(162,3,1))

################ ENCODER #################
encoder_conv_layer1 = Conv2D(32,(1,3))(input_image)
encoder_norm_layer1 = BatchNormalization()(encoder_conv_layer1)
encoder_activ_layer1 = LeakyReLU()(encoder_norm_layer1)

encoder_conv_layer2 = Conv2D(64,(1,1))(encoder_activ_layer1)
encoder_norm_layer2 = BatchNormalization()(encoder_conv_layer2)
encoder_activ_layer2 = LeakyReLU()(encoder_norm_layer2)

encoder_conv_layer3 = Conv2D(128,(1,1))(encoder_activ_layer2)
encoder_norm_layer3 = BatchNormalization()(encoder_conv_layer3)
encoder_activ_layer3 = LeakyReLU()(encoder_norm_layer3)

shape_before_flatten = tensorflow.keras.backend.int_shape(encoder_activ_layer3)[1:]
encoder_flatten = tensorflow.keras.layers.Flatten()(encoder_activ_layer3)
encoder_output = Dense(7)(encoder_flatten)
encoder = Model(input_image, encoder_output)

################ DECODER #################
##### Conv2DTranspose layers to reconstruct the input.
................
................
decoder = Model(decoder_input, decoder_output)

vae_encoder_output = encoder(input_image)
vae_decoder_output = decoder(vae_encoder_output)
vae = Model(input_image, vae_decoder_output)
For the loss function, I want to use the chamfer distance, leveraging the implementation from tensorflow_graphics, which takes two point cloud tensors and outputs a value. I am running this code in eager execution mode.
input_image - size - (None,162,3,1)
vae_decoder_output - size - (None,162,3,1)
input_image_tf = tf.squeeze(input_image,axis=3)
vae_decoder_output_tf = tf.squeeze(vae_decoder_output,axis=3)
This is how I am coding the loss function.
ch_loss = tfg.nn.loss.chamfer_distance.evaluate(input_image_tf,vae_decoder_output_tf)
When I compile the model:
vae.compile(optimizer=optimizer, loss=ch_loss)
I get the following error: "Keras symbolic inputs/outputs do not implement __len__". I understand I am getting this error because input_image and vae_decoder_output are both KerasTensors and I am passing them to a TF API, hence the error.
I have the following questions:
How can I get past this error and make chamfer_distance part of my loss function? FYI, I am able to run this code with the off-the-shelf 'binary_crossentropy' loss function. I also tried importing a TF-based chamfer distance as a function and passing the tensors to it, and it gave me the same KerasTensor error.
How can I get rid of the KerasTensor error?
Am I thinking about this correctly?
Environment: TF-GPU 2.10.0, Python 3.7.3
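For clarity, what I am aiming for is presumably the usual loss-callable pattern, i.e. a function that Keras calls with (y_true, y_pred) rather than a tensor built up front from the symbolic model outputs. A rough sketch of that (the name chamfer_loss is mine, and I have not verified it works):
import tensorflow as tf
from tensorflow_graphics.nn.loss import chamfer_distance

def chamfer_loss(y_true, y_pred):
    # drop the trailing channel dimension: (None, 162, 3, 1) -> (None, 162, 3)
    return chamfer_distance.evaluate(tf.squeeze(y_true, axis=3),
                                     tf.squeeze(y_pred, axis=3))

vae.compile(optimizer=optimizer, loss=chamfer_loss)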

Can't get Keras Code Example #1 to work with multi-label dataset

Apologies in advance.
I am attempting to recreate this CNN (from the Keras Code Examples), with another dataset.
https://keras.io/examples/vision/image_classification_from_scratch/
The dataset I am using is one for retinal scans, and classifies images on a scale from 0-4. So, it's a multi-label image classification.
The Keras example used is binary classification (cats v dogs), though I would have hoped it wouldn't make much difference (maybe this is a big assumption on my part).
I skipped the 'image augmentation' part of the walkthrough. So, I have not created the
data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
    ]
)
part. So, instead of:
def make_model(input_shape, num_classes):
    inputs = keras.Input(shape=input_shape)

    # Image augmentation block
    x = data_augmentation(inputs)

    # Entry block
    x = layers.Rescaling(1.0 / 255)(x)
    .......
at the beginning of the model, I have:
def make_model(input_shape, num_classes):
    inputs = keras.Input(shape=input_shape)

    # Image augmentation block
    x = keras.Sequential(inputs)

    # Entry block
    x = layers.Rescaling(1.0 / 255)(x)
    .......
However, I keep getting different errors no matter how much I try to change things around, such as "TypeError: Keras symbolic inputs/outputs do not implement __len__." or "ValueError: Exception encountered when calling layer "rescaling_3" (type Rescaling).".
What am I missing here?
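For reference, my understanding is that with augmentation skipped, inputs should feed straight into Rescaling. A minimal stand-in model along those lines (the convolutional body here is just filler so the sketch runs; it is not the example's actual architecture):
from tensorflow import keras
from tensorflow.keras import layers

def make_model(input_shape, num_classes):
    inputs = keras.Input(shape=input_shape)
    # Entry block: no augmentation, inputs go straight into Rescaling
    x = layers.Rescaling(1.0 / 255)(inputs)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    # 0-4 retinal grades -> 5 units with softmax
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)

model = make_model(input_shape=(180, 180, 3), num_classes=5)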

Using Keras like TensorFlow for gpu computing

I would like to know if Keras can be used as an interface to TensorFlow for only doing computation on my GPU.
I tested TF directly on my GPU. But for ML purposes, I started using Keras, including the backend. I would find it 'comfortable' to do all my stuff in Keras instead of using two tools.
This is also a matter of curiosity.
I found some examples like this one:
http://christopher5106.github.io/deep/learning/2018/10/28/understand-batch-matrix-multiplication.html
However this example does not actually do the calculation.
It also does not get input data.
I duplicate the snippet here:
from keras import backend as K

a = K.ones((3,4))
b = K.ones((4,5))
c = K.dot(a, b)
print(c.shape)
I would simply like to know if I can get the result numbers from this snippet above, and how?
Thanks,
Michel
Keras doesn't have an eager mode like Tensorflow, and it depends on models or functions with "placeholders" to receive and output data.
So, it's a little more complicated than Tensorflow to do basic calculations like this.
So, the most user-friendly solution would be creating a dummy model with one Lambda layer. (And be careful with the first dimension, which Keras will insist on treating as a batch dimension, requiring that input and output have the same batch size.)
def your_function_here(inputs):
    # if you have more than one tensor for the inputs, it's a list:
    input1, input2, input3 = inputs

    # if you don't have a batch, you should probably have a first dimension = 1 and get:
    input1 = input1[0]

    # do your calculations here

    # if you used the batch_size=1 workaround as above, add this dimension again:
    output = K.expand_dims(output, 0)
    return output
Create your model:
inputs = Input(input_shape)
#maybe inputs2 ....
outputs = Lambda(your_function_here)(list_of_inputs)
#maybe outputs2
model = Model(inputs, outputs)
And use it to predict the result:
print(model.predict(input_data))
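Putting the pieces together for the original matrix-multiplication snippet, an end-to-end version might look like this (a sketch using standalone Keras with the batch-size-1 workaround described above):
import numpy as np
from keras import backend as K
from keras.layers import Input, Lambda
from keras.models import Model

def matmul_fn(inputs):
    a, b = inputs               # two input tensors arrive as a list
    a = a[0]                    # drop the dummy batch dimension
    b = b[0]
    c = K.dot(a, b)             # (3,4) x (4,5) -> (3,5)
    return K.expand_dims(c, 0)  # add the batch dimension back

in_a = Input(shape=(3, 4))
in_b = Input(shape=(4, 5))
out = Lambda(matmul_fn)([in_a, in_b])
model = Model([in_a, in_b], out)

# feed concrete numbers (a batch of 1) and read back the result
print(model.predict([np.ones((1, 3, 4)), np.ones((1, 4, 5))]))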

Way to print invalid filenames for generator in keras

I have a big dataframe that I pass to a
generator.flow_from_dataframe(df,...)
but when I run it, I get:
UserWarning: Found 52 invalid image filename(s) in x_col="image". These filename(s) will be ignored.
.format(n_invalid, x_col)
Is there a way to print these invalid image filenames, or to find their indexes in the df?
I had a similar error and bypassed it by using the flag validate_filenames=False in the flow_from_dataframe method.
Currently, AFAIK, there is no way in Keras to list the file names from the data-frame that do not map to the image directory.
You can write your own Python method to list the differences (see the sketch below), or there may be a 3rd-party library that does the same.
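For example, a home-made check could look like this (assuming a pandas dataframe with an 'image' column and images stored under some train_dir; both names are placeholders for your own):
import os

full_paths = df['image'].apply(lambda name: os.path.join(train_dir, name))
invalid_mask = ~full_paths.apply(os.path.isfile)

print(df.index[invalid_mask].tolist())         # indexes of the invalid rows
print(df.loc[invalid_mask, 'image'].tolist())  # the offending filenames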
The issue can be fixed by specifying the absolute path for images within the dataframe.
Assuming :
The dataframe df contains two columns - Image(X) & Class(Y)
Images are stored in train_dir
(Image dataframe structure)
# Specifying absolute path for images in the data frame
abs_file_names = []
for file_name in df['Image']:
    tmp = os.path.abspath(train_dir + os.sep + file_name)
    abs_file_names.append(tmp)

# update dataframe
df['Image'] = abs_file_names

datagen = ImageDataGenerator(rescale=1./255., validation_split=0.25)
a = datagen.flow_from_dataframe(
    dataframe=df,
    directory=None,          # paths in the dataframe are already absolute
    x_col="Image",
    y_col="Class",
    weight_col=None,
    target_size=(150, 150),
    color_mode="rgb",
    classes=None,
    class_mode="categorical",
    batch_size=32,
    shuffle=True,
    seed=None,
    save_to_dir=None,
    save_prefix="",
    save_format=None,
    subset=None,
    interpolation="nearest",
    validate_filenames=True,
)
Useful resources for more info :
https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator#flow_from_dataframe
Keras flowFromDirectory get file names as they are being generated
https://github.com/keras-team/keras-preprocessing/blob/6701f27afa62712b34a17d4b0ff879156b0c7937/keras_preprocessing/image/dataframe_iterator.py#L267
https://github.com/keras-team/keras-preprocessing/issues/92
df['image'] = df['image']+'.jpg'
Use the above code to append the extension to the image names, turning them into filenames, before passing "image" to the x_col parameter of the flow_from_dataframe function. This is what worked for me.

Unable to use trained TensorFlow model

I am new to Deep Learning and TensorFlow. I retrained a pretrained TensorFlow InceptionV3 model as saved_model.pb to recognize different types of images, but when I try to use the file with the code below:
with tf.Session() as sess:
    with tf.gfile.FastGFile("tensorflow/trained/saved_model.pb", 'rb') as f:
        graph_def = tf.GraphDef()
        tf.Graph.as_graph_def()
        graph_def.ParseFromString(f.read())
        g_in = tf.import_graph_def(graph_def)
    LOGDIR = '/log'
    train_writer = tf.summary.FileWriter(LOGDIR)
    train_writer.add_graph(sess.graph)
it gives me this error -
File "testing.py", line 7, in <module>
graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message
I tried many solutions I could find for this problem, and the modules in tensorflow/python/tools which use the graph_def.ParseFromString(f.read()) function give me the same error. Please tell me how to solve this, or a way to avoid the ParseFromString(f.read()) function. Any help would be appreciated. Thank you!
Please use the frozen_inference_graph.pb to load the model, rather than the saved_model.pb (see the sketch after the listing below).
Model_output
- saved_model
- saved_model.pb
- checkpoint
- frozen_inference_graph.pb # Main model
- model.ckpt.data-00000-of-00001
- model.ckpt.index
- model.ckpt.meta
- pipeline.config
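In other words, the original snippet should work once it points at the frozen graph instead of saved_model.pb. A minimal sketch (the path assumes the layout above):
import tensorflow as tf  # TF 1.x

with tf.Session() as sess:
    # frozen_inference_graph.pb is a plain serialized GraphDef,
    # so graph_def.ParseFromString can read it directly
    with tf.gfile.FastGFile("Model_output/frozen_inference_graph.pb", 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def)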
I am assuming that you saved your trained model using tf.saved_model.Builder provided by TensorFlow, in which case you could possibly do something like:
Load model
export_path = './path/to/export_dir'  # the export directory that contains saved_model.pb

# We start a session using a temporary fresh Graph
with tf.Session(graph=tf.Graph()) as sess:
    '''
    You can provide 'tags' when saving a model,
    in my case I provided, 'serve' tag
    '''
    tf.saved_model.loader.load(sess, ['serve'], export_path)
    graph = tf.get_default_graph()

    # print your graph's ops, if needed
    print(graph.get_operations())

    '''
    In my case, I named my input and output tensors as
    input:0 and output:0 respectively
    '''
    y_pred = sess.run('output:0', feed_dict={'input:0': X_test})
To give some more context here, this is how I saved my model which can be loaded as above.
Save model
x = tf.get_default_graph().get_tensor_by_name('input:0')
y = tf.get_default_graph().get_tensor_by_name('output:0')

export_path = './models/'
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
signature = tf.saved_model.predict_signature_def(
    inputs={'input': x}, outputs={'output': y}
)

# using custom tag instead of: tags=[tf.saved_model.tag_constants.SERVING]
builder.add_meta_graph_and_variables(sess=obj.sess,
                                     tags=['serve'],
                                     signature_def_map={'predict': signature})
builder.save()
This will save your protobuf ('saved_model.pb') in the said folder ('models' here) which can then be loaded as stated above.
Have you passed as_text=False when saving a model? Please have a look at: TF save/restore graph fails at tf.GraphDef.ParseFromString()
