ScatterND Plugin not found while converting onnx into tensorrt model - onnx

I was able to convert the PyTorch model to ONNX and validated the ONNX output against PyTorch; it works fine.
But when I try to convert the ONNX model into a TensorRT engine, it fails at the ONNX parser and returns this error:
[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin ScatterND version 1
ONNX version: 1.6.0
TensorRT version: 7.1.3.4
Can anyone please help here: how can I add the ScatterND plugin in Python, and how do I use it?
Is there any other way to solve this problem?
Thanks in advance.
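For reference, a minimal sketch of how plugins are typically made visible to the ONNX parser from Python before parsing. It assumes the ScatterND plugin has been compiled into a shared library (for example, one built from the TensorRT OSS plugin sources); the library name below is only a placeholder.
import ctypes
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Placeholder path: a library containing the ScatterND plugin creator,
# e.g. one built from the TensorRT OSS plugin sources.
ctypes.CDLL("libnvinfer_plugin.so", mode=ctypes.RTLD_GLOBAL)

# Register every plugin creator in the loaded libraries with TensorRT's
# plugin registry so getPluginCreator can find ScatterND during parsing.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))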

Related

Setting up onnx to parse an onnx graph in C++

I'm trying to load an ONNX file and print all the tensor dimensions in the graph (which requires performing shape inference). I can do this in Python just by importing onnx and using onnx.shape_inference. Is there any documentation on setting up onnx to use it in a C++ program?
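For comparison, the Python route mentioned above looks roughly like this (a sketch; "model.onnx" is a placeholder filename):
import onnx
from onnx import shape_inference

model = onnx.load("model.onnx")
inferred = shape_inference.infer_shapes(model)

# Print the inferred dimensions of each intermediate tensor in the graph.
for vi in inferred.graph.value_info:
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in vi.type.tensor_type.shape.dim]
    print(vi.name, dims)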
If anyone is looking for a solution, I've created a project here which uses libonnx.so to perform shape inference.

ONNX runtime is throwing TypeError when loading an onnx model

I have converted a SavedModel to an ONNX model, but when loading it via onnxruntime
import onnxruntime as rt
sess = rt.InferenceSession('model.onnx')
it throws the error below:
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from /mnt/model/io_files/convert/1606801475/model.onnx failed:This is an invalid model. Type Error: Type 'tensor(float)' of input parameter (const_fold_opt__342) of operator (Slice) in node (StatefulPartitionedCall/mobilenet_1.00_224/reshape_1/strided_slice) is invalid.
The SavedModel I used is the Keras pretrained MobileNet from the TensorFlow website: https://www.tensorflow.org/guide/saved_model.
In Netron I can see that the parameter is a float, but I am unable to understand or address this issue.
Below is the snip from netron:
This looks like a bug in the Keras->ONNX converter.
The starts input of Slice must be int32 or int64: https://github.com/onnx/onnx/blob/master/docs/Operators.md#Slice
You can try to patch the model using the onnx Python interface: load the model, find the node, and change the input type. But if the model has this issue, the Keras->ONNX converter is probably not very well tested, and there are likely other issues.
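A minimal sketch of that patching approach, assuming the offending tensor is the initializer named in the error message and simply needs to be cast to int64 (verify against your own model in Netron):
import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("model.onnx")

# Replace the float initializer feeding Slice (name taken from the error
# message above) with an int64 copy, since Slice indices must be int32/int64.
for init in list(model.graph.initializer):
    if init.name == "const_fold_opt__342":
        values = numpy_helper.to_array(init).astype(np.int64)
        model.graph.initializer.remove(init)
        model.graph.initializer.extend(
            [numpy_helper.from_array(values, name=init.name)])
        break

onnx.checker.check_model(model)
onnx.save(model, "model_patched.onnx")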
Can you find an equivalent PyTorch model? The PyTorch->ONNX converter should be much better.

Pytorch: Convert 2D-CNN model to tflite

I'd like to convert a model (e.g. MobileNet V2) from PyTorch to tflite in order to run it on a mobile device.
Has anyone managed to do so?
All I found was a method that uses ONNX to convert the model into an intermediate state. However, this seems not to work properly, as TensorFlow expects NHWC channel order whereas ONNX and PyTorch work with NCHW channel order.
There is a discussion on GitHub; however, in my case the conversion worked without complaints up to a frozen TensorFlow graph model. After trying to convert the model further to tflite, it complains about the channel order being wrong...
Here is my code so far:
import torch
import torch.onnx
import onnx
from onnx_tf.backend import prepare
# Create random input
input_data = torch.randn(1, 3, 224, 224)
# Create network
model = torch.hub.load('pytorch/vision:v0.6.0', 'mobilenet_v2', pretrained=True)
model.eval()
# Forward Pass
output = model(input_data)
# Export model to onnx
filename_onnx = "mobilenet_v2.onnx"
filename_tf = "mobilenet_v2.pb"
torch.onnx.export(model, input_data, filename_onnx)
# Export model to tensorflow
onnx_model = onnx.load(filename_onnx)
tf_rep = prepare(onnx_model)
tf_rep.export_graph(filename_tf)
Everything works without errors up to this point (ignoring many TF warnings). Then I look up the names of the input and output tensors using Netron ("input.1" and "473").
Finally, I apply my usual TF-graph-to-tflite conversion script from bash:
tflite_convert \
--output_file=mobilenet_v2.tflite \
--graph_def_file=mobilenet_v2.pb \
--input_arrays=input.1 \
--output_arrays=473
My configuration:
torch 1.6.0.dev20200508 (needs pytorch-nightly to work with mobilenet V2 from torch.hub)
tensorflow-gpu 1.14.0
onnx 1.6.0
onnx-tf 1.5.0
Here is the exact error message I'm getting from tflite:
Unexpected value for attribute 'data_format'. Expected 'NHWC'
Fatal Python error: Aborted
UPDATE:
Updating my configuration:
torch 1.6.0.dev20200508
tensorflow-gpu 2.2.0
onnx 1.7.0
onnx-tf 1.5.0
using
tflite_convert \
--output_file=mobilenet_v2.tflite \
--graph_def_file=mobilenet_v2.pb \
--input_arrays=input.1 \
--output_arrays=473 \
--enable_v1_converter # <-- needed for conversion of frozen graphs
leading to another error:
Exception: <unknown>:0: error: loc("convolution"): 'tf.Conv2D' op is neither a custom op nor a flex op
Update:
Here is an onnx model of mobilenet v2 loaded via netron:
Here is a gdrive link to my converted onnx and pb file
@Ahwar posted a nice solution to this using a Google Colab notebook.
It uses
torch 1.5.0+cu101
torchsummary 1.5.1
torchtext 0.3.1
torchvision 0.6.0+cu101
tensorflow 1.15.2
tensorflow-addons 0.8.3
tensorflow-estimator 1.15.1
onnx 1.7.0
onnx-tf 1.5.0
The conversion works and the model can be tested on my computer. However, when pushing the model to the mobile phone, it only works in CPU mode and is much slower (almost 10-fold) than a corresponding model created directly in TensorFlow. GPU mode does not work on my mobile phone (in contrast to the corresponding model created directly in TensorFlow).
Update:
Apparently, after converting the MobileNet V2 model, the TensorFlow frozen graph contains many more convolution operations than the original PyTorch model (~38,000 vs. ~180), as discussed in this GitHub issue.
https://github.com/alibaba/TinyNeuralNetwork
You can try this project to convert the PyTorch model to tflite. It supports all models in torchvision and can eliminate redundant operators, essentially without performance loss.
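A rough sketch of the usage pattern from the project's README follows; the exact class name, import path, and arguments are assumptions, so check the repository for the current API.
import torch
from tinynn.converter import TFLiteConverter  # import path as documented in the repo

model = torch.hub.load('pytorch/vision:v0.6.0', 'mobilenet_v2', pretrained=True)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)

# The converter traces the PyTorch model and writes a TFLite flatbuffer directly,
# handling the NCHW -> NHWC layout conversion internally.
with torch.no_grad():
    converter = TFLiteConverter(model, dummy_input, tflite_path='mobilenet_v2.tflite')
    converter.convert()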

After upgrading to TensorFlow 2.1, I got "RuntimeError: tf.placeholder() is not compatible with eager execution."

I'm new here. Recently I have been learning CNNs with TensorFlow and Keras, and I am trying to run a CNN model to train on the MNIST dataset, but after I upgraded from TensorFlow 2.0 to 2.1, I got this error message:
raise RuntimeError("tf.placeholder() is not compatible with "
RuntimeError: tf.placeholder() is not compatible with eager execution.
I tried this code:
tf.compat.v1.disable_eager_execution()
and then the following:
## Build the input layer
with tf.compat.v1.name_scope('Input_Layer'):
    x = tf.compat.v1.placeholder("float", shape=[None, 784], name="x")
    x_image = tf.compat.v1.reshape(x, [-1, 28, 28, 1])
and below that is the CNN model. With this I can run the model successfully, but I still want to understand why...
(Before I upgraded to 2.1 I could run the model, but now I need that extra line...)
Could someone help me figure this out? Thanks.
TensorFlow 2.x enables eager execution by default, and tf.placeholder() only works when building a static (v1-style) graph, so you need to switch eager execution off before constructing the graph. Use this line in your code:
tf.compat.v1.disable_eager_execution()
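For illustration, a minimal, self-contained sketch of the resulting graph-mode pattern (the dummy batch and the print are only there to show how the placeholder is fed):
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # must run before the graph is built

with tf.compat.v1.name_scope('Input_Layer'):
    x = tf.compat.v1.placeholder("float", shape=[None, 784], name="x")
    x_image = tf.compat.v1.reshape(x, [-1, 28, 28, 1])

# In graph mode, placeholders only receive data when the graph is executed
# inside a session via feed_dict.
with tf.compat.v1.Session() as sess:
    batch = np.zeros((2, 784), dtype=np.float32)  # dummy batch for illustration
    print(sess.run(x_image, feed_dict={x: batch}).shape)  # (2, 28, 28, 1)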

CoreML - Cannot perform conversion from Keras to CoreML - Windows 10

I recently upgraded my Keras from version 1.1.0 to 1.2.2, and I ran a CNN for hand gesture classification (the code was developed using Keras 1.1.0). I saved the trained model and tried to convert it to a CoreML model using coremltools. The code is shown below:
import coremltools
import theano
from keras import backend as K
K.set_image_dim_ordering('th')
coreml_model = coremltools.converters.keras.convert('hgm_2.h5')
coreml_model.save('hgm.mlmodel')
But it gave me the following error:
RuntimeError: keras not found or unsupported version or backend found. keras conversion API is disabled.
How can I fix this issue? I tried upgrading theano, but it gave the same error.
I received a reply here. Currently the conversion is not supported for the Theano backend.
