convert yolov5s.pt to openvino IR format success, predict stuff failed - openvino

This began out of curiosity... I downloaded the pretrained yolov5s.pt from a public Google Drive and converted it to a yolov5s.onnx file with input shape [1,3,640,640] using yolov5's models/export.py. Then I used OpenVINO's deployment_tools/mo.py to convert yolov5s.onnx into the OpenVINO Inference Engine files (.xml + .bin). The conversion succeeds without errors. Finally, I tried to run prediction with these files using one of OpenVINO's demo programs, and the prediction does return a result. But all the returned values are negative numbers, and the array dimensions are wrong. Is it impossible to convert the yolov5 weights for OpenVINO?

YOLOv5 is currently not an officially supported topology for the OpenVINO toolkit. Please see the list of validated ONNX and PyTorch topologies here: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX.html
However, we have one recommendation for you to try, though there is no guarantee it will succeed. You can do it by changing export.py so that the Detect layer is included:
yolov5/models/export.py
Line 28 in a1c8406
model.model[-1].export = True # set Detect() layer export=True
The above needs to be changed from True to False. For more detail, you can follow this thread here.
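In context, the edit looks roughly like this (a sketch; the torch.onnx.export call below is an assumption based on typical YOLOv5 export code, not the exact file contents):

# models/export.py, around line 28 -- suggested edit (sketch)
model.model[-1].export = False  # was True; False keeps the Detect() layer in the exported graph
torch.onnx.export(model, img, 'yolov5s.onnx', opset_version=12, input_names=['images'])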

Try this: python mo.py --input_model yolov5s.onnx -s 255 --reverse_input_channels --output Conv_245,Conv_261,Conv_277 (use Netron to check the corresponding output node names in your own architecture).
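If you prefer a script over Netron, the Conv output names can also be read from the ONNX graph directly; a minimal sketch using the onnx package (the file name follows the question):

import onnx

model = onnx.load('yolov5s.onnx')
# Print every convolution node and its output tensor names, so the right
# --output arguments for mo.py can be identified.
for node in model.graph.node:
    if node.op_type == 'Conv':
        print(node.name, node.output)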

Related

In spacy custom trained model : Config Validation error ner -> incorrect_spans_key extra fields not permitted

I am running into this problem whenever I try to load my custom-trained spaCy NER model inside a Docker container.
Note:
I am using the latest spaCy version 3.0 and trained the NER model using spaCy's CLI commands, first converting the training data into the .spacy format.
The error is thrown as follows (you can check the error in the hyperlinked image):
config validation error
My trained model file structure looks like this:
custom ner model structure
But when I run the model without Docker, it works perfectly. What have I done wrong in this process? Please help me resolve the error.
Thank you in advance.
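For reference, the v3 CLI workflow described above usually looks something like this (a sketch; the file names and paths here are hypothetical):

# Convert the training data to the binary .spacy format, then train:
python -m spacy convert train_data.json ./corpus
python -m spacy train config.cfg --output ./output --paths.train ./corpus/train_data.spacy --paths.dev ./corpus/dev_data.spacy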

Operator translate error occurs when I try to convert onnx file to caffe2

I trained an object detection model in PyTorch and have exported it to an ONNX file.
Now I want to convert it to a Caffe2 model:
import numpy as np
import onnx
import caffe2.python.onnx.backend as onnx_caffe2_backend

# Load the ONNX ModelProto object. model is a standard Python protobuf object.
model = onnx.load("CPU4export.onnx")
# Prepare the Caffe2 backend for executing the model. This converts the ONNX
# model into a Caffe2 NetDef that can execute it. Other ONNX backends, like
# one for CNTK, will be available soon.
prepared_backend = onnx_caffe2_backend.prepare(model)
# Run the model in Caffe2.
# Construct a map from input names to Tensor data. The graph of the model
# itself contains inputs for all weight parameters, after the input image.
# Since the weights are already embedded, we only need to pass the input image.
# Set the first input (x and torch_out come from the PyTorch export step that
# produced CPU4export.onnx).
W = {model.graph.input[0].name: x.data.numpy()}
# Run the Caffe2 net:
c2_out = prepared_backend.run(W)[0]
# Verify the numerical correctness up to 3 decimal places.
np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3)
print("Exported model has been executed on Caffe2 backend, and the result looks good!")
I always get this error:
RuntimeError: ONNX conversion failed, encountered 1 errors:
Error while processing node: input: "90"
input: "91"
output: "92"
op_type: "Resize"
attribute {
  name: "mode"
  s: "nearest"
  type: STRING
}
. Exception: Don't know how to translate op Resize
How can I solve it?
The problem is that the Caffe2 ONNX backend does not yet support the Resize operator.
Please raise an issue on the Caffe2 / PyTorch GitHub -- there's an active community of developers who should be able to address this use case.
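One workaround worth trying (an assumption on my part, not something confirmed in this thread): ONNX introduced Resize in opset 10 as a replacement for the older Upsample op, so re-exporting at opset 9 makes PyTorch emit Upsample nodes, which the Caffe2 backend does know how to translate:

import torch

# model and x are the trained detector and its example input from the
# original export script (names assumed from the question).
torch.onnx.export(model, x, "CPU4export_opset9.onnx", opset_version=9)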

tensorflow openvino ssd-mobilenet coco custom dataset error input layer

So, I'm using the TensorFlow SSD-MobileNet V1 COCO model, which I have further trained on my own dataset. But when I try to convert it to OpenVINO IR to run it on a Raspberry Pi with a Movidius chip, I get an error:
➜ utils sudo python3 summarize_graph.py --input_model ssd.pb
WARNING: Logging before flag parsing goes to stderr.
W0722 17:17:05.565755 4678620608 __init__.py:308] Limited tf.compat.v2.summary API due to missing TensorBoard installation.
W0722 17:17:06.696880 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:35: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.
W0722 17:17:06.697348 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:109: The name tf.MetaGraphDef is deprecated. Please use tf.compat.v1.MetaGraphDef instead.
W0722 17:17:06.697680 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:235: The name tf.NodeDef is deprecated. Please use tf.compat.v1.NodeDef instead.
1 input(s) detected:
Name: image_tensor, type: uint8, shape: (-1,-1,-1,3)
7 output(s) detected:
detection_boxes
detection_scores
detection_multiclass_scores
detection_classes
num_detections
raw_detection_boxes
raw_detection_scores
When I try to convert ssd.pb (the frozen model) to OpenVINO IR:
➜ model_optimizer sudo python3 mo_tf.py --input_model ssd.pb
Password:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/ssd.pb
- Path for generated IR: /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/.
- IR output name: ssd
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 2019.1.1-83-g28dfbfd
WARNING: Logging before flag parsing goes to stderr.
E0722 17:24:22.964164 4474824128 infer.py:158] Shape [-1 -1 -1 3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
E0722 17:24:22.964462 4474824128 infer.py:178] Cannot infer shapes or values for node "image_tensor".
E0722 17:24:22.964554 4474824128 infer.py:179] Not all output shapes were inferred or fully defined for node "image_tensor".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
E0722 17:24:22.964632 4474824128 infer.py:180]
E0722 17:24:22.964720 4474824128 infer.py:181] It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x12ab64bf8>.
E0722 17:24:22.964787 4474824128 infer.py:182] Or because the node inputs have incorrect values/shapes.
E0722 17:24:22.964850 4474824128 infer.py:183] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
E0722 17:24:22.965915 4474824128 infer.py:192] Run Model Optimizer with --log_level=DEBUG for more information.
E0722 17:24:22.966033 4474824128 main.py:317] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
How do you think we should fix this?
I updated my OpenVINO to the OpenVINO toolkit R2 2019 release, and using the command below I was able to generate the IR files:
python3 ~/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config pipeline.config -b 1 --data_type FP16 --reverse_input_channels
When you try to convert ssd.pb (your frozen model), you are passing only the input model parameter to the mo_tf.py script. To convert an object detection model to IR, go to the model optimizer directory and run the mo_tf.py script with the following required parameters:
--input_model :
File with a pre-trained model (binary or text .pb file after freezing)
--tensorflow_use_custom_operations_config :
Configuration file that describes rules to convert specific TensorFlow* topologies.
For the models downloaded from the TensorFlow* Object Detection API zoo, you can find the configuration files in the /deployment_tools/model_optimizer/extensions/front/tf directory.
Use ssd_v2_support.json / ssd_support.json for frozen SSD topologies from the models zoo; they are available in the directory mentioned above.
--tensorflow_object_detection_api_pipeline_config :
A special configuration file that describes the topology hyper-parameters and structure of the TensorFlow Object Detection API model.
For the models downloaded from the TensorFlow* Object Detection API zoo, the configuration file is named pipeline.config.
If you plan to train a model yourself, you can find templates for these files in the models repository
--input_shape (optional):
A custom input image shape; pass these values based on the pretrained model you used.
The model takes an input image in the format [N H W C], where the parameters refer to batch size, height, width, and channels respectively.
The Model Optimizer does not accept negative values for the batch, height, width, or channel number.
So you need to pass a valid set of 4 positive numbers via the --input_shape parameter if the input image dimensions of the model (SSD MobileNet) are known in advance.
If they are not known, you do not need to pass an input shape.
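For example, for a 300x300 SSD-MobileNet input the flag would look like this (an illustration; the actual training resolution comes from your pipeline.config):
--input_shape [1,300,300,3]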
An example mo_tf.py command using the SSD-MobileNet-v2-COCO model, downloaded with the Model Downloader that comes with OpenVINO, is shown below.
python mo_tf.py ^
  --input_model "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.frozen.pb" ^
  --tensorflow_use_custom_operations_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" ^
  --tensorflow_object_detection_api_pipeline_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.config" ^
  --data_type FP16 ^
  --log_level DEBUG
For more details, refer to the link https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html
Hope it helps.
For conversion of MobileNet V2 SSD, add "Postprocessor/Cast_1" to the "start_points" list in the original ssd_v2_support.json, so the relevant section looks like this:
"instances": {
"end_points": [
"detection_boxes",
"detection_scores",
"num_detections"
],
"start_points": [
"Postprocessor/Shape",
"Postprocessor/scale_logits",
"Postprocessor/Tile",
"Postprocessor/Reshape_1",
"Postprocessor/Cast_1"
]
},
Then use the following command (this snippet is meant for a Jupyter notebook; the ! line shells out to the Model Optimizer):
#### object detection conversion
import platform
is_win = 'windows' in platform.platform().lower()
mo_tf_path = '/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py'
json_file = '/opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json'
pb_file = 'model/frozen_inference_graph.pb'
pipeline_file = 'model/pipeline.config'
output_dir = 'output/'
img_height = 300
input_shape = [1,img_height,img_height,3]
input_shape_str = str(input_shape).replace(' ','')
input_shape_str
!python3 {mo_tf_path} --input_model {pb_file} --tensorflow_object_detection_api_pipeline_config {pipeline_file} --tensorflow_use_custom_operations_config {json_file} --output="detection_boxes,detection_scores,num_detections" --output_dir {output_dir} --reverse_input_channels --data_type FP16 --log_level DEBUG

Remove graph on inference

Unable to convert a TensorFlow model to OpenVINO IR format.
I followed the documentation.
In case I wish to fix the graph using Point 97 in the MO_FAQ.html document in the OpenVINO documentation docs, which nodes do I include in the first command?
python3 mo.py --input_model model/frozen_inference_graph.pb --tensorflow_subgraph_pattern "FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/FusedBatchNorm, FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise, FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/BatchNorm/FusedBatchNorm, FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/Relu6, ..."
Which of these nodes do I include to offload a sub-graph of operations?
(The actual .pbtxt file has about 100 nodes.)
Which model are you trying to convert? I'm not sure why you would need the --tensorflow_subgraph_pattern parameter to convert MobileNet; it is supported out of the box.

TensorFlow 0.12 tutorials produce warning: "Rank of input Tensor should be the same as output_rank for column"

I have some experience writing machine learning programs in Python, but I'm new to TensorFlow and am checking it out. My dev environment is a Lubuntu 14.04 64-bit virtual machine. I've created a Python 3.5 conda environment with Miniconda and installed TensorFlow 0.12 and its dependencies. I began running some example code from TensorFlow's tutorials and encountered this warning when calling fit() in the boston.py example for input functions (source):
WARNING:tensorflow:Rank of input Tensor (1) should be the same as
output_rank (2) for column. Will attempt to expand dims. It is highly
recommended that you resize your input, as this behavior may change.
After some searching on Google, I found other people who encountered this same warning:
https://github.com/tensorflow/tensorflow/issues/6184
https://github.com/tensorflow/tensorflow/issues/5098
Tensorflow - Boston Housing Data Tutorial Errors
However, they also experienced errors that prevented code execution from completing. In my case, the code executes with the above warning. Unfortunately, I couldn't find a single answer in those links about what caused the warning or how to fix it; they all focused on the error. How does one remove the warning? Or is the warning safe to ignore?
Cheers!
As extra info, I also see the following warnings when running the aforementioned boston.py example.
WARNING:tensorflow:*******************************************************
WARNING:tensorflow:TensorFlow's V1 checkpoint format has been deprecated.
WARNING:tensorflow:Consider switching to the more efficient V2 format:
WARNING:tensorflow:   'tf.train.Saver(write_version=tf.train.SaverDef.V2)'
WARNING:tensorflow:now on by default.
WARNING:tensorflow:*******************************************************
and
WARNING:tensorflow:From /home/kade/miniconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py:1053 in predict.: calling BaseEstimator.predict (from tensorflow.contrib.learn.python.learn.estimators.estimator) with x is deprecated and will be removed after 2016-12-01.
Instructions for updating: Estimator is decoupled from Scikit Learn interface by moving into separate class SKCompat. Arguments x, y and batch_size are only available in the SKCompat class, Estimator will only accept input_fn.
Example conversion: est = Estimator(...) -> est = SKCompat(Estimator(...))
UPDATE (2016-12-22):
I've tracked the warning to this file:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column_ops.py
and this code block:
except NotImplementedError:
  with variable_scope.variable_scope(
      None,
      default_name=column.name,
      values=columns_to_tensors.values()):
    tensor = column._to_dense_tensor(transformed_tensor)
    tensor = fc._reshape_real_valued_tensor(tensor, 2, column.name)
    variable = [
        contrib_variables.model_variable(
            name='weight',
            shape=[tensor.get_shape()[1], num_outputs],
            initializer=init_ops.zeros_initializer(),
            trainable=trainable,
            collections=weight_collections)
    ]
    predictions = math_ops.matmul(tensor, variable[0], name='matmul')
Note the line: tensor = fc._reshape_real_valued_tensor(tensor, 2, column.name)
The method signature is: _reshape_real_valued_tensor(input_tensor, output_rank, column_name=None)
The value 2 is hard-coded as output_rank, but the boston.py example passes in an input_tensor of rank 1. I will continue to investigate.
If you specify the shape of your tensor explicitly:
tf.constant(df[k].values, shape=[df[k].size, 1])
the warning should go away.
After I specified the shape of the tensor explicitly:
continuous_cols = {k: tf.constant(df[k].values, shape=[df[k].size, 1]) for k in CONTINUOUS_COLUMNS}
it works!
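Put together, the change inside the tutorial's input_fn looks roughly like this (a sketch; CONTINUOUS_COLUMNS, LABEL, and df follow the tutorial's naming):

import tensorflow as tf

def input_fn(df):
    # Give each continuous feature an explicit rank-2 shape [n, 1] so it
    # matches the output_rank of 2 expected by the feature-column code.
    continuous_cols = {k: tf.constant(df[k].values, shape=[df[k].size, 1])
                       for k in CONTINUOUS_COLUMNS}
    labels = tf.constant(df[LABEL].values)
    return continuous_cols, labels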
