Convert posenet model with OpenVINO toolkit? - openvino

I am trying to convert the posenet model (MobileNetV1 architecture) using the OpenVINO Model Optimizer, but it throws the error below.
I am using the following command to convert:
python3 mo_tf.py --input_model /vino/models/posenet/_models/model-mobilenet_v1_101.pb --input_meta_graph /vino/models/posenet/_models/checkpoints/model-mobilenet_v1_101.ckpt.meta --output_dir /vino/models/posenet/
Error I am receiving is:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /vino/models/posenet/_models/model-mobilenet_v1_101.pb
- Path for generated IR: /vino/models/posenet/
- IR output name: model-mobilenet_v1_101
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 2019.1.0-341-gc9b66a2
[ ERROR ] Unknown configuration of input model parameters
I suspect that I am using the wrong command altogether, but I couldn't figure it out even after going through the documentation. Any leads on what I am doing wrong would be really helpful.

Try providing just --input_meta_graph, without --input_model.
https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html
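For example, a command along these lines (reusing the checkpoint path from the question; untested, so treat it as a sketch) would let the Model Optimizer load the graph from the .meta file alone:
python3 mo_tf.py --input_meta_graph /vino/models/posenet/_models/checkpoints/model-mobilenet_v1_101.ckpt.meta --output_dir /vino/models/posenet/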

Related

Load pytorch model with correct args from files

Having followed Chris McCormick's tutorial for creating a BERT Fake News Detector (link here), at the end he saves the PyTorch model using the following code:
import os

output_dir = './model_save/'
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
model_to_save = model.module if hasattr(model, 'module') else model
model_to_save.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
As he says himself, it can be reloaded using from_pretrained(). Currently, what the code does is create an output directory with 6 files:
config.json
merges.txt
pytorch_model.bin
special_tokens_map.json
tokenizer_config.json
vocab.json
So how can I use the from_pretrained() method to load the model with all of its arguments and respective weights, and which files do I use from the six?
I understand that a model can be loaded as such (from PyTorch documentation):
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
but how can I make use of the files in the output directory to do this?
Any help is appreciated!
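For what it's worth, a minimal loading sketch, assuming the Hugging Face transformers Auto classes (the exact model class should match whatever the tutorial trained):
from transformers import AutoModelForSequenceClassification, AutoTokenizer

output_dir = './model_save/'
# from_pretrained() accepts the save directory itself: config.json + pytorch_model.bin
# rebuild the model, and the tokenizer files rebuild the tokenizer.
model = AutoModelForSequenceClassification.from_pretrained(output_dir)
tokenizer = AutoTokenizer.from_pretrained(output_dir)
model.eval()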

Converting yolov5s.pt to OpenVINO IR format succeeds, but prediction fails

This began from curiosity. I downloaded the pretrained yolov5s.pt from a public Google Drive and converted it to a yolov5s.onnx file with input shape [1,3,640,640] using yolov5's models/export.py. Then I used OpenVINO's deployment_tools/mo.py to convert yolov5s.onnx into OpenVINO Inference Engine files (.xml + .bin). The conversion succeeds without error. Finally, I run prediction with these files using OpenVINO's demo program, and a result is returned, but all the results are negative numbers and the array level is wrong. Is it impossible to convert the yolov5 weights for OpenVINO?
YOLOv5 is currently not a topology officially supported by the OpenVINO toolkit. Please see the list of validated ONNX and PyTorch topologies here: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX.html
However, we have one recommendation for you to try, though it is not guaranteed to succeed. You can do it by changing export.py to include the Detect layer:
In yolov5/models/export.py (line 28 at commit a1c8406):
model.model[-1].export = True # set Detect() layer export=True
The above needs to be changed from True to False. For more detail, you can follow this thread here.
Try this: python mo.py --input_model yolov5s.onnx -s 255 --reverse_input_channels --output Conv_245,Conv_261,Conv_277 (use Netron to check your architecture).
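Putting the two suggestions together, a rough end-to-end sketch (the export.py flags and the Conv_245/Conv_261/Conv_277 output names come from the yolov5 repo of that era and the answer above; they may differ for your export, so verify the node names in Netron):
# in yolov5/models/export.py (around line 28 at commit a1c8406)
model.model[-1].export = False  # keep the Detect() layer in the exported ONNX graph

# re-export to ONNX, then convert to IR
python models/export.py --weights yolov5s.pt --img 640 --batch 1
python mo.py --input_model yolov5s.onnx -s 255 --reverse_input_channels --output Conv_245,Conv_261,Conv_277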

Operator translation error when I try to convert an ONNX file to Caffe2

I trained an object detection model in PyTorch and exported it to an ONNX file.
Now I want to convert it to a Caffe2 model:
import onnx
import numpy as np
import caffe2.python.onnx.backend as onnx_caffe2_backend
# Load the ONNX ModelProto object. model is a standard Python protobuf object
model = onnx.load("CPU4export.onnx")
# prepare the caffe2 backend for executing the model; this converts the ONNX model into a
# Caffe2 NetDef that can execute it. Other ONNX backends, like one for CNTK, will be
# available soon.
prepared_backend = onnx_caffe2_backend.prepare(model)
# run the model in Caffe2
# Construct a map from input names to Tensor data.
# The graph of the model itself contains inputs for all weight parameters, after the input image.
# Since the weights are already embedded, we just need to pass the input image.
# Set the first input (x is the dummy input tensor that was used for the PyTorch ONNX export).
W = {model.graph.input[0].name: x.data.numpy()}
# Run the Caffe2 net:
c2_out = prepared_backend.run(W)[0]
# Verify the numerical correctness up to 3 decimal places (torch_out is the PyTorch output saved at export time)
np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3)
print("Exported model has been executed on Caffe2 backend, and the result looks good!")
I always get this error:
RuntimeError: ONNX conversion failed, encountered 1 errors:
Error while processing node: input: "90"
input: "91"
output: "92"
op_type: "Resize"
attribute {
  name: "mode"
  s: "nearest"
  type: STRING
}
. Exception: Don't know how to translate op Resize
How can I solve it?
The problem is that the Caffe2 ONNX backend does not yet support the Resize operator.
Please raise an issue on the Caffe2 / PyTorch github -- there's an active community of developers who should be able to address this use case.
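One possible workaround (an assumption on my part, not something guaranteed by the Caffe2 maintainers): re-export the ONNX file from PyTorch with opset_version=9. Resize was introduced in ONNX opset 10; at opset 9 the same interpolation is emitted as the older Upsample op, which the Caffe2 backend may be able to translate, depending on your Caffe2 version.
import torch

# model and x are the same network and dummy input used for the original export.
# opset 9 emits Upsample instead of Resize for nearest-neighbour interpolation.
torch.onnx.export(model, x, "CPU4export_opset9.onnx", opset_version=9)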

TensorFlow OpenVINO SSD-MobileNet COCO custom dataset error: input layer

So, I'm using the TensorFlow SSD-MobileNet V1 COCO model, which I have further trained on my own dataset. But when I try to convert it to OpenVINO IR to run it on a Raspberry Pi with a Movidius chip, I get an error:
➜ utils sudo python3 summarize_graph.py --input_model ssd.pb
WARNING: Logging before flag parsing goes to stderr.
W0722 17:17:05.565755 4678620608 __init__.py:308] Limited tf.compat.v2.summary API due to missing TensorBoard installation.
W0722 17:17:06.696880 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:35: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.
W0722 17:17:06.697348 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:109: The name tf.MetaGraphDef is deprecated. Please use tf.compat.v1.MetaGraphDef instead.
W0722 17:17:06.697680 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:235: The name tf.NodeDef is deprecated. Please use tf.compat.v1.NodeDef instead.
1 input(s) detected:
Name: image_tensor, type: uint8, shape: (-1,-1,-1,3)
7 output(s) detected:
detection_boxes
detection_scores
detection_multiclass_scores
detection_classes
num_detections
raw_detection_boxes
raw_detection_scores
When I try to convert ssd.pb (the frozen model) to OpenVINO IR:
➜ model_optimizer sudo python3 mo_tf.py --input_model ssd.pb
Password:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/ssd.pb
- Path for generated IR: /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/.
- IR output name: ssd
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 2019.1.1-83-g28dfbfd
WARNING: Logging before flag parsing goes to stderr.
E0722 17:24:22.964164 4474824128 infer.py:158] Shape [-1 -1 -1 3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
E0722 17:24:22.964462 4474824128 infer.py:178] Cannot infer shapes or values for node "image_tensor".
E0722 17:24:22.964554 4474824128 infer.py:179] Not all output shapes were inferred or fully defined for node "image_tensor".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
E0722 17:24:22.964632 4474824128 infer.py:180]
E0722 17:24:22.964720 4474824128 infer.py:181] It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x12ab64bf8>.
E0722 17:24:22.964787 4474824128 infer.py:182] Or because the node inputs have incorrect values/shapes.
E0722 17:24:22.964850 4474824128 infer.py:183] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
E0722 17:24:22.965915 4474824128 infer.py:192] Run Model Optimizer with --log_level=DEBUG for more information.
E0722 17:24:22.966033 4474824128 main.py:317] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
How do you think we should fix this?
I updated my OpenVINO to the OpenVINO toolkit 2019 R2, and using the command below I was able to generate the IR files:
python3 ~/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config pipeline.config -b 1 --data_type FP16 --reverse_input_channels
When you try to convert ssd.pb (your frozen model), you are passing only the input model parameter to the mo_tf.py script. To convert an Object Detection API model to IR, go
to the model optimizer directory and run the mo_tf.py script with the following required parameters:
--input_model :
File with a pre-trained model (binary or text .pb file after freezing)
--tensorflow_use_custom_operations_config :
Configuration file that describes rules to convert specific TensorFlow* topologies.
For the models downloaded from the TensorFlow* Object Detection API zoo, you can find the configuration files in the /deployment_tools/model_optimizer/extensions/front/tf directory
You can use ssd_v2_support.json or ssd_support.json for frozen SSD topologies from the model zoo; they are available in the above-mentioned directory.
--tensorflow_object_detection_api_pipeline_config :
A special configuration file that describes the topology hyper-parameters and structure of the TensorFlow Object Detection API model.
For the models downloaded from the TensorFlow* Object Detection API zoo, the configuration file is named pipeline.config.
If you plan to train a model yourself, you can find templates for these files in the models repository
--input_shape (optional):
A custom input image shape; pass these values based on the pretrained model you used.
The model takes an input image in the format [1 H W C], where the parameters refer to the batch size, height, width, and channels respectively.
The Model Optimizer does not accept negative values for batch, height, width, or channel number,
so you need to pass a valid set of 4 positive numbers with the --input_shape parameter if the input image dimensions of the model (SSD MobileNet) are known in advance.
If they are not known, you don't need to pass the input shape.
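For example, assuming the model keeps the default 300x300 fixed-shape resizer of the SSD-MobileNet COCO checkpoints, you would pass --input_shape [1,300,300,3].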
An example mo_tf.py command using the SSD-MobileNet-v2-COCO model downloaded with the Model Downloader that comes with OpenVINO is shown below.
python mo_tf.py ^
--input_model "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.frozen.pb" ^
--tensorflow_use_custom_operations_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" ^
--tensorflow_object_detection_api_pipeline_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.config" ^
--data_type FP16 ^
--log_level DEBUG
For more details, refer to the link https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html
Hope it helps.
For conversion of a MobileNetV2 SSD, add "Postprocessor/Cast_1" to the "start_points" list in the original ssd_v2_support.json as shown below, then use the command that follows; it should work fine.
"instances": {
"end_points": [
"detection_boxes",
"detection_scores",
"num_detections"
],
"start_points": [
"Postprocessor/Shape",
"Postprocessor/scale_logits",
"Postprocessor/Tile",
"Postprocessor/Reshape_1",
"Postprocessor/Cast_1"
]
},
Then use the following command:
#### object detection conversion
import platform
is_win = 'windows' in platform.platform().lower()
mo_tf_path = '/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py'
json_file = '/opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json'
pb_file = 'model/frozen_inference_graph.pb'
pipeline_file = 'model/pipeline.config'
output_dir = 'output/'
img_height = 300
input_shape = [1,img_height,img_height,3]
input_shape_str = str(input_shape).replace(' ','')
input_shape_str  # note: built here but not actually passed to the mo_tf.py command below
!python3 {mo_tf_path} --input_model {pb_file} --tensorflow_object_detection_api_pipeline_config {pipeline_file} --tensorflow_use_custom_operations_config {json_file} --output="detection_boxes,detection_scores,num_detections" --output_dir {output_dir} --reverse_input_channels --data_type FP16 --log_level DEBUG

CNTK: "inferred dimension cannot be calculated from input and new shape size."

I've set up a model for CIFAR-10 using PyTorch and saved it as an ONNX file.
But it looks like I can't load it from CNTK.
I've already loaded another ONNX file from the same source code (by mistake), so the dependencies look OK. The problem occurs when I call Function.Load():
var deviceDescriptor = DeviceDescriptor.CPUDevice;
var function = Function.Load(ONNX_PATH, deviceDescriptor, ModelFormat.ONNX);
I get this exception (Unhandled exception):
System.ApplicationException : 'Reshape: inferred dimension cannot be calculated from input and new shape size.
[CALL STACK]
- CNTK::TrainingParameterSchedule:: GetMinibatchSize
- CNTK:: XavierInitializer (x6)
- CNTK::Function::Load
- CSharp_CNTK_Function__Load__SWIG_0
- 00007FFB0C41C307 (SymFromAddr() error: The specified module could not be found.)
It looks like this model can't be loaded in CNTK. CNTK has good support for exporting (saving) to ONNX, but importing (loading) can be problematic for some operations.
CNTK development is frozen; what's your motivation to use it?
The recommended way now is to use ONNX Runtime https://github.com/microsoft/onnxruntime for inference, it has first-class support for ONNX.
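As a rough illustration of that recommendation, here is a minimal sketch, assuming the onnxruntime Python package and an NCHW CIFAR-10-sized input (the file name and shape are placeholders; adjust them to your model):
import numpy as np
import onnxruntime as ort

# load the exported ONNX file and query its input name
sess = ort.InferenceSession("cifar10_model.onnx")
input_name = sess.get_inputs()[0].name

# run inference on a dummy 32x32 RGB image (batch of 1)
dummy = np.random.rand(1, 3, 32, 32).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
print(outputs[0].shape)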
