Setting up ONNX to parse an ONNX graph in C++

I'm trying to load an ONNX file and print all the tensor dimensions in the graph (this requires performing shape inference). I can do this in Python simply with 'from onnx import shape_inference'. Is there any documentation on setting up ONNX for use in a C++ program?

If anyone is looking for a solution, I've created a project here which uses libonnx.so to perform shape inference.
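For reference, a minimal sketch of the Python workflow the question describes, using the official onnx package (the 'model.onnx' path is a placeholder):

import onnx
from onnx import shape_inference

# Load the model and run shape inference over the whole graph.
model = onnx.load("model.onnx")  # placeholder path
inferred = shape_inference.infer_shapes(model)

# Print the dimensions of every tensor: inferred intermediates,
# plus the declared graph inputs and outputs.
graph = inferred.graph
for vi in list(graph.value_info) + list(graph.input) + list(graph.output):
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in vi.type.tensor_type.shape.dim]
    print(vi.name, dims)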

Related

BART model inference results after converting from Hugging Face to ONNX

I followed the instructions here (https://github.com/huggingface/transformers/blob/master/docs/source/serialization.rst) to convert the BART-LARGE-CNN model to ONNX using the transformers.onnx script. The model was exported fine and I can run inference.
However, the results of the inference, from the 'last_hidden_state' output, appear to be logits (I think)? How can I parse this output for summarization purposes?
I have implemented fast-Bart, which essentially converts the BART model from PyTorch to ONNX, with generate capabilities:
fast-Bart
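For context, a rough sketch of running the exported model with ONNX Runtime (the paths are assumptions). The default transformers.onnx export is a feature-extraction graph, so 'last_hidden_state' holds hidden states of shape (batch, sequence, hidden_size) rather than vocabulary logits; producing a summary still needs the LM head plus a generation loop, which is what fast-Bart adds:

import onnxruntime as ort
from transformers import BartTokenizer

# Assumed locations: the hub tokenizer and the exported graph.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
session = ort.InferenceSession("onnx/model.onnx")

inputs = tokenizer("Text to summarize ...", return_tensors="np")
# Depending on the export, decoder_input_ids may also be required here.
outputs = session.run(None, dict(inputs))

# This is a hidden-state tensor, not token probabilities; it cannot be
# argmax-decoded into a summary directly.
print(outputs[0].shape)  # (batch, sequence, hidden_size)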

How to save an ONNX model with training (autograd) operations in PyTorch?

In PyTorch, is it possible to save an ONNX model to file including the backward operations?
If not, is there any other way in PyTorch to save the forward and backward graph as text (json, pbtxt, ...)?
Any help will be appreciated.
It's possible if you wrap the model with ORTModule:
https://github.com/microsoft/onnxruntime-training-examples
There are flags to enable ONNX model saving, for example:
model._save_onnx = True
model._save_onnx_prefix = 'MNIST'
However, the ONNX graph from the forward pass will be further optimized before the backward graph is generated, so the saved graphs are specific to ORT; the training results should still be mathematically the same. If you are just looking for the forward+backward graph, the output ONNX is a good reference. The ONNX file can be opened with the Netron utility: https://github.com/lutzroeder/Netron
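For completeness, a minimal sketch of the wrapping step (the _save_onnx flags come from the answer above and are internal, version-specific details of ORTModule; the tiny net is just an illustration):

import torch
from onnxruntime.training.ortmodule import ORTModule

# Any torch.nn.Module can be wrapped; a toy MNIST-style net for illustration.
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
model = ORTModule(net)

# Internal flags from the answer above: dump the ONNX graphs
# (forward, and the ORT-generated backward) with this filename prefix.
model._save_onnx = True
model._save_onnx_prefix = 'MNIST'

# The graphs are exported when the first forward/backward actually runs.
loss = model(torch.randn(2, 1, 28, 28)).sum()
loss.backward()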

How to load a sklearn model in TensorFlow.js?

I have a gradient boost model saved in the .pkl format. I have to load this model in TensorFlow.js. I can see that there is a way to load a Keras model, but I can't find a way to load a sklearn model. Is it possible to do this?
It is not possible to load a sklearn model in TensorFlow.js; TensorFlow.js only loads models written in TensorFlow.
Though I haven't tried it myself, I think you could use the scikit-learn wrapper API to rewrite the classifier in TensorFlow. The model can then be saved and converted to a format that can be loaded in TensorFlow.js, as sketched below.
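One untested sketch of that path (X_train, y_train, and n_features are placeholders, and the Dense net is a stand-in, not an equivalent of a gradient-boosted ensemble):

import tensorflow as tf
import tensorflowjs as tfjs

# Placeholder model: re-fit something TensorFlow-native on the same data,
# since a gradient-boosted ensemble has no direct Keras equivalent.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=10)

# Convert to the TensorFlow.js layers format, loadable in the browser
# with tf.loadLayersModel().
tfjs.converters.save_keras_model(model, "tfjs_model")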

How to convert Core ML models created with Turi Create to Keras?

I'm looking for a way to do the conversion; the only information I've found is how to go from Keras and other frameworks to Core ML.
You'll have to write your own code to do this; there is no automated conversion tool from Core ML to Keras (only the other way around).
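As a starting point for that manual rewrite, coremltools can at least list the layers stored in the .mlmodel so you know what to rebuild in Keras (the file name is a placeholder; classifier models keep their layers under spec.neuralNetworkClassifier instead):

import coremltools

# Load the Turi Create model and inspect its protobuf spec.
mlmodel = coremltools.models.MLModel("model.mlmodel")  # placeholder path
spec = mlmodel.get_spec()

# Print each layer's name and type as a guide for hand-writing
# the equivalent Keras layers.
for layer in spec.neuralNetwork.layers:
    print(layer.name, layer.WhichOneof("layer"))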

How can I use pytorch pre-trained model without installing pytorch?

I only want to use a pre-trained model from PyTorch without installing the whole package.
Can I just copy the model module from PyTorch?
I'm afraid you cannot do that: in order to run the model, you need not only the trained weights (the '.pth.tar' file) but also the "structure" of the net: that is, the layers, how they are connected to each other, etc. This network structure is coded in Python and requires PyTorch to be installed.
A way of using PyTorch models without installing PyTorch is to export the model in ONNX format. Once the model is in ONNX format, it can be imported into ONNX Runtime and used for inference. This tutorial should help you out: PyTorch ONNX.
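A sketch of that route (the model choice and file names are illustrative): the export step needs PyTorch once, on any machine, after which inference needs only ONNX Runtime:

# Step 1: on a machine with PyTorch, export the pre-trained model.
import torch, torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
torch.onnx.export(model, torch.randn(1, 3, 224, 224), "resnet18.onnx")

# Step 2: on the PyTorch-free machine, run it with ONNX Runtime alone.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("resnet18.onnx")
input_name = session.get_inputs()[0].name
image = np.random.randn(1, 3, 224, 224).astype(np.float32)  # stand-in input
scores = session.run(None, {input_name: image})[0]
print(scores.shape)  # (1, 1000) ImageNet class scores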
