I'm exporting an ML model from PyTorch to ONNX with the torch.onnx.export function.
When I use the PyTorch model, it has NVTX annotations, which I use for profiling with Nsight Systems.
But when I converted the model from PyTorch to ONNX, the annotations went away.
How can I use NVTX with an ONNX model?
I just load the model with model = onnx.load(onnx_file) and call it, so I cannot annotate the model code itself with NVTX snippets.
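One workaround (a sketch, not a built-in ONNX feature): wrap the host-side inference calls in NVTX ranges yourself, for example with the nvtx PyPI package and ONNX Runtime. The file name and input shape below are placeholders.

# Sketch: emit NVTX ranges around ONNX Runtime inference from the host side.
# Assumes pip-installed nvtx and onnxruntime-gpu; names are placeholders.
import numpy as np
import nvtx
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
input_name = session.get_inputs()[0].name
x = np.random.randn(1, 3, 224, 224).astype(np.float32)

with nvtx.annotate("onnx_inference", color="green"):  # visible in Nsight Systems
    outputs = session.run(None, {input_name: x})

This only marks the host-side call, not individual operators inside the graph, but in the Nsight Systems timeline the range still lines up with the CUDA kernels it launches.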
This question is two-fold:
1. How do you convert a PyTorch segmentation model to ONNX? Specifically, I am trying to convert the VGG-16 model from this GitHub repo (https://github.com/khanhha/crack_segmentation/tree/e924b6a3632134848b993c68e7295b1aae92ce28) to ONNX so I can optimize it with TensorRT on the Jetson. Does a segmentation model require anything different from a simple object-detector/bbox model? (See the sketch after this list.)
2. Can I do this conversion on the CPU, i.e. load the PyTorch model on the CPU, convert it to ONNX, and then load the ONNX model on the GPU? I want to make sure that loading the model on the CPU during the conversion phase doesn't change the actual model being converted.
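As a minimal sketch of the conversion step, with TinySeg below standing in for the repo's VGG-16 UNet (which you would build and load from its checkpoint instead), the export runs entirely on the CPU:

# Sketch: export a segmentation-style model to ONNX on the CPU.
import torch
import torch.nn as nn

class TinySeg(nn.Module):  # stand-in; substitute the repo's network + weights
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1),  # per-pixel mask logits, same H x W as the input
        )

    def forward(self, x):
        return self.net(x)

model = TinySeg().eval().cpu()
dummy = torch.randn(1, 3, 448, 448)  # match the input size the repo trains with

torch.onnx.export(
    model, dummy, "segmentation.onnx",
    input_names=["input"], output_names=["mask"],
    opset_version=11,
)

Structurally the export call is the same as for a bbox model; a segmentation model mostly differs in its output (a dense mask rather than boxes). The ONNX file stores the weights as plain tensors, so exporting on the CPU and running the result on the GPU later should not change the model itself.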
I'm trying to load an ONNX file and print all the tensor dimensions in the graph (this requires shape inference). I can do this in Python just by importing onnx and onnx.shape_inference. Is there any documentation on setting up ONNX for use in a C++ program?
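For reference, the Python version looks roughly like this (the file name is a placeholder):

# Run ONNX shape inference and print every tensor's dimensions.
import onnx
from onnx import shape_inference

inferred = shape_inference.infer_shapes(onnx.load("model.onnx"))
graph = inferred.graph
for vi in list(graph.input) + list(graph.output) + list(graph.value_info):
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in vi.type.tensor_type.shape.dim]
    print(vi.name, dims)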
If anyone is looking for a solution, I've created a project here which uses libonnx.so to perform shape inference.
I want to do Named Entity Recognition on scientific articles.
from transformers import AutoTokenizer, TFBertForTokenClassification

# `config` is defined elsewhere in the question's code
tokenizer = AutoTokenizer.from_pretrained('allenai/scibert_scivocab_cased')
model = TFBertForTokenClassification.from_pretrained('allenai/scibert_scivocab_uncased', from_pt=True, config=config)
But it gives the following warning:
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFBertForTokenClassification: ['cls.predictions.transform.dense.weight', 'cls.predictions.decoder.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.bias']
- This IS expected if you are initializing TFBertForTokenClassification from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFBertForTokenClassification from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
Some weights or buffers of the TF 2.0 model TFBertForTokenClassification were not initialized from the PyTorch model and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
And when I try to fit the model, it gives the following warning:
WARNING:tensorflow:Gradients do not exist for variables
['tf_bert_for_token_classification/bert/pooler/dense/kernel:0',
'tf_bert_for_token_classification/bert/pooler/dense/bias:0'] when
minimizing the loss.
Does anyone know why it is giving these warnings?
I want to convert a Keras model to a TensorFlow Lite model. The documentation states that tf.keras HDF5 models can be used as input. Does that mean I can use my saved HDF5 Keras model as input, or are tf.keras HDF5 models and Keras HDF5 models different things?
Documentation: https://www.tensorflow.org/lite/convert
Edit: I was able to convert my Keras model to a TensorFlow Lite model using this API, but I haven't tested it yet. My code:
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model_file(path + 'plant-recognition-model.h5')
tflite_model = converter.convert()
with open('plant-recognition-model.tflite', 'wb') as f:
    f.write(tflite_model)
tf.keras HDF5 models and Keras HDF5 models are not different things, apart from inevitable version-synchronization lags between the two packages. This is what the official docs say:
tf.keras is TensorFlow's implementation of the Keras API specification. This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality
If the converter can convert a Keras model to TF Lite, it will deliver the same results. But TF Lite functionality is more limited than tf.keras. If that feature set is not enough for you, you can still work with TensorFlow and enjoy its other advantages.
Maybe it won't take too long before your models can run on a smartphone.
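On TF 2.x the same conversion goes through tf.keras directly, which also shows the two HDF5 flavors being interchangeable (a sketch; the file names are taken from the question above):

import tensorflow as tf

# Load the Keras HDF5 file with tf.keras, then convert from the live model.
model = tf.keras.models.load_model('plant-recognition-model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('plant-recognition-model.tflite', 'wb') as f:
    f.write(tflite_model)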
I only want to use a pre-trained model from PyTorch without installing the whole package.
Can I just copy the model module out of PyTorch?
I'm afraid you cannot do that: in order to run the model, you need not only the trained weights (the '.pth.tar' file) but also the "structure" of the net: that is, the layers, how they are connected to each other, etc. This network structure is coded in Python and requires PyTorch to be installed.
A way of using PyTorch models without installing PyTorch is to export the model to ONNX format. Once the model is in ONNX format, it can be imported into ONNX Runtime and used for inference. This tutorial should help you out: PyTorch ONNX.
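Once the ONNX file exists, the inference side needs only numpy and onnxruntime installed, not PyTorch (a sketch; the model path and input shape are placeholders):

import numpy as np
import onnxruntime as ort

# PyTorch is not required at run time; onnxruntime executes the graph.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
x = np.random.randn(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)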