When running
torch.onnx.export(
model,
(im, im),
'wow.onnx',
verbose=True,
opset_version=opset,
input_names=['t0', 't1'],
output_names=['output']
)
on my model, I get:
Unsupported: ONNX export of operator Unfold , input size not accessible. Please feel free to request support or submit a pull request on PyTorch GitHub.
Any idea which operation Unfold (https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html) could be replaced by so that the export succeeds?
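For illustration only (an assumption on my part, not something confirmed in this thread), one workaround that is sometimes suggested is to express the same patch extraction with a fixed "identity" convolution, which the exporter does handle; the helper name, kernel size, and stride below are made up for the sketch:
import torch
import torch.nn.functional as F

def unfold_via_conv(x, kernel_size, stride=1):
    # Emulates nn.Unfold(kernel_size, stride=stride) on a 4-D input (N, C, H, W)
    # by convolving with a one-hot kernel that copies each pixel of each patch
    # into its own output channel, then flattening the spatial grid of patches.
    c = x.shape[1]
    k = kernel_size
    weight = torch.zeros(c * k * k, c, k, k, dtype=x.dtype, device=x.device)
    for ci in range(c):
        for i in range(k):
            for j in range(k):
                weight[ci * k * k + i * k + j, ci, i, j] = 1.0
    out = F.conv2d(x, weight, stride=stride)  # (N, C*k*k, H_out, W_out)
    return out.flatten(2)                     # (N, C*k*k, L), like nn.Unfold
Whether this matches the original Unfold call exactly depends on the padding and dilation arguments actually used, so it is worth comparing against torch.nn.functional.unfold on a small tensor first.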
I am running into a problem whenever I try to load a custom-trained spaCy NER model inside a Docker container.
Note:
I am using the latest spaCy version (3.0) and trained the NER model using spaCy's CLI commands, after first converting the training data into the .spacy format.
The error thrown is as follows (see the linked image):
config validation error
My trained model file structure looks like this:
custom ner model structure
But when I run the model without Docker, it works perfectly. What have I done wrong in this process? Please help me resolve the error.
Thank you in advance.
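Not part of the original question, but as a hedged first check: with spaCy 3 pipelines, a config validation error like this is often caused by a version mismatch between the spaCy used for training and the spaCy installed in the container, so comparing the two inside the container can narrow it down. The path below is a placeholder for wherever the model directory is copied in the image:
import json
import spacy

# Compare the spaCy version inside the container with the version the pipeline
# was trained with (recorded in its meta.json), then attempt the load.
print("spaCy in container:", spacy.__version__)
with open("/app/model-best/meta.json") as f:  # hypothetical path inside the image
    print("pipeline trained with:", json.load(f).get("spacy_version"))
nlp = spacy.load("/app/model-best")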
I am playing around with TensorFlow and trying to export a Keras model as a TensorFlow model, and I ran into the above-mentioned error. I am following the "Building Deep Learning Applications with Keras 2.0" course from Lynda (https://www.linkedin.com/learning/building-deep-learning-applications-with-keras-2-0/exporting-google-cloud-compatible-models?u=42751868).
While trying to build the TensorFlow model, I came across this error, thrown at line 66, where the add_meta_graph_and_variables function is called.
line 66, in build_tensor_info
raise RuntimeError("build_tensor_info is not supported in Eager mode.")
RuntimeError: build_tensor_info is not supported in Eager mode.
...model_builder.add_meta_graph_and_variables(
K.get_session(),
tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
signature_def_map={
tf.compat.v1.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
}
)
...
Any thoughts, folks?
It's because you are using TensorFlow v2. You have to use the tf.compat.v1 compatibility module and disable eager mode.
Be careful with the TensorFlow imports that you use; for example, if you use tensorflow_core, make sure that all your dependencies come from "tensorflow".
You have to add this before your code:
import tensorflow as tf
if tf.executing_eagerly():
tf.compat.v1.disable_eager_execution()
I ran into that exact same problem when running the code from LinkedIn Learning for exporting the Keras model, which was written using the TensorFlow 1.x API. After replacing everything with the equivalent tf.compat.v1 functions, such as changing
model_builder = tf.saved_model.builder.SavedModelBuilder("exported_model")
to
model_builder = tf.compat.v1.saved_model.Builder("exported_model")
and disabling eager execution as suggested above by Cristian Zumelzu, the code ran fine with the expected warnings about deprecated functions.
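Putting the two answers together, a minimal sketch of the full v1-compatibility export path might look like this (the model file name, export directory, and signature names are assumptions, not taken from the course):
import tensorflow as tf
from tensorflow import keras

# Force TF1-style graph mode so build_tensor_info and the SavedModel builder work.
if tf.executing_eagerly():
    tf.compat.v1.disable_eager_execution()

model = keras.models.load_model("trained_model.h5")  # hypothetical file
sess = tf.compat.v1.keras.backend.get_session()

model_builder = tf.compat.v1.saved_model.Builder("exported_model")

inputs = {"input": tf.compat.v1.saved_model.build_tensor_info(model.input)}
outputs = {"output": tf.compat.v1.saved_model.build_tensor_info(model.output)}
signature_def = tf.compat.v1.saved_model.build_signature_def(
    inputs=inputs,
    outputs=outputs,
    method_name=tf.compat.v1.saved_model.signature_constants.PREDICT_METHOD_NAME,
)

model_builder.add_meta_graph_and_variables(
    sess,
    tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.compat.v1.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
    },
)
model_builder.save()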
I trained an object detection model in PyTorch and exported it to an ONNX file.
Now I want to convert it to a Caffe2 model:
import numpy as np
import onnx
import caffe2.python.onnx.backend as onnx_caffe2_backend
# Load the ONNX ModelProto object. model is a standard Python protobuf object
model = onnx.load("CPU4export.onnx")
# Prepare the Caffe2 backend for executing the model; this converts the ONNX model into a
# Caffe2 NetDef that can execute it. Other ONNX backends, like one for CNTK, will be
# available soon.
prepared_backend = onnx_caffe2_backend.prepare(model)
# run the model in Caffe2
# Construct a map from input names to Tensor data.
# The graph of the model itself contains inputs for all weight parameters, after the input image.
# Since the weights are already embedded, we just need to pass the input image.
# Set the first input; x is the example PyTorch input tensor that was used when exporting the model.
W = {model.graph.input[0].name: x.data.numpy()}
# Run the Caffe2 net:
c2_out = prepared_backend.run(W)[0]
# Verify numerical correctness up to 3 decimal places; torch_out is the PyTorch model's output from the export step.
np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3)
print("Exported model has been executed on Caffe2 backend, and the result looks good!")
I always get this error:
RuntimeError: ONNX conversion failed, encountered 1 errors:
Error while processing node: input: "90"
input: "91"
output: "92"
op_type: "Resize"
attribute {
name: "mode"
s: "nearest"
type: STRING
}
. Exception: Don't know how to translate op Resize
How can I solve it?
The problem is that the Caffe2 ONNX backend does not yet support the Resize operator.
Please raise an issue on the Caffe2 / PyTorch GitHub -- there's an active community of developers who should be able to address this use case.
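One possible workaround, offered here as an assumption rather than something verified in this thread, is to re-export the ONNX file from PyTorch with opset 9: that opset predates the Resize operator, so nearest-neighbour upsampling is exported as Upsample, which the Caffe2 backend may be able to translate. A toy stand-in model shows the idea:
import torch
import torch.nn as nn

# Minimal stand-in with a nearest-neighbour upsample layer; the real detection
# model and its example input would go here instead.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.Upsample(scale_factor=2, mode="nearest"),
)
dummy_input = torch.randn(1, 3, 64, 64)

# Opset 9 emits Upsample instead of Resize for this layer.
torch.onnx.export(model, dummy_input, "CPU4export_opset9.onnx", opset_version=9)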
I've set up a model for CIFAR-10 using PyTorch and saved it as an ONNX file.
But it looks like I can't load it from CNTK.
I've already loaded another ONNX file from the same source code (by mistake), so the dependencies look OK. The problem occurs when I call Function.Load()
var deviceDescriptor = DeviceDescriptor.CPUDevice;
var function = Function.Load(ONNX_PATH, deviceDescriptor, ModelFormat.ONNX);
I get this exception (Unhandled exception):
System.ApplicationException : 'Reshape: inferred dimension cannot be calculated from input and new shape size.
[CALL STACK]
- CNTK::TrainingParameterSchedule:: GetMinibatchSize
- CNTK:: XavierInitializer (x6)
- CNTK::Function::Load
- CSharp_CNTK_Function__Load__SWIG_0
- 00007FFB0C41C307 (SymFromAddr() error: The specified module could not be found.)
It looks like this model can't be loaded in CNTK. CNTK has good support for exporting (saving) to ONNX, but importing (loading) can be problematic for some operations.
CNTK development is frozen; what's your motivation to use it?
The recommended way now is to use ONNX Runtime https://github.com/microsoft/onnxruntime for inference, it has first-class support for ONNX.
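For reference, a minimal ONNX Runtime inference sketch in Python (the file name and input shape are assumptions based on the CIFAR-10 setup; the C# equivalent lives in the Microsoft.ML.OnnxRuntime package):
import numpy as np
import onnxruntime as ort

# Load the exported CIFAR-10 model and run one dummy 32x32 RGB image through it.
session = ort.InferenceSession("cifar10.onnx")  # hypothetical path
input_name = session.get_inputs()[0].name
dummy_image = np.random.rand(1, 3, 32, 32).astype(np.float32)
outputs = session.run(None, {input_name: dummy_image})
print(outputs[0].shape)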
I'm using CNTK as the backend for Keras, and I'm trying to use a model I trained with Keras from C++.
I have trained and saved my model using Keras, which stores it in HDF5. How do I now use the CNTK API to save it in CNTK's model-v2 format?
I tried this:
model = load_model('model2.h5')
cntk.ops.functions.Function.save(model, 'CNTK_model2.pb')
but I got the following error:
TypeError: save() missing 1 required positional argument: 'filename'
If TensorFlow were the backend, I would have done this:
model = load_model('model2.h5')
sess = K.get_session()
tf_saver = tf.train.Saver()
tf_saver.save(sess=sess, save_path=checkpoint_path)
How can I achieve the same thing?
As per the comments here, I was able to use this:
import cntk as C
from keras.models import load_model

keras_model = load_model('my_keras_model.h5')
C.combine(keras_model.outputs).save('my_cntk_model')
cntk_model = C.load_model('my_cntk_model')
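As a quick sanity check (not part of the original answer; the input handling below assumes a single dense input and is only a sketch), the reloaded CNTK function can be evaluated on a dummy batch to confirm the round trip:
import numpy as np
import cntk as C

cntk_model = C.load_model('my_cntk_model')
input_var = cntk_model.arguments[0]
dummy = np.random.rand(1, *input_var.shape).astype(np.float32)  # batch of one
print(cntk_model.eval({input_var: dummy}))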
You can do something like this:
model.outputs[0].save('CNTK_model2.pb')
I'm assuming here that you have called model.compile (i.e., that's the only case I have tried :-).
The reason you see this error is that Keras' CNTK backend uses a user-defined function to do the reshape on the batch axis, which can't be serialized. We have fixed this issue in CNTK v2.2. Please upgrade your CNTK to v2.2, and upgrade Keras to the latest master.
Please see this pull request:
https://github.com/fchollet/keras/pull/7907