Keras-CNTK saving model-v2 format

I'm using CNTK as the backend for Keras, and I want to use a model I trained with Keras from C++.
I have trained and saved my model using Keras in HDF5 format. How do I now use the CNTK API to save it in CNTK's model-v2 format?
I tried this:
model = load_model('model2.h5')
cntk.ops.functions.Function.save(model, 'CNTK_model2.pb')
but I got the following error:
TypeError: save() missing 1 required positional argument: 'filename'
If TensorFlow were the backend, I would have done this:
model = load_model('model2.h5')
sess = K.get_session()
tf_saver = tf.train.Saver()
tf_saver.save(sess=sess, save_path=checkpoint_path)
How can I achieve the same thing?

As per the comments here, I was able to use this:
import cntk as C
from keras.models import load_model  # note: keras.backend has no load_model
keras_model = load_model('my_keras_model.h5')
C.combine(keras_model.model.outputs).save('my_cntk_model')
cntk_model = C.load_model('my_cntk_model')

You can do something like this:
model.outputs[0].save('CNTK_model2.pb')
I'm assuming here that you have called model.compile (that's the only case I have tried :-).
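Putting the two answers together, a minimal end-to-end sketch (assuming the Keras CNTK backend is active; the compile settings below are placeholders, not something the answers prescribe):

from keras.models import load_model

model = load_model('model2.h5')
model.compile(loss='categorical_crossentropy', optimizer='adam')  # placeholder settings
model.outputs[0].save('CNTK_model2.pb')  # saved in CNTK's model-v2 format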

The reason you see this error is that Keras's CNTK backend uses a user-defined function to reshape along the batch axis, which can't be serialized. We have fixed this issue in CNTK v2.2. Please upgrade CNTK to v2.2 and upgrade Keras to the latest master.
Please see this pull request:
https://github.com/fchollet/keras/pull/7907
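Assuming a pip-based setup, the upgrade could look like:
pip install --upgrade cntk
pip install --upgrade git+https://github.com/fchollet/keras.git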

Related

Ran into the following issue: build_tensor_info is not supported in Eager mode

I am playing around with TensorFlow and trying to export a Keras model as a TensorFlow model, and I ran into the above-mentioned error. I am following "Build Deep Learning Applications with Keras 2.0" from Lynda (https://www.linkedin.com/learning/building-deep-learning-applications-with-keras-2-0/exporting-google-cloud-compatible-models?u=42751868).
While trying to build the TensorFlow model, I came across this error, thrown from line 66 in build_tensor_info when calling add_meta_graph_and_variables:
line 66, in build_tensor_info
raise RuntimeError("build_tensor_info is not supported in Eager mode.")
RuntimeError: build_tensor_info is not supported in Eager mode.
...model_builder.add_meta_graph_and_variables(
    K.get_session(),
    tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.compat.v1.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
    }
)
...
Any thoughts folks?
This is because you are using TensorFlow v2. You have to use the TensorFlow v2 compatibility module and disable eager mode.
Be careful with the TensorFlow imports you use: for example, if you use tensorflow_core, be sure that you are using all the dependencies from "tensorflow".
You have to add before your code:
import tensorflow as tf
if tf.executing_eagerly():
    tf.compat.v1.disable_eager_execution()
Ran into that exact same problem when compiling that code from LinkedIn Learning for exporting the Keras model, which was written using the TensorFlow 1.x API. After replacing everything with the equivalent tf.compat.v1 functions, such as changing
model_builder = tf.saved_model.builder.SavedModelBuilder("exported_model")
to
model_builder = tf.compat.v1.saved_model.Builder("exported_model")
and disabling eager execution as suggested above by Cristian Zumelzu, the code ran fine with the expected warnings about deprecated functions.
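For reference, a minimal self-contained sketch of the full tf.compat.v1 export pattern (the tiny stand-in model, tensor names, and export directory here are placeholders, not from the course code):

import tensorflow as tf

if tf.executing_eagerly():
    tf.compat.v1.disable_eager_execution()

# Stand-in for the course's Keras model
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

signature_def = tf.compat.v1.saved_model.signature_def_utils.predict_signature_def(
    inputs={'input': model.input},
    outputs={'output': model.output})

model_builder = tf.compat.v1.saved_model.Builder("exported_model")
model_builder.add_meta_graph_and_variables(
    tf.compat.v1.keras.backend.get_session(),
    tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.compat.v1.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
    })
model_builder.save()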

TF2 - hub.text_embedding_column still to be implemented

I'm migrating an old TensorFlow 1.x training script, and I ran into a problem with the hub.text_embedding_column function. At the moment, the following code does not work:
# Python 3.6.9
import tensorflow as tf # tf.__version__ 2.1.0
import tensorflow_hub as hub # hub.__version__ 0.7.0
module_spec = hub.load_module_spec('https://tfhub.dev/google/universal-sentence-encoder/4')
text_column = hub.text_embedding_column(key='test_col', module_spec=module_spec)
The error that I got is:
RuntimeError: Missing implementation that supports: loader(*('/tmp/tfhub_modules/29abffb443cb0a0ca9c72e8e3863b76d85028490',), **{})
I tried help(hub.text_embedding_column), and the help says:
TODO(b/131678043): This does not work yet with TF2.
Do you know any workaround to use text_embedding_column with TF2? I'm able to load the model using hub.load('https://tfhub.dev/google/universal-sentence-encoder/4') but then I don't know what to do with it.
Thank you all (:
Following this GitHub issue, I was able to solve my problem, at least for now. It's still based on tf.compat.v1.placeholder, but at least I can use hub.text_embedding_column.
# Python 3.6.9
import tensorflow as tf # tf.__version__ 2.1.0
import tensorflow_hub as hub # hub.__version__ 0.7.0
def build_module_fn(model_url):
    def module_fn():
        text_input = tf.compat.v1.placeholder(dtype=tf.string, shape=[None])
        embed_layer = hub.KerasLayer(
            model_url,
            input_shape=[],  # Expects a tensor of shape [batch_size] as input.
            dtype=tf.string)  # Expects a tf.string input tensor.
        embeddings = embed_layer(text_input)
        hub.add_signature(inputs=text_input, outputs=embeddings)
    return module_fn

module_spec = hub.create_module_spec(build_module_fn('https://tfhub.dev/google/universal-sentence-encoder/4'))
hub.text_embedding_column(key='text', module_spec=module_spec)
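Continuing from that snippet, a usage sketch (the estimator setup and toy data here are illustrative, not part of the original workaround):

text_column = hub.text_embedding_column(key='text', module_spec=module_spec)

def input_fn():
    # The feature key must match the column's key
    features = {'text': tf.constant(['hello world', 'goodbye world'])}
    labels = tf.constant([0, 1])
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

estimator = tf.estimator.DNNClassifier(
    hidden_units=[64],
    feature_columns=[text_column],
    n_classes=2)
estimator.train(input_fn, steps=1)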
Hope that this can help other people!

Operator translate error occurs when I try to convert an ONNX file to Caffe2

I trained an object detection model in PyTorch and exported it to an ONNX file.
Now I want to convert it to a Caffe2 model:
import numpy as np
import onnx
import caffe2.python.onnx.backend as onnx_caffe2_backend

# Load the ONNX ModelProto object. model is a standard Python protobuf object.
model = onnx.load("CPU4export.onnx")

# Prepare the Caffe2 backend for executing the model. This converts the ONNX model into a
# Caffe2 NetDef that can execute it. Other ONNX backends, like one for CNTK, will be
# available soon.
prepared_backend = onnx_caffe2_backend.prepare(model)

# Run the model in Caffe2.
# Construct a map from input names to Tensor data.
# The graph of the model itself contains inputs for all weight parameters, after the input image.
# Since the weights are already embedded, we just need to pass the input image.
# Set the first input (x and torch_out come from the earlier PyTorch export/run).
W = {model.graph.input[0].name: x.data.numpy()}

# Run the Caffe2 net:
c2_out = prepared_backend.run(W)[0]

# Verify the numerical correctness up to 3 decimal places
np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3)
print("Exported model has been executed on Caffe2 backend, and the result looks good!")
I always get this error:
RuntimeError: ONNX conversion failed, encountered 1 errors:
Error while processing node: input: "90"
input: "91"
output: "92"
op_type: "Resize"
attribute {
name: "mode"
s: "nearest"
type: STRING
}
. Exception: Don't know how to translate op Resize
How can I solve it?
The problem is that the Caffe2 ONNX backend does not yet know how to translate the Resize operator.
Please raise an issue on the Caffe2 / PyTorch GitHub -- there's an active community of developers who should be able to address this use case.
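Depending on the model, one workaround (not from the original answer) is to re-export from PyTorch with an older opset, since opset 9 emits nearest-neighbor upsampling as the Upsample op, which the Caffe2 backend can translate:

import torch

# pytorch_model and dummy_input are hypothetical stand-ins for the model and
# example input used in the original export
torch.onnx.export(pytorch_model, dummy_input, "CPU4export.onnx", opset_version=9)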

'OneHotEncoder' object has no attribute 'get_feature_names'

I am trying to extract the feature names using the get_feature_names method of scikit-learn's OneHotEncoder object, but it is throwing me an error saying
"'OneHotEncoder' object has no attribute 'get_feature_names'".
Below is the code snippet:
from sklearn.preprocessing import OneHotEncoder
import pandas as pd

# Creating the OneHotEncoder instance
encoder = OneHotEncoder(sparse=False)
onehot_encoded = encoder.fit_transform(df[data_column_category])
onehot_encoded_frame = pd.DataFrame(onehot_encoded, columns=encoder.get_feature_names(data_column_category))
Apparently, it has been renamed to get_feature_names_out.
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder.get_feature_names_out
That method was introduced in scikit-learn 1.0, so you might just need to update your scikit-learn version.
You can do it as follows:
pip install -U scikit-learn
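After upgrading, a toy sketch of the renamed method (note: scikit-learn 1.2 also renamed the sparse argument to sparse_output):

from sklearn.preprocessing import OneHotEncoder
import pandas as pd

df = pd.DataFrame({'color': ['red', 'blue', 'red']})
encoder = OneHotEncoder(sparse_output=False)
onehot = encoder.fit_transform(df[['color']])
print(encoder.get_feature_names_out(['color']))  # ['color_blue' 'color_red']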

tensorflow.contrib.learn.Estimator: load a trained model

After I call the Estimator like this:
from tensorflow.contrib import learn
#----------------------------------------
#Do some process here
#----------------------------------------
classifier = learn.Estimator(model_fn=bag_of_words_model,model_dir='F:/data')
classifier.fit(feature_train, target_train, steps=1000)
I end up with a set of checkpoint files in my folder "F:/data".
I wonder: is there any way to reuse this model, e.g. move it to a new computer and use it to predict new data? Sorry for my bad English. Thanks for all the answers! Hope you all have a nice day.
In a script where you want to reuse your model, redefine/import bag_of_words_model again and define
classifier_loaded = learn.Estimator(model_fn=bag_of_words_model, model_dir='F:/data')
TensorFlow will rebuild the graph, load the weights from the checkpoint, and you can just use it as
classifier_loaded.predict(input_fn=input_fn)
or continue the training with
classifier_loaded.fit(feature_train, target_train)
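For example, after copying the contents of model_dir to the new machine (names here are illustrative: feature_new must be preprocessed exactly like feature_train, and bag_of_words_model is the same model_fn used for training):

import numpy as np
from tensorflow.contrib import learn

# Rebuild the estimator against the copied checkpoint directory
classifier_loaded = learn.Estimator(model_fn=bag_of_words_model, model_dir='F:/data')

feature_new = np.array([[1, 0, 2, 0]])  # toy bag-of-words features
predictions = list(classifier_loaded.predict(x=feature_new, as_iterable=True))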
