Issue using TF Text with Tensorflow serving - python-3.x

I have a model with TF Text preprocessing included in the graph exported for TensorFlow Serving, but I get the following error:
2019-06-28 13:29:19.048482: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: my_model version: 1561728496} failed: Not found: Op type not registered 'UnicodeScriptTokenizeWithOffsets' in binary running on 57085cb71277. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
It seems that the 'UnicodeScriptTokenizeWithOffsets' op from TF Text is not included in the TF Serving binary. I saw that one solution could be to build a custom serving image that includes the TF Text ops and kernels, using the TensorFlow Docker development image. Is that right? How could I do that? Or is there perhaps a simpler solution? (A minimal registration check is sketched after the version list below.)
I'm using:
TensorFlow 2.0.0b0
TensorFlow Text 1.0.0b0
Serving is done with the official Docker image tensorflow/serving:latest
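A quick sanity check: the same SavedModel should load in a plain Python process once tensorflow_text has been imported, because importing the package registers its custom ops; if it does, the op is only missing from the serving binary. A minimal sketch, with a placeholder export path:

import tensorflow as tf
import tensorflow_text  # noqa: F401 -- importing registers the TF Text ops, incl. UnicodeScriptTokenizeWithOffsets

# Placeholder path -- point this at the version directory exported for TF Serving.
loaded = tf.saved_model.load("/models/my_model/1561728496")
print(list(loaded.signatures.keys()))  # should list the serving signature without the NotFound error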
Thanks,
Maxime

Related

How to add Metadata in the Tensorflow Lite for a model taken from Github?

I have used this project from GitHub: https://github.com/nicknochnack/TFODCourse
The project contains a model that can detect a license plate in a given vehicle image. The GitHub repo also contains code for converting the model into a TensorFlow Lite file.
I used that code to generate the TFLite file.
Then I followed this link: https://developers.google.com/codelabs/tflite-object-detection-android
where I downloaded the sample object detection application and, following the instructions, copied my TFLite file into the Android application.
Now, if I run the application and take a photo, it gives me this error:
/TaskJniUtils: Error getting native address of native library: task_vision_jni
java.lang.RuntimeException: Error occurred when initializing ObjectDetector: Input tensor has type kTfLiteFloat32: it requires specifying NormalizationOptions metadata to preprocess input images.
at org.tensorflow.lite.task.vision.detector.ObjectDetector
I understand that I have to add metadata to my TFLite model, so I searched for it and ended up at this link: https://www.tensorflow.org/lite/models/convert/metadata#model_with_metadata_format
But I didn't understand at all what exactly I should be doing. Can anyone please point me in the right direction: for my problem specifically, what exactly do I need to do?
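For orientation (not a full answer): the error is asking for NormalizationOptions in the model metadata, and the tflite-support metadata writer for object detectors can attach that. A minimal sketch, assuming local model.tflite and labels.txt files and the common [127.5]/[127.5] normalization; verify those values against how your model was actually trained:

from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils

MODEL_PATH = "model.tflite"          # hypothetical paths -- adjust to your files
LABEL_PATH = "labels.txt"
EXPORT_PATH = "model_with_metadata.tflite"

writer = object_detector.MetadataWriter.create_for_inference(
    writer_utils.load_file(MODEL_PATH),
    input_norm_mean=[127.5],   # becomes the NormalizationOptions the Task Library asks for
    input_norm_std=[127.5],
    label_file_paths=[LABEL_PATH],
)
writer_utils.save_file(writer.populate(), EXPORT_PATH)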

Load trained model on another machine - fastai, torch, huggingface

I am using fastai with pytorch to fine-tune XLMRoberta from huggingface.
I've trained the model and everything is fine on the machine where I trained it.
But when I try to load the model on another machine I get OSError - Not Found - No such file or directory, pointing to .cache/torch/transformers/. The issue is the path of the vocab_file.
I've used fastai's Learner.export to export the model to a .pkl file, but I don't believe the issue is related to fastai, since I found the same issue appearing in flairNLP.
It appears that the path to the cache folder, where the vocab_file is stored during training, is embedded in the .pkl file:
The error comes from transformers' XLMRobertaTokenizer.__setstate__:
def __setstate__(self, d):
    self.__dict__ = d
    self.sp_model = spm.SentencePieceProcessor()
    self.sp_model.Load(self.vocab_file)
which tries to load the vocab_file using the path from the file.
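(For context on why the path ends up inside the .pkl at all: pickle serializes the tokenizer's instance __dict__, and __setstate__ simply restores it, so whatever absolute vocab_file path existed on the training machine comes back verbatim. A stripped-down illustration with a hypothetical class:)

import pickle

class Tok:
    def __init__(self, vocab_file):
        self.vocab_file = vocab_file  # absolute path captured at construction time

    def __setstate__(self, d):
        self.__dict__ = d  # restores vocab_file exactly as it was when pickled

t = Tok("/home/trainer/.cache/torch/transformers/abc123")
restored = pickle.loads(pickle.dumps(t))
print(restored.vocab_file)  # -> the training-machine path, even on another host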
I've tried patching this method using:
from types import MethodType

import sentencepiece as spm
from transformers import XLMRobertaTokenizer

pretrained_model_name = "xlm-roberta-base"
vocab_file = XLMRobertaTokenizer.from_pretrained(pretrained_model_name).vocab_file

def _setstate(self, d):
    self.__dict__ = d
    self.sp_model = spm.SentencePieceProcessor()
    self.sp_model.Load(vocab_file)  # load from the locally resolved path, not the pickled one

XLMRobertaTokenizer.__setstate__ = MethodType(_setstate, XLMRobertaTokenizer(vocab_file))
And that successfully loaded the model but caused other problems like missing model attributes and other unwanted issues.
Can someone please explain why the path is embedded inside the file? Is there a way to configure it without re-exporting the model, or, if it has to be re-exported, how can it be configured dynamically using fastai, torch and huggingface?
I faced the same error. I had fine-tuned XLMRoberta on a downstream classification task with fastai version 1.0.61. I'm loading the model inside Docker.
I'm not sure why the path is embedded, but I found a workaround. Posting for future readers who might be looking for one, as retraining is usually not possible.
I created /home/<username>/.cache/torch/transformers/ inside the Docker image:
RUN mkdir -p /home/<username>/.cache/torch/transformers
and copied the files (which were not found in Docker) from my local /home/<username>/.cache/torch/transformers/ into the same path in the image:
COPY filename /home/<username>/.cache/torch/transformers/filename
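An alternative sketch that avoids copying the files by hand, untested and assuming the image uses the same transformers version and default cache directory so the hashed cache filenames line up: pre-download the tokenizer during the image build (e.g. from a RUN step) so the files the .pkl expects already exist:

# prefetch_tokenizer.py -- run during the Docker image build, e.g. RUN python prefetch_tokenizer.py
from transformers import XLMRobertaTokenizer

# Downloads sentencepiece.bpe.model into the default cache
# (~/.cache/torch/transformers for transformers 2.x/3.x),
# which is where the unpickled tokenizer will look for its vocab_file.
XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")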

Ran into the following issue: build_tensor_info is not supported in Eager mode

I am playing around with TensorFlow and trying to export a Keras model as a TensorFlow model, and I ran into the above-mentioned error. I am following "Building Deep Learning Applications with Keras 2.0" from Lynda (https://www.linkedin.com/learning/building-deep-learning-applications-with-keras-2-0/exporting-google-cloud-compatible-models?u=42751868).
While trying to build the TensorFlow model, I came across this error, thrown at line 66, where the add_meta_graph_and_variables function is called:
line 66, in build_tensor_info
raise RuntimeError("build_tensor_info is not supported in Eager mode.")
RuntimeError: build_tensor_info is not supported in Eager mode.
...
model_builder.add_meta_graph_and_variables(
    K.get_session(),
    tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.compat.v1.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
    }
)
...
Any thoughts folks?
It's because you are using TensorFlow v2. You have to use the TensorFlow v2 compatibility module and disable eager mode.
Be careful with the TensorFlow imports you use; for example, if you use tensorflow_core, be sure that you are using all the dependencies from "tensorflow".
You have to add this before your code:
import tensorflow as tf

if tf.executing_eagerly():
    tf.compat.v1.disable_eager_execution()
I ran into that exact same problem when compiling that code from LinkedIn Learning for exporting the Keras model; the code was written using the TensorFlow 1.x API. After replacing everything with the equivalent tf.compat.v1 functions, such as changing
model_builder = tf.saved_model.builder.SavedModelBuilder("exported_model")
to
model_builder = tf.compat.v1.saved_model.Builder("exported_model")
and disabling eager execution as suggested above by Cristian Zumelzu, the code ran fine with the expected warnings about the deprecated functions.
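For completeness, a minimal end-to-end sketch of that tf.compat.v1 export path; the model here is a placeholder, not the course code, and the signature names are assumptions:

import tensorflow as tf
from tensorflow import keras

# Must run before the model and its session are created.
tf.compat.v1.disable_eager_execution()

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])  # placeholder model

model_builder = tf.compat.v1.saved_model.Builder("exported_model")

signature_def = tf.compat.v1.saved_model.build_signature_def(
    inputs={"input": tf.compat.v1.saved_model.build_tensor_info(model.input)},
    outputs={"output": tf.compat.v1.saved_model.build_tensor_info(model.output)},
    method_name=tf.compat.v1.saved_model.signature_constants.PREDICT_METHOD_NAME,
)

model_builder.add_meta_graph_and_variables(
    tf.compat.v1.keras.backend.get_session(),
    tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.compat.v1.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
    },
)
model_builder.save()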

Loading a TensorFlow frozen graph (.pb) in Node.js

I saved a TensorFlow model as a frozen .pb file, which is suitable for use with TensorFlow Lite.
This file can be loaded on Android and works well with the following code:
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;
…
TensorFlowInferenceInterface inferenceInterface;
inferenceInterface = new TensorFlowInferenceInterface(context.getAssets(), "MODEL_FILE.pb");
Is there any way to load the frozen graph in Node.js?
I found the solution here:
1. The model must be converted to a web-friendly format, which is a JSON file plus binary weight files.
2. It can then be loaded using '@tensorflow/tfjs' in Node.js.

How to use Keras as an interface to Theano?

For TensorFlow, this post explains very well how to use Keras with TensorFlow:
https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
But I can't find how to use Keras with Theano directly.
Is it not possible to use it the way it is used with TensorFlow?
The official documentation is here: https://keras.io/backend/
Basically, edit your $HOME/.keras/keras.json (Linux) or %USERPROFILE%\.keras\keras.json (Windows) configuration file.
Use:
{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
(Note the "backend" is set to "theano".)
This will of course change all your Keras projects to use Theano.
If you only want to change one project, you can set the KERAS_BACKEND environment variable, either from the command line or in code before you import keras:
import os
os.environ["KERAS_BACKEND"] = "theano"
import keras
(I tested this on Windows 10 with Python 3.5, with both Theano and TensorFlow installed: remove those lines and it uses TensorFlow; include them and it uses Theano.)
It's nice to include this in your Python source because the dependency is then recorded explicitly in source control. As the underlying ML library Keras uses is not 100% abstracted (there are lots of little differences that seep through), having the code indicate that it needs one or the other is probably a good idea.
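A quick way to confirm which backend is actually active (works with either configuration method):

from keras import backend as K

print(K.backend())  # prints "theano" or "tensorflow" depending on the active configuration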
I hope that helps,
Robert
