Loading a TensorFlow frozen graph (.pb) in Node.js

I saved a TensorFlow model as a frozen graph (.pb) file, in a form suitable for use with TensorFlow Lite.
The file loads and works well on Android with the following code:
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;
…
TensorFlowInferenceInterface inferenceInterface;
inferenceInterface = new TensorFlowInferenceInterface(context.getAssets(), "MODEL_FILE.pb");
Is there any way to load the frozen graph in Node.js?

I found the solution here:
1. The frozen graph must first be converted to the web-friendly TensorFlow.js format (a model.json file plus binary weight files), e.g. with the tensorflowjs_converter tool.
2. The converted model can then be loaded with '@tensorflow/tfjs' (or '@tensorflow/tfjs-node') in Node.js.
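In case it helps, here is a rough, hedged sketch of step 1 in Python (the file name is the MODEL_FILE.pb from the question; the converter flags shown in the comments are illustrative and depend on your tensorflowjs version):

import tensorflow as tf

# Parse the frozen graph and list its nodes, so you know which output node
# names to pass to the converter.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("MODEL_FILE.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)

# Illustrative conversion command (older tensorflowjs releases accept the
# tf_frozen_model input format):
#   tensorflowjs_converter --input_format=tf_frozen_model \
#       --output_node_names='<your_output_node>' MODEL_FILE.pb web_model/
# The resulting web_model/model.json can then be loaded from Node.js with
# tf.loadGraphModel('file://web_model/model.json') using '@tensorflow/tfjs-node'.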

Related

Setting up Visual Studio Code to run models from Hugging Face

I am trying to import models from Hugging Face and use them in Visual Studio Code.
I installed transformers, tensorflow, and torch.
I have tried looking at multiple tutorials online but have found nothing.
I am trying to run the following code:
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
result = classifier("I hate it when I'm sitting under a tree and an apple hits my head.")
print(result)
However, I get the following error:
No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english and revision af0f99b (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english).
Using a pipeline without specifying a model name and revision in production is not recommended.
Traceback (most recent call last):
File "c:\Users\user\Desktop\Artificial Intelligence\transformers\Workshops\workshop_3.py", line 4, in <module>
classifier = pipeline('sentiment-analysis')
File "C:\Users\user\Desktop\Artificial Intelligence\transformers\src\transformers\pipelines\__init__.py", line 702, in pipeline
framework, model = infer_framework_load_model(
File "C:\Users\user\Desktop\Artificial Intelligence\transformers\src\transformers\pipelines\base.py", line 266, in infer_framework_load_model
raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
ValueError: Could not load model distilbert-base-uncased-finetuned-sst-2-english with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSequenceClassification'>, <class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForSequenceClassification'>, <class 'transformers.models.distilbert.modeling_distilbert.DistilBertForSequenceClassification'>, <class 'transformers.models.distilbert.modeling_tf_distilbert.TFDistilBertForSequenceClassification'>).
I have already searched online for ways to set up transformers to use in Visual Studio Code but nothing is helping.
Does anyone know how to fix this error, or how to successfully use models from Hugging Face in my code? Any help would be appreciated.
This question is a little less about Hugging Face itself and likely more about your installation, the installation steps you took, and potentially your program's access to the cache directory that the models are automatically downloaded to.
From what I am seeing either:
1/ your program is unable to access the model
2/ your program is throwing specific value errors in a bit of an edge case
If 1/, take a look here: https://huggingface.co/docs/transformers/installation#cache-setup
Notice that the docs walk through where the pre-trained models are downloaded. Check that the model was actually downloaded to C:\Users\username\.cache\huggingface\hub (with your own username on your computer, of course), or to whichever cache location the docs mention for your setup.
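For example, a quick way to peek at the cache from Python (this assumes the default cache location; adjust it if you have set HF_HOME):

import os

# Default Hugging Face hub cache; downloaded models usually appear as
# directories named like "models--distilbert-base-uncased-finetuned-sst-2-english".
cache_dir = os.path.expanduser(os.path.join("~", ".cache", "huggingface", "hub"))
print(os.listdir(cache_dir))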
Second, if there is an issue with downloading for some reason, you can try downloading the model manually and running in offline mode (this is more to get things up and running): https://huggingface.co/docs/transformers/installation#offline-mode
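A minimal sketch of that manual-download / offline-mode approach, assuming you pre-fetch with huggingface_hub's snapshot_download and use the TRANSFORMERS_OFFLINE variable described in the linked docs:

from huggingface_hub import snapshot_download

# While online, download the model files into the local cache once.
snapshot_download("distilbert-base-uncased-finetuned-sst-2-english")

# Afterwards, force transformers to use only local files, e.g. by setting the
# environment variable before launching the script:
#   set TRANSFORMERS_OFFLINE=1    (Windows cmd)
# and then load the pipeline with an explicit model name as usual.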
Third, if it is downloaded, do you have the right permissions to access the .cache directory? Try running your program (if it is a program that you trust) in Windows Terminal as an administrator. There are various ways to do this; find one you're comfortable with. Here are a couple of hints from Stack Overflow/StackExchange: Opening up Windows Terminal with elevated privileges, from within Windows Terminal, or this: https://superuser.com/questions/1560049/open-windows-terminal-as-admin-with-winr
If 2/, I have seen people bring up very specific issues about certain classes not being found (not the same as yours, but similar), and the issue was solved by installing PyTorch, because some models only exist as PyTorch models. You can see the full response from @YokoHono here: Transformers model from Hugging-Face throws error that specific classes couldn't be loaded
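As a hedged sketch of that second fix, assuming the problem really is the missing PyTorch backend: after installing torch, name the model and framework explicitly instead of letting pipeline() guess:

# pip install torch   (in addition to transformers)
from transformers import pipeline

# Specify the model and force the PyTorch backend rather than relying on defaults.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    framework="pt",
)

result = classifier("I hate it when I'm sitting under a tree and an apple hits my head.")
print(result)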

How to add metadata to a TensorFlow Lite model taken from GitHub?

I have used this project from Github: https://github.com/nicknochnack/TFODCourse
The project contains a model that can detect license plates in a given vehicle image. The GitHub repo also contains code for converting the model into a TensorFlow Lite file.
I used that code to generate the TFLite file.
Then I followed this link: https://developers.google.com/codelabs/tflite-object-detection-android
where I downloaded the sample object detection application and, following the instructions, copied my TFLite files into the Android application.
Now, if I run the application and take a photo, it gives me this error,
/TaskJniUtils: Error getting native address of native library: task_vision_jni
java.lang.RuntimeException: Error occurred when initializing ObjectDetector: Input tensor has type kTfLiteFloat32: it requires specifying NormalizationOptions metadata to preprocess input images.
at org.tensorflow.lite.task.vision.detector.ObjectDetector
I understand that I have to add metadata to my TFLite model, so I searched for how to do that and ended up at this link: https://www.tensorflow.org/lite/models/convert/metadata#model_with_metadata_format
But I didn't understand at all what exactly I should be doing. Can anyone please point me in the right direction: for my problem specifically, what exactly do I need to do?
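For reference, a hedged sketch of what that metadata page describes, using the tflite-support metadata writer for object detectors; the file names and normalization values below are placeholders to adapt to your own model:

from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils

MODEL_PATH = "detect.tflite"                 # placeholder: your exported TFLite model
LABEL_PATH = "labelmap.txt"                  # placeholder: one class name per line
EXPORT_PATH = "detect_with_metadata.tflite"  # placeholder: output with metadata

# The mean/std pair below becomes the NormalizationOptions that the Android error
# asks for; [127.5], [127.5] maps uint8 pixels into [-1, 1].
writer = object_detector.MetadataWriter.create_for_inference(
    writer_utils.load_file(MODEL_PATH),
    [127.5],       # input normalization mean
    [127.5],       # input normalization std
    [LABEL_PATH],  # label file(s)
)
writer_utils.save_file(writer.populate(), EXPORT_PATH)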

Unable to solve multiprocessing.Manager.Lock() error in Python code (VS editor)

I am using machine learning in my Python (version 3.8.5) code. In the preprocessing part I need to hash-encode a few features, so during the training phase I fitted a hash encoder and dumped it to a pickle file named 'hash_encoder.pkl'. Now, in the testing phase, I need to transform the features using this pickle file. I'm using the code shown in the screenshot to hash-encode three string features, as given in its first line.
On the encoder.transform line, I get an error at "data_lock=multiprocessing.Manager().Lock()".
At the end it also raises an EOFError.
I have used the same version of pandas (1.1.3) both to dump the hash_encoder file and to load it. I'm not sure why this is coming up.
Can someone help me understand or debug this part?
I have added a screenshot of the error.
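Without the screenshot it is hard to be precise, but on Windows a common cause of Manager()/EOFError failures is running multiprocessing code without the __main__ guard. A hedged sketch of the guarded version (the column names are made up, since the real ones are only visible in the screenshot):

import pickle
import pandas as pd

def hash_encode(df):
    # Load the encoder that was fitted and pickled during training.
    with open("hash_encoder.pkl", "rb") as f:
        encoder = pickle.load(f)
    return encoder.transform(df)

if __name__ == "__main__":
    # The guard matters on Windows: multiprocessing spawns fresh interpreters that
    # re-import this module, and unguarded top-level code can end in the
    # Manager().Lock() / EOFError failures described above.
    test_df = pd.DataFrame({"feat_a": ["x"], "feat_b": ["y"], "feat_c": ["z"]})  # placeholder features
    print(hash_encode(test_df))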

Unable to save keras model in databricks

I am saving a Keras model with
model.save('model.h5')
in Databricks, but the model is not being saved.
I have also tried saving to /tmp/model.h5 as mentioned here,
but the model is still not saved.
The saving cell executes, but when I load the model it says no model.h5 file is available.
When I do this:
dbfs_model_path = 'dbfs:/FileStore/models/model.h5'
dbutils.fs.cp('file:/tmp/model.h5', dbfs_model_path)
or try loading the model:
tf.keras.models.load_model("file:/tmp/model.h5")
I get the error message java.io.FileNotFoundException: File file:/tmp/model.h5 does not exist
The problem is that Keras is designed to work only with local files, so it doesn't understand URIs such as dbfs:/ or file:/. You need to use local paths for saving & loading operations, and then copy files to/from DBFS (unfortunately, /dbfs doesn't play well with Keras because of the way it works).
The following code works just fine. Note that dbfs:/ or file:/ are used only in the calls to the dbutils.fs commands - Keras stuff uses the names of local files.
create model & save locally as /tmp/model-full.h5:
from tensorflow.keras.applications import InceptionV3
model = InceptionV3(weights="imagenet")
model.save('/tmp/model-full.h5')
copy data to DBFS as dbfs:/tmp/model-full.h5 and check it:
dbutils.fs.cp("file:/tmp/model-full.h5", "dbfs:/tmp/model-full.h5")
display(dbutils.fs.ls("/tmp/model-full.h5"))
copy file from DBFS as /tmp/model-full2.h5 & load it:
dbutils.fs.cp("dbfs:/tmp/model-full.h5", "file:/tmp/model-full2.h5")
from tensorflow import keras
model2 = keras.models.load_model("/tmp/model-full2.h5")

Issue using TF Text with Tensorflow serving

I have a model with TF Text preprocessing included in the graph exported for TensorFlow Serving, but I get the following error:
2019-06-28 13:29:19.048482: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: my_model version: 1561728496} failed: Not found: Op type not registered 'UnicodeScriptTokenizeWithOffsets' in binary running on 57085cb71277. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
It seems that the 'UnicodeScriptTokenizeWithOffsets' op from TF Text is not included in TF Serving. I saw that a solution could be to build a custom serving image that includes the TF Text ops and kernels, using the TensorFlow Docker development image. Is that right? How could I do that? Or maybe there is a simpler solution?
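For what it's worth, one way to confirm that the exported graph itself is fine is to load it in a plain Python process with tensorflow_text imported, since the import registers the custom ops. A hedged sketch, with a placeholder path:

import tensorflow as tf
import tensorflow_text  # noqa: F401 -- importing registers the TF Text ops, including UnicodeScriptTokenizeWithOffsets

# If loading succeeds here, the custom ops are registered in this process; the
# serving error then means tensorflow_model_server was built without them.
model = tf.saved_model.load("/path/to/export/my_model/1561728496")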
I'm using:
TensorFlow 2.0.0b0
TensorFlow Text 1.0.0b0
Serving is done with the official Docker image: tensorflow/serving:latest
Thanks,
Maxime
