Failed to construct interpreter TensorflowLite - python-3.x

I am using CUDA 10.1, cuDNN 7.6, TensorFlow 2.3.0, and Keras 2.4.3.
When training the model I use save_weights_only=True, and a folder is saved containing:
assets, variables and saved_model.pb
When I convert the model to a TFLite model using:
converter = tf.lite.TFLiteConverter.from_saved_model('x/210126_Test')
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
I get no errors of any kind, but when I try to use the model in an iPad application, it only returns the following:
"Exception Error: Failed to construct interpreter"
Has anyone experienced this or does anyone know a possible solution?

The problem was resolved by changing package versions:
TensorFlow = 2.0.0 & Keras = 2.3.1
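Independent of the version change, a quick sanity check is to construct an interpreter on the converted file in Python before deploying to the iPad. A minimal sketch, assuming model.tflite is the file written above:

import tensorflow as tf

# Building the interpreter raises if the flatbuffer is malformed or
# contains ops the TFLite runtime does not support.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())

If this fails on the desktop as well, the problem lies in the conversion rather than in the iOS runtime.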

Related

tensorboard images are not getting updated

I have the following code for my reinforcement learning program:
from tensorflow.keras import callbacks

tbCallBack = callbacks.TensorBoard(log_dir=log_dir,
                                   histogram_freq=1,
                                   update_freq='epoch',
                                   write_graph=True,
                                   write_images=True)
The model is fitted as soon as the training data is available:
model.fit(x=np.vstack(x_train),
          y=np.vstack(y_train),
          callbacks=[tbCallBack],
          verbose=1,
          sample_weight=s_t)
I see my SCALARS, DISTRIBUTIONS, and HISTOGRAMS all getting updated in TensorBoard, but not the IMAGES.
I see only one image, and it never updates. Can you please let me know where the problem is?
Here is the version information:
tensorboard 2.3.0 pyh4dce500_0
tensorboard-plugin-wit 1.6.0 py_0
keras 2.4.3 0
keras-base 2.4.3 py_0
I am not sure if my answer will help, but as far as my experience goes,
TensorBoard doesn't seem to update images well when running on Windows, regardless of the web browser being used.
The issue seems to exist even in the latest versions of TensorBoard (2.6.0) and TensorFlow (2.6.0).
Please check the OS on which you are running your reinforcement learning model. If you are using Windows, I suggest porting the Python code to a Linux-based OS such as Ubuntu.
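To rule out the Keras callback itself, you can write an image summary directly and check whether TensorBoard refreshes it. A minimal sketch, assuming log_dir is the same directory the callback writes to; the image data here is a hypothetical placeholder:

import numpy as np
import tensorflow as tf

writer = tf.summary.create_file_writer(log_dir)
with writer.as_default():
    # Hypothetical placeholder: one 28x28 grayscale image with values in [0, 1].
    img = np.random.rand(1, 28, 28, 1).astype('float32')
    tf.summary.image('debug_image', img, step=0)
    writer.flush()

If this image appears and updates while the callback's images do not, the callback configuration is the likely culprit; if it also fails to refresh, the platform explanation above becomes more plausible.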

Getting AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike' in tensorflow

I'm trying to build an encoder-decoder-like model, and I get the following error at model.fit:
AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike'
model.fit(train_dataloader,
          validation_data=test_dataloader,
          steps_per_epoch=len(train_dataset) // 8,
          epochs=10)
I'm using Keras 2.3.1 and segmentation-models.
How can I resolve it?
This issue is fixed in the latest Keras version (2.6.0).
Workaround for older Keras versions:
Change is_tensor in the file keras/backend/tensorflow_backend.py (as of Keras with TensorFlow 2.3.0):
from tensorflow.python.framework import tensor_util

def is_tensor(x):
    return tensor_util.is_tensor(x)
    # return isinstance(x, tf_ops._TensorLike) or tf_ops.is_dense_tensor_like(x)
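If you would rather not edit the installed package, a runtime monkey patch may achieve the same effect. A sketch, assuming Keras resolves is_tensor through the tensorflow_backend module attribute (apply it before building the model):

from tensorflow.python.framework import tensor_util
import keras.backend.tensorflow_backend as tfb

# Swap the broken implementation for the TensorFlow one at runtime.
tfb.is_tensor = tensor_util.is_tensor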

TF2 - hub.text_embedding_column still to be implemented

I'm migrating an old TensorFlow 1.x training script and I ran into a problem with the hub.text_embedding_column function. At the moment, the following code does not work:
# Python 3.6.9
import tensorflow as tf # tf.__version__ 2.1.0
import tensorflow_hub as hub # hub.__version__ 0.7.0
module_spec = hub.load_module_spec('https://tfhub.dev/google/universal-sentence-encoder/4')
text_column = hub.text_embedding_column(key='test_col', module_spec=module_spec)
The error that I got is:
RuntimeError: Missing implementation that supports: loader(*('/tmp/tfhub_modules/29abffb443cb0a0ca9c72e8e3863b76d85028490',), **{})
I tried help(hub.text_embedding_column) and the help returns:
TODO(b/131678043): This does not work yet with TF2.
Do you know any workaround to use text_embedding_column with TF2? I'm able to load the model using hub.load('https://tfhub.dev/google/universal-sentence-encoder/4') but then I don't know what to do with it.
Thank you all (:
Following this GitHub issue I was able to solve my problem, at least for now. It's still based on tf.compat.v1.placeholder, but at least I can use hub.text_embedding_column.
# Python 3.6.9
import tensorflow as tf # tf.__version__ 2.1.0
import tensorflow_hub as hub # hub.__version__ 0.7.0
def build_module_fn(model_url):
    def module_fn():
        text_input = tf.compat.v1.placeholder(dtype=tf.string, shape=[None])
        embed_layer = hub.KerasLayer(
            model_url,
            input_shape=[],   # Expects a tensor of shape [batch_size] as input.
            dtype=tf.string)  # Expects a tf.string input tensor.
        embeddings = embed_layer(text_input)
        hub.add_signature(inputs=text_input, outputs=embeddings)
    return module_fn

module_spec = hub.create_module_spec(build_module_fn('https://tfhub.dev/google/universal-sentence-encoder/4'))
text_column = hub.text_embedding_column(key='text', module_spec=module_spec)
Hope that this can help other people!
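For completeness, the resulting column can be consumed like any other feature column, e.g. by a premade estimator. A minimal sketch, assuming the text_column variable from the snippet above and a hypothetical train_input_fn that yields a 'text' feature:

# Hypothetical consumer: a premade estimator built on the hub column.
estimator = tf.estimator.DNNClassifier(
    hidden_units=[64],
    feature_columns=[text_column],
    n_classes=2)
# estimator.train(input_fn=train_input_fn)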

Operator translate error occurs when I try to convert onnx file to caffe2

I trained an object detection model in PyTorch and exported it to an ONNX file.
Now I want to convert it to a Caffe2 model:
import onnx
import numpy as np
import caffe2.python.onnx.backend as onnx_caffe2_backend

# Load the ONNX ModelProto object. model is a standard Python protobuf object.
model = onnx.load("CPU4export.onnx")

# Prepare the Caffe2 backend for executing the model. This converts the ONNX
# model into a Caffe2 NetDef that can execute it. Other ONNX backends, like
# one for CNTK, will be available soon.
prepared_backend = onnx_caffe2_backend.prepare(model)

# Run the model in Caffe2.
# Construct a map from input names to Tensor data. The graph of the model
# itself contains inputs for all weight parameters, after the input image.
# Since the weights are already embedded, we only need to pass the input
# image (x and torch_out come from the PyTorch export step).
W = {model.graph.input[0].name: x.data.numpy()}

# Run the Caffe2 net:
c2_out = prepared_backend.run(W)[0]

# Verify the numerical correctness up to 3 decimal places.
np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3)
print("Exported model has been executed on Caffe2 backend, and the result looks good!")
I always get this error:
RuntimeError: ONNX conversion failed, encountered 1 errors:
Error while processing node: input: "90"
input: "91"
output: "92"
op_type: "Resize"
attribute {
name: "mode"
s: "nearest"
type: STRING
}
. Exception: Don't know how to translate op Resize
How can I solve it ?
The problem is that the Caffe2 ONNX backend does not yet support translation of the Resize operator.
Please raise an issue on the Caffe2 / PyTorch GitHub -- there's an active community of developers who should be able to address this use case.
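In the meantime, one workaround worth trying is to re-export the ONNX file with a lower opset version: Resize was introduced in ONNX opset 10, and with opset 9 the PyTorch exporter emits the older Upsample op for interpolation layers, which the Caffe2 backend can translate. A sketch, assuming model and dummy_input are the objects used for the original export:

import torch

# Re-export with opset 9 so upsampling is emitted as Upsample rather
# than Resize; model and dummy_input are assumed to exist.
torch.onnx.export(model, dummy_input, "CPU4export.onnx", opset_version=9)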

Tensorflow serving: request fails with object has no attribute 'unary_unary'

I'm building a CNN text classifier using TensorFlow, which I want to load in tensorflow-serving and query using the serving APIs. When I call the Predict() method on the gRPC stub I receive this error: AttributeError: 'grpc._cython.cygrpc.Channel' object has no attribute 'unary_unary'
What I've done to date:
I have successfully trained and exported a model suitable for serving (i.e., the signatures are verified and using tf.Saver I can successfully return a prediction). I can also load the model in tensorflow_model_server without error.
Here is a snippet of the client code (simplified for readability):
with tf.Session() as sess:
    host = FLAGS.server
    channel = grpc.insecure_channel('localhost:9001')
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'predict_text'
    request.model_spec.signature_name = 'predict_text'
    x_text = ["space"]

    # Restore the vocab processor,
    # then create an ndarray with fit_transform using the vocabulary.
    vocab = learn.preprocessing.VocabularyProcessor.restore('/some_path/model_export/1/assets/vocab')
    x = np.array(list(vocab.fit_transform(x_text)))

    # Data.
    temp_data = tf.contrib.util.make_tensor_proto(x, shape=[1, 15], verify_shape=True)
    request.inputs['input'].CopyFrom(tf.contrib.util.make_tensor_proto(x, shape=[1, 15], verify_shape=True))

    # Get classification prediction.
    result = stub.Predict(request, 5.0)
Where I'm bending the rules: I am using tensorflow-serving-apis on Python 3.5.3, where pip install is not officially supported. Various posts (example: https://github.com/tensorflow/serving/issues/581) have reported success using tensorflow-serving with Python 3. I downloaded the tensorflow-serving-api package from PyPI (https://pypi.python.org/pypi/tensorflow-serving-api/1.5.0) and manually pasted it into the environment.
Versions: tensorflow: 1.5.0, tensorflow-serving-apis: 1.5.0, grpcio: 1.9.0rc3, grpcio-tools: 1.9.0rc3, protobuf: 3.5.1 (all other dependency versions have been verified but are not included for brevity -- happy to add if they have utility)
Environment: Linux Mint 17 Qiana; x64, Python 3.5.3
Investigations:
A GitHub issue (https://github.com/GoogleCloudPlatform/google-cloud-python/issues/2258) indicated that this error has historically been triggered by packages relying on the deprecated gRPC beta API.
What data or learning or implementation am I missing?
beta_create_PredictionService_stub() is deprecated. Try this:
from tensorflow_serving.apis import prediction_service_pb2_grpc
...
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
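Putting it together, a minimal client sketch, assuming the server from the question is listening on localhost:9001 and the request is built the same way:

import grpc
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:9001')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'predict_text'
request.model_spec.signature_name = 'predict_text'
# request.inputs['input'].CopyFrom(...)  # same tensor proto as in the question

result = stub.Predict(request, 5.0)  # 5-second timeout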
Try to use grpc.beta.implementations.insecure_channel instead of grpc.insecure_channel.
See example code here.
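If you stay on the beta API instead, the channel must be created through the beta implementations module so that the stub and channel types match. A sketch, assuming the same host and port:

from grpc.beta import implementations
from tensorflow_serving.apis import prediction_service_pb2

channel = implementations.insecure_channel('localhost', 9001)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)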
