I have a TensorFlow.js model and want to transform it into a TensorFlow Keras model. Is that possible in general? I have seen that converting from Keras to TensorFlow.js is possible, but I could not find anything about the way back.
In a terminal, enter the following command, editing the paths for the input model and for where the output should be saved:
tensorflowjs_converter \
--input_format=tfjs_layers_model \
--output_format=keras \
my_layers_model/model.json \
my_keras_model/
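With --output_format=keras the converter produces a Keras HDF5 file. Assuming it ends up somewhere like my_keras_model/model.h5 (the exact output path/filename here is an assumption; check what the converter actually wrote), you can load it back into Keras roughly like this:
import tensorflow as tf

# Path is an assumption; adjust to wherever the converter wrote the HDF5 file
model = tf.keras.models.load_model('my_keras_model/model.h5')
model.summary()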
I'd like to convert a model (e.g. MobileNet V2) from PyTorch to tflite in order to run it on a mobile device.
Has anyone managed to do so?
All I found was a method that uses ONNX as an intermediate format. However, this does not seem to work properly, as TensorFlow expects NHWC channel order whereas ONNX and PyTorch work with NCHW channel order.
There is a discussion on GitHub; in my case the conversion worked without complaints up to a frozen TensorFlow graph model, but when trying to convert the model further to tflite, it complains about the channel order being wrong...
Here is my code so far:
import torch
import torch.onnx
import onnx
from onnx_tf.backend import prepare
# Create random input
input_data = torch.randn(1,3,224,224)
# Create network
model = torch.hub.load('pytorch/vision:v0.6.0', 'mobilenet_v2', pretrained=True)
model.eval()
# Forward Pass
output = model(input_data)
# Export model to onnx
filename_onnx = "mobilenet_v2.onnx"
filename_tf = "mobilenet_v2.pb"
torch.onnx.export(model, input_data, filename_onnx)
# Export model to tensorflow
onnx_model = onnx.load(filename_onnx)
tf_rep = prepare(onnx_model)
tf_rep.export_graph(filename_tf)
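As a sanity check at this point, the onnx-tf representation can be run directly and compared against the PyTorch output; a minimal sketch, assuming onnx-tf's TensorflowRep.run accepts a NumPy array for this single-input model:
import numpy as np

# Run the same random input through the onnx-tf backend
tf_out = tf_rep.run(input_data.numpy())

# Compare against the PyTorch forward pass computed above
print(np.allclose(output.detach().numpy(), tf_out[0], atol=1e-4))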
Everything works without errors up to this point (ignoring many TF warnings). Then I look up the names of the input and output tensors using Netron ("input.1" and "473").
Finally I apply my usual tf-graph to tf-lite conversion script from bash:
tflite_convert \
--output_file=mobilenet_v2.tflite \
--graph_def_file=mobilenet_v2.pb \
--input_arrays=input.1 \
--output_arrays=473
My configuration:
torch 1.6.0.dev20200508 (needs pytorch-nightly to work with mobilenet V2 from torch.hub)
tensorflow-gpu 1.14.0
onnx 1.6.0
onnx-tf 1.5.0
Here is the exact error message I'm getting from tflite:
Unexpected value for attribute 'data_format'. Expected 'NHWC'
Fatal Python error: Aborted
UPDATE:
Updating my configuration:
torch 1.6.0.dev20200508
tensorflow-gpu 2.2.0
onnx 1.7.0
onnx-tf 1.5.0
using
tflite_convert \
--output_file=mobilenet_v2.tflite \
--graph_def_file=mobilenet_v2.pb \
--input_arrays=input.1 \
--output_arrays=473 \
--enable_v1_converter # <-- needed for conversion of frozen graphs
leading to another error:
Exception: <unknown>:0: error: loc("convolution"): 'tf.Conv2D' op is neither a custom op nor a flex op
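For reference, a common workaround for this class of error is to let the converter keep unsupported ops as Select TF (Flex) ops via the Python API; a rough sketch, not verified against this exact setup (depending on the TF version the attribute is target_spec.supported_ops or the older target_ops):
import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='mobilenet_v2.pb',
    input_arrays=['input.1'],
    output_arrays=['473'])
# Allow ops without a TFLite builtin kernel to fall back to the TF (Flex) delegate
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
with open('mobilenet_v2.tflite', 'wb') as f:
    f.write(tflite_model)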
Update:
Here is the ONNX model of MobileNet V2 as visualized in Netron (screenshot omitted).
Here is a gdrive link to my converted onnx and pb file
@Ahwar posted a nice solution to this using a Google Colab notebook.
It uses
torch 1.5.0+cu101
torchsummary 1.5.1
torchtext 0.3.1
torchvision 0.6.0+cu101
tensorflow 1.15.2
tensorflow-addons 0.8.3
tensorflow-estimator 1.15.1
onnx 1.7.0
onnx-tf 1.5.0
The conversion is working and the model can be tested on my computer. However, when pushing the model to the mobile phone it only works in CPU mode and is much slower (almost 10-fold) than a corresponding model created directly in TensorFlow. GPU mode does not work on my mobile phone (in contrast to the corresponding model created directly in TensorFlow).
Update:
Apparently, after converting the MobileNet V2 model, the TensorFlow frozen graph contains many more convolution operations than the original PyTorch model (~38,000 vs ~180), as discussed in this GitHub issue.
You can try https://github.com/alibaba/TinyNeuralNetwork to convert the PyTorch model to tflite. It supports all models in torchvision and can eliminate redundant operators, basically without performance loss.
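For illustration, the conversion with TinyNeuralNetwork looks roughly like this; the TFLiteConverter import path and arguments follow the project's README and may change between versions:
import torch
from tinynn.converter import TFLiteConverter

model = torch.hub.load('pytorch/vision:v0.6.0', 'mobilenet_v2', pretrained=True)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Converts straight from PyTorch to tflite, taking care of the NCHW -> NHWC layout change
converter = TFLiteConverter(model, dummy_input, tflite_path='mobilenet_v2.tflite')
converter.convert()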
If we want to use weights from the pretrained BioBERT model, we can execute the following terminal command (here wrapped in os.system) after downloading all the required BioBERT files.
os.system('python3 extract_features.py \
--input_file=trial.txt \
--vocab_file=vocab.txt \
--bert_config_file=bert_config.json \
--init_checkpoint=biobert_model.ckpt \
--output_file=output.json')
The above command reads an individual file containing the text, extracts the textual content from it, and then writes the extracted vectors to another file. The problem is that this cannot easily be scaled to very large datasets containing thousands of sentences/paragraphs.
Is there a way to extract these features on the fly (using an embedding layer), like it can be done for word2vec vectors in PyTorch or TF 1.3?
Note: BioBERT checkpoints do not exist for TF2.0, so I guess there is no way it could be done with TF2.0 unless someone generates TF2.0 compatible checkpoint files.
I will be grateful for any hint or help.
You can get the contextual embeddings on the fly, but the total time spent on getting the embeddings will always be the same. There are two options: 1. import BioBERT into the Transformers package and use it in PyTorch (which I would do), or 2. use the original codebase.
1. Import BioBERT into the Transformers package
The most convenient way of using pre-trained BERT models is the Transformers package. It was primarily written for PyTorch, but it works with TensorFlow as well. It does not have BioBERT out of the box, so you need to convert it from TensorFlow format yourself. There is a convert_tf_checkpoint_to_pytorch.py script that does that. People have had some issues with this script and BioBERT (which seem to be resolved).
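For illustration, the same conversion can also be done directly with the helpers that the script uses; a minimal sketch, assuming load_tf_weights_in_bert is exported by your transformers version (older releases may require importing it from a submodule):
from transformers import BertConfig, BertForPreTraining, load_tf_weights_in_bert

# File names refer to the downloaded BioBERT release
config = BertConfig.from_json_file('bert_config.json')
model = BertForPreTraining(config)
load_tf_weights_in_bert(model, config, 'biobert_model.ckpt')
model.save_pretrained('directory_with_converted_model')
# Also copy vocab.txt into that directory so BertTokenizer can find it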
After you convert the model, you can load it like this.
import torch
from transformers import BertModel, BertTokenizer
# Load tokenizer and model from the converted checkpoint directory
tokenizer = BertTokenizer.from_pretrained('directory_with_converted_model')
model = BertModel.from_pretrained('directory_with_converted_model')
# Call the model in a standard PyTorch way: encode, wrap in a tensor, run a forward pass
input_ids = torch.tensor([tokenizer.encode("Cool biomedical tetra-hydro-sentence.", add_special_tokens=True)])
with torch.no_grad():
    embeddings = model(input_ids)[0]  # last hidden state, shape (1, seq_len, hidden_size)
2. Use the BioBERT codebase directly
You can get the embeddings on the go basically by using the code that is in extract_features.py. On lines 346-382 they initialize the model, and you get the embeddings by calling estimator.predict(...).
For that, you need to format your input. First, you format the string (using the code on lines 326-337) and then call convert_examples_to_features on it.
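A very rough sketch of that flow, reusing the names defined in extract_features.py (InputExample, convert_examples_to_features, input_fn_builder, tokenizer, estimator); the exact signatures and result keys should be checked against the file itself:
# Inside extract_features.py, once tokenizer and estimator have been built
examples = [InputExample(unique_id=0, text_a="Cool biomedical sentence.", text_b=None)]
features = convert_examples_to_features(
    examples=examples, seq_length=FLAGS.max_seq_length, tokenizer=tokenizer)
input_fn = input_fn_builder(features=features, seq_length=FLAGS.max_seq_length)

for result in estimator.predict(input_fn, yield_single_examples=True):
    # Key name is an assumption; inspect result.keys() for the layer outputs
    token_embeddings = result["layer_output_0"]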
I want to run pre-trained weights using the Darknet API in Python.
Can anyone please tell me how to do that?
I assume that you have compiled darknet. Then you just put the pretrained weights into the darknet folder and run:
./darknet detect cfg/yolov3.cfg pretrained.weights data/dog.jpg
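If you specifically want to do this from Python, the darknet repo ships a small ctypes wrapper (python/darknet.py); a minimal sketch assuming that wrapper and a compiled libdarknet.so (function names may differ in forks such as AlexeyAB's):
import darknet  # the python/darknet.py wrapper shipped with the darknet repo

# Paths are relative to the darknet folder, matching the command above
net = darknet.load_net(b"cfg/yolov3.cfg", b"pretrained.weights", 0)
meta = darknet.load_meta(b"cfg/coco.data")

# Returns a list of (label, confidence, (x, y, w, h)) tuples
detections = darknet.detect(net, meta, b"data/dog.jpg")
print(detections)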
Is there a good way or a tool to convert .caffemodel weight files to HDF5 files that can be used with Keras?
I don't care so much about converting the Caffe model definitions; I can easily write those in Keras manually. I'm just interested in getting the trained weights out of the Caffe binary protocol buffer format and into the Keras format. I'm not a Caffe user, i.e. not very familiar with it.
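For reference, one way to pull the raw weights out of the .caffemodel is to read them with pycaffe and dump them to HDF5 yourself; a minimal sketch, assuming pycaffe is available (the file names are placeholders, and the per-layer mapping to Keras still has to be done by hand):
import caffe
import h5py

# Placeholder file names; use your own deploy prototxt and caffemodel
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

with h5py.File('caffe_weights.h5', 'w') as f:
    for layer_name, blobs in net.params.items():
        grp = f.create_group(layer_name)
        # blobs[0] is typically the weight blob, blobs[1] the bias blob
        for i, blob in enumerate(blobs):
            grp.create_dataset(str(i), data=blob.data)
Note that Caffe stores convolution weights as (out, in, h, w) while Keras expects (h, w, in, out), so the arrays need to be transposed before assigning them to Keras layers.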
My goal is to classify MNIST handwritten digits using keras. I am trying to reproduce the results from this website.
When creating the model ("model = baseline_model()"), I get the error message
AssertionError: Keyword argument not understood: kernel_initializer
Do you know how to solve this issue? I am using Keras 1.1.1 with the Theano back-end.
As mentioned in a comment at the beginning of the article, it is written for Keras 2.0.2, so in order to make this example work you need to use that version of Keras.
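The quickest fix is therefore to upgrade (pip install keras==2.0.2). Alternatively, if you want to stay on Keras 1.1.1, the initializer argument was called init in the 1.x API; a rough sketch of the same kind of layer under that assumption (the layer sizes are just the MNIST defaults, for illustration):
from keras.layers import Dense

# Keras 1.x spelling: 'init' instead of 'kernel_initializer'
layer = Dense(784, input_dim=784, init='normal', activation='relu')
# In Keras 2.x the same layer is written as:
# layer = Dense(784, input_dim=784, kernel_initializer='normal', activation='relu')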