Create a new TensorFlow image classification module using TensorFlow Hub - tensorflow-hub

I'm new to TensorFlow and am trying to create a new image classification module. I tried the example below using TensorFlow Hub, but the module is not created. Is there a simple example for creating an image classification module?
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np

def module_fn():
    inputs = tf.placeholder(dtype=tf.float32, shape=[None, 50])
    layer1 = tf.layers.dense(inputs, 200)
    layer2 = tf.layers.dense(layer1, 100)
    outputs = dict(default=layer2, hidden_activations=layer1)
    # Add default signature.
    hub.add_signature(inputs=inputs, outputs=outputs)

spec = hub.create_module_spec(module_fn)
module = hub.Module(spec)

with tf.Graph().as_default():
    module = hub.Module('new_test_module')
    test = module(np.random.normal(0, 1, (1, 100)))
    with tf.Session() as session:
        img = session.run(test)

TensorFlow Hub's image classification modules are a collection of examples that showcase how to use model components. To create an image classification component yourself, you can check out the Creating a New Module section of the TensorFlow Hub API guide.
Example usage:
To define a new module, a publisher calls hub.create_module_spec() with a function module_fn. This function constructs a graph representing the module's internal structure, using tf.placeholder() for inputs to be supplied by the caller. Then it defines signatures by calling hub.add_signature(name, inputs, outputs) one or more times.
def module_fn():
    inputs = tf.placeholder(dtype=tf.float32, shape=[None, 50])
    layer1 = tf.layers.dense(inputs, 200)
    layer2 = tf.layers.dense(layer1, 100)
    outputs = dict(default=layer2, hidden_activations=layer1)
    # Add default signature.
    hub.add_signature(inputs=inputs, outputs=outputs)

spec = hub.create_module_spec(module_fn)
The result of hub.create_module_spec() can be used, instead of a path, to instantiate a module object within a particular TensorFlow graph. In that case there is no checkpoint, and the module instance will use its variable initializers instead.
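A minimal usage sketch (my own, TF1-style API, not from the Hub docs) could look like the following. Note that the array you feed must match the [None, 50] placeholder declared in module_fn, and the module's variables have to be initialized before running:
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

with tf.Graph().as_default():
    module = hub.Module(spec)   # instantiate from the spec, not from an exported path
    inputs = tf.placeholder(dtype=tf.float32, shape=[None, 50])
    outputs = module(inputs)    # calls the default signature
    with tf.Session() as session:
        session.run([tf.global_variables_initializer(), tf.tables_initializer()])
        result = session.run(outputs, feed_dict={inputs: np.random.normal(0, 1, (1, 50))})
        print(result.shape)     # (1, 100)
        # Once trained, the module could be written to disk with
        # module.export('new_test_module', session) and later loaded via hub.Module('new_test_module').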

Related

torch_tensorrt compilation error: Unknown type bool encountered in graph lowering. This type is not supported in ONNX export

I am following this article, https://pytorch.org/TensorRT/tutorials/serving_torch_tensorrt_with_triton.html, to serve a torch-tensorrt model in Triton server, but:
import torch
import torch_tensorrt

torch.hub._validate_not_a_forked_repo = lambda a, b, c: True

from src.models.resnet2 import ResNet2

# load model
model = ResNet2(output_size=2)
model.load_state_dict(torch.load('Epoch_9_Valacc_0.911_9_.pth'))

# Compile with Torch-TensorRT
trt_model = torch_tensorrt.compile(model,
    inputs=[torch_tensorrt.Input((1, 3, 640, 640))],
    enabled_precisions={torch.half},  # run with FP16
    debug=True
)
I am getting an error at the last step, where it compiles the torch model:
"Unknown type bool encountered in graph lowering. This type is not supported in ONNX export"
Please let me know how to overcome this.
The issue here was that I was directly compiling a torch model with torch_tensorrt, but torch_tensorrt needs a TorchScript module.
so solution is
# Switch the model to eval mode
model.eval()
# An example input you would normally provide to your model's forward() method.
sample_input = torch.randn((1, 3, 640, 640)).float()
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, sample_input)
Then use the traced_script_module to compile with torch_tensorrt.
BTW, I am using the tracing method to convert the torch model to TorchScript.
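Putting the pieces together, a minimal hedged sketch (reusing the model and input size from the question) of tracing first and then compiling the TorchScript module with Torch-TensorRT:
import torch
import torch_tensorrt

# Trace the model to TorchScript first (as described above), then compile the traced module.
model.eval()
sample_input = torch.randn((1, 3, 640, 640)).float()
traced_script_module = torch.jit.trace(model, sample_input)

trt_model = torch_tensorrt.compile(
    traced_script_module,                             # TorchScript module, not the raw nn.Module
    inputs=[torch_tensorrt.Input((1, 3, 640, 640))],
    enabled_precisions={torch.half},                  # FP16
)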

resize_token_embeddings on a pretrained model with a different embedding size

I would like to ask about the way to change the embedding size of the trained model.
I have a trained model models/BERT-pretrain-1-step-5000.pkl.
Now I am adding a new token [TRA] to the tokeniser and trying to use resize_token_embeddings on the pretrained one.
import torch
from pytorch_pretrained_bert_inset import BertModel #BertTokenizer
from transformers import AutoTokenizer
from torch.nn.utils.rnn import pad_sequence
import tqdm
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model_bert = BertModel.from_pretrained('bert-base-uncased', state_dict=torch.load('models/BERT-pretrain-1-step-5000.pkl', map_location=torch.device('cpu')))
#print(tokenizer.all_special_tokens) #--> ['[UNK]', '[SEP]', '[PAD]', '[CLS]', '[MASK]']
#print(tokenizer.all_special_ids) #--> [100, 102, 0, 101, 103]
num_added_toks = tokenizer.add_tokens(['[TRA]'], special_tokens=True)
model_bert.resize_token_embeddings(len(tokenizer)) # --> Embedding(30523, 768)
print('[TRA] token id: ', tokenizer.convert_tokens_to_ids('[TRA]')) # --> 30522
But I encountered the error:
AttributeError: 'BertModel' object has no attribute 'resize_token_embeddings'
I assume that it is because the model_bert (BERT-pretrain-1-step-5000.pkl) I have has a different embedding size.
I would like to know if there is any way to make the embedding size of the model fit my modified tokeniser while using it as the initial weights.
Thanks a lot!!
resize_token_embeddings is a huggingface transformers method. You are using the BertModel class from pytorch_pretrained_bert_inset, which does not provide such a method. Looking at the code, it seems like they copied the BERT code from huggingface some time ago.
You can either wait for an update from INSET (maybe create a github issue) or write your own code to extend the word_embedding layer:
from torch import nn

embedding_layer = model.embeddings.word_embeddings
old_num_tokens, old_embedding_dim = embedding_layer.weight.shape
num_new_tokens = 1

# Creating new embedding layer with more entries
new_embeddings = nn.Embedding(old_num_tokens + num_new_tokens, old_embedding_dim)

# Setting device and type accordingly
new_embeddings.to(
    embedding_layer.weight.device,
    dtype=embedding_layer.weight.dtype,
)

# Copying the old entries
new_embeddings.weight.data[:old_num_tokens, :] = embedding_layer.weight.data[:old_num_tokens, :]

model.embeddings.word_embeddings = new_embeddings
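As a hedged follow-up (using the names from the snippet above; the mean-initialization is just one common heuristic, not something the original code does), you may also want to give the appended row a sensible starting value and verify that the new token id maps into the enlarged table:
import torch

# Initialize the newly added row(s), e.g. with the mean of the existing embeddings.
with torch.no_grad():
    mean_embedding = new_embeddings.weight[:old_num_tokens, :].mean(dim=0)
    new_embeddings.weight[old_num_tokens:, :] = mean_embedding

# Sanity check: '[TRA]' should now map to a valid row (30522 for bert-base-uncased).
new_token_id = tokenizer.convert_tokens_to_ids('[TRA]')
print(new_token_id, model.embeddings.word_embeddings.weight.shape)  # e.g. 30522, torch.Size([30523, 768])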

Cannot export PyTorch model to ONNX

I am trying to convert a pre-trained torch model to ONNX, but receive the following error:
RuntimeError: step!=1 is currently not supported
I'm trying this on a pre-trained colorization model: https://github.com/richzhang/colorization
Here is the code I ran in Google Colab:
!git clone https://github.com/richzhang/colorization.git
cd colorization/
import torch
import colorizers
model = colorizer_siggraph17 = colorizers.siggraph17(pretrained=True).eval()
input_names = [ "input" ]
output_names = [ "output" ]
dummy_input = torch.randn(1, 1, 256, 256, device='cpu')
torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True,
                  input_names=input_names, output_names=output_names)
I appreciate any help :)
UPDATE 1: @Proko's suggestion solved the ONNX export issue. Now I have a new, possibly related problem when I try to convert the ONNX to TensorRT. I get the following error:
[TensorRT] ERROR: Network must have at least one output
Here is the code I used:
import torch
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt
import onnx

TRT_LOGGER = trt.Logger()

def build_engine(onnx_file_path):
    # initialize TensorRT engine and parse ONNX model
    builder = trt.Builder(TRT_LOGGER)
    builder.max_workspace_size = 1 << 25
    builder.max_batch_size = 1
    if builder.platform_has_fast_fp16:
        builder.fp16_mode = True
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # parse ONNX
    with open(onnx_file_path, 'rb') as model:
        print('Beginning ONNX file parsing')
        parser.parse(model.read())
    print('Completed parsing of ONNX file')

    # generate TensorRT engine optimized for the target platform
    print('Building an engine...')
    engine = builder.build_cuda_engine(network)
    context = engine.create_execution_context()
    print("Completed creating Engine")
    return engine, context

ONNX_FILE_PATH = 'siggraph17.onnx'  # Exported using the code above
engine, _ = build_engine(ONNX_FILE_PATH)
I tried to force the build_engine function to use the output of the network by:
network.mark_output(network.get_layer(network.num_layers-1).get_output(0))
but it did not work.
I appreciate any help!
As I have mentioned in a comment, this is because slicing in torch.onnx export supports only step = 1, but there is 2-step slicing in the model:
self.model2(conv1_2[:,:,::2,::2])
Your only option for now is to rewrite the slicing as other ops. You can do it by using range and reshape to obtain the proper indices. Consider the following "step-less arange" function (I hope it is generic enough for anyone with a similar problem):
def sla(x, step):
    diff = x % step
    x += (diff > 0) * (step - diff)  # add length to be able to reshape properly
    return torch.arange(x).reshape((-1, step))[:, 0]
usage:
>>> sla(11, 3)
tensor([0, 3, 6, 9])
Now you can replace every slice like this:
conv2_2 = self.model2(conv1_2[:,:,self.sla(conv1_2.shape[2], 2),:][:,:,:, self.sla(conv1_2.shape[3], 2)])
NOTE: you should optimize this. The indices are calculated on every call, so it might be wise to pre-compute them.
I have tested it with my fork of the repo and I was able to save the model:
https://github.com/prokotg/colorization
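Regarding the NOTE above about pre-computing the indices: a hedged sketch (the module and attribute names below are illustrative, not from the linked fork) of caching the sla() results once as buffers instead of recomputing them on every forward pass:
import torch
import torch.nn as nn

class StridedSliceIndices(nn.Module):
    # Illustrative helper: computes the "step-less-arange" indices once and stores them
    # as buffers so they follow the model (state_dict, .to(device), export).
    def __init__(self, height, width, step=2):
        super().__init__()
        self.register_buffer('h_idx', sla(height, step))  # reuses the sla() helper above
        self.register_buffer('w_idx', sla(width, step))

    def forward(self, x):
        # Same result as x[:, :, ::step, ::step], but without step != 1 slicing
        return x[:, :, self.h_idx, :][:, :, :, self.w_idx]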
What worked for me was to add opset_version=11 to torch.onnx.export.
I first tried opset_version=10, but the API suggested 11, and that works.
So your call should be:
torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True, opset_version=11,
                  input_names=input_names, output_names=output_names)

Is it possible to replace keras Lambda layers with native CoreML ones to convert the model?

For the first time I have faced the need to convert my keras model to coreml. This can be done via the coremltools package:
import coremltools
import keras

model = Model(...)  # keras
coreml_model = coremltools.converters.keras.convert(model,
    input_names="input_image_NHWC",
    output_names="output_image_NHWC",
    image_scale=1.0,
    model_precision='float32',
    use_float_arraytype=True,
    custom_conversion_functions={"Lambda": convert_lambda},
    input_name_shape_dict={'input_image_NHWC': [None, 384, 384, 3]}
)
However, I have two Lambda layers, where the first one is depth-to-space (pixelshuffle) and the other one is a scaling layer:
def tf_upsampler(x):
    return tf.nn.depth_to_space(x, 4)

def mulfunc(x, beta=0.2):
    return beta * x
...
x = Lambda(tf_upsampler)(x)
...
x = Lambda(mulfunc)(x)
The only advice I found was, as far as I understand, to use a custom layer, with the need to implement my layer in Swift code later. Something like this, with MyPixelShuffle and MyScaleLayer to be implemented somehow as classes in the Xcode project (?):
def convert_lambda(layer):
    # Only convert this Lambda layer if it is for our swish function.
    if layer.function == tf_upsampler:
        params = NeuralNetwork_pb2.CustomLayerParams()
        # The name of the Swift or Obj-C class that implements this layer.
        params.className = "MyPixelShuffle"
        # The description is shown in Xcode's mlmodel viewer.
        params.description = "pixelshuffle"
        params.parameters["blockSize"].intValue = 4
        return params
    elif layer.function == mulfunc:
        # https://stackoverflow.com/questions/47987777/custom-layer-with-two-parameters-function-on-core-ml
        params = NeuralNetwork_pb2.CustomLayerParams()
        # The name of the Swift or Obj-C class that implements this layer.
        params.className = "MyScaleLayer"
        params.description = "scaling input"
        # HERE!! This is important.
        params.parameters["scale"].doubleValue = 0.2
        # The description is shown in Xcode's mlmodel viewer.
        params.description = "multiplication by constant"
        return params
However, I found that CoreML actually has the layers I need; they can be found as ScaleLayer and ReorganizeDataLayer.
How can I use those native layers to replace the Lambdas in the keras model? Is it possible to edit the CoreML protobuf for the network? Or, if there are Swift/Obj-C classes for them, what are they called?
Can it be done by deleting/adding layers with coremltools.models.neural_network.NeuralNetworkBuilder?
UPDATE:
I found that the keras converter actually invokes the neural network builder to add the different layers. The builder has the builder.add_reorganize_data layer I need. Now the question is how to replace the custom layers in the model. I can load it into the builder and inspect the layers:
coreml_model_path = 'mymodel.mlmodel'
spec = coremltools.models.utils.load_spec(coreml_model_path)
builder = coremltools.models.neural_network.NeuralNetworkBuilder(spec=spec)
builder.inspect_layers(last=10)
[Id: 417], Name: lambda_10 (Type: custom)
Updatable: False
Input blobs: ['up1_output']
Output blobs: ['lambda_10_output']
It's much simpler to do something like this:
def convert_lambda(layer):
    if layer.function == tf_upsampler:
        params = NeuralNetwork_pb2.ReorganizeDataLayerParams()
        params.fillInTheOtherPropertiesHere = someValue
        return params
    ...etc..
In other words, you don't have to return a custom layer if some existing layer type already does what you want.
Okay, it seems I have found a way to go. I created a virtual environment with a separate copy of coremltools and edited the _convert() method in _keras2_converter.py by adding the following code:
for iter, layer in enumerate(graph.layer_list):
    keras_layer = graph.keras_layer_map[layer]
    print("%d : %s, %s" % (iter, layer, keras_layer))
    if isinstance(keras_layer, _keras.layers.wrappers.TimeDistributed):
        keras_layer = keras_layer.layer
    converter_func = _get_layer_converter_fn(keras_layer, add_custom_layers)
    input_names, output_names = graph.get_layer_blobs(layer)
    # this may be none if we're using custom layers
    if converter_func:
        converter_func(builder, layer, input_names, output_names,
                       keras_layer, respect_trainable)
    else:
        if _is_activation_layer(keras_layer):
            import six
            if six.PY2:
                layer_name = keras_layer.activation.func_name
            else:
                layer_name = keras_layer.activation.__name__
        else:
            layer_name = type(keras_layer).__name__
        if layer_name in custom_conversion_functions:
            custom_spec = custom_conversion_functions[layer_name](keras_layer)
        else:
            custom_spec = None
        if layer.find('tf_up') != -1:
            print('TF_UPSCALE found')
            builder.add_reorganize_data(layer, input_names[0], output_names[0], mode='DEPTH_TO_SPACE', block_size=4)
        elif layer.find('mulfunc') != -1:
            print('SCALE found')
            builder.add_scale(layer, W=0.2, b=0, has_bias=False, input_name=input_names[0], output_name=output_names[0])
        else:
            builder.add_custom(layer, input_names, output_names, custom_spec)
The trigger is the layer name. In keras I use model.load_weights(by_name=True) and mark my lambdas as follows:
x = Lambda(mulfunc, name=scope+'mulfunc')(x)
x = Lambda(tf_upsampler,name='tf_up')(x)
Now the model at least has the layers I need:
[Id: 417], Name: tf_up (Type: reorganizeData)
Updatable: False
Input blobs: ['up1_output']
Output blobs: ['tf_up_output']
Now it's time to validate on my VBox MacOS, will post what I've got.
UPDATE:
I don't see errors regarding my replaced Lambda layers, but I have another one that does not allow me to predict:
Layer 'concatenate_1' type 320 has 1 inputs but expects at least 2
I think it's related to the fact that Keras passes a single input to the concatenate layer, which is a list of inputs. Looking into how to fix this.
UPDATE 2:
Tried to hotfix this by using the builder's add_concat_nd(self, name, input_names, output_name, axis) function for my concatenate layers: got an error during inference that this layer type is unsupported (?!):
if converter_func:
    if layer.find('concatenate') != -1:
        print('CONCATENATE FOUND')
        builder.add_concat_nd(layer, input_names, output_names[0], axis=3)
    else:
        converter_func(builder, layer, input_names, output_names,
                       keras_layer, respect_trainable)
Unsupported layer type (CoreML.Specification.NeuralNetworkLayer)
UPDATE 4:
Found a fix for this and changed the way the builder is initialized:
builder = _NeuralNetworkBuilder(input_features, output_features, mode=mode,
                                use_float_arraytype=use_float_arraytype, disable_rank5_shape_mapping=True)
Now the error message is gone, but I have problems with the Xcode version: the model has version 4, while my Xcode supports 3. Will try to update my VM.
"CoreML survival guide" pdf suggests in this case:
pip install -U git+https://github.com/apple/coremltools.git
For example, if loading a model using coremltools gives an error such as the following, then
try installing the latest coremltools directly from the GitHub repo.
Error compiling model: "Error reading protobuf spec. validator error:
The .mlmodel supplied is of version 3, intended for a newer version of
Xcode. This version of Xcode supports model version 2 or earlier.
UPDATE:
Updated from git. Have error:
No module named 'coremltools.libcoremlpython'
Looks like latest git is broken :(
Damn, it seems I need macos 10.15 and xcode 11
UPDATE 5:
Still fighting with the bug on 10.15. Found that coremltools somehow deduplicates the inputs to the Concatenate layer, so if you have something like Concatenate()([x, x]) in your keras code, you will get a concatenate layer with 1 input in coreml and an error. To fix it I tried to further modify the code above:
if layer.find('concatenate') != -1:
    print('CONCATENATE FOUND', len(input_names))
    if len(input_names) == 1:
        input_names = [input_names[0], input_names[0]]
    builder.add_concat_nd(layer, input_names, output_names[0], axis=3)
I faced this error: input layer 'conv_in' of type 'Convolution' has input rank 3 but expects rank at least 4 (see "coremltools: how to properly use NeuralNetworkMultiArrayShapeRange?"), which seems to be caused by coreml making the input 3-dimensional CHW, while it must be 4-dimensional NHWC (?). Currently playing with the following:
spec = coreml_model._spec
# fixing input shape
# https://stackoverflow.com/questions/59765376/coreml-model-convert-imagetype-model-input-to-multiarray
spec.description.input[0].type.multiArrayType.shape.extend([1, 384, 384, 3])
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]
coremltools.utils.save_spec(spec, "my.mlmodel")
Getting invalid blob shape from model internals

Make ML Engine request for image

I'm trying to craft the correct request for my ML Engine model.
I get
$ gcloud ml-engine predict --model=plantDisease01 --json-instances=request-float32.json
{
"error": "Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT,
details=\"Matrix size-incompatible: In[0]: [1,50176], In[1]: [25088,256]\n\t [[Node: dense_1_1/MatMul = MatMul[T=DT_FLOAT, _output_shapes=[[?,256]], transpose_a=false, transpose_b=false, _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"](_arg_dense_1_input_1_0_0, dense_1_1/kernel/read)]]\")"
}
I generated a sample request with
python -c 'req = []; [req.append(0.2) for i in range(224*224)]; print(req)' &> request-float32.json
I generated the protocol buffer version with the following snippet
# convert keras model to mlengine model
import keras.backend as K
import tensorflow as tf
from keras.models import load_model, Sequential
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def

# reset session
K.clear_session()
sess = tf.Session()
K.set_session(sess)

# disable loading of learning nodes
K.set_learning_phase(0)

# load model
model = load_model('vgg16_no_augmentation.h5')
config = model.get_config()
weights = model.get_weights()
new_Model = Sequential.from_config(config)
new_Model.set_weights(weights)

# export saved model
export_path = 'mlengine-03' + '/export'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'foo-input': new_Model.input},
                                  outputs={'serve': new_Model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()
The model is this one, up to cell 6: https://github.com/ClaudeCoulombe/deep-learning-with-python-notebooks/blob/master/5.3-using-a-pretrained-convnet.ipynb
(I deviated slightly from the linked model; it uses input_dim=4 * 4 * 512, I used a larger one.)
I know 25088 = 7 * 7 * 512, and this is the input_dim in the model. But I'm not sure how I should go from an image to a file with 25,088 floats in it.
The challenge was that you would have to run the convolutional base on the image client-side and then send the extracted features to the model hosting service.
I opted instead to use a version of the model that integrates the convolutional base into the model. Then I created a simple API that runs the model, accepts an image upload, and resizes it before feeding the image into the model.
It's visible at https://github.com/morenoh149/simple-keras-rest-api/tree/hm-plant-model
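For reference, a minimal hedged sketch of that approach (not the exact code from the linked repo; the class count and dense head size are assumptions based on the notebook from the question). The served model contains the VGG16 convolutional base, so clients can send raw 224x224x3 images instead of 25,088 pre-extracted features:
from keras.applications import VGG16
from keras import models, layers

num_classes = 2  # placeholder; depends on the plant disease dataset

# Keep the pretrained convolutional base inside the served model.
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
conv_base.trainable = False

model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())                      # 7 * 7 * 512 = 25088 features
model.add(layers.Dense(256, activation='relu'))  # matches the [25088, 256] dense layer from the error
model.add(layers.Dense(num_classes, activation='softmax'))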
