Make ML Engine request for image - python-3.x

I'm trying to craft the correct request for my ML Engine model.
I get
$ gcloud ml-engine predict --model=plantDisease01 --json-instances=request-float32.json
{
"error": "Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT,
details=\"Matrix size-incompatible: In[0]: [1,50176], In[1]: [25088,256]\n\t [[Node: dense_1_1/MatMul = MatMul[T=DT_FLOAT, _output_shapes=[[?,256]], transpose_a=false, transpose_b=false, _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"](_arg_dense_1_input_1_0_0, dense_1_1/kernel/read)]]\")"
}
I generated a sample request with
python -c 'req = []; [req.append(0.2) for i in range(224*224)]; print(req)' &> request-float32.json
I generated the protocol buffer version with the following snippet
# convert keras model to mlengine model
import keras.backend as K
import tensorflow as tf
from keras.models import load_model, Sequential
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def
# reset session
K.clear_session()
sess = tf.Session()
K.set_session(sess)
# disable loading of learning nodes
K.set_learning_phase(0)
# load model
model = load_model('vgg16_no_augmentation.h5')
config = model.get_config()
weights = model.get_weights()
new_Model = Sequential.from_config(config)
new_Model.set_weights(weights)
# export saved model
export_path = 'mlengine-03' + '/export'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'foo-input': new_Model.input},
                                  outputs={'serve': new_Model.output})
with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={
                                             signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()
The model is this one, https://github.com/ClaudeCoulombe/deep-learning-with-python-notebooks/blob/master/5.3-using-a-pretrained-convnet.ipynb, up to cell 6.
(I deviated slightly from the linked model: it uses input_dim=4 * 4 * 512, whereas I used a larger one.)
I know 25088 = 7 * 7 * 512, and this is the input_dim in the model. But how should I go from an image to a file with 25,088 floats in it?

The challenge was that you have to use the convolutional base on the image client-side and then send the extracted features to the model hosting service.
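A minimal sketch of that client-side step, assuming the standard Keras VGG16 convolutional base, a 224x224 input, and the same 1/255 rescaling the linked notebook uses during training (file names are illustrative):
import json
import numpy as np
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image

# Convolutional base only; for a 224x224 input it outputs 7*7*512 = 25,088 values
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

img = image.load_img('leaf.jpg', target_size=(224, 224))   # illustrative file name
x = image.img_to_array(img)[np.newaxis] / 255.0             # match the training-time rescaling

features = conv_base.predict(x).reshape(-1)                 # shape (25088,)
with open('request-float32.json', 'w') as f:
    json.dump(features.tolist(), f)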
I opted instead to use a version of the model that integrates the convolutional base. Then I created a simple API that runs the model: it accepts an image upload and resizes the image before feeding it into the model.
It's visible at https://github.com/morenoh149/simple-keras-rest-api/tree/hm-plant-model

Related

torch_tensorrt compilation error: Unknown type bool encountered in graph lowering. This type is not supported in ONNX export

I am following this article, https://pytorch.org/TensorRT/tutorials/serving_torch_tensorrt_with_triton.html, to serve a torch-tensorrt model in Triton server, but with the following code
import torch
import torch_tensorrt
torch.hub._validate_not_a_forked_repo = lambda a, b, c: True
from src.models.resnet2 import ResNet2
# load model
model = ResNet2(output_size=2)
model.load_state_dict(torch.load('Epoch_9_Valacc_0.911_9_.pth'))
# Compile with Torch-TensorRT
trt_model = torch_tensorrt.compile(model,
    inputs=[torch_tensorrt.Input((1, 3, 640, 640))],
    enabled_precisions={torch.half},  # Run with FP16
    debug=True
)
I am getting an error at the last step, where it compiles the torch model:
"Unknown type bool encountered in graph lowering. This type is not supported in ONNX export"
Please let me know how to overcome this.
The issue here was that I was directly compiling a torch model with torch_tensorrt, but torch_tensorrt needs a TorchScript module.
So the solution is:
# Switch the model to eval mode
model.eval()
# An example input you would normally provide to your model's forward() method.
sample_input = torch.randn((1, 3, 640, 640)).float()
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, sample_input)
Then use the traced_script_module to compile with torch_tensorrt.
BTW, I am using the tracing method to convert the torch model to TorchScript.
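For completeness, a minimal sketch of that last step, reusing the arguments from the snippet in the question (the FP16 precision setting is carried over from there):
trt_model = torch_tensorrt.compile(traced_script_module,
    inputs=[torch_tensorrt.Input((1, 3, 640, 640))],
    enabled_precisions={torch.half},  # FP16
    debug=True
)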

Extract the features of the last layer from pytorch-fasterrcnn-resnet50-fpn

I have an image for which I have to use pytorch-fasterrcnn-resnet50-fpn to extract the features. Below is the code that I am trying:
import torch
import torchvision
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained = True)
### strip the last layer
feature_extractor = torch.nn.Sequential(*list(model_ft.children())[:-1])
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
Here the type of image is PIL.JpegImagePlugin.JpegImageFile
The above code is not working for getting the features. Can anyone tell me how to solve this?
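One possible direction, as a sketch rather than a verified answer: torchvision's detection models expose their backbone (ResNet-50 + FPN), which can be called directly on an image tensor to get feature maps, instead of slicing children() of the full detector. The file name below is illustrative.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open('example.jpg').convert('RGB')   # illustrative file name
img_tensor = transforms.ToTensor()(image)          # PIL image -> [3, H, W] tensor in [0, 1]

with torch.no_grad():
    # Apply the detector's own resizing/normalization, then run only the backbone
    image_list, _ = model.transform([img_tensor])
    # The backbone returns an OrderedDict with one feature map per FPN level
    features = model.backbone(image_list.tensors)

for name, fmap in features.items():
    print(name, fmap.shape)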

resize_token_embeddings on a pretrained model with a different embedding size

I would like to ask about the way to change the embedding size of a trained model.
I have a trained model, models/BERT-pretrain-1-step-5000.pkl.
Now I am adding a new token [TRA] to the tokenizer and trying to use resize_token_embeddings on the pretrained one.
import torch
from pytorch_pretrained_bert_inset import BertModel  # BertTokenizer
from transformers import AutoTokenizer
from torch.nn.utils.rnn import pad_sequence
import tqdm
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model_bert = BertModel.from_pretrained('bert-base-uncased', state_dict=torch.load('models/BERT-pretrain-1-step-5000.pkl', map_location=torch.device('cpu')))
#print(tokenizer.all_special_tokens) #--> ['[UNK]', '[SEP]', '[PAD]', '[CLS]', '[MASK]']
#print(tokenizer.all_special_ids) #--> [100, 102, 0, 101, 103]
num_added_toks = tokenizer.add_tokens(['[TRA]'], special_tokens=True)
model_bert.resize_token_embeddings(len(tokenizer)) # --> Embedding(30523, 768)
print('[TRA] token id: ', tokenizer.convert_tokens_to_ids('[TRA]')) # --> 30522
But I encountered the error:
AttributeError: 'BertModel' object has no attribute 'resize_token_embeddings'
I assume that it is because the model_bert (BERT-pretrain-1-step-5000.pkl) I have has a different embedding size.
I would like to know if there is any way to match the embedding size of my modified tokenizer with the model I would like to use as the initial weights.
Thanks a lot!!
resize_token_embeddings is a huggingface transformers method. You are using the BertModel class from pytorch_pretrained_bert_inset, which does not provide such a method. Looking at the code, it seems like they copied the BERT code from huggingface some time ago.
You can either wait for an update from INSET (maybe create a github issue) or write your own code to extend the word_embeddings layer:
from torch import nn
embedding_layer = model.embeddings.word_embeddings
old_num_tokens, old_embedding_dim = embedding_layer.weight.shape
num_new_tokens = 1
# Creating new embedding layer with more entries
new_embeddings = nn.Embedding(
    old_num_tokens + num_new_tokens, old_embedding_dim
)
# Setting device and type accordingly
new_embeddings.to(
    embedding_layer.weight.device,
    dtype=embedding_layer.weight.dtype,
)
# Copying the old entries
new_embeddings.weight.data[:old_num_tokens, :] = embedding_layer.weight.data[
    :old_num_tokens, :
]
model.embeddings.word_embeddings = new_embeddings
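A quick sanity check after the swap (a sketch; the token id 30522 and hidden size 768 come from the question's printouts, and the new row is randomly initialized until you train it):
import torch

new_token_id = tokenizer.convert_tokens_to_ids('[TRA]')   # 30522 per the question
with torch.no_grad():
    vec = model.embeddings.word_embeddings(torch.tensor([new_token_id]))
print(vec.shape)   # torch.Size([1, 768])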

Cannot export PyTorch model to ONNX

I am trying to convert a pre-trained torch model to ONNX, but receive the following error:
RuntimeError: step!=1 is currently not supported
I'm trying this on a pre-trained colorization model: https://github.com/richzhang/colorization
Here is the code I ran in Google Colab:
!git clone https://github.com/richzhang/colorization.git
cd colorization/
import torch
import colorizers

model = colorizer_siggraph17 = colorizers.siggraph17(pretrained=True).eval()
input_names = ["input"]
output_names = ["output"]
dummy_input = torch.randn(1, 1, 256, 256, device='cpu')
torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True,
                  input_names=input_names, output_names=output_names)
I appreciate any help :)
UPDATE 1: @Proko's suggestion solved the ONNX export issue. Now I have a new, possibly related problem when I try to convert the ONNX to TensorRT. I get the following error:
[TensorRT] ERROR: Network must have at least one output
Here is the code I used:
import torch
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt
import onnx

TRT_LOGGER = trt.Logger()

def build_engine(onnx_file_path):
    # initialize TensorRT engine and parse ONNX model
    builder = trt.Builder(TRT_LOGGER)
    builder.max_workspace_size = 1 << 25
    builder.max_batch_size = 1
    if builder.platform_has_fast_fp16:
        builder.fp16_mode = True
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)
    # parse ONNX
    with open(onnx_file_path, 'rb') as model:
        print('Beginning ONNX file parsing')
        parser.parse(model.read())
    print('Completed parsing of ONNX file')
    # generate TensorRT engine optimized for the target platform
    print('Building an engine...')
    engine = builder.build_cuda_engine(network)
    context = engine.create_execution_context()
    print("Completed creating Engine")
    return engine, context

ONNX_FILE_PATH = 'siggraph17.onnx'  # Exported using the code above
engine, _ = build_engine(ONNX_FILE_PATH)
I tried to force the build_engine function to use the output of the network by:
network.mark_output(network.get_layer(network.num_layers-1).get_output(0))
but it did not work.
I appreciate any help!
Like I have mentioned in a comment, this is because slicing in torch.onnx supports only step = 1, but there is 2-step slicing in the model:
self.model2(conv1_2[:,:,::2,::2])
Your only option for now is to rewrite the slicing as some other ops. You can do it by using arange and reshape to obtain the proper indices. Consider the following "step-less arange" function (I hope it is generic enough for anyone with a similar problem):
def sla(x, step):
    diff = x % step
    x += (diff > 0) * (step - diff)  # add length to be able to reshape properly
    return torch.arange(x).reshape((-1, step))[:, 0]
usage:
>> sla(11, 3)
tensor([0, 3, 6, 9])
Now you can replace every slice like this:
conv2_2 = self.model2(conv1_2[:,:,self.sla(conv1_2.shape[2], 2),:][:,:,:, self.sla(conv1_2.shape[3], 2)])
NOTE: you should optimize it. Indices are calculated for every call so it might be wise to pre-compute it.
I have tested it with my fork of the repo and I was able to save the model:
https://github.com/prokotg/colorization
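On the note about pre-computing: a small sketch of one way to do it, by registering the indices as buffers in __init__ (the module and the 128x128 spatial size below are made up for illustration, not taken from the fork):
import torch
import torch.nn as nn

def sla(x, step):
    diff = x % step
    x += (diff > 0) * (step - diff)
    return torch.arange(x).reshape((-1, step))[:, 0]

class StridedSlice2x(nn.Module):
    def __init__(self, height=128, width=128):
        super().__init__()
        # Pre-compute the indices once instead of on every forward call
        self.register_buffer('idx_h', sla(height, 2))
        self.register_buffer('idx_w', sla(width, 2))

    def forward(self, x):
        # Equivalent to x[:, :, ::2, ::2], but expressed with index tensors
        return x[:, :, self.idx_h, :][:, :, :, self.idx_w]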
What worked for me was to add opset_version=11 to torch.onnx.export.
First I tried opset_version=10, but the API suggested 11, so that works.
So your call should be:
torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True, opset_version=11,
                  input_names=input_names, output_names=output_names)

Create new tensorflow image classification module using tensorflow hub

I'm new to tensorflow. I am trying to create a new image classification module and tried the example below using tensorflow hub, but it is not created. Is there any simple example for creating an image classification module?
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np

def module_fn():
    inputs = tf.placeholder(dtype=tf.float32, shape=[None, 50])
    layer1 = tf.layers.dense(inputs, 200)
    layer2 = tf.layers.dense(layer1, 100)
    outputs = dict(default=layer2, hidden_activations=layer1)
    # Add default signature.
    hub.add_signature(inputs=inputs, outputs=outputs)

spec = hub.create_module_spec(module_fn)
module = hub.Module(spec)

with tf.Graph().as_default():
    module = hub.Module('new_test_module')
    test = module(np.random.normal(0, 1, (1, 100)))
    with tf.Session() as session:
        img = session.run(test)
TensorFlow Hub's image classification modules are a collection of examples that showcase how to use model components. To create an image classification component yourself, you can check out the Creating a New Module section of the TensorFlow Hub API guide.
Example utilization:
To define a new module, a publisher calls hub.create_module_spec() with a function module_fn. This function constructs a graph representing the module's internal structure, using tf.placeholder() for inputs to be supplied by the caller. Then it defines signatures by calling hub.add_signature(name, inputs, outputs) one or more times.
def module_fn():
    inputs = tf.placeholder(dtype=tf.float32, shape=[None, 50])
    layer1 = tf.layers.dense(inputs, 200)
    layer2 = tf.layers.dense(layer1, 100)
    outputs = dict(default=layer2, hidden_activations=layer1)
    # Add default signature.
    hub.add_signature(inputs=inputs, outputs=outputs)

spec = hub.create_module_spec(module_fn)
The result of hub.create_module_spec() can be used, instead of a path, to instantiate a module object within a particular TensorFlow graph. In such case, there is no checkpoint, and the module instance will use the variable initializers instead.
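A minimal sketch of that instantiation path, continuing from the spec defined above (note the input has to match the [None, 50] placeholder, and the variables have to be initialized in the session):
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

with tf.Graph().as_default():
    module = hub.Module(spec)   # spec from hub.create_module_spec(module_fn) above
    outputs = module(np.random.normal(0, 1, (1, 50)).astype(np.float32))
    with tf.Session() as session:
        session.run(tf.global_variables_initializer())
        result = session.run(outputs)
        print(result.shape)   # (1, 100), the 'default' signature output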
