TVM fails compiling a pytorch model in dense_strategy_cpu - pytorch

I made and trained a pytorch v1.4 model that predicts a sin() value (based on an example found on the web). Inference works. I then tried to compile it with TVM v0.8dev0 and llvm 10 on Ubuntu with an x86 cpu. I followed the TVM setup guide and ran some of the onnx tutorials, which do work.
I mainly used existing TVM tutorials to figure out the procedure below. Note that I'm not an ML or data science engineer. These were my steps:
import tvm, torch, os
from tvm import relay
state = torch.load("/home/dude/tvm/tst_state.pt") # load the trained pytorch state
import tst
m = tst.Net()
m.load_state_dict(state) # init the model with its trained state
m.eval()
sm = torch.jit.trace(m, torch.tensor([3.1415 / 4])) # convert to a traced TorchScript module
# the model only takes 1 input for inference hence [("input0", (1,))]
mod, params = tvm.relay.frontend.from_pytorch(sm, [("input0", (1,))])
mod.astext() # prints the small Relay module as text
with tvm.transform.PassContext(opt_level=1):
    lib = relay.build(mod, target="llvm", target_host="llvm", params=params)
The last line gives me the error below, and I don't know how to solve it or where I went wrong. I hope that someone can pinpoint my mistake ...
... removed some lines here ...
[bt] (3) /home/dude/tvm/build/libtvm.so(TVMFuncCall+0x5f) [0x7f5cd65660af]
[bt] (2) /home/dude/tvm/build/libtvm.so(+0xb4f8a7) [0x7f5cd5f318a7]
[bt] (1) /home/dude/tvm/build/libtvm.so(tvm::GenericFunc::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x1ab) [0x7f5cd5f315cb]
[bt] (0) /home/tvm/build/libtvm.so(+0x1180cab) [0x7f5cd6562cab]
File "/home/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 81, in cfun
rv = local_pyfunc(*pyargs)
File "/home/tvm/python/tvm/relay/op/strategy/x86.py", line 311, in dense_strategy_cpu
m, _ = inputs[0].shape
ValueError: not enough values to unpack (expected 2, got 1)
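For context on that error: dense_strategy_cpu unpacks two dimensions from the dense (nn.Linear) input, so tracing the model with a 1-D tensor of shape (1,) leaves the Linear layer with a 1-D input. Below is a hedged sketch of a possible workaround, an assumption rather than a confirmed fix: trace and compile with a 2-D (batch, features) input. The variable name "example" is mine; everything else reuses the names from the steps above.

# Sketch, assuming tst.Net()'s first layer is an nn.Linear that accepts
# a (batch, features) input; "m" is the model loaded above.
example = torch.tensor([[3.1415 / 4]])  # shape (1, 1) instead of (1,)
sm = torch.jit.trace(m, example)
mod, params = tvm.relay.frontend.from_pytorch(sm, [("input0", (1, 1))])
with tvm.transform.PassContext(opt_level=1):
    lib = relay.build(mod, target="llvm", target_host="llvm", params=params)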

Related

Feature extraction in loop seems to cause memory leak in pytorch

I have spent considerable time trying to debug some pytorch code, and I have created a minimal example of it to help pin down the issue.
I have removed all portions of the code that are unrelated to the issue, so the remaining piece of code won't make much sense from a functional standpoint, but it still displays the error I'm facing.
The overall task runs in a loop: every pass computes the embedding of the image and adds it to a variable that stores it. It's effectively aggregating it (not concatenating, so the size remains the same). I don't expect the number of iterations to make the datatype overflow, and I don't see that happening here or in my code.
I have added multiple metrics to evaluate the size of the tensors I'm working with, to make sure their memory footprint is not growing.
I'm checking the overall GPU memory usage to track the growth that leads to the final RuntimeError: CUDA out of memory.
My environment is as follows:
- python 3.6.2
- Pytorch 1.4.0
- Cudatoolkit 10.0
- Driver version 410.78
- GPU: Nvidia GeForce GT 1030 (2GB VRAM)
(though I've replicated this experiment with the same result on a Titan RTX with 24GB, same pytorch version, cuda toolkit and driver; it just goes out of memory further into the loop).
Complete code below. I have marked the 2 lines that are the culprits: deleting them removes the issue, though obviously I need to find a way to execute them without running into memory problems. Any help would be much appreciated! You can try with any image named "source_image.bmp" to replicate the issue.
import torch
from PIL import Image
import torchvision
from torchvision import transforms
from pynvml import nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo, nvmlInit
import sys
import os

os.environ["CUDA_VISIBLE_DEVICES"] = '0'  # this is necessary on my system to allow the environment to recognize my nvidia GPU for some reason
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'  # to debug by having all CUDA functions executed in place
torch.set_default_tensor_type('torch.cuda.FloatTensor')

# Preprocess image
tfms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),])
img = tfms(Image.open('source_image.bmp')).unsqueeze(0).cuda()

model = torchvision.models.resnet50(pretrained=True).cuda()
model.eval()  # we put the model in evaluation mode, to prevent storage of gradient which might accumulate

nvmlInit()
h = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(h)
print(f'Total available memory : {info.total / 1000000000}')

feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])
orig_embedding = feature_extractor(img)

embedding_depth = 2048
mem0 = 0
embedding = torch.zeros(2048, img.shape[2], img.shape[3])  # , dtype=torch.float)

patch_size = [4, 4]
patch_stride = [2, 2]
patch_value = 0.0

# Here, we iterate over the patch placement, defined at the top left location
for row in range(img.shape[2] - 1):
    for col in range(img.shape[3] - 1):
        print("######################################################")
        ######################################################
        # Isolated line, culprit 1 of the GPU memory leak
        ######################################################
        patched_embedding = feature_extractor(img)
        delta_embedding = (patched_embedding - orig_embedding).view(-1, 1, 1)
        ######################################################
        # Isolated line, culprit 2 of the GPU memory leak
        ######################################################
        embedding[:, row:row+1, col:col+1] = torch.add(embedding[:, row:row+1, col:col+1], delta_embedding)
        print("img size:\t\t", img.element_size() * img.nelement())
        print("patched_embedding size:\t", patched_embedding.element_size() * patched_embedding.nelement())
        print("delta_embedding size:\t", delta_embedding.element_size() * delta_embedding.nelement())
        print("Embedding size:\t\t", embedding.element_size() * embedding.nelement())
        del patched_embedding, delta_embedding
        torch.cuda.empty_cache()
        info = nvmlDeviceGetMemoryInfo(h)
        print("\nMem usage increase:\t", info.used / 1000000000 - mem0)
        mem0 = info.used / 1000000000
        print(f'Free:\t\t\t {(info.total - info.used) / 1000000000}')

print("Done.")
Add this to your code as soon as you load the model:
for param in model.parameters():
    param.requires_grad = False
from https://pytorch.org/docs/stable/notes/autograd.html#excluding-subgraphs-from-backward
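For this specific loop, another option (my own sketch based on the same autograd notes, not part of the original answer) is to run the per-iteration inference under torch.no_grad(), so no autograd graph is retained across iterations:

# Sketch: wrap the per-iteration inference in torch.no_grad() so PyTorch
# does not build and keep an autograd graph for each forward pass.
with torch.no_grad():
    patched_embedding = feature_extractor(img)
    delta_embedding = (patched_embedding - orig_embedding).view(-1, 1, 1)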

Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers

Update #1 (original question and details below):
As per the suggestion of @MatthijsHollemans below, I've tried running this after removing dynamic_axes from the initial create_onnx step below. This removed both:
Description of image feature 'input_image' has missing or non-positive width 0.
and
Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
Unfortunately this opens up two sub-questions:
I still want to have a functional ONNX model. Is there a more appropriate way to make H and W dynamic? Or should I be saving two versions of the ONNX model, one without dynamic_axes for the CoreML conversion, and one with for use as a valid ONNX model?
Although this solves the compilation error in Xcode (specified below), it introduces the following runtime issues:
Finalizing CVPixelBuffer 0x282f4c5a0 while lock count is 1.
[espresso] [Espresso::handle_ex_plan] exception=Invalid X-dimension 1/480 status=-7
[coreml] Error binding image input buffer input_image: -7
[coreml] Failure in bindInputsAndOutputs.
I am calling this the same way I was calling the fixed-size model, which still works fine. The image dimensions are 640 x 480.
As specified below, the model should accept any image from 64x64 upwards.
For flexible-shape models, do I need to provide the input differently in Xcode?
Original Question (parts still relevant)
I have been slowly working on converting a style transfer model from pytorch > onnx > coreml. One of the issues that has been a struggle is flexible/dynamic input + output shape.
This method (besides i/o renaming) has worked well on iOS 12 & 13 when using a static input shape.
I am using the following code to do the onnx > coreml conversion:
# Imports assumed for this snippet (not shown in the original):
import coremltools
from coremltools.models.neural_network import flexible_shape_utils
from onnx_coreml import convert

def create_coreml(name):
    mlmodel = convert(
        model="onnx/" + name + ".onnx",
        preprocessing_args={'is_bgr': True},
        deprocessing_args={'is_bgr': True},
        image_input_names=['input_image'],
        image_output_names=['stylized_image'],
        minimum_ios_deployment_target='13'
    )
    spec = mlmodel.get_spec()

    img_size_ranges = flexible_shape_utils.NeuralNetworkImageSizeRange()
    img_size_ranges.add_height_range((64, -1))
    img_size_ranges.add_width_range((64, -1))

    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='input_image',
        size_range=img_size_ranges)
    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='stylized_image',
        size_range=img_size_ranges)

    mlmodel = coremltools.models.MLModel(spec)
    mlmodel.save("mlmodel/" + name + ".mlmodel")
Although the conversion 'succeeds', there are a couple of warnings (spaces added for readability):
Translation to CoreML spec completed. Now compiling the CoreML model.
/usr/local/lib/python3.7/site-packages/coremltools/models/model.py:111:
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was:
Error compiling model:
"Error reading protobuf spec. validator error: Description of image feature 'input_image' has missing or non-positive width 0.".
RuntimeWarning)
Model Compilation done.
/usr/local/lib/python3.7/site-packages/coremltools/models/model.py:111:
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was:
Error compiling model:
"compiler error: Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
".
RuntimeWarning)
If I ignore these warnings and try to compile the model for the latest target (13.0), I get the following error in Xcode:
coremlc: Error: compiler error: Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
Here is what the problematic area appears to look like in netron:
My main question is how can I get these two warnings out of the way?
Happy to provide any other details.
Thanks for any advice!
Below is my pytorch > onnx conversion:
def create_onnx(name):
    prior = torch.load("pth/" + name + ".pth")
    model = transformer.TransformerNetwork()
    model.load_state_dict(prior)
    dummy_input = torch.zeros(1, 3, 64, 64)  # I wasn't sure what I would set the H W to here?
    torch.onnx.export(model, dummy_input, "onnx/" + name + ".onnx",
                      verbose=True,
                      opset_version=10,
                      input_names=["input_image"],     # These are being renamed from garbled originals.
                      output_names=["stylized_image"], # ^
                      dynamic_axes={'input_image':
                                        {2: 'height', 3: 'width'},
                                    'stylized_image':
                                        {2: 'height', 3: 'width'}}
                      )
    onnx.save_model(original_model, "onnx/" + name + ".onnx")  # note: original_model is not defined in this snippet
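Regarding the first sub-question in the update above (keeping two ONNX exports: one static for the CoreML conversion, one with dynamic axes as a general-purpose ONNX model), a minimal sketch of that idea could look like this. The helper name is hypothetical and it simply reuses the export arguments already shown:

import torch

def create_onnx_two_versions(name, model):
    # Hypothetical helper, sketching the "two exports" idea: a static-shape
    # file to feed to onnx-coreml, and a dynamic-axes file kept for ONNX
    # runtimes that support flexible H/W.
    dummy_input = torch.zeros(1, 3, 64, 64)
    common = dict(verbose=True,
                  opset_version=10,
                  input_names=["input_image"],
                  output_names=["stylized_image"])
    # Static shapes only: use this one for the CoreML conversion.
    torch.onnx.export(model, dummy_input,
                      "onnx/" + name + "_static.onnx", **common)
    # Dynamic H/W: keep this one as the general-purpose ONNX model.
    torch.onnx.export(model, dummy_input,
                      "onnx/" + name + "_dynamic.onnx",
                      dynamic_axes={'input_image': {2: 'height', 3: 'width'},
                                    'stylized_image': {2: 'height', 3: 'width'}},
                      **common)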

How to fix: AttributeError: module 'tensorflow' has no attribute 'contrib'

I'm training an LSTM and defining the parameters and a regression layer. I get the error in the title with this code:
lstm_cells = [
    tf.contrib.rnn.LSTMCell(num_units=num_nodes[li],
                            state_is_tuple=True,
                            initializer=tf.contrib.layers.xavier_initializer()
                            )
    for li in range(n_layers)]

drop_lstm_cells = [tf.contrib.rnn.DropoutWrapper(
    lstm, input_keep_prob=1.0, output_keep_prob=1.0-dropout, state_keep_prob=1.0-dropout
) for lstm in lstm_cells]

drop_multi_cell = tf.contrib.rnn.MultiRNNCell(drop_lstm_cells)
multi_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)

w = tf.get_variable('w', shape=[num_nodes[-1], 1], initializer=tf.contrib.layers.xavier_initializer())
b = tf.get_variable('b', initializer=tf.random_uniform([1], -0.1, 0.1))
I'm using tensorflow 2, and I have already read the https://www.tensorflow.org/guide/migrate guide and, I think, almost everything else on the net, but I'm not able to solve it.
How can I fix this?
This error occurs because the contrib module has been removed from version 2 of tensorflow. There are two solutions to this problem:
You can uninstall the current package and install one of the 1.x series versions.
You can use the tf.compat.v1 equivalents, which work with the version 2 package: use tf.compat.v1.nn.rnn_cell.LSTMCell instead of tf.contrib.rnn.LSTMCell, and tf.initializers.GlorotUniform() instead of tf.contrib.layers.xavier_initializer(); for the other rnn-related calls you can likewise use tf.compat.v1.nn.rnn_cell.
tf.contrib.rnn.LSTMCell -> tf.compat.v1.nn.rnn_cell.LSTMCell or tf.keras.layers.LSTMCell
tf.contrib.rnn.DropoutWrapper -> tf.compat.v1.nn.rnn_cell.DropoutWrapper or tf.keras.layers.Dropout
tf.contrib.rnn.MultiRNNCell -> tf.compat.v1.nn.rnn_cell.MultiRNNCell or tf.keras.layers.RNN
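Applied to the snippet in the question, a minimal sketch of the tf.compat.v1 route could look like the following. It assumes num_nodes, n_layers and dropout are defined as in the question and that the surrounding code still uses a v1-style graph/session (hence disabling eager execution); I use tf.compat.v1.glorot_uniform_initializer() here as the Glorot/Xavier equivalent:

import tensorflow as tf

# Sketch only: v1-compatibility rewrite of the question's code under TF 2.x.
# Assumes num_nodes, n_layers and dropout exist in the surrounding code.
tf.compat.v1.disable_eager_execution()

lstm_cells = [
    tf.compat.v1.nn.rnn_cell.LSTMCell(
        num_units=num_nodes[li],
        state_is_tuple=True,
        initializer=tf.compat.v1.glorot_uniform_initializer())
    for li in range(n_layers)]

drop_lstm_cells = [
    tf.compat.v1.nn.rnn_cell.DropoutWrapper(
        lstm,
        input_keep_prob=1.0,
        output_keep_prob=1.0 - dropout,
        state_keep_prob=1.0 - dropout)
    for lstm in lstm_cells]

drop_multi_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(drop_lstm_cells)
multi_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(lstm_cells)

w = tf.compat.v1.get_variable(
    'w', shape=[num_nodes[-1], 1],
    initializer=tf.compat.v1.glorot_uniform_initializer())
b = tf.compat.v1.get_variable(
    'b', initializer=tf.random.uniform([1], -0.1, 0.1))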
tf.contrib has been moved out of TF starting with TF 2.0 alpha.
Take a look at the TF 2.0 release notes: https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0-alpha0
You can upgrade your TF 1.x code to TF 2.x using the tf_upgrade_v2 script:
https://www.tensorflow.org/alpha/guide/upgrade

How to generate tflite from saved model?

I want to create an object-detection app based on an ssd_mobilenet model that I retrained following a tutorial on youtube.
I chose the model ssd_mobilenet_v2_coco from the Tensorflow Model Zoo. After the retraining process I got a model with the following structure:
- saved_model
  - variables (empty folder)
  - saved_model.pb
- checkpoint
- frozen_inference_graph.pb
- model.ckpt.data-00000-of-00001
- model.ckpt.index
- model.ckpt.meta
- pipeline.config
In the same folder, I have the python script with the following code:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model", input_shapes={"image_tensor":[1,300,300,3]})
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
After running this code, I got the following error:
...
2019-05-24 18:46:59.811289: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: TensorArrayGatherV3
2019-05-24 18:46:59.811864: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: TensorArrayGatherV3
2019-05-24 18:46:59.908207: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1792 operators, 3033 arrays (0 quantized)
2019-05-24 18:47:00.089034: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 1771 operators, 2979 arrays (0 quantized)
2019-05-24 18:47:00.314681: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1771 operators, 2979 arrays (0 quantized)
2019-05-24 18:47:00.453570: F tensorflow/lite/toco/graph_transformations/resolve_constant_slice.cc:59] Check failed: dim_size >= 1 (0 vs. 1)
Is there any solution for the "Check failed: dim_size >= 1 (0 vs. 1)"?
Conversion of MobileNet SSD is a little different because some custom ops are needed in the graph.
Take a look at this Medium post for the end-to-end process of training and exporting the model as a TFLite graph. For the conversion, you need to use the export_tflite_ssd_graph script.
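For reference, here is a hedged sketch of the TF 1.x conversion step after running export_tflite_ssd_graph. The tensor names and input shape come from the TF Object Detection API's documented TFLite export flow for SSD models and should be treated as assumptions for your particular export:

import tensorflow as tf

# Sketch: convert the frozen graph produced by export_tflite_ssd_graph.py
# (tflite_graph.pb) rather than the saved_model directory. The input/output
# tensor names below are the ones documented for SSD TFLite exports and are
# an assumption here.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "tflite_graph.pb",
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=[
        "TFLite_Detection_PostProcess",
        "TFLite_Detection_PostProcess:1",
        "TFLite_Detection_PostProcess:2",
        "TFLite_Detection_PostProcess:3",
    ],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]})
converter.allow_custom_ops = True  # TFLite_Detection_PostProcess is a custom op
tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)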

How to estimate ARIMA models with MA and AR parts with specified lags

I am using MATLAB 2013b and the Econometrics Toolbox to learn some ARIMA models.
When I specify AR lags in the ARIMA model as follows:
%Estimate simple ARMA model
model1 = arima('ARLags',[1 24],'MALags',0,'D',0);
EstMdl1 = estimate(model1,learningSet');
then everything is fine when I estimate the model.
If I now use
%Estimate simple ARMA model
model1 = arima('ARLags',[1 24],'MALags',1,'D',0);
EstMdl1 = estimate(model1,learningSet');
then the following error is issued:
Error using optimset (line 184)
Invalid value for OPTIONS parameter MaxNodes: must be a real non-negative integer.
Error in internal.econ.arma0 (line 195)
options = optimset('lsqlin');
Error in arima/estimate (line 864)
[AR0, MA0, constant, variance] = internal.econ.arma0(I(YData), LagOpAR0, LagOpMA0);
I am a bit puzzled about this, and am looking for a workaround, if not an explanation of what is happening.
