Integrate tflite model with tensorflow object-detection example code - android-studio

I have a tflite model which I obtained by converting my TensorFlow model (MobileNet Single Shot Detector v2).
I successfully converted the model to tflite format using the code below.
!tflite_convert \
--input_shape=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 \
--allow_custom_ops \
--graph_def_file=/content/models/research/fine_tuned_model/tflite/tflite_graph.pb \
--output_file="/content/models/research/fine_tuned_model/final_model.tflite"
I then tried to integrate it into the object-detection example code provided by the TensorFlow team, but the output is not visible.
The steps I took to integrate it were as follows:
1. Commented out the following line in build.gradle (app):
apply from:'download_model.gradle'
2. Added my tflite model to the assets folder and replaced the contents of label.txt with my own labels.
3. In DetectorActivity, changed the following boolean from true to false:
private static final boolean TF_OD_API_IS_QUANTIZED = true;
4. Reduced the minimum confidence below from 0.5f to 0.2f:
private static final float MINIMUM_CONFIDENCE_TF_OD_API = 0.5f;
But it didn't work.
The GitHub link to the object-detection code:
https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/android
Also, please let me know how to test the tflite model using test images.
These are the output values I got when debugging the model:
[[[ 0.15021165 0.45557776 0.99523586 1.009417 ]
[ 0.4825344 0.18693507 0.9941584 0.83610606]
[ 0.36018616 0.612343 1.0781565 1.1020089 ]
[ 0.47380492 0.03632754 0.99250865 0.5964786 ]
[ 0.15898478 0.12117874 0.94728076 0.8854655 ]
[ 0.44774154 0.41910237 0.9966481 0.9704595 ]
[ 0.06241751 -0.02005028 0.93670964 0.3915068 ]
[ 0.1917564 0.00806974 1.0165613 0.5287838 ]
[ 0.20279509 0.738887 0.95690674 1.0022873 ]
[ 0.7434618 0.07342905 0.9969055 0.6412263 ]]]

First train your model (or better, start from a pre-trained one). After training, export the TFLite graph and convert it to a tflite model using Bazel (preferably on Ubuntu). Then get tensorflow/examples and open the object detection Android folder in Android Studio.
Remove the metadata check from the TFLite interpreter class, since the example demands metadata and there is no officially documented way to add it for this model. Then you can make it work.
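To check the converted model on test images outside the Android app, you can run it directly with the Python TFLite interpreter. A minimal sketch, assuming a 300x300 SSD input, the usual TFLite_Detection_PostProcess output order (boxes, classes, scores, count) and placeholder file names; the 127.5 normalisation applies to the float (non-quantized) model:

# Minimal sanity check of the exported .tflite detection model on a single test image.
# File names, the 300x300 input size and the output ordering are assumptions to adjust.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="final_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

img = Image.open("test.jpg").convert("RGB").resize((300, 300))
pixels = np.expand_dims(np.asarray(img), axis=0)
if input_details[0]["dtype"] == np.float32:
    input_data = (pixels.astype(np.float32) - 127.5) / 127.5  # float model: scale to [-1, 1]
else:
    input_data = pixels.astype(np.uint8)                      # quantized model: raw pixels

interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

boxes = interpreter.get_tensor(output_details[0]["index"])[0]   # [ymin, xmin, ymax, xmax], normalised
classes = interpreter.get_tensor(output_details[1]["index"])[0]
scores = interpreter.get_tensor(output_details[2]["index"])[0]
for box, cls, score in zip(boxes, classes, scores):
    if score > 0.2:
        print(int(cls), float(score), box)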

Related

Clarifications on training job parameters with Tensorflow

I'm using the new TensorFlow Object Detection API.
I need to replicate the training parameters used in a paper, but I'm a bit confused.
The paper states:
When training neural network models, their base configuration is similar to that used to
train on the COCO 2017 dataset. For the unambiguous comparison of the selected models, the total number of
training steps was set to 100 equal to 100′000 iterations of learning.
Inside model_main_tf2.py, which is the script used to start the training, I can read the following:
"""Creates and runs TF2 object detection models.
For local training/evaluation run:
PIPELINE_CONFIG_PATH=path/to/pipeline.config
MODEL_DIR=/tmp/model_outputs
NUM_TRAIN_STEPS=10000
SAMPLE_1_OF_N_EVAL_EXAMPLES=1
python model_main_tf2.py -- \
--model_dir=$MODEL_DIR --num_train_steps=$NUM_TRAIN_STEPS \
--sample_1_of_n_eval_examples=$SAMPLE_1_OF_N_EVAL_EXAMPLES \
--pipeline_config_path=$PIPELINE_CONFIG_PATH \
--alsologtostderr
"""
Also, you can specify the num_steps and total_steps parameters in the pipeline.config file (used by the training script):
train_config: {
  batch_size: 1
  sync_replicas: true
  startup_delay_steps: 0
  replicas_to_aggregate: 8
  num_steps: 50000
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: .16
          total_steps: 50000
          warmup_learning_rate: 0
          warmup_steps: 2500
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
So, what I'm not understanding is how I should map what is written in the paper to the TensorFlow parameters.
What are num_steps and total_steps inside the pipeline.config file?
And what is the NUM_TRAIN_STEPS argument?
Does it override the steps in the config file, or is it a completely different thing?
If more details are needed feel free to ask.
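For reference, the cosine_decay_learning_rate block above describes only the learning-rate schedule. Below is a rough sketch of what its four numbers mean, using the standard cosine-decay-with-warmup formula; this is illustrative only, the authoritative implementation lives in the Object Detection API itself.

# Illustrative sketch of the cosine_decay_learning_rate schedule from the config above
# (standard cosine decay with linear warmup); not the Object Detection API's actual code.
import math

def cosine_decay_with_warmup(step,
                             learning_rate_base=0.16,
                             total_steps=50000,
                             warmup_learning_rate=0.0,
                             warmup_steps=2500):
    if step < warmup_steps:
        # Linear warmup from warmup_learning_rate up to learning_rate_base.
        slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
        return warmup_learning_rate + slope * step
    # Cosine decay from learning_rate_base down to 0 over the remaining steps,
    # which is why total_steps should cover the whole training run.
    progress = (step - warmup_steps) / float(total_steps - warmup_steps)
    return 0.5 * learning_rate_base * (1.0 + math.cos(math.pi * progress))

for s in (0, 2500, 25000, 50000):
    print(s, cosine_decay_with_warmup(s))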

MLflow webserver returns 400 status, "Incompatible input types for column X. Can not safely convert float64 to <U0."

I am implementing an anomaly detection web service using MLflow and sklearn.pipeline.Pipeline(). The aim of the model is to detect web crawlers from server logs, and response_length is one of my features. After serving the model, to test the web service I send the request below, which contains the first 20 columns of the training data.
$ curl --location --request POST '127.0.0.1:8000/invocations' \
--header 'Content-Type: text/csv' \
--data-binary '@datasets/test.csv'
But the web server responds with status code 400 (BAD REQUEST) and this JSON body:
{
  "error_code": "BAD_REQUEST",
  "message": "Incompatible input types for column response_length. Can not safely convert float64 to <U0."
}
Here is the MLflow Tracking component log from fitting the model:
[Pipeline] ......... (step 1 of 3) Processing transform, total=11.8min
[Pipeline] ............... (step 2 of 3) Processing pca, total= 4.8s
[Pipeline] ........ (step 3 of 3) Processing rule_based, total= 0.0s
2021/07/16 04:55:12 WARNING mlflow.sklearn: Training metrics will not be recorded because training labels were not specified. To automatically record training metrics, provide training labels as inputs to the model training function.
2021/07/16 04:55:12 WARNING mlflow.utils.autologging_utils: MLflow autologging encountered a warning: "/home/matin/workspace/Rahnema College/venv/lib/python3.8/site-packages/mlflow/models/signature.py:129: UserWarning: Hint: Inferred schema contains integer column(s). Integer columns in Python cannot represent missing values. If your input data contains missing values at inference time, it will be encoded as floats and will cause a schema enforcement error. The best way to avoid this problem is to infer the model schema based on a realistic data sample (training dataset) that includes missing values. Alternatively, you can declare integer columns as doubles (float64) whenever these columns may have missing values. See `Handling Integers With Missing Values <https://www.mlflow.org/docs/latest/models.html#handling-integers-with-missing-values>`_ for more details."
Logged data and model in run: 8843336f5c31482c9e246669944b1370
---------- logged params ----------
{'memory': 'None',
'pca': 'PCAEstimator()',
'rule_based': 'RuleBasedEstimator()',
'steps': "[('transform', <log_transformer.LogTransformer object at "
"0x7f05a8b95760>), ('pca', PCAEstimator()), ('rule_based', "
'RuleBasedEstimator())]',
'transform': '<log_transformer.LogTransformer object at 0x7f05a8b95760>',
'verbose': 'True'}
---------- logged metrics ----------
{}
---------- logged tags ----------
{'estimator_class': 'sklearn.pipeline.Pipeline', 'estimator_name': 'Pipeline'}
---------- logged artifacts ----------
['model/MLmodel',
'model/conda.yaml',
'model/model.pkl',
'model/requirements.txt']
Could anyone tell me exactly how I can fix this model serving problem?
The problem is caused by the mlflow.utils.autologging_utils warning shown above.
When the model is logged, the inferred input signature is saved in the MLmodel file.
You should change the input type of response_length in the signature from string to double, i.e. replace
{"name": "response_length", "type": "string"}
with
{"name": "response_length", "type": "double"}
so the column does not need to be converted. After serving the model with the edited MLmodel file, the web server worked as expected.
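As an alternative to hand-editing the MLmodel file, the signature can be declared explicitly when logging the model, so response_length is recorded as a double from the start. A minimal sketch; only response_length comes from the question, the other column name and the tiny placeholder pipeline are illustrative:

# Sketch: declare the input schema explicitly when logging the model, so "response_length"
# is recorded as double instead of an inferred string.
import mlflow
import mlflow.sklearn
from mlflow.models.signature import ModelSignature
from mlflow.types.schema import Schema, ColSpec
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

input_schema = Schema([
    ColSpec("double", "response_length"),  # force double rather than the inferred string type
    ColSpec("string", "user_agent"),       # placeholder for the remaining feature columns
])
signature = ModelSignature(inputs=input_schema)

pipeline = Pipeline([("scale", StandardScaler())])  # stands in for the question's fitted pipeline

with mlflow.start_run():
    mlflow.sklearn.log_model(pipeline, artifact_path="model", signature=signature)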

Tflite (TF2) trained model running but not detecting in iOS app

I have fine-tuned a pre-trained model (ssd_mobilenet_v2_fpnlite_640x640) using the TF2 Object Detection API, then exported it to an intermediate SavedModel in order to convert it to a TFLite model, following these tutorials:
TF2 Object Detection API, Running TF2 Detection API Models on mobile, Converter Python API guide, and the Edge TF Lite iOS tutorial.
After many hours of work I managed to get my model to predict in a Python environment and to run in the pre-made iOS app from TF Lite.
However, after trying many ways of exporting and converting the model, I cannot get it to detect the objects I trained it to detect.
The following is the command for training the model using the TF2 API:
python3 model_main_tf2.py \
--pipeline_config_path={pipeline_path}\
--model_dir={output_model_dir} \
--alsologtostderr
This is the command for exporting the SavedModel using the TF2 API:
python export_tflite_graph_tf2.py \
--pipeline_config_path {pipeline_path} \
--trained_checkpoint_dir {output_model_dir} \
--output_directory {exported_models_dir}
And the following is the code to convert the model to tflite using the Python API:
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()
I have also tried some alternatives to convert it with the TF1 API, such as:
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(export_dir)
converter.inference_type = tf.compat.v1.lite.constants.QUANTIZED_UINT8
input_arrays = converter.get_input_arrays()
converter.quantized_input_stats = {input_arrays[0] : (0., 1.)} # mean_value, std_dev
tflite_model = converter.convert()
and with the command line:
tflite_convert \
--saved_model_dir={saved_model} \
--output_file={output_dir} \
--output_format=TFLITE \
--input_shapes=1,640,640,3 \
--input_arrays='normalized_input_image_tensor' \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=QUANTIZED_UINT8 \
--mean_values=128 \
--std_dev_values=127 \
--change_concat_input_ranges=false \
--allow_custom_ops
The result is a 500-byte file. The tflite model looks as follows (in Netron):
In the iOS app I adjusted the code this way:
// MARK: Model parameters
let batchSize = 1
let inputChannels = 3
let inputWidth = 640
let inputHeight = 640
// image mean and std for floating model, should be consistent with parameters used in model training
let imageMean: Float = 128
let imageStd: Float = 127
I have also tried some other SSD MobileNet models, unsuccessfully. I've been stuck for several days already; I'd appreciate your help.
Install tf-nightly and convert the saved_model to a tflite file:
https://pypi.org/project/tf-nightly/
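For a model exported with export_tflite_graph_tf2.py, the TF2 converter route shown in the question is the intended one; with a recent tf-nightly build it should produce a full-sized model. Below is a sketch that also sanity-checks the result so a near-empty file (e.g. 500 bytes) is caught immediately; paths are placeholders.

# Sketch: convert the SavedModel produced by export_tflite_graph_tf2.py and verify the result.
# Paths are placeholders; run with a recent TF / tf-nightly build.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("exported_models/saved_model")
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# A 640x640 SSD should be several MB; also check the tensors look like a detection model.
print("size (bytes):", len(tflite_model))
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())   # expect one 1x640x640x3 float32 input
print(interpreter.get_output_details())  # expect the four detection output tensors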

Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers

Update #1 (original question and details below):
As per the suggestion of @MatthijsHollemans below, I've tried running this after removing dynamic_axes from the initial create_onnx step below. This removed both:
Description of image feature 'input_image' has missing or non-positive width 0.
and
Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
Unfortunately this opens up two sub-questions:
I still want to have a functional ONNX model. Is there a more appropriate way to make H and W dynamic? Or should I be saving two versions of the ONNX model, one without dynamic_axes for the CoreML conversion, and one with for use as a valid ONNX model?
Although this solves the compilation error in Xcode (specified below), it introduces the following runtime issues:
Finalizing CVPixelBuffer 0x282f4c5a0 while lock count is 1.
[espresso] [Espresso::handle_ex_plan] exception=Invalid X-dimension 1/480 status=-7
[coreml] Error binding image input buffer input_image: -7
[coreml] Failure in bindInputsAndOutputs.
I am calling this the same way I was calling the fixed size model, which does still work fine. The image dimensions are 640 x 480.
As specified below the model should accept any image between 64x64 and higher.
For flexible shape models, do I need to provide the input differently in Xcode?
Original Question (parts still relevant)
I have been slowly working on converting a style transfer model from pytorch > onnx > coreml. One of the issues that has been a struggle is flexible/dynamic input + output shape.
This method (besides i/o renaming) has worked well on iOS 12 & 13 when using a static input shape.
I am using the following code to do the onnx > coreml conversion:
import coremltools
from coremltools.models.neural_network import flexible_shape_utils
from onnx_coreml import convert

def create_coreml(name):
    mlmodel = convert(
        model="onnx/" + name + ".onnx",
        preprocessing_args={'is_bgr': True},
        deprocessing_args={'is_bgr': True},
        image_input_names=['input_image'],
        image_output_names=['stylized_image'],
        minimum_ios_deployment_target='13'
    )

    spec = mlmodel.get_spec()

    img_size_ranges = flexible_shape_utils.NeuralNetworkImageSizeRange()
    img_size_ranges.add_height_range((64, -1))
    img_size_ranges.add_width_range((64, -1))

    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='input_image',
        size_range=img_size_ranges)
    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='stylized_image',
        size_range=img_size_ranges)

    mlmodel = coremltools.models.MLModel(spec)
    mlmodel.save("mlmodel/" + name + ".mlmodel")
Although the conversion 'succeeds' there are a couple of warnings (spaces added for readability):
Translation to CoreML spec completed. Now compiling the CoreML model.
/usr/local/lib/python3.7/site-packages/coremltools/models/model.py:111:
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was:
Error compiling model:
"Error reading protobuf spec. validator error: Description of image feature 'input_image' has missing or non-positive width 0.".
RuntimeWarning)
Model Compilation done.
/usr/local/lib/python3.7/site-packages/coremltools/models/model.py:111:
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was:
Error compiling model:
"compiler error: Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
".
RuntimeWarning)
If I ignore these warnings and try to compile the model for the latest target (13.0), I get the following error in Xcode:
coremlc: Error: compiler error: Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
Here is what the problematic area appears to look like in Netron:
My main question is how can I get these two warnings out of the way?
Happy to provide any other details.
Thanks for any advice!
Below is my pytorch > onnx conversion:
import torch
import transformer  # the question's own TransformerNetwork module

def create_onnx(name):
    prior = torch.load("pth/" + name + ".pth")
    model = transformer.TransformerNetwork()
    model.load_state_dict(prior)
    dummy_input = torch.zeros(1, 3, 64, 64)  # I wasn't sure what I would set the H and W to here?
    # Note: torch.onnx.export already writes the .onnx file, so a separate onnx.save_model call is unnecessary.
    torch.onnx.export(model, dummy_input, "onnx/" + name + ".onnx",
                      verbose=True,
                      opset_version=10,
                      input_names=["input_image"],      # These are being renamed from garbled originals.
                      output_names=["stylized_image"],  # ^
                      dynamic_axes={'input_image': {2: 'height', 3: 'width'},
                                    'stylized_image': {2: 'height', 3: 'width'}})
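Regarding the "two versions" idea raised in the update, one straightforward approach is to export the ONNX model twice: a fixed-shape variant to feed to the CoreML conversion and a dynamic-axes variant to keep as the standalone ONNX model. A sketch built on the question's own export code; the shapes and file-name suffixes are arbitrary:

# Sketch: export a static-shape ONNX model for the CoreML conversion and a dynamic-axes
# one for standalone ONNX use. transformer.TransformerNetwork is the question's model class.
import torch
import transformer

def export_onnx(name, dynamic):
    model = transformer.TransformerNetwork()
    model.load_state_dict(torch.load("pth/" + name + ".pth"))
    model.eval()
    dummy_input = torch.zeros(1, 3, 64, 64)
    dynamic_axes = ({"input_image": {2: "height", 3: "width"},
                     "stylized_image": {2: "height", 3: "width"}} if dynamic else None)
    suffix = "_dynamic" if dynamic else "_static"
    torch.onnx.export(model, dummy_input, "onnx/" + name + suffix + ".onnx",
                      opset_version=10,
                      input_names=["input_image"],
                      output_names=["stylized_image"],
                      dynamic_axes=dynamic_axes)

export_onnx("style", dynamic=False)  # feed this one to onnx-coreml, then add flexible shapes
export_onnx("style", dynamic=True)   # keep this one as the standalone ONNX model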

Log_probabilities returned by tf.nn.ctc_beam_search_decoder

I am training an LSTM-CTC speech recognition system using beam search decoding with the following configuration:
decoded, log_prob = tf.nn.ctc_beam_search_decoder(
    inputs,
    sequence_length,
    beam_width=100,
    top_paths=3,
    merge_repeated=True)
The log_prob output of the above decoder for a batch looks like:
[[ 14.73168373, 14.45586109, 14.35735512],
[ 20.45407486, 20.44991684, 20.41798401],
[ 14.9961853 , 14.925807 , 14.88066769],
...,
[ 18.89863396, 18.85992241, 18.85712433],
[ 3.93567419, 3.92791557, 3.89198923],
[ 14.56258488, 14.55923843, 14.51092243]],
So how do these scores represent log probabilities, and if I want to compare the confidence of the top paths across examples, what should the normalisation factor be?
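Not an authoritative answer, but assuming each returned value is a per-path log score, a softmax over the top paths gives a relative confidence within one example, and dividing by the decoded sequence length is a common normalisation when comparing scores across examples. A sketch:

# Sketch: relative confidence of the top paths for one example, and a simple length
# normalisation for cross-example comparison. Assumes the decoder's values are per-path
# log scores; this is an illustration, not taken from the TensorFlow documentation.
import numpy as np

def relative_confidences(log_scores):
    # Softmax over the top_paths scores of a single example.
    s = np.asarray(log_scores, dtype=np.float64)
    s = s - s.max()          # subtract max for numerical stability
    p = np.exp(s)
    return p / p.sum()

def length_normalised(log_score, decoded_length):
    # Average log score per decoded symbol.
    return log_score / max(decoded_length, 1)

print(relative_confidences([14.73168373, 14.45586109, 14.35735512]))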
