Tensorflow Serving - Error passing image to the server - python-3.x

I managed to get the server running, but I can't POST the image to my network. My network is a modification of the example, and when I POST to it, it returns the following error:
"error": "inputs is a plain value/list, but expecting an object as multiple input tensors required as per tensorinfo_map"
My client-side code is:
import requests
import json
import cv2
import numpy as np
from PIL import Image
import nsvision as nv
img = cv2.imread(r'./temp.png')
_, img_encoded = cv2.imencode('.png', img)
headers = {"content-type": "application/json"}
data = json.dumps({"signature_name": "serving_default", "inputs": [img_encoded.tolist()] })
json_response = requests.post(url="http://172.104.198.143:8501/v1/models/API_model:predict", data = data, headers = headers)
print(json_response.text)
My signature:
signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['image'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 200, 50, 1)
        name: serving_default_image:0
    inputs['label'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1)
        name: serving_default_label:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['ctc_loss'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 50, 37)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict
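The error message means that TensorFlow Serving expects "inputs" to be a JSON object keyed by the input names from the signature ('image' and 'label'), not a bare list, because the signature declares more than one input tensor. A minimal sketch of such a request, assuming the image should be fed as normalized grayscale floats of shape (1, 200, 50, 1) and that a dummy label is acceptable at inference time (both are assumptions; the real preprocessing has to match the training pipeline):
import json
import cv2
import numpy as np
import requests

# Preprocess the image to match the signature: DT_FLOAT, shape (-1, 200, 50, 1).
# The exact steps (resize, scaling) are assumptions and must mirror training.
img = cv2.imread('./temp.png', cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (50, 200))          # cv2.resize takes (width, height)
img = img.astype(np.float32) / 255.0      # scale to [0, 1]
img = img.reshape(1, 200, 50, 1)          # add batch and channel dimensions

# The signature also declares a 'label' input, so the request must name both
# tensors. A dummy label is sent here only to satisfy its (-1, -1) shape.
dummy_label = np.zeros((1, 1), dtype=np.float32)

data = json.dumps({
    "signature_name": "serving_default",
    "inputs": {                           # an object keyed by input names, not a plain list
        "image": img.tolist(),
        "label": dummy_label.tolist(),
    },
})

headers = {"content-type": "application/json"}
json_response = requests.post(
    url="http://172.104.198.143:8501/v1/models/API_model:predict",
    data=data,
    headers=headers,
)
print(json_response.text)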

Related

Launch a neural network in a browser

After running the neural network in the browser, an error appears ('The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32'), but it seems to me that the problem is not in the input_tensor but in the neural network itself.
Either I trained it incorrectly or converted it incorrectly.
I fine-tuned the pre-trained network 'ssd_mobilenet_v2_fpnlite_320x320_coco17' in Google Colab.
I saved the network in TensorFlow SavedModel format (saved_model):
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_tensor'] tensor_info:
        dtype: DT_UINT8
        shape: (1, -1, -1, 3)
        name: serving_default_input_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['detection_anchor_indices'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:0
    outputs['detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100, 4)
        name: StatefulPartitionedCall:1
    outputs['detection_classes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:2
    outputs['detection_multiclass_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100, 249)
        name: StatefulPartitionedCall:3
    outputs['detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:4
    outputs['num_detections'] tensor_info:
        dtype: DT_FLOAT
        shape: (1)
        name: StatefulPartitionedCall:5
    outputs['raw_detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 130944, 4)
        name: StatefulPartitionedCall:6
    outputs['raw_detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 130944, 249)
        name: StatefulPartitionedCall:7
  Method name is: tensorflow/serving/predict

Concrete Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          input_tensor: TensorSpec(shape=(1, None, None, 3), dtype=tf.uint8, name='input_tensor')
I checked the quality of the model (MobileNetV2, it finds one class in the image), and it is wonderful!
After that, I converted the saved model to TensorFlow.js with the following command:
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_node_names='detection_boxes','detection_classes','detection_features','detection_multiclass_scores','num_detections','raw_detection_boxes','raw_detection_scores' \
    --output_format=tfjs_graph_model \
    /content/gdrive/MyDrive/model_scoarbord/export/inference_graph/saved_model \
    /content/gdrive/MyDrive/model_scoarbord/web_model
After loading the network in the browser, I created a zero tensor to test whether the neural network works.
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@3.12.0/dist/tf.min.js"></script>
    <title>Document</title>
</head>
<body onload="">
    <script>
        async function loadModel() {
            const modelUrl = 'model.json';
            const model = await tf.loadGraphModel(modelUrl);
            console.log('Model loaded')
            // create a zero tensor to test the model
            const zeros = tf.zeros([1, -1, -1, 3]);
            const zeros2 = zeros.toInt()
            // checking the performance of the model
            model.predict(zeros).print();
            return model
        }
        loadModel()
    </script>
</body>
</html>
Accordingly, my directory looks like this:
group1-shard1of3.bin
group1-shard2of3.bin
group1-shard3of3.bin
index.html
model.json
After starting Live Server in Visual Studio Code, I see the following error:
util_base.js:153 Uncaught (in promise) Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32
I tried to explicitly specify the tensor type with const zeros2 = zeros.toInt() and made a test prediction with zeros2, which produced a different error:
graph_executor.js:166 Uncaught (in promise) Error: This execution contains the node 'StatefulPartitionedCall/map/while/exit/_435', which has the dynamic op 'Exit'. Please use model.executeAsync() instead. Alternatively, to avoid the dynamic ops, specify the inputs [StatefulPartitionedCall/map/TensorArrayV2Stack_1/TensorListStack]
Please tell me, what am I doing wrong?
How else can I check that a neural network in the tfjs_graph_model format actually works?
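Not an answer to the browser error itself, but one way to tell whether the problem comes from training/export or from the conversion is to load the SavedModel back in Python and push a dummy uint8 tensor through the serving signature. A sketch, assuming TensorFlow 2.x and the saved_model path used in the converter command above:
import tensorflow as tf

# Path assumed from the tensorflowjs_converter command; adjust as needed.
saved_model_dir = '/content/gdrive/MyDrive/model_scoarbord/export/inference_graph/saved_model'

model = tf.saved_model.load(saved_model_dir)
infer = model.signatures['serving_default']

# The signature expects a uint8 tensor of shape (1, height, width, 3);
# an all-zero image is enough to exercise the graph end to end.
dummy = tf.zeros([1, 320, 320, 3], dtype=tf.uint8)
outputs = infer(input_tensor=dummy)

print({name: t.shape for name, t in outputs.items()})
If this runs cleanly, the SavedModel itself is fine and the issue is on the tfjs side (the dtype of the test tensor, the -1 dimensions passed to tf.zeros, and the need for model.executeAsync with dynamic ops, as the error messages already suggest).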

Why does torch.nn.Upsample return a junk image?

When I execute the code segment below, nn.Upsample seems to be completely destroying my image. Am I applying it in the wrong way?
import torch
import imageio
import torch.nn as nn
from matplotlib import pyplot as plt
small = imageio.imread('small.png') # shape 200, 390, 4
small_reshaped = small.reshape(4, 200, 390) # shape 4, 200, 390
batch = torch.as_tensor(small_reshaped).unsqueeze(0) # shape 1, 4, 200, 390
ups = nn.Upsample((500, 970))
upsampled_batch = ups(batch) # shape 1, 4, 500, 970
upsampled_small = upsampled_batch[0].reshape(500, 970, 4) # shape 500, 970, 4
plt.imshow(small)
plt.imshow(upsampled_small)
plt.show()
Before upsampling:
After upsampling:
Original image (small.png):
Resolved it. Reshaping destroys the image. I should have transposed instead.
See https://discuss.pytorch.org/t/for-beginners-do-not-use-view-or-reshape-to-swap-dimensions-of-tensors/75524 for more details.
A working solution:
...
small_reshaped = small.transpose(2, 0, 1) # shape 4, 200, 390
...
upsampled_small = upsampled_batch[0].transpose(0,1).transpose(1,2) # shape 500, 970, 4
...
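For completeness, a full version of the corrected script (a sketch, assuming the same 4-channel small.png; an explicit float conversion and a cast back to uint8 are added defensively, since nearest-neighbour upsampling may not accept uint8 tensors on every PyTorch version):
import torch
import imageio
import torch.nn as nn
from matplotlib import pyplot as plt

small = imageio.imread('small.png')                       # shape (200, 390, 4), HWC layout

# Transpose (swap axes) instead of reshaping, so pixel values stay attached
# to their correct spatial positions.
small_chw = small.transpose(2, 0, 1)                      # shape (4, 200, 390), CHW layout

batch = torch.as_tensor(small_chw).unsqueeze(0).float()   # shape (1, 4, 200, 390)

ups = nn.Upsample((500, 970))
upsampled_batch = ups(batch)                              # shape (1, 4, 500, 970)

# Move channels back to the last axis for matplotlib (equivalent to the two
# .transpose() calls above) and cast back to uint8 for display.
upsampled_small = upsampled_batch[0].permute(1, 2, 0).byte().numpy()  # (500, 970, 4)

plt.imshow(small)
plt.show()
plt.imshow(upsampled_small)
plt.show()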

How to traverse a numpy array to show only 100 images out of 2656

This is my array shape:
print (img_array.shape)
(2656, 256, 256, 3)
and this is how I am printing a single image:
from matplotlib import pyplot as plt
from google.colab.patches import cv2_imshow
img3 = img_array[2655,:,:,:]
cv2_imshow(img3)
I want to print 100 images. Thanks in advance.
For the first 100 images I tried
img3 = img_array[0:99]
but it produces this error:
TypeError: Cannot handle this data type: (1, 1, 256, 3), |u1
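The error comes from handing a 4-D array (a whole stack of images) to a function that expects a single 2-D or 3-D image. A sketch that shows the first 100 images in a 10x10 matplotlib grid instead (note that [0:100], not [0:99], selects 100 images; if the images were loaded with OpenCV they are in BGR order, so colours may look swapped in matplotlib):
from matplotlib import pyplot as plt

first_100 = img_array[0:100]                 # shape (100, 256, 256, 3)

fig, axes = plt.subplots(10, 10, figsize=(20, 20))
for i, ax in enumerate(axes.flat):
    ax.imshow(first_100[i])                  # one (256, 256, 3) image per cell
    ax.axis('off')
plt.show()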

How to convert a 4D numpy array to a PIL image?

I'm doing some image machine learning with Keras, and if I put one picture (converted to a numpy array) into my model, it returns a 4D numpy array (the predicted picture).
I want to convert that array to an image using Image.fromarray from the PIL library,
but Image.fromarray only accepts a 2D or 3D array.
My predicted picture's array shape is (1, 256, 256, 3), where 1 is the number of samples,
so the 1 is useless data for the image. I want to convert it to (256, 256, 3) without damaging the image data. What should I do? Thanks for your time.
The 1 is not useless data, it is a singleton dimension. You can just drop it; the size of the data doesn't change.
You can do that with numpy.squeeze.
Also, make sure that your data is in the right format; for Image.fromarray this is uint8.
Example:
import numpy as np
from PIL import Image
data = np.ones((1,16,16,3))
for i in range(16):
    data[0,i,i,1] = 0.0
print("size: %s, type: %s"%(data.shape, data.dtype))
# size: (1, 16, 16, 3), type: float64
data_img = (data.squeeze()*255).astype(np.uint8)
print("size: %s, type: %s"%(data_img.shape, data_img.dtype))
# size: (16, 16, 3), type: uint8
img = Image.fromarray(data_img, mode='RGB')
img.show()

ValueError: setting an array element with a sequence error while cross validation

I am trying to do text sentiment analysis, but I always get this error.
My training data consists of two columns:
List of occurrences (X): a list of 0s and 1s based on the occurrence of words in the text document. Each array looks like [0 0 1 ..., 0 0 0] and has exactly 2115 values; there are no missing values.
Label of the data (label): also 0s and 1s, based on sentiment. There is just one value in each row for the label.
My training sample has 1440 observations. Here is a picture of my data.
Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")
    plt.legend(loc="best")
    return plt
title = "Learning Curves (Naive Bayes)"
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = GaussianNB()
plot_learning_curve(estimator, title, data.X, data.label, ylim=(0.3, 1.01), cv=cv, n_jobs=4)
When I run the code, I get this error:
/anaconda/lib/python3.6/site-packages/sklearn/utils/validation.py in check_array(array=231 [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... 0, 0, ...
Name: arrr, Length: 129, dtype: object, accept_sparse=False, dtype=<class 'numpy.float64'>, order=None, copy=False, force_all_finite=True, ensure_2d=True, allow_nd=False, ensure_min_samples=1, ensure_min_features=1, warn_on_dtype=False, estimator=None)
397
398 if sp.issparse(array):
399 array = _ensure_sparse_format(array, accept_sparse, dtype, copy,
400 force_all_finite)
401 else:
--> 402 array = np.array(array, dtype=dtype, order=order, copy=copy)
array = 231 [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... 0, 0, ...
Name: arrr, Length: 129, dtype: object
dtype = <class 'numpy.float64'>
order = None
copy = False
403
404 if ensure_2d:
405 if array.ndim == 1:
406 raise ValueError(
ValueError: setting an array element with a sequence.
What should I do to solve this problem?
Thanks
I solved the problem. The issue was with the dimensions. I also changed the label into an array. Going from a pandas DataFrame to a numpy array, each element still contained a Python list, so I changed it as follows:
featurelists=data.X.values.tolist()
X=np.array(featurelists)
y=data.label.as_matrix()
Now, it works.
Thanks all.
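To see why this works, here is a minimal self-contained sketch with a tiny made-up DataFrame standing in for the real 1440 x 2115 data; note that DataFrame.as_matrix() has since been removed from pandas, so .to_numpy() is used instead:
import numpy as np
import pandas as pd

# Toy stand-in: an 'X' column holding Python lists of 0/1 occurrences
# and a 'label' column holding 0/1 sentiment labels.
data = pd.DataFrame({
    'X': [[0, 1, 0], [1, 0, 0], [0, 0, 1]],
    'label': [1, 0, 1],
})

# A column of lists has dtype=object; sklearn cannot coerce it into a float
# matrix, which raises "setting an array element with a sequence".
print(data.X.values.dtype)               # object

# Stacking the lists yields a proper 2-D numeric array that sklearn accepts.
X = np.array(data.X.values.tolist())     # shape (3, 3) here, (1440, 2115) for the real data
y = data.label.to_numpy()
print(X.shape, X.dtype, y.shape)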
