I'm new to computer vision model structures, and I'm using TensorFlow for Node.js (@tensorflow/tfjs-node) to run some object detection models. With MobileNet and ResNet SSD, the models use the channels-last format, so when I create a tensor with tf.node.decodeImage the format is channels last by default, e.g. shape: [1, 1200, 1200, 3] for 3 channels, and the predictions work great: the models recognize objects.
But a model from PyTorch, converted to ONNX and then to the protobuf (PB) format, has a saved_model.pb that uses the channels-first format, e.g. shape: [1, 3, 1200, 1200].
Now I need to create a tensor from an image, but in channels-first format. I found many examples of creating conv1d/conv2d layers specifying dataFormat='channelsFirst', but I don't know how to apply that to image data. Here is the API: https://js.tensorflow.org/api/latest/#layers.conv2d .
Here is the tensor code:
const tf = require('@tensorflow/tfjs-node');
let imgTensor = tf.node.decodeImage(new Uint8Array(subBuffer), 3);
imgTensor = imgTensor.cast('float32').div(255);
imgTensor = imgTensor.expandDims(0); // add the leftmost batch axis of size 1
console.log('tensor', imgTensor);
This gives me a channels-last shape that is not compatible with the model's channels-first input:
tensor Tensor {
kept: false,
isDisposedInternal: false,
shape: [ 1, 1200, 1200, 3 ],
dtype: 'float32',
size: 4320000,
strides: [ 4320000, 3600, 3 ],
dataId: {},
id: 7,
rankType: '4',
scopeId: 4
}
I know of tf.reshape, but it just reshapes the data without converting it to channels first, and the resulting predictions seem useless. I don't know what I'm missing.
You can transpose the axes with tf.transpose; something like this converts NHWC to NCHW:
const nchw = tf.transpose(nhwc, [0, 3, 1, 2]);
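Applied to the snippet from the question, a minimal sketch (assuming subBuffer holds the raw image bytes, as above) would be:
const tf = require('@tensorflow/tfjs-node');

// Decode as NHWC (channels last), normalize, add the batch axis,
// then permute the axes to NCHW (channels first).
let imgTensor = tf.node.decodeImage(new Uint8Array(subBuffer), 3); // [H, W, 3]
imgTensor = imgTensor.cast('float32').div(255);
imgTensor = imgTensor.expandDims(0);                 // [1, H, W, 3]
const nchw = tf.transpose(imgTensor, [0, 3, 1, 2]);  // [1, 3, H, W]
console.log(nchw.shape); // e.g. [ 1, 3, 1200, 1200 ]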
As per the example in https://keras.io/examples/generative/cyclegan/, a pre-existing dataset is loaded for the implementation. I am trying to use my own dataset instead.
import tensorflow_datasets as tfds
data = tfds.folder_dataset.ImageFolder('Images', shape=(256, 256, 3))
ds = data.as_dataset()
where 'Images' is the root folder containing two subfolders, train and test; train contains trainA and trainB, and test contains testA and testB.
However, I am unable to understand how to access trainA, trainB, testA, and testB so that they are accepted by the Keras CycleGAN example.
Best practice is to write your own TensorFlow dataset. You can do so with the TFDS CLI (command-line interface):
Install the TFDS CLI: pip install -q tfds-nightly
Navigate into the directory of your dataset: cd path/to/my/project/datasets/
Create a new dataset: tfds new my_dataset
[...] Manually modify my_dataset/my_dataset.py to implement your dataset (a minimal sketch follows these steps).
Navigate into your new dataset: cd my_dataset/
Build your new TFDS dataset: tfds build
Within your project you then need to import your dataset
import my.project.datasets.my_dataset
and access it as you would any other tfds dataset:
ds = tfds.load('my_dataset')
The TensorFlow documentation for adding a dataset can be found here.
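For orientation, here is a minimal sketch of what my_dataset/my_dataset.py might contain. The class name, archive URL, and feature names are illustrative placeholders following the tfds.core.GeneratorBasedBuilder pattern; adapt them to your data:

import tensorflow_datasets as tfds

class MyDataset(tfds.core.GeneratorBasedBuilder):
    """Minimal example builder with the four CycleGAN-style splits."""
    VERSION = tfds.core.Version('1.0.0')

    def _info(self):
        # Declare what each example contains.
        return tfds.core.DatasetInfo(
            builder=self,
            features=tfds.features.FeaturesDict({
                'image': tfds.features.Image(shape=(None, None, 3)),
                'label': tfds.features.ClassLabel(names=['A', 'B']),
            }),
        )

    def _split_generators(self, dl_manager):
        # One split per folder; the zip URL/path is a placeholder.
        path = dl_manager.download_and_extract('https://example.com/Data.zip')
        return {
            split: self._generate_examples(path / split)
            for split in ('trainA', 'trainB', 'testA', 'testB')
        }

    def _generate_examples(self, path):
        # Yield (unique key, example dict) pairs.
        label = 'A' if path.name.endswith('A') else 'B'
        for img_path in path.glob('*.jpg'):
            yield img_path.name, {'image': img_path, 'label': label}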
Can't write a comment yet, but I think this may help some others: kosas' pipeline was working for me (I did some optional renaming for my use case), but I couldn't load the dataset with the current TensorFlow example for CycleGAN (https://www.tensorflow.org/tutorials/generative/cyclegan)
I used
tfds.load("Soiled")
and I got the error message that a 'label' was not found. I found a solution (TypeError: tf__normalize_img() missing 1 required positional argument: 'label') which states that you have to use
tfds.load("Soiled", as_supervised=True)
as otherwise the data is loaded as a dictionary and not as the needed tuple of (image, label).
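For illustration, a small sketch of why as_supervised matters for the tutorial's map step (normalize_img here is a stand-in for the tutorial's preprocessing function):

import tensorflow as tf
import tensorflow_datasets as tfds

def normalize_img(image, label):
    # Scale pixel values to [0, 1]; this two-argument form only works
    # when the dataset yields (image, label) tuples.
    return tf.cast(image, tf.float32) / 255.0, label

# as_supervised=True yields (image, label) tuples; without it, each element
# is a {'image': ..., 'label': ...} dictionary and the map call fails.
ds = tfds.load('Soiled', as_supervised=True)
train_a = ds['trainA'].map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)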
This addition worked for me.
I curated/wrote the whole code here:
https://github.com/asokraju/Soiled
and added a README file with specific instructions on how to use it. Hope this is helpful.
Custom TensorFlow Input Pipeline for CycleGANs
Steps to create the dataset
Organize the data set inside a Data.zip file
trainA
trainB
testA
testB
A and B represent the two classes.
Provide the path (of the Data.zip file) in line 28 of Soiled.py, i.e.,
_DL_URLS = {"Soiled": "C:\\Users\\<user>\\Downloads\\Data_001.zip"}
cd into the Soiled folder and use the tfds build command to build the data.
The TFRecord files can be found at C:\Users\<user>\tensorflow_datasets\soiled. If needed, these files can be moved elsewhere for use.
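If the files are moved, note that tfds.load accepts a data_dir argument, so a sketch like this (with an illustrative path) should pick them up from the new location:

import tensorflow_datasets as tfds

# data_dir points at the directory that contains the generated "soiled" folder.
ds = tfds.load('Soiled', data_dir='D:\\datasets\\tensorflow_datasets')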
Loading the data
There are multiple ways to do it.
Import the necessary packages:
import tensorflow as tf
import tensorflow_datasets as tfds
import sys
Ensure that the path to the Soiled folder containing the code, NOT the generated data, is accessible to the code. For this I have added the path as follows:
sys.path.insert(1, 'C:\\Users\\<user>\\Downloads\\')
Then the data can be loaded using:
ds = tfds.load('Soiled')
ds
{'trainA': <PrefetchDataset shapes: {image: (None, None, 3), label: ()}, types: {image: tf.uint8, label: tf.int64}>,
'trainB': <PrefetchDataset shapes: {image: (None, None, 3), label: ()}, types: {image: tf.uint8, label: tf.int64}>,
'testA': <PrefetchDataset shapes: {image: (None, None, 3), label: ()}, types: {image: tf.uint8, label: tf.int64}>,
'testB': <PrefetchDataset shapes: {image: (None, None, 3), label: ()}, types: {image: tf.uint8, label: tf.int64}>}
test:
next(iter(ds['trainA']))
{'image': <tf.Tensor: shape=(1200, 1920, 3), dtype=uint8, numpy=
array([[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[115, 173, 187],
[112, 174, 197],
[108, 172, 199]],
[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[119, 170, 191],
[115, 165, 192],
[117, 168, 197]],
[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[109, 145, 179],
[134, 162, 199],
[134, 158, 194]],
...
...,
[ 72, 95, 67],
[ 78, 99, 66],
[ 79, 99, 62]]], dtype=uint8)>,
'label': <tf.Tensor: shape=(), dtype=int64, numpy=0>}
Steps used to create the folder structure.
Install tensorflow_datasets package
On the command line, type tfds new Soiled. This will create a Soiled folder with the file structure:
checksums.tsv
dummy_data/
Soiled.py
Soiled_test.py
Edit Soiled.py as needed.
Possible issues:
If it fails to build the pipeline, delete the tensorflow_datasets folder BEFORE you retry. On Windows it can be found at C:\Users\<user>.
If it gives an error similar to
# tensorflow.python.framework.errors_impl.NotFoundError: Could not find directory C:\Users\<user>\tensorflow_datasets\downloads\extracted\ZIP.Users_kkosara_Downloads_Data_18r38_Co4F-G6ka9wRk2wGFbDPqLZu8TekEV7s9L9enI.zip\testA\trainA
try changing the data_dirs in the relevant lines to path_to_dataset, or whatever ensures that it has the correct path to the downloaded data.
Ensure that the folder structure is proper: Data.zip should contain only trainA, trainB, testA, and testB (as in step 1 above), where A and B represent the two classes, with nothing except the image files inside those folders.
Used Resources
How to load custom data into tfds for keras cyclegan example?
https://www.tensorflow.org/datasets/cli
https://www.tensorflow.org/datasets/catalog/cycle_gan
https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/cyclegan.ipynb#scrollTo=Ds4o1h4WHz9U
I managed to get the server to work, but I can't POST the image to my network. My network is a modification of the example, and when I POST it, I get the following error:
"error": "inputs is a plain value/list, but expecting an object as multiple input tensors required as per tensorinfo_map"
My client side is:
import requests
import json
import cv2
import numpy as np
from PIL import Image
import nsvision as nv
img = cv2.imread(r'./temp.png')
_, img_encoded = cv2.imencode('.png', img)
headers = {"content-type": "application/json"}
data = json.dumps({"signature_name": "serving_default", "inputs": [img_encoded.tolist()] })
json_response = requests.post(url="http://172.104.198.143:8501/v1/models/API_model:predict", data = data, headers = headers)
print(json_response.text)
My signature:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['image'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 200, 50, 1)
name: serving_default_image:0
inputs['label'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1)
name: serving_default_label:0
The given SavedModel SignatureDef contains the following output(s):
outputs['ctc_loss'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 50, 37)
name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict
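Since the signature declares two named inputs (image and label), the REST payload must map each input name to a value; a plain list under "inputs" triggers exactly that error. Here is a hedged sketch of a matching client; the grayscale resize to (200, 50, 1) and the dummy label value are assumptions derived from the declared shapes:

import json
import cv2
import numpy as np
import requests

# Preprocess to match the declared input shape (-1, 200, 50, 1).
img = cv2.imread('./temp.png', cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (50, 200))      # dsize is (width, height) -> array shape (200, 50)
img = img.astype(np.float32) / 255.0
img = img.reshape(1, 200, 50, 1)

data = json.dumps({
    "signature_name": "serving_default",
    # "inputs" must be an object keyed by input names, not a plain list.
    "inputs": {
        "image": img.tolist(),
        "label": [[0.0]],  # placeholder for the (-1, -1) label input
    },
})
headers = {"content-type": "application/json"}
json_response = requests.post(
    url="http://172.104.198.143:8501/v1/models/API_model:predict",
    data=data, headers=headers)
print(json_response.text)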
The issue
I developed a simple Node.js app for object detection using @tensorflow/tfjs-node. Everything works fine on my development PC (Windows 10 Pro), but when I try to execute it on my Raspberry Pi 2B (Raspbian 10), I get the following error:
Overriding the gradient for 'Max'
Overriding the gradient for 'OneHot'
Overriding the gradient for 'PadV2'
Overriding the gradient for 'SpaceToBatchND'
Overriding the gradient for 'SplitV'
2020-07-31 11:25:12.068892: I tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: ./assets/saved_model
2020-07-31 11:25:12.643852: I tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2020-07-31 11:25:13.206821: I tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: fail. Took 1137915 microseconds.
Error: Failed to load SavedModel: Op type not registered 'NonMaxSuppressionV5' in binary running on raspberrypi. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
at NodeJSKernelBackend.loadSavedModelMetaGraph (/home/pi/storage/tensorflow-test-node/node_modules/@tensorflow/tfjs-node/dist/nodejs_kernel_backend.js:1588:29)
at Object.<anonymous> (/home/pi/storage/tensorflow-test-node/node_modules/@tensorflow/tfjs-node/dist/saved_model.js:429:45)
at step (/home/pi/storage/tensorflow-test-node/node_modules/@tensorflow/tfjs-node/dist/saved_model.js:48:23)
at Object.next (/home/pi/storage/tensorflow-test-node/node_modules/@tensorflow/tfjs-node/dist/saved_model.js:29:53)
at fulfilled (/home/pi/storage/tensorflow-test-node/node_modules/@tensorflow/tfjs-node/dist/saved_model.js:20:58)
I can reproduce it with the following lines:
const tf = require('@tensorflow/tfjs-node');
// Native SavedModel: ./assets/saved_model/saved_model.pb
const objectDetectionModel = await tf.node.loadSavedModel('./assets/saved_model'); // Error
// ...
I suppose that the error is related to the SavedModel version, but I don't know how to convert it for use on the Raspberry Pi, or why the Node.js app needs a different SavedModel when executing on Windows versus Raspbian.
Details
Environment
Development:
OS: Windows 10 Pro
NodeJS: v12.16.2
NPM: 6.11.3
Target (Raspberry PI):
OS: Raspbian 10
NodeJS: v12.18.3
NPM: 6.14.6
NodeJS app
@tensorflow/tfjs-node@2.0.1 is the only dependency declared in the package.json.
Training
The model was trained in Python following this guide (the TensorFlow version used was 1.15.2).
SavedModel
Details of SavedModel (command saved_model_cli show --dir saved_model --tag_set serve --signature_def serving_default executed):
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
dtype: DT_INT32
shape: (-1, -1, -1, 3)
name: image_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300, 4)
name: detection_boxes:0
outputs['detection_classes'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300)
name: detection_classes:0
outputs['detection_multiclass_scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300, 37)
name: detection_multiclass_scores:0
outputs['detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300)
name: detection_scores:0
outputs['num_detections'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: num_detections:0
outputs['raw_detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300, 4)
name: raw_detection_boxes:0
outputs['raw_detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300, 37)
name: raw_detection_scores:0
Method name is: tensorflow/serving/predict
You need to convert your model for TensorFlow Lite (with reduced ops). The error you received is due to the lack of ops available on the Raspberry Pi when loading a desktop-compiled model (which has more ops available). Read more about ops here: https://www.tensorflow.org/lite/guide/ops_select
There already is a build script that exports the model for TF Lite, similar to the one you're using (same folder in the official examples repo). The functionality is the same, but the input format is slightly different. Check it out: https://www.github.com/tensorflow/models/tree/master/research%2Fobject_detection%2Fexport_tflite_ssd_graph.py
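For reference, a typical invocation of that export script looks something like this; the paths are placeholders, and the flags follow the script's documented interface:

python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=path/to/pipeline.config \
    --trained_checkpoint_prefix=path/to/model.ckpt \
    --output_directory=path/to/tflite_export \
    --add_postprocessing_op=true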
I have been trying for hours to find an answer for this strange behavior of cv2.merge().
In short, I'm merging 3 single-channel uint8 images, each of size 960x1280, and get a merged image of 960x1280x3,
but each channel seems to be 1280x3 instead of 960x1280.
As a result, I can't plot it.
I'm loading each image using:
img = cv2.imread(file).astype(np.uint8)
if len(img.shape) > 2: img = img[:,:,1]
Here is the code for merging (with additional information):
alg = (img1,img2,img3)
print('type: ',type(alg[0]),type(alg[1]),type(alg[2]))
print('dtype: ',alg[0].dtype, alg[1].dtype, alg[2].dtype)
print('shape: ',alg[0].shape, alg[1].shape, alg[2].shape)
PseudoRGB = cv2.merge(alg)
print('\nmerged type: ',type(PseudoRGB))
print('merged dtype: ',PseudoRGB.dtype)
print('merged shape: ',PseudoRGB.shape)
print('merged shape, each channel: ',PseudoRGB[0].shape, PseudoRGB[1].shape, PseudoRGB[2].shape)
That gives me:
type: <class 'numpy.ndarray'> <class 'numpy.ndarray'> <class 'numpy.ndarray'>
dtype: uint8 uint8 uint8
shape: (960, 1280) (960, 1280) (960, 1280)
merged type: <class 'numpy.ndarray'>
merged dtype: uint8
merged shape: (960, 1280, 3)
merged shape, each channel: (1280, 3) (1280, 3) (1280, 3)
Any help is much appreciated.
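For anyone hitting the same confusion, a minimal numpy sketch: indexing a 3-D array with a single subscript selects along the first axis (a row), not a channel, which is exactly why PseudoRGB[0].shape reads (1280, 3) above.

import numpy as np

merged = np.zeros((960, 1280, 3), dtype=np.uint8)

# A single index selects along axis 0 (rows), not channels:
print(merged[0].shape)        # (1280, 3) -> first row, all channels
# A channel is a slice of the last axis:
print(merged[:, :, 0].shape)  # (960, 1280) -> the full first channel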