opencv python3 merge acting weird - python-3.x

I have been trying for hours to find an answer for this strange behavior of cv2.merge().
In short, I'm merging three single-channel uint8 images of size 960x1280 and get a merged image of 960x1280x3,
but each channel appears to be 1280x3 instead of 960x1280.
As a result, I can't plot it.
I'm loading each image using:
img = cv2.imread(file).astype(np.uint8)
# if the file loads as multi-channel, keep only one channel
if len(img.shape) > 2: img = img[:,:,1]
Here is the code for merging (with additional information):
alg = (img1,img2,img3)
print('type: ',type(alg[0]),type(alg[1]),type(alg[2]))
print('dtype: ',alg[0].dtype, alg[1].dtype, alg[2].dtype)
print('shape: ',alg[0].shape, alg[1].shape, alg[2].shape)
PseudoRGB = cv2.merge(alg)
print('\nmerged type: ',type(PseudoRGB))
print('merged dtype: ',PseudoRGB.dtype)
print('merged shape: ',PseudoRGB.shape)
print('merged shape, each channel: ',PseudoRGB[0].shape, PseudoRGB[1].shape, PseudoRGB[2].shape)
That gives me:
type: <class 'numpy.ndarray'> <class 'numpy.ndarray'> <class 'numpy.ndarray'>
dtype: uint8 uint8 uint8
shape: (960, 1280) (960, 1280) (960, 1280)
merged type: <class 'numpy.ndarray'>
merged dtype: uint8
merged shape: (960, 1280, 3)
merged shape, each channel: (1280, 3) (1280, 3) (1280, 3)
Any help is much appreciated.
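For what it's worth, this looks like ordinary NumPy indexing rather than a cv2.merge bug: PseudoRGB[0] selects the first row along axis 0 (hence shape (1280, 3)), not the first channel. A minimal sketch of the distinction, using a stand-in array of the same shape:
import cv2
import numpy as np

# stand-in for the merged image, same shape as in the question
PseudoRGB = np.zeros((960, 1280, 3), dtype=np.uint8)

# a single index selects along the first axis (rows), not channels
print(PseudoRGB[0].shape)        # (1280, 3) -- first row, all channels

# channels live on the last axis, so slice that axis instead
print(PseudoRGB[:, :, 0].shape)  # (960, 1280) -- first channel

# cv2.split does the same thing for every channel at once
b, g, r = cv2.split(PseudoRGB)   # each has shape (960, 1280)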

Related

Tensorflow Serving - Error passing image to the server

I managed to get the server to work, but I can't POST the image to my network. My network is a modification of the example, and when I POST the image it gives the following error.
"error": "inputs is a plain value/list, but expecting an object as multiple input tensors required as per tensorinfo_map"
My client-side code is:
import requests
import json
import cv2
import numpy as np
from PIL import Image
import nsvision as nv
img = cv2.imread(r'./temp.png')
_, img_encoded = cv2.imencode('.png', img)
headers = {"content-type": "application/json"}
data = json.dumps({"signature_name": "serving_default", "inputs": [img_encoded.tolist()] })
json_response = requests.post(url="http://172.104.198.143:8501/v1/models/API_model:predict", data = data, headers = headers)
print(json_response.text)
My signature:
signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['image'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 200, 50, 1)
        name: serving_default_image:0
    inputs['label'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1)
        name: serving_default_label:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['ctc_loss'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 50, 37)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict
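The error message points at the fix: because this SignatureDef has two named inputs (image and label), the TensorFlow Serving REST API expects "inputs" to be a JSON object keyed by input name, not a plain list. The signature also wants DT_FLOAT pixels of shape (-1, 200, 50, 1) rather than PNG-encoded bytes. A hedged sketch of a client along those lines (the grayscale load, the resize, and the label placeholder are assumptions, not taken from the original model):
import json
import cv2
import numpy as np
import requests

# load as grayscale and shape to match the signature: (-1, 200, 50, 1);
# cv2.resize takes (width, height), so (50, 200) yields a 200x50 image
img = cv2.imread(r'./temp.png', cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (50, 200)).astype(np.float32)
img = img[np.newaxis, :, :, np.newaxis]  # shape (1, 200, 50, 1)

# with multiple named inputs, "inputs" must be an object keyed by name
data = json.dumps({
    "signature_name": "serving_default",
    "inputs": {
        "image": img.tolist(),
        "label": [[0.0]],  # placeholder for the (-1, -1) label input
    },
})
headers = {"content-type": "application/json"}
json_response = requests.post(url="http://172.104.198.143:8501/v1/models/API_model:predict", data=data, headers=headers)
print(json_response.text)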

numpy ndarray dtype conversion failed

I have a piece of code that does some ndarray transformations, and I'd like to convert the final output to np.uint8 and write it to a file. However, the conversion did not work. Here is the piece of code:
print("origin dtype:", image[0].dtype)
print(type(image[0]))
image[0] = image[0].astype(np.uint8)
print(image[0])
print("image datatype1:",image[0].dtype)
image[0].tofile(f'{image_name}_{org_h}_{org_w}_{dst_h}_{dst_w}.bin')
print("image datatype2:",image[0].dtype)
Here is what I got:
origin dtype: float32
<class 'numpy.ndarray'>
[[[ 71.  73.  73. ... 167. 170. 173.]
  [ 62.  63.  64. ... 164. 168. 170.]
  [ 54.  56.  57. ... 157. 163. 165.]
  ...
  [142. 154. 138. ... 115.  91. 111.]
  [158. 127. 123. ... 128. 130. 113.]
  [133. 114. 106. ... 114. 110. 106.]]]
image datatype1: float32
image datatype2: float32
Can somebody help me with where it went wrong?
Rows of a 2D array cannot have different dtypes: when you assign a uint8 array to a row of a float32 array, it is cast back to float32. For example:
image = np.ones((4, 4), dtype='float32')
print(image[0].dtype)
# float32
image[0] = image[0].astype('uint8')
print(image[0].dtype)
# float32
Your options are either to convert the dtype of the entire array at once:
image = image.astype('uint8')
print(image[0].dtype)
# uint8
Or to convert your 2D array to a list of 1D arrays, each of which can then have its own dtype:
image = list(image)
print(image[0].dtype)
# float32
image[0] = image[0].astype('uint8')
print(image[0].dtype)
# uint8
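Applied back to the question's code, that means casting the whole array before the tofile call; tofile writes raw bytes in whatever dtype the array has at that moment:
image = image.astype(np.uint8)  # convert the entire array, not a row
image[0].tofile(f'{image_name}_{org_h}_{org_w}_{dst_h}_{dst_w}.bin')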

Error relating to conversion from list to tensor in Pytorch

There is a variable 'tmp' (nested three levels deep).
tmp = [torch.tensor([1]),torch.tensor([2,3])]
type(tmp) -> <class 'list'>
type(tmp[0]) -> <class 'torch.Tensor'>
type(tmp[0][0]) -> <class 'torch.Tensor'>
I want to convert 'tmp' into a torch.Tensor.
But when I run the code below, an error occurs.
torch.Tensor(tmp)
>> ValueError: only one element tensors can be converted to Python scalars
How can I fix this?
torch.stack does not work in this case because the tensors in 'tmp' are not the same shape.
Use torch.stack. All tensors in the list need to be of the same size.
>>> torch.stack(tmp)
Ex:
>>> tmp = [torch.rand(2,2),torch.rand(2,2)]
>>> tmp = torch.stack(tmp)
>>> tmp
tensor([[[0.0212, 0.1864],
         [0.0070, 0.3381]],

        [[0.1607, 0.9568],
         [0.9093, 0.1835]]])
>>> type(tmp)
<class 'torch.Tensor'>
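Since the tensors in tmp have different lengths, torch.stack indeed won't work here. Two common alternatives, sketched below (which one you want depends on the shape you actually need):
import torch
from torch.nn.utils.rnn import pad_sequence

tmp = [torch.tensor([1]), torch.tensor([2, 3])]

# flatten everything into one 1-D tensor
flat = torch.cat(tmp)                         # tensor([1, 2, 3])

# or zero-pad to a common length along a new batch dimension
padded = pad_sequence(tmp, batch_first=True)  # tensor([[1, 0], [2, 3]])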

OpenCV merge failing to merge image channel

I'm attempting to add Gaussian noise to a single channel of an image.
import cv2 as cv
import numpy as np
img1 = cv.imread('input/foo.png')
img1_blue, img1_green, img1_red = cv.split(img1)
img1_h, img1_w, _ = img1.shape
s = 5
noise = np.random.normal(0, s, (img1_h, img1_w))
img1_gn = img1_green + noise
print(img1_green.shape) # (512, 384)
print(img1_gn.shape) # (512, 384)
print(img1_blue.shape) # (512, 384)
img1_g_noise = cv.merge((img1_blue, img1_gn, img1_red))
This results in the following error:
---------------------------------------------------------------------------
error                                     Traceback (most recent call last)
<ipython-input-34-049cf9e65133> in <module>
     13
---> 14 img1_g_noise = cv.merge((img1_blue, img1_gn, img1_red))
     15
error: OpenCV(3.4.5) /io/opencv/modules/core/src/merge.cpp:293: error: (-215:Assertion failed) mv[i].size == mv[0].size && mv[i].depth() == depth in function 'merge'
I'm not sure how or why this is happening. The resulting noisy green channel has the same dimensions and type as the other two channels, and recombining the original green channel works just fine. Any pointers in the right direction are appreciated; thank you in advance.
This is because of a datatype mismatch between the noise and the channels: np.random.normal returns an array with the default dtype numpy.float64, while the image channels are uint8. You have to create the noise with the dtype of the channels by adding .astype(img1_blue.dtype) to the noise definition.
Edited code:
import cv2 as cv
import numpy as np
img1 = cv.imread('input/foo.png')
img1_blue, img1_green, img1_red = cv.split(img1)
img1_h, img1_w, _ = img1.shape
s = 5
noise = np.random.normal(0, s, (img1_h, img1_w)).astype(img1_blue.dtype)
img1_gn = img1_green + noise
print(img1_green.shape) # (512, 384)
print(img1_gn.shape) # (512, 384)
print(img1_blue.shape) # (512, 384)
img1_g_noise = cv.merge((img1_blue, img1_gn, img1_red))
cv.imshow("img1_g_noise", img1_g_noise)
cv.waitKey()
This is a dtype problem.
By default, img1_blue and img1_red are of uint8 type,
but the noise is of float64 type.
Solution 1
You can change the noise to uint8 type by:
noise = noise.astype(img1_red.dtype)
but this makes the noise lose a lot of information (the float values are truncated).
Solution 2
You can instead convert all of the channels to the noise's float64 dtype by adding these two lines:
img1_blue = img1_blue.astype(img1_gn.dtype)
img1_red = img1_red.astype(img1_gn.dtype)
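A middle ground, sketched here on the question's variables, is to do the addition in float and clip to the valid range before converting back, so negative noise values don't wrap around in uint8:
import cv2 as cv
import numpy as np

img1 = cv.imread('input/foo.png')
img1_blue, img1_green, img1_red = cv.split(img1)

s = 5
noise = np.random.normal(0, s, img1_green.shape)

# add in float64, clip to [0, 255], then convert back to uint8
img1_gn = np.clip(img1_green.astype(np.float64) + noise, 0, 255).astype(np.uint8)

img1_g_noise = cv.merge((img1_blue, img1_gn, img1_red))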

how to convert 4d numpy array to PIL image?

I'm doing some image machine learning with Keras, and if I feed one picture, converted to a numpy array, into my model, it returns a 4-D numpy array (the predicted picture).
I want to convert that array to an image using Image.fromarray from the PIL library,
but Image.fromarray only accepts a 2-D or 3-D array.
My predicted picture's array shape is (1, 256, 256, 3), where the 1 is the number of samples,
so the 1 is useless data for the image. I want to convert it to (256, 256, 3) without damaging the image data. What should I do? Thanks for your time.
The 1 is not useless data, it is a singleton dimension. You can just drop it; the amount of data won't change.
You can do that with numpy.squeeze.
Also, make sure that your data is in the right format; for Image.fromarray this is uint8.
Example:
import numpy as np
from PIL import Image
data = np.ones((1,16,16,3))
for i in range(16):
    data[0,i,i,1] = 0.0
print("size: %s, type: %s"%(data.shape, data.dtype))
# size: (1, 16, 16, 3), type: float64
data_img = (data.squeeze()*255).astype(np.uint8)
print("size: %s, type: %s"%(data_img.shape, data_img.dtype))
# size: (16, 16, 3), type: uint8
img = Image.fromarray(data_img, mode='RGB')
img.show()
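One caveat: if any other axis could also be of size 1 (say, a single-channel prediction of shape (1, 256, 256, 1)), a plain squeeze would drop it too. To remove only the batch dimension, pass the axis explicitly or index it away:
data_img = (np.squeeze(data, axis=0) * 255).astype(np.uint8)  # or data[0]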
