I'm doing some image machine learning with Keras, and when I feed a single picture (converted to a numpy array) into my model, it returns a 4D numpy array (the predicted picture).
I want to convert that array to an image using Image.fromarray from the PIL library,
but Image.fromarray only accepts 2D or 3D arrays.
My predicted picture's array shape is (1, 256, 256, 3), where the leading 1 is the number of samples.
So that 1 is useless for the image. I want to convert the array to (256, 256, 3) without damaging the image data. What should I do? Thanks for your time.
The 1 is not useless data, it is a singleton dimension. You can just leave it out; the size of the data won't change.
You can do that with numpy.squeeze.
Also, make sure that your data is in the right format; for Image.fromarray this is uint8.
Example:
import numpy as np
from PIL import Image
data = np.ones((1,16,16,3))
for i in range(16):
    data[0, i, i, 1] = 0.0
print("size: %s, type: %s"%(data.shape, data.dtype))
# size: (1, 16, 16, 3), type: float64
data_img = (data.squeeze()*255).astype(np.uint8)
print("size: %s, type: %s"%(data_img.shape, data_img.dtype))
# size: (16, 16, 3), type: uint8
img = Image.fromarray(data_img, mode='RGB')
img.show()
This is my array shape:
print(img_array.shape)
(2656, 256, 256, 3)
and this is how I am displaying a single image:
from matplotlib import pyplot as plt
from google.colab.patches import cv2_imshow
img3 = img_array[2655,:,:,:]
cv2_imshow(img3)
I want to display the first 100 images.
Thanks in advance.
For the first 100 images:
img3 = img_array[0:100]
but it is producing an error:
TypeError: Cannot handle this data type: (1, 1, 256, 3), |u1
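The error appears because a slice like img_array[0:100] is still a 4D stack of images, while cv2_imshow expects a single (H, W, 3) image. A minimal sketch of one way to show the first 100 images, assuming img_array holds uint8 RGB images:
from google.colab.patches import cv2_imshow

# cv2_imshow handles one image at a time, so loop over the first 100 slices
for i in range(100):
    cv2_imshow(img_array[i])  # each slice has shape (256, 256, 3)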
I'm trying to just apply MaxPool2d (from torch.nn) to a single image (not as a layer in a network). Here is my code right now:
name = 'astronaut'
imshow(images[name], name)
img = images[name]
# pool of square window of size=3, stride=1
m = nn.MaxPool2d(3, stride=1)
img_transform = torch.Tensor(images[name])
plt.imshow(m(img_transform).view((512,510)))
The issue is that this code gives me a very green image as a result. I am sure the problem is with the dimensions of view, but I was unable to find how to apply maxpool to just one image, so I couldn't fix it. The image I'm considering is 512x512. The arguments to view make no sense to me right now; (512, 510) is just the only shape that gives a result...
If for example, I gave 512,512 as the argument for view, I get the following error:
RuntimeError: shape '[512, 512]' is invalid for input of size 261120
If anyone can tell me how to apply maxpool, avgpool, or minpool to an image and display the result I would be super grateful!
Thanks (:
Assuming your image is a numpy.array upon loading (please see comments for explanation of each step):
import numpy as np
import torch
# Assuming you have 3 color channels in your image
# Assuming your data is in Height, Width, Channels format
numpy_img = np.random.randint(low=0, high=255, size=(512, 512, 3))
# Transform to tensor
tensor_img = torch.from_numpy(numpy_img)
# PyTorch takes images in Channels, Height, Width format
# We have to switch their dimensions using `permute`
tensor_img = tensor_img.permute(2, 0, 1)
tensor_img.shape # Shape [3, 512, 512]
# Layers always need batch as first dimension (even for one image)
# unsqueeze will add it for you
ready_tensor_img = tensor_img.unsqueeze(dim=0)
ready_tensor_img.shape # Shape [1, 3, 512, 512]
pooling = torch.nn.MaxPool2d(kernel_size=3, stride=1)
# You need to cast your image to float as
# pooling is not implemented for Tensors of type long
new_img = pooling(ready_tensor_img.float())
If your image is black and white, you would need shape [1, 1, 512, 512] (a single channel); you can't leave out or squeeze those dimensions, they always have to be there for any torch.nn.Module!
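For example, a rough sketch of the single-channel case (the grayscale image here is just a made-up random array, reusing the pooling layer defined above):
# Hypothetical black-and-white image of shape (512, 512)
gray_np = np.random.rand(512, 512).astype(np.float32)
gray_tensor = torch.from_numpy(gray_np)  # Shape [512, 512]
# Add the channel and batch dimensions that torch.nn.MaxPool2d expects
ready_gray = gray_tensor.unsqueeze(0).unsqueeze(0)  # Shape [1, 1, 512, 512]
pooled_gray = pooling(ready_gray)  # Shape [1, 1, 510, 510]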
To transform tensor into image again you could use similar steps:
# Cast to long and squeeze batch dimension
no_batch = new_img.long().squeeze(dim=0)
# Permute back to Height, Width, Channels
width_height_channels = no_batch.permute(1, 2, 0)
width_height_channels.shape # Shape: [510, 510, 3]
# Cast to numpy and you have your image
final_image = width_height_channels.numpy()
I have 1000 RGB images which I want to read from the current directory and store in a numpy array of shape (1000, 3, 32, 32) for use in a CNN.
For this, I read a sample image and resized it to 32x32. Then I appended it to an array 'a' which I created with zeros of shape (1000, 3, 32, 32). But I am getting the error "'numpy.ndarray' object has no attribute 'append'". How can this be solved? If it needs a different approach I am open to that as well.
import cv2
import matplotlib.pyplot as plt
import numpy as np
reshapedimage = cv2.resize(cv2.imread("0 (1).png", 1), (32, 32))
a = np.zeros((1000,3,32,32))
a.append(reshapedimage)
I think you mean this:
import numpy as np
# Create dummy image-like thing
w, h = 32, 32
im=np.arange(h*w*3).reshape((3,h,w))
# Create empty list
stack=[]
# Append the image to the stack 5 times
stack.append(im)
stack.append(im)
stack.append(im)
stack.append(im)
stack.append(im)
# Make Numpy array and check size
v = np.array(stack)
print(v.shape)
Output
(5, 3, 32, 32)
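Alternatively, since you already know there will be 1000 images, you could fill the pre-allocated array by index instead of appending. A rough sketch, assuming the files are named "0 (1).png" through "0 (1000).png" like the sample in the question (note the transpose, because cv2.imread returns (32, 32, 3) after the resize, while your target shape is (3, 32, 32)):
import cv2
import numpy as np

a = np.zeros((1000, 3, 32, 32), dtype=np.uint8)
for i in range(1000):
    img = cv2.resize(cv2.imread("0 (%d).png" % (i + 1), 1), (32, 32))  # (32, 32, 3)
    a[i] = img.transpose(2, 0, 1)  # store as (3, 32, 32)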
I am using the MNIST dataset for training a capsule network in Keras.
After training, I want to display an image from the MNIST dataset. For loading images, mnist.load_data() is used. The data is stored as (x_train, y_train), (x_test, y_test).
Now, for visualizing an image, my code is as follows:
img_path = x_test[1]
print(img_path.shape)
plt.imshow(img_path)
plt.show()
The code gives output as follows:
(28, 28, 1)
and the error on plt.imshow(img_path) as follows:
TypeError: Invalid dimensions for image data
How can I show the image in PNG format? Help!
As per the comment of @sdcbr, np.squeeze removes unnecessary dimensions. If the image has 2 dimensions then imshow works fine. If the image has 3 dimensions then you have to squeeze out the extra size-1 dimension. For even higher-dimensional data you will have to reduce it to 2 dimensions, so np.squeeze may need to be applied multiple times. (Or you may use some other dimension-reduction function for higher-dimensional data.)
import numpy as np
import matplotlib.pyplot as plt
img_path = x_test[1]
print(img_path.shape)
if len(img_path.shape) == 3:
    plt.imshow(np.squeeze(img_path))
elif len(img_path.shape) == 2:
    plt.imshow(img_path)
else:
    print("Higher dimensional data")
Example:
plt.imshow(test_images[0])
TypeError: Invalid shape (28, 28, 1) for image data
Correction:
plt.imshow(tf.squeeze(test_images[0]))
(The resulting plot shows the digit 7.)
You can use tf.squeeze for removing dimensions of size 1 from the shape of a tensor.
plt.imshow(tf.squeeze(x_test[1]))
Check out TF2.0 example
matplotlib.pyplot.imshow() does not support images of shape (h, w, 1). Just remove the last dimension by reshaping the image to (h, w): newimage = img.reshape(h, w).
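For the MNIST image from the question, a minimal sketch of that fix would be:
import matplotlib.pyplot as plt

img_path = x_test[1]                  # shape (28, 28, 1)
plt.imshow(img_path.reshape(28, 28))  # drop the trailing channel dimension
plt.show()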
While I know how to convert a single color image (32,32,3) to grayscale using CV2:
img = cv2.cvtColor( img, cv2.COLOR_RGB2GRAY )
I need to convert a whole batch of 60,000 images stored in a 4D numpy array (60000, 32, 32, 3). How can I achieve that?
Let's say your 4D array of images is called img_stack with shape (60000,32,32,3).
You could do:
gray_stack = np.empty_like(img_stack[..., 0])
for i in range(img_stack.shape[0]):
    gray_stack[i] = cv2.cvtColor(img_stack[i], cv2.COLOR_RGB2GRAY)
Resulting shape is (60000,32,32).
Or you could do:
gray_stack = np.empty_like(img_stack[..., :1])
for i in range(img_stack.shape[0]):
    gray_stack[i, :, :, 0] = cv2.cvtColor(img_stack[i], cv2.COLOR_RGB2GRAY)
Resulting shape is (60000,32,32,1).
Bonus Tensorflow solution:
gray_stack = tf.image.rgb_to_grayscale(img_stack, name=None)
Resulting shape will be (60000,32,32,1).
The above OpenCV solutions might actually perform faster.
One more option using numpy:
grayscale_imgs = np.dot(img_stack, [0.299, 0.587, 0.114])
grayscale_imgs.shape # => (60000, 32, 32)
More about the weighted sum can be found here
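Note that the dot product returns a float array; if you need uint8 images to roughly match the OpenCV results, a rough sketch of the extra cast:
# Round the weighted sum and cast back to 8-bit pixel values
grayscale_imgs_uint8 = np.round(np.dot(img_stack, [0.299, 0.587, 0.114])).astype(np.uint8)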