Saving to .png from an array gives glitchy results - python-3.x

Not a big question. Here is the code:
def rgb_to_png(rgb, path):
    rgbarray = np.array([np.array([x for x in i]) for i in rgb])
    rgbimage = PIL.Image.fromarray(rgbarray, "RGB")
    rgbimage.save(path+"\sample.png", "PNG", quality=100)
This is what it's supposed to look like:
Image1
And this is what the code outputs:
Output
It seems that it's saving each RGB value as a single pixel or something similar, but I can't see what's wrong in the code. What is happening in there?
(Note: both pictures' resolution is 128 by 128.)

You need to specify the type of your Numpy arrays as you create them, else they will not be unsigned 8-bit like PIL expects:
rgbarray = np.array(..., np.uint8)
You can check yours with:
print(rgbarray.dtype)
Also, the last 2 parameters of your Image.save() are superfluous.
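For illustration, here is a minimal corrected sketch of the function (assuming rgb is a nested list of RGB triplets; the np.asarray call and the trimmed save() call are my changes):
import numpy as np
import PIL.Image

def rgb_to_png(rgb, path):
    # build the array as unsigned 8-bit, which PIL expects for "RGB" mode
    rgbarray = np.asarray(rgb, dtype=np.uint8)
    rgbimage = PIL.Image.fromarray(rgbarray, "RGB")
    # PNG is lossless, so the format and quality arguments are unnecessary
    rgbimage.save(path + "/sample.png")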

Related

I can't generate a word cloud with some images

I just started with the wordcloud module in Python 3.7, and I'm using the code below to generate word clouds from a dictionary. I'm trying to use different masks, and it works for some images: in two cases it works with images of 831x816 and 1000x808. Does this have to do with the size of the image? Or is it because the images are somewhat blurry? Or what is it?
Here is my code:
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from wordcloud import WordCloud
our_mask = np.array(Image.open('twitter.png'))
twitter_cloud = WordCloud(background_color = 'white', mask = our_mask)
twitter_cloud.generate_from_frequencies(frequencies)
twitter_cloud.to_file("twitter_cloud.jpg")
plt.imshow(twitter_cloud)
plt.axis('off')
plt.show()
How can I fix this?
I had a similar problem with a black-and-white image. What fixed it for me was cropping the image more closely around the black drawing, so there was no unnecessary bulk of white area at the edges.
Some images need to be adjusted before they can be used. Note that only pure white values (255) in the mask are masked out (all other values are masked in). The problem is that some images are not suitable for masking because the values in their color arrays don't follow this convention. To solve this, do the following:
1. Create the mask object (please try this with your own image, as I couldn't upload one):
import numpy as np
from PIL import Image
from wordcloud import WordCloud
mask = np.array(Image.open("filepath/picture.png"))
print(mask)
If the array value for white is 255, it is fine. But if it is 0 (or some other value), you have to change it to 255.
2. If the values are different, change them with the following code:
2-1. Create a function for the transformation (here our value is 0):
def transform_zeros(val):
    if val == 0:
        return 255
    else:
        return val
2-2. Create an np.array of the same shape:
maskable_image = np.ndarray((mask.shape[0],mask.shape[1]), np.int32)
2-3. Transformation:
for i in range(len(mask)):
    maskable_image[i] = list(map(transform_zeros, mask[i]))
3. Check the result:
print(maskable_image)
Then you can use this array for your mask.
mask = maskable_image
All of this is copied and interpreted from this link, so check it if you find my explanation unclear; I've just provided the solution and don't understand that much about image color arrays and their transformations.
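As a side note, the explicit loop above can be replaced with a single vectorized NumPy operation; a minimal sketch, assuming the unwanted mask value is 0 as above:
import numpy as np

maskable_image = mask.copy()
# replace every 0 with 255 in one step
maskable_image[maskable_image == 0] = 255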

'numpy.ndarray' object has no attribute 'write'

I am writing Python code to calculate the background of an astronomical image of the globular cluster M15 (M15 reduced). My code can calculate the background and plot it using plt.imshow(). To save the background-subtracted image I have to convert it from a numpy.ndarray to a str. I have tried many things, including the np.array2string used here. The file just stays as an array, which can't be saved, as I need it saved as a .fits file. Any ideas how to get this to a str?
The code:
import numpy as np
import ccdproc as ccdp
from pathlib import Path
from astropy.nddata import CCDData
from astropy.stats import SigmaClip
from photutils.background import Background2D, MedianBackground

#sigma clip is the number of standard deviations from the centre value that a value can be before being rejected
sigma_clip = SigmaClip(sigma=2.)
#used to estimate the background in each of the meshes
bkg_estimator = MedianBackground()
#define path for reading in images
M15red_path = Path('.', 'ObservingData/M15normalised/')
M15red_images = ccdp.ImageFileCollection(M15red_path)
M15reduced = M15red_images.files_filtered(imagetyp='Light Frame', include_path=True)
M15backsub_path = Path('.', 'ObservingData/M15backsub/')
for n in range(0, 59):
    bkg = Background2D(CCDData.read(M15reduced[n]).data, box_size=(20, 20),
                       filter_size=(3, 3),
                       edge_method='pad',
                       sigma_clip=sigma_clip,
                       bkg_estimator=bkg_estimator)
    M15subback = CCDData.read(M15reduced[n]).data - bkg.background
    np.array2string(M15subback)
    #M15subback.write(M15backsub_path / 'M15backsub{}.fits'.format(n))
    print(type(M15subback[1]))
You could try using numpy.save (though it saves a '.npy' file). In your case:
import numpy as np
...
for n in range(0, 59):
    ...
    np.save('M15backsub{}.npy'.format(n), M15subback)
Since you need to store a numpy array, this should work.
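If you specifically need FITS output rather than '.npy', a minimal sketch using astropy.io.fits to write the array directly (the filename pattern is borrowed from your commented-out line; the overwrite flag is my addition):
from astropy.io import fits

for n in range(0, 59):
    ...
    # write the background-subtracted array straight to a FITS file
    fits.writeto(str(M15backsub_path / 'M15backsub{}.fits'.format(n)),
                 M15subback, overwrite=True)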

How to normalize a uint16 image and convert it to uint8 when the max pixel value is less than 65535?

I need to convert an image from uint16 to uint8 in order to save it to disk, when the max pixel value of the uint16 image is not 65535 but less than that (2970, in fact). I have noticed that scikit-image has the method img_as_ubyte for such a conversion. It seems this method converts 65535 into 255, and all other values in proportion to that. The problem is that the image has a maximum value of 2970, which is converted to 12, so a lot of resolution is lost. I am also considering saving the image as a numpy array.
I tried using the rescale function proposed here and also the cv2.normalize function. However, I noticed that cv2.normalize creates an image of dtype=uint16.
Also, I checked against mat2gray from MATLAB, and cv2.normalize was more similar to mat2gray than the plain-Python method with the rescale function.
Using plain Python:
import numpy as np
orig_min = mammogram_dicom.min()
orig_max = mammogram_dicom.max()
target_min = 0.0
target_max = 255.0
mammogram_scaled = (mammogram_dicom - orig_min) * ((target_max - target_min) / (orig_max - orig_min)) + target_min
mammogram_uint8_by_function = mammogram_scaled.astype(np.uint8)
It feels strange to use np.uint8; I would rather not use it, but it is the only way I found to get to uint8.
For cv2.normalize I also had to use np.uint8 to get uint8:
mammogram_uint8_by_cv2 = np.zeros(mammogram_dicom.shape).astype(np.uint8)
mammogram_uint8_by_cv2 = cv2.normalize(mammogram_dicom, None, 0, 255, cv2.NORM_MINMAX)
Is there a better way to convert this image from uint16 to uint8?
I am expecting similar or better behavior than mat2gray in MATLAB. I made a comparison between the same image from MATLAB and one calculated with the above code; cv2.normalize is the most similar. The method with the rescale function (which I called plain Python) looks similar to the naked eye, but taking the difference:
mat2gray_from_matlab_image - plain_python_image
shows differences of 1 in some pixels.
Is there a way to normalize the image inside scikit-image?
1) OpenCV solution
OpenCV's normalize returns an image of the same type as the source if dtype is not specified. To normalize a uint16 image to uint8 without numpy, use:
mammogram_uint8_by_cv2 = cv2.normalize(mammogram_dicom, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
2) Skimage solution
First rescale the image to the full range, then convert it to uint8 using img_as_ubyte:
from skimage import exposure, img_as_ubyte
mammogram_uint8_by_ski = img_as_ubyte(exposure.rescale_intensity(mammogram_dicom))
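As a quick sanity check (a sketch, assuming the variables from the two snippets above), either approach should yield a uint8 image spanning the full 0-255 range:
import numpy as np

for result in (mammogram_uint8_by_cv2, mammogram_uint8_by_ski):
    assert result.dtype == np.uint8
    # min-max rescaling should map the extremes to 0 and 255
    print(result.min(), result.max())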

How do I convert an image read with cv2.imread('img.png',cv2.IMREAD_UNCHANGED) to the format of cv2.imread('img.png',cv2.IMREAD_COLOR)?

I'm trying to read an image in the unchanged format, do some operations, and convert it back to the colored format:
im = cv2.imread(fname,cv2.IMREAD_UNCHANGED) # shape(240,240,4)
....
im2 = cv2.imread(im,cv2.IMREAD_COLOR) # required shape(240,240,3)
But it looks like I can't feed the result of the first imread (a numpy array) into the second imread.
So currently I've created a temporary image after the operations and read that back to get the required im2 value:
im = cv2.imread(fname,cv2.IMREAD_UNCHANGED) # shape(240,240,4)
....
cv2.imwrite('img.png',im)
im2 = cv2.imread('img.png',cv2.IMREAD_COLOR) # required shape(240,240,3)
However, I would like to avoid the step of creating a temporary image. How would I achieve the same result with a better approach?
OpenCV has a function for color conversion, cvtColor:
https://docs.opencv.org/3.1.0/de/d25/imgproc_color_conversions.html
im2 = cv2.cvtColor(im, <conversion code>)
You should figure out the conversion code yourself, based on the image format you have. Most likely it would be cv2.COLOR_BGRA2BGR.
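A minimal sketch of the full flow under that assumption (fname is carried over from your snippet; BGRA-to-BGR matches a 4-channel PNG read with IMREAD_UNCHANGED):
import cv2

im = cv2.imread(fname, cv2.IMREAD_UNCHANGED)  # e.g. shape (240, 240, 4)
# ... your operations on im ...
im2 = cv2.cvtColor(im, cv2.COLOR_BGRA2BGR)    # drops the alpha channel
print(im2.shape)  # (240, 240, 3)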

concatenating images in numpy array python

I'm working with Python and NumPy to take several images of the same pixel dimensions and create a 2D array, so that each row of the array represents one image and each column represents the pixel at a certain location.
To achieve this, I have read in the image files and tried to use numpy.concatenate. The code is:
import io
import urllib.request
import urllib.error
import numpy as np
from PIL import Image

#url of picture data
X_p = data.link
#list for storing the picture data
X = []
#read in the image from the url, and skip posters with a 404 error
for url in X_p:
    try:
        loadimg = urllib.request.urlopen(url)
        image_file = io.BytesIO(loadimg.read())
        img = Image.open(image_file)
        #concatenate to linearize
        X.append(np.concatenate(np.array(img)))
    #404 error
    except urllib.error.HTTPError as err:
        if err.code == 404:
            continue
        else:
            raise
#cast the list into a numpy array
X = np.array(X)
#test to see if X has the correct dimensions
print(X.shape)
I ran this code and the shape of X comes out in this format every single time:
(number of images, height × width, 3)
For instance, if I load 12 image URLs of 200x200 pixels, the outcome is:
(12, 40000, 3)
What I need is to get rid of the 3 at the end, and it's difficult when I don't even understand where the 3 comes from.
I assume the problem is that I'm appending or concatenating in the wrong place. When I removed the np.concatenate, it simply showed (12, 200, 200, 3).
I've searched online for numpy image processing and concatenation, but I did not run across anything that would explain and fix what's happening.
Any and all help is appreciated. Thank you in advance for taking the time to read this post and answer.
I figured out the problem. I was curious about the dimensions of my array, so I searched SO for questions about incrementing or decrementing one dimension, and I ran across a post that explained what the 3 stood for: the RGB color channels.
How can I save 3D array results to a 4D array in Python/numpy?
A plain Image.open().convert("L") did not work for me, so I had to use a trick:
with Image.open(image_file).convert("L") as img
I added this line inside the for loop, and the dimension problem was fixed.
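For reference, a minimal sketch of how that conversion slots into the loop above; the "L" (grayscale) mode drops the RGB channel axis, so each image flattens to a 1D row of height × width pixels:
with Image.open(image_file).convert("L") as img:
    # grayscale array has shape (height, width) -- no trailing 3
    X.append(np.concatenate(np.array(img)))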
