How to normalize a uint16 image and convert it to uint8 when the max pixel value is less than 65535?

I need to convert an image from uint16 to uint8 in order to save it to disk, when the max pixel value of the uint16 image is not 65535 but something smaller (2970, in fact). I have noticed that scikit-image has the method img_as_ubyte for such a conversion. It seems this method converts 65535 to 255 and scales all other values in proportion. The problem is that an image with a maximum value of 2000 gets converted to a maximum of 12, and a lot of resolution is lost. I am also considering saving the image as a numpy array.
I tried using the rescale function proposed here and also the cv2.normalize function. However, I noticed that cv2.normalize creates an image of dtype=uint16.
Also, I checked against mat2gray from MATLAB, and cv2.normalize was more similar to mat2gray than the normalization method in plain Python.
Using plain python:
import numpy as np

orig_min = mammogram_dicom.min()
orig_max = mammogram_dicom.max()
target_min = 0.0
target_max = 255.0
mammogram_scaled = (mammogram_dicom - orig_min) * ((target_max - target_min) / (orig_max - orig_min)) + target_min
mammogram_uint8_by_function = mammogram_scaled.astype(np.uint8)
It feels strange to use np.uint8; I would rather not, but it is the only way I found to get to uint8.
For cv2.normalize I also had to use np.uint8 to get uint8:
mammogram_uint8_by_cv2 = np.zeros(mammogram_dicom.shape).astype(np.uint8)
mammogram_uint8_by_cv2 = cv2.normalize(mammogram_dicom, None, 0, 255, cv2.NORM_MINMAX)
Is there a better way to convert this image from uint16 to uint8?
I am expecting behavior similar to, or better than, mat2gray in MATLAB. I compared the same image computed in MATLAB against the ones from the code above. The cv2.normalize result is the most similar. The method with the rescale function (which I called plain Python) looks similar to the naked eye, but taking the difference:
mat2gray_from_matlab_image - plain_python_image
shows differences with a value of 1 in some pixels.
Is there a way to normalize the image inside scikit-image?

1) OpenCV solution
cv2.normalize returns an image of the same type as the source if dtype is not specified. To normalize a uint16 image to uint8 without going through numpy, use:
mammogram_uint8_by_cv2 = cv2.normalize(mammogram_dicom, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
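A minimal check of that dtype behavior (a sketch; the tiny test array is illustrative):
import numpy as np
import cv2

img16 = np.array([[0, 2970]], dtype=np.uint16)

out_default = cv2.normalize(img16, None, 0, 255, cv2.NORM_MINMAX)
out_uint8 = cv2.normalize(img16, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
print(out_default.dtype, out_uint8.dtype)  # uint16 uint8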
2) Skimage solution
First rescale the image to the full range, then convert it to uint8 using img_as_ubyte:
from skimage import exposure, img_as_ubyte
mammogram_uint8_by_ski = img_as_ubyte(exposure.rescale_intensity(mammogram_dicom))
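A quick sanity check of this path (a sketch; the synthetic array stands in for the mammogram):
import numpy as np
from skimage import exposure, img_as_ubyte

img16 = np.array([[0, 1485, 2970]], dtype=np.uint16)

# rescale_intensity stretches to the full uint16 range; img_as_ubyte then maps 65535 -> 255
img8 = img_as_ubyte(exposure.rescale_intensity(img16))
print(img8.dtype, img8.min(), img8.max())  # uint8 0 255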

Related

Saving to .png from an array gives glitchy results

Not a big question.
This is the code:
import numpy as np
import PIL.Image

def rgb_to_png(rgb, path):
    rgbarray = np.array([np.array([x for x in i]) for i in rgb])
    rgbimage = PIL.Image.fromarray(rgbarray, "RGB")
    rgbimage.save(path + "\sample.png", "PNG", quality=100)
This is what it's supposed to look like: (expected image omitted)
And this is what the code outputs: (glitchy output omitted)
It seems that it's saving each rgb value as a single pixel or something similar, but I don't get what's wrong in the code. What is happening in there?
(Note: both pictures' resolution is 128 by 128.)
You need to specify the type of your Numpy arrays as you create them, else they will not be unsigned 8-bit like PIL expects:
rgbarray = np.array(..., np.uint8)
You can check yours with:
print(rgbarray.dtype)
Also, the last 2 parameters of your Image.save() are superfluous.
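Putting both fixes together, a corrected version might look like this (a sketch; swapping the string concatenation for os.path.join is an optional extra tidy-up):
import os
import numpy as np
import PIL.Image

def rgb_to_png(rgb, path):
    # Explicit uint8 dtype so PIL interprets the data as 8-bit RGB.
    rgbarray = np.array(rgb, dtype=np.uint8)
    rgbimage = PIL.Image.fromarray(rgbarray, "RGB")
    rgbimage.save(os.path.join(path, "sample.png"))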

I can't generate a word cloud with some images

I just started with the wordcloud module in Python 3.7, and I'm using the following code to generate word clouds from a dictionary. I'm trying to use different masks, but it only works for some images: in two cases it worked, with images of 831x816 and 1000x808. Does this have to do with the size of the image? Or is it because the images are kind of blurry? Or what is it?
Here is my code:
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from wordcloud import WordCloud

# frequencies: the word-frequency dictionary mentioned above
our_mask = np.array(Image.open('twitter.png'))
twitter_cloud = WordCloud(background_color='white', mask=our_mask)
twitter_cloud.generate_from_frequencies(frequencies)
twitter_cloud.to_file("twitter_cloud.jpg")
plt.imshow(twitter_cloud)
plt.axis('off')
plt.show()
How can I fix this?
I had a similar problem with a black-and-white image I used. What fixed it for me was cropping the image more closely to the black drawing, so there was no unnecessary bulk of white area at the edges.
Some images need to be adjusted before they can be used as a mask. Note that only pure-white values (255) in the mask are masked out; all other values are masked in. The problem is that some images are not suitable for masking as-is, because the values in their np.array don't match what WordCloud expects. To solve this, you can do the following:
1. Create the mask object (please try this with your own image, as I couldn't upload one):
import numpy as np
from PIL import Image
from wordcloud import WordCloud

mask = np.array(Image.open("filepath/picture.png"))
print(mask)
If the printed value for the white areas of the np.array is 255, then it is okay. But if it is 0 (or possibly some other value), we have to change it to 255.
2. If the values are different, here is the code for changing them:
2-1. Create a function for the transformation (here our offending value is 0):
def transform_zeros(val):
    if val == 0:
        return 255
    else:
        return val
2-2. Create an np.array of the same shape:
maskable_image = np.ndarray((mask.shape[0],mask.shape[1]), np.int32)
2-3. Transformation:
for i in range(len(mask)):
    maskable_image[i] = list(map(transform_zeros, mask[i]))
3. Checking:
print(maskable_image)
Then you can use this array for your mask.
mask = maskable_image
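As a side note, steps 2-1 to 2-3 can be collapsed into a single vectorized operation (a sketch, assuming the same single-channel mask as above):
import numpy as np

# Replace every 0 with 255 in one pass instead of looping row by row.
maskable_image = np.where(mask == 0, 255, mask).astype(np.int32)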
All of this is copied and interpreted from this link, so check it if you find my attempted explanation unclear; I've just provided the solution but don't understand that much about image color arrays and their transformation.

How do I convert an image read with cv2.imread('img.png', cv2.IMREAD_UNCHANGED) to the format of cv2.imread('img.png', cv2.IMREAD_COLOR)?

I'm trying to read an image in the unchanged format, do some operations, and convert it back to the color format.
im = cv2.imread(fname,cv2.IMREAD_UNCHANGED) # shape(240,240,4)
....
im2 = cv2.imread(im,cv2.IMREAD_COLOR) # required shape(240,240,3)
But it looks like I can't feed the numpy array from the first call into the second imread.
So currently I create a temporary image after the operations and read it back to get the required im2 value.
im = cv2.imread(fname,cv2.IMREAD_UNCHANGED) # shape(240,240,4)
....
cv2.imwrite('img.png',im)
im2 = cv2.imread('img.png',cv2.IMREAD_COLOR) # required shape(240,240,3)
However, I would like to avoid the step of creating a temporary image. How would I achieve the same result with a better approach?
OpenCV has a function for color conversion, cvtColor:
https://docs.opencv.org/3.1.0/de/d25/imgproc_color_conversions.html
im2 = cv2.cvtColor(im, <conversion code>)
You will have to figure out the conversion code yourself, based on the image format you have. Most likely it would be cv2.COLOR_BGRA2BGR.
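For example, assuming the unchanged image really has 4 BGRA channels (a sketch based on the shapes in the question):
import cv2

im = cv2.imread('img.png', cv2.IMREAD_UNCHANGED)  # shape (240, 240, 4)
# ... operations ...
im2 = cv2.cvtColor(im, cv2.COLOR_BGRA2BGR)        # shape (240, 240, 3), no temporary file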

misc.imread and misc.imsave changes pixel values

I am:
reading a grayscale image,
performing normalization of all pixel values and then
saving the image.
When I reopen the image I see different pixel values.
My code snippet:
from scipy import misc
import numpy as np

image = misc.imread('lena.jpg')
maximum = np.max(image)  # find the maximum pixel value of the image
img = np.divide(image, maximum)  # divide every pixel value by the maximum
# scale every pixel value between 0 and 127
img_scale = np.round(img * (np.power(2, 7) - 1)).astype(int)
misc.imsave('lena_scaled.jpg', img_scale)
img_reopen = misc.imread('lena_scaled.jpg')
When I compare img_scale and img_reopen I get different values:
By executing np.max(img_scale), I get 127.
By executing np.max(img_reopen), I get 255.
By executing img_scale[0][0], I get [82, 82, 82].
By executing img_reopen[0][0], I get [156, 156, 156].
Question
Why do the pixel values get changed after saving the image and reopening it?
The imsave function rescales the image when it saves it to disk: misc.imsave uses bytescale under the hood to rescale the image to the full range (0, 255).
That's why you get an np.max of 255 when you reopen. Please refer to the documentation here.
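To see the rescaling in isolation (a sketch; bytescale lived in scipy.misc in older SciPy releases and was removed in SciPy 1.2):
import numpy as np
from scipy.misc import bytescale

arr = np.array([[0, 127]])
print(bytescale(arr))  # [[  0 255]] -- the minimum maps to 0, the maximum to 255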
Follow-up:
To preserve your values without rescaling, you can use the misc.toimage function and save the result as follows:
im = misc.toimage(img_scale, high=np.max(img_scale), low=np.min(img_scale))
im.save('lena_scaled.jpg')
When you read 'lena_scaled.jpg' back with misc.imread, you can try the following:
misc.imread('lena_scaled.jpg', mode='I')
where mode 'I' means 32-bit signed integer pixels ('L' would be 8-bit pixels, black and white), which I believe would work for your grayscale image.
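One more caveat: JPEG compression is itself lossy, so even without the rescaling a JPEG round trip can change pixel values. A sketch combining both fixes (it assumes img_scale is a single-channel array and the deprecated scipy.misc API is available):
from scipy import misc
import numpy as np

im = misc.toimage(img_scale, high=np.max(img_scale), low=np.min(img_scale))
im.save('lena_scaled.png')  # PNG is lossless, unlike JPEG
img_reopen = misc.imread('lena_scaled.png', mode='I')
print(np.max(img_reopen))   # should now match np.max(img_scale)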
Hope this helps.

Mangled images in VTK from ITK

I am reading an image with SimpleITK, but I get these mangled results in VTK. Any help?
I am not sure where things are going wrong here.
Please see the image here.
####
CODE
import SimpleITK as sitk
import vtk

def sitk2vtk(img):
    size = list(img.GetSize())
    origin = list(img.GetOrigin())
    spacing = list(img.GetSpacing())
    sitktype = img.GetPixelID()
    vtktype = pixelmap[sitktype]  # pixelmap: maps SimpleITK pixel IDs to VTK scalar types (defined elsewhere)
    ncomp = img.GetNumberOfComponentsPerPixel()

    # there doesn't seem to be a way to specify the image orientation in VTK

    # convert the SimpleITK image to a numpy array
    i2 = sitk.GetArrayFromImage(img)
    i2_string = i2.tostring()

    # send the numpy array to VTK with a vtkImageImport object
    dataImporter = vtk.vtkImageImport()
    dataImporter.CopyImportVoidPointer(i2_string, len(i2_string))
    dataImporter.SetDataScalarType(vtktype)
    dataImporter.SetNumberOfScalarComponents(ncomp)

    # VTK expects 3-dimensional parameters
    if len(size) == 2:
        size.append(1)
    if len(origin) == 2:
        origin.append(0.0)
    if len(spacing) == 2:
        spacing.append(spacing[0])

    # set the new VTK image's parameters
    dataImporter.SetDataExtent(0, size[0] - 1, 0, size[1] - 1, 0, size[2] - 1)
    dataImporter.SetWholeExtent(0, size[0] - 1, 0, size[1] - 1, 0, size[2] - 1)
    dataImporter.SetDataOrigin(origin)
    dataImporter.SetDataSpacing(spacing)
    dataImporter.Update()

    vtk_image = dataImporter.GetOutput()
    return vtk_image
###
END CODE
You are ignoring two things:
1. There is an order change when you perform GetArrayFromImage. The order of indices and dimensions needs careful attention during conversion. Quoting the SimpleITK Notebooks at http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/01_Image_Basics.html: "ITK's Image class does not have a bracket operator. It has a GetPixel which takes an ITK Index object as an argument, which is an array ordered as (x,y,z). This is the convention that SimpleITK's Image class uses for the GetPixel method as well." In numpy, however, an array is indexed in the opposite order (z,y,x).
2. There is a change of coordinates between the ITK and VTK image representations. Historically, in computer graphics there is a tendency to align the camera in such a way that the positive Y axis points down. This results in a change of coordinates between ITK and VTK images.
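If the vertical flip is what mangles your render, one possible adjustment (a sketch, not part of the original function; it assumes a 3-D scalar image) is to flip the numpy array along its y axis before handing it to the importer, since GetArrayFromImage returns data indexed as (z, y, x):
import numpy as np
import SimpleITK as sitk

i2 = sitk.GetArrayFromImage(img)   # indexed (z, y, x)
i2 = np.flip(i2, axis=1).copy()    # flip y to match VTK's coordinate convention
i2_string = i2.tostring()          # then continue as in sitk2vtk above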
