How do I convert an image read with cv2.imread('img.png', cv2.IMREAD_UNCHANGED) to the format of cv2.imread('img.png', cv2.IMREAD_COLOR)?

I'm trying to read an image in the unchanged format, do some operations, and convert it back to the color format:
im = cv2.imread(fname,cv2.IMREAD_UNCHANGED) # shape(240,240,4)
....
im2 = cv2.imread(im,cv2.IMREAD_COLOR) # required shape(240,240,3)
But it looks like I can't pass the NumPy array produced by the first imread into a second imread.
So currently I create a temporary image after the operations and read that file back to get the required im2 value:
im = cv2.imread(fname,cv2.IMREAD_UNCHANGED) # shape(240,240,4)
....
cv2.imwrite('img.png',im)
im2 = cv2.imread('img.png',cv2.IMREAD_COLOR) # required shape(240,240,3)
However, I would like to avoid the step of creating a temporary image. How would I achieve the same result with a better approach?

OpenCV has a function for color conversion, cvtColor:
https://docs.opencv.org/3.1.0/de/d25/imgproc_color_conversions.html
im2 = cv2.cvtColor(im, <conversion code>)
You should work out the conversion code yourself, based on the image format you have. In this case it would probably be cv2.COLOR_BGRA2BGR.
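For example, a minimal sketch assuming the unchanged read returned a 4-channel BGRA array (the file name and shapes follow the question):
import cv2

im = cv2.imread('img.png', cv2.IMREAD_UNCHANGED)    # e.g. shape (240, 240, 4)
# ... your operations on im ...
im2 = cv2.cvtColor(im, cv2.COLOR_BGRA2BGR)          # drop the alpha channel -> (240, 240, 3)
print(im2.shape)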

Related

Saving to .png from an array gives glitchy results

Not a big question. This is the code:
import numpy as np
import PIL.Image

def rgb_to_png(rgb, path):
    rgbarray = np.array([np.array([x for x in i]) for i in rgb])
    rgbimage = PIL.Image.fromarray(rgbarray, "RGB")
    rgbimage.save(path+"\sample.png", "PNG", quality=100)
This is what it's supposed to look like:
Image1
And this is what the code outputs:
Output
It seems that it's saving each rgb value as a single pixel or something similar, but I don't get what's wrong in the code. What is happening in there?
(Note: Both pictures resolution is 128 by 128)
You need to specify the type of your Numpy arrays as you create them, else they will not be unsigned 8-bit like PIL expects:
rgbarray = np.array(..., np.uint8)
You can check yours with:
print(rgbarray.dtype)
Also, the last 2 parameters of your Image.save() are superfluous.
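A minimal sketch of the corrected function, assuming rgb is a nested 128x128 list of RGB triples (the names rgb and path follow the question; the forward-slash separator is a small change for portability):
import numpy as np
import PIL.Image

def rgb_to_png(rgb, path):
    # build the array explicitly as unsigned 8-bit, which PIL expects for "RGB"
    rgbarray = np.array(rgb, np.uint8)              # shape (128, 128, 3)
    print(rgbarray.dtype)                           # sanity check: uint8
    rgbimage = PIL.Image.fromarray(rgbarray, "RGB")
    rgbimage.save(path + "/sample.png")             # "PNG" and quality are not needed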

How to change only the array for a DICOM file with SimpleITK in Python

I have a bunch of medical images in DICOM format that I want to correct for bias field inhomogeneity using SimpleITK in Python. The workflow is straightforward: I want to (1) open the DICOM image, (2) create a binary mask of the object in the image, (3) apply N4 bias field correction to the masked image, and (4) write back the corrected image in DICOM format. Note that no spatial transformation is applied to the image, only an intensity transformation, so I should be able to copy all spatial information and all metadata (except for date/hour of creation and instance number) from the original to the corrected image.
I have written this function to achieve my goal:
import time
from pathlib import PurePath
from random import randint
import SimpleITK as sitk

def n4_dcm_correction(dcm_in_file):
    metadata_to_set = ["0008|0012", "0008|0013", "0020|0013"]
    filepath = PurePath(dcm_in_file)
    root_dir = str(filepath.parent)
    file_name = filepath.stem
    dcm_reader = sitk.ImageFileReader()
    dcm_reader.SetFileName(dcm_in_file)
    dcm_reader.LoadPrivateTagsOn()
    inputImage = dcm_reader.Execute()
    metadata_to_copy = [k for k in inputImage.GetMetaDataKeys() if k not in metadata_to_set]
    maskImage = sitk.OtsuThreshold(inputImage, 0, 1, 200)
    filledImage = sitk.BinaryFillhole(maskImage)
    floatImage = sitk.Cast(inputImage, sitk.sitkFloat32)
    corrector = sitk.N4BiasFieldCorrectionImageFilter()
    output = corrector.Execute(floatImage, filledImage)
    output.CopyInformation(inputImage)
    for k in metadata_to_copy:
        print("key is: {}; value is {}".format(k, inputImage.GetMetaData(k)))
        output.SetMetaData(k, inputImage.GetMetaData(k))
    output.SetMetaData("0008|0012", time.strftime("%Y%m%d"))
    output.SetMetaData("0008|0013", time.strftime("%H%M%S"))
    output.SetMetaData("0008|0013", str(float(inputImage.GetMetaData("0008|0013")) + randint(1, 999)))
    out_file = "{}/{}_biascorrected.dcm".format(root_dir, file_name)
    writer = sitk.ImageFileWriter()
    writer.KeepOriginalImageUIDOn()
    writer.SetFileName(out_file)
    writer.Execute(sitk.Cast(output, sitk.sitkUInt16))
    return

n4_dcm_correction("/path/to/my/dcm/image.dcm")
n4_dcm_correction("/path/to/my/dcm/image.dcm")
While the bias correction part works (the bias is removed), the writing part is a mess. I would expect my output DICOM to have the exact same metadata as the original one; however, it is all missing, notably the patient name, the protocol name and the manufacturer. Similarly, something is very wrong with the spatial information: if I try to convert the DICOM to the NIfTI format with dcm2niix, the directions are reversed: superior is down and inferior is up, forward is back and backward is front. What step am I missing?
I suspect you are working with an MRI series, not a single file. This example likely does what you want: it reads, modifies, and writes back a volume stored as a set of files.
If the example does not resolve your issue, please post to the ITK Discourse, which is the primary venue for ITK/SimpleITK discussions.
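A rough sketch of the series-based read-and-correct path, assuming all slices of one series live in a single directory (the directory path is a placeholder; per-slice metadata copying and writing then follow the same pattern as in the question):
import SimpleITK as sitk

series_dir = "/path/to/dicom/series"               # placeholder directory holding the slices
reader = sitk.ImageSeriesReader()
file_names = reader.GetGDCMSeriesFileNames(series_dir)
reader.SetFileNames(file_names)
reader.MetaDataDictionaryArrayUpdateOn()           # keep per-slice metadata available
reader.LoadPrivateTagsOn()
volume = reader.Execute()                          # one 3D volume instead of single slices

# N4 correction on the whole volume, as in the question
mask = sitk.BinaryFillhole(sitk.OtsuThreshold(volume, 0, 1, 200))
corrected = sitk.N4BiasFieldCorrectionImageFilter().Execute(
    sitk.Cast(volume, sitk.sitkFloat32), mask)
corrected.CopyInformation(volume)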

'numpy.ndarray' object has no attribute 'write'

I am writing Python code to calculate the background of an astronomical image of the globular cluster M15 (M15 reduced). My code can calculate the background and plot it using plt.imshow(). To save the background-subtracted image I have to convert it from a numpy.ndarray to a str. I have tried many things, including the np.array2string used here. The data just stays as an array, which can't be saved, and I need to save it as a .fits file. Any ideas how to get this to a str?
The code:
from pathlib import Path
import numpy as np
import ccdproc as ccdp
from astropy.nddata import CCDData
from astropy.stats import SigmaClip
from photutils.background import Background2D, MedianBackground

#sigma clip is the number of standard deviations from centre value that value can be before being rejected
sigma_clip = SigmaClip(sigma=2.)
#used to estimate the background in each of the meshes
bkg_estimator = MedianBackground()
#define path for reading in images
M15red_path = Path('.', 'ObservingData/M15normalised/')
M15red_images = ccdp.ImageFileCollection(M15red_path)
M15reduced = M15red_images.files_filtered(imagetyp='Light Frame', include_path=True)
M15backsub_path = Path('.', 'ObservingData/M15backsub/')
for n in range(0, 59):
    bkg = Background2D(CCDData.read(M15reduced[n]).data, box_size=(20, 20),
                       filter_size=(3, 3),
                       edge_method='pad',
                       sigma_clip=sigma_clip,
                       bkg_estimator=bkg_estimator)
    M15subback = CCDData.read(M15reduced[n]).data - bkg.background
    np.array2string(M15subback)
    #M15subback.write(M15backsub_path / 'M15backsub{}.fits'.format(n))
    print(type(M15subback[1]))
You could try using numpy.save (but it saves a '.npy' file). In your case:
import numpy as np
...
for n in range(0, 59):
    ...
    np.save('M15backsub{}.npy'.format(n), M15subback)
Since you need to store a numpy array, this should work.
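For completeness, a minimal round trip with numpy.save and numpy.load (the array here is a random stand-in for the background-subtracted frame):
import numpy as np

arr = np.random.rand(64, 64)                 # stand-in for M15subback
np.save('M15backsub0.npy', arr)              # writes a binary .npy file
restored = np.load('M15backsub0.npy')
assert (arr == restored).all()               # the array survives the round trip exactly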

How to normalize a uint16 image and convert it to uint8 when the max pixel value is less than 65535?

I need to convert an image from uint16 to uint8 in order to save it to disk, when the max pixel value of the uint16 image is not 65535 but less than that (2970, in fact). I have noticed that scikit-image has the method img_as_ubyte for such a conversion. It seems this method maps 65535 to 255 and scales all other values in proportion. The problem is that an image with a maximum value of 2000 is converted to about 12, and a lot of resolution is lost. I am also considering saving the image as a NumPy array.
I tried using the rescale function proposed here and also the cv2.normalize function. However, I noticed that the cv2.normalize function creates an image of dtype=uint16.
Also, I checked against mat2gray from MATLAB, and cv2.normalize was closer to mat2gray than the plain-Python rescaling method below.
Using plain Python:
orig_min = mammogram_dicom.min()
orig_max = mammogram_dicom.max()
target_min = 0.0
target_max = 255.0
mammogram_scaled = (mammogram_dicom - orig_min) * ((target_max - target_min) / (orig_max - orig_min)) + target_min
mammogram_uint8_by_function = mammogram_scaled.astype(np.uint8)
It feels strange to use np.uint8 explicitly; I would rather not, but it is the only way I found to get to uint8.
For cv2.normalize I also had to use np.uint8 to get uint8:
mammogram_uint8_by_cv2 = np.zeros(mammogram_dicom.shape).astype(np.uint8)
mammogram_uint8_by_cv2 = cv2.normalize(mammogram_dicom, None, 0, 255, cv2.NORM_MINMAX)
Is there a better way to convert this image from uint16 to uint8?
I am expecting behavior similar to or better than mat2gray in MATLAB. I compared the same image processed in MATLAB and with the code above; cv2.normalize is the most similar. The method with the rescale function (which I called plain Python) looks similar to the naked eye, but taking the difference:
mat2gray_from_matlab_image - plain_python_image
shows differences of 1 in some pixel values.
Is there a way to normalize the image inside scikit-image?
1) OpenCV solution
OpenCV's normalize returns an image of the same type as the source if dtype is not specified. To normalize a uint16 image to uint8 without NumPy, use:
mammogram_uint8_by_cv2 = cv2.normalize(mammogram_dicom, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
2) Skimage solution
First rescale the image to the full range, then convert it to uint8 using img_as_ubyte:
from skimage import exposure, img_as_ubyte
mammogram_uint8_by_ski = img_as_ubyte(exposure.rescale_intensity(mammogram_dicom))
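A quick sanity check of both approaches, using a synthetic uint16 array as a stand-in for mammogram_dicom:
import cv2
import numpy as np
from skimage import exposure, img_as_ubyte

# synthetic stand-in: uint16 values well below 65535, like the mammogram
mammogram_dicom = np.random.randint(0, 2970, size=(128, 128), dtype=np.uint16)

by_cv2 = cv2.normalize(mammogram_dicom, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
by_ski = img_as_ubyte(exposure.rescale_intensity(mammogram_dicom))

print(by_cv2.dtype, by_cv2.min(), by_cv2.max())   # uint8 0 255
print(by_ski.dtype, by_ski.min(), by_ski.max())   # uint8 0 255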

Creating a greyscale image from a matrix in Python

I'm Marius, a first-year maths student.
We have received a team assignment where we have to implement a Fourier transform, and we chose to try to encode the transformation of an image into a JPEG image.
To simplify the problem for ourselves, we chose to do it only for greyscale pictures.
This is my code so far:
from PIL import Image
import numpy as np
import sympy as sp
#
# ALL INFORMATION, NO CALCULATIONS
img = Image.open('mario.png')
img = img.convert('L') # convert to monochrome picture
img.show() #opens the picture
pixels = list(img.getdata())
print(pixels) #to see if we got the pixel numeric values correct
grootte = list(img.size)
print(len(pixels)) #to check if the amount of pixels is correct.
kolommen, rijen = img.size
print("het aantal kolommen is",kolommen,"het aantal rijen is",rijen)
# information ends here
pixelMatrix = []
while pixels != []:
    pixelMatrix.append(pixels[:kolommen])
    pixels = pixels[kolommen:]
print(pixelMatrix)
pixelMatrix = np.array(pixelMatrix)
print(pixelMatrix.shape)
Now the problem shows itself in the last 3 lines. I want to convert the matrix of values back into an Image, with the matrix 'pixelMatrix' as its data.
I've tried many things, but this seems to be the most obvious way:
im2 = Image.new('L',(kolommen,rijen))
im2.putdata(pixels)
im2.show()
When I use this, it just gives me a black image of the correct dimensions.
Any ideas on how to get back the original picture, starting from the values in my matrix pixelMatrix?
Post Scriptum: We still have to implement the transformation itself, but that would be useless unless we are sure we can convert a matrix back into a greyscale image.
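A minimal sketch of one way back from the matrix to a picture, using Image.fromarray instead of putdata (the random matrix here is a stand-in for pixelMatrix, which must be unsigned 8-bit for mode 'L'):
import numpy as np
from PIL import Image

# stand-in for pixelMatrix: a (rijen, kolommen) grid of greyscale values
pixelMatrix = np.random.randint(0, 256, size=(128, 128))

im2 = Image.fromarray(pixelMatrix.astype(np.uint8), 'L')   # cast to uint8 before building the image
im2.show()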
