Does PIL work with skimage? - python-3.x

Why is it that when I do this:
from skimage import feature, io
from PIL import Image
edges = feature.canny(blimage)
io.imshow(edges)
io.show()
I get exactly what I want: the full edges-only image. But when I do this:
edges = feature.canny(blimage)
edges = Image.fromarray(edges)
edges.show()
I get a whole mess of random dots, lines and other things that more closely resemble a Jackson Pollock painting than the image. What's wrong, and how do I fix it so that I can get what I want through both methods?
For the full code, visit my GitHub here:
https://github.com/Speedyflames/Image-Functions/blob/master/Image_Processing.py

Let's see what kind of image is produced by skimage.feature.canny:
edges = feature.canny(blimage)
>>> print(edges.dtype)
bool
It's a boolean image: True for white, False for black. PIL correctly identifies the datatype and tries to map it to its own 1-bit mode, which is "1" (see the PIL docs about modes).
This conversion looks broken, though; PIL doesn't seem to get the byte width right, or something like that.
There was an issue about it; the PIL-to-NumPy conversion seems to have been fixed, but apparently the converse is still broken.
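A quick check confirms this (a sketch, assuming edges is the boolean array from above and PIL's Image is imported as in the question):
>>> bad = Image.fromarray(edges)   # PIL maps the bool array to its 1-bit mode
>>> print(bad.mode)
1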
Anyway, long story short, your best bet for successfully converting a binary image from NumPy to PIL is to convert it to grayscale first:
import numpy as np
edges_pil = Image.fromarray((edges * 255).astype(np.uint8))
>>> print(edges_pil.mode)
L
If you actually need a 1-bit image, you can convert it afterwards with
edges_pil = edges_pil.convert('1')
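Putting both steps together, a minimal end-to-end sketch (assuming blimage is the 2-D greyscale array from the question):
import numpy as np
from PIL import Image
from skimage import feature

edges = feature.canny(blimage)                                # boolean edge map
edges_pil = Image.fromarray((edges * 255).astype(np.uint8))   # 8-bit greyscale, mode "L"
edges_pil.show()                                              # now matches io.imshow(edges)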

Related

Crop TIFF using JPG mask

I'm currently working on cloud removal from satellite data (I'm pretty new).
This is the image I'm working on (TIFF)
And this is the mask, where black pixels represent clouds (JPG)
I'm trying to remove the clouds from the TIFF, using the mask to identify the position of the clouds, together with a cloudless image of the same area from a different period, like this:
May I kindly ask how I can achieve that? A Python solution, with libraries like Rasterio or skimage, would be particularly appreciated.
Thanks in advance.
You can read the images with rasterio, PIL, OpenCV or tifffile; here I use OpenCV:
import cv2
import numpy as np
# Load the 3 images
cloudy = cv2.imread('cloudy.png')
mask = cv2.imread('mask.jpg')
clear = cv2.imread('clear.png')
Then just use Numpy where() to choose whether you want the clear or cloudy image at each location according to the mask:
res = np.where(mask<128, clear, cloudy)
Note that if your mask was a single channel PNG rather than JPEG, or if it was read as greyscale like this:
mask = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)
you would have to make it broadcastable to the 3 channels of the other two arrays by adding a new axis like this:
res = np.where(mask[...,np.newaxis]<128, clear, cloudy)
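Putting it all together, a minimal sketch (the filenames here are assumptions, not from the question):
import cv2
import numpy as np

cloudy = cv2.imread('cloudy.png')
clear = cv2.imread('clear.png')
mask = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)

# Broadcast the single-channel mask across the 3 colour channels, then save the composite
res = np.where(mask[..., np.newaxis] < 128, clear, cloudy)
cv2.imwrite('result.png', res)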

Problems Converting Numpy/OpenCV Array Image into a Wand Image

I'm currently trying to perform a Polar to Cartesian Coordinate Image transformation, to display a raw sonar image into a 'fan-display'.
Initially I have a NumPy array image of type np.float64, which can be seen below:
After doing some searching, I came across this StackOverflow post Inverse transform an image from Polar to Cartesian in OpenCV with a very similar problem, in which the poster seemed to have solved his/her issue by using the Python Wand library (http://docs.wand-py.org/en/0.5.9/index.html), specifically using their set of Distortion functions.
However, when I tried to read the image in with Wand, I instead ended up with the image below, which seems smaller than the original. The weird thing is that img.size still reports the same size as the original image's shape.
The code for this transformation can be seen below:
print(raw_img.shape)  #=> (369, 256)
wand_img = Image.from_array(raw_img.astype(np.uint8), channel_map="I")
display(wand_img)
print("Current image size", wand_img.size)  #=> "Current image size (369, 256)"
This is definitely quite problematic, as Wand will automatically produce the wrong 'fan image'. Is anybody familiar with this kind of problem with the Wand library, and if so, what is the recommended way to fix it?
If this issue isn't resolved soon, I have a backup plan of using OpenCV's cv::remap function (https://docs.opencv.org/4.1.2/da/d54/group__imgproc__transform.html#ga5bb5a1fea74ea38e1a5445ca803ff121). The problem with this is that I'm not sure which mapping arrays (i.e. map_x and map_y) to use to perform the polar-to-Cartesian transformation, as a mapping matrix that implements the transformation equations below:
r = polar_distances(raw_img)
x = r * cos(theta)
y = r * sin(theta)
didn't seem to work and instead threw errors from OpenCV as well.
Any kind of help and insight into this issue is greatly appreciated. Thank you!
- NickS
EDIT: I've tried another example image as well, and it still shows a similar problem. First, I imported the image into Python using OpenCV with these lines of code:
import matplotlib.pyplot as plt
from wand.image import Image
from wand.display import display
import cv2
img = cv2.imread("Test_Img.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure()
plt.imshow(img_rgb)
plt.show()
which showed the following display as a result:
However, as I continued and tried to open the img_rgb object with Wand, using the code below:
wand_img = Image.from_array(img_rgb)
display(img_rgb)
I'm getting the following result instead.
I tried opening the file directly with wand.image.Image(), which displays the image correctly with the display() function, so I believe there isn't anything wrong with the Wand installation on the system.
Is there a step I'm missing to convert the NumPy array into a Wand Image? If so, what would it be, and what is the suggested way to do it?
Please keep in mind that the conversion from NumPy to Wand Image is crucial: the raw sonar images are stored as binary data, so NumPy is required to turn them into proper images.
Is there a step I'm missing to convert the NumPy array into a Wand Image?
No, but there is a bug in Wand's NumPy handling in Wand 0.5.x. The shape of OpenCV's ndarray is (ROWS, COLUMNS, CHANNELS), but Wand expects (WIDTH, HEIGHT, CHANNELS). I believe this has been fixed for the future 0.6.x releases.
If so, what would it be, and what is the suggested way to do it?
Swap the values in img_rgb.shape before passing to Wand.
img_rgb.shape = (img_rgb.shape[1], img_rgb.shape[0], img_rgb.shape[2])
with Image.from_array(img_rgb) as img:
    display(img)
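For reference, a hedged sketch of the workaround applied to the question's own code (assuming Wand 0.5.x, and that img_rgb is contiguous so its shape can be reassigned in place):
import cv2
from wand.image import Image
from wand.display import display

img = cv2.imread("Test_Img.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Present the array as (WIDTH, HEIGHT, CHANNELS) so Wand 0.5.x interprets it correctly
img_rgb.shape = (img_rgb.shape[1], img_rgb.shape[0], img_rgb.shape[2])

with Image.from_array(img_rgb) as wand_img:
    display(wand_img)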

Photutils DAOPhot Not Fitting stars well?

I recently ran across the PhotUtils package and am trying to use it to perform PSF Photometry on some images I have. However, when I try to run the code, I get very strange results. When I plot the image generated by get_residual_image(), the stars are not removed well. Some sample images are shown below.
The first image has sigma set to 2.05, as it is in one of the sample programs in the PhotUtils documentation:
However, the stars only appear to be removed at their centers.
The second image has sigma set to 5.0. This one is especially strange: some stars are way over-removed, some are under-removed, some black squares are added to the image, etc.
Here is my code:
import photutils
from photutils.psf import DAOPhotPSFPhotometry as DAOP
from photutils.psf import IntegratedGaussianPRF as PRF
from photutils.background import MMMBackground
bkg = MMMBackground()
background = 2.5*bkg(img)
gaussian_prf = PRF(sigma=5.0)
gaussian_prf.sigma.fixed = False
photTester = DAOP(8,background,5,gaussian_prf,31)
photResults = photTester(imgStars)
finalImg = photTester.get_residual_image()
After this, I simply plot the original and final image in MatPlotLib. I use a greyscale colormap. The reason that the left images appear slightly darker is that they use a different color scaling.
Perhaps I have set one of the parameters incorrectly?
Could someone help me out with this? Thank you!
Looking at the residual image instantly told me that the background subtraction might be wrong. I could reproduce the result and wondered whether MMMBackground did not do the job correctly.
After taking a closer look at the documentation, "Getting started with Photutils" finally gave the essential hint:
image -= np.median(image)
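Applied to the question's code, a hedged sketch might look like this (the threshold value of 5.0 is an assumption, not from the answer):
import numpy as np
from photutils.psf import DAOPhotPSFPhotometry as DAOP
from photutils.psf import IntegratedGaussianPRF as PRF

imgStars = imgStars - np.median(imgStars)   # subtract the median background first

gaussian_prf = PRF(sigma=5.0)
gaussian_prf.sigma.fixed = False
# Positional args as in the question: crit_separation, threshold, fwhm, psf_model, fitshape
photTester = DAOP(8, 5.0, 5, gaussian_prf, 31)
photResults = photTester(imgStars)
finalImg = photTester.get_residual_image()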

Grayscale .png to numpy array

Indeed, this question has been answered many times. However, as I am not allowed to add a comment to an answer due to "too low" reputation, I would like to discuss the solution presented in the most comprehensive answer.
Wouldn't the solution:
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt #Used in the comparison below
im = Image.open('file.png').convert('RGB') #Opens a picture in grayscale
pic = np.array(im)
im.close()
work properly? I am wondering whether unacceptable changes in quality occur. I have noticed some differences (e.g. black rows at the top in plt.imshow()) when I display the image:
im.show() #Before closing
plt.imshow(pic)
but I don't know whether they are only inevitable consequences of converting to np.array.
PS - If it is important, I'll mention that I'm preparing the image for color quantization (KMeans) and Floyd–Steinberg dithering.
PPS - If you could advise me on how to discuss answers directly rather than posting a duplicate question, it would be really appreciated.
Try it and see!
from PIL import Image
import numpy as np
# Other answer method
im1 = Image.open('gray.png').convert('L')
im1 = np.stack((im1,)*3, axis=-1)
# Your method
im2 = Image.open('gray.png').convert('RGB')
im2 = np.array(im2)
# Test if identical
print(np.array_equal(im1,im2))
Sample Output
True
I would say the one aspect that differs is that the method in the other answer will work (insofar as it actually makes a greyscale image where R=G=B) even if the input image is colour, whereas your method will produce a colour image.
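To illustrate that difference, a quick check with a colour input ('colour.png' is an assumed filename, not from the question):
from PIL import Image
import numpy as np

grey3 = np.stack((np.array(Image.open('colour.png').convert('L')),) * 3, axis=-1)  # forced grey, R=G=B
rgb = np.array(Image.open('colour.png').convert('RGB'))                            # keeps the colour
print(np.array_equal(grey3, rgb))   # generally False for a genuinely colourful input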
I was working on something similar and, I'm not sure why, but I ran into a bunch of issues. In the end this worked for me pretty well without any loss of data.
from PIL import Image
import numpy as np
img=np.array(Image.open(filename).convert('L'))
and to convert back:
import imageio
array = array.astype(np.uint8)
imageio.imwrite(newfilename, array)
edit: this only works for black and white images. Color images need 3-D arrays rather than 2-D arrays.
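For completeness, a hedged sketch of the colour counterpart ('photo.png' is an assumed filename):
from PIL import Image
import imageio
import numpy as np

rgb = np.array(Image.open('photo.png').convert('RGB'))   # 3-D array: (height, width, 3)
imageio.imwrite('photo_copy.png', rgb.astype(np.uint8))  # write it back out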

How to overlay images on each other in python and opencv?

I am trying to write images over each other. Ideally, what I want to do is to write every image in one folder over every image in another folder and output every unique combination to another folder. So far, I am just working on writing one image over another, but I can't seem to get that to work.
import numpy as np
import cv2
import matplotlib
def opencv_createsamples():
    mask = ('resized_pos/2')
    img = cv2.imread('neg/1')
    new_img = img * (mask.astype(img.dtype))
    cv2.imwrite('samp', new_img)

opencv_createsamples()
It would be helpful to have more information about your errors.
Something that stands out immediately is the lack of file type extensions. Your images are probably not being read correctly, to begin with. Also, image sizes would be a good thing to consider so you could resize as required.
If the goal is to blend images, considering the alpha channel is important. Here is a relevant question on Stack Overflow: How to overlay images in python
Some other OpenCV docs that have helped me in the past: https://docs.opencv.org/trunk/d0/d86/tutorial_py_image_arithmetics.html
https://docs.opencv.org/3.1.0/d5/dc4/tutorial_adding_images.html
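If blending is indeed the goal, here is a minimal sketch using cv2.addWeighted (the filenames, extensions and weights are assumptions, not from the question):
import cv2

img1 = cv2.imread('neg/1.jpg')
img2 = cv2.imread('resized_pos/2.jpg')

# Both inputs must have the same size and channel count for addWeighted
img2 = cv2.resize(img2, (img1.shape[1], img1.shape[0]))

blended = cv2.addWeighted(img1, 0.7, img2, 0.3, 0)
cv2.imwrite('samp.jpg', blended)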
Hope this helps!
