PIL cannot write mode F to JPEG

I am taking a JPEG image and using NumPy's fft2 to create and save a new image. However, it throws this error:
"IOError: cannot write mode F as JPEG"
Is there an issue with CMYK and JPEG files in PIL?
import numpy
from PIL import Image

p = Image.open('kibera.jpg')
bw_p = p.convert('L')
array_p = numpy.asarray(bw_p)
fft_p = abs(numpy.fft.rfft2(array_p))
new_p = Image.fromarray(fft_p)
new_p.save('kibera0.jpg')
new_p.histogram()

Try converting the image to RGB:
...
new_p = Image.fromarray(fft_p)
if new_p.mode != 'RGB':
    new_p = new_p.convert('RGB')
...

Semente's answer is right for color images. For grayscale images, you can use the following:
new_p = Image.fromarray(fft_p)
new_p = new_p.convert("L")
If you use new_p = new_p.convert('RGB') for a grayscale image, the image will still have 24-bit depth instead of 8-bit, will occupy three times the space on disk, and won't be a true grayscale image.
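To see the difference concretely, here is a minimal sketch with a stand-in array (not code from the question):
from PIL import Image
import numpy as np

arr = (np.random.rand(64, 64) * 255).astype(np.float32)  # stand-in for fft_p
gray = Image.fromarray(arr).convert('L')   # one 8-bit band per pixel
rgb = gray.convert('RGB')                  # three 8-bit bands per pixel
print(gray.mode, gray.getbands())          # L ('L',)
print(rgb.mode, rgb.getbands())            # RGB ('R', 'G', 'B')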

I think the problem is that your fft_p array has a float dtype, while the image should have every pixel in the 0-255 range (uint8), so maybe you can try doing this before creating the image from the array:
fft_p = fft_p.astype(np.uint8)
new_p = Image.fromarray(fft_p)
But be aware that every element in the fft_p array should be in the 0-255 range, so you may need to do some processing first to get the desired results; for example, if every element is a float between 0 and 1, you can multiply them by 255.
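As a concrete illustration of that scaling step, here is a minimal sketch; it assumes fft_p holds non-negative float magnitudes, and the log step is just a common choice for FFT output, not something from the question:
import numpy as np
from PIL import Image

mag = np.log1p(fft_p)                  # compress the large dynamic range of FFT magnitudes
mag = mag / mag.max() * 255.0          # normalize into the 0-255 range
new_p = Image.fromarray(mag.astype(np.uint8))  # mode 'L', safe to save as JPEG
new_p.save('kibera0.jpg')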

def save_img(img, path):
    img = Image.fromarray(img)
    img.save(path)
This fails with:
    raise OSError(f"cannot write mode {mode} as PNG") from e
OSError: cannot write mode F as PNG
Here mode F means the image holds floating-point pixel values, so convert the floating-point image to uint8 before saving:
image.astype(np.uint8)
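Putting that together, a possible fix for save_img (a sketch, assuming img arrives as a NumPy array whose values may be floats outside 0-255):
import numpy as np
from PIL import Image

def save_img(img, path):
    arr = np.asarray(img)
    if arr.dtype != np.uint8:
        arr = np.clip(arr, 0, 255).astype(np.uint8)  # clamp, then drop to 8 bits
    Image.fromarray(arr).save(path)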

If you are working with PyTorch:
import torchvision.transforms as T

transform = T.ToPILImage()
imt = transform(img)
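For example, a minimal sketch assuming img is a float tensor with values in [0, 1], which is what ToPILImage expects for float input:
import torch
import torchvision.transforms as T

img = torch.rand(1, 64, 64)    # hypothetical grayscale tensor with values in [0, 1]
imt = T.ToPILImage()(img)      # yields a PIL image in mode 'L'
imt.save('out.jpg')            # 8-bit, so saving as JPEG works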

Related

How to improve edge smoothness of an image rotated using Pillow

I have this image
And I want to rotate it and keep a smooth-looking edge.
I have tried the approach below, which adds some transparent borders to the image to allow the interpolation of the rotation to sample both the transparent padding and the opaque image intensities when it renders the edge.
import numpy as np
import PIL
from PIL import Image, ImageOps

img = Image.open("sunset200x100.jpg")
im_array = np.asarray(img)
w, h = img.size
padding = 4
new_padded_size = (w + padding, h + padding)
img = img.convert('RGBA') # converting to RGBA adds transparency to the areas that aren't opaque
img = ImageOps.pad(img, size=new_padded_size)
im_array_rgba_padded = np.asarray(img)
rotated_im = img.rotate(56, expand=True, resample=PIL.Image.BICUBIC)
as_array = np.asarray(rotated_im)
#rotated_im.show()
rotated_im.save("rotated_sunset200x100_padded_with_2px.png")
However, it doesn't seem to do interpolation on the left and right sides of the image. Inspecting im_array_rgba_padded, I see that the first and last rows of pixels have been made all black, but the left and right haven't received the same zero padding.
So the result ends up looking like this:
I am wondering how I can get the padding onto the left and right as well, using the pad function, so that the left and right edges also look smooth? Or why is the padding not applied to the left and right as well?
You can change your code like this:
import numpy as np
import PIL
from PIL import Image, ImageOps

img = Image.open("sunset200x100.jpg")
im_array = np.asarray(img)
w, h = img.size
print((w, h))
padding = 4
new_padded_size = (w + padding, h + padding)
img = img.convert('RGBA') # converting to RGBA adds transparency to the areas that aren't opaque
# img = ImageOps.pad(img, size=new_padded_size)
img = ImageOps.expand(img, border=padding, fill='black')  # expand takes a border width, not a target size
im_array_rgba_padded = np.asarray(img)
rotated_im = img.rotate(56, expand=True, resample=PIL.Image.BICUBIC)
as_array = np.asarray(rotated_im)
rotated_im.show()
This works on Windows 10, Python 3.9, Pillow 8.3.
For more information see the Pillow ImageOps documentation.
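Note that ImageOps.pad preserves the aspect ratio, so with a target of (w+padding, h+padding) it only ends up padding one axis, which is why the left and right edges were left untouched, while ImageOps.expand adds a border on all four sides. If the goal is a smooth edge everywhere, a fully transparent border may work better than an opaque black one; here is a sketch of that variation (untested against the original image):
import PIL
from PIL import Image, ImageOps

img = Image.open("sunset200x100.jpg").convert('RGBA')
img = ImageOps.expand(img, border=4, fill=(0, 0, 0, 0))  # fully transparent border on all four sides
rotated_im = img.rotate(56, expand=True, resample=PIL.Image.BICUBIC)
rotated_im.save("rotated_sunset200x100_transparent_pad.png")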

OpenCV get pixels on a circle

I'm new to OpenCV and I'm trying to get the pixels of a circle from an image.
For example, I draw a circle on a random image:
import cv2
raw_img = cv2.imread('sample_picture.png')
x = 50
y = 50
rad = 20
cv2.circle(raw_img,(x,y),rad,(0,255,0),-1)
cv2.imshow('output', raw_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The output shows an image with a circle.
However, I want to be able to get all the pixels on the circle back in the form of an array. Is there any way to do this? I know I can get approximate coordinates from the circle formula, but that involves a lot of decimal calculations, and I'm pretty sure the function cv2.circle() has already calculated the pixels, so is there a way to get them out of the function itself instead of calculating them myself?
Also, if possible, I would like to get the pixels of an ellipse drawn with cv2.ellipse() back as an array of coordinates. But this time I want the pixels from only part of the ellipse (from one angle to another, which I can specify in the parameters of cv2.ellipse()).
Thank you.
You can achieve what you are looking for by using the NumPy function:
numpy.where(condition[, x, y])
Detailed explanation of the function: https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.where.html
In your case, you would want it to return the coordinates that have non-zero values. Using this method, you can draw anything on an empty array and it will return all rows and columns corresponding to non-zeros.
It will return the indices of the array that satisfy the condition you set. Below is code showing an example of its usage.
import cv2
import numpy as np
raw_img = cv2.imread('sample_picture.png')
x = 50
y = 50
rad = 20
cv2.circle(raw_img,(x,y),rad,(0,255,0),-1)
# Here is where you can obtain the coordinates you are looking for
combined = raw_img[:,:,0].astype(int) + raw_img[:,:,1].astype(int) + raw_img[:,:,2].astype(int)  # widen to avoid uint8 wrap-around
rows, cols = np.where(combined > 0)  # combined is 2-D, so np.where returns two index arrays
cv2.imshow('output', raw_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
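For the ellipse part of the question, OpenCV also exposes cv2.ellipse2Poly, which returns the integer points approximating an arc between two angles without drawing anything. A minimal sketch (the last argument, delta, is the angular step in degrees between sampled points):
import cv2

center = (50, 50)
axes = (30, 20)               # half-lengths of the ellipse axes
angle = 0                     # rotation of the ellipse
arc_start, arc_end = 45, 180  # only this angular section of the ellipse
pts = cv2.ellipse2Poly(center, axes, angle, arc_start, arc_end, 1)
print(pts.shape)              # (N, 2) array of x, y coordinates along the arc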

Convert Colored Image to Grayscale in Codename One

Can anyone tell me how to go about converting an RGB Image object to grayscale? I know there is a lot of information on how to do this in Java already, but I just wanted an answer specific to Codename One so others can benefit.
I am trying to implement image binarization using Otsu's algorithm.
You can use Image.getRGB(), then modify the array as explained in this answer:
Convert Image to Grayscale with array matrix RGB java
Notice that the answer above is a bit oversimplified, as it doesn't take into account the correct weight per color channel for a proper grayscale effect, but this depends on your nitpicking level.
Then use this version of createImage with the resulting array.
For anyone looking for a simplified way (not using matrices) of doing what Shai is hinting at, here is some sample code:
int[] rgb = image.getRGB();
for (int k = 0; k < rgb.length; k++) {
    if (rgb[k] != 0) {
        // Unpack ARGB with shifts and masks (the original division-based
        // unpacking breaks on negative ints when the alpha byte is set).
        int a = (rgb[k] >> 24) & 0xFF;
        int r = (rgb[k] >> 16) & 0xFF;
        int g = (rgb[k] >> 8) & 0xFF;
        int b = rgb[k] & 0xFF;
        int intensity = Math.round((r + g + b) / 3.0f); // plain average, no channel weighting
        rgb[k] = (a << 24) | (intensity << 16) | (intensity << 8) | intensity;
    }
}
Image grayImage = Image.createImage(rgb, image.getWidth(), image.getHeight());

Use Pillow (PIL fork) for chroma key [duplicate]

I'm writing a script to chroma key (green screen) and composite some videos using Python and PIL (Pillow). I can key the 720p images, but there's some leftover green spill. Understandable, but I'm writing a routine to remove that spill... however, I'm struggling with how long it's taking. I can probably get better speeds using numpy tricks, but I'm not that familiar with them. Any ideas?
Here's my despill routine. It takes a PIL image and a sensitivity number, but I've been leaving that at 1 so far... it's been working well. I'm coming in at just over 4 seconds to remove the spill from a 720p frame. For comparison, the chroma-key routine runs in about 2 seconds per frame.
import time
from PIL import Image

def despill(img, sensitivity=1):
    """
    Blue limits green.
    """
    start = time.time()
    print('\t[*] Starting despill')
    width, height = img.size
    num_channels = len(img.getbands())
    out = Image.new("RGBA", img.size, color=0)
    for j in range(height):
        for i in range(width):
            #r,g,b,a = data[j,i]
            r, g, b, a = img.getpixel((i, j))
            if g > (b * sensitivity):
                out_g = (b * sensitivity)
            else:
                out_g = g
            # end if
            out.putpixel((i, j), (r, out_g, b, a))
        # end for
    # end for
    out.show()
    print('\t[+] done.')
    print('\t[!] Took: %0.1f seconds' % (time.time() - start))
    return out
# end despill
Instead of putpixel, I tried writing the output pixel values to a numpy array and then converting the array to a PIL image, but that averaged just over 5 seconds... so this was faster somehow. I know putpixel isn't the snappiest option, but I'm at a loss...
putpixel is slow, and loops like that are even slower, since they are run by the Python interpreter, which is slow as hell. The usual solution is to immediately convert the image to a numpy array and solve the problem with vectorized operations on it, which run in heavily optimized C code. In your case I would do something like:
import numpy as np
from PIL import Image

arr = np.array(img)
g = arr[:,:,1]
bs = arr[:,:,2] * sensitivity
cond = g > bs
arr[:,:,1] = cond * bs + (~cond) * g
out = Image.fromarray(arr)
(it may not be correct and I'm sure it can be optimized way better, this is just a sketch)
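A slightly tighter version of the same idea using np.minimum; a sketch under the same assumptions (RGBA uint8 input, small integer sensitivity), not tested against the original footage:
import numpy as np
from PIL import Image

def despill_fast(img, sensitivity=1):
    arr = np.array(img)
    g = arr[:, :, 1].astype(np.int32)                      # widen to avoid uint8 overflow
    b_limit = arr[:, :, 2].astype(np.int32) * sensitivity
    arr[:, :, 1] = np.minimum(g, b_limit).astype(np.uint8) # green never exceeds blue * sensitivity
    return Image.fromarray(arr)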

Converting an image to rows of grayscale pixel values

I'd like to use the node indico API. I need to convert the image to grayscale and then to arrays containing arrays/rows of pixel values. Where do I start?
These tools take a specific format for images: a list of lists, each sub-list containing a 'row' of values corresponding to n pixels in the image.
e.g. [[float, float, float ... *n ], [float, float, float ... *n ], ... *n]
Since pixels tend to be represented by RGBA values, you can use the following formula to convert to grayscale:
Y = (0.2126 * R + 0.7152 * G + 0.0722 * B) * A
We're working on automatically scaling images, but for the moment it's up to you to provide a square image.
It looks like Node's image manipulation tools are sadly a little lacking, but there is a good solution.
get-pixels allows reading in images either from a URL or from a local path and will convert them into an ndarray that should work excellently for the API.
The API will accept RGB images in the format that get-pixels produces, but if you're still interested in converting the images to grayscale (which can be helpful for other applications), the conversion is actually a little strange.
In a standard RGB image there's basically a luminance score given to each color, which is how bright the color appears. Based on the luminance, the conversion to grayscale for each pixel happens as follows:
Grayscale = 0.2126*R + 0.7152*G + 0.0722*B
Soon the API will also support the direct use of URLs; stay tuned on that front.
I maintain the sharp Node.js module, which may be able to get you a little closer to what you need.
The following example will convert the input to greyscale and generate a Buffer of integer values, one byte per pixel.
You'll need to add logic to divide by 255 to convert to float, then split into an array of arrays to keep the Indico API happy.
sharp(input)
  .resize(width, height)
  .grayscale()
  .raw()
  .toBuffer(function(err, data) {
    // data is a Buffer containing uint8 values (0-255)
    // with each byte representing one pixel
  });
