Python PIL: How To Draw Custom Filled Polygons

I'm wondering if there is a way in Pillow to draw custom filled polygons. I know I can draw rectangles and circles, but what about custom polygons?
Specifically, I want to draw something like the image below:
How can I achieve this? Any ideas? Thanks.

I'm not known for my graphic design abilities or patience tinkering with aesthetics, but this should give you an idea:
#!/usr/bin/env python3
from PIL import Image, ImageDraw
# Form basic purple image without alpha
w, h = 700, 250
im = Image.new('RGB', (w,h), color=(66,0,102))
# Form alpha channel independently
alpha = Image.new('L', (w,h), color=0)
# Get drawing context
draw = ImageDraw.Draw(alpha)
radius = 50
hline = h-radius
OPAQUE, TRANSPARENT = 255, 0
# Draw white where we want opaque
draw.rectangle((0,0,w,hline), fill=OPAQUE)
draw.ellipse((w//2-radius,hline-radius,w//2+radius,h), fill=OPAQUE)
# Draw black where we want transparent
draw.ellipse(((w//3)-radius,-radius,(w//3)+radius,radius), fill=TRANSPARENT)
draw.ellipse(((2*w//3)-radius,-radius,(2*w//3)+radius,radius), fill=TRANSPARENT)
# DEBUG only - not necessary
alpha.save('alpha.png')
# Put our shiny new alpha channel into our purple background and save
im.putalpha(alpha)
im.save('result.png')
The alpha channel looks like this - I artificially added a red border so you can see the extent of it:
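If you need a genuinely arbitrary filled shape rather than one built from rectangles and ellipses, ImageDraw also provides a polygon() method that takes a list of (x, y) vertices. A minimal sketch along the same lines (the vertex coordinates are made up purely for illustration):
#!/usr/bin/env python3
from PIL import Image, ImageDraw
# Same purple background as above
w, h = 700, 250
im = Image.new('RGB', (w, h), color=(66, 0, 102))
# Get drawing context on the image itself
draw = ImageDraw.Draw(im)
# Fill an arbitrary polygon defined by its vertices, with a separate outline colour
vertices = [(50, 200), (150, 60), (350, 120), (600, 40), (650, 220)]
draw.polygon(vertices, fill=(255, 255, 0), outline=(0, 0, 0))
im.save('polygon.png')
The same call works on an 'L' mode mask, so you can equally draw polygons into the alpha channel above to punch polygonal holes in, or add polygonal extensions to, the opaque area.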

Related

Change image color in PIL module

I am trying to vary the intensity of colors to obtain a different colored image...
import PIL
from PIL import Image
from PIL import ImageEnhance
from PIL import ImageDraw
# read image and convert to RGB
image=Image.open("readonly/msi_recruitment.gif")
image=image.convert('RGB')
# build a list of 9 images which have different brightnesses
enhancer=ImageEnhance.Brightness(image)
images=[]
for i in range(1, 10):
    images.append(enhancer.enhance(i/10))
# create a contact sheet from different brightnesses
first_image=images[0]
contact_sheet=PIL.Image.new(first_image.mode, (first_image.width*3,first_image.height*3))
x=0
y=0
for img in images:
    # Let's paste the current image into the contact sheet
    contact_sheet.paste(img, (x, y))
    # Now we update our X position. If it is going to be the width of the image, then we set it to 0
    # and update Y as well to point to the next "line" of the contact sheet.
    if x+first_image.width == contact_sheet.width:
        x=0
        y=y+first_image.height
    else:
        x=x+first_image.width
# resize and display the contact sheet
contact_sheet = contact_sheet.resize((int(contact_sheet.width/2),int(contact_sheet.height/2) ))
display(contact_sheet)
But the above code only varies brightness.
Please tell me what changes I should make to vary colour intensity in this code.
I'm sorry, but I am unable to upload the picture right now; consider any image you find suitable. Appreciated!
Please go to this link and answer the question there instead of this one. I apologise for the inconvenience:
Pixel colour intensity
Many colour operations are best done in a colourspace such as HSV, which you can get in PIL with:
HSV = im.convert('HSV')
You can then use split() to get 3 separate channels:
H, S, V = HSV.split()
Now you can change your colours. You seem a little woolly on what you want. If you want to change the intensity of the colours, i.e. make them less saturated and less vivid, decrease the S (Saturation). If you want to change the reds to purples, i.e. change the Hues, then add something to the Hue channel. If you want to make the image brighter or darker, change the Value (V) channel.
When you have finished, merge the edited channels back together with Image.merge('HSV', (H, S, V)) and convert back to RGB with convert('RGB').
See Splitting and Merging and Processing Individual Bands on this page.
Here is an example, using this image:
Here is the basic framework to load the image, convert to HSV colourspace, split the channels, do some processing, recombine the channels and revert to RGB colourspace and save the result.
#!/usr/bin/env python3
from PIL import Image
# Load image and create HSV version
im = Image.open('colorwheel.jpg')
HSV= im.convert('HSV')
# Split into separate channels
H, S, V = HSV.split()
######################################
########## PROCESSING HERE ###########
######################################
# Recombine processed H, S and V back into a recombined image
HSVr = Image.merge('HSV', (H,S,V))
# Convert recombined HSV back to reconstituted RGB
RGBr = HSVr.convert('RGB')
# Save processed result
RGBr.save('result.png')
So, if you find the chunk labelled "PROCESSING HERE" and put code in there to divide the saturation by 2, it will make the colours less vivid:
# Desaturate the colours by halving the saturation
S = S.point(lambda p: p//2)
If, instead, we halve the brightness (V), like this:
# Halve the brightness
V=V.point(lambda p: p//2)
the result will be darker:
If, instead, we add 80 to the Hue, all the colours will rotate around the circle - this is called a "Hue rotation":
# Rotate Hues around the Hue circle by 80 on a range of 0..255, so around 1/3 of a circle, i.e. 120 degrees:
H = H.point(lambda p: p+80)
which gives this:
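One caveat worth noting: as far as I know, point() on an 8-bit channel clips results above 255 rather than wrapping them around, so hues near the top of the range all pile up at 255. For a true wrap-around rotation you may prefer a modulo, something like:
# Rotate Hues with wrap-around so values past 255 come back round to 0
H = H.point(lambda p: (p+80) % 256)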

With Image Process change each shape's color to a unique color

I have a png with multiple shapes on it that represent fictional geographic locations. I would like for each of theses locations to render with a different color so that they are easier to process.
I know how to change anything that is x color to y color, but I don't know how to set up the code so that it changes the color of the current shape and then moves on.
if pixelMap[i,j] == white:
    node.append(pixelMap[i,j])
    # look at all white pixels connected to current node 1
expected result is that I can input an image like this:
https://cdn.discordapp.com/attachments/404701706820124676/608881289080209408/Asset_2.png
and come out with each shape a unique color.
You are essentially looking for "flood-filling" of an area constrained by a boundary, and you can do that with PIL/Pillow's ImageDraw.floodfill() method like this:
#!/usr/bin/env python3
from PIL import Image, ImageDraw
import numpy as np
# Open the image
im = Image.open('map.png').convert('RGB')
# Make all pixels in top-left country into magenta (255,0,255)
ImageDraw.floodfill(im,xy=(40,40),value=(255,0,255),thresh=50)
# Make all pixels in bottom-right country into yellow (255,255,0)
ImageDraw.floodfill(im,xy=(100,100),value=(255,255,0),thresh=50)
That gets you this:
I am not certain what you mean by "moving on to the next area", but I presume you could put the flood fill commands in a loop and keep taking the position of the first white pixel as the seed for the flood filling until there are no more uncoloured (white) pixels left. To do that, I would convert the PIL Image into a Numpy Array and find the coordinates of the white pixels like this:
# Convert PIL Image to Numpy Array
n = np.array(im)
# Get X,Y coordinates of all remaining white pixels
y, x = np.nonzero(np.all(n==[255,255,255],axis=2))
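Putting those two pieces together, a possible loop might look like the sketch below. The file name and the list of fill colours are just assumptions for illustration; the idea is simply to keep seeding floodfill() from the first remaining white pixel until none are left:
#!/usr/bin/env python3
from PIL import Image, ImageDraw
import numpy as np
# Open the image
im = Image.open('map.png').convert('RGB')
# Some arbitrary fill colours to cycle through - replace with whatever you like
colours = [(255,0,255), (255,255,0), (0,255,255), (255,128,0), (0,255,0)]
i = 0
while True:
    # Find the remaining pure-white pixels
    n = np.array(im)
    y, x = np.nonzero(np.all(n == [255,255,255], axis=2))
    if len(x) == 0:
        break                      # no white pixels left, we are done
    # Seed a flood fill from the first white pixel found
    seed = (int(x[0]), int(y[0]))
    ImageDraw.floodfill(im, xy=seed, value=colours[i % len(colours)], thresh=50)
    i += 1
im.save('result.png')
Note this also colours any white background outside the shapes, since it keeps going until no white pixels remain.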

How can I quickly change pixels in a image from a color dictionary?

I have an image, I want to change all the colors in the image from a color map eg. {(10,20,212) : (60,40,112)...}
Currently, I am reading the image OpenCV and then iterating over the image array and changing each pixel, but this is very slow.
Is there any way I can do it faster?
I am providing two answers to this question. This answer is more based in OpenCV and the other is more based in PIL/Pillow. Read this answer in conjunction with my other answer and potentially mix and match.
You can use Numpy's linalg.norm() to find the distances between colours and then argmin() to choose the nearest. You can then use a LUT "Look Up Table" to look up a new value based on the existing values in an image.
#!/usr/bin/env python3
import numpy as np
import cv2
def QuantizeToGivenPalette(im, palette):
    """Quantize image to a given palette.
    The input image is expected to be a Numpy array.
    The palette is expected to be a list of R,G,B values."""
    # Calculate the distance to each palette entry from each pixel
    distance = np.linalg.norm(im[:,:,None] - palette[None,None,:], axis=3)
    # Now choose whichever one of the palette colours is nearest for each pixel
    palettised = np.argmin(distance, axis=2).astype(np.uint8)
    return palettised
# Open input image and palettise to "inPalette" so each pixel is replaced by palette index
# ... so all black pixels become 0, all red pixels become 1, all green pixels become 2...
im=cv2.imread("image.png",cv2.IMREAD_COLOR)
inPalette = np.array([
    [0,0,0],          # black
    [0,0,255],        # red
    [0,255,0],        # green
    [255,0,0],        # blue
    [255,255,255]],   # white
)
r = QuantizeToGivenPalette(im,inPalette)
# Now make LUT (Look Up Table) with the 5 new colours
LUT = np.zeros((5,3),dtype=np.uint8)
LUT[0]=[255,255,255] # white
LUT[1]=[255,255,0] # cyan
LUT[2]=[255,0,255] # magenta
LUT[3]=[0,255,255] # yellow
LUT[4]=[0,0,0] # black
# Look up each pixel in the LUT
result = LUT[r]
# Save result
cv2.imwrite('result.png', result)
Input Image
Output Image
Keywords: Python, PIL, Pillow, image, image processing, quantise, quantize, specific palette, given palette, specified palette, known palette, remap, re-map, colormap, map, LUT, linalg.norm.
I am providing two answers to this question. This answer is more based in PIL/Pillow and the other is more based in OpenCV. Read this answer in conjunction with my other answer and potentially mix and match.
You can do it using the palette. In case you are unfamiliar with palettised images, rather than having an RGB value at each pixel location, you have a simple 8-bit index into a palette of up to 256 colours.
So, what we can do, is load your image as a PIL Image, and quantise it to the set of input colours you have. Then each pixel will have the index of the colour in your map. Then just replace the palette with the colours you want to map to.
#!/usr/bin/env python3
import numpy as np
from PIL import Image
def QuantizeToGivenPalette(im, palette):
    """Quantize image to a given palette.
    The input image is expected to be a PIL Image.
    The palette is expected to be a list of no more than 256 R,G,B values."""
    e = len(palette)
    assert e>0,    "Palette unexpectedly short"
    assert e<=768, "Palette unexpectedly long"
    assert e%3==0, "Palette not multiple of 3, so not RGB"
    # Make tiny, 1x1 new palette image
    p = Image.new("P", (1,1))
    # Zero-pad the palette to 256 RGB colours, i.e. 768 values, and apply to image
    palette += (768-e)*[0]
    p.putpalette(palette)
    # Now quantize input image to the same palette as our little image
    return im.convert("RGB").quantize(palette=p)
# Open input image and palettise to "inPalette" so each pixel is replaced by palette index
# ... so all black pixels become 0, all red pixels become 1, all green pixels become 2...
im = Image.open('image.png').convert('RGB')
inPalette = [
    0,0,0,        # black
    255,0,0,      # red
    0,255,0,      # green
    0,0,255,      # blue
    255,255,255   # white
]
r = QuantizeToGivenPalette(im,inPalette)
# Now simply replace the palette leaving the indices unchanged
newPalette = [
    255,255,255,  # white
    0,255,255,    # cyan
    255,0,255,    # magenta
    255,255,0,    # yellow
    0,0,0         # black
]
# Zero-pad the palette to 256 RGB colours, i.e. 768 values
newPalette += (768-len(newPalette))*[0]
# And finally replace the palette with the new one
r.putpalette(newPalette)
# Save result
r.save('result.png')
Input Image
Output Image
So, to do specifically what you asked with a dictionary that maps old colour values to new ones, you will want to initialise inPalette from the keys of your dictionary and newPalette from the values of your dictionary, as sketched below.
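A minimal sketch of that flattening step, assuming a hypothetical colour map called colourMap:
# Hypothetical mapping of old RGB colours to new RGB colours
colourMap = {(10,20,212): (60,40,112), (0,0,0): (255,255,255)}
# Flatten the keys and values into the flat R,G,B lists the code above expects
inPalette  = [channel for colour in colourMap.keys()   for channel in colour]
newPalette = [channel for colour in colourMap.values() for channel in colour]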
Keywords: Python, PIL, Pillow, image, image processing, quantise, quantize, specific palette, given palette, specified palette, known palette, remap, re-map, colormap, map.
There are some hopefully useful words about palettised images here, and here.
I think you might find the built-in LUT function of OpenCV helpful, as documented here.
There is already a Python binding for the function; it takes the original matrix and a LUT as input, and returns the new matrix as output.
There isn't a tutorial for using it in Python, but there is one for using it in C++ which I imagine will be useful, found here. That tutorial lists this method as the fastest one for this sort of problem.
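For illustration, a minimal sketch of cv2.LUT() follows; the file name and the inversion table are just assumptions. Note that cv2.LUT applies a 256-entry table to each channel value independently, so it remaps channel values rather than arbitrary RGB triplets:
#!/usr/bin/env python3
import cv2
import numpy as np
im = cv2.imread('image.png', cv2.IMREAD_COLOR)
# Build a 256-entry lookup table - here it simply inverts each channel value
lut = np.arange(255, -1, -1, dtype=np.uint8)
# Apply the table to every channel of every pixel in one call
result = cv2.LUT(im, lut)
cv2.imwrite('inverted.png', result)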

How to convert the background of the entire image to white when both white and black backgrounds are present?

The form image contains text on different backgrounds. The image needs to be converted to a single background (here white), and hence the headings need to be converted to black text.
input image :
output image:
My approach was to detect the grid (horizontal and vertical lines, summed up), crop each section of the grid into a new sub-image, then check the majority pixel colour and transform accordingly. But after implementing that, the blue background is not being detected and gets cropped like this:
So I am trying to convert the entire form image into one background so that I can avoid such outcomes.
Here's a different way of doing it that will cope with the "reverse video" being black, rather than relying on some colour saturation to find it.
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image, greyscale and threshold
im = cv2.imread('form.jpg',cv2.IMREAD_GRAYSCALE)
# Threshold and invert
_,thr = cv2.threshold(im,127,255,cv2.THRESH_BINARY)
inv = 255 - thr
# Perform morphological closing with square 7x7 structuring element to remove details and thin lines
SE = np.ones((7,7),np.uint8)
closed = cv2.morphologyEx(thr, cv2.MORPH_CLOSE, SE)
# DEBUG save closed image
cv2.imwrite('closed.png', closed)
# Find row numbers of dark rows
meanByRow=np.mean(closed,axis=1)
rows = np.where(meanByRow<50)
# Replace selected rows with those from the inverted image
im[rows]=inv[rows]
# Save result
cv2.imwrite('result.png',im)
The result looks like this:
And the intermediate closed image looks like this - I artificially added a red border so you can see its extent on Stack Overflow's white background:
You can read about morphology here and an excellent description by Anthony Thyssen, here.
Here's a possible approach. Shades of blue will show up with a higher saturation than black and white if you convert to HSV colourspace, so...
convert to HSV
find mean saturation for each row and select rows where mean saturation exceeds a threshold
greyscale those rows, invert and threshold them
This approach should work if the reverse (standout) backgrounds are any colour other than black or white. It assumes you have de-skewed your images to be truly vertical/horizontal per your example.
That could look something like this in Python:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image
im = cv2.imread('form.jpg')
# Make HSV and extract S, i.e. Saturation
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
s=hsv[:,:,1]
# Save saturation just for debug
cv2.imwrite('saturation.png',s)
# Make greyscale version and inverted, thresholded greyscale version
gr = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
_,grinv = cv2.threshold(gr,127,255,cv2.THRESH_BINARY_INV)
# Find row numbers of rows with colour in them
meanSatByRow=np.mean(s,axis=1)
rows = np.where(meanSatByRow>50)
# Replace selected rows with those from the inverted, thresholded image
gr[rows]=grinv[rows]
# Save result
cv2.imwrite('result.png',gr)
The result looks like this:
The saturation image looks as follows - note that saturated colours (i.e. the blues) show up as light, everything else as black:
The greyscale, inverted image looks like this:

Change background and pixel color of image

For an experiment I want to show participants drawings from a database of black line drawings on a white background. Eventually I only want to show the 'drawn part' of each image in a certain color. So I want the white parts of the image to be made gray, so they are indistinguishable from the gray background, and I want to show the black parts of the image (the actual drawing) in other colors, for example red.
I am quite new to programming and so far I couldn't find an answer. I have tried several things, including the 2 options below.
Could anyone maybe show me an example of how to change the colors of the image I have attached to this message?
It would be very much appreciated!
####### OPTION 1, not working
#picture = Image.open(fname)
fname = exp.get_file('PICTURE_1.png')
picture = Image.open(fname)
# Get the size of the image
width, height = picture.size
# Process every pixel
for x in range(width):
    for y in range(height):
        current_color = picture.getpixel( (x,y) )
        if current_color == (255,255,255):
            new_color = (255,0,0)
            picture.putpixel( (x,y), new_color)
        elif current_color == (0,0,0):
            new_color2 = (115,115,115)
            picture.putpixel( (x,y), new_color2)
picture.show()
#picture.show()
win.flip()
clock.sleep(1000)
Implementing the changes you suggested gives: TypeError: 'int' object has no attribute '__getitem__'
for x in range(width):
    for y in range(height):
        current_color = picture.getpixel( (x,y) )
        if (current_color[0]<200) and (current_color[1]<200) and (current_color[2]<200):
            new_color = (255,0,0)
            picture.putpixel( (x,y), new_color)
        elif (current_color[0]>200) and (current_color[1]>200) and (current_color[2]>200):
            new_color2 = (115,115,115)
            picture.putpixel( (x,y), new_color2)
picture.show()
Your approach in option one is basically correct, but here are a few tips to help you get it working properly:
Instead of saying if current_color == (255,255,255):, you should put
if (current_color[0]>200) and (current_color[1]>200) and (current_color[2]>200):
as even though the white parts of the image look white, the pixels may not be exactly (255,255,255).
I thought you wanted to turn the white parts grey and the black parts red? In your code for option one, the lines
if current_color == (255,255,255):
new_color = (255,0,0)
will turn white pixels red. To turn black pixels red, it should be if current_color == (0,0,0).
If your code is still not working when these changes are made, you could try creating a new image with the same dimensions as the original one, and adding pixels to the new image rather than editing the pixels in the original one.
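A rough sketch of that "build a new image" approach, under the assumption the image is RGB (the file name is hypothetical):
from PIL import Image
picture = Image.open('PICTURE_1.png').convert('RGB')
recoloured = Image.new('RGB', picture.size)
for x in range(picture.width):
    for y in range(picture.height):
        r, g, b = picture.getpixel((x, y))
        if r > 200 and g > 200 and b > 200:
            recoloured.putpixel((x, y), (115, 115, 115))   # near-white -> grey
        else:
            recoloured.putpixel((x, y), (255, 0, 0))       # dark lines -> red
recoloured.show()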
Also, it would help if you could tell us what actually happens when you run your code. Is there an error message, or is an image shown but the image is not correct? Could you please attach an example output?
Update:
I fiddled around with your code, and got it to do what you want it to do. Here is the code I ended up with:
import PIL
from PIL import Image
picture = Image.open('image_one.png').convert('RGB')   # ensure RGB so getpixel() returns an (R,G,B) tuple
# Get the size of the image
width, height = picture.size
for x in range(width):
    for y in range(height):
        current_color = picture.getpixel( (x,y) )
        if (current_color[0]<200) and (current_color[1]<200) and (current_color[2]<200):
            new_color = (255,0,0)
            picture.putpixel( (x,y), new_color)
        elif (current_color[0]>200) and (current_color[1]>200) and (current_color[2]>200):
            new_color2 = (115,115,115)
            picture.putpixel( (x,y), new_color2)
picture.show()
If you copy and paste this code into a script and run it in the same folder as your image, it should work.
There are much more efficient ways to do this than looping through each pixel and changing its value.
Since it looks like you're using PsychoPy, you can save your images as greyscale with a transparent background. By using the greyscale image format you allow PsychoPy to change the color of the lines to anything you want simply by altering the stimulus color setting. By using a transparent background, whatever you see behind your lines will show through, so you can choose to have a white square, a different square or no square at all. By this method, all the calculations for the colors are being done on the graphics card and can be changed every frame with no problems.
If for some reason you need to alter the image in ways that PsychoPy doesn't inherently allow (and if speed of processing matters), then you should try to change all the pixels in a single operation (using numpy arrays) rather than one pixel at a time in a for-loop, for example as sketched below.
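A minimal sketch of that whole-image approach with numpy; the file name and the thresholds are just assumptions, mirroring the ones used above:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
im = np.array(Image.open('PICTURE_1.png').convert('RGB'))
# Boolean mask: light pixels (the white background); everything else is the drawn lines
light = np.all(im > 200, axis=2)
# Recolour each group in a single operation
im[light]  = (115, 115, 115)   # background -> grey
im[~light] = (255, 0, 0)       # drawing    -> red
Image.fromarray(im).show()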
