When pasting a reshaped image, why does it lose its shape? - python-3.x

I'm using Pillow with Python 3. I want to reshape a square image into a circle and then paste it onto another image.
My problem is that the reshaping itself works fine and the image does become a circle, but when I paste it onto the other image, it turns back into a square:
That's my code:
import numpy as np
import discord
from io import BytesIO
from PIL import Image, ImageDraw

@client.command()
async def test(ctx):
    owms = Image.open("bg.jpg")
    asset = ctx.author.avatar_url_as(size=128)
    data = BytesIO(await asset.read())
    img = Image.open(data).convert("RGB")
    npImage = np.array(img)
    h, w = img.size
    alpha = Image.new('L', img.size, 0)
    draw = ImageDraw.Draw(alpha)
    draw.pieslice([0, 0, h, w], 0, 360, fill=255)
    npAlpha = np.array(alpha)
    npImage = np.dstack((npImage, npAlpha))
    pfp = Image.fromarray(npImage)
    pfp.save("outpfp.png")
    owms.paste(Image.fromarray(npImage), (101, 67))
    owms.save("outlvl.jpg")
    await ctx.send(file=discord.File("outpfp.png"))
    await ctx.send(file=discord.File("outlvl.jpg"))
(I have made it so that the output is both the circle-shaped image and the intended final image.)
The solution I'm looking for is for the pasted image to be a circle, not a square as in the image above.

The only thing left is to properly use the mask parameter in Image.paste. Also, there's no need for this NumPy detour solely for adding the alpha channel. There's Image.putalpha for that. Here's the minimized code (I left out the Discord stuff):
from PIL import Image, ImageDraw
owms = Image.open('bg.jpg')
img = Image.open('get/your/avatar/here').resize((128, 128))
h, w = img.size
alpha = Image.new('L', img.size, 0)
draw = ImageDraw.Draw(alpha)
draw.pieslice([0, 0, h, w], 0, 360, fill=255)
img.putalpha(alpha)
img.save('outpfp.png')
owms.paste(img, (101, 67), mask=img)
owms.save('outlvl.jpg')
I get the following outputs:
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
PyCharm: 2021.1.1
Pillow: 8.2.0
----------------------------------------

Related

Paste an image to another image at two given co-ordinates with altered opacity using PIL or OpenCV in Python

I have two images, each with a given point (one point per image), that need to be aligned so that the resulting image is a combination of both, with image 2 pasted onto image 1 at 40% opacity. I have taken this question into consideration, but our case does not exactly match, as the image coordinates are supplied by the user and the images can have a wide range of sizes.
Image 1:
Image 2:
Final result (desired output):
For this I have tried PIL's img.paste() function and replacing values in the NumPy arrays of the images with cv2, both giving results that are far from what is desired.
I made two input images with ImageMagick like this:
magick -size 300x400 xc:"rgb(1,204,255)" -fill red -draw "point 280,250" 1.png
magick -size 250x80 xc:"rgb(150,203,0)" -fill red -draw "point 12,25" 2.png
Then ran the following code:
#!/usr/bin/env python3
"""
Paste one image on top of another such that given points in each are coincident.
"""
from PIL import Image
# Open images and ensure RGB
im1 = Image.open('1.png').convert('RGB')
im2 = Image.open('2.png').convert('RGB')
# x,y coordinates of point in each image
p1x, p1y = 280, 250
p2x, p2y = 12, 25
# Work out how many pixels of space we need left, right, above, below common point in new image
pL = max(p1x, p2x)
pR = max(im1.width-p1x, im2.width-p2x)
pT = max(p1y, p2y)
pB = max(im1.height-p1y, im2.height-p2y)
# Create background in solid white
bg = Image.new('RGB', (pL+pR, pT+pB),'white')
bg.save('DEBUG-bg.png')
# Paste im1 onto background
bg.paste(im1, (pL-p1x, pT-p1y))
bg.save('DEBUG-bg+im1.png')
# Make 40% opacity mask for im2
alpha = Image.new('L', (im2.width,im2.height), int(40*255/100))
alpha.save('DEBUG-alpha.png')
# Paste im2 over background with alpha
bg.paste(im2, (pL-p2x, pT-p2y), alpha)
bg.save('result.png')
The result is this:
The lines that save images with names starting with "DEBUG-xxx.png" are just for easy debugging and can be removed. I can easily view them all to see what is going on with the code, and I can easily delete them all in one go by removing "DEBUG*png".
Without any more details, I will try to answer the question as best as I can and will name all the extra assumptions that I made (and how to handle them if you can't make them).
Since there were no provided images, I created a blue and green image with a black dot as merging coordinate, using the following code:
import numpy as np
from PIL import Image, ImageDraw
def create_image_with_point(name, color, x, y, width=3):
    image = np.full((400, 400, 3), color, dtype=np.uint8)
    image[y - width:y + width, x - width:x + width] = (0, 0, 0)
    image = Image.fromarray(image, mode='RGB')
    ImageDraw.Draw(image).text((x - 15, y - 20), 'Point', (0, 0, 0))
    image.save(name)
    return image
blue = create_image_with_point('blue.png', color=(50, 50, 255), x=300, y=100)
green = create_image_with_point('green.png', color=(50, 255, 50), x=50, y=50)
This results in the following images:
Now I will make the assumption that the images do not yet contain an alpha layer (since I created them without one). Therefore I will load the images and add an alpha layer to them:
import numpy as np
from PIL import Image
blue = Image.open('blue.png')
blue.putalpha(255)
green = Image.open('green.png')
green.putalpha(255)
My following assumption is that you know the merge coordinates beforehand:
# Assuming x, y coordinates.
point_blue = (300, 100)
point_green = (50, 50)
Then you can create an empty image, that can hold both of the images easily:
new_image = np.zeros((1000, 1000, 4), dtype=np.uint8)
This is a far-fetched assumption if you do not know the image size beforehand; in that case you will have to calculate the combined size of the two images.
Then you can place each image's dot at the center of the newly created image (in my case (500, 500)). For this you use the merging points as offsets. You can then perform alpha blending (in general: np.uint8(img_1*alpha + img_2*(1-alpha))) to merge the images with different opacities.
Which is in code:
def place_image(image: Image, point_xy: tuple[int, int], dest: np.ndarray, alpha: float = 1.) -> np.ndarray:
    # Place the merging dot on (500, 500).
    offset_x, offset_y = 500 - point_xy[0], 500 - point_xy[1]
    # Calculate the location of the image and perform alpha blending.
    destination = dest[offset_y:offset_y + image.height, offset_x:offset_x + image.width]
    destination = np.uint8(destination * (1 - alpha) + np.array(image) * alpha)
    # Copy the 'merged' image to the destination location.
    dest[offset_y:offset_y + image.height, offset_x:offset_x + image.width] = destination
    return dest
# Add the background image blue with alpha 1
new_image = place_image(blue, point_blue, dest=new_image, alpha=1)
# Add the second image with 40% opacity
new_image = place_image(green, point_green, dest=new_image, alpha=0.4)
# Store the resulting image.
image = Image.fromarray(new_image)
image.save('result.png')
The final result will be a bigger image containing both combined images; again, you can calculate the correct bounding box so you don't have these huge areas of 'nothing' sticking out. The final result will look like this:
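On that last point: here is a small sketch (my own addition, assuming the 4-channel new_image built above) of cropping the canvas down to the bounding box of its non-empty pixels, using the alpha channel:
# Keep only the region where something was pasted (alpha > 0)
ys, xs = np.where(new_image[:, :, 3] > 0)
cropped = new_image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
Image.fromarray(cropped).save('result_cropped.png')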

Is there a solution to find the external contour correctly in my images

I got a segmented image as input to my program. The goal is to split the regions into two images: one containing the external contours (regions) and the other containing the internal contours (regions).
The program is written in Python 3.7 with OpenCV.
I tried some morphological operations (close) and a smoothing filter (median), then applied binary and Otsu thresholding and Canny edge detection to get a better version of the contours with the findContours function.
First I extract the external contours with cv2.RETR_EXTERNAL, but this is what I get:
def function(image):
    # pretraitement (preprocessing)
    im = cv2.imread(image, 0)
    _Kernel = 3
    iteration__ = 5
    im = Pretraitement.pretraitement.lissage_median(im, _Kernel, iteration__)
    kernel = (3, 3)
    im = cv2.morphologyEx(im, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_CROSS, kernel))
    high_thresh, im = cv2.threshold(im, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    lowThresh = 0.5 * high_thresh
    cv2.rectangle(im, (0, 0), (im.shape[1], im.shape[0]), 0, 3)
    contour = cv2.findcontours(
        cv2.Canny(im.copy(), lowThresh, high_thresh),
        Img_Colored_Readed.shape, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    MaskExtern = np.zeros((im.shape[0], im.shape[1], 3), dtype=np.uint8)
    MaskRegion = np.zeros((im.shape[0], im.shape[1], 3), dtype=np.uint8)
    MaskContour = np.zeros(im.shape, dtype=np.uint8)
    for i in range(len(contour)):
        for j in range(len(contour)):
            # to check if the contour j is inside contour i
            if BoundaryBasedDescriptors.Contours.pointInContour3(contour[i], contour[j]):
                pass
            else:
                cv2.drawContours(MaskExtern, contour, j, (0, 255, 255), 1)
        cv2.drawContours(MaskContour, contour, i, 255, 1)
        cv2.drawContours(MaskRegion, contour, i, (255, i*10, 255-i*10), -1)
    cv2.imwrite('_external.jpg', MaskExtern)
    cv2.imwrite('_contour.bmp', MaskContour)
    cv2.imwrite('_colore.jpg', MaskRegion)
The linked image is the segmented input image.
This is what I get when I draw all contours with thickness -1.
I expect to get the right external contours (regions), but I get some regions that are internal.
This is the error in your code:
cv2.rectangle(im, (0, 0), (im.shape[1], im.shape[0]), 0, 3)
The result is assigned to nothing, so it does nothing. If you add im = in front you'll get the behavior you're expecting.
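In other words, spelling out that one-line fix:
im = cv2.rectangle(im, (0, 0), (im.shape[1], im.shape[0]), 0, 3)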
If your purpose is to separate the internal and external white areas, you could also try this approach: first invert the image, then find the external outline of the black area (black in the original, white in the inverted image), which you can then use as a mask to separate the areas. If necessary, you can use the masked interior image to find the smaller contours.
Result:
Code:
import cv2
import numpy as np
# load image as grayscale
img = cv2.imread('cSxN8.png',0)
# threshold to create binary image
tr, img = cv2.threshold(img,50,255,cv2.THRESH_BINARY)
# invert image
img_inv = cv2.bitwise_not(img)
# find external contours
contours, hier = cv2.findContours(img_inv, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
# draw contours in gray
for cnt in contours:
cv2.drawContours(img,[cnt],0,(127),5)
# display image
cv2.imshow('Result', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
This is the complete solution (the approach was proposed by J.D., thanks to you):
import cv2
import numpy as np
# load image as grayscale
img = cv2.imread('cSxN8.png', 0)
# threshold to create binary image
tr, thresh = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY)
img = thresh.copy()
# invert image
img_inv = cv2.bitwise_not(img)
# find external contours
contours, hier = cv2.findContours(img_inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    cv2.drawContours(img, [cnt], 0, (127), 1)
# extract the edges of the external contour
img = cv2.bitwise_xor(img, img_inv)
# binarisation of the external contour
_, img = cv2.threshold(img, 126, 255, cv2.THRESH_BINARY)
# now we fill the external region
contours, hier = cv2.findContours(img, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
mask_externe = np.zeros(img.shape, dtype=np.uint8)
for i in range(len(contours)):
    cv2.drawContours(mask_externe, [contours[i]], -1, 255, -1)
# get the internal region
mask_interne = cv2.bitwise_xor(img_inv, mask_externe)

How to resize an image according to a reference point in another

I currently have two images (im1 and im2) with pixel dimensions of (1725, 1580). im2, however, has a large border around it that I had to create to ensure that the images were the same size. im2 was originally (1152, 864).
As such, when I overlay im2 on top of im1 using PIL.Image.blend, im2 appears overlaid onto im1, but it is a lot smaller. I have 2 distinct reference points on the images (present on both im1 and im2) that I think I could use to rescale im2 (zoom it in somehow?) so that it overlays im2 on top of im1 correctly.
My issue is that I have been looking through various Python modules (PIL, scipy, matplotlib, etc.) but can't seem to really get anywhere or find a solution with which I could approach this issue.
I have 2 reference points I think I could use (present on im1 and im2) to rescale im2 (zoom it in somehow?) to overlay im2 on top of im1.
I have looked at various modules but can't seem to find anything that might work (scipy, PIL, matplotlib).
#im1 https://i.imgur.com/dF8uyPw.jpg
#im2 https://i.imgur.com/o4RAhOQ.png
#im2_resized https://i.imgur.com/jfWz1LE.png
im1 = Image.open("pit5Film/Pit_5_5mm_inf.tif")
im2 = Image.open("pit5Overlay/overlay_132.png")
old_size = im2.size
new_size = im1.size
im2_resized = Image.new("RGB", new_size)
im2_resized.paste(im2,((round((new_size[0]-old_size[0])/2)),round(((new_size[1]-old_size[1])/2))))
Image.blend(im1,im2_resized,0.2)
I think you are trying to do an "affine distortion". I can maybe work out how to do it in OpenCV or PIL, but for the minute, here's what I did with ImageMagick.
First, I located the centre of the registration hole (?) on both the left and right side of the first image. I got these coordinates:
422,775 # left hole centre 1st picture
1246,799 # right hole centre 1st picture
Then I found these same features in the second picture at:
514,426 # left hole centre 2nd picture
668,426 # right hole centre 2nd picture
Then I ran this in Terminal to do the 2-point affine transformation:
convert imageA.jpg -virtual-pixel white \
-distort affine '422,775 514,426 1246,799 668,426' +repage \
imageB.png -compose overlay -composite result.jpg
There is loads of great information from Anthony Thyssen here if you fancy a read.
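Since the answer mentions that the same thing could probably be worked out in OpenCV, here is a rough sketch of that idea (my own addition, not part of the original answer, reusing the same point pairs; cv2.estimateAffinePartial2D fits a similarity transform from the two correspondences, and a plain addWeighted blend stands in for ImageMagick's overlay compose):
import cv2
import numpy as np

im_a = cv2.imread('imageA.jpg')
im_b = cv2.imread('imageB.png')

# Hole centres in imageA and the matching features in imageB
pts_a = np.float32([[422, 775], [1246, 799]])
pts_b = np.float32([[514, 426], [668, 426]])

# Estimate a 4-DOF (scale, rotation, translation) transform from the two pairs
M, _ = cv2.estimateAffinePartial2D(pts_a, pts_b)

# Warp imageA into imageB's coordinate frame, then blend the two
warped_a = cv2.warpAffine(im_a, M, (im_b.shape[1], im_b.shape[0]),
                          borderValue=(255, 255, 255))
result = cv2.addWeighted(warped_a, 0.6, im_b, 0.4, 0)
cv2.imwrite('result_cv.jpg', result)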
This is how to do it in Python Wand, which is based upon ImageMagick. I use Mark Setchell's images and the equivalent Python Wand command. The distort command needs ImageMagick 7, according to the documentation. I am using Python Wand 0.5.5, the current version.
Script:
#!/bin/python3.7
from wand.image import Image
from wand.color import Color
from wand.display import display
with Image(filename='imageA.jpg') as Aimg:
    with Image(filename='imageB.jpg') as Bimg:
        Aimg.virtual_pixel = 'background'
        Aimg.background_color = Color('white')
        arguments = (422, 775, 514, 426, 1246, 799, 668, 426)
        Aimg.distort('affine', arguments)
        Aimg.composite(Bimg, 0, 0, 'overlay')
        Aimg.save(filename='image_BoverlayA_composite.png')
        display(Aimg)
Calling Command:
python3.7 wand_affine_overlay.py
Result:
ADDITION:
If you want to trim the image to its minimum bounding box, then add trim to the command as follows, where the trim fuzz value is in the range 0 to the quantum range.
#!/bin/python3.7
from wand.image import Image
from wand.color import Color
from wand.display import display
with Image(filename='imageA.jpg') as Aimg:
    with Image(filename='imageB.jpg') as Bimg:
        Aimg.virtual_pixel = 'background'
        Aimg.background_color = Color('white')
        arguments = (422, 775, 514, 426, 1246, 799, 668, 426)
        Aimg.distort('affine', arguments)
        Aimg.composite(Bimg, 0, 0, 'overlay')
        Aimg.trim(fuzz=10000)
        Aimg.save(filename='image_BoverlayA_composite.png')
        display(Aimg)
A better way to resize an image using OpenCV:
dim = (width, height)
# resize image
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
To blend two images using OpenCV:
1: load both images
src1 = cv.imread(cv.samples.findFile('img1.jpg'))
src2 = cv.imread(cv.samples.findFile('img2.jpg'))
2: blend both images (alpha is any float between 0 and 1)
beta = (1.0 - alpha)
dst = cv.addWeighted(src1, alpha, src2, beta, 0.0)
3: display the result
cv.imshow('dst', dst)
cv.waitKey(0)
If you want to resize an image according to a reference point, consider this awesome blog by PyImageSearch: PYIMAGESEARCH 4 POINT
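For reference, here is a rough sketch of the 4-point idea that blog describes (my own addition; the corner coordinates and output size are placeholders, not values from the question):
import cv2
import numpy as np

img = cv2.imread('img1.jpg')

# Four reference corners in the source image (placeholders) and the
# rectangle they should map to in the output.
src = np.float32([[73, 239], [356, 117], [475, 265], [187, 443]])
dst = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])

# Compute the perspective transform and warp the source onto the rectangle.
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (400, 300))
cv2.imwrite('warped.jpg', warped)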

How do you change the color of specified pixels in an image?

I want to be able to detect a certain area of pixels based on their RGB values and change them to some other color (not black/white).
I have tried changing these values in the code, but my resulting images always show black pixels replacing the specified locations:
pixelMap[i,j]= (255,255,255)
from PIL import Image
im = Image.open('Bird.jpg')
pixelMap = im.load()
img = Image.new(im.mode, im.size)
pixelsNew = img.load()
for i in range(img.size[0]):
    for j in range(img.size[1]):
        toup = pixelMap[i, j]
        if int(toup[0] > 175) and int(toup[1] < 100 and int(toup[2]) < 100):
            pixelMap[i, j] = (255, 255, 255)
        else:
            pixelsNew[i, j] = pixelMap[i, j]
img.show()
You will find that iterating over images with Python loops is really slow, and you should get into the habit of using Numpy or optimised OpenCV or skimage code instead.
So, starting with this image:
from PIL import Image
import numpy as np
# Open image
im = Image.open('bird.jpg')
# Make into Numpy array
imnp = np.array(im)
# Make all reddish pixels white
imnp[(imnp[:,:,0]>170) & (imnp[:,:,1]<100) & (imnp[:,:,2]<100)] = [255,255,255]
# Convert back to PIL and save
Image.fromarray(imnp).save('result.jpg')
It looks like a tiny bug:
Instead of: pixelMap[i,j]= (255,255,255)
Use: pixelsNew[i,j] = (255,255,255)
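Putting that one-line fix back into the question's loop gives something like this (a sketch assuming the same Bird.jpg input as above):
from PIL import Image

im = Image.open('Bird.jpg')
pixelMap = im.load()
img = Image.new(im.mode, im.size)
pixelsNew = img.load()
for i in range(img.size[0]):
    for j in range(img.size[1]):
        toup = pixelMap[i, j]
        # Write the replacement colour into the new image, not the source pixel map.
        if toup[0] > 175 and toup[1] < 100 and toup[2] < 100:
            pixelsNew[i, j] = (255, 255, 255)
        else:
            pixelsNew[i, j] = pixelMap[i, j]
img.show()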

Not getting the box on detected car

I'm trying to run some Python code to detect a car in an image and draw a box around it. But somehow I'm not getting the box, even though there is no error. The code I'm using is below:
The function to draw the box
def draw_boxes(img, bboxes, color=(0, 0, 255), thick=6):
    # Make a copy of the image
    imcopy = np.copy(img)
    # Iterate through the bounding boxes
    for bbox in bboxes:
        # Draw a rectangle given bbox coordinates
        cv2.rectangle(imcopy, bbox[0], bbox[1], color, thick)
    # Return the image copy with boxes drawn
    return imcopy
The function to search the windows
def search_windows(img, windows, model):
    # 1) Create an empty list to receive positive detection windows
    test_images = []
    # 2) Iterate over all windows in the list
    for window in windows:
        # 3) Extract the test window from the original image
        test_img = cv2.resize(img[window[0][1]:window[1][1], window[0][0]:window[1][0]], (64, 64))
        # Normalize image
        test_img = test_img / 255
        test_images.append(test_img)
    # Predict and round the result
    test_images = np.array(test_images)
    prediction = np.around(model.predict(test_images))
    on_windows = [windows[i] for i in np.where(prediction == 1)[0]]
    return on_windows
Read an image and use the functions to draw the box
img = mpimg.imread(test_images[0])
detected_windows = search_windows(img, windows, model)
window_img = draw_boxes(img, detected_windows, color=(0, 255, 0), thick=3)
plt.imshow(window_img)
Thanks in advance.
I found the answer. I needed to set a better limit for my prediction in the code. If I set 0.4 or 0.5 as the threshold, then I get the boxes and can work my way toward better predictions.
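For reference, a minimal sketch of that thresholding idea (my own addition; the 0.4 cut-off and the Keras-style predict call are assumptions based on the answer). The last lines of search_windows would become:
    # Keep windows whose predicted score clears a chosen threshold instead of rounding
    threshold = 0.4
    prediction = model.predict(test_images)
    on_windows = [windows[i] for i in np.where(prediction.ravel() >= threshold)[0]]
    return on_windows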
