How to solve a problem with a transparent area in a PNG image? - python-3.x

I use Paint.NET on Windows to make a mask PNG image from a source PNG, and then build a black-and-white mask from it with this code:
from PIL import Image

def mask(im):
    newimdata = []
    transparent = (255, 255, 255, 0)
    black = (0, 0, 0)
    white = (255, 255, 255)
    for color in im.getdata():
        if color == transparent:
            newimdata.append(white)
        else:
            newimdata.append(black)
    newim = Image.new(im.mode, im.size)
    newim.putdata(newimdata)
    return newim

img = Image.open(thumb)
img = img.convert("RGBA")
mask(img).show()
The result is a little weird.
Source png.
Mask png.
I made the left transparent rectangle in Paint.NET by clicking the mouse and making one transparent area.
I made the right transparent rectangle the same way, but then I clicked the mouse again and made transparent vertical shapes on top of the already transparent rectangle.
I don't understand: are those two transparent layers (the right rectangle and the vertical shapes)?
How can I merge them so that the mask comes out like the clean rectangle on the left?

I don't understand what you are trying to do, but I want to show you how the 4 channels (RGBA) of your image look. R is on the left, then G, then B, with A (alpha/transparency) on the right.
I guess you just want the rightmost (A) channel, so with PIL, that is:
from PIL import Image
im = Image.open('....')
alpha = im.getchannel('A')
If you want all the channels, use:
R, G, B, A = im.split()
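A minimal sketch (my addition, not part of the original answer) of building the black-and-white mask from that alpha channel, where 'source.png' stands in for your own file:
from PIL import Image

im = Image.open('source.png').convert('RGBA')  # 'source.png' is a placeholder path
alpha = im.getchannel('A')
# fully transparent pixels (alpha == 0) become white, everything else black
mask = alpha.point(lambda a: 255 if a == 0 else 0)
mask.show()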

Related

How to improve edge smoothness of an image rotated using Pillow

I have this image
I want to rotate it and keep a smooth-looking edge.
I have tried the approach below, which adds some transparent borders to the image so that the rotation's interpolation can sample both the transparent padding and the opaque image intensities when it renders the edge.
import numpy as np
import PIL
from PIL import Image, ImageOps

img = Image.open("sunset200x100.jpg")
im_array = np.asarray(img)
w, h = img.size
padding = 4
new_padded_size = (w + padding, h + padding)
img = img.convert('RGBA')  # converting to RGBA adds transparency to the areas that aren't opaque
img = ImageOps.pad(img, size=new_padded_size)
im_array_rgba_padded = np.asarray(img)
rotated_im = img.rotate(56, expand=True, resample=PIL.Image.BICUBIC)
as_array = np.asarray(rotated_im)
# rotated_im.show()
rotated_im.save("rotated_sunset200x100_padded_with_2px.png")
However, it doesn't seem to interpolate on the left and right sides of the image. Inspecting im_array_rgba_padded, I see that the first and last rows of pixels have been made all black, but the left and right sides haven't got the same zero padding.
So the result ends up looking like this:
I'm wondering how I can get the padding onto the left and right as well, using the pad function, so that the left and right edges also look smooth, and why the padding is not applied to the left and right in the first place.
You can change your code like this:
import numpy as np
import PIL
from PIL import Image, ImageOps

img = Image.open("sunset200x100.jpg")
im_array = np.asarray(img)
w, h = img.size
print((w, h))
padding = 4
new_padded_size = (w + padding, h + padding)
img = img.convert('RGBA')  # converting to RGBA adds transparency to the areas that aren't opaque
# img = ImageOps.pad(img, size=new_padded_size)
img = ImageOps.expand(img, new_padded_size, fill='black')  # a 2-tuple here is treated as (left/right, top/bottom) border widths
im_array_rgba_padded = np.asarray(img)
rotated_im = img.rotate(56, expand=True, resample=PIL.Image.BICUBIC)
as_array = np.asarray(rotated_im)
rotated_im.show()
This works on Windows 10, Python 3.9, Pillow 8.3.
For more information, see this and pillow/ImageOps.
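A minimal sketch of an alternative (my addition, not from either post above): ImageOps.pad resizes the image to fit the requested size and only pads whatever space is left over, which here is just the top and bottom, whereas ImageOps.expand with an integer border adds the same number of pixels to all four edges:
import PIL
from PIL import Image, ImageOps

img = Image.open("sunset200x100.jpg").convert("RGBA")
img = ImageOps.expand(img, border=2, fill=(0, 0, 0, 0))  # 2 fully transparent pixels on every edge
rotated_im = img.rotate(56, expand=True, resample=PIL.Image.BICUBIC)
rotated_im.save("rotated_sunset200x100_padded.png")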

Count non-zero pixels in the area of a rotated rectangle

I've got a binary image with an object and a rotated rectangle over it, found with cv2.findContours and cv2.minAreaRect. The image is normalized to [0;1].
What is the most efficient way to count non-zero area within the bounding rectangle?
Create a new zero-valued Mat that has the same size as your original image.
Draw your rotated rectangle on it (fillConvexPoly using the RotatedRect vertices).
bitwise_and this image with your original mask.
Apply the findNonZero function on the result image.
You may also apply the previous steps to an ROI of the image, since you have the bounding box of your rotated rectangle.
Following Humam Helfawi's answer, I tuned the suggested steps a bit, and the following code seems to do what I need:
rectangles = [cv2.minAreaRect(cnt) for cnt in contours]
for rect in rectangles:
    rect = cv2.boxPoints(rect)
    rect = np.int0(rect)
    coords = cv2.boundingRect(rect)      # axis-aligned bounding box (x, y, w, h)
    rect[:, 0] = rect[:, 0] - coords[0]  # shift the corners into the bounding-box frame
    rect[:, 1] = rect[:, 1] - coords[1]
    area = cv2.contourArea(rect)
    zeros = np.zeros((coords[3], coords[2]), np.uint8)
    cv2.fillConvexPoly(zeros, rect, 255)  # mask of the rotated rectangle
    im = greyscale[coords[1]:coords[1] + coords[3],
                   coords[0]:coords[0] + coords[2]]
    print(np.sum(cv2.bitwise_and(zeros, im)) / 255)
contours is a list of points. You can fill this shape on an empty binary image with the same size using cv2.fillConvexPoly and then use cv2.countNonZero or numpy.count_nonzero to get the number of occupied pixels.
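A minimal sketch combining the two answers above (my addition; contour and binary_image are placeholder names for one contour from cv2.findContours and the [0;1] image from the question):
import cv2
import numpy as np

box = np.int32(cv2.boxPoints(cv2.minAreaRect(contour)))  # corners of the rotated rectangle
mask = np.zeros(binary_image.shape[:2], np.uint8)
cv2.fillConvexPoly(mask, box, 1)  # 1 inside the rotated rectangle, 0 outside
count = cv2.countNonZero(cv2.bitwise_and(mask, binary_image.astype(np.uint8)))
print(count)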

How to set relative position (oCoords) in FabricJs?

I have a Text in fabricJs. I set top and left.
This sets the aCoords properly to those values.
However, the oCoords don't match, and the Text is not displayed at the right position.
I suspect that I need to set oCoords somehow, so that the Text is displayed at the right pixel coordinates (top and left) on the canvas.
aCoords and oCoords are two different things and should not be in sync.
In your comment you speak about a scaled canvas.
Top and Left are two absolute values that represent the position of the object on the canvas. This position matches the canvas pixels when the canvas has an identity transform matrix.
If you apply a zoom, these coordinates diverge.
To get the position of pixel 300,100 of the scaled canvas on the unscaled canvas, you need to apply some basic math.
1) get the transform applied to the canvas
canvas.viewportTransform
2) invert it
var iM = fabric.util.invertTransform(canvas.viewportTransform)
3) multiply the wanted point by this matrix
var point = new fabric.Point(myX, myY);
var transformedPoint = fabric.util.transformPoint(point, iM)
4) set the object at that point.

Tinted overlay - negate tint effect for a specific element

I have a layout that is covered entirely by a tint overlay (it's the last element in my RelativeLayout).
I have TextView1 and TextView2 with textColor set to red (#FF0000).
My tint overlay is grey with transparency set - #88676767.
I want TextView1 tinted, but TextView2 to appear red (#FF0000).
Is there a way for me to calculate a color value X for TextView2 so that when it is overlaid with the tint layer it appears to the user as red (#FF0000)? If so, how do I go about calculating this value?
No, there is no way to achieve this. The color is calculated as
(color1.R*color1.A + color2.R*color2.A)/(color1.A+color2.A)
This equation has no solution for color1.R in (0, 255) and color1.A in (0, 1) when color2 is your overlay and the resulting color is 255.
Find more info in this answer.
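A quick numeric check (my addition, plugging the question's overlay #88676767 into the formula quoted above, i.e. overlay alpha 0x88/255 and grey value 0x67 = 103):
overlay_a = 0x88 / 255  # ~0.53
overlay_c = 0x67        # 103

def required_under_value(target, under_a):
    # solve the formula above for color1's channel value so the blend equals `target`
    return (target * (under_a + overlay_a) - overlay_c * overlay_a) / under_a

for a in (0.25, 0.5, 1.0):
    print(a, required_under_value(255, a), required_under_value(0, a))
# the red channel would need a value above 255 and the green/blue channels a negative
# value for every alpha, so no color X under this overlay can blend to pure #FF0000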

Why doesn't the alpha pixel in html canvas blend in with the background color?

http://jsfiddle.net/jBgqW/
I've painted the background with fillRect and fillStyle set to rgb(255,0,0), but when I iterate through the pixels, set some random color, and set the alpha value to 0, everything becomes white. I assumed that a transparent pixel should blend with the previously painted background color; or does it always default to white?
I hope that it's just my wrong way of using the canvas.
Can anyone explain why the background isn't red in this case, and how I can use the alpha channel properly? I would like to know if this has something to do with alpha premultiplication.
When using globalAlpha, the pixel colors are calculated with the current rgba values and the new values.
However, in this case you're setting the values manually and therefore doing no calculations. You're just setting the rgba values yourself, which means that the alpha channel is not used for calculating but is just altered without further use. The previous color (red) is basically overwritten in a 'brute force' way - instead of rgba(255, 0, 0, 255), it's now just rgba(128, 53, 82, 0). The original red color has simply been thrown away.
As a result, an alpha of 0 represents complete transparency, so you see the colors of the parent element.
This can be confirmed if you change the body background color: http://jsfiddle.net/jBgqW/2/.
This is somewhat thread necromancy, but I've just faced this problem and have a solution to it, if not for the original poster then for people like me coming from google.
As noted, putImageData directly replaces pixels rather than alpha blends, but that also means it preserves your alpha data. You can then redraw that image with alpha blending using drawImage.
To give an example, let's say we have a canvas that is 200 by 100 pixels and a 100 by 100 imageData object.
// our canvas
var canvas = document.getElementById("mycanvas");
var ctx = canvas.getContext("2d");
// our imageData, created in whatever fashion, with alpha as appropriate...
var data = /* ... */
// lets make the right half of our canvas blue
ctx.fillStyle="blue";
ctx.rect(100, 0, 100, 100);
ctx.fill();
// now draw our image data to the left (white) half, pixels are replaced
ctx.putImageData(data, 0, 0);
// now the magic: draw the left half (the image data) onto the blue right half so it alpha-blends
ctx.drawImage(canvas, 0, 0, 100, 100, 100, 0, 100, 100);
Voila. The right half of the image is now your image data blended with the blue background, rendered with hardware assistance.
