Converting PNG PIL Image into OpenCV Image replaces transparency with black background - python-3.x

When I try to convert a PNG-type PIL Image into an OpenCV image, the transparent background of the PNG turns into a black background. How can I keep the transparent background in the OpenCV image object?
Here is the code piece:
# PIL Image object which holds a transparent background png image.
pil_img = Image.open(ioFile).convert('RGBA')
pil_img.show()
# I use numpy to convert the pil_image into a numpy array
numpy_image = np.array(pil_img)
# I convert to an OpenCV image; notice COLOR_RGBA2BGRA, which converts
# the channel order from RGBA to BGRA (alpha kept)
opencvImage = cv2.cvtColor(numpy_image, cv2.COLOR_RGBA2BGRA)
#
# (I commented the lines below to show that I tried them, but they did not work.)
#
# opencvImage = cv2.cvtColor(numpy_image, cv2.IMREAD_UNCHANGED)
# opencvImage = cv2.cvtColor(numpy_image, cv2.COLOR_RGB2BGR)
showImage(opencvImage)
The last line of the code piece shows an image with a black background. I probably chose the wrong conversion method and could not find the proper one.

You can use this code to keep the transparency when converting.
To convert (with alpha) from a Pillow image to an OpenCV image:
You can manually change the color order.
import cv2
from PIL import Image
import numpy as np
pillowImage = Image.open('picturePath.png').convert('RGBA')
img = np.array(pillowImage) # 'img' Color order: RGBA
red = img[:,:,0].copy() # Copy R from RGBA
img[:,:,0] = img[:,:,2].copy() # Copy B to first order. Color order: BGBA
img[:,:,2] = red # Copy R to the third order. Color order: BGRA
opencvImage = img # img is now an OpenCV-style array in BGRA channel order
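As a quick check (a minimal sketch, reusing the file path from the snippet above), the alpha channel is still present after the conversion; it only looks lost because display helpers such as cv2.imshow generally ignore the fourth channel, whereas cv2.imwrite keeps it when writing a PNG:
import cv2
import numpy as np
from PIL import Image

pil_img = Image.open('picturePath.png').convert('RGBA')     # same file as above
bgra = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGBA2BGRA)

print(bgra.shape)                      # (height, width, 4) -> alpha is still there
cv2.imwrite('with_alpha.png', bgra)    # PNG written by OpenCV keeps the transparency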

Related

Color diffusion when merging multiple images in a folder using PIL in python

I have a set of 17 images and one of them has a highlighted pixel for my use. But when I merge these 17 images, I get the color, but it diffuses out of the pixel boundaries and I start seeing some colored pixels on the black background.
I am using the PIL library for the merging. I am attaching my code and the images for reference. Any help would be appreciated.
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Creating the pixel array
from PIL import Image
from PIL import ImageColor
img_path = '/Volumes/MY_PASSPORT/JRF/cancer_genome/gopal_gen/png_files/'
image_list = []
for entry in os.listdir(img_path):
    if entry.endswith('.png'):
        entry = int(entry.rstrip('.csv.png'))
        image_list.append(entry)
image_list.sort()
list_img = []
for j in range(len(image_list)):
    stuff = str(image_list[j]) + '.csv.png'
    list_img.append(stuff)
#print(list_img[0])
images = [Image.open(img_path + x) for x in list_img]
widths, heights = zip(*(i.size for i in images))
total_width = sum(widths)
max_height = max(heights)
#print(total_width, max_height)
new_im = Image.new('RGB', (total_width, max_height))
x_offset = 0
for im in images:
    new_im.paste(im, (x_offset, 0))
    #print(im.size)
    x_offset += im.size[0]
    #print(x_offset)
new_im.save(img_path + 'final_result_image.jpg')
Here is the combined image:
The third column has a pixel highlighted.
Here is the zoomed in part with the problem.
The JPEG format is lossy - it is allowed to change your pixels to make the file smaller. If your image is a conventional photo of a real-life scene, this doesn't normally matter. If your data is a blocky, computer-generated image, or a set of classes from a classification process, it can go horribly wrong if you use JPEG.
So, the answer is to use PNG (or potentially TIFF) format for images that need to be lossless.
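As a minimal sketch of that fix, only the last line of the merging code above needs to change so the composite is saved losslessly (same path and variable names as in the question):
# PNG is lossless, so single highlighted pixels keep their exact color
new_im.save(img_path + 'final_result_image.png')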

How do you change the color of specified pixels in an image?

I want to be able to detect a certain area of pixels based on their RGB values and change them to some other color (not black/white).
I have tried changing these values in the code, but my resulting images always show black pixels replacing the specified locations:
pixelMap[i,j]= (255,255,255)
from PIL import Image
im = Image.open('Bird.jpg')
pixelMap = im.load()
img = Image.new(im.mode, im.size)
pixelsNew = img.load()
for i in range(img.size[0]):
    for j in range(img.size[1]):
        toup = pixelMap[i,j]
        if int(toup[0]) > 175 and int(toup[1]) < 100 and int(toup[2]) < 100:
            pixelMap[i,j] = (255,255,255)
        else:
            pixelsNew[i,j] = pixelMap[i,j]
img.show()
You will find that iterating over images with Python loops is really slow, and you should get in the habit of using Numpy or optimised OpenCV or skimage code.
So, starting with this image:
from PIL import Image
import numpy as np
# Open image
im = Image.open('bird.jpg')
# Make into Numpy array
imnp = np.array(im)
# Make all reddish pixels white
imnp[(imnp[:,:,0]>170) & (imnp[:,:,1]<100) & (imnp[:,:,2]<100)] = [255,255,255]
# Convert back to PIL and save
Image.fromarray(imnp).save('result.jpg')
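One caveat, in the same spirit as the JPEG answer further up: saving the result as result.jpg reapplies lossy compression, so the pixels you just set to (255,255,255) may not stay exactly white. If exact values matter, write a PNG instead:
# Lossless output keeps the replaced pixel values exact
Image.fromarray(imnp).save('result.png')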
It looks like a tiny bug:
Instead of: pixelMap[i,j]= (255,255,255)
Use: pixelsNew[i,j] = (255,255,255)
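A minimal sketch of the loop with that one-line fix applied (same variable names as in the question; only the branch that writes white now targets the new image):
for i in range(img.size[0]):
    for j in range(img.size[1]):
        toup = pixelMap[i,j]
        if int(toup[0]) > 175 and int(toup[1]) < 100 and int(toup[2]) < 100:
            pixelsNew[i,j] = (255, 255, 255)   # write to the new image, not the source
        else:
            pixelsNew[i,j] = pixelMap[i,j]
img.show()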

Not able to extract numbers from an image

I am developing an application to read the numbers from an image using OpenCV in Python 3. I first converted the image to grayscale, then applied dilation and erosion to remove some noise, then applied a threshold to get an image with only black and white, then wrote the image to local disk to do some ..., then applied tesseract to recognise the numbers.
I need to extract the numbers from the image. I am new to OpenCV. Does anybody know any other method to get the result?
I have shared the image link below; I was trying to extract from that image. Thanks in advance.
https://drive.google.com/file/d/141y-3okLPGP_STje14ukSqSHcgtwMdRO/view?usp=sharing
import cv2
import numpy as np
import pytesseract
from PIL import Image
from pytesseract import image_to_string

# Path of working folder on Disk
src_path = "/Users/sougata.a.roy/Desktop/Images/"

def get_string(img_path):
    # Read image with opencv
    img = cv2.imread(img_path)
    # Convert to gray
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Apply dilation and erosion to remove some noise
    kernel = np.ones((1, 1), np.uint8)
    img = cv2.dilate(img, kernel, iterations=1)
    img = cv2.erode(img, kernel, iterations=1)
    # Write image after removed noise
    cv2.imwrite(src_path + "removed_noise.jpg", img)
    # Apply threshold to get image with only black and white
    img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 31, 2)
    # Write the image after apply opencv to do some ...
    cv2.imwrite(src_path + 'thres.jpg', img)
    # Recognize text with tesseract for python
    result = pytesseract.image_to_string(Image.open(src_path + "thres.jpg"), lang='eng')
    return result

print('--- Start recognize text from image ---')
print(get_string(src_path + 'abcdefg195.jpg'))
print("------ Done -------")
Output: 365
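If the target is digits only, one option worth trying is to constrain Tesseract itself. This is a sketch assuming a Tesseract version that honours tessedit_char_whitelist (some 4.0 LSTM builds ignored it); it reuses the thres.jpg written by get_string() above:
# Treat the image as a single text line and allow only digits
digits = pytesseract.image_to_string(
    Image.open(src_path + "thres.jpg"),
    lang='eng',
    config='--psm 7 -c tessedit_char_whitelist=0123456789'
)
print(digits.strip())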

I am unable to print colored text

I am unable to print the text in orange. I identified the edges of the image and then printed a text on it.
%matplotlib inline
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('ind_maharashtra.png',0)
edges = cv2.Canny(img,100,20)
cv2.imwrite('Edged_img.jpg',edges)
#plt.subplot(121)
img1 = cv2.imread('Edged_img.jpg',0)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(img1,'JAI MAHARASHTRA !!',(70,150), font, 0.7,(255,69,0),2,cv2.LINE_8)
cv2.imshow('Maharashtra Map',img1)
#cv2.imshow('Maharashtra Map',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The problem is that the image on which you are trying to draw (the image named img1) is a grayscale image, since the 2nd argument of cv2.imread is 0 in the following line:
img1 = cv2.imread('Edged_img.jpg',0)
You have two options to fix this issue. The first one is to load the image as a color image, as follows:
img1 = cv2.imread('Edged_img.jpg')
Alternatively, if you want your canvas to have a gray-ish look, you can just replicate the single channel to form a 3 channel image as follows:
img1 = cv2.imread('Edged_img.jpg', 0)
img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2BGR)
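Putting the fix together, here is a sketch based on the question's code. Note that OpenCV expects colors in BGR order, so orange, (255, 69, 0) in RGB, becomes (0, 69, 255) when passed to cv2.putText:
import cv2

img1 = cv2.imread('Edged_img.jpg')   # default flag loads a 3-channel color image
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(img1, 'JAI MAHARASHTRA !!', (70, 150), font, 0.7, (0, 69, 255), 2, cv2.LINE_8)
cv2.imshow('Maharashtra Map', img1)
cv2.waitKey(0)
cv2.destroyAllWindows()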
You are loading your JPEG in grayscale, so you will only be able to write grayscale to img1.
OpenCV Imread docs
change this line
img1 = cv2.imread('Edged_img.jpg',0)
to
img1 = cv2.imread('Edged_img.jpg',1)
As you can see from the docs linked above, using these numbers is OK, but you are actually setting a flag, so you could use the flag definition to make your code clearer. Incidentally, if you had used the flags, you would likely not have had this issue.
You can change your line to
img1 = cv2.imread('Edged_img.jpg', cv2.IMREAD_COLOR)
Look how much clearer and more understandable that is, especially when you come back to this code or hand it over to another developer in a few months' time.

How can I get an array from an ImageDraw object?

I'm writing a generic algorithm for pictures, so I started with the Image class of the PIL library and created a numpy array of the input image. Now I want to draw some figures, and the easiest way is to use ImageDraw, but afterwards I need arrays again for the next step, so I need to convert the ImageDraw object either to an Image object or to a numpy array.
Any suggestions on how I can do that?
I tried to use the numpy conversion that worked on Image objects, and tried to find built-in conversion methods.
from PIL import Image, ImageDraw
import numpy
input_image = Image.open("i2.jpg")
width, height = input_image.size
num_weights = width * height
image_draw = ImageDraw.Draw(Image.new('RGB', (width, height), 'WHITE'))
input_image = numpy.array(input_image.getdata())
# Do some stuff with image_draw using information from input_image
# and try to convert image_draw to input_image
I want to have a numpy array or an Image object as the output.
I think you want to process an image both as a PIL Image so you can draw on it, and also as a Numpy array so you can do processing on it.
So, here is an example of how to draw on an image with PIL, then convert it to a Numpy array and do some processing on it, then convert it back to a PIL Image.
#!/usr/bin/env python3
from PIL import Image, ImageDraw
import numpy as np
# Create a black 600x200 image
img = Image.new('RGB', (600, 200))
# Get a drawing handle
draw = ImageDraw.Draw(img)
# Draw on image
draw.rectangle(xy=[10,20,300,80], fill='red')
# Save as "result1.png"
img.save('result1.png')
# Convert PIL Image to Numpy array for processing
na = np.array(img)
# Make mask selecting red pixels then make them blue
Rmask = (na[:, :, 0:3] == [255, 0, 0]).all(2)
na[Rmask] = [0,0,255]
# Convert Numpy array back to PIL Image
img = Image.fromarray(na)
# Save as "result2.png"
img.save('result2.png')
The two output images are "result1.png" and "result2.png".
