Python - Multiple images to multilayered .xcf - python-3.x

I have three PIL images of the same size in RAM (no disk here). They represent different frequencies (details, shadow, mask).
Is it possible to overlay these images into a .xcf file so that I can further process them by hand in GIMP? If so, can the opacity of the layers be controlled before saving the image?
I'm ideally looking for a python solution.

Not certain what GIMP expects, but this might get you started as a multi-layer TIFF with varying opacities:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
w, h = 400, 400
# Our layers to write to output file
layers = []
# Make layer 0 - 400x400 RGBA with red square, A=64
im = np.full((w,h,4), [0,0,0,64], np.uint8)
im[10:110, 10:110, :3] = [255,0,0]
layers.append(Image.fromarray(im))
# Make layer 1 - 400x400 RGBA with green square, A=128
im = np.full((w,h,4), [0,0,0,128], np.uint8)
im[20:120, 20:120, :3] = [0,255,0]
layers.append(Image.fromarray(im))
# Make layer 2 - 400x400 RGBA with blue square, A=192
im = np.full((w,h,4), [0,0,0,192], np.uint8)
im[30:130, 30:130, :3] = [0,0,255]
layers.append(Image.fromarray(im))
# Save as multi-layer TIFF with PIL
layers[0].save('result.tif', save_all=True, append_images=layers[1:], compression='tiff_lzw')
The Preview app on macOS displays it like this - hopefully you can see that the red, at least, is more transparent, or less vibrant:
tiffinfo reports it like this:
TIFF Directory at offset 0x260c (9740)
Image Width: 400 Image Length: 400
Bits/Sample: 8
Compression Scheme: LZW
Photometric Interpretation: RGB color
Extra Samples: 1<unassoc-alpha>
Samples/Pixel: 4
Rows/Strip: 40
Planar Configuration: single image plane
TIFF Directory at offset 0x4e80 (20096)
Image Width: 400 Image Length: 400
Bits/Sample: 8
Compression Scheme: LZW
Photometric Interpretation: RGB color
Extra Samples: 1<unassoc-alpha>
Samples/Pixel: 4
Rows/Strip: 40
Planar Configuration: single image plane
TIFF Directory at offset 0x772a (30506)
Image Width: 400 Image Length: 400
Bits/Sample: 8
Compression Scheme: LZW
Photometric Interpretation: RGB color
Extra Samples: 1<unassoc-alpha>
Samples/Pixel: 4
Rows/Strip: 40
Planar Configuration: single image plane
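If you want to check what ended up in the file before taking it into GIMP, here is a minimal sketch (my own addition, assuming the result.tif produced above) that reads the pages back with PIL's ImageSequence iterator:
from PIL import Image, ImageSequence
with Image.open('result.tif') as im:
    for i, page in enumerate(ImageSequence.Iterator(im)):
        print(i, page.size, page.mode)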

Related

Paste an image to another image at two given co-ordinates with altered opacity using PIL or OpenCV in Python

I have two images with given points, one point in each image, that need to be aligned so that the resulting image is a combination of both, with image 2 pasted on image 1 at 40% opacity. I have taken this question into consideration, but our case does not exactly match, as the image co-ordinates are supplied by the user and the images can have a wide range of sizes.
Image 1:
Image 2:
Final result (desired output):
For this I have tried the img.paste() function of PIL and replacing values in the numpy arrays of the images in cv2, both giving results that are far from desired.
I made two input images with ImageMagick like this:
magick -size 300x400 xc:"rgb(1,204,255)" -fill red -draw "point 280,250" 1.png
magick -size 250x80 xc:"rgb(150,203,0)" -fill red -draw "point 12,25" 2.png
Then ran the following code:
#!/usr/bin/env python3
"""
Paste one image on top of another such that given points in each are coincident.
"""
from PIL import Image
# Open images and ensure RGB
im1 = Image.open('1.png').convert('RGB')
im2 = Image.open('2.png').convert('RGB')
# x,y coordinates of point in each image
p1x, p1y = 280, 250
p2x, p2y = 12, 25
# Work out how many pixels of space we need left, right, above, below common point in new image
pL = max(p1x, p2x)
pR = max(im1.width-p1x, im2.width-p2x)
pT = max(p1y, p2y)
pB = max(im1.height-p1y, im2.height-p2y)
# Create background in solid white
bg = Image.new('RGB', (pL+pR, pT+pB),'white')
bg.save('DEBUG-bg.png')
# Paste im1 onto background
bg.paste(im1, (pL-p1x, pT-p1y))
bg.save('DEBUG-bg+im1.png')
# Make 40% opacity mask for im2
alpha = Image.new('L', (im2.width,im2.height), int(40*255/100))
alpha.save('DEBUG-alpha.png')
# Paste im2 over background with alpha
bg.paste(im2, (pL-p2x, pT-p2y), alpha)
bg.save('result.png')
The result is this:
The lines that save images with names starting "DEBUG-xxx.png" are just for easy debugging and can be removed. They let me view all the intermediate steps to see what is going on with the code, and I can delete them all afterwards by removing "DEBUG*png".
Without any more details, I will try to answer the question as best as I can and will name all the extra assumptions that I made (and how to handle them if you can't make them).
Since there were no provided images, I created a blue and a green image, each with a black dot as the merging coordinate, using the following code:
import numpy as np
from PIL import Image, ImageDraw
def create_image_with_point(name, color, x, y, width=3):
    image = np.full((400, 400, 3), color, dtype=np.uint8)
    image[y - width:y + width, x - width:x + width] = (0, 0, 0)
    image = Image.fromarray(image, mode='RGB')
    ImageDraw.Draw(image).text((x - 15, y - 20), 'Point', (0, 0, 0))
    image.save(name)
    return image
blue = create_image_with_point('blue.png', color=(50, 50, 255), x=300, y=100)
green = create_image_with_point('green.png', color=(50, 255, 50), x=50, y=50)
This results in the following images:
Now I will make the assumption that the images do not contain an alpha layer yet (as I created them without one). Therefore I will load the images and add an alpha layer to them:
import numpy as np
from PIL import Image
blue = Image.open('blue.png')
blue.putalpha(255)
green = Image.open('green.png')
green.putalpha(255)
My following assumption is that you know the merge coordinates beforehand:
# Assuming x, y coordinates.
point_blue = (300, 100)
point_green = (50, 50)
Then you can create an empty image that can easily hold both of the images:
new_image = np.zeros((1000, 1000, 4), dtype=np.uint8)
This is a far-fetched assumption if you do not know the image sizes beforehand; in that case you will have to calculate the combined size of the two images.
Then you can place each image's dot in the center of the newly created image (in my case (500, 500)). For this you use the merging points as offsets. Then you can perform alpha blending (in each case: np.uint8(img_1*alpha + img_2*(1-alpha))) to merge the images with different opacities.
Which is in code:
def place_image(image: Image.Image, point_xy: tuple[int, int], dest: np.ndarray, alpha: float = 1.) -> np.ndarray:
    # Place the merging dot on (500, 500).
    offset_x, offset_y = 500 - point_xy[0], 500 - point_xy[1]
    # Calculate the location of the image and perform alpha blending.
    destination = dest[offset_y:offset_y + image.height, offset_x:offset_x + image.width]
    destination = np.uint8(destination * (1 - alpha) + np.array(image) * alpha)
    # Copy the 'merged' image to the destination location.
    dest[offset_y:offset_y + image.height, offset_x:offset_x + image.width] = destination
    return dest
# Add the background image blue with alpha 1
new_image = place_image(blue, point_blue, dest=new_image, alpha=1)
# Add the second image with 40% opacity
new_image = place_image(green, point_green, dest=new_image, alpha=0.4)
# Store the resulting image.
image = Image.fromarray(new_image)
image.save('result.png')
The final result will be a bigger image containing the combined images; again, you can calculate the correct bounding box so you don't have these huge areas of 'nothing' sticking out. The final result will look like this:
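As a small follow-up (my own sketch, not part of the original answer): one way to calculate that bounding box is to keep only the rows and columns where the alpha channel is non-zero, assuming the RGBA array produced above.
import numpy as np

def crop_to_content(img: np.ndarray) -> np.ndarray:
    # img is an HxWx4 RGBA array; keep only the rows/columns that contain a visible pixel
    ys, xs = np.nonzero(img[..., 3])
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# e.g. new_image = crop_to_content(new_image) just before Image.fromarray(new_image)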

How to set a colormap for PySimpleGUI Image()

Is there a way to set a colormap for sg.Image() or sg.DrawImage()? In my case I have a grayscale (single-band) thermal image that I'd like to show with a heat colormap. Short example of current code:
import PySimpleGUI as sg
layout = [[sg.Image(thermal_image_path, size=(600, 600))]]
window = sg.Window('Show image', size=(600, 600),
                   resizable=True).Layout(layout).finalize()
You could map the grey tones to a range of different Hues and keep the Saturation and Lightness constant - see the Wikipedia article on HSL:
#!/usr/bin/env python3
import numpy as np
import cv2
def heatmap(im):
    # Map range 0..255 of greys to Hues in range 60..180
    # Keep Lightness=127, Saturation=255
    # https://en.wikipedia.org/wiki/HSL_and_HSV#Hue_and_chroma
    h, w = im.shape[:2]   # take the size from the input rather than relying on globals
    H = (im.astype(np.float32) * 120./255.).astype(np.uint8) + 60
    L = np.full((h, w), 127, np.uint8)
    S = np.full((h, w), 255, np.uint8)
    HLS = cv2.merge((H, L, S))
    return cv2.cvtColor(HLS, cv2.COLOR_HLS2RGB)
# Create greyscale gradient
w, h = 256, 100
grey = np.repeat(np.arange(w,dtype=np.uint8).reshape(1,-1), h, axis=0)
cv2.imwrite('grey.png',grey) # debug only
# Apply heatmap to greyscale image
hm = heatmap(grey)
# Just for display
from PIL import Image
Image.fromarray(hm).save('result.png')
That makes the following greyscale image:
And then gets transformed to this:
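As an aside (not from the original answer): OpenCV also ships ready-made colormaps via cv2.applyColorMap, which may be enough if you don't need the custom HLS mapping. A minimal sketch, assuming the same 8-bit greyscale array grey as above:
import cv2
# COLORMAP_JET is one of several built-in heat-style maps; applyColorMap returns a BGR image
heat_bgr = cv2.applyColorMap(grey, cv2.COLORMAP_JET)
cv2.imwrite('result_jet.png', heat_bgr)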
Or you could shell out to ImageMagick with subprocess.run(), or use wand (its Python binding) to do this:
Make a 100x100 greyscale ramp - this is just setup to create an image to work with:
magick -size 100x100 gradient: grey.png
Make a 5-colour heatmap by varying the hues around the HSL circle - this only needs doing once and you can keep and reuse the image heat.png:
magick xc:"hsl(240,255,128)" xc:"hsl(180,255,128)" xc:"hsl(120,255,128)" xc:"hsl(60,255,128)" xc:"hsl(0,255,128)" +append heat.png
Map the shades of the greyscale image to our CLUT (colour lookup table) - this is the actual answer:
magick grey.png heat.png -clut result.png
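If you want to drive that last step from Python rather than a shell, here is a minimal sketch with subprocess.run (assuming the magick binary is on your PATH and grey.png / heat.png exist):
import subprocess
# Map the greys of grey.png through the heat.png colour lookup table
subprocess.run(['magick', 'grey.png', 'heat.png', '-clut', 'result.png'], check=True)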

How to Segment handwritten and printed digit without losing information in opencv?

I've written an algorithm that detects printed and handwritten digits and segments them, but while removing the outer rectangle with clear_border from the scikit-image package, the handwritten digit is lost. Any suggestions for preventing this loss of information?
Sample:
How to get all 5 characters separately?
Segmenting characters from the image -
Approach -
Threshold the image (convert it to black and white)
Perform dilation
Check that the contours are large enough
Find rectangular contours
Take the ROI and save the characters
Python Code -
# import the necessary packages
import numpy as np
import cv2
import imutils
# load the image, convert it to grayscale, and blur it to remove noise
image = cv2.imread("sample1.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)
# threshold the image
ret,thresh1 = cv2.threshold(gray ,127,255,cv2.THRESH_BINARY_INV)
# dilate the white portions
dilate = cv2.dilate(thresh1, None, iterations=2)
# find contours in the image
cnts = cv2.findContours(dilate.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)  # handles the differing return values of OpenCV 2/3/4
orig = image.copy()
i = 0
for cnt in cnts:
    # Check the area of the contour; if it is very small, ignore it
    if cv2.contourArea(cnt) < 100:
        continue
    # Filtered contours are detected
    x, y, w, h = cv2.boundingRect(cnt)
    # Taking the ROI of the contour
    roi = image[y:y+h, x:x+w]
    # Mark them on the image if you want
    cv2.rectangle(orig, (x, y), (x+w, y+h), (0, 255, 0), 2)
    # Save your contours or characters
    cv2.imwrite("roi" + str(i) + ".png", roi)
    i = i + 1
cv2.imshow("Image", orig)
cv2.waitKey(0)
First of all, I thresholded the image to convert it to black and white, so the characters appear as the white portions of the image and the background as black. Then I dilated the image to make the characters (white portions) thicker, which makes it easier to find the appropriate contours. The findContours method is then used to find the contours. Next, we check that each contour is large enough; if it is not, it is ignored (because that contour is noise). The boundingRect method is then used to find the rectangle for each contour. Finally, the detected contours are saved and drawn.
Input Image -
Threshold -
Dilated -
Contours -
Saved characters -
Problem of eroded/cropped handwritten digits:
You may solve this problem in the recognition step, or even in the image-improvement step (before recognition).
If only a very small part of the digit is cropped (as in your example image), it is enough to pad the image by 1 or 2 pixels on each side to make the segmentation easy, and a morphological filter (dilate) can improve the digit further even after padding; see the sketch after these two cases. (These solutions are available in OpenCV.)
If a big enough part of the digit is cropped, you need to add degraded/cropped patterns of digits to the training dataset used by the digit-recognition algorithm (e.g. digit 3 with all possible cropping cases, etc.).
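A minimal sketch of the padding + dilation idea above (the file name and parameter values are my own assumptions):
import cv2
import numpy as np

digit = cv2.imread('digit.png', cv2.IMREAD_GRAYSCALE)    # hypothetical cropped digit, white on black
padded = cv2.copyMakeBorder(digit, 2, 2, 2, 2,           # 2-pixel border on every side
                            cv2.BORDER_CONSTANT, value=0)
thicker = cv2.dilate(padded, np.ones((3, 3), np.uint8), iterations=1)
cv2.imwrite('digit-padded.png', thicker)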
Problem of character separation:
OpenCV offers a blob-detection algorithm that works well for this issue (choose the correct values for the concavity and convexity params).
OpenCV also offers contour detection (the Canny() function helps here), which detects the contours of your characters; then you can find a fitted bounding box around each character (OpenCV also offers cv2.approxPolyDP(contour, .., ..) for this).
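A rough sketch of the blob-detector route mentioned above (the parameter values are guesses you would need to tune):
import cv2

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 100            # ignore tiny noise blobs
params.filterByConvexity = True
params.minConvexity = 0.2       # allow fairly concave shapes such as digits
detector = cv2.SimpleBlobDetector_create(params)
gray = cv2.imread('sample1.jpg', cv2.IMREAD_GRAYSCALE)
keypoints = detector.detect(gray)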

Creating a 32 bit image and saving as tif

This is a very basic question, but I do not seem to find a good solution to it. I want to create a black (all zeros) 32-bit image with dimensions 244 x 244 and save it as a TIF. I tried some modules like PIL, but all I got was a single-channel RGB image. Any suggestions? Any links?
Thank you for the help and apologies if the question is too basic!
Hopefully this will help:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Numpy array containing 244x244 solid black image
solidBlackImage=np.zeros([244,244,3],dtype=np.uint8)
img=Image.fromarray(solidBlackImage,mode="RGB")
img.save("result.tif")
The image I get as a result can be examined as follows with ImageMagick, and seen to be a 24-bit image:
identify -verbose result.tif | more
Output
Image: result.tif
Format: TIFF (Tagged Image File Format)
Mime type: image/tiff
Class: DirectClass
Geometry: 244x244+0+0
Units: PixelsPerInch
Colorspace: sRGB
Type: Bilevel
Base type: TrueColor
Endianess: LSB
Depth: 8/1-bit
Channel depth:
Red: 1-bit
Green: 1-bit
Blue: 1-bit
...
...
Or, you can verify with tiffinfo:
tiffinfo result.tif
Output
TIFF Directory at offset 0x8 (8)
Image Width: 244 Image Length: 244
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: RGB color
Samples/Pixel: 3
Rows/Strip: 244
Planar Configuration: single image plane
Another option might be pyvips as follows, where I can specify LZW compression as well:
#!/usr/local/bin/python3
import numpy as np
import pyvips
width,height,bands=244,244,3
# Numpy array containing 244x244 solid black image
solidBlackImage=np.zeros([height,width,bands],dtype=np.uint8)
# Convert numpy to vips image and save with LZW compression
vi = pyvips.Image.new_from_memory(solidBlackImage.ravel(), width, height, bands,'uchar')
vi.write_to_file('result.tif',compression='lzw')
That results in this:
tiffinfo result.tif
Output
TIFF Directory at offset 0x3ee (1006)
Image Width: 244 Image Length: 244
Resolution: 10, 10 pixels/cm
Bits/Sample: 8
Sample Format: unsigned integer
Compression Scheme: LZW
Photometric Interpretation: RGB color
Orientation: row 0 top, col 0 lhs
Samples/Pixel: 3
Rows/Strip: 128
Planar Configuration: single image plane
Predictor: horizontal differencing 2 (0x2)
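If by 32-bit you actually mean a single 32-bit channel rather than 3x8-bit RGB (my interpretation, not something the answer above addresses), PIL can do that too with mode 'I', e.g.:
import numpy as np
from PIL import Image

# 244x244 single-channel image with 32 bits per pixel, all zeros
black32 = np.zeros((244, 244), dtype=np.int32)
Image.fromarray(black32, mode='I').save('black32.tif')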

Convert own image to MNIST's image

I am a newbie with TensorFlow.
I trained a digit-prediction model using MNIST's training data.
Then I tested the model using my own image.
It cannot predict the actual result.
The problems are:
MNIST's images need to be black and white
The images are size-normalized to fit in a 20x20 pixel box and are centered in a 28x28 image using the center of mass.
I don't want to use OpenCV
The question is: how do I shift my own handwritten digit image to the center of a 28x28 image? My own image can be any colour, and it has to be changed into a black-and-white MNIST-style image.
from PIL import Image, ImageFilter
def imageprepare(argv):
    """
    This function returns the pixel values.
    The input is a png file location.
    """
    im = Image.open(argv).convert('L')
    width = float(im.size[0])
    height = float(im.size[1])
    newImage = Image.new('L', (28, 28), (255))  # creates white canvas of 28x28 pixels
    if width > height:  # check which dimension is bigger
        # Width is bigger. Width becomes 20 pixels.
        nheight = int(round((20.0 / width * height), 0))  # resize height according to ratio width
        if (nheight == 0):  # rare case but minimum is 1 pixel
            nheight = 1
        # resize and sharpen (LANCZOS replaces the removed ANTIALIAS constant)
        img = im.resize((20, nheight), Image.LANCZOS).filter(ImageFilter.SHARPEN)
        wtop = int(round(((28 - nheight) / 2), 0))  # calculate vertical position
        newImage.paste(img, (4, wtop))  # paste resized image on white canvas
    else:
        # Height is bigger. Height becomes 20 pixels.
        nwidth = int(round((20.0 / height * width), 0))  # resize width according to ratio height
        if (nwidth == 0):  # rare case but minimum is 1 pixel
            nwidth = 1
        # resize and sharpen
        img = im.resize((nwidth, 20), Image.LANCZOS).filter(ImageFilter.SHARPEN)
        wleft = int(round(((28 - nwidth) / 2), 0))  # calculate horizontal position
        newImage.paste(img, (wleft, 4))  # paste resized image on white canvas
    # newImage.save("sample.png")
    tv = list(newImage.getdata())  # get pixel values
    # normalize pixels to 0 and 1. 0 is pure white, 1 is pure black.
    tva = [(255 - x) * 1.0 / 255.0 for x in tv]
    print(tva)
    return tva

x = imageprepare('./image.png')  # file path here
print(len(x))  # MNIST images are 28x28 = 784 pixels
I would use a numpy recipe like this one --
https://www.kaggle.com/c/digit-recognizer/forums/t/6366/normalization-and-centering-of-images-in-mnist
You could probably remap this to a pure TensorFlow pipeline, but I'm not sure that's necessary given that these are tiny images.
Also, you would get better accuracy if you went the other way -- instead of normalizing your input data, make your network robust to the lack of normalization by training on a larger dataset of randomly shifted/rescaled MNIST digits.
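For reference, here is a minimal sketch of the kind of center-of-mass centering that recipe describes (my own code, using scipy rather than the exact recipe):
import numpy as np
from scipy import ndimage

def center_by_mass(img28: np.ndarray) -> np.ndarray:
    # img28: 28x28 float array, background 0, ink > 0 (MNIST convention)
    cy, cx = ndimage.center_of_mass(img28)
    rows, cols = img28.shape
    # shift the centre of mass to the middle of the frame
    return ndimage.shift(img28, (rows / 2.0 - cy, cols / 2.0 - cx), cval=0.0)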
