Python add padding to images that need it - python-3.x

I have a bunch of images that aren't equal size, and where some fit entirely to the frame and some have blank padding.
I would like to know how I can resize each of them to be the same image size and to have roughly the same border size.
Currently I am doing
from PIL import Image
from glob import glob

images = glob('src/assets/emotes/medals/**/*.png', recursive=True)
for image_path in images:
    im = Image.open(image_path).convert('RGBA')
    im = im.resize((100, 100))
    im.save(image_path)
but this doesn't account for a possible border.
Image 1 - 101 x 101
Image 2 - 132 x 160
Desired result - 100 x 100
Images aren't always bigger than (100, 100), so I will need to use resize.
Alternatively, I could first remove the transparent PNG border from all images and then resize, which might be easier.

As shown in Crop a PNG image to its minimum size, im.getbbox() gives you the bounding box of the non-transparent content, so im.crop(im.getbbox()) returns the original image without its transparent border.
Documentation : Pillow (PIL Fork)
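Putting the two ideas together, here is a minimal sketch: crop to the non-transparent content, scale to fit, then centre on a fixed canvas. The function name and the 10-pixel border width are my own illustrative choices, not from the question.

```python
from PIL import Image

def normalize(im, size=100, border=10):
    """Crop transparent padding, then centre the content on a
    fixed-size transparent canvas with a uniform border."""
    im = im.convert('RGBA')
    bbox = im.getbbox()                  # bbox of non-transparent pixels
    if bbox:
        im = im.crop(bbox)
    inner = size - 2 * border            # space left for the content
    scale = inner / max(im.width, im.height)  # shrinks OR enlarges
    im = im.resize((max(1, round(im.width * scale)),
                    max(1, round(im.height * scale))))
    canvas = Image.new('RGBA', (size, size), (0, 0, 0, 0))
    canvas.paste(im, ((size - im.width) // 2, (size - im.height) // 2), im)
    return canvas
```

You could then loop over the glob results as in the original snippet and save `normalize(Image.open(p))` back to each path.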

Related

Find all images that are over 50% black - Python

I am trying to write code that will iterate over a directory of images and tell me which images are over 50% black - so I can get rid of those. This is what I have, but it isn't returning any results:
from PIL import Image
import glob

images = glob.glob('/content/drive/MyDrive/cropped_images/*.png')  # find all filenames specified by pattern
for image in images:
    with open(image, 'rb') as file:
        img = Image.open(file)
        pixels = list(img.getdata())  # get the pixels as a flattened sequence
        black_thresh = (50, 50, 50)
        nblack = 0
        for pixel in pixels:
            if pixel < black_thresh:
                nblack += 1
        n = len(pixels)
        if (nblack / float(n)) > 0.5:
            print(file)
Hannah, tuple comparison in Python might not be what you expect (see How does tuple comparison work in Python?): tuples are compared element by element, with later elements used only to break ties. Perhaps your definition of "over 50% black" is not what you have mapped to code?
I ran the above code on a dark image and my file printed just fine. Example PNG: https://www.nasa.gov/mission_pages/chandra/news/black-hole-image-makes-history .
Recommendations: define "over 50% black" as something you can express directly in code, pull n = len(pixels) out of the for loop, and append files to a list when they satisfy your criterion.
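Following those recommendations, here is one sketch of an explicit definition of "over 50% black": a pixel counts as black when every RGB channel is below the asker's threshold of 50. The function name and this particular definition are my own; adjust to taste.

```python
from PIL import Image

def is_mostly_black(img, threshold=50, fraction=0.5):
    """Return True if more than `fraction` of the pixels are darker
    than `threshold` on all three RGB channels."""
    pixels = list(img.convert('RGB').getdata())
    nblack = sum(1 for r, g, b in pixels
                 if r < threshold and g < threshold and b < threshold)
    return nblack / len(pixels) > fraction
```

In the directory loop you would then collect the matching paths, e.g. `dark_files = [p for p in images if is_mostly_black(Image.open(p))]`.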

How can I make the text of a photo list clearer?

I have about a hundred photos that aren't very sharp, and I'd like to make them sharper.
So I wrote a Python script that starts with just one of them. I have tried PIL, OpenCV and OCR readers to read the text from the images.
# External libraries used for
# Image IO
from PIL import Image
# Morphological filtering
from skimage.morphology import opening
from skimage.morphology import disk
# Data handling
import numpy as np
# Connected component filtering
import cv2
black = 0
white = 255
threshold = 160
# Open input image in grayscale mode and get its pixels.
img = Image.open("image3.png").convert("LA")
pixels = np.array(img)[:,:,0]
# Remove pixels above threshold
pixels[pixels > threshold] = white
pixels[pixels < threshold] = black
# Morphological opening
blobSize = 1 # Select the maximum radius of the blobs you would like to remove
structureElement = disk(blobSize) # you can define different shapes, here we take a disk shape
# We need to invert the image such that black is background and white foreground to perform the opening
pixels = np.invert(opening(np.invert(pixels), structureElement))
# Create and save new image.
newImg = Image.fromarray(pixels).convert('RGB')
newImg.save("newImage1.PNG")
# Find the connected components (black objects in your image)
# Because the function searches for white connected components on a black background, we need to invert the image
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(np.invert(pixels), connectivity=8)
# For every connected component in your image, you can obtain the number of pixels from the stats variable in the last
# column. We remove the first entry from sizes, because this is the entry of the background connected component
sizes = stats[1:,-1]
nb_components -= 1
# Define the minimum size (number of pixels) a component should consist of
minimum_size = 100
# Create a new image
newPixels = np.ones(pixels.shape)*255
# Iterate over all components in the image, only keep the components larger than minimum size
for i in range(1, nb_components):
    if sizes[i] > minimum_size:
        newPixels[output == i+1] = 0
# Create and save new image.
newImg = Image.fromarray(newPixels).convert('RGB')
newImg.save("newImage2.PNG")
But it returns:
I would prefer the result not to be black and white; the best output would be one that upscales both the text and the image.
As mentioned in the comments, the quality is very bad. This is not an easy problem. However, there may be a couple of tricks you can try.
This looks like it is due to some anti-aliasing that has been applied to the image/scan. I would try reversing the anti-aliasing if possible. As described in the post, the steps would be similar to this:
Apply low pass filter
difference = original_image - low_pass_image
sharpened_image = original_image + alpha*difference
Code may look something like this:
from skimage.filters import gaussian

alpha = 1  # Sharpening factor
low_pass_image = gaussian(original_image, sigma=1)
difference = original_image - low_pass_image
sharpened_image = original_image + alpha * difference
Also, scikit image has an implementation of an unsharp mask as well as the wiener filter.
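The three steps above can also be sketched in plain NumPy, here with a simple box blur standing in for the low-pass filter (a Gaussian works just as well); the function name, the box blur, and the clipping to uint8 range are my own choices:

```python
import numpy as np

def sharpen(image, alpha=1.0, radius=1):
    """Unsharp-mask style sharpening: subtract a low-pass version of
    the image, then add the scaled difference back."""
    img = image.astype(float)
    # Box blur as the low-pass filter: average over a (2r+1)x(2r+1) window
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode='edge')
    low_pass = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            low_pass += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low_pass /= k * k
    difference = img - low_pass
    return np.clip(img + alpha * difference, 0, 255).astype(np.uint8)
```

Flat regions are unchanged (the difference is zero there), while edges and small details get boosted.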

Change image color in PIL module

I am trying to vary the intensity of colors to obtain a different colored image...
import PIL
from PIL import Image
from PIL import ImageEnhance
from PIL import ImageDraw

# read image and convert to RGB
image = Image.open("readonly/msi_recruitment.gif")
image = image.convert('RGB')

# build a list of 9 images which have different brightnesses
enhancer = ImageEnhance.Brightness(image)
images = []
for i in range(1, 10):
    images.append(enhancer.enhance(i/10))

# create a contact sheet from different brightnesses
first_image = images[0]
contact_sheet = PIL.Image.new(first_image.mode, (first_image.width*3, first_image.height*3))
x = 0
y = 0
for img in images:
    # Paste the current image into the contact sheet
    contact_sheet.paste(img, (x, y))
    # Update the X position. If it has reached the width of the sheet,
    # reset it to 0 and move Y down to the next "line" of the contact sheet.
    if x + first_image.width == contact_sheet.width:
        x = 0
        y = y + first_image.height
    else:
        x = x + first_image.width

# resize and display the contact sheet
contact_sheet = contact_sheet.resize((int(contact_sheet.width/2), int(contact_sheet.height/2)))
display(contact_sheet)
But the above code only varies brightness.
Please tell me what changes I should make to vary colour intensity instead.
I'm sorry, but I am unable to upload the picture right now; consider any image you find suitable and help me out. Appreciated!
Please go to this link and answer that question instead of this one, I apologise for the inconvenience:
Pixel colour intensity
Many colour operations are best done in a colourspace such as HSV which you can get in PIL with:
hsv = rgb.convert('HSV')
You can then use split() to get 3 separate channels:
H, S, V = hsv.split()
Now you can change your colours. You seem a little woolly on what you want. If you want to change the intensity of the colours, i.e. make them less saturated and less vivid decrease the S (Saturation). If you want to change the reds to purples, i.e. change the Hues, then add something to the Hue channel. If you want to make the image brighter or darker, change the Value (V) channel.
When you have finished, merge the edited channels back together with Image.merge('HSV', (H, S, V)) and convert back to RGB with convert('RGB').
See Splitting and Merging and Processing Individual Bands on this page.
Here is an example, using this image:
Here is the basic framework to load the image, convert to HSV colourspace, split the channels, do some processing, recombine the channels and revert to RGB colourspace and save the result.
#!/usr/bin/env python3
from PIL import Image
# Load image and create HSV version
im = Image.open('colorwheel.jpg')
HSV= im.convert('HSV')
# Split into separate channels
H, S, V = HSV.split()
######################################
########## PROCESSING HERE ###########
######################################
# Recombine processed H, S and V back into a recombined image
HSVr = Image.merge('HSV', (H,S,V))
# Convert recombined HSV back to reconstituted RGB
RGBr = HSVr.convert('RGB')
# Save processed result
RGBr.save('result.png')
So, if you find the chunk labelled "PROCESSING HERE" and put code in there to divide the saturation by 2, it will make the colours less vivid:
# Desaturate the colours by halving the saturation
S = S.point(lambda p: p//2)
If, instead, we halve the brightness (V), like this:
# Halve the brightness
V=V.point(lambda p: p//2)
the result will be darker:
If, instead, we add 80 to the Hue, all the colours will rotate around the circle - this is called a "Hue rotation":
# Rotate Hues around the Hue circle by 80 on a range of 0..255, i.e. about 1/3 of a circle, or 120 degrees
H = H.point(lambda p: (p + 80) % 256)
which gives this:
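As an aside, for the original goal of varying colour intensity in steps, PIL's ImageEnhance.Color scales saturation directly, exactly analogous to the Brightness enhancer in the question's code; a sketch (the function name and the 0.25 step factor are my own choices):

```python
from PIL import Image, ImageEnhance

def colour_variants(im, steps=9):
    """Return `steps` copies of `im` with saturation scaled from faint
    (factor 0.25) up to extra-vivid (factor 0.25 * steps)."""
    enhancer = ImageEnhance.Color(im.convert('RGB'))
    return [enhancer.enhance(0.25 * k) for k in range(1, steps + 1)]
```

A factor of 0 gives a fully desaturated (greyscale) image, 1.0 returns the original, and values above 1.0 boost the colours, so these variants could be pasted into the contact sheet just like the brightness ones.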

grayscale image rotation with cv2

I'm trying to crop and rotate a grayscale image.
The image rotates to the dimensions I defined, but the intensity channel seems to be zeroed across the entire rotated image.
image - the original 32,000X1024X1 grayscale image.
i - an index from which I want to crop the image.
windowWidth - a size constant, which defines the number of pixels I wish to crop (e.g in our case, windowWidth = 5000).
cropped - the piece from the original image I wish to rotate.
code example:
cropped = image[i:i+windowWidth, :]
ch, cw = cropped.shape[:2]
rotation_matrix = cv2.getRotationMatrix2D((cw/2,ch/2),90,1)
return cv2.warpAffine(cropped,rotation_matrix, (ch,cw))
The returned 1024X5000X1 matrix contains only 0's, although the original image does not.
It is possible that you are using width instead of height; if so, this would solve your problem:
cropped = image[:, i:i+windowWidth]
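Another possibility, not from the original answer: for an exact 90-degree rotation you can sidestep warpAffine entirely. Rotating a tall crop about its old centre while handing warpAffine a swapped-size canvas can push all the content off-canvas, which would explain an all-zero result. np.rot90 resizes the canvas for you; a sketch (the function name is mine):

```python
import numpy as np

def crop_and_rotate(image, i, window_width):
    """Crop `window_width` rows starting at row `i`, then rotate the
    crop 90 degrees counter-clockwise. np.rot90 swaps the axes, so no
    pixels can land outside the output and be zeroed."""
    cropped = image[i:i + window_width, :]
    return np.rot90(cropped)
```

For the 32000 x 1024 image with windowWidth = 5000, this returns a 1024 x 5000 array containing the rotated pixels rather than zeros.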

How to convert the background of the entire image to white when both white and black backgrounds are present?

The form image contains text on different backgrounds. The image needs to be converted to a single background (here white), and hence the headings need to be converted to black.
input image :
output image:
My approach was to detect the grid (sum up the horizontal and vertical lines), crop each section of the grid into a new sub-image, and then check the majority pixel colour and transform accordingly. But after implementing that, the blue background is not detected and the crop comes out like:
So I am trying to convert the entire form image into one background so that I can avoid such outcomes.
Here's a different way of doing it that will cope with the "reverse video" being black, rather than relying on some colour saturation to find it.
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image, greyscale and threshold
im = cv2.imread('form.jpg',cv2.IMREAD_GRAYSCALE)
# Threshold and invert
_,thr = cv2.threshold(im,127,255,cv2.THRESH_BINARY)
inv = 255 - thr
# Perform morphological closing with square 7x7 structuring element to remove details and thin lines
SE = np.ones((7,7),np.uint8)
closed = cv2.morphologyEx(thr, cv2.MORPH_CLOSE, SE)
# DEBUG save closed image
cv2.imwrite('closed.png', closed)
# Find row numbers of dark rows
meanByRow=np.mean(closed,axis=1)
rows = np.where(meanByRow<50)
# Replace selected rows with those from the inverted image
im[rows]=inv[rows]
# Save result
cv2.imwrite('result.png',im)
The result looks like this:
And the intermediate closed image looks like this - I artificially added a red border so you can see its extent on Stack Overflow's white background:
You can read about morphology here and an excellent description by Anthony Thyssen, here.
Here's a possible approach. Shades of blue will show up with a higher saturation than black and white if you convert to HSV colourspace, so...
convert to HSV
find mean saturation for each row and select rows where mean saturation exceeds a threshold
greyscale those rows, invert and threshold them
This approach should work if the reverse (standout) backgrounds are any colour other than black or white. It assumes you have de-skewed your images to be truly vertical/horizontal per your example.
That could look something like this in Python:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image
im = cv2.imread('form.jpg')
# Make HSV and extract S, i.e. Saturation
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
s=hsv[:,:,1]
# Save saturation just for debug
cv2.imwrite('saturation.png',s)
# Make greyscale version and inverted, thresholded greyscale version
gr = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
_,grinv = cv2.threshold(gr,127,255,cv2.THRESH_BINARY_INV)
# Find row numbers of rows with colour in them
meanSatByRow=np.mean(s,axis=1)
rows = np.where(meanSatByRow>50)
# Replace selected rows with those from the inverted, thresholded image
gr[rows]=grinv[rows]
# Save result
cv2.imwrite('result.png',gr)
The result looks like this:
The saturation image looks as follows - note that saturated colours (i.e. the blues) show up as light, everything else as black:
The greyscale, inverted image looks like this:
