grayscale image rotation with cv2 - python-3.x

I'm trying to crop and rotate a grayscale image.
The image is rotated to my defined dimensions, but the intensity channel seems to get zeroed out across the entire rotated image.
image - the original 32000x1024x1 grayscale image.
i - the index from which I want to crop the image.
windowWidth - a size constant defining the number of pixels I wish to crop (e.g. in our case, windowWidth = 5000).
cropped - the piece of the original image I wish to rotate.
code example:
cropped = image[i:i+windowWidth, :]
ch, cw = cropped.shape[:2]
rotation_matrix = cv2.getRotationMatrix2D((cw/2,ch/2),90,1)
return cv2.warpAffine(cropped,rotation_matrix, (ch,cw))
The returned 1024X5000X1 matrix contains only 0's, although the original image does not.

It is possible that you are using width where you mean height; if so, this may solve your problem:
cropped = image[:, i:i+windowWidth]


Python add padding to images that need it

I have a bunch of images that aren't equal in size; some fill the frame entirely and some have blank padding.
I would like to know how I can resize them all to the same image size with roughly the same border size.
Currently I am doing
from PIL import Image
from glob import glob
images = glob('src/assets/emotes/medals/**/*.png', recursive=True)
for image_path in images:
    im = Image.open(image_path).convert('RGBA')
    im = im.resize((100, 100))
    im.save(image_path)
but this doesn't account for a possible border.
Image 1 - 101 x 101
Image 2 - 132 x 160
Desired result - 100 x 100
Images aren't always bigger than (100, 100), so I will need to use resize.
I could also remove the PNG border from all images first and then resize, which might be easier.
Taken from Crop a PNG image to its minimum size: im.getbbox() returns the bounding box of the non-transparent content, so cropping to it removes the transparent background.
Documentation: Pillow (PIL Fork)
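Putting the pieces together, one possible pipeline (a sketch, not the poster's exact code; the 90/100 sizes are assumptions to leave a ~5px border) is: trim the transparent border with getbbox(), scale to fit a smaller inner box, then centre on a fixed-size transparent canvas:

```python
from PIL import Image

def normalize(im, size=100, inner=90):
    im = im.convert('RGBA')
    bbox = im.getbbox()                  # bbox of non-transparent pixels
    if bbox:
        im = im.crop(bbox)               # drop the existing padding
    scale = min(inner / im.width, inner / im.height)  # shrinks OR enlarges
    im = im.resize((max(1, round(im.width * scale)),
                    max(1, round(im.height * scale))))
    canvas = Image.new('RGBA', (size, size), (0, 0, 0, 0))
    canvas.paste(im, ((size - im.width) // 2, (size - im.height) // 2), im)
    return canvas

# Demo on a synthetic 132x160 image with a transparent border:
src = Image.new('RGBA', (132, 160))
src.paste((255, 0, 0, 255), (20, 30, 110, 130))  # opaque 90x100 region
out = normalize(src)
print(out.size)   # (100, 100)
```

Unlike im.thumbnail(), the manual scale factor here also enlarges images that are smaller than the target, which the question needs.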

Image Processing to remove noise from image

I am using OpenCV to do image processing on an image.
I would like to convert my image to black and white only, but there is some grey (noise) that I would like to remove.
Here is my image:
I would like to have an image in black and white only, to get the text clearly:
"
PARTICIPATION -3.93 C
Redevance Patronale -1.92 C
"
I have tried to change the threshold of the image with OpenCV but without success
#grayscale
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
#binary
ret,thresh = cv2.threshold(gray,175,255,cv2.THRESH_BINARY_INV)
I think you mean to remove the noise from the image. For this you can choose a lower threshold value. I chose 64, using ret,thresh = cv2.threshold(img,64,255,cv2.THRESH_BINARY), and got this result:
But this is not very clear and the letters are thin, so we apply cv2.erode. This gives:
Now we perform cv2.bitwise_or between the original image and the eroded image to obtain a noise-free image.
The full code used is
import cv2
import numpy as np

img = cv2.imread('grayed.png', 0)
ret,thresh = cv2.threshold(img,64,255,cv2.THRESH_BINARY)
kernel = np.ones((5, 5), np.uint8)
erode = cv2.erode(thresh, kernel, iterations = 1)
result = cv2.bitwise_or(img, erode)
Your colour conversion produces an image with grey colours that is still stored as RGB (according to GIMP it is still an RGB image). The OpenCV documentation says the input must be greyscale, not colour. Even though your image looks grey, it is still a colour image; a colour image that happens to contain only grey values is not the same as a single-channel greyscale image.
This is really a duplicate of:
Converting an OpenCV Image to Black and White

How to convert the background of the entire image to white when both white and black backgrounds are present?

The form image contains text on different backgrounds. The image needs to be converted to a single background (here white), and the headings therefore need to be converted to black text.
input image :
output image:
My approach was to detect the grid (horizontal and vertical lines, summed), crop each grid section into a new sub-image, and then check the majority pixel colour and transform accordingly. But after implementing that, the blue-background section is not detected and gets cropped like:
So I am trying to convert the entire form image to one background, so that I can avoid such outcomes.
Here's a different way of doing it that will cope with the "reverse video" being black, rather than relying on some colour saturation to find it.
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image, greyscale and threshold
im = cv2.imread('form.jpg',cv2.IMREAD_GRAYSCALE)
# Threshold and invert
_,thr = cv2.threshold(im,127,255,cv2.THRESH_BINARY)
inv = 255 - thr
# Perform morphological closing with square 7x7 structuring element to remove details and thin lines
SE = np.ones((7,7),np.uint8)
closed = cv2.morphologyEx(thr, cv2.MORPH_CLOSE, SE)
# DEBUG save closed image
cv2.imwrite('closed.png', closed)
# Find row numbers of dark rows
meanByRow=np.mean(closed,axis=1)
rows = np.where(meanByRow<50)
# Replace selected rows with those from the inverted image
im[rows]=inv[rows]
# Save result
cv2.imwrite('result.png',im)
The result looks like this:
And the intermediate closed image looks like this - I artificially added a red border so you can see its extent on Stack Overflow's white background:
You can read about morphology here and an excellent description by Anthony Thyssen, here.
Here's a possible approach. Shades of blue will show up with a higher saturation than black and white if you convert to HSV colourspace, so...
convert to HSV
find mean saturation for each row and select rows where mean saturation exceeds a threshold
greyscale those rows, invert and threshold them
This approach should work if the reverse (standout) backgrounds are any colour other than black or white. It assumes you have de-skewed your images to be truly vertical/horizontal per your example.
That could look something like this in Python:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image
im = cv2.imread('form.jpg')
# Make HSV and extract S, i.e. Saturation
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
s=hsv[:,:,1]
# Save saturation just for debug
cv2.imwrite('saturation.png',s)
# Make greyscale version and inverted, thresholded greyscale version
gr = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
_,grinv = cv2.threshold(gr,127,255,cv2.THRESH_BINARY_INV)
# Find row numbers of rows with colour in them
meanSatByRow=np.mean(s,axis=1)
rows = np.where(meanSatByRow>50)
# Replace selected rows with those from the inverted, thresholded image
gr[rows]=grinv[rows]
# Save result
cv2.imwrite('result.png',gr)
The result looks like this:
The saturation image looks as follows - note that saturated colours (i.e. the blues) show up as light, everything else as black:
The greyscale, inverted image looks like this:

Creating a greyscale image with a Matrix in python

I'm Marius, a first-year maths student.
We have received a team assignment to implement a Fourier transform, and we chose to try to encode the transformation of an image into a JPEG image.
To simplify the problem for ourselves, we chose to do it only for greyscale pictures.
This is my code so far:
from PIL import Image
import numpy as np
import sympy as sp
#
#ALL INFORMATION, NO CALCULATIONS
img = Image.open('mario.png')
img = img.convert('L') # convert to monochrome picture
img.show() #opens the picture
pixels = list(img.getdata())
print(pixels) #to see if we got the pixel numeric values correct
grootte = list(img.size)
print(len(pixels)) #to check if the amount of pixels is correct.
kolommen, rijen = img.size
print("the number of columns is",kolommen,"the number of rows is",rijen)
#end of the information section
pixelMatrix = []
while pixels != []:
    pixelMatrix.append(pixels[:kolommen])
    pixels = pixels[kolommen:]
print(pixelMatrix)
pixelMatrix = np.array(pixelMatrix)
print(pixelMatrix.shape)
Now the problem shows itself in the last three lines. I want to convert the matrix of values back into an image, with the matrix 'pixelMatrix' as its data.
I've tried many things, but this seems to be the most obvious way:
im2 = Image.new('L',(kolommen,rijen))
im2.putdata(pixels)
im2.show()
When I use this, it just gives me a black image of the correct dimensions.
Any ideas on how to get back the original picture, starting from the values in my matrix pixelMatrix?
Post Scriptum: We still have to implement the transformation itself, but that would be useless unless we are sure we can convert a matrix back into a greyscale image.
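One likely cause (an assumption, since the post doesn't print it): the while loop above consumes `pixels`, so `im2.putdata(pixels)` receives an empty list and the new image stays black. Building the image directly from the NumPy matrix avoids that; a sketch on stand-in data:

```python
import numpy as np
from PIL import Image

# Stand-in for the pixelMatrix built from the real picture.
pixelMatrix = np.random.randint(0, 256, (64, 48)).astype(np.uint8)

im2 = Image.fromarray(pixelMatrix)   # 2D uint8 array -> mode 'L' image
print(im2.size)                      # (48, 64): PIL reports (width, height)
```

Alternatively, keep an untouched copy of the flat pixel list (e.g. flat = list(img.getdata()) before the loop) and call im2.putdata(flat) instead of the emptied pixels.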

Opencv rectangle gets drawn on original image (unable to remove)

I used the cv2.rectangle() function to draw a rectangle around my image. This works fine. However, I realised that the rectangle is also drawn on my original copy. (I made two copies of the same image from my original image file.) Can someone please point out where I went wrong?
Here's the code (left out chunks to reduce length):
img=cv2.imread(file,1)
height, width = img.shape[:2]
max_height = 800
max_width = 800
#some image resizing function
#...
resized=img
originalImg=resized
height0, width0=resized.shape[:2]
image=resized
#image manipulation on (image)
#...
imageRect=cv2.rectangle(image,(xCoor,yCoor),(xCoor+width, yCoor+height), (255,0,255), 3)
#This shows the drawn on rectangle fine
cv2.imshow("imageRect", imageRect)
#problem is, this originalImg still has the rectangle drawn on it
cv2.imshow("originalImg", originalImg)
