misc.imread and misc.imsave changes pixel values - python-3.x

I am:
reading a grayscale image,
performing normalization of all pixel values and then
saving the image.
When I reopen the image I see different pixel values.
My code snippet:
image = misc.imread('lena.jpg')
maximum = np.max(image) # finds maximum pixel value of image
img = np.divide(image, maximum) # divide every pixel value by maximum
# scale every pixel value between 0 and 127
img_scale = np.round(img * (np.power(2,7)-1)).astype(int)
misc.imsave('lena_scaled.jpg', img_scale)
img_reopen = misc.imread('lena_scaled.jpg')
When I compare img_scale and img_reopen I get different values:
By executing np.max(img_scale), I get 127.
By executing np.max(img_reopen), I get 255.
By executing img_scale[0][0], I get [82, 82, 82].
By executing img_reopen[0][0], I get [156, 156, 156].
Question
Why do the pixel values get changed after saving the image and reopening it?

The imsave function rescales the image when it saves to disk.
misc.imsave uses bytescale under the hood to stretch the data to the full (0, 255) range.
That's why you get 255 from np.max when you reopen. Please refer to the documentation here.
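To see the effect concretely, here is a minimal numpy sketch of what bytescale-style rescaling does to your values (illustrative only; the reopened values in the question also differ slightly because JPEG compression is lossy):

```python
import numpy as np

def bytescale_like(data):
    # Stretch data linearly to the full 0..255 range, as imsave does on save
    low, high = data.min(), data.max()
    return np.round((data - low) * 255.0 / (high - low)).astype(np.uint8)

img_scale = np.array([[0, 82, 127]])   # values scaled to 0..127 as in the question
rescaled = bytescale_like(img_scale)
print(rescaled)                        # the 127 maximum is stretched back to 255
```

This is why the saved maximum comes back as 255 rather than 127.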
Follow-up:
To preserve your values without rescaling, you can use the misc.toimage function and save the result as follows:
im = misc.toimage(img_scale, high=np.max(img_scale), low=np.min(img_scale))
im.save('lena_scaled.jpg')
(Note that JPEG compression is itself lossy, so for exact round-tripping of pixel values a lossless format such as PNG is the safer choice.)
When you read 'lena_scaled.jpg' back with misc.imread, you can pass an explicit mode:
misc.imread('lena_scaled.jpg', mode='L')
where 'L' means 8-bit pixels, black and white, which should work for your grayscale image.
Hope this helps.

Related

Change width of image in Opencv using Numpy

I'm making a Python file that applies a colour filter on top of the Canny filter output in OpenCV. I do the change from grayscale to colour using the code provided below. My problem is that when I apply the concatenate method (to add the colour back, since the Canny filter output is grayscale), it cuts the width of the screen into thirds, as I show in the two screenshots taken before and after the colour is added. The code snippet shown is only the transformation from grayscale to coloured images.
What I've tried:
Tried using NumPy.tile: this wasn't the wisest attempt as it just repeated the same 1/3 of the screen twice more and didn't expand it to take up the whole screen as I had hoped.
Tried changing the image to only be from the index of 1/3 of the screen to cover the entire screen.
Tried setting the column index that is blank to equal None.
Image without the color added
Image with the color added
My code:
def convert_pixels(image, color):
    rows, cols = image.shape
    concat = np.zeros(image.shape)
    image = np.concatenate((image, concat), axis=1)
    image = np.concatenate((image, concat), axis=1)
    image = image.reshape(rows, cols, 3)
    index = image.nonzero()
    # TODO: turn color into constantly changing color wheel or shifting colors
    for i in zip(index[0], index[1], index[2]):
        color.next_color()
        image[i[0]][i[1]] = color.color
    # TODO: fix this issue below:
    # image[:, int(cols/3):cols] = None  # turns right side (glitched) into None type
    return image, color
In short, you're using concatenate on the wrong axis. axis=1 is the "columns" axis, so you're just appending two blocks of zeros to the image in the x direction, and the later reshape then scrambles the layout instead of creating channels. Since you want a three-channel image, I would just initialize color_image with three channels and leave the original grayscale image alone:
def convert_pixels(image, color):
    rows, cols = image.shape
    color_image = np.zeros((rows, cols, 3), dtype=np.uint8)
    idx = image.nonzero()
    for i in zip(*idx):
        color_image[i] = color.color
    return color_image, color
I've changed the indexing to match. I can't check this exactly since I don't know what your color object is, but I can confirm this works in terms of correctly shaping and indexing the new image.
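The difference between the two approaches can be sketched with a tiny numpy example (toy values, no real image assumed):

```python
import numpy as np

# Small grayscale stand-in for the Canny output
gray = np.array([[10, 20],
                 [30, 40]], dtype=np.uint8)
rows, cols = gray.shape

# The question's approach: widths add up, then reshape scrambles the layout
wide = np.concatenate((gray, np.zeros_like(gray), np.zeros_like(gray)), axis=1)
print(wide.shape)                    # (2, 6): three tiles side by side, not channels

# The answer's approach: a fresh 3-channel buffer indexed by the nonzero mask
color_image = np.zeros((rows, cols, 3), dtype=np.uint8)
color_image[gray.nonzero()] = (255, 0, 0)   # paint every nonzero pixel red
print(color_image.shape)             # (2, 2, 3)
```

Allocating the (rows, cols, 3) buffer up front is what gives each pixel its own colour triple, which concatenating along the width can never do.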

Change image color in PIL module

I am trying to vary the intensity of colors to obtain a different colored image...
import PIL
from PIL import Image
from PIL import ImageEnhance
from PIL import ImageDraw
# read image and convert to RGB
image=Image.open("readonly/msi_recruitment.gif")
image=image.convert('RGB')
# build a list of 9 images which have different brightnesses
enhancer=ImageEnhance.Brightness(image)
images=[]
for i in range(1, 10):
    images.append(enhancer.enhance(i/10))
# create a contact sheet from different brightnesses
first_image=images[0]
contact_sheet=PIL.Image.new(first_image.mode, (first_image.width*3,first_image.height*3))
x=0
y=0
for img in images:
    # Let's paste the current image into the contact sheet
    contact_sheet.paste(img, (x, y))
    # Now we update our X position. If it is going to be the width of the image, then we set it to 0
    # and update Y as well to point to the next "line" of the contact sheet.
    if x + first_image.width == contact_sheet.width:
        x = 0
        y = y + first_image.height
    else:
        x = x + first_image.width
# resize and display the contact sheet
contact_sheet = contact_sheet.resize((int(contact_sheet.width/2),int(contact_sheet.height/2) ))
display(contact_sheet)
But the above code only varies brightness.
Please tell me what changes I should make to vary colour intensity in this code.
I'm sorry, but I am unable to upload the picture now; consider any image you find suitable. Appreciated!
Please go to this link and answer this question instead of this one, I apologise for the inconvenience:
Pixel colour intensity
Many colour operations are best done in a colourspace such as HSV which you can get in PIL with:
hsv = rgb.convert('HSV')
You can then use split() to get 3 separate channels:
H, S, V = hsv.split()
Now you can change your colours. You seem a little woolly on what you want. If you want to change the intensity of the colours, i.e. make them less saturated and less vivid decrease the S (Saturation). If you want to change the reds to purples, i.e. change the Hues, then add something to the Hue channel. If you want to make the image brighter or darker, change the Value (V) channel.
When you have finished, merge the edited channels back together with Image.merge('HSV', (H, S, V)) and convert back to RGB with convert('RGB').
See Splitting and Merging and Processing Individual Bands on this page.
Here is an example, using this image:
Here is the basic framework to load the image, convert to HSV colourspace, split the channels, do some processing, recombine the channels and revert to RGB colourspace and save the result.
#!/usr/bin/env python3
from PIL import Image
# Load image and create HSV version
im = Image.open('colorwheel.jpg')
HSV= im.convert('HSV')
# Split into separate channels
H, S, V = HSV.split()
######################################
########## PROCESSING HERE ###########
######################################
# Recombine processed H, S and V back into a recombined image
HSVr = Image.merge('HSV', (H,S,V))
# Convert recombined HSV back to reconstituted RGB
RGBr = HSVr.convert('RGB')
# Save processed result
RGBr.save('result.png')
So, if you find the chunk labelled "PROCESSING HERE" and put code in there to divide the saturation by 2, it will make the colours less vivid:
# Desaturate the colours by halving the saturation
S = S.point(lambda p: p//2)
If, instead, we halve the brightness (V), like this:
# Halve the brightness
V=V.point(lambda p: p//2)
the result will be darker:
If, instead, we add 80 to the Hue (wrapping around, since Hue is circular), all the colours will rotate around the circle - this is called a "Hue rotation":
# Rotate Hues around the Hue circle by 80 on a range of 0..255, so around 1/3 of a circle, i.e. 120 degrees
H = H.point(lambda p: (p + 80) % 256)
which gives this:
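Since the question specifically asks about varying colour intensity, it is worth noting that PIL also ships a direct enhancer for this, ImageEnhance.Color, which scales saturation without an explicit HSV round-trip. A minimal sketch using a small in-memory test image (the image and its colour are just illustrative stand-ins for the real file):

```python
from PIL import Image, ImageEnhance

# Small in-memory RGB test image: a flat reddish colour
im = Image.new('RGB', (4, 4), (200, 50, 50))

enhancer = ImageEnhance.Color(im)
gray = enhancer.enhance(0.0)    # factor 0.0 removes all colour (grayscale)
vivid = enhancer.enhance(2.0)   # factors above 1.0 make colours more vivid

r, g, b = gray.getpixel((0, 0))
print(r == g == b)              # True: fully desaturated
```

Calling enhance with a factor between 0.0 and 1.0 desaturates progressively, exactly the "colour intensity" knob the question asks for.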

grayscale image rotation with cv2

I'm trying to crop and rotate a grayscale image.
The image is rotated successfully according to my defined dimensions, but the intensity channel seems to be zeroed out across the entire rotated image.
image - the original 32,000X1024X1 grayscale image.
i - an index from which I want to crop the image.
windowWidth - a size constant, which defines the number of pixels I wish to crop (e.g in our case, windowWidth = 5000).
cropped - the piece from the original image I wish to rotate.
code example:
cropped = image[i:i+windowWidth, :]
ch, cw = cropped.shape[:2]
rotation_matrix = cv2.getRotationMatrix2D((cw/2,ch/2),90,1)
return cv2.warpAffine(cropped,rotation_matrix, (ch,cw))
The returned 1024X5000X1 matrix contains only 0's, although the original image does not.
It is possible that you are using the width where you mean the height; if so, slicing columns instead of rows may solve your problem:
cropped = image[:, i:i+windowWidth]
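For an exact 90° rotation there is also a simpler route worth considering: np.rot90 relabels the axes without interpolation, so there is no rotation centre or output window to get wrong (with warpAffine, a non-square crop rotated about its centre can land largely outside the destination rectangle, producing the all-zero result described above). A numpy-only sketch with a tiny stand-in array:

```python
import numpy as np

# Tiny stand-in for the tall grayscale crop (real one is windowWidth x 1024)
cropped = np.arange(12, dtype=np.uint8).reshape(4, 3)

# Rotate 90 degrees counter-clockwise; pass k=-1 for clockwise instead
rotated = np.rot90(cropped)
print(cropped.shape, rotated.shape)   # (4, 3) (3, 4)
print(rotated.max())                  # content preserved, not zeroed
```

cv2.rotate(cropped, cv2.ROTATE_90_CLOCKWISE) achieves the same with OpenCV, again without a rotation matrix.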

How to convert the background of the entire image to white when both white and black backgrounds are present?

The form image contains text on different backgrounds. The image needs to be converted to a single background (here white), and hence the headings need to be converted to black.
input image:
output image:
My approach was to detect the grid (sum up the horizontal and vertical lines), crop each section of the grid into a new sub-image, then check the majority pixel colour in each and transform it accordingly. But after implementing that, the blue-background section is not detected and gets cropped like this:
So I am trying to convert the entire form image to one background first, so that I can avoid such outcomes.
Here's a different way of doing it that will cope with the "reverse video" being black, rather than relying on some colour saturation to find it.
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image, greyscale and threshold
im = cv2.imread('form.jpg',cv2.IMREAD_GRAYSCALE)
# Threshold and invert
_,thr = cv2.threshold(im,127,255,cv2.THRESH_BINARY)
inv = 255 - thr
# Perform morphological closing with square 7x7 structuring element to remove details and thin lines
SE = np.ones((7,7),np.uint8)
closed = cv2.morphologyEx(thr, cv2.MORPH_CLOSE, SE)
# DEBUG save closed image
cv2.imwrite('closed.png', closed)
# Find row numbers of dark rows
meanByRow=np.mean(closed,axis=1)
rows = np.where(meanByRow<50)
# Replace selected rows with those from the inverted image
im[rows]=inv[rows]
# Save result
cv2.imwrite('result.png',im)
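The core indexing trick used here - compute a per-row mean, pick out rows with np.where, and overwrite just those rows from another array - can be sketched with plain numpy and toy values (no real image assumed):

```python
import numpy as np

# Toy "image": two normal rows around one reverse-video (dark background) row
im  = np.array([[250, 250, 250],
                [  5,   5,   5],
                [250, 250, 250]], dtype=np.uint8)
inv = 255 - im                       # inverted copy

# Select rows whose mean is dark, then replace them with the inverted rows
meanByRow = np.mean(im, axis=1)
rows = np.where(meanByRow < 50)
im[rows] = inv[rows]
print(im[1])                         # the dark row is now light: [250 250 250]
```

Indexing with the tuple returned by np.where selects whole rows at once, so only the reverse-video bands get swapped while everything else is untouched.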
The result looks like this:
And the intermediate closed image looks like this - I artificially added a red border so you can see its extent on Stack Overflow's white background:
You can read about morphology here and an excellent description by Anthony Thyssen, here.
Here's a possible approach. Shades of blue will show up with a higher saturation than black and white if you convert to HSV colourspace, so...
convert to HSV
find mean saturation for each row and select rows where mean saturation exceeds a threshold
greyscale those rows, invert and threshold them
This approach should work if the reverse (standout) backgrounds are any colour other than black or white. It assumes you have de-skewed your images to be truly vertical/horizontal per your example.
That could look something like this in Python:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image
im = cv2.imread('form.jpg')
# Make HSV and extract S, i.e. Saturation
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
s=hsv[:,:,1]
# Save saturation just for debug
cv2.imwrite('saturation.png',s)
# Make greyscale version and inverted, thresholded greyscale version
gr = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
_,grinv = cv2.threshold(gr,127,255,cv2.THRESH_BINARY_INV)
# Find row numbers of rows with colour in them
meanSatByRow=np.mean(s,axis=1)
rows = np.where(meanSatByRow>50)
# Replace selected rows with those from the inverted, thresholded image
gr[rows]=grinv[rows]
# Save result
cv2.imwrite('result.png',gr)
The result looks like this:
The saturation image looks as follows - note that saturated colours (i.e. the blues) show up as light, everything else as black:
The greyscale, inverted image looks like this:

How do I convert an image read with cv2.imread('img.png',cv2.IMREAD_UNCHANGED) to the format of cv2.imread('img.png',cv2.IMREAD_COLOR)?

I'm trying to read an image in unchanged format, do some operations and convert it back to the colored format
im = cv2.imread(fname,cv2.IMREAD_UNCHANGED) # shape(240,240,4)
....
im2 = cv2.imread(im,cv2.IMREAD_COLOR) # required shape(240,240,3)
But it looks like I can't feed the result (a numpy array) into the second imread, since imread expects a filename.
So currently I've created a temporary image after the operations and reading that value to get the required im2 value.
im = cv2.imread(fname,cv2.IMREAD_UNCHANGED) # shape(240,240,4)
....
cv2.imwrite('img.png',im)
im2 = cv2.imread('img.png',cv2.IMREAD_COLOR) # required shape(240,240,3)
However, I would like to avoid the step of creating a temporary image. How would I achieve the same result with a better approach?
OpenCV has a function for colour conversion, cvtColor:
https://docs.opencv.org/3.1.0/de/d25/imgproc_color_conversions.html
im2 = cv2.cvtColor(im, <conversion code>)
You should figure out the conversion code yourself, based on the image format you have. Since IMREAD_UNCHANGED gave you a 4-channel (240,240,4) array, it would most likely be cv2.COLOR_BGRA2BGR.
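Assuming the fourth channel is alpha (which is what IMREAD_UNCHANGED typically yields for a PNG with transparency), the conversion amounts to discarding that channel; here is a numpy-only sketch of what cv2.COLOR_BGRA2BGR does to the shape, using a synthetic array in place of a real file:

```python
import numpy as np

# Stand-in for a (240, 240, 4) BGRA image read with IMREAD_UNCHANGED
im = np.zeros((240, 240, 4), dtype=np.uint8)
im[..., 3] = 255                 # fully opaque alpha channel

# BGRA -> BGR: drop the alpha plane
im2 = im[:, :, :3]
print(im2.shape)                 # (240, 240, 3)
```

One practical difference: the slice returns a view on the original array, while cv2.cvtColor returns a new contiguous array, which some downstream OpenCV calls require; np.ascontiguousarray(im[:, :, :3]) covers that case.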