Why do assigned RGB values get changed automatically? - python-3.x

First, consider this code:
from PIL import Image
im = Image.open("best_tt.jpg")
im2 = Image.new("RGB", im.size, (255,255,255))
b = 200
for i in range(im.size[0]):
    for j in range(im.size[1]):
        rgb = im.getpixel((i, j))
        if rgb[0] <= b and rgb[1] <= b and rgb[2] <= b:
            im2.putpixel((i, j), (0, 0, 0))
        else:
            im2.putpixel((i, j), (0, rgb[1], rgb[2]))
im2.save("tmp.jpg")
What I am doing is simply removing the RED component from each pixel (other than black pixels: the if statement checks for pixels that look black). In other words, I'm converting the given image to a yellow scale (since G+B = Y).
That way, every pixel in the new image should have an RGB value of the form (0, G, B).
However, certain pixels of the new image returned values like:
(1, 255, 203)
(3, 205, 243)
(16, 242, 47)
though some pixels did keep a red component of 0.
What causes this arbitrary adjustment of the RGB values?

The save() function infers the output format from the file extension, in this case JPEG, which is written with a default quality of 75. JPEG is a lossy format, so the way the file is encoded and compressed can change pixel values after the fact.
See the Pillow documentation on image file formats (which lists the save() parameters):
https://pillow.readthedocs.io/en/3.1.x/handbook/image-file-formats.html
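A quick way to verify this is a minimal sketch based on the code above (the file name is just a placeholder): save the result in a lossless format such as PNG and read the pixels back; the red component should then stay exactly 0.
from PIL import Image
# im2 is the image built in the question's code above
im2.save("tmp.png")               # PNG is lossless, so pixel values are preserved exactly
check = Image.open("tmp.png")
print(check.getpixel((0, 0)))     # should print a tuple of the form (0, G, B)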

Related

Image intensity distribution changes during opencv warp affine

I am using Python 3.8.5 and OpenCV 4.5.1 on Windows 7.
I am using the following code to rotate images.
def pad_rotate(image, ang, pad, pad_value=0):
    (h, w) = image.shape[:2]
    # create a larger image and paste the original image at the center;
    # this is done to avoid any cropping during rotation
    nH, nW = h + 2*pad, w + 2*pad  # new height and width
    cY, cX = nW//2, nH//2          # center of the new image
    # create new image filled with pad_value
    newImg = np.zeros((h + 2*pad, w + 2*pad), dtype=image.dtype)
    newImg[:, :] = pad_value
    # paste the original image at the center
    newImg[pad:pad+h, pad:pad+w] = image
    # rotate CCW (for positive angles)
    M = cv2.getRotationMatrix2D(center=(cX, cY), angle=ang, scale=1.0)
    rotImg = cv2.warpAffine(newImg, M, (nW, nH), cv2.INTER_CUBIC,
                            borderMode=cv2.BORDER_CONSTANT, borderValue=pad_value)
    return rotImg
My issue is that after the rotation, image intensity distribution is different than original.
The following part of the question was edited to clarify the issue:
img = np.random.rand(500,500)
Rimg = pad_rotate(img, 15, 300, np.nan)
Here is what these images look like:
Their intensities have clearly shifted:
np.percentile(img, [20, 50, 80])
# prints array([0.20061218, 0.50015415, 0.79989986])
np.nanpercentile(Rimg, [20, 50, 80])
# prints array([0.32420028, 0.50031483, 0.67656537])
Can someone please tell me how to avoid this normalization?
The averaging effect of the interpolation changes the distribution...
Note:
There is a mistake in your code sample (not related to the percentiles).
The 4th positional argument of warpAffine is dst, not flags.
Replace cv2.warpAffine(newImg, M, (nW, nH), cv2.INTER_CUBIC with:
cv2.warpAffine(newImg, M, (nW, nH), flags=cv2.INTER_CUBIC
I tried to simplify the code sample that reproduces the problem.
The code sample uses linear interpolation, 1 degree rotation, and no NaN values.
import numpy as np
import cv2
img = np.random.rand(1000, 1000)
M = cv2.getRotationMatrix2D((img.shape[1]//2, img.shape[0]//2), 1, 1) # Rotate by 1 degree
Rimg = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_LINEAR) # Use Linear interpolation
Rimg = Rimg[20:-20, 20:-20] # Crop the part without the margins.
print(np.percentile(img, [20, 50, 80])) #[0.20005696 0.49990526 0.79954818]
print(np.percentile(Rimg, [20, 50, 80])) #[0.32244747 0.4998595 0.67698961]
cv2.imshow('img', img)
cv2.imshow('Rimg', Rimg)
cv2.waitKey()
cv2.destroyAllWindows()
When we disable the interpolation (nearest-neighbor resampling does no averaging),
Rimg = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_NEAREST)
The percentiles are: [0.19943713 0.50004768 0.7995525 ].
Simpler example for showing that averaging elements changes the distribution:
A = np.random.rand(10000000)
B = (A[0:-1:2] + A[1::2])/2 # Averaging every two elements.
print(np.percentile(A, [20, 50, 80])) # [0.19995436 0.49999472 0.80007232]
print(np.percentile(B, [20, 50, 80])) # [0.31617922 0.50000145 0.68377251]
Why does interpolation skew the distribution toward the median?
I am not a mathematician.
I am sure you can get a better explanation...
Here is an intuitive example:
Assume there is a list of values with a uniform distribution in the range [0, 1].
Assume there is a zero value in the list:
[0.2, 0.7, 0, 0.5... ]
After averaging every two sequential elements, the probability of getting a zero element in the output list is very small (only two sequential zeros result in a zero).
The example shows that averaging pushes the extreme values toward the center.
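A rough numerical sketch of the same effect: the mean of two independent Uniform(0, 1) samples follows a triangular distribution peaked at 0.5, so values near 0 or 1 become much rarer after averaging.
import numpy as np

A = np.random.rand(1_000_000)
B = (A[0:-1:2] + A[1::2]) / 2     # mean of two independent Uniform(0, 1) samples
print(np.mean(A < 0.05))          # about 0.05
print(np.mean(B < 0.05))          # about 0.005, i.e. 0.1**2 / 2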

Threshold using OpenCV?

As the questions states, I want to apply a two-way Adaptive Thresholding technique to my image. That is to say, I want to find each pixel value in the neighborhood and set it to 255 if it is less than or greater than the mean of the neighborhood minus a constant c.
Take this image, for example, as the neighborhood of pixels. The desired pixel areas to keep are the darker areas on the upper halves of the third and sixth squares (counting left-to-right and top-to-bottom), as well as the upper halves of the eighth and twelfth squares.
Obviously, this all depends on the set constant value, but ideally areas that are significantly different than the mean pixel value of the neighborhood will be kept. I can worry about the tuning myself though.
Your question and comment are contradictory: keep everything (significantly) brighter/darker than the mean (+/- constant) of the neighbourhood (question) vs. keep everything within mean +/- constant (comment). I assume the first one to be correct, and I'll try to give an answer.
Using cv2.adaptiveThreshold is certainly useful; parameterization might be tricky, especially given the example image. First, let's have a look at the output:
We see that the intensity value range in the given image is small. The upper halves of the third and sixth squares don't really differ from their neighbourhood, so it's quite unlikely to find a proper difference there. The upper halves of squares #8 and #12 (and also the lower half of square #10) are more likely to be found.
The top row now shows some more "global" parameters (blocksize = 151, c = 25), the bottom row more "local" parameters (blocksize = 51, c = 5). The middle column is everything darker than the neighbourhood (with respect to the parameters), the right column is everything brighter than the neighbourhood. We see that in the more "global" case, we get the proper upper halves, but there are mostly no "significant" darker areas. Looking at the more "local" case, we see some darker areas, but we won't find the complete upper/lower halves in question. That's just because of how the different triangles are arranged.
On the technical side: You need two calls of cv2.adaptiveThreshold, one using the cv2.THRESH_BINARY_INV mode to find everything darker and one using the cv2.THRESH_BINARY mode to find everything brighter. Also, you have to provide c or -c for the two different cases.
Here's the full code:
import cv2
from matplotlib import pyplot as plt
from skimage import io # Only needed for web grabbing images
plt.figure(1, figsize=(15, 10))
img = cv2.cvtColor(io.imread('https://i.stack.imgur.com/dA1Vt.png'), cv2.COLOR_RGB2GRAY)
plt.subplot(2, 3, 1), plt.imshow(img, cmap='gray'), plt.colorbar()
# More "global" parameters
bs = 151
c = 25
img_le = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, bs, c)
img_gt = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, bs, -c)
plt.subplot(2, 3, 2), plt.imshow(img_le, cmap='gray')
plt.subplot(2, 3, 3), plt.imshow(img_gt, cmap='gray')
# More "local" parameters
bs = 51
c = 5
img_le = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, bs, c)
img_gt = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, bs, -c)
plt.subplot(2, 3, 5), plt.imshow(img_le, cmap='gray')
plt.subplot(2, 3, 6), plt.imshow(img_gt, cmap='gray')
plt.tight_layout()
plt.show()
Hope that helps – somehow!
-----------------------
System information
-----------------------
Python: 3.8.1
Matplotlib: 3.2.0rc1
OpenCV: 4.1.2
-----------------------
Another way to look at this is that where abs(mean - image) <= c, you want that to become white, otherwise you want that to become black. In Python/OpenCV/Scipy/Numpy, I first compute the local uniform mean (average) using a uniform 51x51 pixel block averaging filter (boxcar average). You could use some weighted averaging method such as the Gaussian average, if you want. Then I compute the abs(mean - image). Then I use Numpy thresholding. Note: You could also just use one simple threshold (cv2.threshold) on the abs(mean-image) result in place of two numpy thresholds.
Input:
import cv2
import numpy as np
from scipy import ndimage
# read image as grayscale
# convert to floats in the range 0 to 1 so that the difference keeps negative values
img = cv2.imread('squares.png',0).astype(np.float32)/255.0
# get uniform (51x51 block) average
ave = ndimage.uniform_filter(img, size=51)
# get abs difference between ave and img and convert back to integers in the range 0 to 255
diff = 255*np.abs(ave - img)
diff = diff.astype(np.uint8)
# threshold
# Note: could also just use one simple cv2.Threshold on diff
c = 5
diff_thresh = diff.copy()
diff_thresh[ diff_thresh <= c ] = 255
diff_thresh[ diff_thresh != 255 ] = 0
# view result
cv2.imshow("img", img)
cv2.imshow("ave", ave)
cv2.imshow("diff", diff)
cv2.imshow("threshold", diff_thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save result
cv2.imwrite("squares_2way_thresh.jpg", diff_thresh)
Result:

Color Detection And Comparison In Python

I am trying to figure out whether a particular color exists in an image or not. I want to write Python code that compares a given color value with the color at certain location coordinates of the image. I already tried to get a solution with segmentation of the image in color space, but I could not make it work.
I am using Python "OpenCV".
I want to make program like:
given_color = Blue (Color Values)
if Blue == Color_values_detected_from_image:
    print("Blue Color is present at your given area")
else:
    print("Given Color Not Found")
Could you please advise me on where I should start?
I expect that if I give the coordinates of a rectangle in a certain area of the image, the pixels there should be compared with my given color values.
This can be done by simple pixel-wise comparison and NumPy's all method.
Let's have a look at the following code:
import cv2
import numpy as np
# Read input image
img = cv2.imread('images/colors.png', cv2.IMREAD_COLOR)
cv2.imshow('img', img)
# Region of interest (x1, x2, y1, y2)
roi = (200, 700, 0, 100)
imgRoi = img[roi[2]:roi[3], roi[0]:roi[1]]
cv2.imshow('imgRoi', imgRoi)
# Color of interest [B, G, R]
coi = [0, 255, 0]
# Compare each pixel with color; logical AND over all colors (axis=2)
cmp = np.all(imgRoi == coi, axis=2)
# From here, do whatever you like with this information...
# For example, show mask where color of interest was found
out = np.zeros((imgRoi.shape[0], imgRoi.shape[1], 1), np.uint8)
out[cmp] = 255
cv2.imshow('out', out)
cv2.waitKey(0)
The input image looks like this:
The region of interest (ROI) looks like this:
As an exemplary output, here's the mask where the color of interest #00ff00 was found:
Hope that helps!
P.S. Perhaps the Python/NumPy masters can suggest a more elegant way to "translate" the two points (x1, y1), (x2, y2) to the indices x1:x2, y1:y2. Right now, this notation looks quite cumbersome...

Change Dimensions of ndarray and Multiply Contents

I have an MxN ndarray that contains True and False values and want to draw it as an image.
The goal is to convert the array to a pillow image with each True value as a constant color. I was able to get it working by looping through each pixel and changing them individually by a comparison and drawing the pixel on a blank image, but that method is way too slow.
# img is a PIL image result
# image is the MxN ndarray
pix = img.load()
for x in range(image.shape[0]):
    for y in range(image.shape[1]):
        if image[x, y]:
            pix[y, x] = (255, 0, 0)
Is there a way to change the ndarray to MxNx3 by writing the color tuples directly at the True positions?
If you have your True/False 2D array and the label for the color, for example [255,255,255], the following will work:
colored = np.expand_dims(bool_array_2d,axis=-1)*np.array([255,255,255])
To illustrate it with a dummy example: in the following code I have created a random matrix of 0s and 1s and then have turned the 1s to white ([255,255,255]).
import numpy as np
import matplotlib.pyplot as plt
array = np.random.randint(0,2, (100,100))
colors = np.array([255,255,255])
colored = np.expand_dims(array, axis=-1)*colors
plt.imshow(colored)
Hope this has helped
I did find another solution: convert to an image first, then convert it to RGB, then convert back to an array to separate the 3 channels. When I was trying to combine multiple boolean arrays together, this way was a lot faster.
img = Image.fromarray(image * 1, 'L').convert('RGB')
data = np.array(img)
red, green, blue = data.T
area = (red == 1)
data[...][area.T] = (255, 255, 255)
img = Image.fromarray(data)
I think you can do this quite simply and fast like this:
# Make a 2 row by 3 column image of True/False values
im = np.random.choice((True,False),(2,3))
Mine looks like this:
array([[False, False,  True],
       [ True,  True,  True]])
Now add a new axis to make it 3-channel and multiply the truth values by your new "colour":
result = im[..., np.newaxis]*[255,255,255]
which gives you this:
array([[[  0,   0,   0],
        [  0,   0,   0],
        [255, 255, 255]],

       [[255, 255, 255],
        [255, 255, 255],
        [255, 255, 255]]])
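Since the original goal was a Pillow image, a minimal follow-up sketch (using the result array from above) is to cast it to uint8 and wrap it:
from PIL import Image
import numpy as np

# result is the MxNx3 array built above; Image.fromarray expects uint8 for RGB
pil_img = Image.fromarray(result.astype(np.uint8), 'RGB')
pil_img.save("mask.png")          # placeholder file name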

Image segmentation of objects in any illumination(low or high)

The problem I have at hand is to draw boundaries around a white ball. But the ball appears under different illuminations. Using Canny edge detection and the Hough transform for circles, I am able to detect the ball in bright light / partially bright light, but not in low illumination.
Can anyone help with this problem?
The code that I have tried is below.
import cv2
import numpy as np

img = cv2.imread('14_04_2018_10_38_51_.8242_P_B_142_17197493.png.png')
cimg = img.copy()
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.medianBlur(img, 5)
edges = cv2.Canny(edges, 200, 200)
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=25, param2=10, minRadius=0, maxRadius=0)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        # draw the outer circle
        cv2.circle(cimg, (i[0], i[1]), i[2], (255, 255, 255), 2)
        # draw the center of the circle
        cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
    cv2.imwrite('segmented_out.png', cimg)
else:
    print("no circles")
cv2.imwrite('edges_out.png', edges)
In the image below, we need to segment the ball even when it is in the shadow region.
The output should be something like the images below.
Well, I am not very experienced in OpenCV or Python, but I am learning as well. Probably not a very Pythonic piece of code, but you could try this:
import cv2
import math

circ = 0
n = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220]
img = cv2.imread("ball1.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for i in n:
    ret, threshold = cv2.threshold(gray, i, 255, cv2.THRESH_BINARY)
    # OpenCV 3.x signature; in OpenCV 4.x findContours returns only (contours, hierarchy)
    im, contours, hierarchy = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    for j in range(0, len(contours)):
        size = cv2.contourArea(contours[j])
        if 500 < size < 5000:
            if circ > 0:
                (x, y), radius = cv2.minEnclosingCircle(contours[j])
                radius = int(radius)
                area = cv2.contourArea(contours[j])
                circif = 4*area/(math.pi*(radius*2)**2)
                if circif > circ:
                    circ = float(circif)
                    radiusx = radius
                    center = (int(x), int(y))
            elif circ == 0:
                (x, y), radius = cv2.minEnclosingCircle(contours[j])
                radius = int(radius)
                area = cv2.contourArea(contours[j])
                circ = 4*area/(math.pi*(radius*2)**2)
                # remember this first candidate too, so center/radiusx always exist
                radiusx = radius
                center = (int(x), int(y))
            else:
                pass
cv2.circle(img, center, radiusx, (0, 255, 0), 2)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
What it actually does: you convert your picture to grayscale and apply different threshold settings to it. Then you eliminate noise by putting a size constraint on your contours. When you find a candidate, you check its circularity (NOTE: it is not a scientific formula) and compare it to the circularity of the next candidate. A perfect circle should return 1, so the contour with the highest circularity (of all the contours) will be your ball.
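As a quick sanity check of that circularity measure, here is a small sketch (synthetic image, made-up radius) that draws a filled circle and measures it:
import math
import cv2
import numpy as np

canvas = np.zeros((200, 200), np.uint8)
cv2.circle(canvas, (100, 100), 60, 255, -1)
# OpenCV 4.x returns (contours, hierarchy); 3.x also returns the image first
contours, _ = cv2.findContours(canvas, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
(x, y), radius = cv2.minEnclosingCircle(contours[0])
area = cv2.contourArea(contours[0])
print(4 * area / (math.pi * (radius * 2) ** 2))   # close to 1 for a circle, lower for other shapes
For a perfect circle, area = pi*r^2 and the enclosing radius equals r, so the ratio is exactly 1; elongated or ragged contours give smaller values.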
Result:
NOTE: I haven't tried increasing the size limit, so a higher limit might return a better result if you have a high-resolution picture.
Working with a grayscale image makes you subject to different lighting conditions.
To be free from this, I suggest working in the HSV color space and using the Hue component instead of the grayscale image.
Hue is largely independent of the lighting conditions, since it gives you information about the color regardless of its Saturation or Value (a value bound to the brightness of the image).
This might bring you some clarity about color spaces and which is best to use for image segmentation.
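As a small sketch of that idea (the file name is just a placeholder), you would extract the Hue channel like this and threshold on it, or pass full HSV limits to cv2.inRange:
import cv2

img = cv2.imread('ball.jpg')                # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)                    # hue is 0-179 for 8-bit images in OpenCV
# work with h (or cv2.inRange on the full hsv image) instead of the grayscale image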
In your case here, we have a white ball. White is not a color by itself; the main factor is what kind of light actually falls on the white ball, as that has a direct influence on the kind of extraction you might plan to do using a color space like HSV, as mentioned above by @magicleon.
HSV is your best bet for segmentation here. Using
whiteObject = cv2.inRange(hsvImage,lowerHSVLimit,upperHSVLimit)
where lowerHSVLimit and upperHSVLimit define the HSV color range.
Keep in mind these conditions:
1) The images were captured under similar conditions.
2) You cover all the HSV ranges you need before extraction.
Hope you get the idea.
Consider this example
Selecting a particular hue range from 45 to 60
Code
import cv2
import numpy as np
from matplotlib import pyplot as plt

image = cv2.imread('allcolors.png')
hsvImg = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lowerHSVLimit = np.array([45, 0, 0])
upperHSVLimit = np.array([60, 255, 255])
colour = cv2.inRange(hsvImg, lowerHSVLimit, upperHSVLimit)
plt.subplot(111), plt.imshow(colour, cmap="gray")
plt.title('hue range from 45 to 60'), plt.xticks([]), plt.yticks([])
plt.show()
Here, the selected hue range is from 45 to 60.
