Image intensity distribution changes during OpenCV warpAffine - python-3.x

I am using Python 3.8.5 and OpenCV 4.5.1 on Windows 7.
I am using the following code to rotate images:
import numpy as np
import cv2

def pad_rotate(image, ang, pad, pad_value=0):
    (h, w) = image.shape[:2]
    # create a larger image and paste the original image at its center;
    # this is done to avoid any cropping during rotation
    nH, nW = h + 2*pad, w + 2*pad  # new height and width
    cY, cX = nW//2, nH//2  # center of the new image
    # create the new image filled with pad_value
    newImg = np.zeros((h + 2*pad, w + 2*pad), dtype=image.dtype)
    newImg[:, :] = pad_value
    # paste the original image at the center
    newImg[pad:pad+h, pad:pad+w] = image
    # rotate CCW (for positive angles)
    M = cv2.getRotationMatrix2D(center=(cX, cY), angle=ang, scale=1.0)
    rotImg = cv2.warpAffine(newImg, M, (nW, nH), cv2.INTER_CUBIC,
                            borderMode=cv2.BORDER_CONSTANT, borderValue=pad_value)
    return rotImg
My issue is that after the rotation, the image intensity distribution is different from the original.
The following part of the question was edited to clarify the issue:
img = np.random.rand(500,500)
Rimg = pad_rotate(img, 15, 300, np.nan)
Here is what these images look like:
Their intensities have clearly shifted:
np.percentile(img, [20, 50, 80])
# prints array([0.20061218, 0.50015415, 0.79989986])
np.nanpercentile(Rimg, [20, 50, 80])
# prints array([0.32420028, 0.50031483, 0.67656537])
Can someone please tell me how to avoid this normalization?

The averaging effect of the interpolation changes the distribution...
Note:
There is a mistake in your code sample (not related to the percentiles).
The 4th positional argument of warpAffine is dst, so cv2.INTER_CUBIC is currently being passed as the destination image rather than as interpolation flags.
Replace cv2.warpAffine(newImg, M, (nW, nH), cv2.INTER_CUBIC with:
cv2.warpAffine(newImg, M, (nW, nH), flags=cv2.INTER_CUBIC
I tried to simplify the code sample that reproduces the problem.
The code sample uses linear interpolation, 1 degree rotation, and no NaN values.
import numpy as np
import cv2
img = np.random.rand(1000, 1000)
M = cv2.getRotationMatrix2D((img.shape[1]//2, img.shape[0]//2), 1, 1) # Rotate by 1 degree
Rimg = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_LINEAR) # Use Linear interpolation
Rimg = Rimg[20:-20, 20:-20] # Crop the part without the margins.
print(np.percentile(img, [20, 50, 80])) #[0.20005696 0.49990526 0.79954818]
print(np.percentile(Rimg, [20, 50, 80])) #[0.32244747 0.4998595 0.67698961]
cv2.imshow('img', img)
cv2.imshow('Rimg', Rimg)
cv2.waitKey()
cv2.destroyAllWindows()
When we disable the interpolation,
Rimg = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_NEAREST)
The percentiles are: [0.19943713 0.50004768 0.7995525 ].
Simpler example for showing that averaging elements changes the distribution:
A = np.random.rand(10000000)
B = (A[0:-1:2] + A[1::2])/2 # Averaging every two elements.
print(np.percentile(A, [20, 50, 80])) # [0.19995436 0.49999472 0.80007232]
print(np.percentile(B, [20, 50, 80])) # [0.31617922 0.50000145 0.68377251]
Why does interpolation skew the distribution toward the median?
I am not a mathematician.
I am sure you can get a better explanation...
Here is an intuitive example:
Assume there is a list of values with a uniform distribution in the range [0, 1].
Assume there is a zero value in the list:
[0.2, 0.7, 0, 0.5... ]
After averaging every two sequential elements, the probability of getting a zero element in the output list is very small (only two sequential zeros result in a zero).
The example shows that averaging pushes the extreme values toward the center.
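A quick sanity check of that intuition: the average of two independent uniform samples follows a triangular distribution, whose theoretical percentiles match the values measured for B above almost exactly.
import numpy as np
# The mean of two independent U(0, 1) samples has a triangular density on [0, 1];
# below the peak its CDF is F(x) = 2*x**2, so the 20th percentile solves 2*x**2 = 0.2
# and the 80th percentile follows by symmetry.
q20 = np.sqrt(0.2 / 2)  # ~0.3162
q80 = 1.0 - q20         # ~0.6838
print(q20, q80)         # close to the [0.316..., 0.683...] percentiles of B above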

Related

How can I get the inner contour points without redundancy in OpenCV - Python

I'm new to OpenCV and I need to get all the contour points. This is easy by setting the cv2.RETR_TREE mode in the findContours method. The problem is that this way returns redundant coordinates. So, for example, in this polygon, I don't want to get the contour points like this:
But like this:
So, according to the first image, the green color marks the contours detected with RETR_TREE mode, and points 1-2, 3-5, 4-6, ... are redundant because they are so close to each other. I need to merge those redundant points into one and append it to the customContours array.
For the moment, I only have the code for the first picture, which sets up the distance between the points and the point coordinates:
def getContours(img, minArea=20000, cThr=[100, 100]):
    font = cv2.FONT_HERSHEY_COMPLEX
    imgColor = img
    imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    imgBlur = cv2.GaussianBlur(imgGray, (5, 5), 1)
    imgCanny = cv2.Canny(imgBlur, cThr[0], cThr[1])
    kernel = np.ones((5, 5))
    imgDial = cv2.dilate(imgCanny, kernel, iterations=3)
    imgThre = cv2.erode(imgDial, kernel, iterations=2)
    cv2.imshow('threshold', imgThre)
    contours, hierachy = cv2.findContours(imgThre, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    customContours = []
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > minArea:
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.009*peri, True)
            bbox = cv2.boundingRect(approx)
            customContours.append([len(approx), area, approx, bbox, cnt])
            print('points: ', len(approx))
            n = approx.ravel()
            i = 0
            for j in n:
                if i % 2 == 0:
                    x = n[i]
                    y = n[i + 1]
                    string = str(x) + " " + str(y)
                    cv2.putText(imgColor, str(i//2+1) + ': ' + string, (x, y), font, 2, (0, 0, 0), 2)
                i = i + 1
    customContours = sorted(customContours, key=lambda x: x[1], reverse=True)
    for cnt in customContours:
        cv2.drawContours(imgColor, [cnt[2]], 0, (0, 0, 255), 5)
    return imgColor, customContours
Could you help me get the real points, as in the second picture?
(EDIT 01/07/21)
I want a generic solution, because the image could be more complex, such as the following picture:
NOTE: notice that the middle arrow (points 17 and 18) doesn't enclose a closed area, so it isn't a polygon to study, and we are not interested in obtaining its points. Also, the order of the points isn't important, but if the input is the whole image, the code should recognize that there are 4 polygons, so the points of each polygon start with 0, then 1, etc.
Here's my approach. It is mainly morphology-based. It involves convolving the image with a special kernel. This convolution identifies the end-points of the triangle as well as the intersection points where the middle line is present. This will result in a points mask containing the pixels that match the points you are looking for. After that, we can apply a little bit of morphology to join possibly duplicated points. What remains is to get a list of the coordinates of these points for further processing.
These are the steps:
Get a binary image of the input via Otsu's thresholding
Get the skeleton of the binary image
Define the special kernel and convolve the skeleton image
Apply a morphological dilate to join possible duplicated points
Get the centroids of the points and store them in a list
Here's the code:
# Imports:
import numpy as np
import cv2
# image path
path = "D://opencvImages//"
fileName = "triangle.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Prepare a deep copy for results:
inputImageCopy = inputImage.copy()
# Convert BGR to Grayscale
grayImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu:
_, binaryImage = cv2.threshold(grayImage, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
The first bit computes the binary image. Very straightforward. I'm using this image as base, which is just a cleaned-up version of what you posted without the annotations. This is the resulting binary image:
Now, to perform the convolution we must first get the image "skeleton". The skeleton is a version of the binary image where lines have been normalized to have a width of 1 pixel. This is useful because we can then convolve the image with a 3 x 3 kernel and look for specific pixel patterns. Let's compute the skeleton using OpenCV's extended image processing module:
# Get image skeleton:
skeleton = cv2.ximgproc.thinning(binaryImage, None, 1)
This is the image obtained:
We can now apply the convolution. The approach is based on Mark Setchell's info in this post. The post mainly shows the method for finding end-points of a shape, but I extended it to also identify line intersections, such as the middle portion of the triangle. The main idea is that the convolution yields a very specific value where patterns of black and white pixels are found in the input image. Refer to the post for the theory behind this idea, but here we are looking for two values: 110 and 40. The first one occurs when an end-point has been found. The second one occurs when a line intersection is found. Let's set up the convolution:
# Threshold the skeleton so that the white (line) pixels get a value of 10 and
# the black background pixels a value of 0:
_, binaryImage = cv2.threshold(skeleton, 128, 10, cv2.THRESH_BINARY)
# Set the convolution kernel:
h = np.array([[1, 1, 1],
              [1, 10, 1],
              [1, 1, 1]])
# Convolve the image with the kernel:
imgFiltered = cv2.filter2D(binaryImage, -1, h)
# Create list of thresholds:
thresh = [110, 40]
The first part is done. We are going to detect end-points and intersections in two separate steps. Each step will produce a partial result, and we can OR both results to get the final mask:
# Prepare the final mask of points:
(height, width) = binaryImage.shape
pointsMask = np.zeros((height, width, 1), np.uint8)
# Perform convolution and create points mask:
for t in range(len(thresh)):
    # Get current threshold:
    currentThresh = thresh[t]
    # Locate the threshold in the filtered image:
    tempMat = np.where(imgFiltered == currentThresh, 255, 0)
    # Convert and shape the image to a uint8 height x width x channels
    # numpy array:
    tempMat = tempMat.astype(np.uint8)
    tempMat = tempMat.reshape(height, width, 1)
    # Accumulate mask:
    pointsMask = cv2.bitwise_or(pointsMask, tempMat)
This is the final mask of points:
Note that the white pixels are the locations that matched our target patterns. Those are the points we are looking for. As the shape is not a perfect triangle, some points could be duplicated. We can "merge" neighboring blobs by applying a morphological dilation:
# Set kernel (structuring element) size:
kernelSize = 7
# Set operation iterations:
opIterations = 3
# Get the structuring element:
morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform Dilate:
morphoImage = cv2.morphologyEx(pointsMask, cv2.MORPH_DILATE, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
This is the result:
Very nice, we now have big clusters of pixels (or blobs). To get their coordinates, one possible approach would be to get the bounding rectangles of these blobs' contours and compute their centroids:
# Look for the outer contours (no children):
contours, _ = cv2.findContours(morphoImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Store the points here:
pointsList = []
# Loop through the contours:
for i, c in enumerate(contours):
    # Get the contours bounding rectangle:
    boundRect = cv2.boundingRect(c)
    # Get the centroid of the rectangle:
    cx = int(boundRect[0] + 0.5 * boundRect[2])
    cy = int(boundRect[1] + 0.5 * boundRect[3])
    # Store centroid into list:
    pointsList.append((cx, cy))
    # Set centroid circle and text:
    color = (0, 0, 255)
    cv2.circle(inputImageCopy, (cx, cy), 3, color, -1)
    font = cv2.FONT_HERSHEY_COMPLEX
    string = str(cx) + ", " + str(cy)
    cv2.putText(inputImageCopy, str(i) + ':' + string, (cx, cy), font, 0.5, (255, 0, 0), 1)
# Show image:
cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
These are the points located in the original input:
Note also that I've stored their coordinates in the pointsList list:
# Print the list of points:
print(pointsList)
This prints the centroids as the tuple (centroidX, centroidY):
[(717, 971), (22, 960), (183, 587), (568, 586), (388, 98)]

Threshold using OpenCV?

As the question states, I want to apply a two-way adaptive thresholding technique to my image. That is to say, I want to find each pixel value in the neighborhood and set it to 255 if it is less than or greater than the mean of the neighborhood minus a constant c.
Take this image, for example, as the neighborhood of pixels. The desired pixel areas to keep are the darker areas on the third and sixth squares' upper half (from left to right and top to bottom), as well as the eighth and twelfth squares' upper half.
Obviously, this all depends on the set constant value, but ideally areas that are significantly different than the mean pixel value of the neighborhood will be kept. I can worry about the tuning myself though.
Your question and comment are contradictory: keep everything (significantly) brighter/darker than the mean (+/- constant) of the neighbourhood (question) vs. keep everything within mean +/- constant (comment). I assume the first one to be correct, and I'll try to give an answer.
Using cv2.adaptiveThreshold is certainly useful; parameterization might be tricky, especially given the example image. First, let's have a look at the output:
We see that the intensity value range in the given image is small. The upper halves of the third and sixth squares don't really differ from their neighbourhood, so it's quite unlikely to find a proper difference there. The upper halves of squares #8 and #12 (and also the lower half of square #10) are more likely to be found.
Top row now shows some more "global" parameters (blocksize = 151, c = 25), bottom row more "local" parameters (blocksize = 51, c = 5). The middle column is everything darker than the neighbourhood (with respect to the parameters), the right column is everything brighter than the neighbourhood. We see that in the more "global" case we get the proper upper halves, but there are mostly no "significant" darker areas. Looking at the more "local" case, we see some darker areas, but we won't find the complete upper/lower halves in question. That's just because of how the different triangles are arranged.
On the technical side: You need two calls of cv2.adaptiveThreshold, one using the cv2.THRESH_BINARY_INV mode to find everything darker and one using the cv2.THRESH_BINARY mode to find everything brighter. Also, you have to provide c or -c for the two different cases.
Here's the full code:
import cv2
from matplotlib import pyplot as plt
from skimage import io # Only needed for web grabbing images
plt.figure(1, figsize=(15, 10))
img = cv2.cvtColor(io.imread('https://i.stack.imgur.com/dA1Vt.png'), cv2.COLOR_RGB2GRAY)
plt.subplot(2, 3, 1), plt.imshow(img, cmap='gray'), plt.colorbar()
# More "global" parameters
bs = 151
c = 25
img_le = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, bs, c)
img_gt = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, bs, -c)
plt.subplot(2, 3, 2), plt.imshow(img_le, cmap='gray')
plt.subplot(2, 3, 3), plt.imshow(img_gt, cmap='gray')
# More "local" parameters
bs = 51
c = 5
img_le = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, bs, c)
img_gt = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, bs, -c)
plt.subplot(2, 3, 5), plt.imshow(img_le, cmap='gray')
plt.subplot(2, 3, 6), plt.imshow(img_gt, cmap='gray')
plt.tight_layout()
plt.show()
Hope that helps – somehow!
-----------------------
System information
-----------------------
Python: 3.8.1
Matplotlib: 3.2.0rc1
OpenCV: 4.1.2
-----------------------
Another way to look at this is that where abs(mean - image) <= c, you want that to become white, otherwise you want that to become black. In Python/OpenCV/Scipy/Numpy, I first compute the local uniform mean (average) using a uniform 51x51 pixel block averaging filter (boxcar average). You could use some weighted averaging method such as the Gaussian average, if you want. Then I compute the abs(mean - image). Then I use Numpy thresholding. Note: You could also just use one simple threshold (cv2.threshold) on the abs(mean-image) result in place of two numpy thresholds.
Input:
import cv2
import numpy as np
from scipy import ndimage
# read image as grayscale
# convert to floats in the range 0 to 1 so that the difference keeps negative values
img = cv2.imread('squares.png',0).astype(np.float32)/255.0
# get uniform (51x51 block) average
ave = ndimage.uniform_filter(img, size=51)
# get abs difference between ave and img and convert back to integers in the range 0 to 255
diff = 255*np.abs(ave - img)
diff = diff.astype(np.uint8)
# threshold
# Note: could also just use one simple cv2.Threshold on diff
c = 5
diff_thresh = diff.copy()
diff_thresh[ diff_thresh <= c ] = 255
diff_thresh[ diff_thresh != 255 ] = 0
# view result
cv2.imshow("img", img)
cv2.imshow("ave", ave)
cv2.imshow("diff", diff)
cv2.imshow("threshold", diff_thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save result
cv2.imwrite("squares_2way_thresh.jpg", diff_thresh)
Result:

Rotating 2D grayscale image with transformation matrix

I am new to image processing so I am really confused regarding the coordinate system of images. I have a sample image and I am trying to rotate it 45° clockwise. My transformation matrix is T = [[cos45, sin45], [-sin45, cos45]].
Here is the code:
import numpy as np
from matplotlib import pyplot as plt
from skimage import io
image = io.imread('sample_image')
img_transformed = np.zeros((image.shape), dtype=np.uint8)
trans_matrix = np.array([[np.cos(45), np.sin(45)], [-np.sin(45), np.cos(45)]])
for i, row in enumerate(image):
    for j, col in enumerate(row):
        pixel_data = image[i, j]  # get the value of the pixel at the corresponding location
        input_coord = np.array([i, j])  # this will be my [x, y] matrix
        result = trans_matrix @ input_coord
        i_out, j_out = result  # store the resulting coordinate location
        # make sure the i and j values remain within the index range
        if (0 < int(i_out) < image.shape[0]) and (0 < int(j_out) < image.shape[1]):
            img_transformed[int(i_out)][int(j_out)] = pixel_data
plt.imshow(img_transformed, cmap='gray')
The image comes out distorted and doesn't seem right. I know that in pixel coordinates the origin is at the top left corner (row, column). Is the rotation happening with respect to the origin at the top left corner? Is there a way to shift the origin to the center or to any other given point?
Thank you all!
Yes, as you suspect, the rotation is happening with respect to the top left corner, which has coordinates (0, 0). (Also: the NumPy trigonometric functions use radians rather than degrees, so you need to convert your angle.) To compute a rotation with respect to the center, you do a little hack: you compute the transformation for moving the image so that it is centered on (0, 0), then you rotate it, then you move the result back. You need to combine these transformations in a sequence because if you do it one after the other, you'll lose everything in negative coordinates.
It's much, much easier to do this using Homogeneous coordinates, which add an extra "dummy" dimension to your image. Here's what your code would look like in homogeneous coordinates:
import numpy as np
from matplotlib import pyplot as plt
from skimage import io
image = io.imread('sample_image')
img_transformed = np.zeros((image.shape), dtype=np.uint8)
c, s = np.cos(np.radians(45)), np.sin(np.radians(45))
rot_matrix = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
x, y = np.array(image.shape) // 2
# move center to (0, 0)
translate1 = np.array([[1, 0, -x], [0, 1, -y], [0, 0, 1]])
# move center back to (x, y)
translate2 = np.array([[1, 0, x], [0, 1, y], [0, 0, 1]])
# compose all three transformations together
trans_matrix = translate2 @ rot_matrix @ translate1
for i, row in enumerate(image):
    for j, col in enumerate(row):
        pixel_data = image[i, j]  # get the value of the pixel at the corresponding location
        input_coord = np.array([i, j, 1])  # homogeneous coordinate [i, j, 1]
        result = trans_matrix @ input_coord
        i_out, j_out, _ = result  # store the resulting coordinate location
        # make sure the i and j values remain within the index range
        if (0 < int(i_out) < image.shape[0]) and (0 < int(j_out) < image.shape[1]):
            img_transformed[int(i_out)][int(j_out)] = pixel_data
plt.imshow(img_transformed, cmap='gray')
The above should work ok, but you will probably get some black spots due to aliasing. What can happen is that no coordinates i, j from the input land exactly on an output pixel, so that pixel never gets updated. Instead, what you need to do is iterate over the pixels of the output image, then use the inverse transform to find which pixel in the input image maps closest to that output pixel. Something like:
inverse_tform = np.linalg.inv(trans_matrix)
for i, j in np.ndindex(img_transformed.shape):
    i_orig, j_orig, _ = np.round(inverse_tform @ [i, j, 1]).astype(int)
    if i_orig in range(image.shape[0]) and j_orig in range(image.shape[1]):
        img_transformed[i, j] = image[i_orig, j_orig]
Hope this helps!

How can I refresh the Trimesh (Pyglet) viewer to see my mesh (STL) rotation and interrupt this visualisation after an angular condition?

After long hours searching for a solution by myself, I am here to find some help, hoping that someone can help me get out of my current situation. So if there is any specialist or nice "Python guru" who has some time to give me a hand, here is the context:
I am working on a mesh manipulation script using the wonderful Trimesh library on Python 3.6, and I would like, while applying a rotation matrix transformation, to refresh the mesh visualisation in order to see the rotation of the mesh evolve in real time.
Without success, I tried following the script below, found on the Trimesh GitHub, but I am not able to stop it without clicking the close button in the upper right corner. Here is the original code:
"""
view_callback.py
------------------
Show how to pass a callback to the scene viewer for
easy visualizations.
"""
import time
import trimesh
import numpy as np
def sinwave(scene):
"""
A callback passed to a scene viewer which will update
transforms in the viewer periodically.
Parameters
-------------
scene : trimesh.Scene
Scene containing geometry
"""
# create an empty homogenous transformation
matrix = np.eye(4)
# set Y as cos of time
matrix[1][3] = np.cos(time.time()) * 2
# set Z as sin of time
matrix[2][3] = np.sin(time.time()) * 3
# take one of the two spheres arbitrarily
node = s.graph.nodes_geometry[0]
# apply the transform to the node
scene.graph.update(node, matrix=matrix)
if __name__ == '__main__':
# create some spheres
a = trimesh.primitives.Sphere()
b = trimesh.primitives.Sphere()
# set some colors for the balls
a.visual.face_colors = [255, 0, 0, 255]
b.visual.face_colors = [0, 0, 100, 255]
# create a scene with the two balls
s = trimesh.Scene([a, b])
# open the scene viewer and move a ball around
s.show(callback=sinwave)
And here is my attempt to integrate a rotation matrix transformation (to apply a rotation to the imported mesh) and see its evolution.
But the rotation is not smooth (the animation is jerky) and I am not able to stop it automatically, let's say after a 97° rotation about z. (And the code is based on time, while I would like it to be based on angular position.)
from pathlib import Path
import pandas as pd
import time
import xlsxwriter
import numpy as np
import trimesh
from trimesh import transformations as trf

# Actual directory loading and stl address saving
actual_dir = Path(__file__).resolve().parent
stl = Path(actual_dir/"Belt_Bearing_Gear.stl")
mesh = trimesh.load(f"{stl}")


def R_matrix(scene):
    u = 0
    o = 0
    t = time.time()
    time.sleep(0.1)
    rotation = (u, o, t)

    # Angle conversion from degrees to radians
    def trig(angle):
        r = np.deg2rad(angle)
        return r

    alpha = trig(rotation[0])
    beta = trig(rotation[1])
    gamma = trig(rotation[2])
    origin, xaxis, yaxis, zaxis = [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]
    Rx = trf.rotation_matrix(alpha, xaxis)
    Ry = trf.rotation_matrix(beta, yaxis)
    Rz = trf.rotation_matrix(gamma, zaxis)
    R = trf.concatenate_matrices(Rx, Ry, Rz)
    R2 = R[:3, :3]
    # The rotation matrix is applied to the mesh
    mesh.vertices = np.matmul(mesh.vertices, R2)
    # apply the transform to the node
    s = trimesh.Scene([mesh])
    scene.graph.update(s, matrix=R)


if __name__ == '__main__':
    # set some colors for the mesh and the bounding box
    mesh.visual.face_colors = [102, 255, 255, 255]
    # create a scene with the mesh and the bounding box
    s = trimesh.Scene([mesh])
    liste = list(range(1, 10))
    # open the scene viewer and move a ball around
    s.show(callback=R_matrix)
All your ideas and suggestions are welcome since I am a young Python beginner :)
Thanks in advance for your help,
Warm regards,
RV

How do I discriminate between two different types of abnormalities in the curvature of an object?

I have been working on a project that requires finding defects in onions. The second image attached shows an abnormal onion. You can see that the onion is made up of two smaller onion twins. What's interesting is that the human eye can easily detect what's wrong with the structure.
One can do a structural analysis and observe that a normal onion has an almost smooth curvature, while an abnormal one doesn't. Thus, quite simply, I want to build a classification algorithm based on the edges of the object.
However, there are times when the skin of the onion makes the curve irregular. See the image: there's a small part of the skin that's outside the actual curvature. I want to discriminate the bulge caused by the skin from the deformity produced at the point where the two subsections meet, and then reconstruct the contour of the object for further analysis.
Is there a mathematical tool that would help me here, given that I have the majority of the points that make up the outer edge of the onion, including the two irregularities?
See the code below:
import cv2
import numpy as np
import sys
cv2.ocl.setUseOpenCL(False)
cv2.namedWindow('test', cv2.WINDOW_NORMAL)
cv2.namedWindow('orig', cv2.WINDOW_NORMAL)
cv2.resizeWindow('test', 600,600)
cv2.resizeWindow('orig', 600,600)
image = cv2.imread('./buffer/crp'+str(sys.argv[1])+'.JPG')
tim = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
hsv_image = cv2.cvtColor(image,cv2.COLOR_BGR2HSV)
frame_threshed = cv2.inRange(hsv_image, np.array([70, 0, 0], np.uint8),
                             np.array([140, 255, 255], np.uint8))
canvas = np.zeros(image.shape, np.uint8)
framhreshed=cv2.threshold(frame_threshed,10,255,cv2.THRESH_BINARY_INV)
kernel = np.ones((3,3),np.uint8)
frame_threshed = cv2.erode(frame_threshed,kernel,iterations = 1)
kernel = np.ones((5,5),np.uint8)
frame_threshed = cv2.erode(frame_threshed,kernel,iterations = 1)
kernel = np.ones((7,7),np.uint8)
frame_threshed = cv2.erode(frame_threshed,kernel,iterations = 1)
_, cnts, hierarchy = cv2.findContours(frame_threshed.copy(),
cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts= sorted(cnts, key=cv2.contourArea, reverse=True)
big_contours = [c for c in cnts if cv2.contourArea(c) > 100000]
for cnt in big_contours:
    perimeter = cv2.arcLength(cnt, True)
    epsilon = 0.0015*cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, epsilon, True)
    # print(len(approx))
    hull = cv2.convexHull(cnt, returnPoints=False)
    # try:
    defects = cv2.convexityDefects(cnt, hull)
    for i in range(defects.shape[0]):
        s, e, f, d = defects[i, 0]
        start = tuple(cnt[s][0])
        end = tuple(cnt[e][0])
        far = tuple(cnt[f][0])
        cv2.line(canvas, start, end, [255, 0, 0], 2)
        cv2.circle(canvas, far, 5, [255, 255, 255], -1)
    cv2.drawContours(image, [approx], -1, (0, 0, 255), 5)
    cv2.drawContours(canvas, [approx], -1, (0, 0, 255), 5)
cv2.imshow('orig',image)
cv2.imshow('test',canvas)
cv2.waitKey(0)
cv2.destroyAllWindows()
I would suggest you try Hu moments, since you have already extracted the shape of your objects. They allow you to calculate a distance between two shapes, so basically between your abnormal onion and a reference onion.
The Hu moments shape descriptor is available for Python using OpenCV. If the image is binary, you can use it like this:
# Reference image
shapeArray1 = cv2.HuMoments(cv2.moments(image1)).flatten()
# Abnormal image
shapeArray2 = cv2.HuMoments(cv2.moments(image2)).flatten()
# Calculation of distance between both arrays
# Threshold based on the distance
# Classification as abnormal or normal
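One possible way to finish those last three steps (my own sketch, not part of the original suggestion; the 0.5 cutoff is an arbitrary placeholder you would tune on real data):
import numpy as np
# Hu moments span several orders of magnitude, so compare them on a log scale.
logRef = np.sign(shapeArray1) * np.log10(np.abs(shapeArray1) + 1e-30)
logAbn = np.sign(shapeArray2) * np.log10(np.abs(shapeArray2) + 1e-30)
distance = np.sum(np.abs(logRef - logAbn))
# Classify as abnormal when the distance exceeds the (placeholder) threshold.
print('abnormal' if distance > 0.5 else 'normal')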
matchShapes could do the job too. It takes two binary images (or contours) and returns a float that evaluates the distance between them.
Python: cv.MatchShapes(object1, object2, method, parameter=0) → float
More details
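In the cv2 API this function is cv2.matchShapes; here is a minimal sketch (the file names are hypothetical, and a smaller return value means more similar shapes):
import cv2
# Load the reference and candidate binary masks (hypothetical file names).
refMask = cv2.imread('normal_onion_mask.png', cv2.IMREAD_GRAYSCALE)
testMask = cv2.imread('abnormal_onion_mask.png', cv2.IMREAD_GRAYSCALE)
distance = cv2.matchShapes(refMask, testMask, cv2.CONTOURS_MATCH_I1, 0.0)
print('shape distance:', distance)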
So when an onion shape is detected as abnormal, you would have to fill the shape and apply some binary morphology to erase the imperfections and extract the shape without them (a minimal sketch follows the steps below):
Fill your shape
Apply an opening (erosion followed by dilation) with a disk structuring element to get rid of the irregularities
Extract the contours again
You should now have a shape without the irregularities. If not, go back to step 2 and change the size of the structuring element.
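Here is a minimal sketch of those steps (my own code, assuming OpenCV 4's two-value findContours, a hypothetical mask file, and a disk size you would have to tune):
import cv2
import numpy as np
# Hypothetical binary mask of the abnormal onion:
shape_mask = cv2.imread('onion_mask.png', cv2.IMREAD_GRAYSCALE)
# 1. Fill the shape by drawing its largest external contour filled.
cnts, _ = cv2.findContours(shape_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
filled = np.zeros_like(shape_mask)
cv2.drawContours(filled, [max(cnts, key=cv2.contourArea)], -1, 255, -1)
# 2. Opening (erosion followed by dilation) with a disk structuring element.
disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
opened = cv2.morphologyEx(filled, cv2.MORPH_OPEN, disk)
# 3. Extract the contour of the cleaned-up shape again.
smooth_cnts, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)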
OK, so if you look at the first two pictures of your onions, you can see that they have a circular shape (except for the peel peaks) and the "defect" one has more of an oval shape. What you could try is to find your contour (after you apply image transformations, of course) and determine its center point. Then you could measure the distance from the center of the contour to each point of the contour. You can do it using scipy (spatial.cKDTree() and tree.query()) or simply with the mathematical formula for the distance between two points, sqrt((x2-x1)^2 + (y2-y1)^2). Then you can say that if some number of points are out of bounds it is still an OK onion, but if there are a lot of points out of bounds then it is a defective onion. I drew two example images just for the sake of demonstration.
Example in code:
import cv2
import numpy as np
import scipy
from scipy import spatial
img = cv2.imread('oniond.png')
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray_image,180,255,cv2.THRESH_BINARY_INV)
im2, cnts, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(cnts, key=cv2.contourArea)
list_distance = []
points_minmax = []
M = cv2.moments(cnt)
cX = int(M["m10"] / M["m00"])
cY = int(M["m01"] / M["m00"])
center = (cX, cY)
for i in cnt:
    tree = spatial.cKDTree(i)
    mindist, minid = tree.query(center)
    list_distance.append(mindist)
    if float(mindist) < 100:
        points_minmax.append(i)
    elif float(mindist) > 140:
        points_minmax.append(i)
    else:
        pass
reshape = np.reshape(list_distance, (-1, 1))
under_min = [i for i in list_distance if i < 100]
over_max = [i for i in list_distance if i > 140]
for i in points_minmax:
    cv2.line(img, center, (i[0, 0], i[0, 1]), (0, 0, 255), 2)
if len(over_max) > 50:
    print('defect')
    print('distances over maximum: ', len(over_max))
    print('distances over minimum: ', len(under_min))
elif len(under_min) > 50:
    print('defect')
    print('distances over maximum: ', len(over_max))
    print('distances over minimum: ', len(under_min))
else:
    print('OK')
    print('distances over maximum: ', len(over_max))
    print('distances over minimum: ', len(under_min))
cv2.imshow('img', img)
Result:
OK
distances over maximum: 37
distance over minimum: 0
The output shows that there are 37 points out of bounds (red color) but the onion is still OK.
Result 2:
defect
distances over maximum: 553
distances over minimum: 13
And here you can see that there are more points out of bounds (red color) and the onion is not OK.
Hope this gives at least an idea on how to solve your problem. Cheers!
