How to find the direction of triangles in an image using OpenCV - python-3.x

I am trying to find the direction of triangles in an image. Below is the image:
These triangles point upward/downward/leftward/rightward (this is not the actual image). I have already used Canny edge detection to find the edges, then found the contours; the dilated image is shown below.
My logic to find the direction:
The logic I am thinking of using: if, among the three corner coordinates, I can identify the two base coordinates of the triangle (the pair sharing the same abscissa or the same ordinate), I can build a base vector. The angle between a unit axis vector and the base vector can then be used to identify the direction. But this method can only determine whether the triangle points up/down or left/right; it cannot differentiate between up and down, or between right and left. I tried to find the corners using cv2.goodFeaturesToTrack, but as far as I know it only returns the 3 strongest points in the entire image. So I am wondering if there is another way to find the direction of the triangles.
Here is my Python code that differentiates between triangles/squares and circles:
import cv2
import numpy as np

# img1: single-channel (grayscale) source image, loaded elsewhere

# blue masking: keep only the pixels whose value is 25
mask_blue = np.copy(img1)
row, columns = mask_blue.shape
for i in range(0, row):
    for j in range(0, columns):
        if mask_blue[i][j] == 25:
            mask_blue[i][j] = 255
        else:
            mask_blue[i][j] = 0

blue_edges = cv2.Canny(mask_blue, 10, 10)
kernel_blue = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2))
dilated_blue = cv2.dilate(blue_edges, kernel_blue)
blue_contours, hierarchy = cv2.findContours(dilated_blue, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

for cnt in blue_contours:
    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, True)
    M = cv2.moments(cnt)
    cx = int(M['m10'] / M['m00'])
    cy = int(M['m01'] / M['m00'])
    # classify by the dimensionless ratio perimeter^2 / area
    if 12 < (perimeter * perimeter) / area < 14.8:
        shape = "circle"
    elif 14.8 < (perimeter * perimeter) / area < 18:
        shape = "square"
    elif 18 < (perimeter * perimeter) / area and area > 200:
        shape = "triangle"
    print(shape)
    print(area)
    print((perimeter * perimeter) / area, "\n")

cv2.imshow('mask_blue', dilated_blue)
cv2.waitKey(0)
cv2.destroyAllWindows()
Source image can be found here: img1
Please help: how can I find the direction of the triangles?
Thank you.

Assuming that you only have four cases: [up, down, left, right], this code should work well for you.
The idea is simple:
Get the bounding rectangle for your contour. Use: box = cv2.boundingRect(contour_pnts)
Crop the image using the bounding rectangle.
Reduce the image vertically and horizontally using the Sum option. Now you have the sum of pixels along each axis. The axis with the largest sum determines whether the triangle base is vertical or horizontal.
To identify whether the triangle is pointing left/right or up/down: you need to check whether the bounding rectangle center is before or after the max col/row:
The code (assumes you start from the cropped image):
# reduce the image to one row / one column of per-axis pixel sums
ver_reduce = cv2.reduce(img, 0, cv2.REDUCE_SUM, None, cv2.CV_32F)
hor_reduce = cv2.reduce(img, 1, cv2.REDUCE_SUM, None, cv2.CV_32F)

# for smoothing the reduced vectors, could be removed
ver_reduce = cv2.GaussianBlur(ver_reduce, (3, 1), 0)
hor_reduce = cv2.GaussianBlur(hor_reduce, (1, 3), 0)

_, ver_max, _, ver_col = cv2.minMaxLoc(ver_reduce)
_, hor_max, _, hor_row = cv2.minMaxLoc(hor_reduce)
ver_col = ver_col[0]   # column of the vertical-sum peak
hor_row = hor_row[1]   # row of the horizontal-sum peak

contour_pnts = cv2.findNonZero(img)  # in my code I do not have the original contour points
rect_center, size, angle = cv2.minAreaRect(contour_pnts)
print(rect_center)

if ver_max > hor_max:                # base is vertical -> pointing left or right
    if rect_center[0] > ver_col:
        print('right')
    else:
        print('left')
else:                                # base is horizontal -> pointing up or down
    if rect_center[1] > hor_row:
        print('down')
    else:
        print('up')
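Before running the snippet above, you need the cropped triangle image from steps 1-2. A minimal, self-contained sketch of that part (the synthetic canvas and all names are my own assumptions):

import cv2
import numpy as np

# Hypothetical setup: a white right-pointing triangle on a black canvas.
canvas = np.zeros((100, 100), np.uint8)
cv2.fillConvexPoly(canvas, np.array([[20, 20], [20, 80], [80, 50]], np.int32), 255)

contours, _ = cv2.findContours(canvas, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(contours[0])   # step 1: bounding rectangle
img = canvas[y:y + h, x:x + w]               # step 2: crop; this is the img used above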

Well, Mark has mentioned a solution that may not be as efficient but is perhaps more accurate. I think this one should be more efficient but perhaps less accurate. But since you already have code that finds triangles, try adding the following after you have found a triangle contour:
hull = cv2.convexHull(cnt) # convex hull of contour
hull = cv2.approxPolyDP(hull,0.1*cv2.arcLength(hull,True),True)
# You can double check if the contour is a triangle here
# by something like len(hull) == 3
You should get 3 hull points for a triangle; these should be its 3 vertices. Given that your triangles only 'face' one of 4 directions: for a triangle facing left or right, one hull vertex's Y coordinate will be close to the centroid's Y coordinate, and whether it points left or right depends on whether that vertex's X is less than or greater than the centroid's X. Similarly, compare the hull and centroid X and Y values for a triangle pointing up or down, as in the sketch below.
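A hedged sketch of that test (the function name and the pixel tolerance tol are my own assumptions, not from the answer above):

import cv2

def triangle_direction(cnt, tol=5):
    # Classify a triangle contour as up/down/left/right by comparing
    # the approximated hull vertices against the centroid.
    hull = cv2.convexHull(cnt)
    hull = cv2.approxPolyDP(hull, 0.1 * cv2.arcLength(hull, True), True)
    if len(hull) != 3:
        return None                          # not a triangle
    M = cv2.moments(cnt)
    cx, cy = M['m10'] / M['m00'], M['m01'] / M['m00']
    for x, y in hull.reshape(-1, 2):
        if abs(y - cy) < tol:                # vertex level with centroid: apex of a left/right triangle
            return 'right' if x > cx else 'left'
        if abs(x - cx) < tol:                # vertex in line with centroid: apex of an up/down triangle
            return 'down' if y > cy else 'up'    # image y grows downward
    return None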

Related

De-Skewing image

I am unable to figure out how this deskew function works:
def deskew(img):
    m = cv2.moments(img)
    if abs(m['mu02']) < 1e-2:
        return img.copy()
    skew = m['mu11']/m['mu02']
    M = np.float32([[1, skew, -0.5*SZ*skew], [0, 1, 0]])
    img = cv2.warpAffine(img, M, (SZ, SZ), flags=affine_flags)
    return img
I know that a moment is a quantitative measure of shape. In image processing, the moments give information about the total area or intensity, the centroid of the shape, and the orientation of the shape.
Area or total mass: the zeroth moment M(0,0) gives the total mass or area. In image processing, M(0,0) is the sum of all pixel values; if the image is binary, that sum gives the area.
Center of mass or centroid: dividing the first moments by the total mass gives the centroid, the point where the shape would balance perfectly on the tip of a pin:
(M(1,0)/M(0,0), M(0,1)/M(0,0))
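For instance, a minimal, self-contained check of those formulas (the synthetic image is my own example):

import cv2
import numpy as np

# A 20x60 white rectangle on a black canvas.
img = np.zeros((100, 100), np.uint8)
img[40:60, 20:80] = 255

m = cv2.moments(img, binaryImage=True)             # treat nonzero pixels as 1
area = m['m00']                                    # zeroth moment: pixel count
cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']  # centroid
print(area, (cx, cy))                              # 1200.0 and roughly (49.5, 49.5)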
I think the image from the tutorial you got the code from gives the intuitive idea pretty well:
To deskew the image, they estimate the skew along the x axis from the ratio of the covariance mu11 to the vertical variance mu02, which is why the fraction in skew = m['mu11']/m['mu02'] is written that way. They then apply a shear matrix that undoes that skew. To deskew relative to the center of the top edge of the image, rather than the (0, 0) point, they also add a translation, which is where M[0, 2] = -0.5*SZ*skew comes from.
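As a concrete usage sketch: SZ and affine_flags are not shown in the question, so the values below are assumptions taken from OpenCV's digits tutorial, which this snippet appears to come from.

import cv2
import numpy as np

SZ = 20                                                 # side of each digit cell (assumed)
affine_flags = cv2.WARP_INVERSE_MAP | cv2.INTER_LINEAR  # assumed, per the tutorial

def deskew(img):
    m = cv2.moments(img)
    if abs(m['mu02']) < 1e-2:
        return img.copy()                # almost no vertical variance: nothing to undo
    skew = m['mu11'] / m['mu02']
    M = np.float32([[1, skew, -0.5 * SZ * skew], [0, 1, 0]])
    return cv2.warpAffine(img, M, (SZ, SZ), flags=affine_flags)

digit = np.zeros((SZ, SZ), np.uint8)
cv2.line(digit, (6, 18), (13, 2), 255, 2)    # a slanted stroke to straighten
print(deskew(digit).sum() > 0)               # True: the warped stroke survives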

Count non-zero pixels in area rotated rectangle

I've got a binary image with an object and a rotated rectangle over it, found with cv2.findContours and cv2.minAreaRect. The image is normalized to [0; 1].
What is the most efficient way to count non-zero area within the bounding rectangle?
Create a new zero-valued Mat that has the same size as your original image.
Draw your rotated rectangle on it (fillConvexPoly using the RotatedRect vertices).
Bitwise_and this image with your original mask.
Apply the findNonZero function on the result image.
You may also apply the previous steps on an ROI of the image, since you have the bounding box of your rotated rectangle.
Following Humam Helfawi's answer, I tuned the suggested steps a bit; the following code seems to do what I need:
rectangles = [cv2.minAreaRect(cnt) for cnt in contours]
for rect in rectangles:
    rect = cv2.boxPoints(rect)
    rect = np.int0(rect)
    coords = cv2.boundingRect(rect)
    rect[:, 0] = rect[:, 0] - coords[0]   # shift the box into the ROI's frame
    rect[:, 1] = rect[:, 1] - coords[1]
    area = cv2.contourArea(rect)
    zeros = np.zeros((coords[3], coords[2]), np.uint8)
    cv2.fillConvexPoly(zeros, rect, 255)
    im = greyscale[coords[1]:coords[1] + coords[3],
                   coords[0]:coords[0] + coords[2]]
    print(np.sum(cv2.bitwise_and(zeros, im)) / 255)
contours is a list of points. You can fill this shape on an empty binary image of the same size using cv2.fillConvexPoly and then use cv2.countNonZero or numpy.count_nonzero to get the number of occupied pixels, as sketched below.
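A self-contained sketch of that variant (the synthetic circle and all names are my own):

import cv2
import numpy as np

# Synthetic binary image: a filled circle of 1s as the 'object'.
binary = np.zeros((100, 100), np.uint8)
cv2.circle(binary, (50, 50), 20, 1, -1)

cnts, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
box = cv2.boxPoints(cv2.minAreaRect(cnts[0])).astype(np.int32)

canvas = np.zeros_like(binary)                   # empty image, same size
cv2.fillConvexPoly(canvas, box, 1)               # rasterize the rotated rect
inside = cv2.bitwise_and(canvas, binary)         # object pixels inside the rect
print(cv2.countNonZero(inside))                  # non-zero pixel count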

Why does the projection of an image over 3d points show this distortion?

I have a question regarding the projection of an image over a set of 3D points. The image is given to me as a JPG, together with position and attitude information of the camera relative to a cartesian coordinate system (Xc,Yc,Zc and yaw, pitch, roll), as well as the horizontal and vertical field of view (in degrees).
Points are given using solely their 3d position in the same coordinate system (Xp,Yp,Zp).
In my coordinate system, Z is up. To project the image onto the points, I
compute the vector from camera to each point
Vector3 c2p = (Xp,Yp,Zp)-(Xc,Yc,Zc);
rotate c2p according to my camera's attitude (quaternion):
Vector3 c2pCamFrame = getCamQuaternion().conjugate().rotate(c2p);
compute azimuth and elevation from the camera's "center ray" to the point:
float azimuth = atan2(c2pCamFrame.x(), c2pCamFrame.y());
float elevation = atan2(c2pCamFrame.z(),sqrt(pow(c2pCamFrame.x(),2)+pow(c2pCamFrame.y(),2)));
if azimuth and elevation are within the field of view, I assign the color of the corresponding pixel to the point.
This works almost perfectly, and the "almost" motivates my question. Let me show you:
I cannot figure out why the elevation of the projection is distorted. In the bottom right of the image, you can see that points outside the frustum (exceeding the elevation) actually become colored. This distortion is zero at an azimuth of 0 degrees and peaks at the left and right edges of the image, creating the pincushion ("pillow") distortion.
Why does this distortion appear? I'd love to understand this problem both in geometrical as well as mathematical terms. Thank you!
The field of view angles are only valid on the principal axes. But you can do it the other way around. I.e. calculate the x/y bounds from the angles:
maxX = tan(horizontal_fov / 2)
maxY = tan(vertical_fov / 2)
And check
if(abs(c2pCamFrame.x() / c2pCamFrame.z()) <= maxX
&& abs(c2pCamFrame.y() / c2pCamFrame.z()) <= maxY)
Additionally, you might want to check if the points are in front of the camera:
... && c2pCamFrame.z() > 0
This assumes a left-handed coordinate system.
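A rough Python sketch of that corrected test (the camera is assumed to look along +z, the point is assumed to be already rotated into the camera frame, and the function name is mine):

import numpy as np

def in_frustum(p_cam, hfov_deg, vfov_deg):
    # p_cam: point in the camera frame; the camera looks along +z.
    x, y, z = p_cam
    if z <= 0:
        return False                             # behind the camera
    max_x = np.tan(np.radians(hfov_deg) / 2)     # x/z bound from horizontal FOV
    max_y = np.tan(np.radians(vfov_deg) / 2)     # y/z bound from vertical FOV
    return abs(x / z) <= max_x and abs(y / z) <= max_y

print(in_frustum((0.2, 0.1, 1.0), 90, 60))       # True: well inside a 90x60 FOV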

Find size of inner rect of a circle

I have a circle, say of radius 10, and I can find the outer bounding rect easily enough, since its width and height equal the circle's diameter, but what I need is the inner bounding rect. Does anyone know how to calculate the difference in size between the outer and inner bounding rectangles of a circle?
Here's an image to illustrate what I'm talking about. The red rectangle is the outer bounding box of the circle, which I know. The yellow rectangle is the inner bounding rectangle of the circle; I need to find the difference in size between it and the outer rectangle.
My first guess for finding the difference is to locate one of the four points of the inner rectangle along the circumference of the circle (each point sitting at a 45-degree offset), and then just find the difference between that point and the related corner of the outer rect.
EDIT: Based off of the solution given by Steve B., I've come up with the algorithm to get what I want, which is the following:
r*2 - sqrt(2)*r
If the radius is r, the outer square has edge length r*2.
The inner square has edge length sqrt(2*r^2) = sqrt(2)*r.
So the difference equals 2*r - sqrt(2)*r = (2 - sqrt(2))*r.
You know the radius, and you have a right triangle (a 90-degree corner) with one vertex at the center of your circle and the other two at two adjacent corners of your inner square.
Now, knowing two sides of the triangle, you can use Pythagoras:
x^2 = a^2 + b^2 = 2*r^2
So
x = sqrt(2*r^2)
with r the radius of the circle and x the side of the square.
It's simple geometry: the outer square has edge length 2*R, while the inner square has a diagonal equal to 2*R, so its edge length is sqrt(2)*R. The ratio of the outer square's edge to the inner square's edge is therefore sqrt(2).
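Numerically, for instance (a trivial sketch in Python):

import math

r = 10.0
outer = 2 * r                        # edge of the outer (bounding) square
inner = math.sqrt(2) * r             # edge of the inner (inscribed) square
print(outer, inner, outer - inner)   # 20.0, ~14.142, ~5.858 = (2 - sqrt(2))*r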

Convert 3D(x,y,z) to 2D(x,y) (orthogonal) along its direction

I have gone through as many of the study resources available on the internet as possible, which come in the form of simple equations, vectors, or trigonometric identities. I couldn't find a way of doing the following:
Assuming Y is up in a 3D world.
I need to draw two 2D trajectories orthogonally (not the projections) for a 3D trajectory: say, the XY-plane for a side view of the trajectory w.r.t. the trajectory itself, and the XZ-plane for a top view of the same.
I have all the 3D points of the 3D trajectory, initial velocity, both the angles can be calculated by vector mathematics.
How should I proceed further?
Refer to the image below: a curve shown at different angles, which can lose its significance if simply projected onto the XY-plane. All I want is to convert the red curve along itself, the green curve along the green curve, and so on. And further, how would I map the side view to a plane? The top view is comparatively easy, done just by taking the X and Z ordinates of each point.
That is the requirement. :)
I don't think I understand the question, but I'll answer my interpretation anyway.
You have a 3D trajectory described by a sequence of points p0, ..., pN. We are given an angle v for a plane P parallel to the Y-axis, and wish to compute the 2D coordinates (di, hi) of the points pi projected onto that plane, where hi is the height coordinate in the direction Y and di is the distance coordinate in the direction v. Assume p0 = (0, 0, 0) or else subtract p0 from all vectors.
Let pi = (xi, yi, zi). The height coordinate is hi = yi. Assume the angle v is given relative to the Z-axis. The direction vector for v is then r = (sin(v), 0, cos(v)), and the distance coordinate becomes di = dot(pi, r).
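A short sketch of that computation (the function and variable names are my own):

import numpy as np

def side_view(points, v):
    # Project 3D points onto the vertical plane at angle v (radians,
    # measured from the Z-axis), per the derivation above.
    p = np.asarray(points, dtype=float)
    p = p - p[0]                               # shift so p0 is the origin
    r = np.array([np.sin(v), 0.0, np.cos(v)])  # horizontal in-plane direction
    d = p @ r                                  # distance coordinate d_i
    h = p[:, 1]                                # height coordinate h_i (Y is up)
    return np.column_stack([d, h])

traj = [(0, 0, 0), (1, 2, 1), (2, 3, 3)]
print(side_view(traj, np.radians(30)))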
