Apply filters to Hough Line Detection - python-3.x

In my application, I use Hough Line Detection to detect lines inside an image. What I'm trying to do is to retrieve only the lines that compose the border and the corners of each square of the chessboard. How can I apply filters to obtain a clear view of the lines?
My idea is to apply filters that check the angle between lines (90 degrees) or the distance between them, so that only the relevant lines are kept. The final goal is to compute the intersections between these lines to get the coordinates of each square.
Code:
import math
import cv2
import numpy as np

chessBoard = cv2.imread('img.png')
gray = cv2.cvtColor(chessBoard, cv2.COLOR_BGR2GRAY)
dst = cv2.Canny(gray, 50, 200)
lines = cv2.HoughLines(dst, 1, math.pi / 180.0, 100, np.array([]), 0, 0)
num_lines, _, _ = lines.shape
for i in range(num_lines):
    rho = lines[i][0][0]
    theta = lines[i][0][1]
    cos_t = math.cos(theta)
    sin_t = math.sin(theta)
    x0, y0 = cos_t * rho, sin_t * rho
    pt1 = (int(x0 + 1000 * (-sin_t)), int(y0 + 1000 * cos_t))
    pt2 = (int(x0 - 1000 * (-sin_t)), int(y0 - 1000 * cos_t))
    cv2.line(chessBoard, pt1, pt2, (0, 255, 0), 2, cv2.LINE_AA)
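A minimal sketch of the angle filter and the line intersections (assuming the board is roughly axis-aligned, so the wanted lines have theta near 0 or near pi/2; the helper names are illustrative, not from the original post):

import math
import numpy as np

def filter_grid_lines(lines, tol=np.deg2rad(5)):
    # keep only lines whose theta is close to 0/pi (near-vertical lines in
    # Hough (rho, theta) form) or close to pi/2 (near-horizontal lines)
    kept = []
    for line in lines:
        rho, theta = line[0]
        if min(theta, math.pi - theta) < tol or abs(theta - math.pi / 2) < tol:
            kept.append((rho, theta))
    return kept

def intersection(l1, l2):
    # solve x*cos(t) + y*sin(t) = rho for the two lines;
    # raises numpy.linalg.LinAlgError for (near-)parallel lines
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[math.cos(t1), math.sin(t1)],
                  [math.cos(t2), math.sin(t2)]])
    b = np.array([r1, r2])
    x, y = np.linalg.solve(A, b)
    return int(round(x)), int(round(y))

Intersecting each kept near-vertical line with each kept near-horizontal line then yields the corner coordinates of the squares.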

Related

How to count the vehicles if boundingRect touch the line

Below is the code I am trying:
import cv2
import numpy as np

drawing = False
point1 = ()
point2 = ()
# resize factors were not defined in the original snippet; values assumed here
scaling_factorx = 0.5
scaling_factory = 0.5

def mouse_drawing(event, x, y, flags, params):
    global point1, point2, drawing
    if event == cv2.EVENT_LBUTTONDOWN:
        if drawing is False:
            drawing = True
            point1 = (x, y)
        else:
            drawing = False
    elif event == cv2.EVENT_MOUSEMOVE:
        if drawing is True:
            point2 = (x, y)

cap = cv2.VideoCapture("new2.asf")
cv2.namedWindow("App", cv2.WINDOW_FREERATIO)
cv2.setMouseCallback("App", mouse_drawing)
fgbg = cv2.createBackgroundSubtractorMOG2()
kernel = np.ones((5, 5), np.uint8)

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, None, fx=scaling_factorx, fy=scaling_factory,
                       interpolation=cv2.INTER_AREA)
    imgray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fgmask1 = cv2.GaussianBlur(imgray, (7, 7), 0)
    fgmask = fgbg.apply(fgmask1)
    if point1 and point2:
        cv2.line(frame, point1, point2, (0, 255, 0), 3)
    contours, hierarchy = cv2.findContours(fgmask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    try:
        hierarchy = hierarchy[0]
    except:
        hierarchy = []
    for contour, hier in zip(contours, hierarchy):
        (x, y, w, h) = cv2.boundingRect(contour)
        if w > 80 and h > 80:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("App", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # needed so the window refreshes; Esc quits
        break
How can I write an image of a vehicle with cv2.imwrite once it reaches the manually drawn line? The vehicles get rectangular boxes, which is fine, but some vehicles have more than one box; each vehicle should have only one. The image should be saved only when the vehicle reaches the line, and the other vehicles should not be saved. Please let me know the solution.
First, you need to group intersecting rectangles into one.
You do so by checking the intersection area between each pair of rectangles.
If the intersection area is larger than a pre-defined heuristic ratio of the smaller rectangle's area, then the smaller rectangle should be removed. For example: intersection_area / smaller_rect_area > 0.75.
Please check this answer for the rectangles intersection.
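A minimal sketch of that grouping heuristic (boxes as (x, y, w, h) tuples; the function names are illustrative):

def intersection_area(a, b):
    # overlap area of two axis-aligned rectangles, 0 if they don't intersect
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return w * h if w > 0 and h > 0 else 0

def merge_boxes(boxes, ratio=0.75):
    # visit boxes from largest to smallest; drop a box if it mostly
    # overlaps an already-kept (larger) box
    kept = []
    for box in sorted(boxes, key=lambda b: b[2] * b[3], reverse=True):
        own_area = box[2] * box[3]
        if all(intersection_area(box, k) / own_area <= ratio for k in kept):
            kept.append(box)
    return kept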
Second, to check that a rectangle has passed the line:
Use your points to find the parameters for the general line formula: ax + by + c = 0
For each frame, plug the rectangle center coordinates into the formula and keep track of the sign of the result.
If the sign of the result changes that means that the rectangle center has passed the line.
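A sketch of that check, with point1 and point2 taken from the mouse callback in the question:

def line_coefficients(p1, p2):
    # line through p1 and p2 in ax + by + c = 0 form
    a = p2[1] - p1[1]
    b = p1[0] - p2[0]
    c = -(a * p1[0] + b * p1[1])
    return a, b, c

def side_of_line(coeffs, point):
    # sign tells which side of the line the point is on
    a, b, c = coeffs
    return a * point[0] + b * point[1] + c

# per vehicle, remember the previous value; when
# side_of_line(...) * previous_value < 0 the center has crossed the line,
# which is the moment to save the frame with cv2.imwrite(...)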

Finding the characteristics of hand-written arrows with OpenCV

I'm trying to retrieve the orientation of hand-written arrows.
After removing shadows, binarizing, and dilating the lines, the resulting images are shown (images omitted here).
Now I'd like to get the orientation of the arrow, so I tried HoughLines:
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=20)
But it seems to generate too many lines (around 54), and I'd like it to generate only 3 so I can find their intersections. I can group the lines by similar angle (+/- 20 degrees) and then average the angle, but I'm not sure what the rho of an averaged line should be; can somebody please give a simple example?
Is there any other approach which may be more accurate?
I'll be glad to hear; thank you all.
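For the averaging idea itself, a minimal sketch (assuming the input comes from cv2.HoughLines with shape (N, 1, 2), and that the grouped thetas do not straddle the 0/pi wrap-around; a robust version would first normalize negative rho via (rho, theta) -> (-rho, theta - pi)):

import numpy as np

def average_lines(lines, tol=np.deg2rad(20)):
    # greedily group (rho, theta) pairs by theta, then average each group
    groups = []
    for rho, theta in lines[:, 0]:
        for g in groups:
            if abs(theta - np.mean([t for _, t in g])) < tol:
                g.append((rho, theta))
                break
        else:
            groups.append([(rho, theta)])
    return [tuple(np.mean(g, axis=0)) for g in groups]

The rho of the averaged line is simply the mean of the group's rho values; that is meaningful because lines with nearly equal theta share (almost) the same normal direction.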
I suggest a different approach. In summary, the approach goes as follows (made it in a hurry, might need some tuning):
Find the center of the minimum area rectangle (rotated rectangle) that encloses the whole arrow. (The circle drawn in the third image)
Find the center of gravity for all white points. It will be shifted a bit towards the actual head of the arrow. (Drawn in 4th pic as the origin of the eigenvector)
Find eigenvectors for all white points.
Find the displacement vector (the center of gravity - the center of the rotated rectangle)
Now:
Arrow angle (unoriented): the angle of the first eigenvector.
Arrow direction: the sign of the dot product of the first eigenvector and the centers' displacement vector.
Code:
Parts related to PCA are inspired by and mostly copied from this. I only made a minor change to the getOrientation method, adding the following lines before it returns (the center is returned as one tuple so it matches the call below):
angle = (angle - math.pi) * 180 / math.pi
return angle, (mean[0, 0], mean[0, 1]), p1
Code implementing the logic above:
# img is the preprocessed (grayscale) arrow image; imshow is a display helper
# threshold
_, img = cv2.threshold(img, 128, 255, cv2.THRESH_OTSU)
imshow(img)
# close the image to make sure the contour is connected
st_el = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, st_el)
imshow(img)
# get white points
pnts = cv2.findNonZero(img)
# min area rect
rect_center = cv2.minAreaRect(pnts)[0]
# draw rect center
cv2.circle(img, (int(rect_center[0]), int(rect_center[1])), 3, 128, -1)
imshow(img)
angle, pca_center, eigen_vec = getOrientation(pnts, img)
cc_vec = (rect_center[0] - pca_center[0], rect_center[1] - pca_center[1])
dot_product = cc_vec[0] * eigen_vec[0] + cc_vec[1] * eigen_vec[1]
if dot_product > 0:
    angle *= -1
print("Angle = ", angle)
imshow(img)
Edit
I suggest a simpler method. This new method does not depend on PCA for finding the unoriented angle in [0, 180]. Instead, it uses the min-area rectangle angle directly, and it uses the contour moments to find the center of gravity.
Simpler Method Code:
# get white points
pnts = cv2.findNonZero(img)
# min area rect
rect_center, size, angle = cv2.minAreaRect(pnts)
# simple fix for angle to make it in [0, 180]
angle = abs(angle)
if size[0] < size[1]:
    angle += 90
# find center of gravity
M = cv2.moments(img)
gravity_center = (M["m10"] / M["m00"], M["m01"] / M["m00"])
# rot rect unit vector based on angle (converted from degrees to radians)
angle_unit_vec = (math.cos(angle * math.pi / 180), math.sin(angle * math.pi / 180))
# cc_vec = gravity center - rect center
cc_vec = (gravity_center[0] - rect_center[0], gravity_center[1] - rect_center[1])
# if dot product is negative add 180 -> angle between [0, 360]
dot_product = cc_vec[0] * angle_unit_vec[0] + cc_vec[1] * angle_unit_vec[1]
angle += (dot_product < 0) * 180
# draw rect center and gravity center
cv2.circle(img, (int(rect_center[0]), int(rect_center[1])), 3, 128, -1)
cv2.circle(img, (int(gravity_center[0]), int(gravity_center[1])), 3, 20, -1)
imshow(img)
print("Angle = ", angle)
Edit2:
This edit includes these changes:
Use cv2.fitLine() and use the fitted line angle for orientation.
Replace angle_unit_vec with a vector that has the gravity center as the origin and goes parallel to the fitted line.
Code
# get white points
pnts = cv2.findNonZero(img)
# min area rect
rect_center, size, angle = cv2.minAreaRect(pnts)
# fit line to get angle
[vx, vy, x, y] = cv2.fitLine(pnts, cv2.DIST_L12, 0, 0.01, 0.01)
angle = math.atan2(vy, -vx) * 180 / math.pi
M = cv2.moments(img)
gravity_center = (M["m10"] / M["m00"], M["m01"] / M["m00"])
angle_vec = (int(gravity_center[0] + 100 * vx), int(gravity_center[1] + 100 * vy))
# cc_vec = gravity center - rect center
cc_vec = (gravity_center[0] - rect_center[0], gravity_center[1] - rect_center[1])
# if dot product is positive add 180 -> angle between [0, 360]
dot_product = cc_vec[0] * angle_vec[0] + cc_vec[1] * angle_vec[1]
angle += (dot_product > 0) * 180
angle += (angle < 0) * 360
# draw rect center and gravity center
cv2.circle(img, (int(rect_center[0]), int(rect_center[1])), 3, 128, -1)
cv2.circle(img, (int(gravity_center[0]), int(gravity_center[1])), 3, 20, -1)
imshow(img)
print("Angle = ", angle)
Output: using the code from Edit 2 on the first, second, and third test images (output images omitted).

How to recognize and count circles in a rectangle?

I would like to count how many circles are in static rectangles for more than 3 seconds. The circles represent the center of objects recognized by the camera and static rectangles are areas of interest where I would like to count the total amount of circles that are inside the area of interest for more than 3 seconds. Currently I am able to recognize objects in real-time, find the center of each object and draw static rectangles, but I don't know how to do the rest. Below is my current while loop. Any help would be greatly appreciated.
# vs (video stream), net (DNN model), CLASSES, COLORS, color, and
# args["confidence"] are defined earlier in the full script
while True:
    frame = vs.read()
    frame = imutils.resize(frame, width=720)
    box_one = cv2.rectangle(frame, (30, 30), (330, 330), color, 2)
    box_two = cv2.rectangle(frame, (350, 30), (630, 330), color, 2)
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    for i in np.arange(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > args["confidence"]:
            idx = int(detections[0, 0, i, 1])
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")
            label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
            cv2.rectangle(frame, (startX, startY), (endX, endY),
                          COLORS[idx], 2)
            center = ((startX + endX) / 2, (startY + endY) / 2)
            coordinates = (int(center[0]), int(center[1]))
            cv2.circle(frame, coordinates, 5, (255, 255, 255), -1)
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
The output looks like this (screenshot omitted).
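A minimal sketch of the missing piece: a point-in-rectangle test plus a per-object dwell timer. A proper solution would use an object tracker for stable identities; the coarse grid key below is only a stand-in for that, and all names here are illustrative:

import time

def inside(point, rect):
    # rect is (x1, y1, x2, y2)
    x, y = point
    x1, y1, x2, y2 = rect
    return x1 <= x <= x2 and y1 <= y <= y2

entry_times = {}  # first time each (coarsely keyed) center was seen inside

def count_dwelling(centers, rects, min_dwell=3.0, cell=20):
    now = time.time()
    count = 0
    seen = set()
    for cx, cy in centers:
        if not any(inside((cx, cy), r) for r in rects):
            continue
        key = (cx // cell, cy // cell)  # stand-in for a tracker ID
        seen.add(key)
        entry_times.setdefault(key, now)
        if now - entry_times[key] >= min_dwell:
            count += 1
    for key in list(entry_times):  # forget objects that left
        if key not in seen:
            del entry_times[key]
    return count

Calling count_dwelling with the circle centers collected in the loop and rects = [(30, 30, 330, 330), (350, 30, 630, 330)] would give the number of circles that have stayed inside an area of interest for more than 3 seconds.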

Python shapely: How to triangulate just the inner area of the shape

I want to convert this shape to a triangle mesh using shapely, to be used later as a 3D surface in Unity3D, but the result is not good: the triangle mesh covers areas outside the shape.
def get_complex_shape(nb_points = 10):
    nb_shifts = 2
    nb_lines = 0
    shift_parameter = 2
    r = 1
    xy = get_points(0, 0, r, nb_points)
    xy = np.array(xy)
    shifts_indices = np.random.randint(0, nb_points, nb_shifts)  # choose random points
    if nb_shifts > 0:
        xy[shifts_indices] = get_shifted_points(shifts_indices, nb_points, r + shift_parameter, 0, 0)
    xy = np.append(xy, [xy[0]], axis = 0)  # close the circle
    x = xy[:, 0]
    y = xy[:, 1]
    if nb_lines < 1:  # normal circles
        tck, u = interpolate.splprep([x, y], s = 0)
        unew = np.arange(0, 1.01, 0.01)  # from 0 to 1.01 with step 0.01 [the number of points]
        out = interpolate.splev(unew, tck)  # a circle of 101 points
        out = np.array(out).T
    else:  # lines and curves
        out = new_add_random_lines(xy, nb_lines)
    return out
data = get_complex_shape(8)
points = MultiPoint(data)
union_points = cascaded_union(points)
triangles = triangulate(union_points)
The linked picture (omitted here) shows two images: the blue one is the polygon that I want to convert to a mesh of triangles, and the right one is the resulting mesh, which covers more than the inner area of the polygon. How can I cover just the inner area of the polygon?
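shapely's triangulate() computes a Delaunay triangulation of the input vertices, and a Delaunay triangulation always fills the convex hull of the points, which is why triangles spill outside concave parts. A common fix, sketched here reusing `data` from the question, is to build the Polygon itself and keep only the triangles whose interior point lies within it:

from shapely.geometry import Polygon
from shapely.ops import triangulate

polygon = Polygon(data)            # the outline returned by get_complex_shape
candidates = triangulate(polygon)  # Delaunay over the polygon's vertices
inner = [t for t in candidates if t.representative_point().within(polygon)]

For very thin concavities this filter can still keep a triangle that crosses the boundary; a constrained triangulation library (such as the `triangle` package) is the more robust option in that case.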

Template Matching: efficient way to create mask for minMaxLoc?

Template matching in OpenCV is great. And you can pass a mask to cv2.minMaxLoc so that you only search (sort of) in part of the image for the template you want. You can also use a mask at the matchTemplate operation, but this only masks the template.
I want to find a template and I want to be assured that this template is within some other region of my image.
Calculating an accurate mask for minMaxLoc feels heavy. If you calculate a mask the easy way, it ignores the size of the template.
Examples are in order. My input images are shown below. They're a bit contrived: I want to find the candy bar, but only if it's completely inside the white circle of the clock face.
(Input images omitted: clock1, clock2, and the template.)
In clock1, the candy bar is inside the circular clock face and it's a "PASS". But in clock2, the candy bar is only partially inside the face and I want it to be a "FAIL". Here's a code sample for doing it the easy way. I use cv2.HoughCircles to find the clock face.
import numpy as np
import cv2

img = cv2.imread('clock1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
template = cv2.imread('template.png')
t_h, t_w = template.shape[0:2]  # template height and width
# find circle in gray image using Hough transform
circles = cv2.HoughCircles(gray, method = cv2.HOUGH_GRADIENT, dp = 1,
                           minDist = 150, param1 = 50, param2 = 70,
                           minRadius = 131, maxRadius = 200)
i = circles[0, 0]
x0 = i[0]
y0 = i[1]
r = i[2]
# display circle on color image
cv2.circle(img, (x0, y0), r, (0, 255, 0), 2)
# do the template match
result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
# finally, here is the part that gets tricky. we want to find the highest
# rated match inside the circle and we'd like to use minMaxLoc
# make mask by drawing circle on zero array
mask = np.zeros(result.shape, dtype = np.uint8)  # minMaxLoc throws an error without np.uint8
cv2.circle(mask, (x0, y0), r, color = 1, thickness = -1)
# call minMaxLoc
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result, mask = mask)
# draw found rectangle on img
if max_val > 0.4:  # use 0.4 as threshold for finding candy bar
    cv2.rectangle(img, max_loc, (max_loc[0]+t_w, max_loc[1]+t_h), (0,255,0), 4)
cv2.imwrite('output.jpg', img)
Output using clock1; output using clock2, which finds the candy bar even though part of it is outside the circle (output images omitted).
So to properly make a mask, I use a bunch of NumPy operations. I make four separate masks (one for each corner of the template bounding box) and then AND them together. I'm not aware of any convenience functions in OpenCV that would do the mask for me. I'm a little nervous that all of the array operations will be expensive. Is there a better way to do this?
h, w = result.shape[0:2]
# make arrays that hold x,y coords
grid = np.indices((h, w))
x = grid[1]
y = grid[0]
top_left_mask = np.hypot(x - x0, y - y0) - r < 0
top_right_mask = np.hypot(x + t_w - x0, y - y0) - r < 0
bot_left_mask = np.hypot(x - x0, y + t_h - y0) - r < 0
bot_right_mask = np.hypot(x + t_w - x0, y + t_h - y0) - r < 0
mask = np.logical_and.reduce((top_left_mask, top_right_mask,
                              bot_left_mask, bot_right_mask))
mask = mask.astype(np.uint8)
cv2.imwrite('mask.png', mask*255)
Here's what the "fancy" mask looks like (image omitted). Seems about right. It cannot be circular because of the template shape. If I run clock2.jpg with this mask, it works: no candy bars are identified (output image omitted). But I wish I could do it in fewer lines of code...
EDIT:
I've done some profiling. I ran 100 cycles of the "easy" way and the "accurate" way and calculated frames per second (fps):
easy way: 12.7 fps
accurate way: 7.8 fps
So there is some price to pay for making the mask with NumPy. These tests were done on a relatively powerful workstation; it could get uglier on more modest hardware...
Method 1: 'mask' image before cv2.matchTemplate
Just for kicks, I tried to make my own mask of the image that I pass to cv2.matchTemplate to see what kind of performance I can achieve. To be clear, this isn't a proper mask -- I set all of the pixels to ignore to one color (black or white). This is to get around the fact that only TM_SQDIFF and TM_CCORR_NORMED support a proper mask.
@Alexander Reynolds makes a very good point in the comments that some care must be taken if the template image (the thing we're trying to find) has lots of black or lots of white. For many problems, we will know a priori what the template looks like and we can specify a white background or black background.
I use cv2.multiply, which seems to be faster than numpy.multiply. cv2.multiply has the added advantage that it automatically clips the results to the range 0 to 255.
import numpy as np
import cv2
import time

img = cv2.imread('clock1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
template = cv2.imread('target.jpg')
t_h, t_w = template.shape[0:2]  # template height and width
mask_background = 'WHITE'
start_time = time.time()
for i in range(100):  # do 100 cycles for timing
    # find circle in gray image using Hough transform
    circles = cv2.HoughCircles(gray, method = cv2.HOUGH_GRADIENT, dp = 1,
                               minDist = 150, param1 = 50, param2 = 70,
                               minRadius = 131, maxRadius = 200)
    i = circles[0, 0]
    x0 = i[0]
    y0 = i[1]
    r = i[2]
    # display circle on color image
    cv2.circle(img, (x0, y0), r, (0, 255, 0), 2)
    if mask_background == 'BLACK':  # black = 0, white = 255 on grayscale
        mask = np.zeros(img.shape, dtype = np.uint8)
    elif mask_background == 'WHITE':
        mask = 255 * np.ones(img.shape, dtype = np.uint8)
    cv2.circle(mask, (x0, y0), r, color = (1, 1, 1), thickness = -1)
    img2 = cv2.multiply(img, mask)  # element-wise multiplication;
    # values > 255 are truncated at 255
    # do the template match
    result = cv2.matchTemplate(img2, template, cv2.TM_CCOEFF_NORMED)
    # call minMaxLoc
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
    # draw found rectangle on img
    if max_val > 0.4:
        cv2.rectangle(img, max_loc, (max_loc[0]+t_w, max_loc[1]+t_h), (0,255,0), 4)
fps = 100 / (time.time() - start_time)
print('fps ', fps)
cv2.imwrite('output.jpg', img)
Profiling results:
BLACK background 12.3 fps
WHITE background 12.1 fps
This method has very little performance hit relative to the 12.7 fps in the original question. However, it has the drawback that it will still find templates that stick out over the edge a little. Depending on the exact nature of the problem, this may be acceptable in many applications.
Method 2: use cv2.boxFilter to create mask for minMaxLoc
In this technique, we start with a circular mask (as in the OP), but then modify it with cv2.boxFilter, changing the anchor from the default kernel center to the top-left corner (0, 0).
import numpy as np
import cv2
import time

img = cv2.imread('clock1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
template = cv2.imread('target.jpg')
t_h, t_w = template.shape[0:2]  # template height and width
print('t_h, t_w ', t_h, ' ', t_w)
start_time = time.time()
for i in range(100):
    # find circle in gray image using Hough transform
    circles = cv2.HoughCircles(gray, method = cv2.HOUGH_GRADIENT, dp = 1,
                               minDist = 150, param1 = 50, param2 = 70,
                               minRadius = 131, maxRadius = 200)
    i = circles[0, 0]
    x0 = i[0]
    y0 = i[1]
    r = i[2]
    # display circle on color image
    cv2.circle(img, (x0, y0), r, (0, 255, 0), 2)
    # do the template match
    result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    # finally, here is the part that gets tricky. we want to find the highest
    # rated match inside the circle and we'd like to use minMaxLoc
    # start to make mask by drawing circle on zero array
    mask = np.zeros(result.shape, dtype = np.float32)  # np.float32 (np.float is deprecated)
    cv2.circle(mask, (x0, y0), r, color = 1, thickness = -1)
    mask = cv2.boxFilter(mask,
                         ddepth = -1,
                         ksize = (t_w, t_h),
                         anchor = (0, 0),
                         normalize = True,
                         borderType = cv2.BORDER_ISOLATED)
    # mask now contains values from zero to 1. we want to make anything
    # less than 1 equal to zero
    _, mask = cv2.threshold(mask, thresh = 0.9999,
                            maxval = 1.0, type = cv2.THRESH_BINARY)
    mask = mask.astype(np.uint8)
    # call minMaxLoc
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result, mask = mask)
    # draw found rectangle on img
    if max_val > 0.4:
        cv2.rectangle(img, max_loc, (max_loc[0]+t_w, max_loc[1]+t_h), (0,255,0), 4)
fps = 100 / (time.time() - start_time)
print('fps ', fps)
cv2.imwrite('output.jpg', img)
This code gives a mask identical to the OP's, but at 11.89 fps. This technique gives us more accuracy with a slightly larger performance hit than Method 1.
