I am creating a program which allows a user to annotate images with points.
The program lets the user zoom into an image so that they can annotate more precisely.
The program zooms into an image by doing the following:
1. Find the center of the image.
2. Find the minimum and maximum coordinates of the new cropped image relative to the center.
3. Crop the image.
4. Resize the cropped image to the original size.
For this I have written the following Python code:
import cv2

def zoom_image(original_image, cut_off_percentage, list_of_points):
    height, width = original_image.shape[:2]
    center_x, center_y = int(width / 2), int(height / 2)
    half_new_width = center_x - int(center_x * cut_off_percentage)
    half_new_height = center_y - int(center_y * cut_off_percentage)
    min_x, max_x = center_x - half_new_width, center_x + half_new_width
    min_y, max_y = center_y - half_new_height, center_y + half_new_height
    # I want to include the max coordinates in the new image, hence the +1
    cropped = original_image[min_y:max_y+1, min_x:max_x+1]
    new_height, new_width = cropped.shape[:2]
    resized = cv2.resize(cropped, (width, height))
    translate_points(list_of_points, height, width, new_height, new_width, min_x, min_y)
I want to resize the image to the original width and height so that the user always works on the same "surface",
regardless of how zoomed in the image is.
The problem I encounter is how to correctly scale the points (annotations) when doing this. My algorithm for this was the following:
1. Translate the points on the original image by subtracting min_x from the x coordinate and min_y from the y coordinate.
2. Calculate constants for scaling the x and y coordinates of the points.
3. Multiply the coordinates by those constants.
For this I use the following Python code:
import cv2

def translate_points(list_of_points, height, width, new_height, new_width, min_x, min_y):
    # Calculate constants for scaling the points
    scale_x, scale_y = width / new_width, height / new_height
    # Translate and scale the points
    for point in list_of_points:
        point.x = (point.x - min_x) * scale_x
        point.y = (point.y - min_y) * scale_y
This code doesn't work. If I zoom in once, the offset of the points is hard to detect, but it is there. If I keep zooming in, the "drift" of the points becomes much easier to see. Here are images to provide examples. On the original image (1440x850) I placed a point in the middle of the blue crosshair. The more I zoom into the image, the easier it is to see that the algorithm doesn't work, especially with bigger cut-offs.
Original image. The blue crosshair is the middle point of the image. The red angles indicate what the borders will be after the image is zoomed once.
Image after zooming in once.
Image after zooming in 5 times. Clearly, the green point is no longer in the middle of the image.
The cut_off_percentage I used is 15% (meaning that I keep 85% of the width and height of the original image, calculated from the center).
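For concreteness, here is the crop-bound arithmetic of zoom_image applied to the 1440x850 example with a 15% cut-off (my own worked numbers, just to make the setup explicit):

width, height = 1440, 850
cut_off_percentage = 0.15
center_x, center_y = 720, 425
half_new_width = center_x - int(center_x * cut_off_percentage)    # 720 - 108 = 612
half_new_height = center_y - int(center_y * cut_off_percentage)   # 425 - 63 = 362
min_x, max_x = center_x - half_new_width, center_x + half_new_width    # (108, 1332)
min_y, max_y = center_y - half_new_height, center_y + half_new_height  # (63, 787)
# with the +1 the cropped image is 1225 x 725 pixels, so
# scale_x = 1440 / 1225 ~= 1.176 and scale_y = 850 / 725 ~= 1.172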
I have also tried the Augmentit Python library.
The library has functions for cropping images and resizing them together with points. The library also causes the points to drift. This is expected, since the code I implemented and the library's functions use the same algorithm.
Additionally, I have checked whether this is a rounding problem. It is not. The library rounds the points after multiplying the coordinates by the scales. Regardless of how they are rounded, the points are still off by 4-5 px. This error increases the more I zoom into the picture.
EDIT: A more detailed explanation is given here, since I didn't understand the given answer.
The following is an image of a right human hand.
Image of a hand in my program
The original dimensions of this image are 1440 pixels in width and 850 pixels in height. As you can see in this image, I have annotated the right wrist at location (756.0, 685.0). To check whether my program works correctly, I opened this exact image in GIMP and placed a white point at location (756.0, 685.0). The result is the following:
Image of a hand in GIMP
The coordinates in the program work correctly. Now, if I calculate the parameters given in the first answer according to the code given there, I get the following:
vec = [756, 685]
hh = 425
hw = 720
cov = [720, 425]
These parameters make sense to me. Now I want to zoom the image to a scale of 1.15. I crop the image by choosing the center point and calculating the low and high values which indicate what rectangle of the image to keep and what to cut. On the following image you can see what is kept after cutting (everything inside the red rectangle).
What is kept when cutting
Lows and highs when cutting are:
xb = [95,1349]
yb = [56,794]
Size of cropped image: 1254 x 738
This cropped image will be resized back to the original size. However, when I do that, my annotation gets completely wrong coordinates when using the parameters described above.
After zoom
This is the code I used to crop, resize, and rescale the points, based on the first answer:
width, height = image.shape[:2]
center_x, center_y = int(width / 2), int(height / 2)
scale = 1.15
scaled_width = int(center_x / scale)
scaled_height = int(center_y / scale)
xlow = center_x - scaled_width
xhigh = center_x + scaled_width
ylow = center_y - scaled_height
yhigh = center_y + scaled_height
xb = [xlow, xhigh]
yb = [ylow, yhigh]
cropped = image[yb[0]:yb[1], xb[0]:xb[1]]
resized = cv2.resize(cropped, (width, height), interpolation=cv2.INTER_CUBIC)
# Rescaling points
cov = (width / 2, height / 2)
width, height = resized.shape[:2]
hw = width / 2
hh = height / 2
for point in points:
    x, y = point.scx, point.scy
    x -= xlow
    y -= ylow
    x -= cov[0] - (hw / scale)
    y -= cov[1] - (hh / scale)
    x *= scale
    y *= scale
    x = int(x)
    y = int(y)
    point.set_coordinates(x, y)
So this really is an integer rounding issue. It's magnified at high zoom levels, because being off by 1 pixel at 20x zoom throws you off much further. I tried out two versions of my crop-n-zoom GUI: one with int rounding, another without.
You can see that the one with int rounding keeps approaching the correct position as the zoom grows, but as soon as the zoom takes another step, it rebounds back to being wrong. The non-rounded version sticks right up against the mid-lines (denoting the proper position) the whole time.
Note that the resized rectangle (the one drawn on the non-zoomed image) blurs past the midlines. This is because of the resize interpolation from OpenCV. The yellow rectangle that I'm using to check that my points are correctly scaling is redrawn on the zoomed frame so it stays crisp.
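To get a feel for the size of the error, here is a quick numeric check of my own (using the 850-px image height from the question, so hh = 425):

hh = 425                         # half-height of an 850-px-tall image
for scale in [1.15, 5.0, 20.0]:
    exact = hh / scale           # exact half-height of the crop
    truncated = int(hh / scale)  # what the int-rounded version uses
    print(scale, (exact - truncated) * scale)
# 1.15 -> ~0.65 px, 5.0 -> 0.0 px, 20.0 -> 5.0 px of on-screen offset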
With Int Rounding
Without Int Rounding
I have the center-of-view locked to the bottom right corner of the rectangle for this demo.
import cv2
import numpy as np
# clamp value
def clamp(val, low, high):
    if val < low:
        return low
    if val > high:
        return high
    return val
# bound the center-of-view
def boundCenter(cov, scale, hh, hw):
    # scale half res
    scaled_hw = int(hw / scale)
    scaled_hh = int(hh / scale)
    # bound
    xlow = scaled_hw
    xhigh = (2 * hw) - scaled_hw
    ylow = scaled_hh
    yhigh = (2 * hh) - scaled_hh
    cov[0] = clamp(cov[0], xlow, xhigh)
    cov[1] = clamp(cov[1], ylow, yhigh)
# do a zoomed view
def zoomView(orig, cov, scale, hh, hw):
    # calculate crop
    scaled_hh = int(hh / scale)
    scaled_hw = int(hw / scale)
    xlow = cov[0] - scaled_hw
    xhigh = cov[0] + scaled_hw
    ylow = cov[1] - scaled_hh
    yhigh = cov[1] + scaled_hh
    xb = [xlow, xhigh]
    yb = [ylow, yhigh]
    # crop and resize
    copy = np.copy(orig)
    crop = copy[yb[0]:yb[1], xb[0]:xb[1]]
    display = cv2.resize(crop, (width, height), interpolation=cv2.INTER_CUBIC)
    return display
# draw vector shape
def drawVec(img, vec, pos, cov, hh, hw, scale):
    con = []
    for point in vec:
        # unpack point
        x, y = point
        x += pos[0]
        y += pos[1]
        # here's the int version
        # Note: this is the same as xlow and ylow from the above function
        # x -= cov[0] - int(hw / scale)
        # y -= cov[1] - int(hh / scale)
        # rescale point
        x -= cov[0] - (hw / scale)
        y -= cov[1] - (hh / scale)
        x *= scale
        y *= scale
        x = int(x)
        y = int(y)
        # add
        con.append([x, y])
    con = np.array(con)
    cv2.drawContours(img, [con], -1, (0, 200, 200), -1)
# font stuff
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 1
fontColor = (255, 100, 0)
thickness = 2

# draw blank
res = (800, 1200, 3)
blank = np.zeros(res, np.uint8)
print(blank.shape)

# draw a rectangle on the original
cv2.rectangle(blank, (100, 100), (400, 200), (200, 150, 0), -1)

# vectored shape
# comparison shape
bshape = [[100, 100], [400, 100], [400, 200], [100, 200]]
bpos = [0, 0]  # offset
# random shape
vshape = [[148, 89], [245, 179], [299, 67], [326, 171], [385, 222], [291, 235], [291, 340], [229, 267], [89, 358], [151, 251], [57, 167], [167, 164]]
vpos = [100, 100]  # offset

# get original image res
height, width = blank.shape[:2]
hh = int(height / 2)
hw = int(width / 2)

# center of view
cov = [600, 400]
camera_spd = 5

# scale
scale = 1
scale_step = 0.2

# loop
done = False
while not done:
    # crop and show image
    display = zoomView(blank, cov, scale, hh, hw)
    # drawVec(display, vshape, vpos, cov, hh, hw, scale)
    drawVec(display, bshape, bpos, cov, hh, hw, scale)

    # draw a dot in the middle
    cv2.circle(display, (hw, hh), 4, (0, 0, 255), -1)

    # draw center lines
    cv2.line(display, (hw, 0), (hw, height), (0, 0, 255), 1)
    cv2.line(display, (0, hh), (width, hh), (0, 0, 255), 1)

    # draw zoom text
    cv2.putText(display, "Zoom: " + str(scale), (15, 40), font,
                fontScale, fontColor, thickness, cv2.LINE_AA)

    # show
    cv2.imshow("Display", display)
    key = cv2.waitKey(1)

    # check keys
    done = key == ord('q')

    # Note: if you're actually gonna make a GUI
    # use the keyboard module or something else for this

    # wasd to move center-of-view
    if key == ord('d'):
        cov[0] += camera_spd
    if key == ord('a'):
        cov[0] -= camera_spd
    if key == ord('w'):
        cov[1] -= camera_spd
    if key == ord('s'):
        cov[1] += camera_spd

    # z,x to decrease/increase zoom (lower bound is 1.0)
    if key == ord('x'):
        scale += scale_step
    if key == ord('z'):
        scale -= scale_step
    scale = round(scale, 2)

    # bound cov
    boundCenter(cov, scale, hh, hw)
Edit: Explanation of the drawVec parameters
img: The OpenCV image to be drawn on
vec: A list of [x,y] points
pos: The offset to draw those points at
cov: Center-Of-View, where the middle of our zoomed display is pointed at
hh: Half-Height, the height of "img" divided by 2
hw: Half-Width, the width of "img" divided by 2
I have looked through my code and realized where I was making a mistake that caused the points to be offset.
In my program, I have a canvas of a specific size. The size of the canvas is a constant and is always larger than the images being drawn on the canvas. When the program draws an image on the canvas, it first resizes that image so that it fits on the canvas. The size of the resized image is somewhat smaller than the size of the canvas. The image is usually drawn starting from the top-left corner of the canvas. Since I wanted to always draw the image in the center of the canvas, I shifted the drawing location from the top-left corner of the canvas to another point. This is what I didn't account for when zooming the image.
def zoom(image, ratio, points, canvas_off_x, canvas_off_y):
    height, width = image.shape[:2]
    new_width, new_height = int(ratio * width), int(ratio * height)
    center_x, center_y = int(new_width / 2), int(new_height / 2)
    radius_x, radius_y = int(width / 2), int(height / 2)
    min_x, max_x = center_x - radius_x, center_x + radius_x
    min_y, max_y = center_y - radius_y, center_y + radius_y
    img_resized = cv2.resize(image, (new_width, new_height), interpolation=cv2.INTER_LINEAR)
    img_cropped = img_resized[min_y:max_y+1, min_x:max_x+1]
    for point in points:
        x, y = point.get_original_coordinates()
        x -= canvas_off_x
        y -= canvas_off_y
        x = int((x * ratio) - min_x + canvas_off_x)
        y = int((y * ratio) - min_y + canvas_off_y)
        point.set_scaled_coordinates(x, y)
In the code above, canvas_off_x and canvas_off_y are the offsets from the top-left corner of the canvas.
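For context, a minimal usage sketch (the canvas size, file name, and annotations list here are hypothetical values, only to show how the offsets feed into zoom):

import cv2

canvas_w, canvas_h = 1600, 1000           # hypothetical canvas constants
image = cv2.imread("hand.jpg")            # hypothetical file, e.g. 1440 x 850
h, w = image.shape[:2]
canvas_off_x = (canvas_w - w) // 2        # image drawn centered on the canvas
canvas_off_y = (canvas_h - h) // 2
zoom(image, 1.15, annotations, canvas_off_x, canvas_off_y)  # annotations: existing list of point objects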
Related
I'm using this code to identify tops and bottoms of photographs:
(As of now I only have it working for tops; one thing at a time ;) )
import argparse
import io
import os

from google.cloud import vision
from google.cloud.vision import types  # older google-cloud-vision (v1) style import
from PIL import Image


def get_file(path):
    client = vision.ImageAnnotatorClient()
    for images in os.listdir(path):
        # Loads the image into memory
        with io.open(images, "rb") as image_file:
            content = image_file.read()
        image = types.Image(content=content)
        objects = client.object_localization(image=image).localized_object_annotations
        im = Image.open(images)
        width, height = im.size
        print("Number of objects found: {}".format(len(objects)))
        for object_ in objects:
            if object_.name == "Top":
                print("Top")
                l1 = object_.bounding_poly.normalized_vertices[0].x
                l2 = object_.bounding_poly.normalized_vertices[0].y
                l3 = object_.bounding_poly.normalized_vertices[2].x
                l4 = object_.bounding_poly.normalized_vertices[3].y
                left = l1 * width
                top = l2 * height
                right = l3 * width
                bottom = l4 * height
                im = im.crop((left, top, right, bottom))
                im.save('new_test_cropped.tif', 'tiff')
                im.show()


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Script to automatically crop images based on google vision predictions of 'tops' and 'bottoms'")
    parser.add_argument('--path', help='Include the path to the images folder')
    args = parser.parse_args()
    get_file(args.path)
The images are opened, the clothing is identified, and then the images are cropped and saved to a new file. (Granted, as of now they are being overwritten within the loop, but I'll fix that later.)
What I can't figure out is how to make the crop a 1:1 ratio. I need to save them out square-cropped to be put on our website.
I'll be honest, the normalized_vertices make no sense to me, hence why I'm having trouble.
Starting image:
Output:
Desired Output:
"Normalized" means the coordinates are divided by the width or height of the image, so normalized coordinates [1, 0.5] would indicate all the way (1) across the image and halfway down (0.5).
For a 1:1 aspect ratio you want right - left to be equal to bottom - top. So you want to find out which dimension (width or height) you need to increase, and by how much.
height = abs(top - bottom)
width = abs(right - left)
extrawidth = max(0, height - width)
extraheight = max(0, width - height)
If height > width, we want to increase width but not height. Since height - width > 0, the correct value will go into extrawidth. But because width - height < 0, extraheight will be 0.
Now let's say we want to increase the dimensions of our image symmetrically around the original crop rectangle.
top -= extraheight // 2
bottom += extraheight // 2
left -= extrawidth // 2
right += extrawidth // 2
And finally, do the crop:
im = im.crop((left, top, right, bottom))
For your image, let's say you get left = 93, right = 215, top = 49, and bottom = 205
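Plugging those numbers in (my own arithmetic, just to make the steps concrete):

left, top, right, bottom = 93, 49, 215, 205
height = abs(top - bottom)             # 156
width = abs(right - left)              # 122
extrawidth = max(0, height - width)    # 34
extraheight = max(0, width - height)   # 0
left -= extrawidth // 2                # 93 - 17 = 76
right += extrawidth // 2               # 215 + 17 = 232
# the crop becomes 156 x 156, i.e. a 1:1 aspect ratio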
Before:
After:
I have a bunch of images of gears, and they are all in different orientations; I need them all in the same orientation. That is, there is one reference image, and the rest of the images should be rotated so they look the same as the reference image. I followed these steps: first segment the gear, then try to find the angle using moments, but it's not working correctly. I've attached the 3 images, considering the first image as the reference image, and here's the code so far:
import cv2
import imutils
import numpy as np
from math import atan2, pi


def adjust_gamma(image, gamma=1.0):
    invGamma = 1.0 / gamma
    table = np.array([((i / 255.0) ** invGamma) * 255
                      for i in np.arange(0, 256)]).astype("uint8")
    return cv2.LUT(image, table)


def unsharp_mask(image, kernel_size=(13, 13), sigma=1.0, amount=2.5, threshold=10):
    """Return a sharpened version of the image, using an unsharp mask."""
    blurred = cv2.GaussianBlur(image, kernel_size, sigma)
    sharpened = float(amount + 1) * image - float(amount) * blurred
    sharpened = np.maximum(sharpened, np.zeros(sharpened.shape))
    sharpened = np.minimum(sharpened, 255 * np.ones(sharpened.shape))
    sharpened = sharpened.round().astype(np.uint8)
    if threshold > 0:
        low_contrast_mask = np.absolute(image - blurred) < threshold
        np.copyto(sharpened, image, where=low_contrast_mask)
    return sharpened


def find_orientation(cont):
    m = cv2.moments(cont, True)
    cen_x = m['m10'] / m['m00']
    cen_y = m['m01'] / m['m00']
    m_11 = 2 * m['m11'] - m['m00'] * (cen_x * cen_x + cen_y * cen_y)
    m_02 = m['m02'] - m['m00'] * cen_y * cen_y
    m_20 = m['m20'] - m['m00'] * cen_x * cen_x
    theta = 0 if m_20 == m_02 else atan2(m_11, m_20 - m_02) / 2.0
    theta = theta * 180 / pi
    return (cen_x, cen_y, theta)


def rotate_image(img, angles):
    height, width = img.shape[:2]
    rotation_matrix = cv2.getRotationMatrix2D((width / 2, height / 2), angles, 1)
    rotated_image = cv2.warpAffine(img, rotation_matrix, (width, height))
    return rotated_image


img = cv2.imread('gear1.jpg')
resized_img = imutils.resize(img, width=540)
height, width = resized_img.shape[:2]

gamma_adjusted = adjust_gamma(resized_img, 2.5)
sharp = unsharp_mask(gamma_adjusted)
gray = cv2.cvtColor(sharp, cv2.COLOR_BGR2GRAY)
gauss_blur = cv2.GaussianBlur(gray, (13, 13), 2.5)
ret, thresh = cv2.threshold(gauss_blur, 250, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
thresh = cv2.morphologyEx(thresh, cv2.MORPH_DILATE, kernel, iterations=2)
kernel = np.ones((3, 3), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)

contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[0]
cen_x, cen_y, theta = find_orientation(contours[0])
reference_angle = -24.14141919602858
rot_angle = 0.0
if theta < reference_angle:
    rot_angle = -(theta - reference_angle)
else:
    rot_angle = (reference_angle - theta)
rot_img = rotate_image(resized_img, rot_angle)
Can anyone tell me where I went wrong? Any help would be appreciated.
Binarization of the gear and the holes seems easy. You should be able to discriminate the holes from noise and extra small features.
First find the geometric center, and sort the holes by angle around the center. Also compute the areas of the holes. Then you can try to match the holes to the model in a cyclic way. There are 20 holes, and you just need to test 20 positions. You can rate a matching by some combination of the differences in the angles and the areas. The best match tells you the orientation.
This should be very reliable.
You can obtain a very accurate value of the angle by computing the average error per hole and correcting to cancel that value (this is equivalent to least-squares fitting).
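A rough sketch of that matching idea (my own illustration of the approach, not tested against the gear images; model_holes and test_holes are assumed to be lists of (angle_in_degrees, area) pairs, sorted by angle around the geometric center):

def best_cyclic_match(model_holes, test_holes, angle_weight=1.0, area_weight=0.01):
    """Try every cyclic shift of the test holes against the model and
    return (best_shift, estimated_rotation_in_degrees)."""
    n = len(model_holes)                      # e.g. 20 holes
    best_shift, best_cost = None, float("inf")
    for shift in range(n):
        cost = 0.0
        for i in range(n):
            m_angle, m_area = model_holes[i]
            t_angle, t_area = test_holes[(i + shift) % n]
            d_angle = (t_angle - m_angle + 180) % 360 - 180   # wrap to [-180, 180)
            cost += angle_weight * abs(d_angle) + area_weight * abs(t_area - m_area)
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    # refine: the average angular error of the best match is the rotation estimate
    # (this is the least-squares style correction mentioned above)
    errors = [(test_holes[(i + best_shift) % n][0] - model_holes[i][0] + 180) % 360 - 180
              for i in range(n)]
    return best_shift, sum(errors) / n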
I'm trying to retrieve the orientation of hand-written arrows:
After removing shadows, applying binarization, and dilating the lines, here are the images:
Now I'd like to get the orientation of the arrow, so I have tried using HoughLines:
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=20)
But it seems it generates too many lines (around 54 lines); I'd like it to generate only 3 lines so I would be able to find the intersections of those lines. I can group the lines into groups of similar angle (+/- 20 degrees) and then average the angle, but I'm not sure what the rho of an average line should be. Can somebody please give a simple example? (A rough sketch of this grouping idea follows below.)
Is there any other approach which may be more accurate?
I'll be glad to hear; thank you all.
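For what it's worth, here is a minimal sketch of the grouping-and-averaging idea described above (my own illustration, not taken from the answer below; it naively averages rho within each angular group and ignores the theta wrap-around at 0/180 degrees):

import numpy as np

def group_hough_lines(lines, angle_tol_deg=20):
    """lines: output of cv2.HoughLines, an array of shape (N, 1, 2) holding (rho, theta)."""
    groups = []                                   # each group is a list of (rho, theta)
    for rho, theta in lines[:, 0, :]:
        for group in groups:
            if abs(theta - group[0][1]) < np.deg2rad(angle_tol_deg):
                group.append((rho, theta))
                break
        else:
            groups.append([(rho, theta)])
    # average each group into a single representative (rho, theta) line
    return [(np.mean([r for r, _ in g]), np.mean([t for _, t in g])) for g in groups]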
I suggest a different approach. In summary, the approach goes as follows (made it in a hurry, might need some tuning):
Find the center of the minimum area rectangle (rotated rectangle) that encloses the whole arrow. (The circle drawn in the third image)
Find the center of gravity for all white points. It will be shifted a bit towards the actual head of the arrow. (Drawn in 4th pic as the origin of the eigenvector)
Find eigenvectors for all white points.
Find the displacement vector (the center of gravity - the center of the rotated rectangle)
Now:
Arrow angle(unoriented): is the angle of the first eigenvector
Arrow direction: is the sign of the dot product of (the first eigenvector and the centers' displacement vector)
Code:
Parts related to PCA are inspired by and mostly copied from this. I only made a minor change to the "getOrientation" method, adding the following lines before it returns:
angle = (angle - math.pi) * 180 / math.pi
return angle, (mean[0,0], mean[0,1]), p1
Code implementing the logic above:
# threshold
_, img = cv2.threshold(img, 128, 255, cv2.THRESH_OTSU)
imshow(img)

# close the image to make sure the contour is connected
st_el = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, st_el)
imshow(img)

# get white points
pnts = cv2.findNonZero(img)

# min area rect
rect_center = cv2.minAreaRect(pnts)[0]

# draw rect center
cv2.circle(img, (int(rect_center[0]), int(rect_center[1])), 3, 128, -1)
imshow(img)

angle, pca_center, eigen_vec = getOrientation(pnts, img)

cc_vec = (rect_center[0] - pca_center[0], rect_center[1] - pca_center[1])
dot_product = cc_vec[0] * eigen_vec[0] + cc_vec[1] * eigen_vec[1]
if dot_product > 0:
    angle *= -1

print("Angle = ", angle)
imshow(img)
Edit
I suggest a simpler method. This new method does not depend on PCA for finding the unoriented angle [0 - 180]. Instead, it uses the min area rectangle angle directly, and it uses the contour moments for finding the center of gravity.
Simpler Method Code:
# get white points
pnts = cv2.findNonZero(img)

# min area rect
rect_center, size, angle = cv2.minAreaRect(pnts)

# simple fix for angle to make it in [0, 180]
angle = abs(angle)
if size[0] < size[1]:
    angle += 90

# find center of gravity
M = cv2.moments(img)
gravity_center = (M["m10"] / M["m00"], M["m01"] / M["m00"])

# rot rect vec based on angle (converted from degrees to radians)
angle_unit_vec = (math.cos(angle * math.pi / 180), math.sin(angle * math.pi / 180))

# cc_vec = gravity center - rect center
cc_vec = (gravity_center[0] - rect_center[0], gravity_center[1] - rect_center[1])

# if dot product is negative add 180 -> angle between [0, 360]
dot_product = cc_vec[0] * angle_unit_vec[0] + cc_vec[1] * angle_unit_vec[1]
angle += (dot_product < 0) * 180

# draw rect center
cv2.circle(img, (int(rect_center[0]), int(rect_center[1])), 3, 128, -1)
cv2.circle(img, (int(gravity_center[0]), int(gravity_center[1])), 3, 20, -1)
imshow(img)
print("Angle = ", angle)
Edit2:
This edit includes these changes:
Use cv2.fitLine() and use the fitted line angle for orientation.
Replace angle_unit_vec with a vector that has the gravity center as the origin and goes parallel to the fitted line.
Code
#get white points
pnts = cv2.findNonZero(img)
#min area rect
rect_center, size, angle = cv2.minAreaRect(pnts)
#fit line to get angle
[vx, vy, x, y] = cv2.fitLine(pnts, cv2.DIST_L12, 0, 0.01, 0.01)
angle = (math.atan2(vy, -vx)) * 180 / math.pi
M = cv2.moments(img)
gravity_center = (M["m10"] / M["m00"], M["m01"] / M["m00"])
angle_vec = (int(gravity_center[0] + 100 * vx), int(gravity_center[1] + 100 * vy))
#cc_vec = gravity center - rect center
cc_vec = (gravity_center[0] - rect_center[0], gravity_center[1] - rect_center[1])
#if dot product is positive add 180 -> angle between [0, 360]
dot_product = cc_vec[0] * angle_vec[0] + cc_vec[1] * angle_vec[1]
angle += (dot_product > 0) * 180
angle += (angle < 0) * 360
#draw rect center
cv2.circle(img, (int(rect_center[0]), int(rect_center[1])), 3, 128, -1)
cv2.circle(img, (int(gravity_center[0]), int(gravity_center[1])), 3, 20, -1)
imshow(img)
print ("Angle = ", angle)
Output:
Using code from edit2:
First image:
Second image:
Third image:
Template matching in OpenCV is great. And you can pass a mask to cv2.minMaxLoc so that you only search (sort of) in part of the image for the template you want. You can also use a mask at the matchTemplate operation, but this only masks the template.
I want to find a template and I want to be assured that this template is within some other region of my image.
Calculating the mask for minMaxLoc seems kind of heavy. That is, calculating an accurate mask feels heavy. If you calculate a mask the easy way, it ignores the size of the template.
Examples are in order. My input images are shown below. They're a bit contrived. I want to find the candy bar, but only if it's completely inside the white circle of the clock face.
clock1
clock2
template
In clock1, the candy bar is inside the circular clock face and it's a "PASS". But in clock2, the candy bar is only partially inside the face and I want it to be a "FAIL". Here's a code sample for doing it the easy way. I use cv2.HoughCircles to find the clock face.
import numpy as np
import cv2
img = cv2.imread('clock1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
template = cv2.imread('template.png')
t_h, t_w = template.shape[0:2] # template height and width
# find circle in gray image using Hough transform
circles = cv2.HoughCircles(gray, method = cv2.HOUGH_GRADIENT, dp = 1,
minDist = 150, param1 = 50, param2 = 70,
minRadius = 131, maxRadius = 200)
i = circles[0,0]
x0 = i[0]
y0 = i[1]
r = i[2]
# display circle on color image
cv2.circle(img,(x0, y0), r,(0,255,0),2)
# do the template match
result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
# finally, here is the part that gets tricky. we want to find highest
# rated match inside circle and we'd like to use minMaxLoc
# make mask by drawing circle on zero array
mask = np.zeros(result.shape, dtype = np.uint8) # minMaxLoc will throw
# error w/o np.uint8
cv2.circle(mask, (x0, y0), r, color = 1, thickness = -1)
# call minMaxLoc
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result, mask = mask)
# draw found rectangle on img
if max_val > 0.4: # use 0.4 as threshold for finding candy bar
    cv2.rectangle(img, max_loc, (max_loc[0]+t_w, max_loc[1]+t_h), (0,255,0), 4)
cv2.imwrite('output.jpg', img)
output using clock1
output using clock2: finds the candy bar even though part of it is outside the circle
So to properly make a mask, I use a bunch of NumPy operations. I make four separate masks (one for each corner of the template bounding box) and then AND them together. I'm not aware of any convenience functions in OpenCV that would do the mask for me. I'm a little nervous that all of the array operations will be expensive. Is there a better way to do this?
h, w = result.shape[0:2]
# make arrays that hold x,y coords
grid = np.indices((h, w))
x = grid[1]
y = grid[0]
top_left_mask = np.hypot(x - x0, y - y0) - r < 0
top_right_mask = np.hypot(x + t_w - x0, y - y0) - r < 0
bot_left_mask = np.hypot(x - x0, y + t_h - y0) - r < 0
bot_right_mask = np.hypot(x + t_w - x0, y + t_h - y0) - r < 0
mask = np.logical_and.reduce((top_left_mask, top_right_mask,
bot_left_mask, bot_right_mask))
mask = mask.astype(np.uint8)
cv2.imwrite('mask.png', mask*255)
Here's what the "fancy" mask looks like:
Seems about right. It cannot be circular because of the template shape. If I run clock2.jpg with this mask I get:
It works. No candy bars are identified. But I wish I could do it in fewer lines of code...
EDIT:
I've done some profiling. I ran 100 cycles of the "easy" way and the "accurate" way and calculated frames per second (fps):
easy way: 12.7 fps
accurate way: 7.8 fps
so there is some price to pay for making the mask with NumPy. These tests were done on a relatively powerful workstation. It could get uglier on more modest hardware...
Method 1: 'mask' image before cv2.matchTemplate
Just for kicks, I tried to make my own mask of the image that I pass to cv2.matchTemplate to see what kind of performance I can achieve. To be clear, this isn't a proper mask -- I set all of the pixels to ignore to one color (black or white). This is to get around the fact that only TM_SQDIFF and TM_CCORR_NORMED support a proper mask.
@Alexander Reynolds makes a very good point in the comments that some care must be taken if the template image (the thing we're trying to find) has lots of black or lots of white. For many problems, we will know a priori what the template looks like and we can specify a white background or black background.
I use cv2.multiply, which seems to be faster than numpy.multiply. cv2.multiply has the added advantage that it automatically clips the results to the range 0 to 255.
import numpy as np
import cv2
import time
img = cv2.imread('clock1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
template = cv2.imread('target.jpg')
t_h, t_w = template.shape[0:2] # template height and width
mask_background = 'WHITE'
start_time = time.time()
for i in range(100):  # do 100 cycles for timing
    # find circle in gray image using Hough transform
    circles = cv2.HoughCircles(gray, method = cv2.HOUGH_GRADIENT, dp = 1,
                               minDist = 150, param1 = 50, param2 = 70,
                               minRadius = 131, maxRadius = 200)
    i = circles[0,0]
    x0 = i[0]
    y0 = i[1]
    r = i[2]

    # display circle on color image
    cv2.circle(img, (x0, y0), r, (0,255,0), 2)

    if mask_background == 'BLACK':  # black = 0, white = 255 on grayscale
        mask = np.zeros(img.shape, dtype = np.uint8)
    elif mask_background == 'WHITE':
        mask = 255*np.ones(img.shape, dtype = np.uint8)
    cv2.circle(mask, (x0, y0), r, color = (1,1,1), thickness = -1)
    img2 = cv2.multiply(img, mask)  # element wise multiplication
                                    # values > 255 are truncated at 255

    # do the template match
    result = cv2.matchTemplate(img2, template, cv2.TM_CCOEFF_NORMED)

    # call minMaxLoc
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

    # draw found rectangle on img
    if max_val > 0.4:
        cv2.rectangle(img, max_loc, (max_loc[0]+t_w, max_loc[1]+t_h), (0,255,0), 4)

fps = 100/(time.time()-start_time)
print('fps ', fps)
cv2.imwrite('output.jpg', img)
Profiling results:
BLACK background 12.3 fps
WHITE background 12.1 fps
Using this method has very little performance hit relative to the 12.7 fps in the original question. However, it has the drawback that it will still find templates that stick over the edge a little bit. Depending on the exact nature of the problem, this may be acceptable in many applications.
Method 2: use cv2.boxFilter to create mask for minMaxLoc
In this technique, we start with a circular mask (as in the OP), but then modify it with cv2.boxFilter, changing the anchor from the default center of the kernel to the top-left corner (0, 0).
import numpy as np
import cv2
import time
img = cv2.imread('clock1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
template = cv2.imread('target.jpg')
t_h, t_w = template.shape[0:2] # template height and width
print('t_h, t_w ', t_h, ' ', t_w)
start_time = time.time()
for i in range(100):
    # find circle in gray image using Hough transform
    circles = cv2.HoughCircles(gray, method = cv2.HOUGH_GRADIENT, dp = 1,
                               minDist = 150, param1 = 50, param2 = 70,
                               minRadius = 131, maxRadius = 200)
    i = circles[0,0]
    x0 = i[0]
    y0 = i[1]
    r = i[2]

    # display circle on color image
    cv2.circle(img, (x0, y0), r, (0,255,0), 2)

    # do the template match
    result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)

    # finally, here is the part that gets tricky. we want to find highest
    # rated match inside circle and we'd like to use minMaxLoc
    # start to make mask by drawing circle on zero array
    mask = np.zeros(result.shape, dtype = np.float32)
    cv2.circle(mask, (x0, y0), r, color = 1, thickness = -1)
    mask = cv2.boxFilter(mask,
                         ddepth = -1,
                         ksize = (t_w, t_h),
                         anchor = (0,0),
                         normalize = True,
                         borderType = cv2.BORDER_ISOLATED)

    # mask now contains values from zero to 1. we want to make anything
    # less than 1 equal to zero
    _, mask = cv2.threshold(mask, thresh = 0.9999,
                            maxval = 1.0, type = cv2.THRESH_BINARY)
    mask = mask.astype(np.uint8)

    # call minMaxLoc
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result, mask = mask)

    # draw found rectangle on img
    if max_val > 0.4:
        cv2.rectangle(img, max_loc, (max_loc[0]+t_w, max_loc[1]+t_h), (0,255,0), 4)

fps = 100/(time.time()-start_time)
print('fps ', fps)
cv2.imwrite('output.jpg', img)
This code gives a mask identical to the OP's, but at 11.89 fps. This technique gives us more accuracy with a slightly larger performance hit than Method 1.
I have an image with bounding box in it, and I want to resize the image.
img = cv2.imread("img.jpg",3)
x_ = img.shape[0]
y_ = img.shape[1]
img = cv2.resize(img,(416,416));
Now I want to calculate the scale factor:
x_scale = ( 416 / x_)
y_scale = ( 416 / y_ )
And draw the image. This is the code for the original bounding box:
( 128, 25, 447, 375 ) = ( xmin,ymin,xmax,ymax)
x = int(np.round(128*x_scale))
y = int(np.round(25*y_scale))
xmax= int(np.round (447*(x_scale)))
ymax= int(np.round(375*y_scale))
However, using this I get:
While the original is:
I don't see any flaw in this logic; what's wrong?
Whole code:
imageToPredict = cv2.imread("img.jpg",3)
print(imageToPredict.shape)
x_ = imageToPredict.shape[0]
y_ = imageToPredict.shape[1]
x_scale = 416/x_
y_scale = 416/y_
print(x_scale,y_scale)
img = cv2.resize(imageToPredict,(416,416));
img = np.array(img);
x = int(np.round(128*x_scale))
y = int(np.round(25*y_scale))
xmax= int(np.round (447*(x_scale)))
ymax= int(np.round(375*y_scale))
Box.drawBox([[1,0, x,y,xmax,ymax]],img)
and drawBox:
def drawBox(boxes, image):
    for i in range(0, len(boxes)):
        cv2.rectangle(image, (boxes[i][2], boxes[i][3]), (boxes[i][4], boxes[i][5]), (0,0,120), 3)
    cv2.imshow("img", image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
The image and the data for the bounding box are loaded separately. I am drawing the bounding box inside the image. The image does not contain the box itself.
I believe there are two issues:
You should swap x_ and y_ because shape[0] is actually y-dimension and shape[1] is the x-dimension
You should use the same coordinates on the original and scaled image. On your original image the rectangle is (160, 35) - (555, 470) rather than (128,25) - (447,375) that you use in the code.
If I use the following code:
import cv2
import numpy as np


def drawBox(boxes, image):
    for i in range(0, len(boxes)):
        # changed color and width to make it visible
        cv2.rectangle(image, (boxes[i][2], boxes[i][3]), (boxes[i][4], boxes[i][5]), (255, 0, 0), 1)
    cv2.imshow("img", image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()


def cvTest():
    # imageToPredict = cv2.imread("img.jpg", 3)
    imageToPredict = cv2.imread("49466033\\img.png", 3)
    print(imageToPredict.shape)

    # Note: flipped comparing to your original code!
    # x_ = imageToPredict.shape[0]
    # y_ = imageToPredict.shape[1]
    y_ = imageToPredict.shape[0]
    x_ = imageToPredict.shape[1]

    targetSize = 416
    x_scale = targetSize / x_
    y_scale = targetSize / y_
    print(x_scale, y_scale)
    img = cv2.resize(imageToPredict, (targetSize, targetSize))
    print(img.shape)
    img = np.array(img)

    # original frame as named values
    (origLeft, origTop, origRight, origBottom) = (160, 35, 555, 470)
    x = int(np.round(origLeft * x_scale))
    y = int(np.round(origTop * y_scale))
    xmax = int(np.round(origRight * x_scale))
    ymax = int(np.round(origBottom * y_scale))

    # Box.drawBox([[1, 0, x, y, xmax, ymax]], img)
    drawBox([[1, 0, x, y, xmax, ymax]], img)


cvTest()
and use your "original" image as "49466033\img.png",
I get the following image
And as you can see, my thinner blue line lies exactly inside your original red line, and it stays there whatever targetSize you choose (so the scaling actually works correctly).
Another way of doing this is to use Chitra:

from chitra.image import Chitra  # import path as shown in the Chitra docs
import matplotlib.pyplot as plt

image = Chitra(img_path, box, label)
# Chitra can rescale your bounding box automatically based on the new image size.
image.resize_image_with_bbox((224, 224))
print('rescaled bbox:', image.bounding_boxes)
plt.imshow(image.draw_boxes())
https://chitra.readthedocs.io/en/latest/
pip install chitra
I encountered an issue with bounding box coordinates in Angular when using TensorFlow.js and MobileNet-v2 for prediction. The coordinates were based on the resolution of the video frame, but I was displaying the video on a canvas with a fixed height and width. I resolved the issue by dividing the coordinates by the ratio of the original video resolution to the resolution of the canvas.
const x = prediction.bbox[0] / (this.Owidth / 300);
const y = prediction.bbox[1] / (this.Oheight / 300);
const width = prediction.bbox[2] / (this.Owidth / 300);
const height = prediction.bbox[3] / (this.Oheight / 300);
// Draw the bounding box.
ctx.strokeStyle = '#99ff00';
ctx.lineWidth = 2;
ctx.strokeRect(x, y, width, height);
this.Owidth and this.Oheight are the original resolution of the video. They are obtained by:
this.video.addEventListener(
  'loadedmetadata',
  (e: any) => {
    this.Owidth = this.video.videoWidth;
    this.Oheight = this.video.videoHeight;
    console.log(this.Owidth, this.Oheight, ' pixels ');
  },
  false
);
300 X 300 is my static canvas width and height.
You can use resize_dataset_pascalvoc.
It's easy to use: python3 main.py -p <IMAGES_&_XML_PATH> --output <IMAGES_&_XML> --new_x <NEW_X_SIZE> --new_y <NEW_X_SIZE> --save_box_images <FLAG>
It resizes your whole dataset and rewrites new annotation files for the resized images.