I have a nested loop that has to run over a huge amount of data.
Assume a data frame of random values with 1,000,000 rows, where each row is an X,Y location in 2D space. A window of length 10 moves through the 1M rows one at a time until all the calculations are done.
What the code is supposed to do:
Each row represents a coordinate in the X-Y plane.
r_test contains the diameters of the different circles of investigation in our 2D (X-Y) plane.
For each window of 10 points/rows and for every diameter in r_test, we compare the distance between every point and the remaining 9 points, and if that distance is less than the current r_test value we add 2 to H. Then we calculate H/(N**2) and store it in c_10 at the index corresponding to that diameter of investigation.
For these first 10 points, once the loop has gone through all the diameters in r_test, we read the slope of the fitted line and save it to S_wind[ii]. The first 9 data points have no value calculated for them, so they are set to np.inf to be distinguished later.
Then the window moves one point down the rows and the process repeats until S_wind is filled.
What's a potentially better algorithm for this than the one I'm using, in Python 3.x?
Many thanks in advance!
import numpy as np
import pandas as pd
#### generating input data frame
df = pd.DataFrame(data=np.random.randint(2000, 6000, (1000000, 2)))
df.columns = ['X', 'Y']
#### ==== creating upper and lower bounds for the diameter of the investigation circles
x_range = max(df['X']) - min(df['X'])
y_range = max(df['Y']) - min(df['Y'])
R = max(x_range, y_range)/20
d = 2
N = 10  #### Number of points in each window
#r1 = 2*R*(1/N)**(1/d)
#r2 = (R)/(1+d)
#r_test = np.arange(r1, r2, 0.05)
##=== avoiding generation of an empty r_test
r1 = 80
r2 = 800
r_test = np.arange(r1, r2, 5)
S_wind = np.zeros(len(df['X'])) + np.inf
for ii in range(10, len(df['X'])):  #### maybe the code runs slower because of using the len() function instead of a number
    c_10 = np.zeros(len(r_test)) + np.inf
    H = 0
    C = 0
    N = 10  #### maybe I should also remove this
    for ind in range(len(r_test)):
        for i in range(ii - 10, ii):
            for j in range(ii - 10, ii):
                dd = r_test[ind] - np.sqrt((df['X'][i] - df['X'][j])**2 + (df['Y'][i] - df['Y'][j])**2)
                if dd > 0:
                    H += 1
        c_10[ind] = H / (N**2)
    S_wind[ii] = np.polyfit(np.log10(r_test), np.log10(c_10), 1)[0]
You can use numpy broadcasting to eliminate all of the inner loops. I'm not sure if there's an easy way to get rid of the outermost loop, but the others are not too hard to avoid.
The inner loops are comparing ten 2D points against each other in pairs. That's just dying for using a 10x10x2 numpy array:
# replacing the `for ind` loop and its contents:
points = np.hstack((np.asarray(df['X'])[ii-10:ii, None], np.asarray(df['Y'])[ii-10:ii, None]))
differences = np.subtract(points[None, :, :], points[:, None, :]) # broadcast to 10x10x2
squared_distances = (differences * differences).sum(axis=2)
within_range = squared_distances[None,:,:] < (r_test*r_test)[:, None, None] # compare squares
c_10 = within_range.sum(axis=(1,2)).cumsum() * 2 / (N**2)
S_wind[ii] = np.polyfit(np.log10(r_test), np.log10(c_10), 1)[0] # this is unchanged...
I'm not very pandas savvy, so there's probably a better way to get the X and Y values into a single 2-dimensional numpy array. You generated the random data in the format that I'd find most useful, then converted into something less immediately useful for numeric operations!
Note that this code matches the output of your loop code. I'm not sure that's actually doing what you want it to do, as there are several slightly strange things in your current code. For example, you may not want the cumsum in my code, which corresponds to only re-initializing H to zero in the outermost loop. If you don't want the matches for smaller values of r_test to be counted again for the larger values, you can skip that sum (or equivalently, move the H = 0 line to between the for ind and the for i loops in your original code).
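For concreteness, here is a self-contained sketch of that broadcast computation for a single window; the function name and the cumulative switch are just illustrative, mirroring the cumsum discussion above:
import numpy as np

def window_slope(points, r_test, cumulative=True):
    # points: (N, 2) array of X, Y coordinates for one window
    # r_test: 1D array of radii to test
    # cumulative=True mirrors the original loop, where H is only reset in the outermost loop
    N = len(points)
    differences = points[None, :, :] - points[:, None, :]        # (N, N, 2) pairwise differences
    squared_distances = (differences * differences).sum(axis=2)  # (N, N)
    within_range = squared_distances[None, :, :] < (r_test * r_test)[:, None, None]
    counts = within_range.sum(axis=(1, 2))
    if cumulative:
        counts = counts.cumsum()
    c_10 = counts * 2 / (N ** 2)
    return np.polyfit(np.log10(r_test), np.log10(c_10), 1)[0]

# usage for one window of the data frame (illustrative):
# S_wind[ii] = window_slope(df[['X', 'Y']].iloc[ii-10:ii].to_numpy(), r_test)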
I'm trying to match a set of 2D boxes with coordinates (A) (from a template with known sizes and distances between the boxes) to another set of 2D boxes with coordinates (B) (which may contain more boxes than A). They should match in the sense that each box from A corresponds to a single box in B. The boxes in A together form a "stamp" which is asymmetrical in at least one dimension.
Illustration of the problem
Explanation: "Stanz" in the illustration is a box from set A.
One might even think of set A as just 2D points (the center point of each box) to make it simpler.
The end result should be to know which A box corresponds to which B box.
I can only think of very specific ways of doing this, tailored to a particular layout of boxes. Are there any known generic ways of dealing with these kinds of matching/search problems, and what are they called?
Edit: Possible solution
I have come up with one possible solution: looking at all the possible rotations at each possible B center position for a single box from set A. Here, all of the points in A would be rotated and compared against the distances to the B centers. Not sure if this is a good way.
Looking for the possible rotations at each B centerpoint- solution
In your example, the transformation between the template and its presence in B can be entirely defined (actually, over-defined) by two matching points.
So here's a simple approach which is reasonably performant. First, put all the points in B into a kD-tree. Now, pick a canonical "first" point in A, and hypothesize matching it to each of the points in B. To check whether it matches a particular point in B, pick a canonical "second" point in A and measure its distance to the "first" point. Then, use a standard kD proximity-bounding query to find all the points in B which are roughly that distance from your hypothesized matched "first" point in B. For each of those, determine the transformation between A and B, and for each of the other points in A, determine whether there's a point in B at roughly the right place (again, using the kD-tree), early-outing at the first unmatched point.
The worst-case performance there can get quite bad with pathological cases (O(n^3 log n), I think) but in general I would expect roughly O(n log n) for well-behaved data with a low threshold. Note that the thresholding is a bit rough-and-ready, and the results can depend on your choice of "first" and "second" points.
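A rough sketch of this approach using scipy's cKDTree; the tolerance, the function name, and the choice of A[0]/A[1] as the canonical points are illustrative assumptions, not part of the answer above:
import numpy as np
from scipy.spatial import cKDTree

def match_template(A, B, tol=1.0):
    # A: (n, 2) template points, B: (m, 2) candidate points with m >= n.
    # Returns one index into B per point of A, or None if no placement fits within tol.
    tree = cKDTree(B)
    a0, a1 = A[0], A[1]                       # canonical "first" and "second" points
    d01 = np.linalg.norm(a1 - a0)
    for i, b0 in enumerate(B):                # hypothesize A[0] -> B[i]
        # candidate partners: points of B roughly d01 away from b0
        for j in tree.query_ball_point(b0, d01 + tol):
            if j == i or np.linalg.norm(B[j] - b0) < d01 - tol:
                continue
            # transformation (rotation + translation) defined by the two matched points
            v, w = B[j] - b0, a1 - a0
            ang = np.arctan2(v[1], v[0]) - np.arctan2(w[1], w[0])
            c, s = np.cos(ang), np.sin(ang)
            R = np.array([[c, -s], [s, c]])
            mapped = (A - a0) @ R.T + b0      # apply the hypothesized transform to all of A
            dists, idx = tree.query(mapped)   # nearest B point for each mapped A point
            if np.all(dists < tol):
                return list(idx)
    return None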
This is more of an idea than an answer, but it's too long for a comment. I asked some additional questions in a comment above, but the answers may not be particularly relevant, so I'll go ahead and offer some thoughts in the meantime.
As you may know, point matching is its own problem domain, and if you search for 'point matching algorithm', you'll find various articles, papers, and other resources. It seems though that an ad hoc solution might be appropriate here (one that's simpler than more generic algorithms that are available).
I'll assume that the input point set can only be rotated, and not also flipped. If this idea were to work though, it should also work with flipping - you'd just have to run the algorithm separately for each flipped configuration.
In your example image, you've matched a point from set A with a point from set B so that they're coincident. Call this shared point the 'anchor' point. You'd need to do this for every combination of a point from set A and a point from set B until you found a match or exhausted the possibilities. The problem then is to determine if a match can be made given one of these matched point pairs.
It seems that for a given anchor point, a necessary but not sufficient condition for a match is that a point from set A and a point from set B can be found that are approximately the same distance from the anchor point. (What 'approximately' means would depend on the input, and would need to be tuned appropriately given that you're using integers.) This condition is met in your example image in that the center point of each point set is (approximately) the same distance from the anchor point. (Note that there could be multiple pairs of points that meet this condition, in which case you'd have to examine each such pair in turn.)
Once you have such a pair - the center points in your example - you can use some simple trigonometry and linear algebra to rotate set A so that the points in the pair coincide, after which the two point sets are locked together at two points and not just one. In your image that would involve rotating set A about 135 degrees clockwise. Then you check to see if every point in set B has a point in set A with which it's coincident, to within some threshold. If so, you have a match.
In your example, this fails of course, because the rotation is not actually a match. Eventually though, if there's a match, you'll find the anchor point pair for which the test succeeds.
I realize this would be easier to explain with some diagrams, but I'm afraid this written explanation will have to suffice for the moment. I'm not positive this would work - it's just an idea. And maybe a more generic algorithm would be preferable. But, if this did work, it might have the advantage of being fairly straightforward to implement.
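Still, a rough sketch of the rotate-and-check step described above may help; the names and the tolerance are illustrative, and the final check is done from A towards B since B may contain extra boxes:
import numpy as np

def matches_with_anchor(A, B, a_idx, b_idx, a2_idx, b2_idx, tol=1.0):
    # Hypothesis: A[a_idx] coincides with B[b_idx] and A[a2_idx] with B[b2_idx].
    # Rotate/translate A accordingly and check whether every point of A lands near some point of B.
    a_anchor, b_anchor = A[a_idx], B[b_idx]
    va = A[a2_idx] - a_anchor
    vb = B[b2_idx] - b_anchor
    angle = np.arctan2(vb[1], vb[0]) - np.arctan2(va[1], va[0])
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    A_placed = (A - a_anchor) @ R.T + b_anchor
    return all(np.min(np.linalg.norm(B - p, axis=1)) <= tol for p in A_placed)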
[Edit: Perhaps I should add that this is similar to your solution, except for the additional step to allow for only testing a subset of the possible rotations.]
[Edit: I think a further refinement may be possible here. If, after choosing an anchor point, matching is possible via rotation, it should be the case that for every point p in B there's a point in A that's (approximately) the same distance from the anchor point as p is. Again, it's a necessary but not sufficient condition, but it allows you to quickly eliminate cases where a match isn't possible via rotation.]
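A minimal sketch of that pre-check, assuming the anchor point has already been chosen (the function name and tolerance are illustrative):
import numpy as np

def rotation_possible(A, B, anchor, tol=1.0):
    # Necessary (not sufficient) condition: every distance from the anchor to a point in B
    # must be approximately matched by some distance from the anchor to a point in A.
    dA = np.sort(np.linalg.norm(A - anchor, axis=1))
    dB = np.linalg.norm(B - anchor, axis=1)
    idx = np.searchsorted(dA, dB)
    below = dA[np.clip(idx - 1, 0, len(dA) - 1)]
    above = dA[np.clip(idx, 0, len(dA) - 1)]
    nearest = np.minimum(np.abs(below - dB), np.abs(above - dB))
    return bool(np.all(nearest <= tol))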
Below is a finished solution in Python, without a kD-tree and without early-outing candidates. A better way is to do the implementation yourself following Sneftel's answer, but if you need something quick and with a plot, this might be useful.
The plot shows the different steps: it starts off with just the template as a collection of connected lines, then the template is translated to the point in B where the distances between A and B points fit best, and finally it is rotated.
In this example it was also important to match up which of the template positions corresponds to which bounding-box position, so there is an extra step at the end. There might be some deviations in the code compared to the outline above.
import numpy as np
import random
import math
import matplotlib.pyplot as plt
def to_polar(pos_array):
    x = pos_array[:, 0]
    y = pos_array[:, 1]
    length = np.sqrt(x ** 2 + y ** 2)
    t = np.arctan2(y, x)
    zip_list = list(zip(length, t))
    array_polar = np.array(zip_list)
    return array_polar
def to_cartesian(pos):
    # first element is the radius
    # second is the angle (theta)
    # Converting polar to cartesian coordinates
    radius = pos[0]
    theta = pos[1]
    x = radius * math.cos(theta)
    y = radius * math.sin(theta)
    return x, y
def calculate_distance_points(p1, p2):
    return np.sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2)
def find_closest_point_inx(point, neighbour_set):
    shortest_dist = None
    closest_index = -1
    # Find the point in the secondary array that is the closest
    for index, curr_neighbour in enumerate(neighbour_set):
        distance = calculate_distance_points(point, curr_neighbour)
        if shortest_dist is None or distance < shortest_dist:
            shortest_dist = distance
            closest_index = index
    return closest_index
# Find the sum of distances between each point in primary to the closest one in secondary
def calculate_agg_distance_arrs(primary, secondary):
    total_distance = 0
    for point in primary:
        closest_inx = find_closest_point_inx(point, secondary)
        dist = calculate_distance_points(point, secondary[closest_inx])
        total_distance += dist
    return total_distance
# returns a dict of {primary_index: neighbour_index}
def pair_neighbours_by_distance(primary_set, neighbour_set, distance_limit):
    pairs = {}
    for num, point in enumerate(primary_set):
        closest_inx = find_closest_point_inx(point, neighbour_set)
        if calculate_distance_points(neighbour_set[closest_inx], point) > distance_limit:
            closest_inx = None
        pairs[num] = closest_inx
    return pairs
def rotate_array(array, angle, rot_origin=None):
    if rot_origin is not None:
        array = np.subtract(array, rot_origin)
    # clockwise rotation
    theta = np.radians(angle)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array(((c, -s), (s, c)))
    rotated = np.matmul(array, R)
    if rot_origin is not None:
        rotated = np.add(rotated, rot_origin)
    return rotated
# Finds a point in B_set and a rotation where the points in A_set have the best alignment towards B_set.
def find_stamp_rotation(A_set, B_set):
    # Step 1
    anchor_point_A = A_set[0]
    # Step 2. Convert all points to polar coordinates with the anchor as origin
    A_anchor_origin = A_set - anchor_point_A
    anchor_A_polar = to_polar(A_anchor_origin)
    print(anchor_A_polar)
    # Step 3. For each point in B
    score_tuples = []
    for num_anchor, B_anchor_point_try in enumerate(B_set):
        # Step 3.1
        B_origin_rel_point = B_set - B_anchor_point_try
        B_polar_rp_origin = to_polar(B_origin_rel_point)
        # Step 3.3 select an arbitrary point q from Ap
        point_Aq = anchor_A_polar[1]
        # Step 3.4 test each rotation, where point_Aq is rotated to each B point (except the B anchor point)
        for try_rot_point_B in [B_rot_point for num_rot, B_rot_point in enumerate(B_polar_rp_origin) if num_rot != num_anchor]:
            # positive rotation is clockwise
            # Step 4.1 Rotate Ap by the angle between q and n
            angle_to_try = try_rot_point_B[1] - point_Aq[1]
            rot_try_arr = np.copy(anchor_A_polar)
            rot_try_arr[:, 1] += angle_to_try
            cart_rot_try_arr = [to_cartesian(e) for e in rot_try_arr]
            cart_B_rp_origin = [to_cartesian(e) for e in B_polar_rp_origin]
            distance_score = calculate_agg_distance_arrs(cart_rot_try_arr, cart_B_rp_origin)
            score_tuples.append((B_anchor_point_try, angle_to_try, distance_score))
    # Step 4.3
    lowest = None
    for b_point, angle, distance in score_tuples:
        print("point:{} angle(deg):{} distance(sum):{}".format(b_point, 360 * (angle / (2 * math.pi)), distance))
        if lowest is None or distance < lowest[2]:
            lowest = b_point, 360 * angle / (2 * math.pi), distance
    return lowest
def test_example():
    ax = plt.subplot()
    ax.grid(True)
    plt.title('Fit Template to BBoxes by translation and rotation')
    plt.xlim(-20, 20)
    plt.ylim(-20, 20)
    ax.set_xticks(range(-20, 20), minor=True)
    ax.set_yticks(range(-20, 20), minor=True)
    template = np.array([[-10, -10], [-10, 10], [0, 0], [10, -10], [10, 10], [0, 20]])
    # Test bboxes are rotated 40 degrees and translated 2,2
    rotated = rotate_array(template, 40)
    rotated = np.subtract(rotated, [2, 2])
    # Add some extra bounding boxes as noise
    for i in range(8):
        rotated = np.append(rotated, [[random.randrange(-20, 20), random.randrange(-20, 20)]], axis=0)
    # Scramble the entries in the array and record the position change.
    rnd_rotated = rotated.copy()
    np.random.shuffle(rnd_rotated)
    element_positions = []
    # After shuffling, look at which indexes the "A" marks have ended up at, for later comparison to check that the algo found the correct answer.
    # This represents the actual case, where I will get a bunch of unordered bboxes.
    rnd_map = {}
    indexes_translation = [num2 for num, point in enumerate(rnd_rotated) for num2, point2 in enumerate(rotated) if point[0] == point2[0] and point[1] == point2[1]]
    for num, inx in enumerate(indexes_translation):
        rnd_map[num] = inx
    # algo part 1/4
    b_point, angle, _ = find_stamp_rotation(template, rnd_rotated)
    # Plot for visualization
    legend_list = np.empty((0, 2))
    leg_template = plt.plot(template[:, 0], template[:, 1], c='r')
    legend_list = np.append(legend_list, [[leg_template[0], '1. template-pattern']], axis=0)
    leg_bboxes = plt.scatter(rnd_rotated[:, 0], rnd_rotated[:, 1], c='b', label="scatter")
    legend_list = np.append(legend_list, [[leg_bboxes, '2. bounding boxes']], axis=0)
    leg_anchor = plt.scatter(b_point[0], b_point[1], c='y')
    legend_list = np.append(legend_list, [[leg_anchor, '3. Discovered bbox anchor point']], axis=0)
    # algo part 2/4
    # Superimpose A onto B by moving A[0] to b_point
    offset = b_point - template[0]
    super_imposed_A = template + offset
    # Plot superimposed, but not yet rotated
    leg_s_imposed = plt.plot(super_imposed_A[:, 0], super_imposed_A[:, 1], c='k')
    # plt.legend(rubberduckz, "superimposed template on anchor")
    legend_list = np.append(legend_list, [[leg_s_imposed[0], '4. Templ superimposed on Bbox']], axis=0)
    print("Superimposed A on B by A[0] to {}".format(b_point))
    print(super_imposed_A)
    # Rotate; now the template should match the pattern of bboxes
    # algo part 3/4
    super_imposed_rotated_A = rotate_array(super_imposed_A, -angle, rot_origin=super_imposed_A[0])
    # Show the beautiful match in a last plot
    leg_s_imp_rot = plt.plot(super_imposed_rotated_A[:, 0], super_imposed_rotated_A[:, 1], c='g')
    legend_list = np.append(legend_list, [[leg_s_imp_rot[0], '5. final fit']], axis=0)
    plt.legend(legend_list[:, 0], legend_list[:, 1], loc="upper left")
    plt.show()
    # algo part 4/4
    pairs = pair_neighbours_by_distance(super_imposed_rotated_A, rnd_rotated, 10)
    print(pairs)
    for inx in range(len(pairs)):
        bbox_num = pairs[inx]
        print("template id:{}".format(inx))
        print("bbox#id:{}".format(bbox_num))
        # print("original_bbox:{}".format(rnd_map[bbox_num]))


if __name__ == "__main__":
    test_example()
Result on an actual image with bounding boxes. Here it can be seen that the scaling is incorrect, which makes the template a bit off, but it will still be able to pair up, and that's the desired end result in my case.
So, I have a few masked arrays as input. For computation I use slices: top left, top right, bottom left and bottom right:
dy0 = dy__[:-1, :-1]
dy1 = dy__[:-1, 1:]
dy2 = dy__[1:, 1:]
dy3 = dy__[1:, :-1]
The same is done with dx and g values.
To compute sums or differences correctly I need to change the masks so they are the same for all of them. For now I sum the masks of the 4 arrays (converted to int) and check whether the result is more than one, so if more than one input element is masked, I mask the output.
import functools
sum = functools.reduce(lambda x1, x2: x1.astype('int') + x2.astype('int'), list_of_masks)
mask = sum > 1 # mask output if more than 1 input is masked
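For reference, here is a self-contained toy version of this combining step; the arrays are just illustrative stand-ins for the slices above:
import functools
import numpy as np

# Illustrative stand-ins for the four sliced masked arrays
dy0 = np.ma.masked_array([1.0, 2.0, 3.0], mask=[True, False, False])
dy1 = np.ma.masked_array([4.0, 5.0, 6.0], mask=[True, True, False])
dy2 = np.ma.masked_array([7.0, 8.0, 9.0], mask=[False, True, False])
dy3 = np.ma.masked_array([1.5, 2.5, 3.5], mask=[False, False, False])
list_of_masks = [np.ma.getmaskarray(a) for a in (dy0, dy1, dy2, dy3)]

mask_sum = functools.reduce(lambda x1, x2: x1.astype('int') + x2.astype('int'), list_of_masks)
new_mask = mask_sum > 1  # mask the output where more than 1 input is masked
print(new_mask)          # [ True  True False]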
But when I assign masks like dy0.mask = new_mask, they don't change.
Also, when I replace 0 elements in one array with 1 using numpy.where(), the mask disappears, so I can set the new one. But for the arrays that stay the same, the mask still doesn't change. (I checked the numpy.ma documentation, and it should.)
The problem is that in some functions there are many arrays whose masks might need to be changed to the new one, so it would be better to find a good way to set it in one operation for several arrays and be sure it works.
Is there any way to do this, or to find out why it doesn't work as it should?
I am running OCR on bank cheques using the pyimagesearch tutorial to detect the MICR code. The code used in the tutorial detects group contours and character contours from a reference image containing symbols.
In the tutorial, when finding the contours for the symbol below,
the code uses a built-in Python iterator to iterate over the contours (here 3 separate contours), which are combined to give a character for recognition purposes.
But in the cheque dataset that I use, the symbol has low resolution.
The actual bottom of the cheque is:
which causes the iterator to consider contour 2 and contour 3 as a single contour. Because of this, the iterator then iterates over the character following the above symbol (here '0') and prepares an incorrect template to match against the reference symbols. You can see the code below for better understanding.
I know noise in the image is a factor, but is it possible to reduce the noise and also find the exact contours to detect the symbol?
I tried noise reduction techniques like cv2.fastNlMeansDenoising and cv2.GaussianBlur before the cv2.findContours step, but contours 2 and 3 are still detected as a single contour instead of 2 separate contours.
I also tried altering the cv2.findContours parameters.
Below is the working code where the characters are iterated, for a better understanding of the Python built-in iterator:
def extract_digits_and_symbols(image, charCnts, minW=5, minH=10):
    # grab the internal Python iterator for the list of character
    # contours, then initialize the character ROI and location
    # lists, respectively
    charIter = charCnts.__iter__()
    rois = []
    locs = []
    # keep looping over the character contours until we reach the end
    # of the list
    while True:
        try:
            # grab the next character contour from the list, compute
            # its bounding box, and initialize the ROI
            c = next(charIter)
            (cX, cY, cW, cH) = cv2.boundingRect(c)
            roi = None
            # check to see if the width and height are sufficiently
            # large, indicating that we have found a digit
            if cW >= minW and cH >= minH:
                # extract the ROI
                roi = image[cY:cY + cH, cX:cX + cW]
                rois.append(roi)
                cv2.imshow('roi', roi)
                cv2.waitKey(0)
                locs.append((cX, cY, cX + cW, cY + cH))
            # otherwise, we are examining one of the special symbols
            else:
                # MICR symbols include three separate parts, so we
                # need to grab the next two parts from our iterator,
                # followed by initializing the bounding box
                # coordinates for the symbol
                parts = [c, next(charIter), next(charIter)]
                (sXA, sYA, sXB, sYB) = (np.inf, np.inf, -np.inf, -np.inf)
                # loop over the parts
                for p in parts:
                    # compute the bounding box for the part, then
                    # update our bookkeeping variables
                    # c = next(charIter)
                    # (cX, cY, cW, cH) = cv2.boundingRect(c)
                    # roi = image[cY:cY+cH, cX:cX+cW]
                    # cv2.imshow('symbol', roi)
                    # cv2.waitKey(0)
                    # roi = None
                    (pX, pY, pW, pH) = cv2.boundingRect(p)
                    sXA = min(sXA, pX)
                    sYA = min(sYA, pY)
                    sXB = max(sXB, pX + pW)
                    sYB = max(sYB, pY + pH)
                # extract the ROI
                roi = image[sYA:sYB, sXA:sXB]
                cv2.imshow('symbol', roi)
                cv2.waitKey(0)
                rois.append(roi)
                locs.append((sXA, sYA, sXB, sYB))
        # we have reached the end of the iterator; gracefully break
        # from the loop
        except StopIteration:
            break
    # return a tuple of the ROIs and locations
    return (rois, locs)
Edit: contours 2 & 3 instead of contours 1 & 2.
Try to find the right threshold value instead of using cv2.THRESH_OTSU. It seems it should be possible to find a suitable threshold from the provided example. If you can't find a threshold value that works for all images, you can try morphological closing on the threshold result with a structuring element of 1-pixel width.
Edit (steps):
For the threshold, you need to find an appropriate value by hand; in your image a threshold value of 100 seems to work:
i = cv.imread('image.png')
g = cv.cvtColor(i, cv.COLOR_BGR2GRAY)
_, tt = cv.threshold(g, 100, 255, cv.THRESH_BINARY_INV)
and as for the closing variant:
_, t = cv.threshold(g, 0,255,cv.THRESH_BINARY_INV | cv.THRESH_OTSU)
kernel = np.ones((12,1), np.uint8)
c = cv.morphologyEx(t, cv.MORPH_OPEN, kernel)
Note that I used import cv2 as cv. I also used opening instead of closing, since in the example the colors were inverted during thresholding.
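For completeness, a rough sketch of the closing variant mentioned at the top, applied to a non-inverted threshold (the file name and threshold value are placeholders); closing on dark symbols over a white background is the dual of the opening used above on the inverted image:
import cv2 as cv
import numpy as np

i = cv.imread('image.png')                          # placeholder input
g = cv.cvtColor(i, cv.COLOR_BGR2GRAY)
_, t = cv.threshold(g, 100, 255, cv.THRESH_BINARY)  # dark symbols stay dark here
kernel = np.ones((12, 1), np.uint8)                 # 1-pixel-wide vertical structuring element
c = cv.morphologyEx(t, cv.MORPH_CLOSE, kernel)      # dark gaps/bridges shorter than the kernel get filled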
I'm pretty new to Python and OpenCV, but I have a few pieces from cv2 and random in mind for a simple test program to make sure I understand how these libraries work.
I'm trying to create a program that effectively generates colored "snow", similar to what an old-fashioned television shows when it has no signal.
Basically I generate a random color with random.randint(-1,256) to get a value between 0 and 255. I do it three times and store each value in a different variable, randB/G/R. Then I do it twice more for the coordinates randX/Y, using img.shape to get width and height variables for the maximum values.
I don't think my variables are being interpreted as strings. If I quickly break the loop and print my variables, no errors are shown. If I remove the randX and randY variables and specify fixed coordinates or a range of [X1:Y1, X2:Y2], it doesn't crash.
import cv2
import numpy as np
import random

img = cv2.imread('jake_twitch.png', cv2.IMREAD_COLOR)
height, width, channels = img.shape

while True:
    randB = (random.randint(-1, 256))
    randG = (random.randint(-1, 256))
    randR = (random.randint(0, 256))
    randX = (random.randint(0, width))
    randY = (random.randint(0, height))
    img[randX, randY] = [randB, randG, randR]
    cv2.imshow('Snow', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.imwrite('Snow.png', img)
cv2.destroyAllWindows
I would expect my code to run indefinitely, coloring pixels random colors within a specified "box" defined by the width and height variables from img.shape.
It seems to start doing that, but if the program runs for more than about a second it crashes and spits out this error:
"IndexError: index 702 is out of bounds for axis 1 with size 702"
Your image is width pixels wide and height pixels tall - but the corresponding indexes run from 0..width-1 and 0..height-1.
The randint function uses inclusive limits - so
random.randint(0,width)
might give you width ... which is 1 too big:
random.randint(a, b)
Return a random integer N such that a <= N <= b. Alias for randrange(a, b+1).
Use
randX = (random.randint(0,width-1))
randY = (random.randint(0,height-1))
instead.
Or change it to use random.randrange(0, width) or random.choice(range(width)) - both omit the upper limit value.
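As an aside, numpy's own random generator uses a half-open interval (the high value is excluded), so a vectorized version of the snow loop sidesteps the off-by-one entirely; the image size below is just illustrative:
import numpy as np
import cv2

height, width = 480, 640                     # illustrative size; in practice use img.shape
img = np.zeros((height, width, 3), np.uint8)

# np.random.randint's upper bound is exclusive, so height and width are safe limits here
ys = np.random.randint(0, height, 1000)
xs = np.random.randint(0, width, 1000)
img[ys, xs] = np.random.randint(0, 256, (1000, 3))  # rows (y) first, columns (x) second

cv2.imshow('Snow', img)
cv2.waitKey(0)
cv2.destroyAllWindows()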