So the slope at a non-endpoint point P1 in a Hermite curve is (P2-P0)/2. But how would you get the slope at the endpoints, if I don't want the slope to be 0?
I'm guessing you mean a quadratic Bezier curve, which is defined by two end points and one (inner) control point, because the Hermite curve is already defined by tangent vectors (from which the slope is simply R_i.y / R_i.x, i = 0..1, where R0 and R1 are the tangent vectors). Also, the Hermite curve is cubic; its equivalent Bezier form has 4 control points, i.e. 2 inner control points.
So, for a quadratic Bezier curve defined by P0, P1, P2, the tangents at the endpoints P0 and P2 are just
T0 = P1 - P0
T1 = P2 - P1
So the slopes are
s0 = T0.y / T0.x
s1 = T1.y / T1.x
That's why these curves are so amazingly useful: they're defined by the features that we want to control for the purposes of design (e.g. continuity among segments, by placing control points on a straight line through the common endpoint).
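For concreteness, here is a minimal Python sketch of those endpoint slopes; the point values are made up purely for illustration:

# Quadratic Bezier control points (example values)
P0 = (0.0, 0.0)
P1 = (1.0, 2.0)
P2 = (3.0, 3.0)

# Endpoint tangents
T0 = (P1[0] - P0[0], P1[1] - P0[1])
T1 = (P2[0] - P1[0], P2[1] - P1[1])

# Slopes (assuming the tangents are not vertical)
s0 = T0[1] / T0[0]   # 2.0
s1 = T1[1] / T1[0]   # 0.5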
The quadratic Bezier can also be converted into an equivalent cubic Bezier by degree elevation: the endpoints stay the same, and the two inner control points sit two-thirds of the way from each endpoint towards P1. So the first step to convert the "3-point" curve into the Hermite form is to produce the cubic Bezier control points
B0 = P0
B1 = (P0 + 2*P1) / 3
B2 = (2*P1 + P2) / 3
B3 = P2
Then, using equation (13.32) from Foley and Van Dam, Fundamentals of Interactive Computer Graphics, the Hermite form can be produced with a matrix multiplication
G_h = [ H_0 ]   [  1  0  0  0 ] [ B_0 ]
      [ H_1 ] = [  0  0  0  1 ] [ B_1 ] = M_hb G_b
      [ T_0 ]   [ -3  3  0  0 ] [ B_2 ]
      [ T_1 ]   [  0  0 -3  3 ] [ B_3 ]
I.e. the two endpoints are the same (H0 = B0, H1 = B3), and the tangent vectors are just weighted sums of the relevant points (T0 = -3*B0 + 3*B1 = 2*(P1 - P0), T1 = -3*B2 + 3*B3 = 2*(P2 - P1)).
The tangent vectors here are twice as long as in the first definitions above, but the directions (and hence the slopes) are the same.
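As a quick sanity check, a small numpy sketch of this conversion (the point values are again made up; this is just an illustration, not library code):

import numpy as np

# Quadratic Bezier control points (example values)
P0 = np.array([0.0, 0.0])
P1 = np.array([1.0, 2.0])
P2 = np.array([3.0, 3.0])

# Degree-elevate to a cubic Bezier
G_b = np.array([P0, (P0 + 2*P1) / 3, (2*P1 + P2) / 3, P2])

# Bezier-to-Hermite geometry conversion
M_hb = np.array([[ 1, 0,  0, 0],
                 [ 0, 0,  0, 1],
                 [-3, 3,  0, 0],
                 [ 0, 0, -3, 3]])

H0, H1, T0, T1 = M_hb @ G_b
print(T0, T1)   # [2. 4.] [4. 2.]: twice (P1 - P0) and (P2 - P1), so the same directions and slopes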
Given two vectors X and Y, each with 5 elements, find a vector V that satisfies:
||X-V|| = ||Y-V|| = ||X-Y||
i.e. (X, Y, V) are the vertices of an equilateral triangle.
I have tried the following:
To get a vector v that is perpendicular to x and y:
import numpy as np
# Example vectors
x = [ 0.93937874, 0.05568767, -2.05847484, -1.15965884, -0.34035054]
y = [-0.45921145, -0.55653187, 0.6027685, 0.13113272, -1.2176953 ]
# stack the two vectors into a 2x5 matrix to apply SVD
A = np.array([x, y])
u, s, vh = np.linalg.svd(A)
v = vh[-1]  # last row of vh: a unit vector orthogonal to both x and y
From here, what should I do, assuming that what I have done so far is correct?
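One way to finish the construction from here: put V at the midpoint of X and Y, then move sqrt(3)/2 * ||X - Y|| along a unit vector orthogonal to X - Y (the SVD row computed above gives such a direction, since it is orthogonal to both X and Y). A minimal sketch:

import numpy as np

x = np.array([ 0.93937874,  0.05568767, -2.05847484, -1.15965884, -0.34035054])
y = np.array([-0.45921145, -0.55653187,  0.6027685 ,  0.13113272, -1.2176953 ])

# unit vector orthogonal to both x and y (and therefore to x - y)
_, _, vh = np.linalg.svd(np.array([x, y]))
u = vh[-1]

side = np.linalg.norm(x - y)                      # side length of the triangle
v = (x + y) / 2 + (np.sqrt(3) / 2) * side * u     # apex of the equilateral triangle

print(np.linalg.norm(x - v), np.linalg.norm(y - v), side)   # all three are (numerically) equal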
Let's say I have two arrays of points, and I want to know the distance between each pair of points.
For example:
array_1 = [p1,p2,p3,p4]
array_2 = [p5,p6]
p1 to p6 are points, something like [1, 1, 1] (3D)
the output I want is
output = [[distance of p1 to p5, distance of p2 to p5, ... distance of p4 to p5],
[distance of p1 to p6, distance of p2 to p6, ... distance of p4 to p6]]
What is the best approach if I want to use numpy?
You can first arrange the two arrays into an m×1×3 and a 1×n×3 shape, and then subtract the coordinates:
delta = array_1[:,None] - array_2
Next we can square the differences in the coordinates, sum over the last axis, and take the square root:
distances = np.sqrt((delta*delta).sum(axis=2))
Now distances is an m×n matrix whose ij-th element is the distance between the i-th point of the first array and the j-th point of the second array.
For example if we have as data:
>>> array_1 = np.arange(12).reshape(-1,3)
>>> array_2 = 2*np.arange(6).reshape(-1,3)
We get as result:
>>> delta = array_1[:,None] - array_2
>>> distances = np.sqrt((delta*delta).sum(axis=2))
>>> distances
array([[ 2.23606798, 12.20655562],
       [ 3.74165739,  7.07106781],
       [ 8.77496439,  2.23606798],
       [13.92838828,  3.74165739]])
The first element of array_1 has coordinates (0, 1, 2), and the second element of array_2 has coordinates (6, 8, 10). Hence the distance between them is:
>>> np.sqrt(6*6 + 7*7 + 8*8)
12.206555615733702
This is what we see in the distances array for distances[0,1].
The above method works for an arbitrary number of dimensions: as long as array_1 and array_2 contain points with the same number of coordinates (1D, 2D, 3D, etc.), it calculates the pairwise distances between them.
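If SciPy happens to be available, scipy.spatial.distance.cdist computes the same m×n Euclidean distance matrix in a single call; a short sketch using the same example data:

import numpy as np
from scipy.spatial.distance import cdist

array_1 = np.arange(12).reshape(-1, 3)
array_2 = 2 * np.arange(6).reshape(-1, 3)

distances = cdist(array_1, array_2)   # same m×n matrix as the broadcasting approach above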
This answer isn't specifically for numpy arrays, but could easily be extended to include them. The itertools module, specifically itertools.product, is your friend here.
import math

# Fill this with your formula for distance (Euclidean shown as an example)
def calculate_distance(point_1, point_2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(point_1, point_2)))
# The itertools module helps here
import itertools
array_1, array_2 = [p1, p2, p3, p4], [p5, p6]
# Initialise list to store answers
distances = []
# Iterate over every combination and calculate distance
for i, j in itertools.product(array_1, array_2):
    distances.append(calculate_distance(i, j))
I am working on an intrusion detection algorithm that works on the basis of line-crossing detection. I have developed a basic algorithm using the equation y = mx + c, but it shows some wrong detections when the person gets close to the line. I need some suggestions for making it a robust line-touching algorithm.
If your line has starting and ending points [x1, y1] and [x2, y2], then the line equation is:
y - y1 = m * (x - x1), where m = (y2 - y1)/(x2-x1)
Then you can check whether a point belongs to the line or not by substituting either x or y and checking whether the other coordinate matches the line equation.
In Python:
# the two points that define the line
p1 = [1, 6]
p2 = [3, 2]
# extract x's and y's, just for an easy code reading
x1, y1 = p1
x2, y2 = p2
m = (y2-y1)/(x2-x1)
# your centroid
centroid = [2,4]
x3, y3 = centroid
# check if centroid belongs to the line
if (m * (x3 - x1) + y1) == y3:
    print("Centroid belongs to line")
But probably you'll have better results calculating the distance between the centroid (the red dot) and the line (distance from a point to a line), and then checking whether it is near enough (i.e. the distance is less than some value).
In Python:
# points that define the line
p1 = [1, 6]
p2 = [3, 2]
x1, y1 = p1
x2, y2 = p2
centroid = [2,4]
x3, y3 = centroid
# distance from centroid to line
import math # to calculate square root
dist = abs((y2-y1)*x3 - (x2-x1)*y3 + x2*y1 - y2*x1)/math.sqrt((y2-y1)**2 + (x2-x1)**2)
if dist < some_value:   # some_value: your chosen distance threshold
    print("Near enough")
Let the line go from point l0 to point l1. Then let the centroid be point p1. Let the vector l be the vector from l0 to l1 and p from l0 to p1. Then you can find the distance from the point p1 to the line using dot product as described here.
You probably want to find the distance from your point to the line segment and then evaluate if the point is on the line segment based on that distance. This can be done in a similar fashion but with more logic around it, as described here.
An implementation in python using numpy is given below. It can easily be extended to handle N centroids, enabling you to track different objects in parallel. It works by projecting the point onto the line segment and finding the distance from this point to the centroid.
import numpy as np
def distance_from_line_segment_points_to_point(l0, l1, p1):
    l0 = np.array(l0)
    l1 = np.array(l1)
    p1 = np.array(p1)
    l_vec = l1 - l0
    p_vec = p1 - l0
    if (l0 == l1).all():
        # degenerate segment: just the distance to the single point
        return np.linalg.norm(p_vec)
    l_norm = np.linalg.norm(l_vec)
    l_unit = l_vec / l_norm
    t = np.dot(l_unit, p_vec)   # scalar projection of p_vec onto the segment direction
    if t >= l_norm:
        p_proj = l1             # beyond the far end: clamp to l1
    elif t <= 0:
        p_proj = l0             # before the near end: clamp to l0
    else:
        p_proj = l0 + t * l_unit
    return np.linalg.norm(p1 - p_proj)
print(distance_from_line_segment_points_to_point([0, 0], [0, 0], [1, 1])) # sqrt(2), 1.4
print(distance_from_line_segment_points_to_point([0, 0], [1, 0], [1, 1])) # 1
print(distance_from_line_segment_points_to_point([0, 0], [1, 1], [0, 1])) # sqrt(2)/2, 0.707
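As mentioned above, the projection can be vectorized over several centroids so that multiple objects are handled in one call; a hedged sketch, assuming the centroids are stacked in an (N, 2) array:

import numpy as np

def distances_from_segment_to_points(l0, l1, points):
    # distances from each row of `points` to the segment l0-l1 (illustrative sketch)
    l0, l1, points = np.asarray(l0, float), np.asarray(l1, float), np.asarray(points, float)
    l_vec = l1 - l0
    l_norm = np.linalg.norm(l_vec)
    if l_norm == 0:
        return np.linalg.norm(points - l0, axis=1)
    l_unit = l_vec / l_norm
    t = np.clip((points - l0) @ l_unit, 0.0, l_norm)   # clamped scalar projections
    p_proj = l0 + t[:, None] * l_unit
    return np.linalg.norm(points - p_proj, axis=1)

print(distances_from_segment_to_points([0, 0], [1, 0], [[1, 1], [0.5, 2], [-1, 0]]))  # [1. 2. 1.]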
I am working on a clustering problem. I have 3 cluster centers, as below, and I want to calculate the Euclidean distance from these 3 cluster centers to another m*n-dimensional matrix. It would be very helpful if anyone could guide me through this.
kmeans.cluster_centers_
Out[99]:
array([[-2.23020213,  0.35654288],
       [ 7.69370352,  1.72991757],
       [ 0.92519202, -0.29218753]])
matrix
Out[100]:
array([[ 0.11650485,  0.11650485,  0.11650485,  0.11650485,  0.11650485,
         0.11650485],
       [ 0.11650485,  0.18446602,  0.18446602,  0.2815534 ,  0.37864078,
         0.37864078],
       [ 0.21359223,  0.21359223,  0.21359223,  0.21359223,  0.29708738,
         0.35533981],
       ...,
       [ 0.2640625 ,  0.2734375 ,  0.30546875,  0.31953125,  0.31953125,
         0.31953125],
       [ 1.        ,  1.        ,  1.        ,  1.        ,  1.        ,
         1.        ],
       [ 0.5       ,  0.5       ,  0.5       ,  0.5       ,  0.5       ,
         0.5       ]])
I want to do it in Python. I have used sklearn for my clustering.
Euclidean distance is defined on vectors of a fixed length d.
I.e. it is a function R^d x R^d -> R.
So whatever you are trying to do, it is not the usual Euclidean distance. You seem to have k=3 cluster centers with d=2 coordinates, but your matrix has an incompatible shape (rows of 6 values) that cannot be interpreted in an obvious way as 2-dimensional vectors.
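For reference, if your data points did have d=2 coordinates (one point per row), the distances to the three centers could be computed with the same broadcasting trick shown earlier on this page. A minimal sketch with made-up points:

import numpy as np

centers = np.array([[-2.23020213,  0.35654288],
                    [ 7.69370352,  1.72991757],
                    [ 0.92519202, -0.29218753]])

# hypothetical data: m points, each with d=2 coordinates
points = np.array([[ 0.1,  0.2],
                   [ 1.0,  1.0],
                   [-2.0,  0.4]])

# (m, k) matrix: distance from each point to each cluster center
dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)

Since you already use scikit-learn, kmeans.transform(points) returns the same matrix of distances to the fitted cluster centers.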
I'm writing a library that deals with 2D graphical shapes.
I'm just wondering: why should my coordinate system range over [-1, 1] for both the x and y axes, instead of [0, width] for x and [0, height] for y?
I went for the latter system because I felt it was straightforward to implement.
From Jim Blinn's A Trip Down The Graphics Pipeline, p. 138.
Let's start with what might at first seem the simplest transformation: normalized device coordinates to pixel space. The transform is
s_x * X_NDC + d_x = X_pixel
s_y * Y_NDC + d_y = Y_pixel
A user/programmer does all screen design in NDC. There are three nasty realities of the hardware that NDC hides from us:
The actual number of pixels in x and y.
Non-uniform pixel spacing in x and y.
Up versus down for the Y coordinate. The NDC-to-pixel transformation will invert Y if necessary so that Y in NDC points up.
...
s_x = ( N_x - epsilon ) / 2
d_x = ( N_x - epsilon ) / 2
s_y = ( N_y - epsilon ) / (-2*a)
d_y = ( N_y - epsilon ) / 2
epsilon = .001
a = N_y/N_x (physical screen aspect ratio)
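To make the quoted transform concrete, here is a minimal Python sketch of the NDC-to-pixel mapping using the formulas above (the 640x480 resolution is just an example):

def ndc_to_pixel(x_ndc, y_ndc, n_x, n_y, epsilon=0.001):
    a = n_y / n_x                        # physical screen aspect ratio
    s_x = (n_x - epsilon) / 2
    d_x = (n_x - epsilon) / 2
    s_y = (n_y - epsilon) / (-2 * a)     # negative scale flips Y so that NDC Y points up
    d_y = (n_y - epsilon) / 2
    return s_x * x_ndc + d_x, s_y * y_ndc + d_y

# NDC (0, 0) maps to (roughly) the centre of the pixel grid
print(ndc_to_pixel(0.0, 0.0, 640, 480))   # approximately (320, 240)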