Fabricjs: how to get the starting point of a path? - fabricjs

Is there any built-in method in fabricjs that returns the coordinates of the starting point of a path? I do not need the coordinates of the bounding rectangle.
Thanks!

To get the starting point you have to extract the point and calculate its actual position on the canvas. As of fabric 1.6.0 you have all the functions to do that; for previous versions you need a bit more logic:
example path:
var myPath = new fabric.Path('M 25 0 L 300 100 L 200 300 z');
point:
var x = myPath.path[0][1];
var y = myPath.path[0][2];
var point = {x: x, y: y};
Logic:
1) calculate the path transformation matrix:
needs: path.getCenterPoint(), path.angle, path.scaleX, path.scaleY, path.skewX, path.skewY, path.flipX, path.flipY.
var degreesToRadians = fabric.util.degreesToRadians,
multiplyMatrices = fabric.util.multiplyTransformMatrices,
center = path.getCenterPoint(),
theta = degreesToRadians(path.angle),
cos = Math.cos(theta),
sin = Math.sin(theta),
translateMatrix = [1, 0, 0, 1, center.x, center.y],
rotateMatrix = [cos, sin, -sin, cos, 0, 0],
skewMatrixX = [1, 0, Math.tan(degreesToRadians(path.skewX)), 1, 0, 0],
skewMatrixY = [1, Math.tan(degreesToRadians(path.skewY)), 0, 1, 0, 0],
scaleX = path.scaleX * (path.flipX ? -1 : 1),
scaleY = path.scaleY * (path.flipY ? -1 : 1),
scaleMatrix = [scaleX, 0, 0, scaleY, 0, 0],
matrix = path.group ? path.group.calcTransformMatrix() : [1, 0, 0, 1, 0, 0];
matrix = multiplyMatrices(matrix, translateMatrix);
matrix = multiplyMatrices(matrix, rotateMatrix);
matrix = multiplyMatrices(matrix, scaleMatrix);
matrix = multiplyMatrices(matrix, skewMatrixX);
matrix = multiplyMatrices(matrix, skewMatrixY);
// at this point you have the transform matrix.
Now consider the rendering process:
the canvas is transformed by matrix, then the points of the path are drawn with an offset that you can find in path.pathOffset.x and path.pathOffset.y.
So take your first point and subtract the offset:
point.x -= path.pathOffset.x;
point.y -= path.pathOffset.y;
Then:
var finalPoint = fabric.util.transformPoint(point, matrix);
In fabric 1.6.0 all of this logic is wrapped in a single function; you can just run:
var matrix = path.calcTransformMatrix();
and then proceed with the transformPoint logic.

Check out the Path.path property. It is a 2D array containing an element for each path command. Each inner array holds the command in the first element, e.g. 'M' for move; the following elements contain the coordinates.
var myPath = new fabric.Path('M 25 0 L 300 100 L 200 300 z');
var startX = myPath.path[0][1];
var startY = myPath.path[0][2];

Related

How to apply a translation to a coordinate vector?

I am trying to implement and understand how to perform a simple translation in GLSL.
In order to do that, I am making a simple test in Octave to ensure that I am understanding the transformation itself.
I have the following vector that represents a 2D coordinates embedded into a 4 dimension vector:
candle = [1586266800, 11812, 0, 0]
Which means that the point has locations x=1586266800 and y=11812.
I am trying to apply a translation using the following values:
priceBottom = 11800
timestampOrigin = 1586266800
Which means that the new origin of coordinates will be x=1586266800 and y=11800.
I build the following translation matrix:
[ 1 0 0 tx ]
[ 0 1 0 ty ]
[ 0 0 1 tz ]
[ 0 0 0 1 ]
translation1 = [1, 0, 0, -timestampOrigin; 0, 1, 0, -priceBottom; 0, 0, 1, 0; 0, 0, 0, 1]
Is this matrix correct?
How shall I apply it to the vector?
I have tried:
>> candle * translation1
ans =
1.5863e+009 1.1812e+004 0.0000e+000 -2.5162e+018
Which obviously does not work.
Your translation is wrong. From a mathematical point of view, the transformation you're after is:
v' = T * [v; 1]
where T is the identity matrix with the translation vector placed in its last column, i.e. you need to 'augment' your vector with another dimension with the value 1, so that it can be used to add the 'translation' information to each row during matrix multiplication.
So, if I understood your example correctly
Initial_position = [1586266800; 11812; 0; 0] # note: vertical vector
Augmented_vector = [Initial_position; 1]
Translation_vector = [0 ; -12 ; 0; 0] # note: vertical vector
Transformation = eye(5);
Transformation( 1:4, 5 ) = Translation_vector
Translated_vector = Transformation * Augmented_vector;
Translated_vector = Translated_vector( 1:4, 1 )
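The question's original translation (moving the origin to (timestampOrigin, priceBottom)) can be checked the same way. A minimal NumPy sketch of the augmented-vector multiplication (NumPy stands in for Octave here; values are taken from the question):

```python
import numpy as np

# Augment the 4D point with a trailing 1 so the matrix can carry a translation.
point = np.array([1586266800.0, 11812.0, 0.0, 0.0])
augmented = np.append(point, 1.0)

# 5x5 homogeneous transform: identity plus the translation in the last column.
translation = np.array([-1586266800.0, -11800.0, 0.0, 0.0])
T = np.eye(5)
T[:4, 4] = translation

# Column vector on the right, matrix on the left.
translated = (T @ augmented)[:4]  # x -> 0, y -> 12
```

Note that the multiplication is matrix-times-column-vector; the row-vector form `candle * translation1` from the question multiplies on the wrong side, which is why it produced garbage.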

Sphere-Sphere Intersection

I have two spheres that are intersecting, and I'm trying to find the intersection point nearest in the direction of the point (0,0,1)
My first sphere's (c1) center is at (c1x = 0, c1y = 0, c1z = 0) and has a radius of r1 = 2.0
My second sphere's (c2) center is at (c2x = 2, c2y = 0, c2z = 0) and has a radius of r2 = 2.0
I've been following the logic on this identical question for the 'Typical intersections' part, but was having some trouble understanding it and was hoping someone could help me.
First I'm finding the center of intersection c_i and radius of the intersecting circle r_i:
Here the first sphere has center c_1 and radius r_1, the second c_2 and r_2, and their intersection has center c_i and radius r_i. Let d = ||c_2 - c_1||, the distance between the spheres.
So sphere1 has center c_1 = (0,0,0) with r_1 = 2. Sphere2 has c_2 = (2,0,0) with r_2 = 2.0.
d = ||c_2 - c_1|| = 2
h = 1/2 + (r_1^2 - r_2^2)/(2* d^2)
Substituting the values and solving for h gives 0.5:
h = .5 + (2^2 - 2^2)/(2*2^2)
h = .5 + (0)/(8)
h = 0.5
We can sub this into our formula for c_i above to find the center of the circle of intersections.
c_i = c_1 + h * (c_2 - c_1)
(this equation was my original question, but a comment on this post helped me understand that it should be solved for each of x, y, z)
c_i_x = c_1_x + h * (c_2_x - c_1_x)
c_i_x = 0 + 0.5 * (2 - 0) = 1
c_i_y = c_1_y + h * (c_2_y - c_1_y)
c_i_y = 0 + 0.5 * (0 - 0) = 0
c_i_z = c_1_z + h * (c_2_z - c_1_z)
c_i_z = 0 + 0.5 * (0 - 0) = 0
c_i = (c_i_x, c_i_y, c_i_z) = (1, 0, 0)
Then, reversing one of our earlier Pythagorean relations to find r_i:
r_i = sqrt(r_1*r_1 - h*h*d*d)
r_i = sqrt(4 - .5*.5*2*2)
r_i = sqrt(4 - 1)
r_i = sqrt(3)
r_i = 1.73205081
So if my calculations are correct, I know the circle where my two spheres intersect is centered at (1, 0, 0) and has a radius of 1.73205081
I feel somewhat confident about all the calculations above, the steps make sense as long as I didn't make any math mistakes. I know I'm getting closer but my understanding begins to weaken starting at this point. My end goal is to find an intersection point nearest to (0,0,1), and I have the circle of intersection, so I think what I need to do is find a point on that circle which is nearest to (0,0,1) right?
The next step from this solution says:
So, now we have the center and radius of our intersection. Now we can revolve this around the separating axis to get our full circle of solutions. The circle lies in a plane perpendicular to the separating axis, so we can take n_i = (c_2 - c_1)/d as the normal of this plane.
So finding the normal of the plane involves n_i = (c_2 - c_1)/d, do I need to do something similar for finding n_i for x, y, and z again?
n_i_x = (c_2_x - c_1_x)/d = (2-0)/2 = 2/2 = 1
n_i_y = (c_2_y - c_1_y)/d = (0-0)/2 = 0/2 = 0
n_i_z = (c_2_z - c_1_z)/d = (0-0)/2 = 0/2 = 0
After choosing a tangent and bitangent t_i and b_i perpendicular to this normal and each other, you can write any point on this circle as: p_i(theta) = c_i + r_i * (t_i * cos(theta) + b_i * sin(theta));
Could I choose t_i and b_i from the point I want to be nearest to? (0,0,1)
Because of the Hairy Ball Theorem, there's no one universal way to choose the tangent/bitangent to use. My recommendation would be to pick one of the coordinate axes not parallel to n_i, and set t_i = normalize(cross(axis, n_i)), and b_i = cross(t_i, n_i) or somesuch.
c_i = c_1 + h * (c_2 - c_1)
This is vector expression, you have to write similar one for every component like this:
c_i.x = c_1.x + h * (c_2.x - c_1.x)
and similar for y and z
As a result, you'll get circle center coordinates:
c_i = (1, 0, 0)
As your quote says, choose an axis not parallel to the n vector - for example the y-axis - take its direction vector Y_dir = (0, 1, 0), and cross it with n:
t = Y_dir x n = (0, 0, -1)
b = n x t = (0, 1, 0)
Now you have two vectors t,b in circle plane to build circumference points.
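As a cross-check of the whole derivation, here is a small NumPy sketch (not part of the original answers) that finds the circle point nearest to (0, 0, 1) by projecting the query point onto the circle's plane instead of sweeping theta:

```python
import numpy as np

c1, r1 = np.array([0.0, 0.0, 0.0]), 2.0
c2, r2 = np.array([2.0, 0.0, 0.0]), 2.0
query = np.array([0.0, 0.0, 1.0])

d = np.linalg.norm(c2 - c1)              # distance between sphere centers
h = 0.5 + (r1**2 - r2**2) / (2 * d**2)   # fraction along c1 -> c2
c_i = c1 + h * (c2 - c1)                 # center of the intersection circle
r_i = np.sqrt(r1**2 - (h * d)**2)        # its radius
n_i = (c2 - c1) / d                      # normal of the circle's plane

# Project the query point's offset into the circle's plane, then push
# it out to the circle's radius to get the nearest point on the circle.
offset = query - c_i
in_plane = offset - np.dot(offset, n_i) * n_i
nearest = c_i + r_i * in_plane / np.linalg.norm(in_plane)
# nearest is approximately (1, 0, 1.732), i.e. (1, 0, sqrt(3))
```

The projection shortcut gives the same result as maximising the cos/sin parameterisation over theta, as long as the query point does not lie on the circle's axis.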

Apply filters to Hough Line Detection

In my application, I use Hough Line Detection to detect lines inside an image. What I'm trying to do is to retrieve only the lines that compose the border and the corners of each square of the chessboard. How can I apply filters to obtain a clear view of the lines?
My idea is to apply filters that check the angle between each pair of lines (looking for 90 degrees) or the distance between them, so that only the lines that count are kept. The final goal will be to obtain the intersections between these lines to get the coordinates of each square.
Code:
import math
import numpy as np
import cv2

chessBoard = cv2.imread('img.png')
gray = cv2.cvtColor(chessBoard, cv2.COLOR_BGR2GRAY)
dst = cv2.Canny(gray, 50, 200)
lines = cv2.HoughLines(dst, 1, math.pi/180.0, 100, np.array([]), 0, 0)
for i in range(lines.shape[0]):
    rho, theta = lines[i][0][0], lines[i][0][1]
    a = math.cos(theta)
    b = math.sin(theta)
    x0, y0 = a*rho, b*rho
    pt1 = (int(x0 + 1000*(-b)), int(y0 + 1000*(a)))
    pt2 = (int(x0 - 1000*(-b)), int(y0 - 1000*(a)))
    cv2.line(chessBoard, pt1, pt2, (0, 255, 0), 2, cv2.LINE_AA)
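One way to realise the angle filter described in the question, as a hedged sketch: `filter_grid_lines` is a hypothetical helper taking the (rho, theta) pairs that cv2.HoughLines returns, and the 5-degree tolerance is an assumption. It keeps only lines whose theta is near 0 or pi (vertical in image space) or near pi/2 (horizontal), which is what a chessboard grid produces:

```python
import math

def filter_grid_lines(hough_lines, tol=math.radians(5)):
    """Split (rho, theta) pairs into near-vertical and near-horizontal
    lines, discarding everything else (e.g. diagonal noise)."""
    vertical, horizontal = [], []
    for rho, theta in hough_lines:
        if theta < tol or theta > math.pi - tol:
            vertical.append((rho, theta))
        elif abs(theta - math.pi / 2) < tol:
            horizontal.append((rho, theta))
    return vertical, horizontal

# Toy input: one vertical, one horizontal, one diagonal outlier.
v, h = filter_grid_lines([(10.0, 0.01), (25.0, math.pi / 2), (5.0, math.pi / 4)])
# v and h each keep one line; the diagonal is dropped
```

Intersecting each surviving vertical line with each horizontal line would then give the square corners.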

Project 2D points onto Circle/Curve

I have a list of XY points which represent text in a "dot matrix" form. The origin of the first point in the set is 0,0 (the upper-left point). (I can change the points to incremental coordinates too.)
I would like to project or wrap the points around a radius like so:
I've tried to follow this answer, but the results are not what I expect:
How To Project Point Onto a Sphere
I've also tried converting to polar coordinates, imposing the R coordinate to determine Theta, and then converting back to Cartesian, but that does not work either.
For example, the letter T produces this which should then be projected to the curve:
0, 0
0.1, 0
0.2, 0
0.2, -0.1
0.2, -0.2
0.2, -0.3
0.2, -0.4
0.2, -0.5
0.2, -0.6
0.3, 0
0.4, 0
What is the process to get my points to follow a radial curve?
Say you want to curve around a circle centered at (cx, cy) with radius r, using dots with size (diameter) 0.1.
The distance d from the center of a dot at (x, y) to the center of the circle is:
d = r + y - size / 2
(size / 2 is subtracted to get the position of the center of the dot)
The angle theta (in radians) around the circle is:
theta = (x + size / 2) / r
The position of the dot is then:
dx = cx + d * cos(theta)
dy = cy - d * sin(theta)
Here's an example using SVG and Javascript
var svg = document.getElementById('curve-text');
var NS = "http://www.w3.org/2000/svg";
var points = [
[0, 0],
[0.1, 0],
[0.2, 0],
[0.2, -0.1],
[0.2, -0.2],
[0.2, -0.3],
[0.2, -0.4],
[0.2, -0.5],
[0.2, -0.6],
[0.3, 0],
[0.4, 0]
];
var cx = 2;
var cy = 2;
var r = 2;
var size = 0.1;
drawCircle(cx, cy , r - 0.7);
var circumference = Math.PI * 2 * r;
var angle = 360 / circumference;
var radians = 1 / r;
// Add 12 copies of the letter T around the circle
// Add 12 copies of the letter T around the circle
for (var j = 0; j < 12; j++) {
  for (var i = 0; i < points.length; i++) {
    addDots(points[i][0] + j, points[i][1], size, cx, cy, r);
  }
}
function drawCircle(cx, cy, r) {
  var circle = document.createElementNS(NS, 'circle');
  circle.setAttributeNS(null, 'cx', cx);
  circle.setAttributeNS(null, 'cy', cy);
  circle.setAttributeNS(null, 'r', r);
  circle.setAttributeNS(null, 'fill', 'none');
  circle.setAttributeNS(null, 'stroke', 'black');
  circle.setAttributeNS(null, 'stroke-width', '0.02');
  svg.appendChild(circle);
}
function addDots(x, y, size, cx, cy, r) {
  var dotR = size / 2;
  var d = r + (y - dotR);
  var theta = (x + dotR) / r;
  var dotX = cx + d * Math.cos(theta);
  var dotY = cy - d * Math.sin(theta);
  var dot = document.createElementNS(NS, 'circle');
  dot.setAttributeNS(null, 'cx', dotX);
  dot.setAttributeNS(null, 'cy', dotY);
  dot.setAttributeNS(null, 'r', dotR);
  svg.appendChild(dot);
}
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 4 4" id="curve-text" width="200" height="200">
</svg>
You need to take the (X, Y) coordinates as if they were (Θ, R) polar coordinates (in this order), and convert to Cartesian.
Experiment a little to understand the effect of horizontal or vertical translation before the transformation, as well as horizontal/vertical scaling. Ensure that for all points R > r, the radius of the circle.
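A minimal Python sketch of that polar reading (`wrap_point` is a hypothetical helper; scaling the x coordinate by 1/r is an assumption, chosen so that horizontal spacing becomes arc length along the circle, matching the SVG answer above):

```python
import math

def wrap_point(x, y, cx, cy, r):
    """Treat x as an angle (arc length / r) and y as a radial offset
    from r, then convert back to Cartesian around the center (cx, cy)."""
    theta = x / r
    radius = r + y
    return (cx + radius * math.cos(theta), cy - radius * math.sin(theta))

# The first point of the text (0, 0) lands on the circle itself at angle 0:
px, py = wrap_point(0.0, 0.0, 2.0, 2.0, 2.0)  # (4.0, 2.0)
```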

vtk projection matrix: from world to display

I'm trying to obtain a 4x4 projection matrix that transforms a point in the world to the display coordinates.
Having a pixel (x, y) and the corresponding z-value (from the zbuffer), I obtain its 3D world coordinates with vtkWorldPointPicker class. Let's denote the result by x.
According to the documentation, I can compute the view coordinates of the world point by applying the matrix GetCompositeProjectionTransformMatrix to x. Next, I transform from view coordinates to the initial display coordinates using the code found in vtkViewport::ViewToDisplay (*):
dx = (v[0] + 1.0) * (sizex*(v[2]-v[0])) / 2.0 + sizex*v[0];
dy = (v[1] + 1.0) * (sizey*(v[3]-v[1])) / 2.0 + sizey*v[1];
where sizex and sizey are the width and height of the image in pixels, and v are the computed view coordinates.
Unfortunately, the values I get back do not match the original:
display [0, 0, 0.716656] // x,y-pixel coordinates and the zbuffer
x = [0.0255492, -0.0392383, 0.00854707] // world coordinates (using vtkWorldPointPicker)
// camera->GetCompositeProjectionTransformMatrix
P = [
-1.84177 0 0 0
0 1.20317 1.39445 0
0 -757.134 653.275 -9.9991
0 -0.757126 0.653268 0 ]
v = [-0.0470559, -0.0352919, 25.2931, 0.0352919] // P*x
a = [7697.18, -0.597848] // using (*)
Is this approach (in general) correct, or is there a more conventional way to do this? Thanks for any help.
Edit: the provided snippet from vtkViewport::ViewToDisplay is incorrect. It should read:
dx = (v[0] + 1.0) * (sizex*(vp[2]-vp[0])) / 2.0 + sizex*vp[0];
dy = (v[1] + 1.0) * (sizey*(vp[3]-vp[1])) / 2.0 + sizey*vp[1];
Note that v refers to the normalised view coordinates and vp is the viewport (by default, vp := [0, 0, 1, 1])!
The conversion is indeed valid, although there might be built-in ways to obtain the final matrix.
Assuming only one (default) viewport is used, the matrix converting the view into display coordinates is:
M = [X/2, 0, 0, X/2,
0, Y/2, 0, Y/2,
0, 0, 1, 0,
0, 0, 0, 1]
where X and Y are the width and height of the image in pixels.
Hence, given a point x in the world coordinates, the display coordinates in homogeneous form are:
c = M * P * x;
where P is the CompositeProjectionTransformMatrix. After normalising (c[i] /= c[3], i = 0,1,2) we arrive at the original pixel values.
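Putting M and P together, a hedged NumPy sketch (the identity P and the 640x480 image size below are placeholders; a real P would come from camera->GetCompositeProjectionTransformMatrix):

```python
import numpy as np

def world_to_display(x_world, P, size_x, size_y):
    """Map a homogeneous world point to pixel coordinates:
    c = M * P * x, then divide by the homogeneous component c[3]."""
    M = np.array([
        [size_x / 2, 0.0,        0.0, size_x / 2],
        [0.0,        size_y / 2, 0.0, size_y / 2],
        [0.0,        0.0,        1.0, 0.0],
        [0.0,        0.0,        0.0, 1.0],
    ])
    c = M @ P @ x_world
    return c[:3] / c[3]

# Sanity check with P = identity: the normalised view corner (1, 1)
# maps to the full image size, and (-1, -1) would map to (0, 0).
corner = world_to_display(np.array([1.0, 1.0, 0.0, 1.0]), np.eye(4), 640, 480)
# corner is [640, 480, 0]
```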
