I need to fit a globally smooth cubic b-spline to interpolate through some points, while approximating others (i.e., 3rd case):
As I understand it, given the above example, approximation would involve something like this:
Input: Set of data points {P0, P1, ..., P3}, degree=3, desired number of control points m
Output: Approximated control points {Q0, Q1, ..., Qm}
1. Generate knots vector U = {u0, u1, ..., u8}
2. For each data point Pi, compute the weight Wi
3. Formulate and solve the linear system of equations to compute the control points {Q0, Q1, ..., Qm}:
sum of (Nj(ui) * Qj) for j = 0 to m ≈ Pi for each data point Pi, solved in the weighted least-squares sense (weights Wi)
4. Return the set of approximated control points {Q0, Q1, ..., Qm}
And interpolation would involve something like this:
Input: Set of control points {P0, P1, P2, P3}, degree=3, number of evaluation points m
Output: Interpolated points {Q0, Q1, Q2, ..., Qm}
1. Generate knots vector U = {u0, u1, ..., u8}
2. For each evaluation point u in U, compute the basis functions N(u)
3. For each evaluation point u, compute the interpolated point Q(u) as:
Q(u) = sum of (Ni(u) * Pi) for i = 0 to 3
4. Return the set of interpolated points {Q0, Q1, ..., Qm}
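For concreteness, here is a small sketch of both procedures using SciPy (the parameter values, example data points and knot vector below are made up for illustration): make_interp_spline builds the interpolating cubic B-spline, make_lsq_spline solves the least-squares approximation for a chosen knot vector.

import numpy as np
from scipy.interpolate import make_interp_spline, make_lsq_spline

u = np.linspace(0, 1, 8)                    # parameter value assigned to each data point
pts = np.c_[np.cos(3 * u), np.sin(3 * u)]   # example 2D data points P_i

# Interpolation: the cubic curve passes through every P_i exactly.
interp = make_interp_spline(u, pts, k=3)

# Approximation: pick a smaller knot vector (here 5 control points) and solve
# the least-squares system for the control points Q_j.  A per-point weight
# array could be passed via the optional w argument to play the role of Wi.
t = np.r_[[0, 0, 0, 0], [0.5], [1, 1, 1, 1]]
approx = make_lsq_spline(u, pts, t, k=3)

print(interp(0.25), approx(0.25))           # evaluate both curves at u = 0.25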
How can I combine the above techniques, or modify either one of them, to accomplish a mixture of both curve approximation and interpolation?
I have a path output as shown in the image below, in coordinate system 1, wherein the start point and the end point are (40,40) and (10,20) respectively.
I want to scale this path to a new coordinate system (coordinate system 2) with a known start and end point; the path has to scale and adjust between the new points.
I believe affine transforms / linear algebra might help.
How do I achieve this? And will this be accurate, or will it distort?
To find an appropriate affine transformation (there are many ways to map two points onto two other points, but we choose the simplest one), you can apply these elementary steps:
Shift coordinates by (-startx, -starty)
Scale along X-axis with coefficient (newendx-newstartx)/(endx-startx) (here -80/3)
Scale along Y-axis with coefficient (newendy-newstarty)/(endy-starty) (here -35)
Shift coordinates by (newstartx, newstarty)
The resulting affine transformation is the product of these four matrices.
Using Wolfram Alpha to get the matrix:
M == {{c, 0, 0},
{0, d, 0},
{a*c + e, b*d + f, 1}}
where a, b, c, d, e, f are the values from the description above (a = -startx and so on).
Now transform coordinates by multiplying the point coordinates by the matrix M:
(x, y, 1) * M = (newx, newy, 1)
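A quick numeric sketch of the four steps above in the same row-vector convention ((x, y, 1) * M); the new start and end points used here are placeholders, since only the original (40,40) and (10,20) are given:

import numpy as np

def path_transform(start, end, new_start, new_end):
    sx, sy = start; ex, ey = end
    nsx, nsy = new_start; nex, ney = new_end
    T1 = np.array([[1, 0, 0], [0, 1, 0], [-sx, -sy, 1]], float)            # shift to origin
    S  = np.diag([(nex - nsx) / (ex - sx), (ney - nsy) / (ey - sy), 1.0])  # scale X and Y
    T2 = np.array([[1, 0, 0], [0, 1, 0], [nsx, nsy, 1]], float)            # shift to new start
    return T1 @ S @ T2                                                     # composed matrix M

M = path_transform((40, 40), (10, 20), (0, 0), (5, 5))       # placeholder new endpoints
print(np.array([40, 40, 1]) @ M, np.array([10, 20, 1]) @ M)  # start and end map as expected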
I've got a shape consisting of four points, A, B, C and D, of which only the positions are known. The goal is to transform these points so that they have specific angles and offsets relative to each other.
For example: A(-1,-1) B(2,-1) C(1,1) D(-2,1), which should be transformed to a perfect square (all angles 90) with offsets between AB, BC, CD and AD all being 2. The result should be a square slightly rotated counter-clockwise.
What would be the most efficient way to do this?
I'm using this for a simple block simulation program.
As Mark alluded, we can use constrained optimization to find the side-2 square that minimizes the sum of squared distances to the corners of the original.
We need to minimize f = (a-A)^2 + (b-B)^2 + (c-C)^2 + (d-D)^2 (where the square is actually a dot product of the vector argument with itself) subject to some constraints.
Following the method of Lagrange multipliers, I chose the following distance constraints:
g1 = (a-b)^2 - 4
g2 = (c-b)^2 - 4
g3 = (d-c)^2 - 4
and the following angle constraints:
g4 = (b-a).(c-b)
g5 = (c-b).(d-c)
A quick napkin sketch should convince you that these constraints are sufficient.
We then want to minimize f subject to the g's all being zero.
The Lagrange function is:
L = f + Sum(i = 1 to 5, λi * gi)
where the λi are the Lagrange multipliers.
The gradient is non-linear, so we have to compute the Hessian and use multivariate Newton's method to iterate to a solution.
Here's the solution I got (red) for the data given (black):
This took 5 iterations, after which the L2 norm of the step was 6.5106e-9.
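For reference, here is a sketch of the same constrained fit for the example data A(-1,-1), B(2,-1), C(1,1), D(-2,1), but handing the equality constraints g1..g5 to SciPy's SLSQP solver instead of hand-rolling the Newton iteration on the Lagrangian:

import numpy as np
from scipy.optimize import minimize

A, B, C, D = map(np.array, [(-1.0, -1.0), (2.0, -1.0), (1.0, 1.0), (-2.0, 1.0)])

def cost(q):
    a, b, c, d = q.reshape(4, 2)
    return sum(np.dot(p - t, p - t) for p, t in zip((a, b, c, d), (A, B, C, D)))

def cons(q):
    a, b, c, d = q.reshape(4, 2)
    return [
        np.dot(a - b, a - b) - 4.0,   # g1: |ab| = 2
        np.dot(c - b, c - b) - 4.0,   # g2: |bc| = 2
        np.dot(d - c, d - c) - 4.0,   # g3: |cd| = 2
        np.dot(b - a, c - b),         # g4: right angle at b
        np.dot(c - b, d - c),         # g5: right angle at c
    ]

res = minimize(cost, np.concatenate([A, B, C, D]),
               constraints={"type": "eq", "fun": cons}, method="SLSQP")
print(res.x.reshape(4, 2))            # fitted corners a, b, c, d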
While Codie CodeMonkey's solution is a perfectly valid one (and a great use case for Lagrange multipliers at that), it's worth mentioning that if the side length is not given, this particular problem actually has a closed-form solution.
We would like to minimise the distance between the corners of our fitted square and the ones of the given quadrilateral. This is equivalent to minimising the cost function:
f(x1,...,y4) = (x1-ax)^2+(y1-ay)^2 + (x2-bx)^2+(y2-by)^2 +
(x3-cx)^2+(y3-cy)^2 + (x4-dx)^2+(y4-dy)^2
where Pi = (xi,yi) are the corners of the fitted square and A = (ax,ay) through D = (dx,dy) are the given corners of the quadrilateral in clockwise order. Since we are fitting a square, we have certain constraints on the positions of the four corners. In fact, if two opposite corners are given, they are enough to describe a unique square (save for the mirror image across the diagonal).
Parametrization of the points
This means that two opposite corners are enough to represent our target square. We can parametrise the two remaining corners using the components of the first two. In the above example we express P2 and P4 in terms of P1 = (x1,y1) and P3 = (x3,y3). If you need a visualisation of the geometrical intuition behind the parametrisation of a square you can play with the interactive version.
P2 = (x2,y2) = ( (x1+x3-y3+y1)/2 , (y1+y3-x1+x3)/2 )
P4 = (x4,y4) = ( (x1+x3+y3-y1)/2 , (y1+y3+x1-x3)/2 )
Substituting for x2,x4,y2,y4 means that f(x1,...,y4) can be rewritten to:
f(x1,x3,y1,y3) = (x1-ax)^2+(y1-ay)^2 + ((x1+x3-y3+y1)/2-bx)^2+((y1+y3-x1+x3)/2-by)^2 +
(x3-cx)^2+(y3-cy)^2 + ((x1+x3+y3-y1)/2-dx)^2+((y1+y3+x1-x3)/2-dy)^2
a function which only depends on x1,x3,y1,y3. To find the minimum of the resulting function we then set the partial derivatives of f(x1,x3,y1,y3) equal to zero. They are the following:
df/dx1 = 4x1-dy-dx+by-bx-2ax = 0 --> x1 = ( dy+dx-by+bx+2ax)/4
df/dx3 = 4x3+dy-dx-by-bx-2cx = 0 --> x3 = (-dy+dx+by+bx+2cx)/4
df/dy1 = 4y1-dy+dx-by-bx-2ay = 0 --> y1 = ( dy-dx+by+bx+2ay)/4
df/dy3 = 4y3-dy-dx-2cy-by+bx = 0 --> y3 = ( dy+dx+by-bx+2cy)/4
You may see where this is going: a simple rearrangement of the terms leads to the final solution.
Final solution
Substituting x1, x3, y1, y3 from above back into the parametrisation of P2 and P4 gives the corners of the fitted square.
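A minimal sketch of that closed form (assuming the corners are supplied in the same A, B, C, D order as above):

import numpy as np

def fit_square(A, B, C, D):
    (ax, ay), (bx, by), (cx, cy), (dx, dy) = A, B, C, D
    # P1 and P3 from the zeroed partial derivatives
    x1 = ( dy + dx - by + bx + 2*ax) / 4.0
    x3 = (-dy + dx + by + bx + 2*cx) / 4.0
    y1 = ( dy - dx + by + bx + 2*ay) / 4.0
    y3 = ( dy + dx + by - bx + 2*cy) / 4.0
    # P2 and P4 recovered from the square parametrisation
    P2 = ((x1 + x3 - y3 + y1) / 2.0, (y1 + y3 - x1 + x3) / 2.0)
    P4 = ((x1 + x3 + y3 - y1) / 2.0, (y1 + y3 + x1 - x3) / 2.0)
    return (x1, y1), P2, (x3, y3), P4

print(fit_square((-1, -1), (2, -1), (1, 1), (-2, 1)))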
I have a shape made out of several triangles which is positioned somewhere in world space with scale, rotate, translate. I also have a plane on which I would like to project (orthogonal) the shape.
I could multiply every vertex of every triangle in the shape by the object's transformation matrix to find out where it is located in world coordinates, and then project this point onto the plane.
But I don't need to draw the projection, so instead I would like to transform the plane with the inverse transformation matrix of the shape, and then project all the vertices onto the (inverse-transformed) plane, since that only requires me to transform the plane once and not every vertex.
My plane has a normal (xyz) and a distance (d). How do I multiply it with a 4x4 transformation matrix so that it turns out ok?
Can you create a vec4 as xyzd and multiply that? Or maybe create a vector xyz1, and then what do I do with d?
You need to convert your plane to a different representation: one where N is the normal and O is any point on the plane. The normal you already know; it's your (xyz). A point on the plane is also easy: it's your normal N times your distance d.
Transform O by the 4x4 matrix in the normal way, this becomes your new O. You will need a Vector4 to multiply with a 4x4 matrix, set the W component to 1 (x, y, z, 1).
Also transform N by the 4x4 matrix, but set the W component to 0 (x, y, z, 0). Setting the W component to 0 means that your normals won't get translated. If your matrix is composed of more than just translations and rotations, then this step isn't so simple: instead of multiplying by your transformation matrix, you have to multiply by the transpose of the inverse of the matrix, i.e. Matrix4.Transpose(Matrix4.Invert(Transform)); there's a good explanation of why here.
You now have a new normal vector N and a new position vector O. However, I suppose you want it in xyzd form again? No problem. As before, xyz is your normal N; all that's left is to calculate d. d is the distance of the plane from the origin along the normal vector, hence it is simply the dot product of O and N.
There you have it! If you tell me what language you're doing this in, I'd happily type it up in code as well.
EDIT, In pseudocode:
The plane is vector3 xyz and number d, the matrix is a matrix4x4 M
vector4 O = (xyz * d, 1)
vector4 N = (xyz, 0)
O = M * O
N = transpose(invert(M)) * N
xyz = N.xyz
d = dot(O.xyz, N.xyz)
xyz and d represent the new plane
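The same thing as a small NumPy sketch (plane stored as normal xyz plus distance d with n·x = d, column-vector convention; the renormalisation of N is an extra step I've added, useful if M contains scaling):

import numpy as np

def transform_plane(n, d, M):
    O = np.append(np.asarray(n, float) * d, 1.0)   # a point on the plane, w = 1
    N = np.append(np.asarray(n, float), 0.0)       # the normal, w = 0 so it is not translated
    O = M @ O
    N = np.linalg.inv(M).T @ N                     # transpose of the inverse for normals
    n_new = N[:3] / np.linalg.norm(N[:3])          # renormalise in case M contains scaling
    return n_new, float(np.dot(O[:3], n_new))      # new xyz and new d

print(transform_plane([0.0, 0.0, 1.0], 2.0, np.diag([1.0, 1.0, 3.0, 1.0])))  # plane z = 2 -> z = 6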
This question is a bit old but I would like to correct the accepted answer.
You do not need to convert your plane representation.
Any point q = (x, y, z, 1) lies on the plane p = (a, b, c, d) if a*x + b*y + c*z + d = 0.
It can be written as a dot product: p . q = 0.
You are looking for the plane p' = (a', b', c', d') transformed by your 4x4 matrix M, i.e. such that every transformed point q' = M * q lies on it.
For the same reason, you must have p' . (M * q) = 0 whenever p . q = 0.
So p' . (M * q) = p . q, and with some rearrangement transpose(p') * M = transpose(p).
TLDR : if p=(a,b,c,d), p' = transpose(inverse(M))*p
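A quick numerical check of that result (the matrix and plane below are arbitrary examples):

import numpy as np

M = np.array([[1., 0., 0., 2.],    # some affine transform (translate x, scale y, translate z)
              [0., 2., 0., 0.],
              [0., 0., 1., -1.],
              [0., 0., 0., 1.]])
p = np.array([0., 0., 1., -3.])    # plane z = 3, i.e. 0x + 0y + 1z - 3 = 0
q = np.array([1., 5., 3., 1.])     # a point on it: p . q == 0
p_new = np.linalg.inv(M).T @ p
print(p @ q, p_new @ (M @ q))      # both are ~0: the transformed point lies on the transformed plane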
Notation:
n is a normal represented as a (1x3) row-vector
n' is the transformed normal of n according to transform matrix T
(n|d) is a plane represented as a (1x4) row-vector (with n the plane's normal and d the plane's distance to the origin)
(n'|d') is the transformed plane of (n|d) according to transform matrix T
T is a (4x4) (affine) column-major transformation matrix (i.e. transforming a column-vector t is defined as t' = T t).
Transforming a normal n:
n' = n adj(T)
Transforming a plane (n|d):
(n'|d') = (n|d) adj(T)
Here, adj is the adjugate of a matrix which is defined as follows in terms of the inverse and determinant of a matrix:
T^-1 = adj(T)/det(T)
Note:
The adjugate is generally not equal to the inverse of a transformation matrix T. If T includes a reflection, det(T) = -1, reversing the winding order!
Re-normalizing n' is mathematically not required (but maybe numerically, depending on the implementation) since scaling is taken care of by the determinant. Thanks to Adrian Leonhard.
You can directly transform the plane without first decomposing and recomposing a plane (normal and point).
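A short sketch of the row-vector/adjugate form (assuming the (n|d) row vector encodes the plane as n·x + d = 0):

import numpy as np

def transform_plane_adj(plane_row, T):
    adj = np.linalg.inv(T) * np.linalg.det(T)   # adjugate of T, so no division by det is needed
    return plane_row @ adj                      # (n'|d') = (n|d) adj(T), row-vector convention

T = np.diag([2.0, 1.0, 1.0, 1.0])               # scale x by 2
print(transform_plane_adj(np.array([1.0, 0.0, 0.0, -2.0]), T))  # plane x = 2 maps to x = 4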
I am trying to do inverse kinematics for a serial chain of arbitrarily many links.
In the following paper, I have found an example for how to calculate the Jacobian matrix.
Entry (i, j) = v[j] × (s[i] - p[j])   (a cross product)
where:
v[j] is the unit vector of the axis of rotation for joint j
s[i] is the position (in world coords?) of end effector i
p[j] is the position (in world coords?) of joint j
The paper says that this works if j is a rotational joint with a single degree of freedom. But my rotational joints have no constraints on their rotation. What formula do I then want? (Or am I possibly misunderstanding the term "degree of freedom"?)
This question is old, but I'll answer anyway, as it is something I have thought about but never really gotten around to implement.
Rotational joints with no constraints are called ball joints or spherical joints; they have 3 degrees of freedom. You can use the formula in the tutorial for spherical joints also, if you parameterize each spherical joint in terms of 3 rotational (revolute) joints of one degree of freedom each.
For example: Let N be the number of spherical joints. Suppose each joint has a local transformation T_local[i] and a world transformation
T_world[i] = T_local[0] * ... * T_local[i]
Let R_world[j][k], k = 0, 1, 2, be the k-th column of the rotation matrix of T_world[j]. Define the 3 * N joint axes as
v[3 * j + 0] = R_world[j][0]
v[3 * j + 1] = R_world[j][1]
v[3 * j + 2] = R_world[j][2]
Compute the Jacobian J for some end-effector s[i], using the formula of the tutorial. All coordinates are in the world frame.
Using for example the pseudo-inverse method gives a displacement dq that moves the end-effector in a given direction dx.
The length of dq is 3 * N. Define
R_dq[j] =
R_x[dq[3 * j + 0]] *
R_y[dq[3 * j + 1]] *
R_z[dq[3 * j + 2]]
for j = 0, 1, ..., N-1, where R_x, R_y, R_z are the transformation matrices for rotation about the x-, y-, and z-axes.
Update the local transformations:
T_local[j] := T_local[j] * R_dq[j]
and repeat from the top to move the end-effector in other directions dx.
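A rough NumPy sketch of one such iteration (the names ik_step, _rot and effector_local are made up for illustration; the end effector is assumed to be given as a homogeneous point in the last link's local frame):

import numpy as np

def _rot(axis, a):
    # 4x4 rotation about a principal axis ('x', 'y' or 'z') by angle a
    c, s = np.cos(a), np.sin(a)
    R = np.eye(4)
    i, j = {'x': (1, 2), 'y': (2, 0), 'z': (0, 1)}[axis]
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

def ik_step(T_local, effector_local, dx, step=0.5):
    # One pseudo-inverse IK iteration for a chain of N ball joints.
    N = len(T_local)
    T_world, T = [], np.eye(4)
    for Tl in T_local:                          # accumulate world transforms
        T = T @ Tl
        T_world.append(T.copy())
    s = (T_world[-1] @ effector_local)[:3]      # end-effector position in world coords
    cols = []
    for j in range(N):
        p = T_world[j][:3, 3]                   # joint position in world coords
        for k in range(3):                      # three revolute axes per spherical joint
            v = T_world[j][:3, k]               # k-th column of the world rotation
            cols.append(np.cross(v, s - p))     # Jacobian column: v x (s - p)
    J = np.column_stack(cols)                   # 3 x 3N Jacobian
    dq = step * (np.linalg.pinv(J) @ dx)        # pseudo-inverse step toward direction dx
    for j in range(N):                          # update the local transforms
        rx, ry, rz = dq[3 * j: 3 * j + 3]
        T_local[j] = T_local[j] @ _rot('x', rx) @ _rot('y', ry) @ _rot('z', rz)
    return T_local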
Let me suggest a simpler approach to Jacobians in the context of arbitrarily many DOFs: basically, the Jacobian tells you how far the end effector frame moves if you move each joint by a small amount. Let f(θ) be the forward kinematics, where θ = [θ1, ..., θn] are the joints. Then you obtain the Jacobian by differentiating the forward kinematics with respect to the joint variables:
J_ij = df_i/dθ_j
is your manipulator's Jacobian. Inverting it gives you the inverse kinematics with respect to velocities. It can still be useful on the position level too, if you want to know approximately how far each joint has to move to shift your end effector by some small amount Δx in any direction (on the position level this is effectively a linearization):
Δθ = J^-1 Δx
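A sketch of this view, building J by finite differences of an arbitrary forward-kinematics function f (f, theta and dx are placeholders here) and using a pseudo-inverse, which stands in for J^-1 when J is not square:

import numpy as np

def numeric_jacobian(f, theta, eps=1e-6):
    # J[i, j] = df_i / dtheta_j, approximated by forward differences
    f0 = np.asarray(f(theta))
    J = np.zeros((f0.size, theta.size))
    for j in range(theta.size):
        t = theta.copy()
        t[j] += eps
        J[:, j] = (np.asarray(f(t)) - f0) / eps
    return J

# dtheta = np.linalg.pinv(numeric_jacobian(f, theta)) @ dx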
Hope that this helps.
How do I generate a transformation matrix for rotating points/others by the angle between two lines/vectors/directions in CGAL?
2D is what I need. 3D is what I love.
According to the manual you have these tools to work with:
Aff_transformation_2<Kernel> t ( const Rotation, Direction_2<Kernel> d, Kernel::RT num, Kernel::RT den = RT(1))
approximates the rotation over the angle indicated by direction d, such that the differences between the sines and cosines of the rotation given by d and the approximating rotation are at most num/den each.
Precondition: num/den>0 and d != 0.
Aff_transformation_2<Kernel> t.operator* ( s) composes two affine transformations.
Aff_transformation_2<Kernel> t.inverse () gives the inverse transformation.
With them you should be able to compute the matrices corresponding to the two directions and use an identity along the lines of:
Mat(d1-d2) === Mat(d1)*Inv(Mat(d2))
to get what you want.
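Not CGAL code, but a quick NumPy check of that identity with plain 2x2 rotation matrices built from directions:

import numpy as np

def rot_from_direction(d):
    c, s = np.asarray(d, float) / np.hypot(*d)   # cos/sin of the direction's angle
    return np.array([[c, -s], [s, c]])

d1, d2 = (1.0, 2.0), (3.0, 1.0)
R = rot_from_direction(d1) @ np.linalg.inv(rot_from_direction(d2))   # Mat(d1) * Inv(Mat(d2))
print(R @ np.asarray(d2) / np.hypot(*d2))   # equals the unit vector of d1
print(np.asarray(d1) / np.hypot(*d1))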