I have a catmull-rom curve defined with a couple of control points as shown here:
I would like to animate an object moving along the curve, but be able to define the velocity of the object.
When iterating over the curve's points using the getPoint method, the object moves chordally (in the image, at u=0 we are at p1, at u=0.25 we are at p2, etc.). Using the getPointAt method, the object moves with uniform speed along the curve.
However, what I would like is to have greater control over the animation, so that I can specify that the movement from p1 to p2 should take 0.5, from p2 to p3 0.3, and from p3 to p4 0.2. Is this possible?
Thanks for the suggestions. The way I finally implemented this was to create a custom mapping between my time variable and the u variable for the three.js getPoint function.
I created a piecewise linear function using a JavaScript library called everpolate (a short sketch of the mapping follows below). This way I could map t to u such that:
At t = 0, u = 0, resulting in p1
At t = 0.5, u = 1/3, resulting in p2
At t = 0.8, u = 2/3, resulting in p3
At t = 1, u = 1, resulting in p4
T to U map picture
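A minimal sketch of that mapping (assuming everpolate's linear(valuesToEvaluate, knownX, knownY) call and a THREE.CatmullRomCurve3 named curve; the names are just for illustration):

// Map animation time t (0..1) to the curve parameter u through a piecewise
// linear function, then sample the curve with getPoint(u).
const tKnots = [0, 0.5, 0.8, 1];     // when we want to arrive at each control point
const uKnots = [0, 1 / 3, 2 / 3, 1]; // corresponding values of u

function positionAt(t) {
  // everpolate.linear returns an array of interpolated values
  const u = everpolate.linear([t], tKnots, uKnots)[0];
  return curve.getPoint(u); // point on the Catmull-Rom curve
}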
You can achieve this by using an animation library like tween.js. In this way, you can specify the start and end position of your object and the desired duration. It's also possible to customize the type of transition by using easing functions.
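For instance, a rough sketch of that idea (one tween per segment, chained; object and curve are placeholders, and the millisecond durations mirror the 0.5/0.3/0.2 split from the question):

// Animate a parameter u through the segment boundaries, one tween per segment,
// each with its own duration.
const state = { u: 0 };

function updateObject() {
  object.position.copy(curve.getPoint(state.u));
}

const seg1 = new TWEEN.Tween(state).to({ u: 1 / 3 }, 500).onUpdate(updateObject);
const seg2 = new TWEEN.Tween(state).to({ u: 2 / 3 }, 300).onUpdate(updateObject);
const seg3 = new TWEEN.Tween(state).to({ u: 1 }, 200).onUpdate(updateObject);

seg1.chain(seg2);
seg2.chain(seg3);
seg1.start();

// and call TWEEN.update() in the render loop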
You have multiple options. I will describe the theory and then one possible implementation.
Theory
You want to arc-length parametrize your curve, which means that an increment of 1 in the parameter results in a movement of distance 1 along the curve.
This parametrization will allow you to fully control the movement of your object at any speed you want, be it constant, linear, non-linear, piecewise...
Possible implementation
There are many numerical integration techniques that will allow you to arclength parametrize the curve.
A possible one is to precompute the values and put them in a table. Pick a small epsilon and, starting at the first parameter value x_0, evaluate the function at x_0, x_0 + epsilon, x_0 + 2*epsilon...
As you do this, take the linear distance between each pair of consecutive samples and add it to an accumulator, i.e. travelled_distance += length(sample[x], sample[x+1]).
Store each pair (parameter value, accumulated distance) in the table.
Now when you are at x and want to move y units along the curve, round x to the nearest tabulated x_n, scan forward for the first entry whose accumulated distance exceeds that of x_n by y, and return the parameter of that entry.
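A rough JavaScript sketch of that table (assuming a three.js-style curve whose getPoint takes a parameter in [0, 1] and returns vectors with distanceTo; names are illustrative):

// Build a lookup table of (parameter, accumulated arc length) pairs by sampling
// the curve at small parameter steps.
function buildArcLengthTable(curve, epsilon) {
  const table = [{ u: 0, dist: 0 }];
  let prev = curve.getPoint(0);
  let travelled = 0;
  for (let u = epsilon; u <= 1; u += epsilon) {
    const sample = curve.getPoint(u);
    travelled += sample.distanceTo(prev); // linear distance between consecutive samples
    table.push({ u: u, dist: travelled });
    prev = sample;
  }
  return table;
}

// Return the first tabulated parameter whose accumulated distance reaches the
// target distance (a binary search would be faster than this linear scan).
function parameterAtDistance(table, targetDist) {
  for (const entry of table) {
    if (entry.dist >= targetDist) return entry.u;
  }
  return 1;
}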
This algorithm is not the most efficient, but it is easy to understand and to code, so at least it can get you started.
If you need a more optimized version, look for arc length parametrization algorithms.
Related
I am working on some shaders, and I need to transform normals.
I read in a few tutorials that the way you transform normals is to multiply them by the transpose of the inverse of the modelview matrix. But I can't find an explanation of why that is so, and what the logic behind it is.
It flows from the definition of a normal.
Suppose you have the normal, N, and a vector, V, a tangent vector at the same position on the object as the normal. Then by definition N·V = 0.
Tangent vectors run in the same direction as the surface of an object. So if your surface is planar then the tangent is the difference between two identifiable points on the object. So if V = Q - R where Q and R are points on the surface then if you transform the object by B:
V' = BQ - BR
= B(Q - R)
= BV
The same logic applies for non-planar surfaces by considering limits.
In this case suppose you intend to transform the model by the matrix B. So B will be applied to the geometry. Then to figure out what to do to the normals you need to solve for the matrix, A so that:
(AN)·(BV) = 0
Turning that into a row versus column thing to eliminate the explicit dot product:
[transpose(AN)](BV) = 0
Pull the transpose outside, eliminate the brackets:
transpose(N)*transpose(A)*B*V = 0
So that's "the transpose of the normal" [product with] "the transpose of the known transformation matrix" [product with] "the transformation we're solving for" [product with] "the vector on the surface of the model" = 0
But we started by stating that transpose(N)*V = 0, since that's the same as saying that N·V = 0. So to satisfy our constraints we need the middle part of the expression — transpose(A)*B — to go away.
Hence we can conclude that:
transpose(A)*B = identity
=> transpose(A) = identity*inverse(B)
=> transpose(A) = inverse(B)
=> A = transpose(inverse(B))
My favorite proof is below, where N is the normal and V is a tangent vector. Since they are perpendicular, their dot product is zero. M is any 3x3 invertible transformation (M^-1 * M = I). N' and V' are the vectors transformed by M.
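Written out with that notation (M applied to tangents, the inverse transpose applied to normals): with V' = M*V and N' = transpose(inverse(M))*N,

transpose(N')*V' = transpose(transpose(inverse(M))*N) * (M*V)
                 = transpose(N) * inverse(M) * M * V
                 = transpose(N) * V
                 = 0

so N' stays perpendicular to V'.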
To get some intuition, consider the shear transformation below.
Note that this does not apply to tangent vectors.
Take a look at this tutorial:
https://paroj.github.io/gltut/Illumination/Tut09%20Normal%20Transformation.html
You can imagine that when the surface of a sphere stretches (so the sphere is scaled along one axis or something similar) the normals of that surface will all 'bend' towards each other. It turns out you need to invert the scale applied to the normals to achieve this. This is the same as transforming with the Inverse Transpose Matrix. The link above shows how to derive the inverse transpose matrix from this.
Also note that when the scale is uniform, you can simply pass the original matrix as normal matrix. Imagine the same sphere being scaled uniformly along all axes, the surface will not stretch or bend, nor will the normals.
If the model matrix is made of translation, rotation and scale, you don't need the inverse transpose to calculate the normal matrix. Simply divide the normal by the squared scale, multiply by the model matrix, and we are done. You can extend that to any matrix with perpendicular axes; just calculate the squared scale for each axis of the matrix you are using instead.
I wrote the details in my blog: https://lxjk.github.io/2017/10/01/Stop-Using-Normal-Matrix.html
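A small plain-JavaScript sketch of that shortcut (the matrix layout, the scale extraction and all names are illustrative; in a shader this is just a divide and a multiply):

// Normal transform without an inverse-transpose, for a model matrix built from
// translation * rotation * scale. upper3x3 is the rotation*scale part given as
// three column vectors; scale is the per-axis scale [sx, sy, sz].
function transformNormal(upper3x3, scale, n) {
  // divide the normal by the squared per-axis scale...
  const adjusted = [
    n[0] / (scale[0] * scale[0]),
    n[1] / (scale[1] * scale[1]),
    n[2] / (scale[2] * scale[2]),
  ];
  // ...then multiply by the model matrix's upper 3x3 (columns c0, c1, c2)
  const [c0, c1, c2] = upper3x3;
  const out = [
    c0[0] * adjusted[0] + c1[0] * adjusted[1] + c2[0] * adjusted[2],
    c0[1] * adjusted[0] + c1[1] * adjusted[1] + c2[1] * adjusted[2],
    c0[2] * adjusted[0] + c1[2] * adjusted[1] + c2[2] * adjusted[2],
  ];
  // renormalize, since scaling changes the length
  const len = Math.hypot(out[0], out[1], out[2]);
  return [out[0] / len, out[1] / len, out[2] / len];
}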
I don't understand why you don't just zero out the 4th element of the direction vector before multiplying by the model matrix. No inverse or transpose needed. Think of the direction vector as the difference between two points. Move the two points with the rest of the model - they are still in the same relative position to the model. Take the difference between the two points to get the new direction, and the 4th element cancels out to zero. A lot cheaper.
I need to offset a curve; the simplest way is just shifting the points perpendicularly. I can access each point to calculate the angle of each line along the given path; for now I use atan2. Then I take those two angles and average them. That returns the shortest angle, which is not what I need in this case.
How can I calculate the angle of each connection, given that I am not interested in the shortest angle but in the one that would create a parallel offset curve?
Assuming 2D case...
So do a cross product of the direction vectors of two neighboring lines; the sign of the z coordinate of the result will tell you whether the lines turn CW or CCW.
So if you have 3 consecutive control points on the polyline, p0, p1, p2, then:
d1 = p1-p0
d2 = p2-p1
if you use some 3D vector math then convert them to 3D by setting:
d1.z=0;
d2.z=0;
now compute 3D cross:
n = cross(d1,d2)
which returns a vector perpendicular to both input vectors, with magnitude equal to the area of the parallelogram constructed with d1, d2 as base vectors. Which of the two possible directions it points in is determined by the winding of p0, p1, p2, so inspecting the z of the result is enough.
The n.x, n.y components are not needed, so you can compute n.z directly without doing the full cross product:
n.z=(d1.x*d2.y)-(d1.y*d2.x)
if (n.z>0) case1
if (n.z<0) case2
Whether case1 is CW or CCW depends on your coordinate system properties (left/right handedness). This approach is very commonly used in CG for back-face culling of polygons...
If n.z is zero, it means that your vectors/lines are either parallel or at least one of them is zero.
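A tiny sketch of that test in plain JavaScript (points as {x, y} objects):

// Sign of the turn made at p1 by the polyline p0 -> p1 -> p2. Which sign means
// CW and which CCW depends on the handedness of your coordinate system.
function turnSign(p0, p1, p2) {
  const d1 = { x: p1.x - p0.x, y: p1.y - p0.y };
  const d2 = { x: p2.x - p1.x, y: p2.y - p1.y };
  return d1.x * d2.y - d1.y * d2.x; // z component of cross(d1, d2)
}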
I think these might interest you:
draw outline for some connected lines
How can I create an internal spiral for a polygon?
Also, in 2D you do not need atan2 to get a perpendicular vector. You can do this instead:
u = (x,y)
v = (-y,x)
w = (x,-y)
so u is any 2D vector and v, w are the 2 possible perpendicular vectors to u in 2D. They are the result of:
cross((x,y,0),(0,0,1))
cross((0,0,1),(x,y,0))
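And a sketch of using the perpendicular for the offset itself (this just shifts each point along the perpendicular of the averaged direction of its two adjacent segments; it does not compensate for the miter lengthening at sharp corners):

// Offset every point of a polyline by dist along the perpendicular (-y, x) of the
// averaged direction of its neighbouring segments. Use the other perpendicular
// (y, -x) to offset to the other side.
function offsetPolyline(points, dist) {
  return points.map((p, i) => {
    const prev = points[Math.max(i - 1, 0)];
    const next = points[Math.min(i + 1, points.length - 1)];
    let dx = next.x - prev.x;
    let dy = next.y - prev.y;
    const len = Math.hypot(dx, dy) || 1; // guard against zero-length segments
    dx /= len;
    dy /= len;
    return { x: p.x - dy * dist, y: p.y + dx * dist };
  });
}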
Here is an example image of what I want to do:
I want to calculate Path 1 from Path 2.
The screenshot was made in Inkscape, where I first created Path 1, then added p3 to the original path. This didn't change the original path at all, because the new point is actually unneeded. So, how can I detect this point (p3) using the SVG path representation of Path 2, and calculate Path 1 from Path 2?
Basically, I am searching for the math formulas which can help me convert (and also check whether it is possible):
C 200,300 300,250 400,250 C 500,250 600,300 600,400
to
C 200,200 600,200 600,400
You're solving a constraint problem. Taking your first compound curve, and using four explicit coordinates for each subcurve, we have:
points1 = point[8];
points2 = point[4];
with the following correspondences:
points1[0] == points2[0];
points1[7] == points2[3];
direction(points1[0],points1[1]) == direction(points2[0], points2[1]);
direction(points1[6],points1[7]) == direction(points2[2], points2[3]);
we also have a constraint on the relative placement for points2[1] and points2[2] due to the tangent of the center point in your compound curve:
direction(points1[2],points1[4]) == direction(points2[1],points2[2]);
and lastly, we have a general constraint on where on- and off-curve points can be for cubic curves if we want the curve to pass through a point, which is described over at http://pomax.github.io/bezierinfo/#moulding
Taking the "abc" ratio from that section, we can check whether your compound curve parameters fit a cubic curve: if we construct a new cubic curve with points
A = points1[0];
B = points1[3];
C = points1[7];
with B at t=0.5 (in this case), then we can verify whether the resulting curve fits the constraints that must hold for this to be a legal simplification.
The main problem here is that we, in general, don't know whether the "in between start and end" point should fall on t=0.5, or whether it's a different t value. The easiest solution is to see how far that point is along the total curve (using arc length: distance = arclength(c1) / (arclength(c1) + arclength(c2)) will tell us) and use that as an initial guess for t, iterating outward on either side for a few values.
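A quick sketch of computing that initial guess by sampling (points as [x, y] pairs; the sample count is arbitrary):

// Evaluate a cubic Bezier with control points p0..p3 at parameter t.
function cubicAt(p0, p1, p2, p3, t) {
  const s = 1 - t;
  const b = [s * s * s, 3 * s * s * t, 3 * s * t * t, t * t * t];
  return [
    b[0] * p0[0] + b[1] * p1[0] + b[2] * p2[0] + b[3] * p3[0],
    b[0] * p0[1] + b[1] * p1[1] + b[2] * p2[1] + b[3] * p3[1],
  ];
}

// Approximate arc length by summing distances between n sampled points.
function approxArcLength(p0, p1, p2, p3, n = 64) {
  let len = 0;
  let prev = p0;
  for (let i = 1; i <= n; i++) {
    const cur = cubicAt(p0, p1, p2, p3, i / n);
    len += Math.hypot(cur[0] - prev[0], cur[1] - prev[1]);
    prev = cur;
  }
  return len;
}

// initial guess: tGuess = len(c1) / (len(c1) + len(c2))
// where c1 and c2 are the two subcurves of the compound curve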
The second option is to solve a generic cubic equation for the tangent vector at your "in between" point. We form a cubic curve with points
points3 = [ points1[0], points1[1], points1[6], points1[7] ];
and then solve its derivative equations to find one or more t values that have the same tangent direction (but not magnitude!) as our in-between point. Once we have those (and we might have more than 2), we evaluate whether we can create a curve through our three points of interest with the middle point set to each of those found t values. Either one or zero of the found t values will yield a legal curve. If we have one: perfect, we found a simplification. If we find none, then the compound curve cannot be simplified into a single cubic curve.
I have an array of points (x0,y0)... (xn,yn) monotonic in x and wish to draw the "best" curve through these using Bezier curves. This curve should not be too "jaggy" (e.g. similar to joining the dots) and not too sinuous (and definitely not "go backwards"). I have created a prototype but wonder whether there is an objectively "best solution".
I need to find control points for all segments (x(i),y(i)) to (x(i+1),y(i+1)). My current approach (except for the endpoints) for a segment x(i), x(i+1) is:
find the vector x(i-1)...x(i+1) , normalize, and scale it by factor * len(i,i+1) to give the vector for the leading control point
find the vector x(i+2)...x(i) , normalize, and scale it by factor * len(i,i+1) to give the vector for the trailing control point.
I have tried factor = 0.1 (too jaggy), 0.33 (too curvy) and 0.20 - about right. But is there a better approach which (say) makes the 2nd and 3rd derivatives as smooth as possible? (I assume such an algorithm is implemented in graphics packages.)
I can post pseudo/code if requested. Here are the three images (0.1/0.2/0.33). The control points are shown by straight lines: black (trailing) and red (leading)
Here's the current code. It's aimed at plotting Y against X (monotonic X) without closing the path. I have built my own library for creating SVG (preferred output); this code creates triples of x,y in coordArray for each curve segment (control1, control2, end). The start point is assumed from the last operation (Move or Curve). It's Java but should be easy to interpret (CurvePrimitive maps to a cubic; "d" is the String representation of the complete path in SVG).
List<SVGPathPrimitive> primitiveList = new ArrayList<SVGPathPrimitive>();
primitiveList.add(new MovePrimitive(real2Array.get(0)));
for (int i = 0; i < real2Array.size() - 1; i++) {
    // create path 12
    Real2 p0 = (i == 0) ? null : real2Array.get(i - 1);
    Real2 p1 = real2Array.get(i);
    Real2 p2 = real2Array.get(i + 1);
    Real2 p3 = (i == real2Array.size() - 2) ? null : real2Array.get(i + 2);
    Real2Array coordArray = plotSegment(factor, p0, p1, p2, p3);
    SVGPathPrimitive primitive = new CurvePrimitive(coordArray);
    primitiveList.add(primitive);
}
String d = SVGPath.constructDString(primitiveList);
SVGPath path1 = new SVGPath(d);
svg.appendChild(path1);
/**
 *
 * @param factor to scale control points by
 * @param p0 previous point (null at start)
 * @param p1 start of segment
 * @param p2 end of segment
 * @param p3 following point (null at end)
 * @return coordArray of (control1, control2, end) for the segment
 */
private Real2Array plotSegment(double factor, Real2 p0, Real2 p1, Real2 p2, Real2 p3) {
    // create p1-p2 curve
    double len12 = p1.getDistance(p2) * factor;
    Vector2 vStart = (p0 == null) ? new Vector2(p2.subtract(p1)) : new Vector2(p2.subtract(p0));
    vStart = new Vector2(vStart.getUnitVector().multiplyBy(len12));
    Vector2 vEnd = (p3 == null) ? new Vector2(p2.subtract(p1)) : new Vector2(p3.subtract(p1));
    vEnd = new Vector2(vEnd.getUnitVector().multiplyBy(len12));
    Real2Array coordArray = new Real2Array();
    Real2 controlStart = p1.plus(vStart);
    coordArray.add(controlStart);
    Real2 controlEnd = p2.subtract(vEnd);
    coordArray.add(controlEnd);
    coordArray.add(p2);
    // plot controls
    SVGLine line12 = new SVGLine(p1, controlStart);
    line12.setStroke("red");
    svg.appendChild(line12);
    SVGLine line21 = new SVGLine(p2, controlEnd);
    svg.appendChild(line21);
    return coordArray;
}
A Bezier curve requires the data points, along with the slope and curvature at each point. In a graphics program, the slope is set by the slope of the control-line, and the curvature is visualized by the length.
When you don't have such control-lines input by the user, you need to estimate the gradient and curvature at each point. The wikipedia page http://en.wikipedia.org/wiki/Cubic_Hermite_spline, and in particular the 'interpolating a data set' section has a formula that takes these values directly.
Typically, estimating these values from points is done using a finite difference - so you use the values of the points on either side to help estimate. The only choice here is how to deal with the end points where there is only one adjacent point: you can set the curvature to zero, or if the curve is periodic you can 'wrap around' and use the value of the last point.
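For concreteness, the simplest finite-difference choice, treating each p_i as the point (x_i, y_i) and assuming roughly even spacing (the Wikipedia page gives the weighted form for uneven spacing), is:

m_i = (p_(i+1) - p_(i-1)) / 2                      for interior points
m_0 = p_1 - p_0,   m_(n-1) = p_(n-1) - p_(n-2)     at the ends (one common option)

and each Hermite segment (p_i, m_i, p_(i+1), m_(i+1)) can be written as a cubic Bezier segment with control points

c_1 = p_i + m_i / 3
c_2 = p_(i+1) - m_(i+1) / 3

which drops straight into an SVG-style cubic path.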
The wikipedia page I referenced also has other schemes, but most others introduce some other 'free parameter' that you will need to find a way of setting, so in the absence of more information to help you decide how to set other parameters, I'd go for the simple scheme and see if you like the results.
Let me know if the wikipedia article is not clear enough, and I'll knock up some code.
One other point to be aware of: what 'sort' of Bezier interpolation are you after? Most graphics programs do cubic Bezier in 2 dimensions (i.e. you can draw a circle-like curve), but your sample images look like 1D function approximation (as in, for every x there is only one y value). The graphics-program type of curve is not really mentioned on the page I referenced. The maths involved in converting the estimated slope and curvature into a control vector of the form illustrated on http://en.wikipedia.org/wiki/B%C3%A9zier_curve (Cubic Bezier) would take some working out, but the idea is similar.
Below is a picture and algorithm for a possible scheme, assuming your only input is the three points P1, P2, P3
Construct a line (C1,P1,C2) such that the angles (P3,P1,C1) and (P2,P1,C2) are equal. In a similar fashion construct the other dark-grey lines. The intersections of these dark-grey lines (marked C1, C2 and C3) become the control points, in the same sense as the images on the Bezier curve Wikipedia page. So each red curve, such as (P3,P1), is a quadratic Bezier curve defined by the points (P3, C1, P1). The construction of the red curve is the same as given on the Wikipedia page.
However, I notice that the control-vector on the Bezier Curve wikipedia page doesn't seem to match the sort of control vector you are using, so you might have to figure out how to equate the two approaches.
I tried this with quadratic splines instead of cubic ones which simplifies the selection of control points (you just choose the gradient at each point to be a weighted average of the mean gradients of the neighbouring intervals, and then draw tangents to the curve at the data points and stick the control points where those tangents intersect), but I couldn't find a sensible policy for setting the gradients of the end points. So I opted for Lagrange fitting instead:
function lagrange(points) { //points is [ [x1,y1], [x2,y2], ... ]
// See: http://www.codecogs.com/library/maths/approximation/interpolation/lagrange.php
var j,n = points.length;
var p = [];
for (j=0;j<n;j++) {
p[j] = function (x,j) { //have to pass j cos JS is lame at currying
var k, res = 1;
for (k=0;k<n;k++)
res*=( k==j ? points[j][1] : ((x-points[k][0])/(points[j][0]-points[k][0])) );
return res;
}
}
return function(x) {
var i, res = 0;
for (i=0;i<n;i++)
res += p[i](x,i);
return res;
}
}
With that, I just make lots of samples and join them with straight lines.
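For instance, a sketch of that sampling step (sample count and data are arbitrary):

const points = [[0, 0], [1, 2], [2, 1], [3, 3]]; // example data
const f = lagrange(points);
const xMin = points[0][0];
const xMax = points[points.length - 1][0];
const samples = [];
for (let i = 0; i <= 200; i++) {
  const x = xMin + (xMax - xMin) * (i / 200);
  samples.push([x, f(x)]);
}
// then join consecutive samples with straight lines (SVG polyline, canvas lineTo, ...)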
This is still wrong if your data (like mine) consists of real world measurements. These are subject to random errors and if you use a technique that forces the curve to hit them all precisely, then you can get silly valleys and hills between the points. In cases like these, you should ask yourself what order of polynomial the data should fit and ... well ... that's what I'm about to go figure out.
I am looking for a tool similar to graphviz that can render graphs, but that will allow me to constrain just the x coordinate of each node. Then, the tool will automatically choose y coordinates to make the graph look neat.
Basically, I want to make a timeline.
Language / platform / rendering medium are not very important.
If you want a neat-looking graph, a force-directed algorithm is going to be your best bet. One of the best ones is SFDP (developed by AT&T, included in graphviz), though I can't seem to find pseudocode or an easy implementation. I don't think there are any algorithms this specialized. Thankfully, it's easy to code your own. I'll present some pseudocode mostly lifted from Wikipedia, but with suitably one-dimensional modifications. I'll assume you have n vertices and that the vector of x-positions is x, subscripted as x.i.
set all vertex velocities to (0,0)
set all vertex positions to (x.i, random)
while (KE > epsilon)
    KE = 0
    for each vertex v
        force = (0,0)
        for each vertex u != v
            force = force + (0, coulomb(u, v).y)
            if u is incident to v
                force = force + (0, hooke(u, v).y)
        v.velocity = (v.velocity + timestep * force) * damping
        v.position = v.position + timestep * v.velocity
        KE = KE + |v.velocity| ^ 2
Here the .y denotes taking the y-component of the force. This ensures that the x-components of the positions of the vertices never change from what you set them to be. The epsilon parameter is to be set by you, and should be small compared to what you expect KE (the kinetic energy) to be. Also, |v| denotes the magnitude of the vector v (all computations above are on 2-vectors, except KE). Note that I set the mass of all the nodes to be 1, but you can change that if you want.
The Hooke and Coulomb functions calculate the respective forces between nodes; the first is linear in the distance between the vertices, the second is quadratic, so there is a guaranteed equilibrium. These functions look something like
def hooke(u, v)
    return -k * |u.position - v.position|

def coulomb(u, v)
    return C * |u.position - v.position|
where again most computations are in vector form. C and k are real-valued constants; experiment to get the graph you want. This isn't usually necessary, because in two dimensions the scaling factors pretty much expand or contract the whole graph, but here the x-distances are fixed, so to get a good-looking graph you will have to tweak the values a bit.
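A compact JavaScript sketch of the loop above, with x fixed and only y simulated. The force laws here are a conventional linear spring plus an inverse-square repulsion, which is my choice for the sketch rather than exactly the hooke/coulomb definitions above, and all constants are illustrative:

// nodes: [{ x: fixedX, y: initialY, vy: 0 }, ...], edges: [[i, j], ...]
function layoutY(nodes, edges, { k = 0.05, C = 1000, timestep = 0.1, damping = 0.9, epsilon = 0.001 } = {}) {
  const adjacent = nodes.map(() => new Set());
  edges.forEach(([i, j]) => { adjacent[i].add(j); adjacent[j].add(i); });

  let ke = Infinity;
  while (ke > epsilon) {
    ke = 0;
    nodes.forEach((v, vi) => {
      let fy = 0;
      nodes.forEach((u, ui) => {
        if (ui === vi) return;
        const dx = v.x - u.x, dy = v.y - u.y;
        const d2 = dx * dx + dy * dy || 0.01;   // avoid division by zero
        fy += C * dy / (d2 * Math.sqrt(d2));    // repulsion, y-component only
        if (adjacent[vi].has(ui)) fy += -k * dy; // spring attraction along y
      });
      v.vy = (v.vy + timestep * fy) * damping;
      v.y += timestep * v.vy;                   // x is never touched
      ke += v.vy * v.vy;
    });
  }
  return nodes;
}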