Logic for "slight right turn" given three points - geometry

Given three co-planar (2D) points (X1, Y1), (X2, Y2), and (X3, Y3), which represent, respectively, "1 = where I was, 2 = where I am, and 3 = where I am going," I need a simple algorithm that will tell me, e.g.
Veer to the right
Make a slight left turn
Turn to the left
In other words, (a) is the turn to the left or to the right; and (b) how sharp is the turn (letting me be arbitrary about this).
For the first part, I've already learned how to use the cross product to determine whether the turn is to the left or to the right, based on whether the path is counterclockwise (see Wikipedia: Graham scan, and question #26315401 here).
And, I'm sure that ATAN2() will be at the core of determining how sharp the turn is.
But I can't .. quite .. wrap my head around the proper math which will work in all orientations, especially when the angle crosses the zero-line. (A bearing of 350 degrees to 10 degrees is a gap of 20 degrees, not 340, etc.)
Okay, I'm tired. [... of bangin' my head against the wall this mornin'.] "Every time I think I've got it, I'm not sure." So, okay, it's time to ask ... :-)

When you calculate direction-change angles with Atan2, don't bother with absolute angles. You don't have to calculate two bearings and subtract them - Atan2 can give you the relative angle between the first and second vectors directly, in the range -Pi..Pi (-180..180; the exact range might depend on the programming language).
x12 = x2-x1
y12 = y2-y1
x23 = x3-x2
y23 = y3-y2
DirChange = Atan2(x12*y23-x23*y12, x12*x23+y12*y23)
Some explanations: we can calculate sine of vector-vector angle through cross product and vector norms (|A| = Sqrt(A.x*A.x + A.y*A.y)):
Sin(A_B) = (A x B) / (|A|*|B|)
and cosine of vector-vector angle through dot (scalar) product and vector norms:
Cos(A_B) = (A * B) / (|A|*|B|)
Atan2 effectively calculates the angle from the sine and cosine of this angle, with the common denominator (the product of the norms) cancelling out:
A_B = Atan2(Sin(A_B), Cos(A_B))
Example in Delphi:
var
  P1, P2, P3: TPoint;
  x12, y12, x23, y23: Integer;
  DirChange: Double;
begin
  P1 := Point(0, 0);
  P2 := Point(1, 0);
  P3 := Point(2, 1);
  x12 := P2.X - P1.X;
  y12 := P2.Y - P1.Y;
  x23 := P3.X - P2.X;
  y23 := P3.Y - P2.Y;
  DirChange := Math.ArcTan2(x12 * y23 - x23 * y12, x12 * x23 + y12 * y23);
  Memo1.Lines.Add(Format('%f radians %f degrees',
    [DirChange, RadToDeg(DirChange)]));
end;
Output:
0.79 radians 45.00 degrees (left turn)
for your example data set (1,1), (3,2), and (6,3)
-0.14 radians -8.13 degrees (right turn)
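The same computation translates directly into other languages; here is a minimal Python sketch (my own translation, not part of the original answer):

```python
import math

def direction_change(p1, p2, p3):
    """Signed turn angle in radians: positive = left turn, negative = right
    (in standard y-up coordinates; the sign flips if y grows downward)."""
    x12, y12 = p2[0] - p1[0], p2[1] - p1[1]
    x23, y23 = p3[0] - p2[0], p3[1] - p2[1]
    return math.atan2(x12 * y23 - x23 * y12, x12 * x23 + y12 * y23)
```

direction_change((0, 0), (1, 0), (2, 1)) gives about 0.79 rad (45 degrees), and direction_change((1, 1), (3, 2), (6, 3)) gives about -0.14 rad (-8.13 degrees), matching the output above.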

EDITED - THE FOLLOWING IS WRONG. My ORIGINAL response was as follows ...
I'm not coming up with the expected answers when I try to use your response.
Let's say the points are: (1,1), (3,2), and (6,3). A gentle right turn.
Using a spreadsheet, I come up with: X12=2, X23=3, Y12=1, Y23=1, and the ATAN2 result (in degrees) is 101.3. A very sharp turn of more than 90 degrees. The spreadsheet formula (listing X1,Y1,X2,Y2,X3,Y3,X12,X23,Y12,Y23 and answer) on row 2, is:
=DEGREES(ATAN2(G2*J2-H2*I2; G2*I2+H2*J2))
(The spreadsheet, OpenOffice, lists "X" as the first parameter to ATAN2.)
Are you certain that the typo is on my end?
AND, AS A MATTER OF FACT (SO TO SPEAK), "YES, IT WAS!"
(Heh, I actually had said it myself. It just didn't occur to me to, like, duh, swap them already.)
My spreadsheet's version of the ATAN2 function specifies the X parameter first. Most programming languages (Delphi, Perl, PHP ...) specify Y first, and that's how the (correct!) answer was given.
When I edited the formula, reversing the parameters to meet the spreadsheet's definition, the problem went away, and I was able to reproduce the values from the (edited) reply. Here's the corrected formula, with the parameters reversed:
=DEGREES(ATAN2(G2*I2+H2*J2; G2*J2-H2*I2))
^^== X ==^^ ^^== Y ==^^
Again, this change in formula was needed because this spreadsheet's implementation of ATAN2 is backwards from that of most programming language implementations. The answer that was originally given, which lists Y first, is correct for most programming languages.

Well, I seem to still be having a bit of a problem . . .
If the points are very far away from each other in nearly a straight line, I am coming up with "large angles." Example:
P1: (0.60644,0.30087) ..
P2: (0.46093,0.30378) ..
P3: (0.19335,0.30087)
The X-coordinate decreases steadily while the Y-coordinate remains almost the same.
x12=-0.145507 .. y12=-0.00290698
x23=-0.267578125 .. y23=0.002906976
(I inverted the Y differences because the coordinates are in quadrant-IV, where Y increases downward.)
x=-0.000354855 .. y=-0.00120083
ans= -106.462 (degrees)
Since the points are very nearly co-linear, I expected the answer to be very small. As you can see, it's more than 106 degrees.
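For what it's worth, feeding the difference values quoted above straight into the formula from the answer produces only a small angle; a quick Python check (my own, not from the original thread):

```python
import math

# the difference values quoted above (Y already inverted for screen coordinates)
x12, y12 = -0.145507, -0.00290698
x23, y23 = -0.267578125, 0.002906976

ans = math.degrees(math.atan2(x12 * y23 - x23 * y12, x12 * x23 + y12 * y23))
# ans comes out close to -1.8 degrees, i.e. very nearly straight
```

The dot-product term here is about +0.0389, not the -0.000355 quoted above, so the -106 degree figure appears to come from a cell mix-up in the spreadsheet rather than from the formula itself.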

Related

Does computing screen space for reflections/refractions require second partial derivatives?

I have written a basic raytracer which keeps track of screen space. Each fragment has an associated pixel radius. When a ray dir, cast from an eye point, hits the geometry at some distance, I compute the normal vector N for the hit, and combine it with four more rays. In pseudocode:
def distance := shortestDistanceToSurface(sdf, eye, dir, pixelRadius)
def p := eye + dir * distance
def N := estimateNormal(sdf, p)
def glance := distance * glsl.dot(dir, N)
def dx := (dirPX / glsl.dot(dirPX, N) - dirNX / glsl.dot(dirNX, N)) * glance
def dy := (dirPY / glsl.dot(dirPY, N) - dirNY / glsl.dot(dirNY, N)) * glance
Here, dirPX, dirNX, dirPY, and dirNY are rays which are offset from dir by the pixel radius in screen space in each of the four directions, but still aiming at the same reference point. This gives dx and dy, which are partial derivatives across the pixel indicating the rate at which the hit moves along the surface of the geometry as the rays move through screen space.
Because I track screen space, I can use pre-filtered samplers, as discussed by Inigo Quilez. They look great. However, now I want to add reflection (and refraction), which means that I need to recurse, and I'm not sure how to compute these rays and track screen space.
The essential problem is that, in order to figure out what color of light is being reflected at a certain place on the geometry, I need to not just take a point sample, but examine the whole screen space which is reflected. I can use the partial derivatives to give me four new points on the geometry which approximate an ellipse which is the projection of the original pixel from the screen:
def px := dx * pixelRadius
def py := dy * pixelRadius
def pPX := p + px
def pNX := p - px
def pPY := p + py
def pNY := p - py
And I can compute an approximate pixel radius by smushing the ellipse into a circle. I know that this ruins certain kinds of desirable anisotropic blur, but what's a raytracer without cheating?
def nextRadius := (glsl.length(dx) * glsl.length(dy)).squareRoot() * pixelRadius
However, I don't know where to reflect those points into the geometry; I don't know where to focus their rays. If I have to make a choice of focus, it will be arbitrary, and depending on where the geometry reflects its own image, this could arbitrarily blur or moiré the reflected images.
Do I need to take the second partial derivatives? I can approximate them just like the first derivatives, and then I can use them to adjust the normal N with slight changes, just like with the hit p. The normals then guide the focus of the ellipse, and map it to an approximate conic section. I'm worried about three things:
I worry about the cost of doing a couple extra vector additions and multiplications, which is probably negligible;
And also about whether the loss in precision, which is already really bad when doing these cheap derivatives, is going to be too lossy over multiple reflections;
And finally, how I'm supposed to handle situations where screen space explodes; when I have a mirrored sphere, how am I supposed to sample over big wedges of reflected space and e.g. average a checkerboard pattern into a grey?
And while it's not a worry, I simply don't know how to take four vectors and quickly fit a convincing cone to them, but this might be a mere problem of spending some time doing algebra on a whiteboard.
Edit: In John Amanatides' 1984 paper Ray Tracing with Cones, the curvature information is indeed computed, and used to fit an estimated cone onto the reflected ray. In Homan Igehy's 1999 paper Tracing Ray Differentials, only the first-order derivatives are used, and second derivatives are explicitly ignored.
Are there other alternatives, maybe? I've experimented with discarding the pixel radius after one reflection and just taking point samples, and they look horrible, with lots of aliasing and noise. Perhaps there is a field-of-view or depth-of-field approximation which can be computed on a per-material basis. As usual, multisampling can help, but I want an analytic solution so I don't waste so much CPU needlessly.
(sdf is a signed distance function and I am doing sphere tracing; the same routine both computes distance and also normals. glsl is the GLSL standard library.)
I won't accept my own answer, but I'll explain what I've done so that I can put this question down for now. I ended up going with an approach similar to Amanatides, computing a cone around each ray.
Each time I compute a normal vector, I also compute the mean curvature. Normal vectors are computed using a well-known trick. Let p be a zero of the SDF as in my question, let epsilon be a reasonable offset to use for numerical differentiation, and then let vp and vn be vectors whose components are evaluations of the SDF near p but perturbed at each component by epsilon. In pseudocode:
def justX := V(1.0, 0.0, 0.0)
def justY := V(0.0, 1.0, 0.0)
def justZ := V(0.0, 0.0, 1.0)
def vp := V(sdf(p + justX * epsilon), sdf(p + justY * epsilon), sdf(p + justZ * epsilon))
def vn := V(sdf(p - justX * epsilon), sdf(p - justY * epsilon), sdf(p - justZ * epsilon))
Now, by clever abuse of finite difference coefficients, we can compute both first and second derivatives at the same time. The third coefficient, sdf(p), is already zero by assumption. This gives our normal vector N, which is the gradient, and our mean curvature H, which is the Laplacian.
def N := glsl.normalize(vp - vn)
def H := sum(vp + vn) / (epsilon * epsilon)
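As a sanity check, the central-difference trick can be sketched in Python against a unit-sphere SDF (the sphere and the epsilon value are my own test choices, not from the original):

```python
import math

def sdf(p):
    """Signed distance to a unit sphere centered at the origin."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def normal_and_mean_curvature(p, eps=1e-3):
    """Normal (gradient) and H (Laplacian) via central differences at a
    surface point p, where sdf(p) = 0 by assumption."""
    offsets = [(eps, 0.0, 0.0), (0.0, eps, 0.0), (0.0, 0.0, eps)]
    vp = [sdf((p[0] + dx, p[1] + dy, p[2] + dz)) for dx, dy, dz in offsets]
    vn = [sdf((p[0] - dx, p[1] - dy, p[2] - dz)) for dx, dy, dz in offsets]
    grad = [a - b for a, b in zip(vp, vn)]               # unnormalized gradient
    norm = math.sqrt(sum(g * g for g in grad))
    N = [g / norm for g in grad]
    H = sum(a + b for a, b in zip(vp, vn)) / (eps * eps)  # Laplacian; sdf(p) = 0
    return N, H
```

At p = (1, 0, 0) this gives N ≈ (1, 0, 0) and H ≈ 2, matching the Laplacian of the unit-sphere SDF at its surface.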
We can estimate Gaussian curvature from mean curvature. While mean curvature tells our cone how much to expand or contract, Gaussian curvature is always non-negative and measures how much extra area (spherical excess) is added to the cone's area of intersection. Gaussian curvature is given with K instead of H, and after substituting:
def K := H * H
Now we're ready to adjust the computation of fragment width. Let's assume that, in addition to the pixelRadius in screen space and distance from the camera to the geometry, we also have dradius, the rate of change in pixel radius over distance. We can take a dot product to compute a glance factor just like before, and the trigonometry is similar:
def glance := glsl.dot(dir, N).abs().reciprocal()
def fradius := (pixelRadius + dradius * distance) * glance * (1.0 + K)
def fwidth := fradius * 2.0
And now we have fwidth just like in GLSL. Finally, when it comes time to reflect, we'll want to adjust the change in radius by integrating our second-derivative curvature into our first derivative:
def dradiusNew := dradius + H * fradius
The results are satisfactory. The fragments might be a little too big; it's hard to tell if something is overly blurry or just properly antialiased. I wonder whether Amanatides used a different set of equations to handle curvature; I spent far too long at a whiteboard deriving what ended up being pleasantly simple operations.
This image was not supersampled; each pixel had one fragment with one ray and one cone.

Making a Bezier curve based on 3 points the line will intersect

A quadratic bezier curve needs these three points, but I do not have the coordinates of the control point P1. Instead, I have the points shown in the second image, where:
The middle point (P1) is the highest point of the parabola.
The parabola is equal in both sides
How do I get the 3 points from image 1 using the points from image 2?
Apply the knowledge explained in https://pomax.github.io/bezierinfo/#abc and you should be good to go. You'll need to decide which time value that "somewhere on the curve" point has, and then you can use the formula for the projection ratio to find the actual control point coordinate.
However, at t=0.5 the ratio is just "1:1" so things get even easier because your point projects onto the midpoint of the line that connects that first and last point, and the real control point is the same distance "above" your point as the point is above that line:
So you just compute the midpoint:
m =
x: (p1.x + p2.x) / 2
y: (p1.y + p2.y) / 2
and the x and y distance to the midpoint from the "p2 you have" point:
d =
x: (p2.x - m.x)
y: (p2.y - m.y)
and then the real p2 is simply that distance away from the "p2 you have":
real2 =
x: p2.x + d.x
y: p2.y + d.y
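For t = 0.5 the whole construction collapses into a couple of lines; a small Python sketch (the function and variable names are mine):

```python
def control_from_peak(start, peak, end):
    """Control point of a quadratic Bezier whose t = 0.5 point is `peak`."""
    mx, my = (start[0] + end[0]) / 2, (start[1] + end[1]) / 2
    # the control point is as far "above" the peak as the peak is above m
    return (2 * peak[0] - mx, 2 * peak[1] - my)
```

Evaluating the quadratic Bezier at t = 0.5, i.e. 0.25*start + 0.5*control + 0.25*end, then lands exactly on the given peak.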
However, note that this only works for t=0.5: both that projected point on the start--end line and the distance ratios will be (possibly very) different for any other t value and you should use the formula that the Bezier primer talks about.
Also note that what you call "the peak" is in no way guaranteed to be at t=0.5... for example, have a look at this curve:
The point that is marked as belonging to t=0.5 is certainly not where you would say the "peak" of the curve is (in fact, that's closer to t=0.56), so if all you have is three points, you technically always have incomplete information and you're going to have to invent some rule for deciding how to fill in the missing bits. In this case "what t value do I consider my somewhere-on-the-curve point to be?".

Finding the original position of a point on an image after rotation

I have the x, y co-ordinates of a point on a rotated image by certain angle. I want to find the co-ordinates of the same point in the original, non-rotated image.
Please check the first image which is simpler:
Let's say the first point is A, the second is B and the last is C. I assume you have the rotation matrix R (see Wikipedia: Rotation matrix if not) and the translation vector t, so that B = R*A and C = B+t.
It follows that C = R*A + t, and so A = R^-1*(C-t).
Edit: If you only need the non rotated new point, simply do D = R^-1*C.
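A small Python sketch of the same round trip (the angle and points are made-up illustration values):

```python
import math

def rotate(p, ang):
    """Rotate point p about the origin by ang radians."""
    c, s = math.cos(ang), math.sin(ang)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

# forward transform: B = R*A, then C = B + t
A = (3.0, 1.0)
t = (5.0, -2.0)
ang = math.radians(30)
B = rotate(A, ang)
C = (B[0] + t[0], B[1] + t[1])

# inverse: A = R^-1 * (C - t); for a pure rotation, R^-1 is rotation by -ang
recovered = rotate((C[0] - t[0], C[1] - t[1]), -ang)
```

recovered equals the original A up to floating-point error.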
First thing to do is defining the reference system (how "where the points lie with respect to each image" will be translated into numbers). I guess that you want to rely on a basic 2D reference system, given by a single point (a couple of X/Y values). For example: left/lower corner (min. X and min. Y).
The algorithm is pretty straightforward:
Getting the new defining reference point associated with the rotated shape (min. X and min. Y), that is, determining RefX_new and RefY_new.
Applying a basic conversion between reference systems:
X_old = X_new + (RefX_new - RefX_old)
Y_old = Y_new + (RefY_new - RefY_old)
----------------- UPDATE TO RELATE FORMULAE TO NEW CAR PIC
RefX_old = min X value of the CarFrame before being rotated.
RefY_old = max Y value of the CarFrame before being rotated.
RefX_new = min X value of the CarFrame after being rotated.
RefY_new = max Y value of the CarFrame after being rotated.
X_new = X of the point with respect to the CarFrame after being rotated. For example: if RefX_new = 5 with respect to absolute frame (0,0) and X of the point with respect to this absolute frame is 8, X_new would be 3.
Y_new = Y of the point with respect to CarFrame after being rotated (equivalently to point above)
X_old_C = X_new_C(respect to CarFrame) + (RefX_new(CarFrame_C) - RefX_old(CarFrame_A))
Y_old_C = Y_new_C(respect to CarFrame) + (RefY_new(CarFrame_C) - RefY_old(CarFrame_A))
These coordinates are respect to the CarFrame and thus you might have to update them with respect to the absolute frame (0,0, I guess), as explained above, that is:
X_old_D_absolute_frame = X_old_C + (RefX_new(CarFrame_C) + RefX_global(i.e., 0))
Y_old_D_absolute_frame = Y_old_C + (RefY_new(CarFrame_C) + RefY_global(i.e., 0))
(Although you should do that once the CarFrame is in its "definitive position" with respect to the global frame, that is, on picture D (the point has the same coordinates with respect to the CarFrame in both picture C and D, but different ones with respect to the global frame).)
It might seem a bit complex put in this way; but it is really simple. You have just to think carefully about one case and create the algorithm performing all the actions. The idea is extremely simple: if I am on 8 inside something which starts in 5; I am on 3 with respect to the container.
------------ UPDATE IN THE METHODOLOGY
As said in the comment, these last pictures prove that the originally-proposed calculation of reference (max. Y/min. X) is not right: it shouldn't be the max./min. values of the carFrame but the minimum distances to the closer sides (= perpendicular line from the left/bottom side to the point).
------------ TRIGONOMETRIC CALCS FOR THE SPECIFIC EXAMPLE
The algorithm proposed is the one you should apply in any situation. Although in this specific case, the most difficult part is not moving from one reference system to the other, but defining the reference point in the rotated system. Once this is done, the application to the non-rotated case is immediate.
Here you have some calcs to perform this action (I have done it pretty quickly, thus better take it as an orientation and do it by your own); also I have only considered the case in the pictures, that is, rotation over the left/bottom point:
X_rotated = dx * Cos(alpha)
where dx = X_orig - (max_Y_CarFrame - Y_Orig) * Tan(alpha)
Y_rotated = dy * Cos(alpha)
where dy = Y_orig - X_orig * Tan(alpha)
NOTE: (max_Y_CarFrame - Y_Orig) in dx and X_orig in dy expect that the basic reference system is 0,0 (min. X and min. Y). If this is not the case, you would have to change this variables.
The X_rotated and Y_rotated give the perpendicular distance from the point to the closest side of the carFrame (respectively, left and bottom side). By applying these formulae (I insist: analyse them carefully), you get the X_old_D_absolute_frame/Y_old_D_absolute_frame; that is, you have just to add the left/bottom values from the carFrame (if it is located in 0,0, these would be the final values).

How to calculate angle between two direction vectors that form a closed/open shape?

I am trying to figure out the correct trig. eq./function to determine the following:
The Angle-change (in DEGREES) between two DIRECTION VECTORS(already determined), that represent two line-segment.
This is used in the context of SHAPE RECOGNITION (hand drawn by user on screen).
SO basically,
a) if the user draws a (rough) shape, such as a circle, or oval, or rectangle etc - the lines that make up that shape are broken down into, say, 20 points (x-y pairs).
b) I have the DirectionVector for each of these LINE SEGMENTS.
c) So the BEGINNING of a LINE SEGMENT (x0,y0) will be the END point of the previous line (so as to form a closed shape like a rectangle, let's say).
SO, my question is, given the context (i.e. determining the type of a polygon), how does one find the angle-change between two DIRECTION VECTORS (available as two floating point values for x and y)?
I have seen so many different trig. equations and I'm seeking clarity on this.
Thanks so much in advance folks!
If (x1,y1) is the first direction vector and (x2,y2) is the second one, it holds:
cos( alpha ) = (x1 * x2 + y1 * y2) / ( sqrt(x1*x1 + y1*y1) * sqrt(x2*x2 + y2*y2) )
sqrt means the square root.
Look up http://en.wikipedia.org/wiki/Dot_product
Especially the section "Geometric Representation".
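A direct Python transcription of that formula (my own sketch, not from the original answer):

```python
import math

def angle_between(v1, v2):
    """Unsigned angle in [0, pi] between two 2D direction vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))
```

Note that acos gives only the magnitude of the angle; if you also need its sign (left vs. right), combine it with the cross-product test discussed in the first question above.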
You could try atan2:
float angle = atan2(previousY-currentY, previousX-currentX);
but also, as the previous answers mentioned, the
angle between two vectors = acos(first.dotProduct(second)) (assuming both are unit vectors; otherwise divide the dot product by the product of their lengths first)
I guess you have the vectors given as three points (x_1, y_1), (x_2, y_2) and (x_3, y_3).
Then you can translate the points so that (x_1, y_1) lands on (0, 0):
(x'_1, y'_1) = (x_2, y_2) - (x_1, y_1)
(x'_2, y'_2) = (x_3, y_3) - (x_1, y_1)
(note that both subtractions use the original (x_1, y_1))
Now you have this situation: think of the triangle as two right-angled triangles. The first one has the angle alpha and a part of beta; the second right-angled triangle has the other part of beta. You can then calculate alpha with basic right-triangle trigonometry.
If I understand you correctly, you may just evaluate the dot product between two vectors and take the appropriate arccos to retrieve the angle between these vectors.

Calculating the coordinates of the third point of a triangle

OK, I know this sounds like it should be asked on math.stackoverflow.com, but this is embarrassingly simple maths that I've forgotten from high-school, rather than advanced post-graduate stuff!
I'm doing some graphics programming, and I have a triangle. Incidentally, two of this triangle's sides are equal, but I'm not sure if that's relevant. I have the coordinates of two of the corners (vertices), but not the third (these coordinates are pixels on the screen, in case that's relevant). I know the lengths of all three sides.
How do I determine the coordinates of the unknown vertex?
for oblique triangles: c^2 = a^2 + b^2 - 2ab * cos(C)
where a, b, and c are the lengths of the sides,
and A, B, and C are the angles opposite the sides with the same letter.
Use the above to figure out the angle from one of the endpoints you know, then use the angle, the position of the vertex, and the angle between the adjacent sides to determine where the unknown vertex is.
And the complexity of the problem doesn't determine which site it should go on, only the subject matter. So you should move this to math.
EDIT: I had a serious brainfart previously, but this should work.
Use the law of cosines
/* use the law of cosines to get the angle CAB at vertex A,
   with a = |AC|, b = |AB|, c = |BC| (c opposite the angle) */
c² = a² + b² - 2ab cos(Cangle)
cos(Cangle) = (a² + b² - c²) / (2ab)
Cangle = acos((a² + b² - c²) / (2ab))

AB = B.xy - A.xy;
normalize(AB);
len = length(AC);
/* rotate the unit vector AB by Cangle, scale by len, and offset from A */
C.x = A.x + len * (AB.x * cos(Cangle) - AB.y * sin(Cangle));
C.y = A.y + len * (AB.x * sin(Cangle) + AB.y * cos(Cangle));
/* the mirror-image solution for C is obtained with -Cangle */
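Putting the steps together in Python (my own sketch; it returns one of the two mirror solutions):

```python
import math

def third_vertex(A, B, len_ac, len_bc):
    """One possible position of C, given vertices A and B plus the side lengths
    |AC| and |BC|; the mirror solution uses -ang instead of ang."""
    len_ab = math.hypot(B[0] - A[0], B[1] - A[1])
    # law of cosines: angle at A, between sides AB and AC
    cos_a = (len_ab**2 + len_ac**2 - len_bc**2) / (2 * len_ab * len_ac)
    ang = math.acos(cos_a)
    # unit vector from A to B
    ux, uy = (B[0] - A[0]) / len_ab, (B[1] - A[1]) / len_ab
    # rotate it by ang and scale by |AC|
    return (A[0] + len_ac * (ux * math.cos(ang) - uy * math.sin(ang)),
            A[1] + len_ac * (ux * math.sin(ang) + uy * math.cos(ang)))
```

For a 3-4-5 right triangle with A = (0, 0), B = (4, 0), |AC| = 3 and |BC| = 5, this places C at (0, 3) (or (0, -3) for the mirror case).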
