Is an arc clockwise or counter-clockwise? - geometry

I have an arc, and for this arc I have the following information:
Start point.
End point.
A third point on the arc.
Center point.
The image below shows this in detail:
Now the question is: how do I check whether the arc is clockwise or counter-clockwise?

In 2D you can exploit the cross product, which returns a vector perpendicular to the two vectors multiplied. The side on which it points depends on the order of the operands (CW/CCW), so for two vectors v1, v2 we have v1 x v2 = -(v2 x v1). If v1, v2 are 2D vectors, the result of their cross product lies entirely in the Z-coordinate, so:
Let the starting point be SP, the ending point EP, and some middle point on the arc MP.
Form the vectors SE = EP - SP and SM = MP - SP and compute the sign of the Z-coordinate of their cross product:
CP = SE.X * SM.Y - SE.Y * SM.X
The arc is clockwise if the cross product is positive and anti-clockwise if it is negative (or the other way around; it depends on how your 2D coordinate system is defined).
Worked example for the arcs Red-Green-Blue; the right one is a small arc, the left two are large arcs:
The function used (note that the sign < 0 test is for my left-handed coordinate system):
function IsCW(sx, sy, mx, my, ex, ey: Integer): Boolean;
begin
  Result := (ex - sx) * (my - sy) - (ey - sy) * (mx - sx) < 0;
end;
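For readers outside Pascal, here is the same test as a small Python sketch. Note the sign convention is flipped relative to the function above, because this version assumes a standard right-handed (y-up) mathematical coordinate system rather than a left-handed screen one:

```python
def is_clockwise(sx, sy, mx, my, ex, ey):
    """True if the arc start->mid->end turns clockwise.

    Convention: right-handed (y-up) coordinates, where
    cross > 0 means clockwise and cross < 0 counter-clockwise.
    """
    # cross product of chord SE = E - S with SM = M - S
    cross = (ex - sx) * (my - sy) - (ey - sy) * (mx - sx)
    return cross > 0

# CW arc from (1,0) through (0,-1) to (-1,0)
print(is_clockwise(1, 0, 0, -1, -1, 0))  # True
# CCW arc from (1,0) through (0,1) to (-1,0)
print(is_clockwise(1, 0, 0, 1, -1, 0))   # False
```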

Related

Does computing screen space for reflections/refractions require second partial derivatives?

I have written a basic raytracer which keeps track of screen space. Each fragment has an associated pixel radius. When a ray dir, extruded by some distance from an eye, hits the geometry, I compute the normal vector N for the hit and combine it with four more rays. In pseudocode:
def distance := shortestDistanceToSurface(sdf, eye, dir, pixelRadius)
def p := eye + dir * distance
def N := estimateNormal(sdf, p)
def glance := distance * glsl.dot(dir, N)
def dx := (dirPX / glsl.dot(dirPX, N) - dirNX / glsl.dot(dirNX, N)) * glance
def dy := (dirPY / glsl.dot(dirPY, N) - dirNY / glsl.dot(dirNY, N)) * glance
Here, dirPX, dirNX, dirPY, and dirNY are rays which are offset by dir by the pixel radius in screen space in each of the four directions, but still aiming at the same reference point. This gives dx and dy, which are partial derivatives across the pixel indicating the rate at which the hit moves along the surface of the geometry as the rays move through screen space.
Because I track screen space, I can use pre-filtered samplers, as discussed by Inigo Quilez. They look great. However, now I want to add reflection (and refraction), which means that I need to recurse, and I'm not sure how to compute these rays and track screen space.
The essential problem is that, in order to figure out what color of light is being reflected at a certain place on the geometry, I need to not just take a point sample, but examine the whole screen space which is reflected. I can use the partial derivatives to give me four new points on the geometry which approximate an ellipse which is the projection of the original pixel from the screen:
def px := dx * pixelRadius
def py := dy * pixelRadius
def pPX := p + px
def pNX := p - px
def pPY := p + py
def pNY := p - py
And I can compute an approximate pixel radius by smushing the ellipse into a circle. I know that this ruins certain kinds of desirable anisotropic blur, but what's a raytracer without cheating?
def nextRadius := (glsl.length(dx) * glsl.length(dy)).squareRoot() * pixelRadius
However, I don't know where to reflect those points into the geometry; I don't know where to focus their rays. If I have to make a choice of focus, then it will be arbitrary, and depending on where the geometry reflects its own image, then this could arbitrarily blur or moiré the reflected images.
Do I need to take the second partial derivatives? I can approximate them just like the first derivatives, and then I can use them to adjust the normal N with slight changes, just like with the hit p. The normals then guide the focus of the ellipse, and map it to an approximate conic section. I'm worried about three things:
I worry about the cost of doing a couple extra vector additions and multiplications, which is probably negligible;
And also about whether the loss in precision, which is already really bad when doing these cheap derivatives, is going to be too lossy over multiple reflections;
And finally, how I'm supposed to handle situations where screen space explodes; when I have a mirrored sphere, how am I supposed to sample over big wedges of reflected space and e.g. average a checkerboard pattern into a grey?
And while it's not a worry, I simply don't know how to take four vectors and quickly fit a convincing cone to them, but this might be a mere problem of spending some time doing algebra on a whiteboard.
Edit: In John Amanatides' 1984 paper Ray Tracing with Cones, the curvature information is indeed computed, and used to fit an estimated cone onto the reflected ray. In Homan Igehy's 1999 paper Tracing Ray Differentials, only the first-order derivatives are used, and second derivatives are explicitly ignored.
Are there other alternatives, maybe? I've experimented with discarding the pixel radius after one reflection and just taking point samples, and they look horrible, with lots of aliasing and noise. Perhaps there is a field-of-view or depth-of-field approximation which can be computed on a per-material basis. As usual, multisampling can help, but I want an analytic solution so I don't waste so much CPU needlessly.
(sdf is a signed distance function and I am doing sphere tracing; the same routine both computes distance and also normals. glsl is the GLSL standard library.)
I won't accept my own answer, but I'll explain what I've done so that I can put this question down for now. I ended up going with an approach similar to Amanatides, computing a cone around each ray.
Each time I compute a normal vector, I also compute the mean curvature. Normal vectors are computed using a well-known trick. Let p be a zero of the SDF as in my question, let epsilon be a reasonable offset to use for numerical differentiation, and then let vp and vn be vectors whose components are evaluations of the SDF near p but perturbed at each component by epsilon. In pseudocode:
def justX := V(1.0, 0.0, 0.0)
def justY := V(0.0, 1.0, 0.0)
def justZ := V(0.0, 0.0, 1.0)
def vp := V(sdf(p + justX * epsilon), sdf(p + justY * epsilon), sdf(p + justZ * epsilon))
def vn := V(sdf(p - justX * epsilon), sdf(p - justY * epsilon), sdf(p - justZ * epsilon))
Now, by clever abuse of finite difference coefficients, we can compute both first and second derivatives at the same time. The third coefficient, sdf(p), is already zero by assumption. This gives our normal vector N, which is the gradient, and our mean curvature H, which is the Laplacian.
def N := glsl.normalize(vp - vn)
def H := sum(vp + vn) / (epsilon * epsilon)
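As a concrete check of the finite-difference scheme above, here is a small Python sketch. The unit-sphere SDF is my own assumption for the demo; for a unit sphere the gradient at (1, 0, 0) is (1, 0, 0) and the Laplacian of the distance field is 2/r = 2:

```python
import math

def sdf(p):
    # unit-sphere signed distance function (assumed for this demo)
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - 1.0

def normal_and_curvature(p, eps=1e-4):
    """Gradient (normal) and Laplacian (curvature proxy) at a surface point p."""
    offs = [(eps, 0.0, 0.0), (0.0, eps, 0.0), (0.0, 0.0, eps)]
    vp = [sdf((p[0] + o[0], p[1] + o[1], p[2] + o[2])) for o in offs]
    vn = [sdf((p[0] - o[0], p[1] - o[1], p[2] - o[2])) for o in offs]
    # first derivatives: central differences, then normalize
    g = [a - b for a, b in zip(vp, vn)]
    glen = math.sqrt(sum(c * c for c in g))
    N = tuple(c / glen for c in g)
    # second derivatives: sdf(p) == 0 on the surface by assumption,
    # so the middle finite-difference coefficient drops out
    H = sum(a + b for a, b in zip(vp, vn)) / (eps * eps)
    return N, H

N, H = normal_and_curvature((1.0, 0.0, 0.0))
```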
We can estimate Gaussian curvature from mean curvature. While mean curvature tells our cone how much to expand or contract, Gaussian curvature is always non-negative and measures how much extra area (spherical excess) is added to the cone's area of intersection. Gaussian curvature is given with K instead of H, and after substituting:
def K := H * H
Now we're ready to adjust the computation of fragment width. Let's assume that, in addition to the pixelRadius in screen space and distance from the camera to the geometry, we also have dradius, the rate of change in pixel radius over distance. We can take a dot product to compute a glance factor just like before, and the trigonometry is similar:
def glance := glsl.dot(dir, N).abs().reciprocal()
def fradius := (pixelRadius + dradius * distance) * glance * (1.0 + K)
def fwidth := fradius * 2.0
And now we have fwidth just like in GLSL. Finally, when it comes time to reflect, we'll want to adjust the change in radius by integrating our second-derivative curvature into our first derivative:
def dradiusNew := dradius + H * fradius
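Collecting these formulas, a direct transcription in Python (variable names follow the text; this merely restates the equations above, it is not new math):

```python
def cone_step(pixel_radius, dradius, distance, dir_dot_N, H):
    """One step of the cone-width bookkeeping described above."""
    K = H * H                       # estimated Gaussian curvature
    glance = 1.0 / abs(dir_dot_N)   # grazing-angle factor
    fradius = (pixel_radius + dradius * distance) * glance * (1.0 + K)
    fwidth = fradius * 2.0
    # on reflection, fold the second-derivative curvature into the
    # first-derivative rate of change of the radius
    dradius_new = dradius + H * fradius
    return fradius, fwidth, dradius_new

# head-on hit (dir . N = -1) on flat geometry (H = 0) at distance 10
fr, fw, dn = cone_step(0.001, 0.0, 10.0, -1.0, 0.0)
```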
The results are satisfactory. The fragments might be a little too big; it's hard to tell if something is overly blurry or just properly antialiased. I wonder whether Amanatides used a different set of equations to handle curvature; I spent far too long at a whiteboard deriving what ended up being pleasantly simple operations.
This image was not supersampled; each pixel had one fragment with one ray and one cone.

Determine if an arc represents a reflex or acute angle

I have this seemingly simple but very confusing problem.
Given I have a set of vertices (x1,y1), (x2,y2), (x3,y3), ... representing an arc. The points can be either clockwise or counter-clockwise, but are all consistently ordered.
And I know the center of the arc (xc,yc).
How can I tell if the arc subtends an acute/obtuse or reflex angle?
One obvious solution is to take the difference of atan2(last_pt - center) and atan2(first_pt - center). But if the arc passes through the point where PI becomes -PI, this method breaks down.
Also, since the arc points are derived from a rather noisy, pixelated picture, the vertices are not exactly smooth.
Picture of an acute and a reflex arc
I can't wrap my brain around solving this problem.
Thanks for your help!
Working with 2D angles is a pain for the reason you described, so it's better to work with vector math instead, which is rotationally invariant.
Define the 2D cross-product, A ^ B = Ax * By - Ay * Bx. This is positive if A is clockwise rotated relative to B, and vice versa.
The logic:
Compute C = (last_pt - center) ^ (first_pt - center)
If C = 0, the arc is either closed or spans exactly 180 degrees (a semicircle)
If C > 0, the arc must either be (i) clockwise and acute/obtuse or (ii) anti-clockwise and reflex
If C < 0, the opposite applies
Pseudocode:
int arc_type(Point first, Point last, Point center, bool clockwise)
{
    // cross-product
    float C = (last.x - center.x) * (first.y - center.y)
            - (last.y - center.y) * (first.x - center.x);
    if (Math.abs(C) < /* small epsilon */)
        return 0;  // 180-degree
    return ((C > 0) ^ clockwise) ? 1   // reflex
                                 : -1; // acute / obtuse
}
Note that if you don't have prior knowledge of whether the arc is clockwise or anti-, you can use the same cross-product method on adjacent points. You need to ensure that the order of the points is consistent - if not you can, again using the cross-product, sort them by (relative) angle.
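A direct Python translation of the pseudocode above (the sign convention follows the 2D cross product as defined in this answer; `eps` is a tolerance you would tune for your noisy pixel data):

```python
def arc_type(first, last, center, clockwise, eps=1e-9):
    """Classify an arc: 0 = semicircle/closed, 1 = reflex, -1 = acute/obtuse."""
    # 2D cross product (last - center) ^ (first - center)
    C = ((last[0] - center[0]) * (first[1] - center[1])
         - (last[1] - center[1]) * (first[0] - center[0]))
    if abs(C) < eps:
        return 0
    # XOR of the cross-product sign with the known winding direction
    return 1 if (C > 0) != clockwise else -1

# 90-degree CCW arc around the origin: acute
print(arc_type((1, 0), (0, 1), (0, 0), False))   # -1
# 270-degree CCW arc around the origin: reflex
print(arc_type((1, 0), (0, -1), (0, 0), False))  # 1
```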

How to draw the normal to the plane in PCL

I have the plane equation describing the points belonging to a plane in 3D, and the origin of the normal (X, Y, Z). This should be enough to generate something like a 3D arrow. In PCL this is possible via the viewer, but I would like to actually store those 3D points inside the cloud. How do I generate them, then? A cylinder with a cone on top?
To generate a line perpendicular to the plane:
You have the plane equation. This gives you the direction of the normal to the plane. If you used PCL to get the plane, this is in ModelCoefficients. See the details here: SampleConsensusModelPerpendicularPlane
The first step is to make a line along the normal at the point you mention (X,Y,Z). Let (NORMAL_X,NORMAL_Y,NORMAL_Z) be the normal you got from your plane equation. Something like:
pcl::PointXYZ pnt_on_line;
for (double distfromstart = 0.0; distfromstart < LINE_LENGTH; distfromstart += DISTANCE_INCREMENT) {
    pnt_on_line.x = X + distfromstart * NORMAL_X;
    pnt_on_line.y = Y + distfromstart * NORMAL_Y;
    pnt_on_line.z = Z + distfromstart * NORMAL_Z;
    my_cloud.points.push_back(pnt_on_line);
}
Now you want to put a hat on your arrow, and pnt_on_line contains the end of the line, exactly where you want to put it. To make the cone you could loop over angle and distance along the arrow, calculate a local x, y and z from those, and convert them to points in point-cloud space: the z part is converted into your point cloud's frame of reference by multiplying with the normal vector as above, while the x and y parts are multiplied by vectors perpendicular to this normal vector. To get these, choose an arbitrary unit vector perpendicular to the normal vector (for your x axis) and take its cross product with the normal vector to find the y axis.
The second part of this explanation is fairly terse but the first part may be the more important.
Update
So possibly the best way to describe how to do the cone is to start with a cylinder, which is an extension of the line described above. In the case of the line, there is (part of) a one dimensional manifold embedded in 3D space. That is we have one variable that we loop over adding points. The cylinder is a two dimensional object so we have to loop over two dimensions: the angle and the distance. In the case of the line we already have the distance. So the above loop would now look like:
for(double distfromstart=0.0;distfromstart<LINE_LENGTH;distfromstart+=DISTANCE_INCREMENT){
for(double angle=0.0;angle<2*M_PI;angle+=M_PI/8){
//calculate coordinates of point and add to cloud
}
}
Now in order to calculate the coordinates of the new point, well we already have the point on the line, now we just need to add it to a vector to move it away from the line in the appropriate direction of the angle. Let's say the radius of our cylinder will be 0.1, and let's say an orthonormal basis that we have already calculated perpendicular to the normal of the plane (which we will see how to calculate later) is perpendicular_1 and perpendicular_2 (that is, two vectors perpendicular to each other, of length 1, also perpendicular to the vector (NORMAL_X,NORMAL_Y,NORMAL_Z)):
//calculate coordinates of point and add to cloud
pnt_on_cylinder.x = pnt_on_line.x + perpendicular_1.x * 0.1 * cos(angle) + perpendicular_2.x * 0.1 * sin(angle);
pnt_on_cylinder.y = pnt_on_line.y + perpendicular_1.y * 0.1 * cos(angle) + perpendicular_2.y * 0.1 * sin(angle);
pnt_on_cylinder.z = pnt_on_line.z + perpendicular_1.z * 0.1 * cos(angle) + perpendicular_2.z * 0.1 * sin(angle);
my_cloud.points.push_back(pnt_on_cylinder);
Actually, this is a vector summation, and written as vectors the operation looks like:
pnt_on_line + 0.1 * (perpendicular_1 * cos(angle) + perpendicular_2 * sin(angle))
Now I said I would talk about how to calculate perpendicular_1 and perpendicular_2. Let K be any unit vector that is not parallel to (NORMAL_X,NORMAL_Y,NORMAL_Z) (this can be found by trying e.g. (1,0,0), then (0,1,0) if the first turns out to be parallel).
Then
perpendicular_1 = K X (NORMAL_X,NORMAL_Y,NORMAL_Z)
perpendicular_2 = perpendicular_1 X (NORMAL_X,NORMAL_Y,NORMAL_Z)
Here X is the vector cross product and the above are vector equations; normalize both results to unit length so that they form an orthonormal basis. Note also that the original calculation of pnt_on_line involved a scalar multiplication and a vector summation (I am just writing this for completeness of the exposition).
If you can manage this then the cone is easy just by changing a couple of things in the double loop: the radius just changes along its length until it is zero at the end of the loop and in the loop distfromstart will not start at 0.
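To tie the whole construction together, here is a language-agnostic sketch in Python rather than PCL C++ (the helper name and plain-tuple points are my own; in PCL you would push `pcl::PointXYZ` values into the cloud instead):

```python
import math

def arrow_points(origin, normal, shaft_len=1.0, cone_len=0.3, cone_r=0.1,
                 n_len=20, n_ang=16):
    """Points for an arrow: a line along `normal` plus a cone-shaped tip.

    Hypothetical helper; `normal` is assumed to be a unit vector.
    """
    nx, ny, nz = normal
    # pick any vector K not parallel to the normal
    k = (1.0, 0.0, 0.0) if abs(nx) < 0.9 else (0.0, 1.0, 0.0)
    # perpendicular_1 = K x N, normalized
    p1 = (k[1] * nz - k[2] * ny, k[2] * nx - k[0] * nz, k[0] * ny - k[1] * nx)
    l1 = math.sqrt(sum(c * c for c in p1))
    p1 = tuple(c / l1 for c in p1)
    # perpendicular_2 = perpendicular_1 x N (unit length since both are unit)
    p2 = (p1[1] * nz - p1[2] * ny, p1[2] * nx - p1[0] * nz, p1[0] * ny - p1[1] * nx)
    pts = []
    for i in range(n_len):                 # the shaft: a 1D manifold, one loop
        d = shaft_len * i / n_len
        pts.append(tuple(origin[j] + d * normal[j] for j in range(3)))
    for i in range(n_len):                 # the cone: two loops (distance, angle)
        d = cone_len * i / n_len
        r = cone_r * (1.0 - i / n_len)     # radius shrinks to zero at the tip
        for a in (2.0 * math.pi * j / n_ang for j in range(n_ang)):
            base = [origin[j] + (shaft_len + d) * normal[j] for j in range(3)]
            pts.append(tuple(base[j] + r * (p1[j] * math.cos(a) + p2[j] * math.sin(a))
                             for j in range(3)))
    return pts

pts = arrow_points((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```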

Plotting graphs using Bezier curves

I have an array of points (x0,y0)... (xn,yn) monotonic in x and wish to draw the "best" curve through these using Bezier curves. This curve should not be too "jaggy" (e.g. similar to joining the dots) and not too sinuous (and definitely not "go backwards"). I have created a prototype but wonder whether there is an objectively "best solution".
I need to find control points for all segments (x(i),y(i)) to (x(i+1),y(i+1)). My current approach (except for the endpoints) for a segment from x(i) to x(i+1) is:
find the vector x(i-1)...x(i+1) , normalize, and scale it by factor * len(i,i+1) to give the vector for the leading control point
find the vector x(i+2)...x(i) , normalize, and scale it by factor * len(i,i+1) to give the vector for the trailing control point.
I have tried factor = 0.1 (too jaggy), 0.33 (too curvy) and 0.20 (about right). But is there a better approach which, say, makes the 2nd and 3rd derivatives as smooth as possible? (I assume such an algorithm is implemented in graphics packages?)
I can post pseudo/code if requested. Here are the three images (0.1/0.2/0.33). The control points are shown by straight lines: black (trailing) and red (leading)
Here's the current code. It's aimed at plotting Y against X (monotonic X) without closing the path. I have built my own library for creating SVG (the preferred output); this code creates triples of x,y in coordArray for each curve segment (control1, control2, end). The start point is assumed from the last operation (Move or Curve). It's Java but should be easy to interpret (CurvePrimitive maps to a cubic; "d" is the String representation of the complete path in SVG).
List<SVGPathPrimitive> primitiveList = new ArrayList<SVGPathPrimitive>();
primitiveList.add(new MovePrimitive(real2Array.get(0)));
for(int i = 0; i < real2Array.size()-1; i++) {
// create path 12
Real2 p0 = (i == 0) ? null : real2Array.get(i-1);
Real2 p1 = real2Array.get(i);
Real2 p2 = real2Array.get(i+1);
Real2 p3 = (i == real2Array.size()-2) ? null : real2Array.get(i+2);
Real2Array coordArray = plotSegment(factor, p0, p1, p2, p3);
SVGPathPrimitive primitive = new CurvePrimitive(coordArray);
primitiveList.add(primitive);
}
String d = SVGPath.constructDString(primitiveList);
SVGPath path1 = new SVGPath(d);
svg.appendChild(path1);
/**
 *
 * @param factor to scale control points by
 * @param p0 previous point (null at start)
 * @param p1 start of segment
 * @param p2 end of segment
 * @param p3 following point (null at end)
 * @return
 */
private Real2Array plotSegment(double factor, Real2 p0, Real2 p1, Real2 p2, Real2 p3) {
// create p1-p2 curve
double len12 = p1.getDistance(p2) * factor;
Vector2 vStart = (p0 == null) ? new Vector2(p2.subtract(p1)) : new Vector2(p2.subtract(p0));
vStart = new Vector2(vStart.getUnitVector().multiplyBy(len12));
Vector2 vEnd = (p3 == null) ? new Vector2(p2.subtract(p1)) : new Vector2(p3.subtract(p1));
vEnd = new Vector2(vEnd.getUnitVector().multiplyBy(len12));
Real2Array coordArray = new Real2Array();
Real2 controlStart = p1.plus(vStart);
coordArray.add(controlStart);
Real2 controlEnd = p2.subtract(vEnd);
coordArray.add(controlEnd);
coordArray.add(p2);
// plot controls
SVGLine line12 = new SVGLine(p1, controlStart);
line12.setStroke("red");
svg.appendChild(line12);
SVGLine line21 = new SVGLine(p2, controlEnd);
svg.appendChild(line21);
return coordArray;
}
A Bezier curve requires the data points, along with the slope and curvature at each point. In a graphics program, the slope is set by the slope of the control-line, and the curvature is visualized by the length.
When you don't have such control-lines input by the user, you need to estimate the gradient and curvature at each point. The wikipedia page http://en.wikipedia.org/wiki/Cubic_Hermite_spline, and in particular the 'interpolating a data set' section has a formula that takes these values directly.
Typically, estimating these values from points is done using a finite difference - so you use the values of the points on either side to help estimate. The only choice here is how to deal with the end points where there is only one adjacent point: you can set the curvature to zero, or if the curve is periodic you can 'wrap around' and use the value of the last point.
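A minimal sketch of that finite-difference idea in Python: Catmull-Rom-style tangents estimated from the neighbouring points, then converted to cubic Bezier control points (the function name and the tangent/3 conversion are standard for this scheme, but this particular helper is my own):

```python
def catmull_rom_controls(p0, p1, p2, p3):
    """Cubic Bezier control points for the segment p1->p2,
    with tangents estimated by central differences (Catmull-Rom)."""
    # tangent at p1 from its neighbours p0 and p2, likewise at p2
    m1 = ((p2[0] - p0[0]) / 2.0, (p2[1] - p0[1]) / 2.0)
    m2 = ((p3[0] - p1[0]) / 2.0, (p3[1] - p1[1]) / 2.0)
    # a cubic Bezier with these endpoint tangents places the control
    # points one third of the tangent away from each endpoint
    c1 = (p1[0] + m1[0] / 3.0, p1[1] + m1[1] / 3.0)
    c2 = (p2[0] - m2[0] / 3.0, p2[1] - m2[1] / 3.0)
    return c1, c2

# collinear data should give control points on the same line
c1, c2 = catmull_rom_controls((0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0))
```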
The wikipedia page I referenced also has other schemes, but most others introduce some other 'free parameter' that you will need to find a way of setting, so in the absence of more information to help you decide how to set other parameters, I'd go for the simple scheme and see if you like the results.
Let me know if the wikipedia article is not clear enough, and I'll knock up some code.
One other point to be aware of: what 'sort' of Bezier interpolation are you after? Most graphics programs do cubic Bezier in 2 dimensions (i.e. you can draw a circle-like curve), but your sample images look like 1D function approximation (as in, for every x there is only one y value). The graphics-program type of curve is not really mentioned on the page I referenced. The maths involved in converting estimates of slope and curvature into a control vector of the form illustrated on http://en.wikipedia.org/wiki/B%C3%A9zier_curve (Cubic Bezier) would take some working out, but the idea is similar.
Below is a picture and algorithm for a possible scheme, assuming your only input is the three points P1, P2, P3
Construct a line (C1,P1,C2) such that the angles (P3,P1,C1) and (P2,P1,C2) are equal. In a similar fashion construct the other dark-grey lines. The intersections of these dark-grey lines (marked C1, C2 and C3) become the control-points as in the same sense as the images on the Bezier Curve wikipedia site. So each red curve, such as (P3,P1), is a quadratic bezier curve defined by the points (P3, C1, P1). The construction of the red curve is the same as given on the wikipedia site.
However, I notice that the control-vector on the Bezier Curve wikipedia page doesn't seem to match the sort of control vector you are using, so you might have to figure out how to equate the two approaches.
I tried this with quadratic splines instead of cubic ones, which simplifies the selection of control points: you just choose the gradient at each point to be a weighted average of the mean gradients of the neighbouring intervals, then draw tangents to the curve at the data points and place the control points where those tangents intersect. However, I couldn't find a sensible policy for setting the gradients of the end points, so I opted for Lagrange fitting instead:
function lagrange(points) { //points is [ [x1,y1], [x2,y2], ... ]
// See: http://www.codecogs.com/library/maths/approximation/interpolation/lagrange.php
var j,n = points.length;
var p = [];
for (j=0;j<n;j++) {
p[j] = function (x,j) { //have to pass j cos JS is lame at currying
var k, res = 1;
for (k=0;k<n;k++)
res*=( k==j ? points[j][1] : ((x-points[k][0])/(points[j][0]-points[k][0])) );
return res;
}
}
return function(x) {
var i, res = 0;
for (i=0;i<n;i++)
res += p[i](x,i);
return res;
}
}
With that, I just make lots of samples and join them with straight lines.
This is still wrong if your data (like mine) consists of real world measurements. These are subject to random errors and if you use a technique that forces the curve to hit them all precisely, then you can get silly valleys and hills between the points. In cases like these, you should ask yourself what order of polynomial the data should fit and ... well ... that's what I'm about to go figure out.

How do I QUICKLY find the closest intersection in 2D between a ray and m polylines?

How do I find the closest intersection in 2D between a ray:
x = x0 + t*cos(a), y = y0 + t*sin(a)
and m polylines:
{(x1,y1), (x2,y2), ..., (xn,yn)}
QUICKLY?
I started by looping through all line segments, and for each line segment
{(x1,y1),(x2,y2)} solving:
x1 + u*(x2-x1) = x0 + t*cos(a)
y1 + u*(y2-y1) = y0 + t*sin(a)
by Cramer's rule, and afterwards sorting the intersections by distance, but that was slow :-(
BTW: the polylines happen to be monotonically increasing in x.
Coordinate system transformation
I suggest you first transform your setup to something with easier coordinates:
Take your point p = (x, y).
Move it by (-x0, -y0) so that the ray now starts at the center.
Rotate it by -a so that the ray now lies on the x axis.
So far the above operations have cost you four additions and four multiplications per point:
ca = cos(a) # computed only once
sa = sin(a) # likewise
x' = x - x0
y' = y - y0
x'' = x'*ca + y'*sa
y'' = y'*ca - x'*sa
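The transformation above, as a small Python sketch (a worked check: for a ray at (1, 1) pointing straight up, the point (1, 3) lies on the ray two units ahead, so it should land at (2, 0)):

```python
import math

def to_ray_frame(points, x0, y0, a):
    """Move and rotate points so the ray starts at the origin along +x''."""
    ca, sa = math.cos(a), math.sin(a)   # computed only once
    out = []
    for x, y in points:
        xp, yp = x - x0, y - y0                               # translate
        out.append((xp * ca + yp * sa, yp * ca - xp * sa))    # rotate by -a
    return out

frame = to_ray_frame([(1.0, 3.0)], 1.0, 1.0, math.pi / 2)
```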
Checking for intersections
Now you know that a segment of the polyline will only intersect the ray if the sign of its y'' value changes, i.e. y1'' * y2'' < 0. You could even postpone the computation of the x'' values until after this check. Furthermore, the segment will only intersect the ray if the intersection of the segment with the x axis occurs for x > 0, which can only happen if either value is greater than zero, i.e. x1'' > 0 or x2'' > 0. If both x'' are greater than zero, then you know there is an intersection.
The following paragraph is kind of optional, don't worry if you don't understand it, there is an alternative noted later on.
If one x'' is positive but the other is negative, then you have to check further. Suppose that the sign of y'' changed from negative to positive, i.e. y1'' < 0 < y2''. The line from p1'' to p2'' will intersect the x axis at x > 0 if and only if the triangle formed by p1'', p2'' and the origin is oriented counter-clockwise. You can determine the orientation of that triangle by examining the sign of the determinant x1''*y2'' - x2''*y1'', it will be positive for a counter-clockwise triangle. If the direction of the sign change is different, the orientation has to be different as well. So to take this together, you can check whether
(x1'' * y2'' - x2'' * y1'') * y2'' > 0
If that is the case, then you have an intersection. Notice that there were no costly divisions involved so far.
Computing intersections
As you want to not only decide whether an intersection exists, but actually find a specific one, you now have to compute that intersection. Let's call it p3. It must satisfy the equations
(x2'' - x3'')/(y2'' - y3'') = (x1'' - x3'')/(y1'' - y3'') and
y3'' = 0
which results in
x3'' = (x2'' * y1'' - x1'' * y2'')/(y1'' - y2'')
Instead of the triangle orientation check from the previous paragraph, you could always compute this x3'' value and discard any results where it turns out to be negative. Less code, but more divisions. Benchmark if in doubt about performance.
To find the point closest to the origin of the ray, you take the result with minimal x3'' value, which you can then transform back into its original position:
x3 = x3''*ca + x0
y3 = x3''*sa + y0
There you are.
Note that all of the above assumed that all numbers were either positive or negative. If you have zeros, it depends on the exact interpretation of what you actually want to compute, how you want to handle these border cases.
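Putting the transformation, the sign-change test and the intersection formula together, a brute-force sketch in Python (this uses the simpler always-compute-x3'' variant rather than the triangle-orientation check, and ignores the zero border cases):

```python
import math

def closest_intersection(x0, y0, a, polyline):
    """Closest intersection of the ray (x0, y0, angle a) with a polyline,
    or None if there is no intersection."""
    ca, sa = math.cos(a), math.sin(a)
    # transform every vertex into the ray's frame
    pts = [((x - x0) * ca + (y - y0) * sa, (y - y0) * ca - (x - x0) * sa)
           for x, y in polyline]
    best = None
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        if y1 * y2 >= 0:        # no sign change -> segment misses the x axis
            continue
        # intersection of the segment with y'' = 0
        x3 = (x2 * y1 - x1 * y2) / (y1 - y2)
        if x3 > 0 and (best is None or x3 < best):
            best = x3
    if best is None:
        return None
    return (best * ca + x0, best * sa + y0)   # transform back

# ray along +x from the origin; nearest crossing is the segment at x = 2
hit = closest_intersection(0.0, 0.0, 0.0,
                           [(2.0, -1.0), (2.0, 1.0), (5.0, -1.0), (5.0, 1.0)])
```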
To avoid checking the intersection against all segments, some space partition is needed, like a quadtree or BSP tree. With a space partition you only need to check the ray against the partitions it actually crosses.
In this case, since the points are sorted by x-coordinate, it is possible to build the partition with boxes (min x, min y)-(max x, max y) over parts of the polyline. The root box is the min-max of all points, and it is split into two boxes for the first and second halves of the polyline. The number of segments in the two parts is the same, or one box has one more segment. This box splitting is done recursively until only one segment is left in a box.
To check for a ray intersection, start with the root box and check whether the ray intersects it; if it does, check the two sub-boxes for an intersection, testing the closer sub-box before the farther one.
Checking a ray-box intersection amounts to checking whether the ray crosses an axis-aligned line between two positions. That is done for the four box boundaries.
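A compact Python sketch of that recursive box splitting (the node layout is my own; a production version would store the tree flat and compute the boxes bottom-up):

```python
def build_tree(points):
    """Bounding-box tree over the segments of a polyline.

    Node: (box, seg_index, left, right); box = (minx, miny, maxx, maxy).
    Internal nodes have seg_index None; leaves have left/right None.
    """
    segs = list(zip(points, points[1:]))

    def build(lo, hi):                        # covers segments segs[lo:hi]
        xs = [p[0] for s in segs[lo:hi] for p in s]
        ys = [p[1] for s in segs[lo:hi] for p in s]
        box = (min(xs), min(ys), max(xs), max(ys))
        if hi - lo == 1:                      # leaf: a single segment
            return (box, lo, None, None)
        mid = (lo + hi) // 2                  # split into two (near-)halves
        return (box, None, build(lo, mid), build(mid, hi))

    return build(0, len(segs))

tree = build_tree([(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)])
```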