Bezier clipping - graphics

I'm trying to find/make an algorithm to compute the intersection (a new filled object) of two arbitrary filled 2D objects. The objects are defined using either lines or cubic beziers and may have holes or self-intersect. I'm aware of several existing algorithms doing the same with polygons, listed here. However, I'd like to support beziers without subdividing them into polygons, and the output should have roughly the same control points as the input in areas where there are no intersections.
This is for an interactive program to do some CSG but the clipping doesn't need to be real-time. I've searched for a while but haven't found good starting points.

I found the following publication to be the best source of information regarding Bezier clipping:
T. W. Sederberg, BYU, Computer Aided Geometric Design Course Notes
Chapter 7, which covers curve intersection, is available online. It outlines four different approaches to finding intersections and describes Bezier clipping in detail:
https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=1000&context=facpub

I know I'm at risk of being redundant, but I was investigating the same issue and found a solution that I'd read about in academic papers but for which I hadn't found a working implementation.
You can rewrite the bezier curves as a set of two bi-variate cubic equations like this:
∆x = ax₁*t₁^3 + bx₁*t₁^2 + cx₁*t₁ + dx₁ - ax₂*t₂^3 - bx₂*t₂^2 - cx₂*t₂ - dx₂
∆y = ay₁*t₁^3 + by₁*t₁^2 + cy₁*t₁ + dy₁ - ay₂*t₂^3 - by₂*t₂^2 - cy₂*t₂ - dy₂
Obviously, the curves intersect for values of (t₁,t₂) where ∆x = ∆y = 0. Unfortunately, it's complicated by the fact that it is difficult to find roots in two dimensions, and approximate approaches tend to (as another writer put it) blow up.
But if you're using integers or rational numbers for your control points, you can use Groebner bases to rewrite your bivariate order-3 polynomials as a univariate polynomial (possibly of degree 9, hence the nine possible intersections). After that you just need to find your roots (for, say, t₂) in one dimension, and plug your results back into one of your original equations to find the other dimension.
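If you want to experiment with this approach before committing to it, here is a minimal sketch using Python's sympy (my own illustration, not the Haskell code offered below; the integer control points are made up):

```python
# Sketch of the Groebner-basis approach: eliminate t1 with a lex-order
# basis, leaving a univariate polynomial in t2 (degree <= 9), then find
# its real roots and back-substitute. Assumes the curves are not identical.
from sympy import symbols, groebner, Poly, real_roots

t1, t2 = symbols('t1 t2')

def cubic(p0, p1, p2, p3, t):
    # Power-basis polynomial of one coordinate of a cubic Bezier.
    return ((1 - t)**3*p0 + 3*(1 - t)**2*t*p1 + 3*(1 - t)*t**2*p2 + t**3*p3).expand()

# Hypothetical integer control points for the two curves.
dx = cubic(0, 1, 2, 3, t1) - cubic(0, 2, 1, 3, t2)    # delta-x(t1, t2)
dy = cubic(0, 3, -3, 0, t1) - cubic(1, -2, 3, 0, t2)  # delta-y(t1, t2)

gb = groebner([dx, dy], t1, t2, order='lex')
univariate = gb.exprs[-1]            # last lex basis element: t2 only

for r2 in real_roots(Poly(univariate, t2)):
    v2 = float(r2)
    if 0 <= v2 <= 1:
        # Plug t2 back in; dx becomes a univariate cubic in t1.
        for r1 in Poly(dx.subs(t2, v2), t1).nroots():
            if r1.is_real and 0 <= float(r1) <= 1:
                v1 = float(r1)
                if abs(float(dy.subs({t1: v1, t2: v2}))) < 1e-6:
                    print(f"intersection at t1 = {v1:.6f}, t2 = {v2:.6f}")
```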
Buchberger has a layman-friendly introduction to Groebner bases called "Gröbner Bases: A Short Introduction for Systems Theorists" that was very helpful for me. Google it. The other paper that was helpful was "Fast, precise flattening of cubic Bézier path and offset curves" by T. F. Hain, which has lots of utility equations for Bezier curves, including how to find the polynomial coefficients for the x and y equations.
As for whether Bezier clipping will help with this particular method, I doubt it: Bezier clipping is a method for narrowing down where intersections might be, not for producing a final (though possibly approximate) answer. With this method, most of the time is spent constructing the univariate equation, and that task doesn't get any easier with clipping. Finding the roots is trivial by comparison.
However, one advantage of this method is that it doesn't depend on recursively subdividing the curve; the problem becomes a one-dimensional root-finding problem, which is not easy but is well documented. The major disadvantage is that computing Groebner bases is costly and becomes very unwieldy if you're dealing with floating-point polynomials or higher-order Bezier curves.
If you want some finished code in Haskell to find the intersections, let me know.

I wrote code to do this a long, long time ago. The project I was working on defined 2D objects using piecewise Bezier boundaries that were generated as PostScript paths.
The approach I used was as follows (a Python sketch appears after the steps):
Let curves p, q be defined by Bezier control points. Do they intersect?
Compute the bounding boxes of the control points. If they don't overlap, then the two curves don't intersect. Otherwise:
p.x(t), p.y(t), q.x(u), q.y(u) are cubic polynomials on 0 <= t,u <= 1.0.
The distance squared (p.x - q.x) ** 2 + (p.y - q.y) ** 2 is a polynomial on (t,u).
Use Newton-Raphson to try to solve that for zero. Discard any solutions outside 0 <= t,u <= 1.0.
N-R may or may not converge. The curves might not intersect, or N-R can just blow up when the two curves are nearly parallel. So cut off N-R if it's not converging after some arbitrary number of iterations. This can be a fairly small number.
If N-R doesn't converge on a solution, split one curve (say, p) into two curves pa, pb at t = 0.5. This is easy, it's just computing midpoints, as the linked article shows. Then recursively test (q, pa) and (q, pb) for intersections. (Note that in the next layer of recursion that q has become p, so that p and q are alternately split on each ply of the recursion.)
Most of the recursive calls return quickly because the bounding boxes are non-overlapping.
You will have to cut off the recursion at some arbitrary depth, to handle weird cases where the two curves are parallel and don't quite touch, but the distance between them is arbitrarily small -- perhaps only 1 ULP of difference.
When you do find an intersection, you're not done, because cubic curves can have multiple crossings. So you have to split each curve at the intersection point, and recursively check for more intersections between (pa, qa), (pa, qb), (pb, qa), (pb, qb).
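Here is a rough Python sketch of that scheme (my reconstruction, not the original code; the tolerances and depth cutoff are arbitrary, and the final split-at-the-intersection step for hunting multiple crossings is left out):

```python
import numpy as np

def point(b, t):
    # de Casteljau evaluation of a cubic Bezier b (a 4x2 array) at t.
    p = np.asarray(b, dtype=float)
    for _ in range(3):
        p = (1 - t) * p[:-1] + t * p[1:]
    return p[0]

def deriv(b, t):
    # Derivative of a cubic Bezier: a quadratic on the difference points.
    b = np.asarray(b, dtype=float)
    p = 3.0 * (b[1:] - b[:-1])
    for _ in range(2):
        p = (1 - t) * p[:-1] + t * p[1:]
    return p[0]

def split(b):
    # Split at t = 0.5: just midpoints, per de Casteljau.
    b = np.asarray(b, dtype=float)
    m01, m12, m23 = (b[0] + b[1]) / 2, (b[1] + b[2]) / 2, (b[2] + b[3]) / 2
    a, c = (m01 + m12) / 2, (m12 + m23) / 2
    mid = (a + c) / 2
    return np.array([b[0], m01, a, mid]), np.array([mid, c, m23, b[3]])

def boxes_overlap(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.all(p.min(0) <= q.max(0)) and np.all(q.min(0) <= p.max(0))

def newton(p, q, t=0.5, u=0.5, iters=10, tol=1e-12):
    # 2D Newton-Raphson on F(t, u) = p(t) - q(u) = 0.
    for _ in range(iters):
        f = point(p, t) - point(q, u)
        if f @ f < tol:
            return (t, u) if 0 <= t <= 1 and 0 <= u <= 1 else None
        J = np.column_stack([deriv(p, t), -deriv(q, u)])
        if abs(np.linalg.det(J)) < 1e-12:
            return None              # nearly parallel: let the caller subdivide
        dt, du = np.linalg.solve(J, -f)
        t, u = t + dt, u + du
    return None                      # not converging: cut it off

def intersect(p, q, depth=0, max_depth=48):
    # Returned (t, u) values are local to the subdivided segments; mapping
    # them back to the original parameter ranges is omitted for brevity.
    if not boxes_overlap(p, q):
        return []
    hit = newton(p, q)
    if hit is not None:
        return [hit]
    if depth >= max_depth:
        return []                    # nearly-touching parallel curves: give up
    pa, pb = split(p)
    # Swap p and q so they are split on alternate plies of the recursion.
    return intersect(q, pa, depth + 1) + intersect(q, pb, depth + 1)
```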

There are a number of academic research papers on doing bezier clipping:
http://www.andrew.cmu.edu/user/sowen/abstracts/Se306.html
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.61.6669
http://www.dm.unibo.it/~casciola/html/research_rr.html
I recommend the interval methods because, as you describe, you don't have to subdivide down to polygons, and you can get guaranteed results as well as define your own arbitrary precision for the result set. For more information on interval rendering, you may also refer to http://www.sunfishstudio.com

Related

Fitting a transition + circle + transition curve to a set of measured points

I am dealing with a reverse-engineering problem regarding road geometry and estimation of design conditions.
Suppose you have a set of points obtained from measurements of positions along a road. This road has straight sections as well as curved sections. Straight sections are, of course, represented by lines, and curves are represented by circles of unknown center and radius. There are, as well, transition sections, which may be clothoids / Euler spirals or any other usual track transition curve. A representation of the track may look like this: [figure omitted]
We know in advance that the road / track was designed taking this transition + circle + transition principle into account for every curve, yet we only have the measurement points, and the goal is to find the parameters describing every curve on the track, that is, the transition parameters as well as the circle's center and radius.
I have written some code using a nonlinear optimization algorithm, where a user can select start and end points and fit a circle to the arc section between them, as shown in the next figure: [figure omitted]
However, I can't find a suitable way to take the transition into account. After giving it some thought, I came to think that this is because, given a set of discrete points (with their measurement error) representing a full curve, it is not entirely clear where it "begins" and where it "ends", and, moreover, it is even less clear where the entry transition, the circle proper, and the exit transition "begin" and "end".
Is there any work on this subject which I may have missed? Is there a proper way to fit the whole transition + circle + transition structure to the set of points?
As far as I know, there's no method to fit a clothoid1-circle-clothoid2 sequence to a given set of points.
The basic facts are that two points define a straight line, and three points define a unique circle.
The clothoid is far more complex, because you need the parameter A, the final radius Rf, an initial point (px, py), the radius Ri at that point, and the tangent T (the angle with the X-axis) at that point.
That makes five pieces of data you can use to find the solution.
Because clothoid coordinates are computed from series expansions of the Fresnel integrals (see https://math.stackexchange.com/a/3359006/688039 for a short explanation), followed by a translation and a rotation, there is no easy way to fit this spiral to a set of given points.
When I've had to deal with your issue, what I've done is:
Calculate the radius for triplets of consecutive points: p1p2p3, p2p3p4, p3p4p5, etc. (see the sketch after this list).
Observe the sequence of radii. Similar values mean a circle, increasing/decreasing values mean a clothoid, and very large values mean a straight line.
For each basic element (line, circle), find the most probable characteristics (angles, vertices, radius) by hand or by some regression method. Often common sense works best.
For a spiral you may start with approximate values taken from the adjacent elements. These may well be the initial angle and point, and the initial and final radii. Then you need to iterate, playing with the Fresnel integrals and the coordinate transformation, until you find a "good" parameter A. Then repeat with small variations of the other values, the ones you took from the adjacent elements.
Make whatever adjustments seem sensible. For example, many values (A, radii) tend to be integers, without decimals, simply because that was easier for the designer to type.
If you can make a small applet to do these steps, that's enough. Using typical roads software helps, but it doesn't spare you the iteration process.
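A minimal Python sketch of those first two steps (the input file name is hypothetical, and the thresholds for "similar", "increasing/decreasing", and "very large" are left to the reader):

```python
import numpy as np

def circumradius(p1, p2, p3):
    # Radius of the circle through three points (infinite when collinear).
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    if abs(cross) < 1e-12:
        return np.inf
    return a * b * c / (2 * abs(cross))

pts = np.loadtxt('measured_road.txt')   # hypothetical file: one "x y" pair per line
radii = [circumradius(pts[i], pts[i + 1], pts[i + 2]) for i in range(len(pts) - 2)]
# Roughly constant radii suggest a circular arc, steadily increasing or
# decreasing radii a clothoid, and very large radii a straight section.
```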
If the points are dense compared to the effective radii of curvature, estimate the local curvature by least-squares fitting of a circle on a small number of points, taking into account that the curvature is zero most of the time.
You will obtain a plot with constant values and ramps that connect them. You can use an estimate of the slope at the inflection points to figure out the transition points.

How to optimize two ranges for the determination of the intersection point between two curves

I am starting this thread to ask for your help with Excel.
The main goal is to determine the coordinates of the intersection point P=(x,y) between two curves (curve A, curve B) modeled by points.
The curves are non-linear, and each defining point is computed from complex equations (the equations depend on many parameters chosen by the user, and the user also chooses the number of points, which determines the accuracy of the curves). That is to say, each curve (curve A and curve B) changes in the XY plane (the Z coordinate is always zero; we are working in the XY plane) according to the input parameters, and the number of defining points also depends on the user's choice.
My first attempt was to determine the intersection point through the trend equations of each curve (I used the LINEST function to determine the coefficients of the polynomial equations) and by solving the two trend equations as a system. The problem is that Excel does not interpolate the curves very well because they are too wide, so the intersection point (the solution of the system) ends up very far from the real solution.
So what I want to do is shorten the ranges of points, so that I can find two defining trend equations for the curves, cutting away the portions of the curves where the intersection cannot lie.
Today, in order to find the solution, I plot the curves in the Siemens NX CAD system using multi-segment splines of order 3, and then I can easily find the coordinates of the intersection point. Please note that I am using multi-segment splines to get a more precise approximation of the functions behind curve A and curve B.
Since I want to avoid the CAD tool and stay entirely in Excel, is there a way to select a shorter range of the defining points close to the intersection point in order to better approximate curve A and curve B with trend equations (the LINEST function with 4 points and a 3rd-order fit) and then find the solution?
I attach a picture to give you an example of Curve A and Curve B on the plane:
https://postimg.cc/MfnKYqtk
At the following link you can find the Excel file with the coordinate points and the curve plot:
https://www.mediafire.com/file/jqph8jrnin0i7g1/intersection.xlsx/file
I hope to solve this problem with your help, thank you in advance!
kalo86
Your question gave me some days of thinking and research.
With the help of https://pomax.github.io/bezierinfo/
§ 27 - Intersections (Line-line intersections)
and
§ 28 - Curve/curve intersection
your problem can be solved in Excel.
About the mystery of Excel's smoothed lines, you will find details here:
https://blog.splitwise.com/2012/01/31/mystery-solved-the-secret-of-excel-curved-line-interpolation/
The author of this fit is Dr. Brian T. Murphy, PhD, PE, of www.xlrotor.com. You will find details here:
https://www.xlrotor.com/index.php/our-company/about-dr-murphy
https://www.xlrotor.com/index.php/knowledge-center/files
=>see Smooth_curve_bezier_example_file.xls
https://www.xlrotor.com/smooth_curve_bezier_example_file.zip
Knitting these together, you get the following results for the intersection of your given curves:
for the straight line intersection:
(x = -1,02914127711195 / y = 23,2340949174492)
for the smooth line intersection:
(x = -1,02947493047196 / y = 23,2370611219553)
For a full automation of your task you would need to add more details regarding the needed accuracy and what details you need for further processing (and that is actually beyond the scope of this website ;-).
[Screenshots omitted: intersection of the straight lines, intersection of the smoothed lines, and comparison charts.]
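For readers who want the straight-line ("§ 27") part without the Excel internals, the computation is small enough to sketch: the standard segment-segment test applied to every pair of segments of the two point-modeled curves (Python for compactness; the same arithmetic ports directly to VBA or worksheet formulas):

```python
def seg_intersect(p1, p2, p3, p4):
    # Intersection point of segments p1-p2 and p3-p4, or None.
    d1x, d1y = p2[0] - p1[0], p2[1] - p1[1]
    d2x, d2y = p4[0] - p3[0], p4[1] - p3[1]
    den = d1x * d2y - d1y * d2x
    if den == 0:
        return None                          # parallel segments
    t = ((p3[0] - p1[0]) * d2y - (p3[1] - p1[1]) * d2x) / den
    u = ((p3[0] - p1[0]) * d1y - (p3[1] - p1[1]) * d1x) / den
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (p1[0] + t * d1x, p1[1] + t * d1y)
    return None

def polyline_intersections(a, b):
    # All crossings between polylines a and b (lists of (x, y) points).
    return [q for i in range(len(a) - 1) for j in range(len(b) - 1)
            if (q := seg_intersect(a[i], a[i + 1], b[j], b[j + 1])) is not None]
```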
Thank you very much for the answer; you captured my goal perfectly.
Your solution (for the smoothed lines) is very, very close to what I determine in Siemens NX.
I'm going to read the documentation at the provided link https://pomax.github.io/bezierinfo/ in order to better understand the math behind this argument.
So, to summarize my request: you were able to find the coordinates (x, y) of the intersection point between two curves with very good precision, without going through an advanced CAD system.
I am starting to study now, best regards!
kalo86

Choosing step for Bezier curve drawing

A Bezier curve is a parametric curve, meaning that there is a parameter t at which one can evaluate the polynomials in order to find the positions of points lying on the curve.
Polynomials for some common cases can be found at en.wikipedia.org/wiki/B%C3%A9zier_curve#Specific_cases
To draw a Bezier curve on screen, one could evaluate the polynomials from 0 to 1 at ever-increasing t in tiny little steps. However, that would be very wasteful because, in general, the parameter "space" does not correspond to screen "space", i.e. several little steps may fall onto just one pixel.
My question is: how do I find the smallest step that increases the Cartesian distance from the previous point by at least 1 pixel?
To put it another way: I would like to draw a Bezier curve on screen. How do I choose the (uniform) step by which t should grow so that I never draw at one pixel more than once? I don't mind the "holes" when t grows too quickly; I just don't want to redraw already-drawn pixels.
Edit
By "how to find" I mean O(1). Yes, I could use De Casteljau's algorithm but I was hoping there is a way to "guess" the optimal t step quickly.
The comment above (jozxyqk) gives you a hint. I would give it a try with a recursive binary subdivision of the spline drawing.
Let's say you start with a coarse resolution of the parameter space (delta_t = 0.1). That gives you 11 points on the spline curve s: s(t=0), s(t=0.1), ..., s(t=0.9), s(t=1).
Calculate the distance between s(t_i) and s(t_i+1). If it is > 1, then make a binary subdivision between those two points. And so on...
But honestly, I guess it is faster to calculate all points at a higher resolution without any recursive loops or subdivisions, especially if you are using multithreaded programming.
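A small Python sketch of that coarse-pass-then-bisect idea (my own code; the one-pixel threshold and the example control points are arbitrary). Note that it bounds the gap between samples, as the answer above proposes, rather than guaranteeing that no pixel is hit twice:

```python
import math

def bezier_point(b, t):
    # de Casteljau evaluation of a cubic Bezier (b = four (x, y) tuples).
    pts = list(b)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def refine(b, t0, t1, p0, p1, out, max_px=1.0):
    # Binary subdivision: emit points until neighbours are <= max_px apart.
    if math.dist(p0, p1) > max_px:
        tm = (t0 + t1) / 2
        pm = bezier_point(b, tm)
        refine(b, t0, tm, p0, pm, out, max_px)
        refine(b, tm, t1, pm, p1, out, max_px)
    else:
        out.append(p1)

curve = [(0, 0), (40, 120), (160, -40), (200, 80)]   # hypothetical pixel coords
ts = [i / 10 for i in range(11)]                     # coarse pass, delta_t = 0.1
pts = [bezier_point(curve, t) for t in ts]
samples = [pts[0]]
for i in range(10):
    refine(curve, ts[i], ts[i + 1], pts[i], pts[i + 1], samples)
```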

Two Dimensional Curve Approximation

Here is what I want to do (preferably with Matlab):
Basically, I have several traces of cars driving through an intersection. Each one is noisy, so I want to take the mean over all measurements to get a better approximation of the real route. In other words, I am looking for a way to approximate the curve which has the smallest distance to all of the measured traces (in a least-squares sense).
At first glance, this is quite similar to what can be achieved with spap2 from the Curve Fitting Toolbox (there is a good example in the Least-Squares Approximation section here).
But this algorithm has a major drawback: it assumes a function (with exactly one y(x) for every x), whereas what I want is a curve in 2D (which may have several y values for one x). This leads to problems when cars turn left or right by more than 90 degrees.
Furthermore, it uses the vertical offsets and not the perpendicular offsets (according to the definition on Wolfram).
Does anybody have an idea how to solve this problem? I thought of using a B-spline and changing the number of knots and the degree until I reach a certain fitting quality, but I can't find a way to solve this problem analytically or with the functions provided by the Curve Fitting Toolbox. Is there a way to solve this without numerical optimization?
mbeckish is right. In order to get sufficient flexibility in the curve shape, you must use a parametric curve representation (x(t), y(t)) instead of an explicit representation y(x). See Parametric equation.
Given n successive points on the curve, assign them their true time if you know it or just integers 0..n-1 if you don't. Then call spap2 twice with vectors T, X and T, Y instead of X, Y. Now for arbitrary t you get a point (x, y) on the curve.
This won't give you a true least squares solution, but should be good enough for your needs.
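The answer is phrased in terms of Matlab's spap2; for readers without the Curve Fitting Toolbox, the same parametric trick looks like this with Python/scipy, where splprep fits x(u) and y(u) against a shared parameter (the noisy data here is made up):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical noisy trace of a turn; note that x is NOT a function of y here.
t = np.linspace(0, np.pi, 50)
x = np.cos(t) + np.random.normal(0, 0.02, t.size)
y = np.sin(t) + np.random.normal(0, 0.02, t.size)

# Fit smoothing splines x(u), y(u) against a common parameter u.
# s > 0 asks for least-squares smoothing instead of exact interpolation.
tck, u = splprep([x, y], s=0.05)

# Evaluate the fitted curve at as many parameter values as you like.
xs, ys = splev(np.linspace(0, 1, 200), tck)
```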

RayTracing: When to Normalize a vector?

I am rewriting my ray tracer and just trying to better understand certain aspects of it.
I think I have the issue of normals down pat: you multiply them by the inverse of the transpose of the transformation matrix.
What I'm confused about is when I should be normalizing my direction vectors?
I'm following a certain book, and sometimes it explicitly states to normalize a vector, while in other cases it doesn't and I find out later that I needed to.
A normalized vector points in the same direction, just with unit length 1, right? So I'm unclear on when it is necessary.
Thanks
You never need to normalize a vector unless you are working with the angles between vectors, or unless you are rotating a vector.
That's it.
In the former case, all of your trig functions require your vectors to land on a unit circle, which means the vectors are normalized. In the latter case, you are dividing out the magnitude, rotating the vector, making sure it stays a unit, and then multiplying the magnitude back in. Normalization just goes with the territory.
If someone tells you that a coordinate system is defined by n unit vectors, know that i-hat, j-hat, k-hat, and so on can be arbitrary vectors of any length and direction, so long as none of them are parallel. This is the heart of affine transformations.
If someone tries to tell you that the dot product requires normalized vectors, shake your head and smile. The dot product only needs normalized vectors when you are using it to get the angle between two vectors.
But doesn't normalization make the math "simpler"?
Not really -- it adds a magnitude computation and a division. Numbers between 0..1 are no different from numbers between 0..x.
Having said that, you sometimes normalize in order to play well with others. But if you find yourself normalizing vectors as a matter of principle before calling methods, consider using a flag attached to the vector to save yourself a step. Mathematically, it is unimportant, but practically, it can make a huge difference in performance.
So again... it's all about rotating a vector or measuring its angle against another vector. If you aren't doing that, don't waste cycles.
tl;dr: Normalized vectors simplify your math. They also reduce the number of very hard to diagnose visual artifacts in your images.
"A normalized vector points in the same direction, just with unit length 1, right? So I'm unclear on when it is necessary."
You almost always want all vectors in a ray tracer to be normalized.
The simplest example is that of the intersection test: where does a bouncing ray hit another object.
Consider a ray where:
p(t) = p_0 + v * t
In this case, a point anywhere along that ray p(t) is defined as an offset from the original point p_0 and an offset along a particular direction v. For every increment of parameter t, the resulting p(t) will move another increment of length equal to the length of the vector v.
Remember, you know p_0 and v. When you are trying to find the point where this ray next hits another object, you have to solve for that t. It is obviously more convenient, if not always obviously necessary, to use a normalized vector v in that representation.
However, that same vector v is used in lighting calculations. Imagine that we have another direction vector u that points towards a lighting source. For the purpose of a very simple shading model, we can define the light at a particular point to be the dot product between those two vectors:
L(p) = v · u
Admittedly, this is a very uninteresting reflection model but it captures the high points of the discussion. A spot on a surface is bright if reflection points towards the light and dim if not.
Now, remember that another way of writing this dot product is the product of the magnitudes of the vectors times the cosine of the angle between them:
L(p) = ||v|| ||u|| cos(theta)
If u and v are of unit length (normalized), then the equation evaluates to the cosine of the angle between the two vectors. However, if v is not of unit length, say because you didn't bother to normalize after reflecting the vector in the ray model above, your lighting model now has a problem: spots on the surface computed with a longer v will be much brighter than they should be.
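A two-line numeric check of that pitfall (with made-up vectors):

```python
import numpy as np

u = np.array([0.0, 1.0, 0.0])              # unit vector toward the light
v = np.array([0.0, 2.0, 0.0])              # reflected direction, NOT normalized

print(np.dot(v, u))                        # 2.0: twice as bright as it should be
print(np.dot(v / np.linalg.norm(v), u))    # 1.0: the correct cosine term
```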
It is necessary to normalize a direction vector whenever you use it in some math that is influenced by its length.
The prime example is the dot product, which is used in most lighting equations. You also sometimes need to normalize vectors that you use in lighting calculations, even if you believe they are already unit length.
For example, when using an interpolated normal on a triangle: common sense tells you that since the normals at the vertices are unit length, the vectors you get by interpolating are too. So much for common sense... the truth is that they will be shorter unless they incidentally all point in the same direction. This means you will shade the triangle too dark (to make matters worse, the effect is more pronounced the closer the light source gets to the surface, which is a... very funny result).
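You can verify the shortening with a quick numeric check (made-up vertex normals):

```python
import numpy as np

n0 = np.array([0.0, 1.0, 0.0])       # unit normal at one vertex
n1 = np.array([1.0, 0.0, 0.0])       # unit normal at another vertex
mid = 0.5 * n0 + 0.5 * n1            # linearly interpolated normal between them

print(np.linalg.norm(mid))           # ~0.707, not 1: the shading comes out too dark
mid /= np.linalg.norm(mid)           # so re-normalize before lighting
```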
Another example where a vector may or may not be normalized is the cross product, depending on what you are doing. For example, when using two cross products to build an orthonormal basis, you must normalize at least once (though if you do it naively, you end up doing it more often).
If you only care about the direction of the resulting "up vector", or about the sign, you don't need to normalize.
I'll answer the opposite question: when do you NOT need to normalize? Almost all calculations related to lighting require unit vectors -- the dot product then gives you the cosine of the angle between the vectors, which is really useful. Some equations can still cope, but they become more complex (essentially doing the normalization inside the equation). That leaves mostly intersection tests.
Equations for many intersection tests can be simplified if you have unit vectors. Some do not require it -- for example, if you have a plane equation (with a unit normal) you can find the ray-plane intersection without normalizing the ray direction vector; the distance will then be in units of the ray direction vector's length. This might be OK if all you want is to intersect a bunch of those planes (the relative distances will all be correct). But as soon as you compare with a different distance -- one calculated using a normalized ray direction -- the distance values will not compare properly.
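A short sketch of that plane case (hypothetical names; the plane is given as n · x = d with n of unit length):

```python
import numpy as np

def ray_plane_t(o, v, n, d):
    # Parameter t such that o + t*v lies on the plane n . x = d.
    denom = np.dot(n, v)
    if abs(denom) < 1e-12:
        return None                  # ray parallel to the plane
    return (d - np.dot(n, o)) / denom

o = np.array([0.0, 0.0, 0.0])        # ray origin
v = np.array([0.0, 2.0, 0.0])        # direction, deliberately NOT unit length
n = np.array([0.0, 1.0, 0.0])        # unit plane normal; the plane is y = 5

t = ray_plane_t(o, v, n, 5.0)
print(t, o + t * v)                  # t = 2.5: the hit point (0, 5, 0) is right,
                                     # but t is not the world-space distance (5.0)
```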
You might think about normalizing a direction vector AFTER doing some work that does not require it - maybe you have an acceleration structure that can be traversed without a normalized vector. But that isn't relevant either because eventually the ray will hit something and you're going to want to do a lighting/shading calculation with it. So you may as well normalize them from the start...
In other words, any specific calculation may not require a normalized direction vector, but a given direction vector will almost certainly need to be normalized at some point in the process.
Vectors are used to store two conceptually different elements: points in space and directions:
If you are storing a point in space (for example the position of the camera, the origin of the ray, the vertices of triangles) you don't want to normalize, because you would be modifying the value of the vector, and losing the specific position.
If you are storing a direction (for example the camera's up vector, a ray direction, or an object normal), you want to normalize, because in this case you are interested not in the specific value of the point but in the direction it represents, so you don't need the magnitude. Normalization is useful here because it simplifies some operations, such as calculating the cosine of the angle between two vectors, which can be done with a dot product if both are normalized.
