Fit rectangle of given aspect ratio around a set of points - geometry

Given a set of points in 2D space, how would I find a smallest (rotated) rectangle of a given aspect ratio that encloses the points?
I found several questions about fitting arbitrary rectangles (by looking at the angle of the edges of the convex hull), but that approach does not seem to carry over easily.
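For reference, here is a minimal brute-force sketch (Python, assuming NumPy) of one way to attack it: sweep candidate orientations, take the rotated bounding box at each angle, grow it to the requested aspect ratio, and keep the smallest candidate. It only approximates the optimum up to the angular resolution of the sweep; the function name and sampling density are illustrative, not a known exact algorithm.

    import numpy as np

    def fit_fixed_aspect_rect(points, aspect, n_angles=720):
        # Sweep candidate orientations; at each angle take the axis-aligned
        # bounding box of the rotated points and grow it to the requested
        # aspect ratio (width / height), keeping the smallest-area candidate.
        pts = np.asarray(points, dtype=float)
        best = None
        for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
            c, s = np.cos(theta), np.sin(theta)
            rot = pts @ np.array([[c, -s], [s, c]])   # coordinates in the rotated frame
            w = rot[:, 0].max() - rot[:, 0].min()
            h = rot[:, 1].max() - rot[:, 1].min()
            W = max(w, aspect * h)                    # grow the box to the fixed aspect ratio
            H = W / aspect
            if best is None or W * H < best[0]:
                best = (W * H, theta, W, H)
        return best                                   # (area, angle, width, height)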

Related

Sphere and nonuniform object intersection

I have two objects: a sphere and an object created using surface reconstruction, so we do not know the equation of the object. I want to know the intersection points on the sphere when the object and the sphere intersect. If we had a sphere and a cylinder, we could solve the equations and figure out the area and all that, but the problem here is that the object is not uniform.
Is there a way to find out the intersecting points or area on the sphere?
I'd start by finding the intersection of triangles with the sphere. First find the intersection of each triangle's plane and the sphere, which gives a circle. Then find the circle's intersection(s) with the triangle edges in 2D using line/circle tests. The result will be many arcs, which I guess you could approximate with lines. I'm not really sure where to go from here without knowing the end goal.
If it's surface area you're after, maybe a numerical approach would be better. I'd cover the sphere in points and count the number inside the non-uniform object. To find if a point is inside, maybe trace outwards and count the intersections with the surface (if it's odd, the point is inside). You could use the stencil buffer for this if you wanted (similar to stencil shadows).
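A minimal sketch of that numerical idea, assuming NumPy and that the reconstructed object is available as a plain list of triangle vertex triples; the function names and sample count are illustrative:

    import numpy as np

    def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
        # Moller-Trumbore ray/triangle intersection; True if the ray hits with t > eps
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(d, e2)
        det = e1.dot(p)
        if abs(det) < eps:
            return False                      # ray parallel to the triangle plane
        inv = 1.0 / det
        s = orig - v0
        u = s.dot(p) * inv
        if u < 0.0 or u > 1.0:
            return False
        q = np.cross(s, e1)
        v = d.dot(q) * inv
        if v < 0.0 or u + v > 1.0:
            return False
        return e2.dot(q) * inv > eps          # hit must be in front of the ray origin

    def sphere_area_inside(center, radius, triangles, n_samples=20000, seed=None):
        # Sample points uniformly on the sphere; a point is inside the mesh if a
        # radial ray outward from it crosses the mesh an odd number of times.
        rng = np.random.default_rng(seed)
        dirs = rng.normal(size=(n_samples, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        inside = 0
        for d in dirs:
            p = np.asarray(center) + radius * d
            hits = sum(ray_triangle(p, d, *tri) for tri in triangles)
            inside += hits % 2
        return 4.0 * np.pi * radius ** 2 * inside / n_samples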
If you want the volume of intersection, a quick Google search gives "carve", a mesh-based CSG library.
Starting with triangle-versus-sphere tests will give you the points of intersection.
You can take the arcs of intersection with each surface and combine them to make fences around the sphere. Ideally your reconstructed object will be in winged-edge format so you could just step from one fence segment to the next, but with reconstructed surfaces I guess you might need to apply some slightly fuzzy logic.
You can determine which side of each fence is inside the reconstructed object and which side is out by factoring in the surface normals along the fence.
You can then cut the sphere along the fences and add the internal bits to the display.
For the other side of things you could remove any triangle completely inside the sphere and cut those that intersect.

Fit maximum convex hull to interior of a set of points

I'd like to find the largest convex hull which fits in the interior of a set of points. I have a set of points which are roughly circular, with a large number of outlier points outside of the circle I'd like to fit. Imagine a circle with "solar flares"... I want to fit the circle and completely ignore the flares. I've tried various fit and culling strategies, but they aren't working well.
I've searched quite a bit and not found a solution. Thanks in advance.
The notion you need may be alpha shapes. The convex hull is the alpha shape obtained for an extreme value of alpha; for smaller values of alpha, the alpha shape fits the point set more tightly than the convex hull does.
The theory was developed by Edelsbrunner. This is a good start: http://www.mpi-inf.mpg.de/~jgiesen/tch/sem06/Celikik.pdf
For the computation, you must compute the Delaunay triangulation and/or the Voronoi diagram, then select the simplices that satisfy a condition on alpha.
Example alpha shape:
This is in fact a concave hull, and it may disregard outliers.
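For the computation described above, a rough sketch using SciPy's Delaunay triangulation: keep only the triangles whose circumradius is below a cutoff (conventions differ on whether that cutoff is alpha or 1/alpha, so the parameter below is deliberately a plain radius):

    import numpy as np
    from scipy.spatial import Delaunay

    def alpha_shape_triangles(points, radius_cutoff):
        # Keep Delaunay triangles whose circumradius is below the cutoff;
        # their union is the alpha shape, a concave hull that can drop the
        # outlying "flares" while following the roughly circular core.
        pts = np.asarray(points, dtype=float)
        tri = Delaunay(pts)
        kept = []
        for ia, ib, ic in tri.simplices:
            a, b, c = pts[ia], pts[ib], pts[ic]
            la = np.linalg.norm(b - c)
            lb = np.linalg.norm(a - c)
            lc = np.linalg.norm(a - b)
            area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
            if area < 1e-12:
                continue                       # skip degenerate slivers
            if la * lb * lc / (4.0 * area) < radius_cutoff:   # circumradius = abc / (4 * area)
                kept.append((int(ia), int(ib), int(ic)))
        return kept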

Pixel overlap with polygon: efficient (scanline-type) algorithm

Problem specification:
I have a rectangular and uniformly spaced image of pixels with vertex coordinates (i,j), (i+1,j), (i, j+1), (i+1, j+1) [i=0,...,m-1; j=0,...,n-1] and a polygon P with vertex coordinates (x_1,y_1), ..., (x_k, y_k). Now I want to efficiently compute the percentage of every pixel overlapping with P. P can be non-convex, or even self-intersecting.
Essentially, this is a "soft" generalization of the scan-line rasterization algorithms which check efficiently if the pixel centers lie inside / outside the polygon.
I can think of the following approaches:
(1) Upsample the image (e.g. by a factor 10*10), count how many subpixel centers lie inside the polygon, and divide by 100. Problems: time efficiency, memory efficiency, accuracy.
(2) Use the scan-line algorithm on a slightly bigger and by (0.5,0.5) translated grid to compute the pixels that lie fully inside / outside, create a list of "borderline" pixels, walk counter-clockwise along the edges and compute the intersection areas with all pixels along the way. Problems: requires subtle coding, easy to introduce bugs.
My question: Has anybody already encountered this problem, and do you know of a third, superior approach? And if not, have you had better experiences with (1) or with (2)? I assume this problem may arise in the context of antialiasing.
Doing the exact geometric analysis might not be too difficult.
Deal with those pixels that are partially covered by the polygon first: you can use a technique from ray-tracing to quickly find all pixels that intersect with the polygon edges. You can then use the Cohen-Sutherland algorithm to efficiently find the points of intersection between the edge and the pixel, and hence you can compute the area of coverage for that pixel.
Note that you can avoid one of the two clipping operations involved in Cohen-Sutherland as adjacent pixels will share a segment intersection point. For instance - if you have two adjacent pixels, A and B that intersect with a segment p->q at points a1, a2, b1 and b2, then a2 and b1 will be the same. Passing the segment a2->q into the routine when clipping against B should avoid repeating work.
You'll have to treat the pixels that contain the polygon vertices specially, but again it shouldn't be too tricky: Cohen-Sutherland will help here as well.
Self-intersecting polygons will also throw up some special cases to handle - pixels that intersect with two or more edges. I can easily imagine that handling these exactly in all cases might get tricky, so I'd be tempted to just do the upsampling approach here.
Once these edge pixels have been identified, you can do the standard scan-line thing to fill in the polygon's interior pixels.
edit: Actually, now that I think more about it, you can totally skip the Cohen-Sutherland step. The algorithm in the linked paper can be easily extended to return the intersection points between the segment and the pixel grid. The segment will leave a given pixel at min( tMaxX, tMaxY ). Keep track of the last exit point to re-use as the entry point for the next pixel.
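A rough sketch of that traversal (Python; the unit pixel grid and the function name are assumptions): it walks a segment across the grid and hands back, for each pixel crossed, the parametric entry and exit values, so the clipped sub-segment endpoints are simply p + t*(q - p).

    import math

    def pixels_crossed(p, q):
        # Walk segment p->q across the unit pixel grid (Amanatides/Woo style)
        # and yield (ix, iy, t_enter, t_exit) for every pixel it crosses.
        (x0, y0), (x1, y1) = p, q
        dx, dy = x1 - x0, y1 - y0
        ix, iy = int(math.floor(x0)), int(math.floor(y0))
        step_x = 1 if dx > 0 else -1
        step_y = 1 if dy > 0 else -1
        # parametric t at which the segment crosses the next vertical / horizontal grid line
        t_max_x = ((ix + (step_x > 0)) - x0) / dx if dx != 0 else math.inf
        t_max_y = ((iy + (step_y > 0)) - y0) / dy if dy != 0 else math.inf
        t_dx = abs(1.0 / dx) if dx != 0 else math.inf
        t_dy = abs(1.0 / dy) if dy != 0 else math.inf
        t = 0.0
        while t < 1.0:
            t_exit = min(t_max_x, t_max_y, 1.0)
            yield ix, iy, t, t_exit          # segment occupies [t, t_exit] inside pixel (ix, iy)
            if t_exit >= 1.0:
                break
            if t_max_x < t_max_y:
                ix += step_x
                t_max_x += t_dx
            else:
                iy += step_y
                t_max_y += t_dy
            t = t_exit                        # last exit point becomes the next entry point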
I would do
1a) Upsample when the pixel is partly overlapping:
but not the whole image, only the current pixel to be checked, or all pixels in the current scan line if that helps.
Then there is no memory argument.
Speed? Up to 16x16 sub-samples per pixel, I don't think speed is an issue.
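As a sketch of that per-pixel upsampling (assuming pixel (i, j) spans [i, i+1] x [j, j+1] as in the question, and reusing whatever point-in-polygon test the scan-line code already has):

    def pixel_coverage(i, j, point_in_polygon, k=16):
        # Estimate how much of pixel (i, j) is covered by the polygon
        # by testing a k x k grid of sub-pixel centres.
        inside = 0
        for a in range(k):
            for b in range(k):
                x = i + (a + 0.5) / k      # sub-pixel centre x
                y = j + (b + 0.5) / k      # sub-pixel centre y
                if point_in_polygon(x, y):
                    inside += 1
        return inside / (k * k)

Only the borderline pixels found by the scan-line pass need this; fully interior pixels are 1 and fully exterior pixels are 0.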

Convex hull of (longitude, latitude)-points on the surface of a sphere

Standard convex hull algorithms will not work with (longitude, latitude)-points, because standard algorithms assume you want the hull of a set of Cartesian points. Latitude-longitude points are not Cartesian, because longitude "wraps around" at the anti-meridian (+/- 180 degrees). I.e., two degrees east of longitude 179 is -179.
So if your set of points happens to straddle the anti-meridian, you will compute spurious hulls that stretch all the way around the world incorrectly.
Any suggestions for tricks I could apply with a standard convex hull algorithm to correct for this, or pointers to proper "geospherical" hull algorithms?
Now that I think on it, there are more interesting cases to consider than straddling the anti-meridian. Consider a "band" of points that encircle the earth -- its convex hull would have no east/west bounds. Or even further, what is the convex hull of {(0,0), (0, 90), (0, -90), (90, 0), (-90, 0), (180, 0)}? -- it would seem to contain the entire surface of the earth, so which points are on its perimeter?
Standard convex hull algorithms are not defeated by the wrapping-around of the coordinates on the surface of the Earth but by a more fundamental problem. The surface of a sphere (let's forget the not-quite-sphericity of the Earth) is not a Euclidean space so Euclidean geometry doesn't work, and convex hull routines which assume that the underlying space is Euclidean (show me one which doesn't, please) won't work.
The surface of the sphere conforms to the concepts of an elliptic geometry where lines are great circles and antipodal points are considered the same point. You've already started to experience the issues arising from trying to apply a Euclidean concept of convexity to an elliptic space.
One approach open to you would be to adopt the definitions of geodesic convexity and implement a geodesic convex hull routine. That looks quite hairy. And it may not produce results which conform to your (generally Euclidean) expectations. In many cases, for 3 arbitrary points, the convex hull turns out to be the entire surface of the sphere.
Another approach, one adopted by navigators and cartographers through the ages, would be to project part of the surface of the sphere (a part containing all your points) into Euclidean space (this is the subject of map projections, and I won't bother you with references to the extensive literature thereon) and to figure out the convex hull of the projected points. Project the area you are interested in onto the plane and adjust the coordinates so that they do not wrap around; for example, if you were interested in France you might adjust all longitudes by adding 30 degrees so that the whole country was covered by positive coordinates.
While I'm writing, the idea proposed in Li-aung Yip's answer, of using a 3D convex hull algorithm, strikes me as misguided. The 3D convex hull of the set of surface points will include points, edges and faces which lie inside the sphere. These literally do not exist on the 2D surface of the sphere and only change your difficulties from wrestling with a not-quite-right concept in 2D to a quite-wrong one in 3D. Further, I learned from the Wikipedia article I referenced that a closed hemisphere (i.e. one which includes its 'equator') is not convex in the geometry of the surface of the sphere.
Instead of considering your data as latitude-longitude data, could you instead consider it in 3D space and apply a 3D convex hull algorithm? You may be able to then find the 2D convex hull you desire by analysing the 3D convex hull.
This returns you to well-travelled algorithms for cartesian convex hulls (albeit in three dimensions) and has no issues with wrap around of the coordinates.
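A minimal sketch of that lift-to-3D idea using SciPy (the hemisphere caveat from the previous answer still applies, and only the hull's vertex set is used, not its interior faces or edges):

    import numpy as np
    from scipy.spatial import ConvexHull

    def lonlat_hull_3d(lon_lat_deg):
        # Lift (longitude, latitude) points onto the unit sphere and take the
        # 3D convex hull; needs at least 4 non-coplanar points.
        lon, lat = np.radians(np.asarray(lon_lat_deg, dtype=float)).T
        xyz = np.column_stack([np.cos(lat) * np.cos(lon),
                               np.cos(lat) * np.sin(lon),
                               np.sin(lat)])
        hull = ConvexHull(xyz)
        return hull.vertices            # indices of the input points on the hull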
Alternately, there's this paper: Computing the Convex Hull of a Simple Polygon on the Sphere (1996) which seems to deal with some of the same issues that you're dealing with (coordinate wrap-around, etc.)
If all your points are within a hemisphere (that is, if you can find a cut plane through the center of the Earth that puts them all on one side), then you can do a central a.k.a. gnomic a.k.a. gnomonic projection from the center of the Earth to a plane parallel to the cut plane. Then all great circles become straight lines in the projection, and so a convex hull in the projection will map back to a correct convex hull on the Earth. You can see how wrong lat/lon points are by looking at the latitude lines in the "Gnomonic Projection" section here (notice that the longitude lines remain straight).
(Treating the Earth as a sphere still isn't quite right, but it's a good second approximation. I don't think points on a true least-distance path across a more realistic Earth (say WGS84) generally lie on a plane through the center. Maybe pretending they do gives you a better approximation than what you get with a sphere.)
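A rough sketch of that gnomonic-projection route, assuming NumPy/SciPy; the centre of projection must be supplied (as the reply below notes, choosing it is not trivial), and the tangent-plane basis below additionally assumes the centre is not a pole:

    import numpy as np
    from scipy.spatial import ConvexHull

    def gnomonic_hull(lon_lat_deg, center_lon_lat_deg):
        # Gnomonic (central) projection onto the plane tangent at the chosen centre,
        # then a planar convex hull; valid only if every point lies strictly inside
        # the open hemisphere around the centre.
        def to_xyz(lon, lat):
            return np.column_stack([np.cos(lat) * np.cos(lon),
                                    np.cos(lat) * np.sin(lon),
                                    np.sin(lat)])
        lon, lat = np.radians(np.asarray(lon_lat_deg, dtype=float)).T
        p = to_xyz(lon, lat)
        c = to_xyz(*np.radians(np.asarray(center_lon_lat_deg, dtype=float)))[0]
        east = np.cross([0.0, 0.0, 1.0], c)       # local east/north basis of the tangent plane
        east /= np.linalg.norm(east)
        north = np.cross(c, east)
        d = p @ c
        if np.any(d <= 0):
            raise ValueError("points must lie in the open hemisphere around the centre")
        proj = np.column_stack([(p @ east) / d, (p @ north) / d])   # central projection
        return ConvexHull(proj).vertices          # indices of the hull points (great-circle hull)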
FutureNerd:
You are absolutely correct. I had to solve the exact same problem as Maxy-B for my application. As a first iteration, I just treated (lng,lat) as (x,y) and ran a standard 2D algorithm. This worked fine as long as nobody looked too close, because all my data were in the contiguous U.S. As a second iteration, though, I used your approach and proved the concept.
The points MUST be in the same hemisphere. As it turns out, choosing this hemisphere is non-trivial (it's not just the points' center, as I had initially guessed). To illustrate, consider the following four points: (0,0), (-60,0), (+60,0) along the equator, and (0,90), the North Pole. However you choose to define "center", their center lies on the North Pole by symmetry, and all four points are in the Northern Hemisphere. However, consider replacing the fourth point with, say, (-19, 64), Iceland. Now their center is NOT on the North Pole but drawn asymmetrically toward Iceland, yet all four points are still in the Northern Hemisphere. Further, the Northern Hemisphere, as uniquely defined by the North Pole, is the ONLY hemisphere they share. So calculating this "pole" becomes algorithmic, not algebraic.
See my repository for the Python code:
https://github.com/VictorDavis/GeoConvexHull
This question has been answered a while ago, but I would like to sum up the results of my research.
The spherical convex hull is basically defined only for non-antipodal points. Supposing all the points are on the same hemisphere, you can compute their convex hull in two main ways:
Project the points to a plane using gnomonic/central projection and apply a planar convex hull algorithm. See Lin-Lin Chen, T. C. Woo, "Computational Geometry on the Sphere With Application to Automated Machining" (1992). If the points are on a known hemisphere, you can hard-code the plane onto which to project the points.
Adapt planar convex hull algorithms to the sphere. See C. Grima and A. Marquez, "Computational Geometry on Surfaces: Performing Computational Geometry on the Cylinder, the Sphere, the Torus, and the Cone", Springer (2002). This reference seems to give a similar method to the abstract referenced by Li-aung Yip above.
For reference, in Python I am working on an implementation of my own, which currently works only for points on the Northern hemisphere.
See also this question on Math Overflow.
All edges of a spherical convex hull can be viewed/treated as great circles (similarly, all edges of a convex hull in Euclidean space can be treated as lines rather than line segments). Each one of these great circles cuts the sphere into two hemispheres. You could thus conceive of each great circle as a constraint. A point that is within the convex hull will be on each of the hemispheres defined by each constraint.
Each edge of the original polygon is a candidate edge of the convex hull. To verify whether it is indeed an edge of the convex hull, you simply need to verify whether all nodes of the polygon are on the hemisphere defined by the great circle that goes through the two nodes of the edge in question. However, we'd still need to create new edges that bypass the concave nodes of the polygon.
But let's rather shortcut / brute-force this:
Draw a great circle between every pair of nodes in the polygon. Do this in both directions (i.e. the great circle connecting A to B and the great circle connecting B to A). For a polygon with N nodes, you will thus end up with N^2 great circles. Each one of these great circles is a candidate constraint (i.e. a candidate edge of the convex polygon). Some of these great circles will overlap with the edges of the original polygon, but most won't. Now, remember again: each great circle is a constraint that constrains the sphere to one hemisphere. Now verify if all nodes of the original polygon satisfy the constraint (i.e. if all nodes are on the hemisphere defined by the great circle). If yes, then this great circle is an edge of the convex hull. If, however, a single node of the original polygon does not satisfy the constraint, then it isn't, and you can discard this great circle.
The beauty of this is that once you have converted your latitudes and longitudes into Cartesian vectors pointing onto the unit sphere, it really just requires dot products and cross products:
- You find the great circle that passes through two points on a sphere from their cross product.
- A point is on the hemisphere defined by a great circle if the dot product of the great circle's normal and the point is greater than (or equal to) 0.
So even for polygons with a large number of edges, this brute force method should work just fine.
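A compact sketch of that brute-force test, using only cross and dot products as described (input as (longitude, latitude) in degrees; the tolerance is an assumption):

    import numpy as np

    def spherical_hull_edges(lon_lat_deg, tol=1e-12):
        # For every ordered pair (i, j), the great circle through them (normal = v_i x v_j)
        # is a hull edge iff every point has a non-negative dot product with that normal.
        lon, lat = np.radians(np.asarray(lon_lat_deg, dtype=float)).T
        v = np.column_stack([np.cos(lat) * np.cos(lon),
                             np.cos(lat) * np.sin(lon),
                             np.sin(lat)])
        n = len(v)
        edges = []
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                normal = np.cross(v[i], v[j])       # defines the great circle through i and j
                if np.linalg.norm(normal) < tol:
                    continue                        # identical or antipodal points: skip
                if np.all(v @ normal >= -tol):      # are all points on the same hemisphere?
                    edges.append((i, j))
        return edges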

Convert quadratic curve to cubic curve

Looking at Convert a quadratic bezier to a cubic?, I can finally understand why programming teachers always told me that math was so important. Sadly, I didn't listen.
Can anyone provide a more concrete - e.g., computer-language-y - formula for converting a quadratic curve to a cubic? Understanding that there's some rounding errors possible, which is fine.
Given a quad curve represented by variables:
StartX, StartY
ControlX, ControlY
EndX, EndY
And desiring StartX, StartY and EndX, EndY to remain the same, but to now have Control1X, Control1Y and Control2X, Control2Y of a cubic curve.
Is it...
Control1X = StartX + (.66 * (ControlX - StartX))
Control2X = EndX + (.66 * (ControlX - EndX))
With the same essential functions used to calculate Control1Y and Control2Y?
Your code is right except that you should use 2.0/3.0 instead of 0.66.
You avoid most rounding errors by using
Control1 = (Start + 2 * Control) / 3
Control2 = (End + 2 * Control) / 3
Note that line segments are also convertible to cubic Bezier curves using:
Control1 = Start
Control2 = End
This can be handy when converting a complex path mixing various types of curves (linear, quadratic, cubic).
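The formulas above translate directly into code; a small sketch with the names from the question (points as (x, y) tuples):

    def quad_to_cubic(start, control, end):
        # Degree-elevate a quadratic Bezier to a cubic with the same shape:
        # each new control point is two thirds of the way from an endpoint
        # toward the old control point.
        (sx, sy), (cx, cy), (ex, ey) = start, control, end
        c1 = ((sx + 2.0 * cx) / 3.0, (sy + 2.0 * cy) / 3.0)
        c2 = ((ex + 2.0 * cx) / 3.0, (ey + 2.0 * cy) / 3.0)
        return start, c1, c2, end

If you feed in a line segment as a quadratic with its control point at the midpoint, this reproduces the usual degree-elevated line, with control points one third and two thirds of the way along the segment.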
There's also a basic transform for converting elliptic arcs to cubics (with minor, barely noticeable errors): you just have to split the arc at least at the elliptic quadrants (cutting the ellipse first on its two orthogonal axes of symmetry, or on arbitrary orthogonal axes passing through the center if the ellipse is a circle, and then representing each arc; when the ellipse is a circle, the two focal points coincide at the center of the circle, so you can use any direction for one of the orthogonal axes).
Many SVG renderers do that with an additional split on octants, so that you get precise positions not only at the points where the two main axes cross the curve, but also at the two diagonal axes which bisect each quadrant (when the ellipse is a circle; when it is not a circle, treat it as a circle flattened by a linear transform along the minor axis only and do the same computation). Octant points are also quite precisely positioned:
cos(pi/4) = sin(pi/4) = sqrt(2)/2 ≈ 0.71, and this additional split allows precise rendering of the tangents at the points where the curve crosses the diagonals at 45 degrees of the circle.
A full ellipse is then converted to 8 cubic arcs (i.e. 8 points on the ellipse and 16 control points); you will hardly notice any difference between the elliptical arcs and the generated cubic arcs.
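As a sketch of that split for the circle case (an ellipse then follows by scaling one axis, as described above): each arc of angle theta gets its control points at distance k = 4/3 * tan(theta/4) along the endpoint tangents. The function name and argument layout are illustrative:

    import math

    def circle_to_cubics(cx, cy, r, n_arcs=8):
        # Approximate a full circle by n_arcs cubic Bezier arcs (8 = one per octant).
        theta = 2.0 * math.pi / n_arcs
        k = 4.0 / 3.0 * math.tan(theta / 4.0)     # standard control-point distance
        arcs = []
        for i in range(n_arcs):
            a0, a1 = i * theta, (i + 1) * theta
            p0 = (cx + r * math.cos(a0), cy + r * math.sin(a0))
            p3 = (cx + r * math.cos(a1), cy + r * math.sin(a1))
            # control points lie along the tangents at the arc's endpoints
            c1 = (p0[0] - k * r * math.sin(a0), p0[1] + k * r * math.cos(a0))
            c2 = (p3[0] + k * r * math.sin(a1), p3[1] - k * r * math.cos(a1))
            arcs.append((p0, c1, c2, p3))
        return arcs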
You can reuse the same "flattening error" idea that is used when splitting a Bezier into a list of linear segments, which are then drawn using the classic fast Bresenham algorithm for line segments. A flattening criterion only has to measure, for any point of the generated cubic arcs, how far the sum of the distances from that point to the two focal points of the ellipse deviates from the constant value this sum has on a true ellipse. If you make this measurement on the generated control points for the cubic arcs, the difference should stay below a given percentage of the expected sum, or within an absolute distance tolerance, and it can be used to refine the control points with a simple linear formula so that the added points lie on the real ellipse.
Such a transform of arbitrary paths is useful when you want to derive other curves from the path, notably "buffer" curves at a given distance, and especially when those paths must be converted to "strokes" with a defined "stroke width": you need to compute the "inner" and "outer" curves, then concentrate on converting the miter/butt/square/rounded corners, and finally cut long miters at a convenient distance (matching the "miter limit" factor times the "stroke width").
More advanced renderers will also use miters represented by tangent circles when there's a corner between two arcs instead of two segments (this is useful for drawing cute geographic maps)...
Converting an arbitrary path mixing segments, elliptic arcs and Bezier arcs to cubic arcs only is a necessary step to compute precise images without excessive defects becoming visible when zooming in. It is also needed when your "stroke" buffers have to take on effects (such as computing dashes), and when the result is enhanced with semi-transparent pixels or subpixels to smooth the rendered strokes (smoothing is easy to compute only once everything has been flattened to line segments, and it may also be simpler to develop if it only has to manage paths containing cubic Beziers: it can easily be parallelized if needed and accelerated by hardware). Bezier arcs are always interesting because drawing them is fast and requires only basic arithmetic, and the time needed to draw them is proportional to the length of the curve, with every point drawn at the same accuracy level.
In summary, all curves can be represented by cubic Bezier arcs within a maximum allowed deviation (you can set this maximum deviation to half a pixel, or to one subpixel if you first scale up the measurement grid for half-toning or subpixel shading), which lets you represent every curve accurately with reasonably fast rendering and get accurate results at any zoom level, with curves smoothed everywhere, including with half-toning or transparency levels when finally drawing the linear strokes with the classic Bresenham algorithm using fast integer-only arithmetic. The rendered curves will all have the correct tangents everywhere, without unexpected angles visible at the approximation points, and the remaining control points of the approximation will also give a smooth rendering of the curvature everywhere (i.e. of the radius of the tangent circle), so you can use this approximation as well to derive other measurements such as acceleration, inertial forces, or magnetic effects on paths of charged particles.
If you ever need higher precision, use degree-4 Bezier arcs (i.e. with 3 control points between the points on the curve) to get smooth derivatives of one extra degree (e.g. gradients of forces), or just split the cubic arcs into additional steps until the derivative is smooth enough (but degree-4 Bezier arcs require far fewer points on the curve and fewer control points for the same accuracy tolerances than cubic Beziers alone).

Resources