I am currently experimenting with OpenGL, and I'm drawing a lot of circles that I have to break down into triangles (triangulate a circle).
I've been calculating the vertices of the triangles by incrementing an angle and using cos() and sin() to get the x and y values for each vertex.
I searched a bit on the internet for the best and most efficient way of doing this, and even though there's not much information available, I realized that thin and long triangles (my approach) are not very good. A better approach would be to start with an equilateral triangle and then repeatedly add triangles that cover the largest possible area that's not yet covered.
left - my method; right - new method
I am wondering if this is the most efficient way of doing this, and if so, how it would be implemented in actual code.
The website where I found the method: link
Both triangulations have their pros and cons.
The triangle fan has equally sized triangles, which sometimes looks better with textures (and other interpolated stuff), and the code to generate it is a simple for loop with the parametric circle equation.
The increasing-detail mesh has fewer triangles and can easily support LOD, which might be faster. However, the number of points is not arbitrary (3, 6, 12, 24, 48, ...). The code is slightly more complicated; you could:
1. start with an equilateral triangle, remembering the circumference edges
so add triangle (p0,p1,p2) to the mesh and edges (p0,p1),(p1,p2),(p2,p0) to the circumference.
2. for each edge (p0,p1) of the circumference compute:
p2 = 0.5*(p0+p1); // mid point
p2 = r*p2/|p2|; // normalize it to the circle circumference, assuming (0,0) is the center
then add triangle (p0,p1,p2) to the mesh and replace the (p0,p1) edge with the (p0,p2),(p2,p1) edges.
Note that r/|p2| will be the same for all edges at the current detail level, so there is no need to compute the expensive |p2| over and over again.
3. goto #2 until the triangulation is dense enough
So: 2 for loops and a few dynamic lists (points, triangles, circumference edges, and some temps if not doing this in place). Also, this method does not need goniometric functions (trigonometry) at all, and it can be modified to generate the triangle fan too.
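For illustration, a minimal Python sketch of the loop above (my own code, not from the linked page), assuming a circle of radius r centered at the origin:

import math

def triangulate_circle(r, levels):
    # initial equilateral triangle on the circle (the only place trig is used;
    # the three points could just as well be hard-coded)
    pts = [(r * math.cos(a), r * math.sin(a))
           for a in (0.0, 2*math.pi/3, 4*math.pi/3)]
    tris = [(0, 1, 2)]
    edges = [(0, 1), (1, 2), (2, 0)]          # circumference edges

    for _ in range(levels):
        new_edges = []
        for i0, i1 in edges:
            # mid point of the edge, pushed out to the circle
            x = (pts[i0][0] + pts[i1][0]) * 0.5
            y = (pts[i0][1] + pts[i1][1]) * 0.5
            s = r / math.hypot(x, y)          # same for every edge of this level, could be hoisted
            pts.append((x * s, y * s))
            i2 = len(pts) - 1
            tris.append((i0, i1, i2))
            new_edges += [(i0, i2), (i2, i1)] # the new edge replaces the old one
        edges = new_edges
    return pts, tris

This gives 3, 6, 12, 24, ... points for levels = 0, 1, 2, 3, ... as described above.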
Here is similar stuff:
sphere triangulation using similar technique
My app captures the shape of a room by having the user point a camera at floor corners, and then doing a bunch of math, eventually ending up with a polygon.
The assumption is that the walls are straight (not curved). The majority of the corners are formed by walls at right angles to each other, but in some cases they might not be.
Depending on how accurately the user points the camera, the (x,y) coordinates I derive for a corner might be beyond the actual corner, or in front of the actual corner, or, less likely, to the left or right. Obviously, in this case, when I connect the dots, I get weird parallelogram or rhomboid shapes. See example.
I am looking for a program or algorithm to normalize or regularize these shapes, provided we know which corners are supposed to be right angles.
My initial attempt involved finding segments whose angles were "close" to each other, adjusting them all to the same angle, and then recalculating the vertices. However, this algorithm proved to be unstable.
My current thinking is to find the angles which are most obtuse (as would be caused by a point mistakenly placed beyond the actual corner) or most acute (as would be caused by a point mistakenly placed in front of the actual corner), and find the corner point which would make it a right angle. The problem, however, is that such an adjustment could have side effects on other corners, such as pushing them even further away from right angles. I sense I need some kind of algorithm which takes all the information and optimizes/solves it at once (is this a kind of linear programming problem?), but I am stuck.
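For what it's worth, one way to realize the "solve it all at once" idea is as nonlinear least squares rather than linear programming (the right-angle condition is nonlinear in the coordinates). A hedged sketch, assuming SciPy is available; all names here are illustrative, not an established solution:

import numpy as np
from scipy.optimize import least_squares

def regularize(measured, right_angle_idx, angle_weight=5.0):
    """measured: (N, 2) estimated corner coordinates, in polygon order.
    right_angle_idx: indices of corners that should be right angles."""
    measured = np.asarray(measured, dtype=float)
    n = len(measured)

    def residuals(flat):
        pts = flat.reshape(n, 2)
        res = list((pts - measured).ravel())       # stay close to the measurements
        for i in right_angle_idx:                  # softly enforce the right angles
            a, b, c = pts[i - 1], pts[i], pts[(i + 1) % n]
            u, v = a - b, c - b
            cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            res.append(angle_weight * cos_angle)   # cos(90 deg) = 0
        return np.array(res)

    fit = least_squares(residuals, measured.ravel())
    return fit.x.reshape(n, 2)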
There is not a unique solution.
For example, take the perpendicular from the middle point of an edge to the two neighboring edges. This will give you two new corners.
Or take the perpendicular from the end point of an edge to other edges.
Or compute the average of angles in the end points of an edge. Use this average and the middle point of the edge to compute new corners.
Or...
To get the most faithful result, capture (or calculate) the distances from each corner to the other three. Build triangles with those distances. Then use the average of the coordinates you compute for a corner from 2 or 3 triangles.
The resulting angles will not be exactly 90 degrees, but the polygon will represent the room fairly well.
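For illustration, one way to "build triangles with those distances": intersecting two circles centered at already-placed corners, with radii equal to the measured distances to a third corner, gives (up to) two candidate positions for that corner; you would then average the candidates obtained from different corner pairs. A rough sketch in plain Python (hypothetical helper):

import math

def circle_intersections(a, b, ra, rb):
    """Intersect the circle centered at a (radius ra) with the circle
    centered at b (radius rb). Returns 0, 1 or 2 candidate points."""
    ax, ay = a
    bx, by = b
    d = math.hypot(bx - ax, by - ay)
    if d == 0 or d > ra + rb or d < abs(ra - rb):
        return []                          # no (or infinitely many) intersections
    x = (d*d + ra*ra - rb*rb) / (2*d)      # distance from a to the chord's foot
    h = math.sqrt(max(ra*ra - x*x, 0.0))   # half chord length
    ux, uy = (bx - ax) / d, (by - ay) / d  # unit vector a -> b
    px, py = ax + x*ux, ay + x*uy          # foot of the chord
    if h == 0:
        return [(px, py)]
    return [(px - h*uy, py + h*ux), (px + h*uy, py - h*ux)]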
I am looking for an algorithm for the following problem:
Given:
A 3D triangle mesh. The mesh represents a part of the surface of the earth.
A polyline (a connected series of line segments) whose vertices are always on an edge or on a vertex of a triangle of the mesh. The polyline represents the centerline of a road on the surface of the earth.
I need to calculate and display the road, i.e. add half of the road's width on each side of the centerline, calculate the resulting vertices in the corresponding triangles of the mesh, fill the area of the road, and outline the sides of the road.
What is the simplest and/or most effective strategy to do this? How do I store the data of the road most efficiently?
I see 2 options here:
1. render a thick polyline with a road texture
While rendering the polyline you need a TBN matrix, so use:
the polyline tangent as the tangent
the surface normal as the normal
binormal = tangent x normal
Then shift the actual point position p to
p0=p+d*binormal
p1=p-d*binormal
and render the textured line (p0,p1). This approach is not a precise match to the surface mesh, so you need to disable depth testing or use some sort of blending. Also, on sharp turns it could miss some parts of the curve (in that case you can render a rectangle or disc instead of a line).
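As a hedged sketch of that shift in Python/NumPy (p is a polyline vertex, tangent and normal are the unit tangent and surface normal at p, d is half the road width; names are illustrative):

import numpy as np

def road_edge_points(p, tangent, normal, d):
    binormal = np.cross(tangent, normal)
    binormal /= np.linalg.norm(binormal)   # re-normalize in case tangent and normal are not exactly perpendicular
    p0 = p + d * binormal
    p1 = p - d * binormal
    return p0, p1                          # the two edge vertices of the textured line/quad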
2. create the mesh by shifting the polyline to the sides by half the road width
This produces a mesh-accurate road fit, but due to your limitations the shape of the road can become very distorted in some cases without re-triangulating the mesh. I see it like this:
for each segment of the road cast 2 lines shifted by half of the road width (green, brown)
find their intersections (aqua dots) with the mesh edge that the current road control point (red dot) lies on
obtain the average point (magenta dot) of the intersections and use it as the road mesh vertex. If one of the points is outside the shared mesh edge, ignore it. If both intersections are outside the shared edge, find the closest intersection with a different edge.
As you can see, this can lead to serious road-thickness distortions in some cases (big differences between the intersection points, or one of the intersection points lying outside a surface mesh edge).
If you need accurate road thickness then use the intersection of the cast lines as the road control point instead. To make this possible either use blending or disable depth testing while rendering, or add this point to the surface mesh by re-triangulating it. Of course such an action will also affect the road mesh, so you need to iterate a few times ...
Another way is to use a blended texture for the road (like sprites) and compute the texture coordinates for the control points. If the road is too thick then thin it by shifting the texture coordinate ... To make this work you need to select the farthest intersection point instead of the average ... Compute the real half-width of the road and from that compute the texture coordinate.
If you get rid of the limitation (for the road mesh) that road vertex points lie on surface mesh segments or vertices, then you can simply use the intersection of the shifted lines alone. That will get rid of the thickness artifacts and simplify things a lot.
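Without that limitation, the border vertices can be computed in a 2D top view by offsetting each segment and intersecting consecutive offset lines (a simple miter join); heights would then be sampled from the terrain afterwards. A rough Python/NumPy sketch with illustrative names:

import numpy as np

def line_intersection(p, d1, q, d2):
    """Intersect the 2D lines p + t*d1 and q + s*d2."""
    denom = d1[0]*d2[1] - d1[1]*d2[0]
    if abs(denom) < 1e-12:                 # parallel segments: just keep the offset point
        return p
    t = ((q[0]-p[0])*d2[1] - (q[1]-p[1])*d2[0]) / denom
    return p + t*d1

def offset_polyline(points, d):
    pts = [np.asarray(p, dtype=float) for p in points]
    dirs = [(b - a) / np.linalg.norm(b - a) for a, b in zip(pts[:-1], pts[1:])]
    lefts = [np.array([-v[1], v[0]]) for v in dirs]       # left normal of each segment
    out = [pts[0] + d*lefts[0]]
    for i in range(1, len(pts) - 1):
        out.append(line_intersection(pts[i-1] + d*lefts[i-1], dirs[i-1],
                                     pts[i]   + d*lefts[i],   dirs[i]))
    out.append(pts[-1] + d*lefts[-1])
    return out                             # call with +d and -d to get both road borders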
I have two objects: a sphere and another object. It's an object that I created using surface reconstruction, so we do not know the equation of the object. I want to know the intersecting points on the sphere when the object and the sphere intersect. If we had a sphere and a cylinder, we could solve for the equations and figure out the area and all that, but the problem here is that the object is not uniform.
Is there a way to find out the intersecting points or area on the sphere?
I'd start by finding the intersection of triangles with the sphere. First find the intersection of each triangle's plane and the sphere, which gives a circle. Then find the circle's intersection/s with the triangle edges in 2D using line/circle tests. The result will be many arcs which I guess you could approximate with lines. I'm not really sure where to go from here without knowing the end goal.
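A small sketch of that first step in Python/NumPy: intersecting one triangle's plane with the sphere (the plane is given by any vertex of the triangle and its normal; names are illustrative):

import numpy as np

def plane_sphere_circle(tri_point, tri_normal, sphere_center, sphere_radius):
    n = tri_normal / np.linalg.norm(tri_normal)
    dist = np.dot(sphere_center - tri_point, n)   # signed distance of the center from the plane
    if abs(dist) > sphere_radius:
        return None                               # the plane misses the sphere
    center = sphere_center - dist * n             # circle center, lies on the plane
    radius = np.sqrt(sphere_radius**2 - dist**2)
    return center, radius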
If it's surface area you're after, maybe a numerical approach would be better. I'd cover the sphere in points and count the number inside the non-uniform object. To find if a point is inside, maybe trace outwards and count the intersections with the surface (if it's odd, the point is inside). You could use the stencil buffer for this if you wanted (similar to stencil shadows).
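A hedged sketch of that numerical idea in Python/NumPy, assuming the reconstructed object is a closed mesh given as a list of triangles (each a tuple of three 3D vertices); the outward radial direction at each sample is used as the ray direction:

import numpy as np

def ray_hits_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    # Moeller-Trumbore ray/triangle intersection test
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False                       # ray parallel to the triangle plane
    inv = 1.0 / det
    t = orig - v0
    u = np.dot(t, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(t, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv > eps       # hit must lie in front of the ray origin

def sphere_area_inside(triangles, center, radius, samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # uniform points on the unit sphere
    inside = 0
    for d in dirs:
        point = center + radius * d
        hits = sum(ray_hits_triangle(point, d, *tri) for tri in triangles)
        inside += hits % 2                                # odd hit count = sample is inside
    return 4.0 * np.pi * radius**2 * inside / samples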
If you want the volume of intersection a quick google search gives "carve", a mesh based CSG library.
Starting with triangles versus the sphere will give you the points of intersection.
You can take the arcs of intersection with each surface and combine them to make fences around the sphere. Ideally your reconstructed object will be in winged-edge format so you could just step from one fence segment to the next, but with reconstructed surfaces I guess you might need to apply some slightly fuzzy logic.
You can determine which side of each fence is inside the reconstructed object and which side is out by factoring in the surface normals along the fence.
You can then cut the sphere along the fences and add the internal bits to the display.
For the other side of things you could remove any triangle completely inside the sphere and cut those that intersect.
Standard convex hull algorithms will not work with (longitude, latitude)-points, because standard algorithms assume you want the hull of a set of Cartesian points. Latitude-longitude points are not Cartesian, because longitude "wraps around" at the anti-meridian (+/- 180 degrees). I.e., two degrees east of longitude 179 is -179.
So if your set of points happens to straddle the anti-meridian, you will compute spurious hulls that stretch all the way around the world incorrectly.
Any suggestions for tricks I could apply with a standard convex hull algorithm to correct for this, or pointers to proper "geospherical" hull algorithms?
Now that I think on it, there are more interesting cases to consider than straddling the anti-meridian. Consider a "band" of points that encircles the earth -- its convex hull would have no east/west bounds. Or even further, what is the convex hull of {(0,0), (0, 90), (0, -90), (90, 0), (-90, 0), (180, 0)}? It would seem to contain the entire surface of the earth, so which points are on its perimeter?
Standard convex hull algorithms are not defeated by the wrapping-around of the coordinates on the surface of the Earth but by a more fundamental problem. The surface of a sphere (let's forget the not-quite-sphericity of the Earth) is not a Euclidean space so Euclidean geometry doesn't work, and convex hull routines which assume that the underlying space is Euclidean (show me one which doesn't, please) won't work.
The surface of the sphere conforms to the concepts of an elliptic geometry where lines are great circles and antipodal points are considered the same point. You've already started to experience the issues arising from trying to apply a Euclidean concept of convexity to an elliptic space.
One approach open to you would be to adopt the definitions of geodesic convexity and implement a geodesic convex hull routine. That looks quite hairy. And it may not produce results which conform to your (generally Euclidean) expectations. In many cases, for 3 arbitrary points, the convex hull turns out to be the entire surface of the sphere.
Another approach, adopted by navigators and cartographers through the ages, would be to project a part of the surface of the sphere containing all your points into Euclidean space (this is the subject of map projections, and I won't bother you with references to the extensive literature thereon) and to figure out the convex hull of the projected points. Project the area you are interested in onto the plane and adjust the coordinates so that they do not wrap around; for example, if you were interested in France you might adjust all longitudes by adding 30 degrees so that the whole country is covered by positive numbers.
While I'm writing, the idea proposed in Li-aung Yip's answer, of using a 3D convex hull algorithm, strikes me as misguided. The 3D convex hull of the set of surface points will include points, edges and faces which lie inside the sphere. These literally do not exist on the 2D surface of the sphere and only change your difficulties from wrestling with a not-quite-right concept in 2D to a quite-wrong one in 3D. Further, I learned from the Wikipedia article I referenced that a closed hemisphere (i.e. one which includes its 'equator') is not convex in the geometry of the surface of the sphere.
Instead of considering your data as latitude-longitude data, could you instead consider it in 3D space and apply a 3D convex hull algorithm? You may be able to then find the 2D convex hull you desire by analysing the 3D convex hull.
This returns you to well-travelled algorithms for Cartesian convex hulls (albeit in three dimensions) and has no issues with wrap-around of the coordinates.
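A minimal sketch of this idea, assuming SciPy is available and the input is (longitude, latitude) pairs in degrees; how you then derive the 2D hull from the 3D one is the analysis step mentioned above:

import numpy as np
from scipy.spatial import ConvexHull

def hull_point_indices(lon_lat_deg):
    # needs at least 4 non-coplanar points, otherwise qhull raises an error
    lon, lat = np.radians(np.asarray(lon_lat_deg, dtype=float)).T
    xyz = np.column_stack([np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)])
    return ConvexHull(xyz).vertices        # indices of the input points on the 3D hull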
Alternately, there's this paper: Computing the Convex Hull of a Simple Polygon on the Sphere (1996) which seems to deal with some of the same issues that you're dealing with (coordinate wrap-around, etc.)
If all your points are within a hemisphere (that is, if you can find a cut plane through the center of the Earth that puts them all on one side), then you can do a central a.k.a. gnomic a.k.a. gnomonic projection from the center of the Earth to a plane parallel to the cut plane. Then all great circles become straight lines in the projection, and so a convex hull in the projection will map back to a correct convex hull on the Earth. You can see how wrong lat/lon points are by looking at the latitude lines in the "Gnomonic Projection" section here (notice that the longitude lines remain straight).
(Treating the Earth as a sphere still isn't quite right, but it's a good second approximation. I don't think points on a true least-distance path across a more realistic Earth (say WGS84) generally lie on a plane through the center. Maybe pretending they do gives you a better approximation than what you get with a sphere.)
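For illustration, a sketch of this projection approach in Python/NumPy using SciPy's planar hull, assuming all points lie strictly in the open hemisphere around a given pole direction (otherwise the division below blows up); names are illustrative:

import numpy as np
from scipy.spatial import ConvexHull

def gnomonic_hull(lon_lat_deg, pole):
    lon, lat = np.radians(np.asarray(lon_lat_deg, dtype=float)).T
    v = np.column_stack([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])
    pole = np.asarray(pole, dtype=float) / np.linalg.norm(pole)
    w = v / (v @ pole)[:, None]                   # central projection onto the tangent plane at the pole
    e1 = np.cross(pole, [0.0, 0.0, 1.0])          # build a 2D basis of that plane
    if np.linalg.norm(e1) < 1e-9:                 # pole is (anti)parallel to the z axis
        e1 = np.array([1.0, 0.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(pole, e1)
    xy = np.column_stack([w @ e1, w @ e2])
    return ConvexHull(xy).vertices                # indices of the hull points, in order

Because great circles map to straight lines under the gnomonic projection, the hull of the projected points maps back to the correct hull on the sphere, as described above.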
FutureNerd:
You are absolutely correct. I had to solve the exact same problem as Maxy-B for my application. As a first iteration, I just treated (lng,lat) as (x,y) and ran a standard 2D algorithm. This worked fine as long as nobody looked too closely, because all my data were in the contiguous U.S. As a second iteration, though, I used your approach and proved the concept.
The points MUST be in the same hemisphere. As it turns out, choosing this hemisphere is non-trivial (it's not just the points' center, as I had initially guessed). To illustrate, consider the following four points: (0,0), (-60,0), (+60,0) along the equator, and (0,90), the north pole. However you choose to define "center", their center lies on the north pole by symmetry and all four points are in the Northern Hemisphere. However, consider replacing the fourth point with, say, (-19, 64), Iceland. Now their center is NOT on the north pole, but asymmetrically drawn toward Iceland. However, all four points are still in the Northern Hemisphere. Further, the Northern Hemisphere, as uniquely defined by the North Pole, is the ONLY hemisphere they share. So calculating this "pole" becomes algorithmic, not algebraic.
See my repository for the Python code:
https://github.com/VictorDavis/GeoConvexHull
This question has been answered a while ago, but I would like to sum up the results of my research.
The spherical convex hull is basically defined only for non-antipodal points. Supposing all the points are on the same hemisphere, you can compute their convex hull in two main ways:
Project the points to a plane using gnomonic/central projection and apply a planar convex hull algorithm. See Lin-Lin Chen, T. C. Woo, "Computational Geometry on the Sphere With Application to Automated Machining" (1992). If the points are on a known hemisphere, you can hard-code onto which plane to project the points.
Adapt planar convex hull algorithms to the sphere. See C. Grima and A. Marquez, "Computational Geometry on Surfaces: Performing Computational Geometry on the Cylinder, the Sphere, the Torus, and the Cone", Springer (2002). This reference seems to give a similar method to the abstract referenced by Li-aung Yip above.
For reference, in Python I am working on an implementation of my own, which currently works only for points on the Northern hemisphere.
See also this question on Math Overflow.
All edges of a spherical convex hull can be viewed/treated as great circles (similarly, all edges of a convex hull in Euclidean space can be treated as lines rather than line segments). Each one of these great circles cuts the sphere into two hemispheres. You could thus conceive of each great circle as a constraint. A point that is within the convex hull will be on each of the hemispheres defined by each constraint.
Each edge of the original polygon is a candidate edge of the convex hull. To verify whether it is indeed an edge of the convex hull, you'd simply need to check whether all nodes of the polygon are on the hemisphere defined by the great circle that goes through the two nodes of the edge in question. However, we'd still need to create new edges that bypass the concave nodes of the polygon.
But let's rather shortcut / brute-force this:
Draw a great circle between every pair of nodes in the polygon. Do this in both directions (i.e. the great circle connecting A to B and the great circle connecting B to A). For a polygon with N nodes, you will thus end up with N^2 great circles. Each one of these great circles is a candidate constraint (i.e. a candidate edge of the convex polygon). Some of these great circles will overlap with the edges of the original polygon, but most won't. Now, remember again: each great circle is a constraint that constrains the sphere to one hemisphere. Now verify whether all nodes of the original polygon satisfy the constraint (i.e. whether all nodes are on the hemisphere defined by the great circle). If yes, then this great circle is an edge of the convex hull. If, however, a single node of the original polygon does not satisfy the constraint, then it isn't and you can discard this great circle.
The beauty of this is that once you have converted your latitudes and longitudes into Cartesian vectors pointing onto the unit sphere, it really just requires dot products and cross products:
- You find the great circle that passes through two points on a sphere by taking their cross product.
- A point is on the hemisphere defined by a great circle if the dot product of the great circle's normal and the point is greater than (or equal to) 0.
So even for polygons with a large number of edges, this brute force method should work just fine.
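For illustration, a brute-force sketch of that test in Python/NumPy, assuming the polygon nodes have already been converted to unit 3D vectors and contain no duplicate or antipodal points (names are illustrative):

import numpy as np

def convex_hull_edges(nodes, eps=1e-12):
    nodes = np.asarray(nodes, dtype=float)
    edges = []
    for i in range(len(nodes)):
        for j in range(len(nodes)):
            if i == j:
                continue
            normal = np.cross(nodes[i], nodes[j])     # great circle through nodes i and j
            if np.all(nodes @ normal >= -eps):        # are all nodes on that hemisphere?
                edges.append((i, j))                  # keep it as a hull edge
    return edges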