What "soup" (definition) means in "triangle soup" or "polygon soup"? - graphics

I have no idea what "soup" literally means in "triangle soup" or "polygon soup" in the context of computer graphics. Is it related to the soup we eat with a spoon?
(I'm not a native English speaker.)

Wikipedia to the rescue!
A polygon soup is a group of unorganized triangles, with generally no relationship whatsoever. Polygon soups are a geometry storage format in a 3D modeling package, such as Maya, Houdini, or Blender. Polygon soup can help save memory, load/write time, and disk space for large polygon meshes compared to the equivalent polygon mesh.
http://en.wikipedia.org/wiki/Polygon_soup

When the faces of a polygon mesh are given but the connectivity is unknown, we are dealing with a polygon soup.
A polygon soup has no connectivity information (each point occurs as many times as the number of polygons it belongs to).
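For illustration, here is a minimal sketch in Python (coordinates made up) contrasting a triangle soup with an indexed mesh that does record connectivity:

# A "triangle soup": a flat list of triangles, each carrying its own
# vertex coordinates. Shared corners are duplicated and no connectivity
# is recorded.
soup = [
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),
    ((1, 0, 0), (1, 1, 0), (0, 1, 0)),  # reuses two corners, but as copies
]

# The same geometry as an indexed mesh: unique vertices plus index triples.
# Which triangles share a vertex is now recoverable.
vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
triangles = [(0, 1, 2), (1, 3, 2)]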

Related

polygons from BSP

I have a 3D volume given by a binary space partition tree. Usually these are made from polygon models, with the split polygons already stored inside the tree nodes.
But mine is not, so I have no polygons. Every node holds nothing but its cut plane (given, for example, by a normal and a distance from the origin). The tree still represents a solid 3D volume, defined by all the cuts made. However, for visualisation I need a polygonal mesh of this volume. How can that be reconstructed efficiently?
The crude method would be to convert the infinite half-spaces of the leaves to large enough polyhedra (e.g. cubes) and push every single one of them up the tree, cutting it by every node's plane it passes. That seems extremely costly, as the tree may be unbalanced (e.g. if naively built from a convex polyhedron). Is there any classic solution?
In order to recover the polygonal surface you need to intersect the planes: each vertex of a polygon is generated by the intersection of three planes, and each edge by the intersection of two planes. But making this efficient and numerically stable is no trivial task. So I propose using qhalf, which is part of qhull. Documentation of the input and output of qhalf can be found here. Of course you can use qhull (and the functionality from qhalf) as a library.
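As a rough sketch of the vertex-from-three-planes idea (generic Python/NumPy, not qhalf itself; planes are assumed to be given as (normal, distance) pairs with normal . x = distance):

import numpy as np

def plane_intersection(planes):
    # Intersect three planes given as (normal, distance) pairs.
    # Returns the common point, or None if the normals are (nearly)
    # linearly dependent and no unique intersection exists.
    normals = np.array([n for n, _ in planes], dtype=float)    # 3x3 matrix
    distances = np.array([d for _, d in planes], dtype=float)
    if abs(np.linalg.det(normals)) < 1e-12:                    # degenerate
        return None
    return np.linalg.solve(normals, distances)

# Example: three axis-aligned planes meeting at (1, 2, 3).
point = plane_intersection([((1, 0, 0), 1), ((0, 1, 0), 2), ((0, 0, 1), 3)])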

Convex hull of (longitude, latitude)-points on the surface of a sphere

Standard convex hull algorithms will not work with (longitude, latitude)-points, because standard algorithms assume you want the hull of a set of Cartesian points. Latitude-longitude points are not Cartesian, because longitude "wraps around" at the anti-meridian (+/- 180 degrees). I.e., two degrees east of longitude 179 is -179.
So if your set of points happens to straddle the anti-meridian, you will compute spurious hulls that stretch all the way around the world incorrectly.
Any suggestions for tricks I could apply with a standard convex hull algorithm to correct for this, or pointers to proper "geospherical" hull algorithms?
Now that I think on it, there are more interesting cases to consider than straddling the anti-meridian. Consider a "band" of points that encircles the earth -- its convex hull would have no east/west bounds. Or even further, what is the convex hull of {(0,0), (0, 90), (0, -90), (90, 0), (-90, 0), (180, 0)}? It would seem to contain the entire surface of the earth, so which points are on its perimeter?
Standard convex hull algorithms are not defeated by the wrapping-around of the coordinates on the surface of the Earth but by a more fundamental problem. The surface of a sphere (let's forget the not-quite-sphericity of the Earth) is not a Euclidean space so Euclidean geometry doesn't work, and convex hull routines which assume that the underlying space is Euclidean (show me one which doesn't, please) won't work.
The surface of the sphere conforms to the concepts of an elliptic geometry where lines are great circles and antipodal points are considered the same point. You've already started to experience the issues arising from trying to apply a Euclidean concept of convexity to an elliptic space.
One approach open to you would be to adopt the definitions of geodesic convexity and implement a geodesic convex hull routine. That looks quite hairy. And it may not produce results which conform to your (generally Euclidean) expectations. In many cases, for 3 arbitrary points, the convex hull turns out to be the entire surface of the sphere.
Another approach, one adopted by navigators and cartographers through the ages, would be to project part of the surface of the sphere (a part containing all your points) into Euclidean space (which is the subject of map projections, and I won't bother you with references to the extensive literature thereon) and to figure out the convex hull of the projected points. Project the area you are interested in onto the plane and adjust the coordinates so that they do not wrap around; for example, if you were interested in France you might adjust all longitudes by adding 30°, so that the whole country was covered by positive coordinates.
While I'm writing, the idea proposed in Li-aung Yip's answer, of using a 3D convex hull algorithm, strikes me as misguided. The 3D convex hull of the set of surface points will include points, edges and faces which lie inside the sphere. These literally do not exist on the 2D surface of the sphere, and only change your difficulties from wrestling with a not-quite-right concept in 2D to a quite-wrong one in 3D. Further, I learned from the Wikipedia article I referenced that a closed hemisphere (i.e. one which includes its 'equator') is not convex in the geometry of the surface of the sphere.
Instead of considering your data as latitude-longitude data, could you instead consider it in 3D space and apply a 3D convex hull algorithm? You may be able to then find the 2D convex hull you desire by analysing the 3D convex hull.
This returns you to well-travelled algorithms for Cartesian convex hulls (albeit in three dimensions) and has no issues with wrap-around of the coordinates.
Alternately, there's this paper: Computing the Convex Hull of a Simple Polygon on the Sphere (1996) which seems to deal with some of the same issues that you're dealing with (coordinate wrap-around, etc.)
If all your points are within a hemisphere (that is, if you can find a cut plane through the center of the Earth that puts them all on one side), then you can do a central a.k.a. gnomic a.k.a. gnomonic projection from the center of the Earth to a plane parallel to the cut plane. Then all great circles become straight lines in the projection, and so a convex hull in the projection will map back to a correct convex hull on the Earth. You can see how wrong lat/lon points are by looking at the latitude lines in the "Gnomonic Projection" section here (notice that the longitude lines remain straight).
(Treating the Earth as a sphere still isn't quite right, but it's a good second approximation. I don't think points on a true least-distance path across a more realistic Earth (say WGS84) generally lie on a plane through the center. Maybe pretending they do gives you a better approximation than what you get with a sphere.)
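As a sketch of this projection approach (assuming SciPy's Qhull-backed ConvexHull and points known to lie in the open hemisphere around a given pole; names are illustrative):

import numpy as np
from scipy.spatial import ConvexHull

def gnomonic_hull(lon_lat_deg, pole):
    # Convex hull of (lon, lat) points assumed to lie in the hemisphere
    # around `pole` (a unit 3-vector): project gnomonically onto the
    # plane tangent at the pole, then hull in 2D.
    lon, lat = np.radians(np.asarray(lon_lat_deg, dtype=float)).T
    xyz = np.column_stack([np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)])
    pole = np.asarray(pole, dtype=float)
    # Build an orthonormal basis (u, v) for the tangent plane at the pole.
    u = np.cross(pole, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-9:          # pole is the north or south pole
        u = np.array([1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(pole, u)
    # Central projection: scale each point so its pole component equals 1;
    # great circles become straight lines in the (u, v) plane.
    s = xyz @ pole                        # must be > 0 (same hemisphere)
    planar = np.column_stack([(xyz @ u) / s, (xyz @ v) / s])
    return ConvexHull(planar).vertices   # indices of the hull points

# Example: a few points near Iceland, hemisphere centered on the north pole.
hull = gnomonic_hull([(-25, 63), (-13, 63), (-19, 67), (-19, 64)], [0, 0, 1])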
FutureNerd:
You are absolutely correct. I had to solve the exact same problem as Maxy-B for my application. As a first iteration, I just treated (lng,lat) as (x,y) and ran a standard 2D algorithm. This worked fine as long as nobody looked too close, because all my data were in the contiguous U.S. As a second iteration, though, I used your approach and proved the concept.
The points MUST be in the same hemisphere. As it turns out, choosing this hemisphere is non-trivial (it's not just the points' center, as I had initially guessed). To illustrate, consider the following four points: (0,0), (-60,0), (+60,0) along the equator, and (0,90), the north pole. However you choose to define "center", their center lies on the north pole by symmetry, and all four points are in the Northern Hemisphere. Now consider replacing the fourth point with, say, (-19, 64) in Iceland. Their center is no longer the north pole but is drawn asymmetrically toward Iceland, yet all four points are still in the Northern Hemisphere. Further, the Northern Hemisphere, as uniquely defined by the north pole, is the ONLY hemisphere they share. So calculating this "pole" becomes algorithmic, not algebraic.
See my repository for the Python code:
https://github.com/VictorDavis/GeoConvexHull
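For illustration, one algorithmic way to find such a pole (a sketch of the idea, not the code from the repository above) is to pose it as a small linear program: find a vector v with v . p > 0 for every unit point p, maximizing the worst-case margin t. If the optimal t is positive, the open hemisphere around v contains all the points.

import numpy as np
from scipy.optimize import linprog

def common_hemisphere_pole(unit_points):
    # Variables are v (3 components) and t; maximize t subject to
    # p . v >= t for every point p, with v confined to a unit box.
    P = np.asarray(unit_points, dtype=float)
    n = len(P)
    c = np.array([0.0, 0.0, 0.0, -1.0])            # minimize -t
    A_ub = np.hstack([-P, np.ones((n, 1))])        # -p . v + t <= 0
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-1, 1), (-1, 1), (-1, 1), (None, None)])
    if res.success and res.x[3] > 1e-9:
        v = res.x[:3]
        return v / np.linalg.norm(v)   # pole of a shared open hemisphere
    return None                        # no common hemisphere exists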
This question has been answered a while ago, but I would like to sum up the results of my research.
The spherical convex hull is basically defined only for non-antipodal points. Supposing all the points are on the same hemisphere, you can compute their convex hull in two main ways:
Project the points to a plane using gnomonic/central projection and apply a planar convex hull algorithm. See Lin-Lin Chen, T. C. Woo, "Computational Geometry on the Sphere With Application to Automated Machining" (1992). If the points are on a known hemisphere, you can hard-code which plane to project the points onto.
Adapt planar convex hull algorithms to the sphere. See C. Grima and A. Marquez, "Computational Geometry on Surfaces: Performing Computational Geometry on the Cylinder, the Sphere, the Torus, and the Cone", Springer (2002). This reference seems to give a similar method to the abstract referenced by Li-aung Yip above.
For reference, in Python I am working on an implementation of my own, which currently works only for points on the Northern hemisphere.
See also this question on Math Overflow.
All edges of a spherical convex hull can be viewed/treated as great circles (similarly, all edges of a convex hull in Euclidean space can be treated as lines rather than line segments). Each of these great circles cuts the sphere into two hemispheres. You can thus conceive of each great circle as a constraint: a point that is within the convex hull will be on each of the hemispheres defined by each constraint.
Each edge of the original polygon is a candidate edge of the convex hull. To verify whether it is indeed an edge of the convex hull, you'd simply need to check whether all nodes of the polygon are on the hemisphere defined by the great circle that goes through the two nodes of the edge in question. However, we'd still need to create new edges that bypass the concave nodes of the polygon.
But let's rather shortcut / brute-force this:
Draw a great circle between every pair of nodes in the polygon. Do this in both directions (i.e. the great circle connecting A to B and the great circle connecting B to A). For a polygon with N nodes, you will thus end up with N^2 great circles. Each of these great circles is a candidate constraint (i.e. a candidate edge of the convex polygon). Some of these great circles will overlap with the edges of the original polygon, but most won't. Now, remember again: each great circle is a constraint that constrains the sphere to one hemisphere. Now verify whether all nodes of the original polygon satisfy the constraint (i.e. whether all nodes are on the hemisphere defined by the great circle). If yes, then this great circle is an edge of the convex hull. If, however, a single node of the original polygon does not satisfy the constraint, then it isn't, and you can discard this great circle.
The beauty of this is that once you have converted your latitudes and longitudes into Cartesian vectors pointing onto the unit sphere, it really just requires dot products and cross products:
- You find the great circle that passes through two points on a sphere via their cross product.
- A point is on the hemisphere defined by a great circle if the dot product of the great circle's normal and the point is greater than or equal to 0.
So even for polygons with a large number of edges, this brute force method should work just fine.
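A minimal sketch of this brute-force test in Python with NumPy (names are illustrative):

import numpy as np

def to_unit_vectors(lon_lat_deg):
    lon, lat = np.radians(np.asarray(lon_lat_deg, dtype=float)).T
    return np.column_stack([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])

def spherical_hull_edges(lon_lat_deg, tol=1e-12):
    # The directed arc a -> b is a hull edge iff every point lies on the
    # hemisphere defined by the great circle through a and b, i.e.
    # dot(cross(a, b), p) >= 0 for all points p.
    pts = to_unit_vectors(lon_lat_deg)
    edges = []
    for i, a in enumerate(pts):
        for j, b in enumerate(pts):
            if i == j:
                continue
            normal = np.cross(a, b)             # great circle through a, b
            if np.all(pts @ normal >= -tol):    # all points on one side
                edges.append((i, j))
    return edges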

Graphics-Related Question: Mesh and Geometry

What's the difference between mesh and geometry? Aren't they the same? i.e. collection of vertices that form triangles?
A point is geometry, but it is not a mesh. A curve is geometry, but it is not a mesh. An iso-surface is geometry, but it is not... well, you get the point by now.
Meshes are geometry, not the other way around.
Geometry in the context of computing is far more limited than geometry as a branch of mathematics. There are only a few types of geometry typically used in computer graphics. Sprites are used when rendering points (particles), line segments are used when rendering curves, and meshes are used when rendering surface-like geometry.
A mesh is typically a collection of polygons/geometric objects. For instance triangles, quads or a mixture of various polygons. A mesh is simply a more complex shape.
From Wikipedia:
Geometry is a part of mathematics concerned with questions of size, shape, and relative position of figures and with properties of space
IMO a mesh falls under that criterion.
In the context implied by your question:
A mesh is a collection of polygons arranged in such a way that each polygon shares at least one vertex with another polygon in that collection. You can reach any polygon in a mesh from any other polygon in that mesh by traversing the edges and vertices that define those polygons.
Geometry refers to any object in space whose properties may be described according to the principles of the branch of mathematics known as geometry.
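A toy Python illustration of the distinction - a mesh as shared vertices plus index triples, with connectivity you can actually traverse:

# Four shared vertex positions and two triangles referencing them.
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
triangles = [(0, 1, 2), (0, 2, 3)]   # the triangles share vertices 0 and 2

def neighbors(tri_index):
    # Two triangles are neighbors when they share an edge (two vertices).
    shared = set(triangles[tri_index])
    return [j for j, t in enumerate(triangles)
            if j != tri_index and len(shared & set(t)) == 2]

print(neighbors(0))   # -> [1]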
Note that the term "geometry" has different meanings in mathematics and in rendering. In rendering it usually denotes what is static in a scene (walls, etc.). What is widely called a "mesh" is a group of geometric objects (basically triangles) that describe or form an "object" in the scene - pretty much as envalid said, but usually a mesh forms a single object or entity in a scene. Very often that is how rendering engines use the term: the geometric data of each scene element (object, entity) composes that element's mesh.
Although this is tagged in "graphics", I think the answer connects with the interpretation from computational physics. There, we usually think of the geometry as an abstraction of the system that is to be represented/simulated, while the mesh is an approximation of the geometry - a compromise we usually have to make to be able to represent the spatial domain within the finite memory of the machine.
You can think of them basically as regular or unstructured sets of points "sprayed" on a surface or within a volume in space.
To be able to do visualization/simulation, it is also necessary to determine the neighbors of each point - for example using Delaunay triangulation which allows you to group sets of points into elements (for which you can solve algebraic versions of the equations describing your system).
In the context of surface representation in computer graphics, I think all major APIs (e.g. OpenGL) have functions which can display these primitives (which can be triangles as given by Delaunay, quads or maybe some other elements).

Triangulated Irregular Networks from qhull

I want to create TINs from 3D points (about 7 million in every file) using qhull.
Can anyone suggest a place where I could see how to do this? Thanks!
I've never used QHull since it is hard to integrate as a library into an existing project. Try out Triangle; it is specialized for 2D and is very easy to use (it comes with an example of how to call it from other C code).
I can recommend a software package called Streaming Computation of Delaunay Triangulations. On a normal computer it can compute Delaunay triangulations for large, well-distributed data sets in 2D and 3D, which can be greatly accelerated by exploiting the natural spatial coherence in a stream of points.
In terms of performance:
We compute a billion-triangle terrain representation for the Neuse River system from 11.2 GB of LIDAR data in 48 minutes using only 70 MB of memory on a laptop.
You can check out this video explaining their method/software.
Wiki says,
A TIN comprises a triangular network of vertices, known as mass points, with associated coordinates in three dimensions connected by edges to form a triangular tessellation. Three-dimensional visualizations are readily created by rendering of the triangular facets. In regions where there is little variation in surface height, the points may be widely spaced, whereas in areas of more intense variation in height the point density is increased.
A TIN is typically based on a Delaunay triangulation, but its utility will be limited by the selection of input data points: well-chosen points will be located so as to capture significant changes in surface form, such as topographical summits, breaks of slope, ridges, valley floors, pits and cols.
MATLAB can generate 3-D Delaunay tessellations and n-D Delaunay tessellations using Qhull.
3-dimensional Delaunay tessellation - tetramesh is used to plot the tetrahedra that form the corresponding simplex.
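Along the same lines, SciPy's Delaunay (which also wraps Qhull) can build the 2D triangulation underlying a TIN; a sketch with made-up data:

import numpy as np
from scipy.spatial import Delaunay   # SciPy's Delaunay wraps Qhull

# points: an (N, 3) array of x, y, z samples (here random, standing in
# for a real LIDAR file).
points = np.random.rand(1000, 3)

# A TIN triangulates in the horizontal plane only; z is carried as height.
tin = Delaunay(points[:, :2])

# tin.simplices is an (M, 3) array of vertex indices, one row per triangle;
# together with `points` it forms the triangulated irregular network.
corner_coords = points[tin.simplices]   # shape (M, 3, 3)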

What does 'Polygon' mean in terms of 3D Graphics?

An old Direct3D book says
"...you can achieve an acceptable frame rate with hardware acceleration while displaying between 2000 and 4000 polygons per frame..."
What is one polygon in Direct3D? Do they mean one primitive (indexed or otherwise) or one triangle?
That book means triangles. Otherwise, what if I wanted 1000-sided polygons? Could I still achieve 2000-4000 such shapes per frame?
In practice, the only thing you'll want it to be is a triangle, because if a polygon is not a triangle it's generally tessellated into triangles anyway (e.g., a quad consists of two triangles, et cetera). A basic triangulation (tessellation) algorithm for that is really simple; you just loop through the vertices and turn every three vertices into a triangle.
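For instance, a minimal triangle-fan split in Python (a sketch; this simple scheme is only valid for convex polygons):

def fan_triangulate(polygon):
    # Split a convex polygon (vertices in order) into triangles by fanning
    # out from the first vertex: a quad ABCD becomes ABC and ACD.
    return [(polygon[0], polygon[i], polygon[i + 1])
            for i in range(1, len(polygon) - 1)]

print(fan_triangulate(["A", "B", "C", "D"]))
# -> [('A', 'B', 'C'), ('A', 'C', 'D')]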
Here, a "polygon" refers to a triangle. All . However, as you point out, there are many more variables than just the number of triangles which determine performance.
Key issues that matter are:
The format of storage (indexed or not; list, fan, or strip)
The location of storage (host-memory vertex arrays, host-memory vertex buffers, or GPU-memory vertex buffers)
The mode of rendering (is the draw primitive command issued fully from the host, or via instancing)
Triangle size
Together, those variables can produce far more than a 2x variation in performance.
Similarly, the hardware on which the application is running may vary 10x or more in performance in the real world: a GPU (or integrated graphics processor) that was low-end in 2005 will perform 10-100x slower in any meaningful metric than a current top-of-the-line GPU.
All told, any recommendation that you use 2,000-4,000 triangles is so ridiculously outdated that it should be entirely ignored today. Even low-end hardware today can easily push 100,000 triangles in a frame under reasonable conditions. Further, most visually interesting applications today are dominated by pixel-shading performance, not triangle count.
General rules of thumb for achieving good triangle throughput today:
Use [indexed] triangle (or quad) lists
Store data in GPU-memory vertex buffers
Draw large batches with each draw-primitive call (thousands of primitives)
Use triangles mostly >= 16 pixels on screen
Don't use the Geometry Shader (especially for geometry amplification)
Do all of those things, and any machine today should be able to render tens or hundreds of thousands of triangles with ease.
According to this page, a polygon is n-sided in Direct3D.
In C#:
public static Mesh Polygon(
    Device device,
    float length,
    int sides
)
As others already said, polygons here means triangles.
The main advantage of triangles is that, since 3 points define a plane, triangles are coplanar by definition. This means that every point within the triangle is exactly defined as a linear combination of the polygon's points. More than three vertices aren't necessarily coplanar, and they don't define a unique surface.
An advantage that matters more in mechanical modeling than in graphics is that triangles are also undeformable.
