Polygon Triangulation with Holes - graphics

I am looking for an algorithm or, better, a library to break a polygon down into triangles. I will be using these triangles in a Direct3D application. What are the best available options?
Here is what I have found so far:
Ben Discoe's notes
FIST: Fast Industrial-Strength Triangulation of Polygons
I know that CGAL provides triangulation but am not sure if it supports holes.
I would really appreciate some opinions from people with prior experience in this area.
Edit: This is a 2D polygon.

To give you some more choices of libraries out there:
Polyboolean. I never tried this one, but it looks promising: http://www.complex-a5.ru/polyboolean/index.html
General Polygon Clipper. This one works very well in practice and handles triangulation as well as clipping and holes: http://www.cs.man.ac.uk/~toby/alan/software/
My personal recommendation: use the tessellator from the GLU (OpenGL Utility Library). The code is rock solid, faster than GPC, and generates fewer triangles. You don't need an initialized OpenGL handle or anything like that to use the lib.
If you don't like the idea of including OpenGL system libs in a DirectX application, there is a solution as well: just download the SGI OpenGL reference implementation code and lift the triangulator from it. It only uses the OpenGL typedef names and a handful of enums; that's it. You can extract the code and make a stand-alone lib in an hour or two.
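For reference, here is a minimal sketch of what driving the GLU tessellator looks like. Hedged: the callback casts follow the classic C API, on Windows the callbacks additionally need the APIENTRY calling convention, error handling is omitted, and a hole is simply a second contour:
#include <GL/glu.h>
#include <cstdio>
static void vertexCB(void* data) {
    const GLdouble* v = static_cast<const GLdouble*>(data);
    std::printf("v %g %g\n", v[0], v[1]);  // append to your D3D vertex buffer here
}
static void beginCB(GLenum) {}
static void endCB() {}
static void edgeFlagCB(GLboolean) {}       // its mere presence forces plain triangles
int main() {
    GLdouble outer[4][3] = {{0,0,0},{10,0,0},{10,10,0},{0,10,0}};
    GLdouble hole [4][3] = {{4,4,0},{6,4,0},{6,6,0},{4,6,0}};
    GLUtesselator* tess = gluNewTess();
    gluTessCallback(tess, GLU_TESS_VERTEX,    (void (*)())vertexCB);
    gluTessCallback(tess, GLU_TESS_BEGIN,     (void (*)())beginCB);
    gluTessCallback(tess, GLU_TESS_END,       (void (*)())endCB);
    gluTessCallback(tess, GLU_TESS_EDGE_FLAG, (void (*)())edgeFlagCB);
    gluTessBeginPolygon(tess, NULL);
    gluTessBeginContour(tess);
    for (GLdouble (&v)[3] : outer) gluTessVertex(tess, v, v);
    gluTessEndContour(tess);
    gluTessBeginContour(tess);             // the hole is just another contour
    for (GLdouble (&v)[3] : hole) gluTessVertex(tess, v, v);
    gluTessEndContour(tess);
    gluTessEndPolygon(tess);
    gluDeleteTess(tess);
}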
In general my advice would be to use something that already works, and not to start writing your own triangulation.
It is tempting to roll your own if you have read about the ear-clipping or sweep-line algorithm, but the fact is that computational geometry algorithms are incredibly hard to write in a way that is stable, never crashes, and always returns a meaningful result. Numerical roundoff errors will accumulate and kill you in the end.
I wrote a triangulation algorithm in C for the company I work for. Getting the core algorithm working took two days. Getting it to work with all kinds of degenerate inputs took another two years (I wasn't working on it full-time, but trust me - I spent more time on it than I should have).

Jonathan Shewchuk's Triangle library is phenomenal; I've used it for automating triangulation in the past. You can ask it to attempt to avoid small/narrow triangles, etc., so you come up with "good" triangulations instead of just any triangulation.
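To give a flavour of the C API, here is a sketch from memory, not a drop-in snippet: it assumes triangle.h is compiled with REAL defined as double, and uses the switches p (triangulate a PSLG), z (zero-based indices) and q25 (quality mesh, no angle below 25 degrees):
#define REAL double
#define VOID void
#define ANSI_DECLARATORS
extern "C" {
#include "triangle.h"
}
#include <cstring>
#include <cstdio>
int main() {
    // Square outline plus a square hole; segments close both loops.
    double pts[]    = {0,0, 10,0, 10,10, 0,10,  4,4, 6,4, 6,6, 4,6};
    int    segs[]   = {0,1, 1,2, 2,3, 3,0,  4,5, 5,6, 6,7, 7,4};
    double holePt[] = {5, 5};                  // any point inside the hole
    triangulateio in, out;
    std::memset(&in,  0, sizeof in);
    std::memset(&out, 0, sizeof out);          // NULL pointers -> Triangle allocates
    in.pointlist   = pts;    in.numberofpoints   = 8;
    in.segmentlist = segs;   in.numberofsegments = 8;
    in.holelist    = holePt; in.numberofholes    = 1;
    char switches[] = "pzq25";
    triangulate(switches, &in, &out, NULL);    // no Voronoi output requested
    std::printf("%d triangles\n", out.numberoftriangles);
    // remember to free() the arrays Triangle allocated in `out`
}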

CGAL has the tool you need:
Constrained Triangulations
You simply provide the boundaries of your polygon (including the boundaries of the holes) as constraints (best is to insert all the vertices, then specify the constraints as pairs of Vertex_handles).
You can then tag the triangles of the triangulation by any traversal algorithm: start with a triangle incident to the infinite vertex and tag it as being outside, and each time you cross a constraint, switch to the opposite tag (inside if you were previously tagging triangles as outside, outside if you were previously tagging them as inside).
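A sketch of that whole recipe. Assumptions: a face base with an int info field to hold the tag, CGAL's documented constrained-Delaunay API; the flood fill below is my own paraphrase of the traversal just described, not CGAL's shipped example:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <CGAL/Triangulation_face_base_with_info_2.h>
#include <queue>
typedef CGAL::Exact_predicates_inexact_constructions_kernel   K;
typedef CGAL::Triangulation_vertex_base_2<K>                  Vb;
typedef CGAL::Constrained_triangulation_face_base_2<K>        Fb;
typedef CGAL::Triangulation_face_base_with_info_2<int, K, Fb> Fbi;
typedef CGAL::Triangulation_data_structure_2<Vb, Fbi>         Tds;
typedef CGAL::Constrained_Delaunay_triangulation_2<K, Tds>    CDT;
int main() {
    CDT cdt;
    K::Point_2 outer[4] = {{0,0},{10,0},{10,10},{0,10}};
    K::Point_2 hole[4]  = {{4,4},{6,4},{6,6},{4,6}};
    for (int i = 0; i < 4; ++i) {
        cdt.insert_constraint(outer[i], outer[(i+1)%4]);
        cdt.insert_constraint(hole[i],  hole[(i+1)%4]);
    }
    for (CDT::All_faces_iterator f = cdt.all_faces_begin(); f != cdt.all_faces_end(); ++f)
        f->info() = -1;                              // -1 = not visited yet
    std::queue<std::pair<CDT::Face_handle,int> > q;
    q.push(std::make_pair(cdt.infinite_face(), 0));  // 0 = outside
    while (!q.empty()) {
        CDT::Face_handle f = q.front().first;
        int tag = q.front().second; q.pop();
        if (f->info() != -1) continue;
        f->info() = tag;
        for (int i = 0; i < 3; ++i)
            if (f->neighbor(i)->info() == -1)        // crossing a constraint flips the tag
                q.push(std::make_pair(f->neighbor(i),
                       cdt.is_constrained(CDT::Edge(f, i)) ? 1 - tag : tag));
    }
    // faces with info() == 1 are the triangles of the polygon minus the hole
}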

I have found the poly2tri library to be exactly what I needed for triangulation. It produces a much cleaner mesh than the other libraries I've tried (including libtess), and it supports holes as well. It has been ported to a bunch of languages. The license is New BSD, so you can use it in any project.
Poly2tri library on Google Code
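Usage is pleasantly small. A hedged sketch from memory (the header path may differ per packaging, and the caller owns the Point allocations):
#include <poly2tri/poly2tri.h>
#include <cstdio>
#include <vector>
int main() {
    std::vector<p2t::Point*> outer;
    outer.push_back(new p2t::Point(0, 0));
    outer.push_back(new p2t::Point(10, 0));
    outer.push_back(new p2t::Point(10, 10));
    outer.push_back(new p2t::Point(0, 10));
    std::vector<p2t::Point*> hole;
    hole.push_back(new p2t::Point(4, 4));
    hole.push_back(new p2t::Point(6, 4));
    hole.push_back(new p2t::Point(6, 6));
    hole.push_back(new p2t::Point(4, 6));
    p2t::CDT cdt(outer);          // outer boundary as the constructor polyline
    cdt.AddHole(hole);            // holes are separate polylines
    cdt.Triangulate();
    std::printf("%zu triangles\n", cdt.GetTriangles().size());
    for (std::size_t i = 0; i < outer.size(); ++i) delete outer[i];
    for (std::size_t i = 0; i < hole.size();  ++i) delete hole[i];
}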

Try libtess2:
https://code.google.com/p/libtess2/downloads/list
It is based on the original SGI GLU tessellator (with its liberal licensing) and fixes some memory-management issues around the original's many small mallocs.
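A hedged sketch of its C API (names per tesselator.h; the hole is, again, just another contour, resolved by the winding rule):
#include "tesselator.h"
#include <cstdio>
int main() {
    float outer[] = {0,0, 10,0, 10,10, 0,10};
    float hole[]  = {4,4, 6,4, 6,6, 4,6};
    TESStesselator* t = tessNewTess(NULL);       // NULL = default allocator
    tessAddContour(t, 2, outer, sizeof(float) * 2, 4);
    tessAddContour(t, 2, hole,  sizeof(float) * 2, 4);
    // polySize 3 -> plain triangles, vertexSize 2 -> 2D input
    if (tessTesselate(t, TESS_WINDING_ODD, TESS_POLYGONS, 3, 2, NULL)) {
        std::printf("%d triangles\n", tessGetElementCount(t));
        // tessGetVertices(t) / tessGetElements(t) feed your vertex/index buffers
    }
    tessDeleteTess(t);
}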

You can add the holes relatively easily yourself. Basically, triangulate to the convex hull of the input points, as per CGAL, and then delete any triangle whose incentre lies inside any of the hole polygons (or outside any of the external boundaries). When dealing with lots of holes in a large dataset, masking techniques can speed this process up significantly.
edit: A common extension of this technique is to weed out sliver triangles on the hull, where the longest edge exceeds, or the smallest internal angle falls below, a given threshold. This produces a better concave hull.
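A sketch of the filtering step (my own helper names, purely illustrative): the incentre is the edge-length-weighted average of the vertices, and the hole test is a standard even-odd ray cast:
#include <cmath>
#include <vector>
struct Pt { double x, y; };
// Incentre = average of the vertices weighted by opposite edge lengths.
Pt incentre(Pt a, Pt b, Pt c) {
    double la = std::hypot(c.x - b.x, c.y - b.y);   // |BC|, opposite A
    double lb = std::hypot(c.x - a.x, c.y - a.y);   // |AC|, opposite B
    double lc = std::hypot(b.x - a.x, b.y - a.y);   // |AB|, opposite C
    double s = la + lb + lc;
    Pt r = {(la*a.x + lb*b.x + lc*c.x) / s, (la*a.y + lb*b.y + lc*c.y) / s};
    return r;
}
// Even-odd ray-casting point-in-polygon test.
bool inPoly(Pt p, const std::vector<Pt>& poly) {
    bool in = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++)
        if ((poly[i].y > p.y) != (poly[j].y > p.y) &&
            p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                  (poly[j].y - poly[i].y) + poly[i].x)
            in = !in;
    return in;
}
// Drop a triangle (a, b, c) if inPoly(incentre(a, b, c), hole) for any hole.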

I have implemented a 3D polygon triangulator in C# using the ear-clipping method. It is easy to use, supports holes, is numerically robust, and handles arbitrary (non-self-intersecting) convex/non-convex polygons.

This is a common problem in finite element analysis. It's called "automatic mesh generation". Google found this site with links to commercial and open source software. They usually presume some kind of CAD representation of the geometry to start.

Another option (with a very flexible license) is to port the algorithm from VTK:
vtkDelaunay2D
This algorithm works fairly well. Using it directly is possible but requires linking against VTK, which may bring more overhead than you want (although it has many other nice features as well).
It supports constraints (holes/boundaries/etc), as well as triangulating a surface that isn't necessarily in the XY plane. It also supports some features I haven't seen elsewhere (see the notes on Alpha values).
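A hedged sketch of the basic pipeline (VTK 6+ style SetInputData; the constraint "source" input is left commented because building it depends on your boundary data, and per my reading of the docs loop orientation decides interior vs. hole):
#include <vtkSmartPointer.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkDelaunay2D.h>
int main() {
    vtkSmartPointer<vtkPoints> points = vtkSmartPointer<vtkPoints>::New();
    points->InsertNextPoint(0.0, 0.0, 0.0);
    points->InsertNextPoint(10.0, 0.0, 0.0);
    points->InsertNextPoint(10.0, 10.0, 0.0);
    points->InsertNextPoint(0.0, 10.0, 0.0);
    vtkSmartPointer<vtkPolyData> input = vtkSmartPointer<vtkPolyData>::New();
    input->SetPoints(points);
    vtkSmartPointer<vtkDelaunay2D> del = vtkSmartPointer<vtkDelaunay2D>::New();
    del->SetInputData(input);
    // del->SetSourceData(constraints);  // a vtkPolyData whose lines/polys are
    //                                   // constraint edges and loops
    del->Update();
    // del->GetOutput() is a vtkPolyData containing the triangles
}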

Related

Which CAD/geometry library should I use: CGAL, Open CASCADE, Boost::Geometry, or something else?

For a customizable laser-cut drawers project, I would like to be able to programmatically create the connection between any two intersecting perpendicular "2D" objects (with width). For this I need to:
load and save 2D objects in some standard format.
extrude a 2D object, and perform the standard manipulations on 2D objects (specifically, subtraction).
intersect two objects, and be able to determine the "line" of intersection.
Which library/tool would you suggest for this? Can you estimate how difficult it would be to master it for the above tasks?
Thanks,
Ronen
I'm biased (PythonOCC dev), but I'd say OpenCASCADE would best fit your needs. It comes with STEP and IGES importers out of the box (more formats are supported commercially), whereas with Boost and CGAL you'd have to parse and reconstruct the geometry from a file yourself. Finally, neither of those packages deals with NURBS / CAD geometry (BRep, boundary representation) but merely with triangles (polygon soup). So OCC fits the scope best, IMHO.
I have experience with OpenCASCADE and CGAL. Boost.Geometry is very limited/simple and provides no support for topological structures, solids, BRep, etc.; its purpose has been quite different from the rest. Certainly, its quality is higher than that of the other two.
Of the three, OpenCASCADE is the least preferable choice in terms of quality. For example, everything is defined in the global namespace, there are multiple macro definitions in header files, and the classes are bloated. It has quite some support for various algorithms and constructs, but only up to a point; for the rest you have to pay. This is somewhat understandable, since it was developed as an in-house library not meant for public access, so they did not care about such things. The community is quite small, so you will have to search through the documentation a lot and experiment to find out how to do things. There is usually more than one way to do something, and it is common to write many adaptors in your code to interface between different algorithms.
CGAL, on the other hand, is quite the opposite: it has support for almost anything you can imagine, it is quite modern, and there is a dedicated community along with very good documentation and examples for most use cases. There are different classes and algorithms depending on the trade-offs of the problem at hand. There is support for different UIs (Win/Qt), and it interfaces well with STL/Boost containers and structures. Compared to OpenCASCADE, which does not even have a proper STL-compatible iterator class, that is a significant difference.
Therefore, I would strongly suggest working with CGAL.
In case you are forced to work with OpenCASCADE and want to use CGAL at the same time, you will probably have to order your includes so that OpenCASCADE's Handle macro is undefined while the CGAL headers are parsed, e.g.
#include <TopoDS_Shape.h>                      // any OpenCASCADE header defines the Handle macro
#undef Handle                                  // remove it so the CGAL headers parse cleanly
#include <CGAL/Alpha_shape_3.h>
#define Handle(ClassName) Handle_##ClassName   // restore OpenCASCADE's definition afterwards

How do I reduce the coordinate count of an arbitrary SVG path, without losing much or any precision?

I have recently been scouring the web for tools, programs, utilities, supporting libraries and code primitives that help optimize SVGs for simplicity, space and elegance, to link to from the Kilobyte SVG Challenge's tools section, but have yet to find good primitives focusing on how to reduce the number of coordinates of a path without losing much – or ideally any – precision.
Take this marker-augmented version of the Coca-Cola logo, for instance (~7kb, essentially all path data) – which very clearly shows lots of promise for reducing its number of Béziers, given some tooling to do the math to come up with a path using fewer nodes while producing essentially the same curve.
For the much simpler problem of polygons and polylines (read "all-line paths"), you can use the Douglas–Peucker or Visvalingam’s algorithm (see Mike Bostock's excellent d3 implementation of the latter) to simply remove the coordinates least affecting the path's shape until you're happy with a size-to-precision fit suiting your needs.
I am looking for the equivalent that notices where larger curve (or even arc) segments could replace lots of these redundant mid-curve coordinate stops, without lots of manual tweaking. I think some vector graphics packages (Adobe Illustrator, maybe even Inkscape?) may offer features like these (tips on how to access them welcome!) - though I would love to find scriptable tools we can recommend and offer HOWTOs of how to use from the command line, or even web apps, that squeeze out excess path filler material for people.
For reference, the Kilobyte SVG Challenge is a for-fun SVG education and advocacy stunt I have recently set up. All non-question-topic discussion about it is best held there, and/or on its GitHub repository linked above. Stay awesome! :)
You can use the Ramer–Douglas–Peucker algorithm to simplify polyline or polygon paths.
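For what it's worth, the algorithm is only a handful of lines. A sketch in C++ (recursive form; eps is the maximum allowed perpendicular deviation from the simplified path):
#include <cmath>
#include <vector>
struct Pt { double x, y; };
// Perpendicular distance from p to the line through a and b.
static double perpDist(Pt p, Pt a, Pt b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::hypot(dx, dy);
    if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
    return std::fabs(dx * (a.y - p.y) - dy * (a.x - p.x)) / len;
}
static void rdp(const std::vector<Pt>& pts, std::size_t lo, std::size_t hi,
                double eps, std::vector<Pt>& out) {
    double maxD = 0.0; std::size_t idx = lo;
    for (std::size_t i = lo + 1; i < hi; ++i) {
        double d = perpDist(pts[i], pts[lo], pts[hi]);
        if (d > maxD) { maxD = d; idx = i; }
    }
    if (maxD > eps) {             // keep the farthest point, recurse on both halves
        rdp(pts, lo, idx, eps, out);
        rdp(pts, idx, hi, eps, out);
    } else {
        out.push_back(pts[hi]);   // everything between lo and hi is dropped
    }
}
std::vector<Pt> simplify(const std::vector<Pt>& pts, double eps) {
    if (pts.size() < 3) return pts;
    std::vector<Pt> out(1, pts.front());
    rdp(pts, 0, pts.size() - 1, eps, out);
    return out;
}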

Texture coordinates: how to calculate them?

Is there a formula that I can use to calculate texture coordinates for a complex object, not something like a cube or sphere?
The texture coordinates are usually set manually by whoever creates the model, using the modelling package.
There are ways of automating the whole process to a great extent. The results may not be much use if somebody is going to draw the texture based on the UV coordinates, and if you ask for the impossible (e.g., mapping a sphere exactly, with no distortion and no seams) then you may not get perfect results -- but for processes such as light mapping this is a common approach.
Levy's LSCM is one approach, as used in Blender, for example. See http://alice.loria.fr/index.php/publications.html?Paper=lscm#2002
Direct3D9 has a UV unwrap tool in its D3DX library; I'm not sure what algorithm it uses, and the documentation isn't amazing, but it does work. See
http://msdn.microsoft.com/en-us/library/bb206321(VS.85).aspx
(Most 3D modelling packages have some kind of automated UV unwrap, too, but in general they never seem to have had too much time spent on them. Presumably the expectation is that somebody will want to go through and fix it up by hand afterwards anyway.)
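For completeness, the simplest fully automatic mapping is a plain spherical projection. A sketch (assumes positions normalized to unit vectors from the mesh centre; note the inevitable seam where atan2 wraps, which is exactly the distortion/seam trade-off mentioned above):
#include <cmath>
struct UV { float u, v; };
// (x, y, z) is assumed to be a unit direction from the mesh centre.
UV sphericalUV(float x, float y, float z) {
    const float PI = 3.14159265f;
    UV uv;
    uv.u = 0.5f + std::atan2(z, x) / (2.0f * PI);  // longitude; seam at the wrap
    uv.v = 0.5f - std::asin(y) / PI;               // latitude
    return uv;
}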

Antialiasing alternatives

I've seen antialiasing on Windows using GDI+, Java, and also that provided by Photoshop and Gimp. Are there any other libraries out there which provide an antialiasing facility without depending on support from the host OS?
Antigrain Geometry provides anti-aliased graphics in software.
As simon pointed out, the term anti-aliasing is misused/abused quite regularly, so it's always helpful to know exactly what you're trying to do.
Since you mention GDI, I'll assume you're talking about maintaining nice crisp edges when you resize images - so that something like a character in a font looks clean and not pixelated when you resize it to 2x or 3x its original size. For these sorts of things I've used a technique in the past called alpha-tested magnification - you can read the whitepaper here:
http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
When I implemented it, I used more than one plane so I could get better edges on all types of objects; the paper covers that briefly towards the end. Of all the approaches (that I've used) to maintain quality when scaling vector images, this was the easiest and gave the highest quality. It also has the advantage of being easily implemented in hardware. From an existing API standpoint, your best bet is to use either OpenGL or Direct3D - that being said, it really only requires bilinear filtering and texture mapping to accomplish what it does, so you could roll your own (I have in the past). If you are always dealing with rectangles and only need scaling, it's pretty trivial; adding rotation doesn't add much complexity. If you do roll your own, pay particular attention to subpixel positioning (how you resolve pixel positions that do not fall on a full pixel), as this is critical to the quality and sometimes overlooked.
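The core of the technique is tiny. A hedged sketch of the per-pixel step (the hardware does the bilinear filtering of the stored distance field; your shader just thresholds the sample - shown here as plain C++ for clarity rather than shader code):
// sdfAlpha: the bilinearly filtered alpha sample, 0.5 = the glyph edge.
// smoothing: half-width of the soft edge, in alpha units.
float coverage(float sdfAlpha, float smoothing) {
    // Hard alpha test (the paper's baseline): return sdfAlpha >= 0.5f ? 1.0f : 0.0f;
    float t = (sdfAlpha - (0.5f - smoothing)) / (2.0f * smoothing);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return t * t * (3.0f - 2.0f * t);   // smoothstep across the edge
}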
Hope that helps!
There are many anti-aliasing approaches (often misnamed, btw, but that's a dead horse) that can be used. Depending on what you know about the original signal and what the intended use is, different approaches are likely to give you the desired result.
"Support from the host OS" is probably most sensible if the output is through the OS display facilities, since they have the most information about what is being done to the image.
I suppose that's a long way of asking: what are you actually trying to do? Many graphics libraries will provide some form of antialiasing; whether it is appropriate depends a lot on what you're trying to achieve.

Collision detection with hardware generated primitives

There's a lot of literature on collision detection, and I've read a big enough portion of it to be fairly familiar with most techniques. However, there's something that has eluded me for a while, and I figured, since StackOverflow provides access to a large group of brilliant minds at once, I'd ask here first before digging around in the bookshelf.
In this day and age, more and more work is being delegated to the GPU rather than the CPU, and in a lot of cases this is a good thing. For example, there are geometry shaders to create new geometry, or (slightly less new, but still quite fascinating) vertex shaders at which you can throw a bunch of vertices, and something elegant will come out. What I was wondering, though: as these primitives exist only on the graphics hardware, how would you perform reliable collision detection with them? Let's assume I have some kind of extremely simplified mesh which is displaced in a vertex shader (I don't have a concrete problem; I'm more playing with the idea, and I haven't gotten very deep into geometry shaders yet).
What I've considered so far is separate 'rendering' passes from suitable angles with shading more or less turned off, and perhaps a lower-resolution mesh: render the inside (with faces flipped inward) of my second primitive along with the mesh I want to test against, and execute an occlusion query for the mesh. If the mesh is completely occluded, there'd be no intersection. This would of course require that my second primitive is convex.
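In OpenGL terms, the test I have in mind would look roughly like this (sketch only: it assumes an existing context with GL 1.5+ query support, and drawConvexVolumeInsideOut / drawTestMesh are hypothetical stand-ins for the actual draw calls):
GLuint q;
glGenQueries(1, &q);
glEnable(GL_DEPTH_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // depth-only passes
drawConvexVolumeInsideOut();   // hypothetical: the flipped-face convex volume
glBeginQuery(GL_SAMPLES_PASSED, q);
drawTestMesh();                // hypothetical: the (displaced) mesh under test
glEndQuery(GL_SAMPLES_PASSED);
GLuint samples = 0;
glGetQueryObjectuiv(q, GL_QUERY_RESULT, &samples);     // blocks until ready
// samples == 0 from every tested view -> mesh fully occluded -> no intersection
glDeleteQueries(1, &q);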
Somehow I get the feeling that this kind of test will be extremely expensive as the number of primitives increases (even if a large portion can be culled directly). Does anyone have another idea or technique? I'm more familiar with OpenGL and Cg than DirectX, but if you have examples in DirectX, I guess I'll be able to figure out the OpenGL counterparts.
All ideas are appreciated, so please brainstorm. :)
It sounds like Dan Horn's article “Stream Reduction Operations for GPGPU Applications” in GPU Gems 2 is exactly what you want. Like all chapters, it's freely available online.
