The difference between triangulation and mesh - graphics

I have done some computer graphics programming recently, with no prior experience. I used the library called CGAL (Computational Geometry Algorithms Library), and I noticed that it has a class for triangulation and also a class for mesh. Is a mesh just a kind of triangle net? Is there any difference between them?
Thanks!

Triangulation is one way to mesh a geometry; it is also possible to represent the same geometry with other element shapes.
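As a concrete illustration, here is a minimal sketch (assuming CGAL is installed; the coordinates are arbitrary) contrasting the two classes the question refers to: a CGAL triangulation stores triangles by construction, while a general container such as CGAL::Surface_mesh can hold faces with any number of vertices.

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <CGAL/Surface_mesh.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_2<K>                   Triangulation;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;

int main() {
    // A triangulation: every face is a triangle by construction.
    Triangulation tri;
    tri.insert(K::Point_2(0, 0));
    tri.insert(K::Point_2(1, 0));
    tri.insert(K::Point_2(0, 1));
    tri.insert(K::Point_2(1, 1));

    // A mesh: faces may have any number of vertices (here, a single quad).
    Mesh mesh;
    Mesh::Vertex_index a = mesh.add_vertex(K::Point_3(0, 0, 0));
    Mesh::Vertex_index b = mesh.add_vertex(K::Point_3(1, 0, 0));
    Mesh::Vertex_index c = mesh.add_vertex(K::Point_3(1, 1, 0));
    Mesh::Vertex_index d = mesh.add_vertex(K::Point_3(0, 1, 0));
    mesh.add_face(a, b, c, d);

    return 0;
}
```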

Related

3D triangulation of the surface by points

I have an array of 3D points, and from them I need to build a triangulation of only the surface of the body, as shown in the figure. Which method is best to use? It would be great if you could explain how your proposed method works.
Delaunay triangulation didn't help.
If C++ is fine, the CGAL library provides an Advancing Front Surface Reconstruction package.
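A minimal sketch of calling that package (assuming CGAL is available; the point cloud below is just a placeholder for your own data). The package's free function outputs each reconstructed facet as a triple of indices into the input point range.

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Advancing_front_surface_reconstruction.h>
#include <array>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_3                                          Point;
typedef std::array<std::size_t, 3>                          Facet;  // indices into the point array

int main() {
    // Placeholder point cloud; in practice, read your own 3D points here.
    std::vector<Point> points = {
        Point(0, 0, 0), Point(1, 0, 0), Point(0, 1, 0), Point(0, 0, 1)
    };

    // Each output facet is a surface triangle given by three point indices.
    std::vector<Facet> facets;
    CGAL::advancing_front_surface_reconstruction(points.begin(), points.end(),
                                                 std::back_inserter(facets));
    return 0;
}
```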

Silhouette below 3D model

There are some 3D applications that can cast a shadow or silhouette below a 3D model, and they render fast and smoothly. I wonder what the standard technique is for producing a 3D model's shadow/silhouette.
For example, is there a C++ library like libigl or CGAL that can compute the shadow/silhouette quickly? Or is GLSL shading typically used? Any hint about the standard technology stack would be appreciated.
For rendering, it's trivial. Just project the vertices to the surface (for the case of the XY plane, this just entails setting the Z coordinate to 0) and render the triangles. There'll be a lot of overlap, but since you're just rendering that won't matter.
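A sketch of that projection step, using hypothetical Vec3/Triangle types not tied to any particular library:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Returns a copy of the mesh with every vertex dropped onto the XY plane (z = 0),
// ready to be rendered as the ground silhouette.
std::vector<Triangle> project_to_ground(const std::vector<Triangle>& mesh) {
    std::vector<Triangle> flat = mesh;
    for (Triangle& t : flat) {
        t.a.z = 0.0f;
        t.b.z = 0.0f;
        t.c.z = 0.0f;
    }
    return flat;
}
```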
If you're trying to build a set of polygons representing the silhouette shape, you'll need to instead union the projected triangles using something like the Vatti clipping algorithm.
Computing shadows is a vast and difficult topic. In the real world, light sources are extended, so shadow edges are not sharp (there is penumbra). Then there are cast shadows, and even self-shadows.
If you limit yourself to point light sources (hence sharp shadows), there is a simple principle: place an observer at the light source; the faces that observer can see are illuminated by that light source, and conversely, the surfaces hidden from it are in shadow.
For correct rendering, the shadowed areas should be back-projected onto the scene and painted black.
By nature, ray-tracing techniques make this process easy to implement.
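A small sketch of the simplest part of that visibility principle (hypothetical types): a face turned away from a point light lies in its own shadow. A full solution would add an occlusion test for cast shadows, e.g. a shadow ray or a depth map rendered from the light's viewpoint.

```cpp
struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// True if the triangle (a, b, c) faces the point light at `light`,
// assuming counter-clockwise winding defines the outward normal.
bool faces_light(Vec3 a, Vec3 b, Vec3 c, Vec3 light) {
    Vec3 normal   = cross(sub(b, a), sub(c, a));
    Vec3 to_light = sub(light, a);
    return dot(normal, to_light) > 0.0f;
}
```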

Generating coordinates for abstract triangulation

I have an abstract triangulation, made entirely of equilateral triangles, that describes a curved 2D space. As a result, some vertices have, for example, 7 equilateral triangles attached to them. Now I want to draw this as a terrain.
This has to be done in 3D, so I expect a lot of saddle points and some cone-like structures. I am currently trying to find a good algorithm that does this for me, but so far I have come up empty-handed. In principle you could 'just' solve a large set of quadratic equations that fixes all the edge lengths, but that is infeasible. I would be content with an algorithm that gives a best approximation.
Any advice?
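One possible best-approximation approach (not proposed in the thread, just an illustrative sketch): treat every edge of the abstract triangulation as a unit-length spring and iteratively relax the 3D vertex positions toward those edge lengths. Vertices with more than six triangles then tend to bulge into saddle-like regions, and vertices with fewer into cone-like ones. Types, step size, and iteration count are illustrative assumptions.

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };

// Nudges vertex positions so that every edge length drifts toward 1.
void relax(std::vector<Vec3>& pos,
           const std::vector<std::pair<int, int>>& edges,
           int iterations = 1000, double step = 0.1) {
    for (int it = 0; it < iterations; ++it) {
        for (auto [i, j] : edges) {
            Vec3 d{pos[j].x - pos[i].x, pos[j].y - pos[i].y, pos[j].z - pos[i].z};
            double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
            if (len < 1e-12) continue;
            double err = (len - 1.0) / len;   // positive if the edge is too long
            double s = 0.5 * step * err;
            pos[i].x += s * d.x; pos[i].y += s * d.y; pos[i].z += s * d.z;
            pos[j].x -= s * d.x; pos[j].y -= s * d.y; pos[j].z -= s * d.z;
        }
    }
}
```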

Quad mesh generation code

I am looking for some sample code (any language) for quadrilateral mesh generation. However, it seems to be quite a difficult task!
I am not picky, I'd like to mesh at least polygons with holes, nothing fancy! So, we're talking about 2D planar shapes here.
Any hint?
PS. Of course, if it could even handle curved surfaces, I'd be even happier!
Quadrilateral meshing is by no means easy, especially if the elements should be more or less well-shaped. There is no algorithm that can handle arbitrary shapes without the element quality deteriorating. For a good number of specific problem classes, however, algorithms can be found in applied-mathematics and computational-science books and papers.
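As a starting point only, here is a deliberately crude sketch (not a real quad mesher): overlay a structured grid on the bounding box of a polygon with holes and keep the cells whose centers lie inside, using an even-odd point-in-polygon test over all boundary loops. Boundary cells are not fitted to the outline, which is exactly the hard part real algorithms address.

```cpp
#include <vector>

struct Pt { double x, y; };
typedef std::vector<Pt> Loop;   // one closed boundary loop (outer outline or a hole)

// Even-odd rule over all loops: crossing a hole boundary flips the parity back.
bool inside(const std::vector<Loop>& loops, Pt p) {
    bool in = false;
    for (const Loop& loop : loops) {
        for (std::size_t i = 0, j = loop.size() - 1; i < loop.size(); j = i++) {
            bool crosses = (loop[i].y > p.y) != (loop[j].y > p.y);
            if (crosses &&
                p.x < (loop[j].x - loop[i].x) * (p.y - loop[i].y) /
                          (loop[j].y - loop[i].y) + loop[i].x)
                in = !in;
        }
    }
    return in;
}

// Emits axis-aligned quads of size h covering the box [xmin,xmax] x [ymin,ymax]
// whose centers fall inside the region bounded by `loops`.
std::vector<std::vector<Pt>> grid_quads(const std::vector<Loop>& loops,
                                        double xmin, double ymin,
                                        double xmax, double ymax, double h) {
    std::vector<std::vector<Pt>> quads;
    for (double y = ymin; y + h <= ymax; y += h)
        for (double x = xmin; x + h <= xmax; x += h)
            if (inside(loops, Pt{x + h / 2, y + h / 2}))
                quads.push_back({Pt{x, y}, Pt{x + h, y},
                                 Pt{x + h, y + h}, Pt{x, y + h}});
    return quads;
}
```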

Graphics-Related Question: Mesh and Geometry

What's the difference between a mesh and geometry? Aren't they the same thing, i.e. a collection of vertices that form triangles?
A point is geometry, but it is not a mesh. A curve is geometry, but it is not a mesh. An iso-surface is geometry, but it is not a mesh... well, you get the point by now.
Meshes are geometry, not the other way around.
Geometry in the context of computing is far more limited than geometry as a branch of mathematics. Only a few types of geometry are typically used in computer graphics: sprites are used when rendering points (particles), line segments are used when rendering curves, and meshes are used when rendering surface-like geometry.
A mesh is typically a collection of polygons or geometric objects, for instance triangles, quads, or a mixture of various polygons. A mesh is simply a more complex shape.
From Wikipedia:
"Geometry is a part of mathematics concerned with questions of size, shape, and relative position of figures and with properties of space."
IMO a mesh falls under that definition.
In the context implied by your question:
A mesh is a collection of polygons arranged in such a way that each polygon shares at least one vertex with another polygon in that collection. You can reach any polygon in a mesh from any other polygon in that mesh by traversing the edges and vertices that define those polygons.
Geometry refers to any object in space whose properties may be described according to the principles of the branch of mathematics known as geometry.
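To make that definition concrete, here is a minimal sketch (hypothetical types) of the usual indexed representation: faces refer to shared vertices by index, so adjacent faces are linked through the vertices they have in common and the whole mesh can be traversed along those shared edges.

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

struct IndexedMesh {
    std::vector<Vec3>               vertices;  // each vertex stored once
    std::vector<std::array<int, 3>> faces;     // triangles as vertex indices
};

// Two triangles sharing the edge between vertices 1 and 2.
IndexedMesh make_two_triangles() {
    IndexedMesh m;
    m.vertices = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {1, 1, 0} };
    m.faces    = { {0, 1, 2}, {1, 3, 2} };
    return m;
}
```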
Note that the term "geometry" has different meanings in mathematics and in rendering. In rendering it usually denotes what is static in a scene (walls, etc.). What is widely called a "mesh" is a group of geometric objects (basically triangles) that describe or form an "object" in the scene, pretty much as envalid said, but usually a mesh forms a single object or entity in a scene. Very often that is how rendering engines use the term: the geometric data of each scene element (object, entity) composes that element's mesh.
Although this is tagged "graphics", I think the answer connects with the interpretation from computational physics. There, we usually think of the geometry as an abstraction of the system to be represented/simulated, while the mesh is an approximation of the geometry, a compromise we usually have to make in order to represent the spatial domain within the finite memory of the machine.
You can think of a mesh basically as a regular or unstructured set of points "sprayed" onto a surface or throughout a volume in space.
To be able to do visualization/simulation, it is also necessary to determine the neighbors of each point - for example using Delaunay triangulation which allows you to group sets of points into elements (for which you can solve algebraic versions of the equations describing your system).
In the context of surface representation in computer graphics, I think all major APIs (e.g. OpenGL) have functions which can display these primitives (which can be triangles as given by Delaunay, quads or maybe some other elements).
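A hedged sketch of that pipeline with CGAL (assuming it is installed): scatter a few 2D points, let a Delaunay triangulation determine the connectivity, and enumerate the resulting triangular elements, which is the neighborhood information a simulation or visualization mesh needs.

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <iostream>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_2<K>                   DT;

int main() {
    std::vector<K::Point_2> pts;
    pts.push_back(K::Point_2(0, 0));
    pts.push_back(K::Point_2(1, 0));
    pts.push_back(K::Point_2(0, 1));
    pts.push_back(K::Point_2(1, 1));
    pts.push_back(K::Point_2(0.4, 0.6));

    DT dt;
    dt.insert(pts.begin(), pts.end());

    // Each finite face is one triangular element of the mesh.
    for (auto f = dt.finite_faces_begin(); f != dt.finite_faces_end(); ++f)
        std::cout << f->vertex(0)->point() << "  "
                  << f->vertex(1)->point() << "  "
                  << f->vertex(2)->point() << '\n';
    return 0;
}
```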
