Triangulated Irregular Networks from qhull

I want to create TINs from 3D points (about 7 million in every file) using qhull.
Can anyone suggest a resource showing how to do this? Thanks!

I've never used QHull, since it is hard to integrate as a library into an existing project. Try out Triangle instead; it is specialized for 2D and is very easy to use (it comes with an example of how to call it from other C code).
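For completeness: Qhull's Delaunay code is also exposed through SciPy, so a TIN can be built by triangulating only the xy coordinates of the 3D points (the usual 2.5D approach for terrain). A minimal sketch, assuming a plain-text file of "x y z" rows (file name hypothetical):
import numpy as np
from scipy.spatial import Delaunay  # SciPy wraps Qhull

pts = np.loadtxt("points.xyz")  # hypothetical input: one "x y z" row per point
tri = Delaunay(pts[:, :2])      # Delaunay triangulation of the xy plane
# tri.simplices is an (M, 3) array of vertex indices; together with the
# heights in pts[:, 2] it forms the TIN.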

I can recommend a software package called Streaming Computation of Delaunay Triangulations. On a normal computer it can compute Delaunay triangulations for large, well-distributed data sets in 2D and 3D, greatly accelerated by exploiting the natural spatial coherence in a stream of points. In terms of performance:
"We compute a billion-triangle terrain representation for the Neuse River system from 11.2 GB of LIDAR data in 48 minutes using only 70 MB of memory on a laptop."
You can check out this video explaining their method/software.

Wikipedia says:
A TIN comprises a triangular network of vertices, known as mass points, with associated coordinates in three dimensions connected by edges to form a triangular tessellation. Three-dimensional visualizations are readily created by rendering of the triangular facets. In regions where there is little variation in surface height, the points may be widely spaced, whereas in areas of more intense variation in height the point density is increased.
A TIN is typically based on a Delaunay triangulation, but its utility will be limited by the selection of input data points: well-chosen points will be located so as to capture significant changes in surface form, such as topographical summits, breaks of slope, ridges, valley floors, pits and cols.
MATLAB can generate 3-D and n-D Delaunay tessellations using Qhull.
For a 3-dimensional Delaunay tessellation, tetramesh is used to plot the tetrahedra that form the corresponding simplices (source: mathworks.com).
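The same Qhull-backed tessellation is available outside MATLAB as well; for illustration, a SciPy sketch of a 3-D Delaunay tessellation (random points as a stand-in):
import numpy as np
from scipy.spatial import Delaunay  # backed by Qhull

pts = np.random.rand(30, 3)  # stand-in 3D point set
tess = Delaunay(pts)         # 3-D Delaunay tessellation
# tess.simplices is an (M, 4) index array: each row is one tetrahedron,
# the 3-D analogue of what tetramesh plots in MATLAB.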

Related

Consistent normal calculation of a point cloud

Is there a library in Python or C++ that is capable of estimating normals of point clouds in a consistent way?
By consistent I mean that the orientation of the normals is globally preserved over the surface.
For example, when I use python open3d package:
downpcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=4, max_nn=300))
I get inconsistent results, where some of the normals point inward while the rest point outward.
many thanks
UPDATE: GOOD NEWS!
The tangent plane algorithm is now implemented in Open3D!
The source code and the documentation.
You can just call pcd.orient_normals_consistent_tangent_plane(k=15), where k is the knn-graph parameter.
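Putting the pieces together, a minimal end-to-end sketch (the file name is hypothetical, and this assumes a recent Open3D where estimate_normals is a PointCloud method):
import open3d as o3d

pcd = o3d.io.read_point_cloud("cloud.ply")  # hypothetical input file
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=4, max_nn=300))
pcd.orient_normals_consistent_tangent_plane(k=15)  # tangent-plane propagation
o3d.visualization.draw_geometries([pcd], point_show_normal=True)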
Original answer:
Like Mark said, if your point cloud comes from multiple depth images, you can call open3d.geometry.orient_normals_towards_camera_location(pcd, camera_loc) on each cloud before concatenating them (assuming you're using the Python version of Open3D).
However, if you don't have that information, you can use the tangent plane algorithm:
1. Build a knn-graph for your point cloud: the graph nodes are the points, and two points are connected if one is the other's k-nearest-neighbor.
2. Assign weights to the edges in the graph: the weight associated with edge (i, j) is computed as 1 - |n_i · n_j|.
3. Generate the minimum spanning tree of the resulting graph.
4. Root the tree at an initial node, then traverse the tree in depth-first order, assigning each node an orientation that is consistent with that of its parent.
This algorithm comes from Section 3.3 of Hoppe's 1992 SIGGRAPH paper, Surface Reconstruction from Unorganized Points. The algorithm is also open sourced.
AFAIK the algorithm does not guarantee a perfect orientation, but it should be good enough.
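If you want the propagation outside Open3D, the four steps above map directly onto SciPy primitives. A sketch under those assumptions (not Open3D's or Hoppe's actual code; the small epsilon keeps zero-weight edges from vanishing in the sparse matrix):
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, depth_first_order

def orient_normals_mst(points, normals, k=15):
    n = len(points)
    # 1. knn graph: connect each point to its k nearest neighbors
    _, idx = cKDTree(points).query(points, k=k + 1)
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()  # drop the self-match in column 0
    # 2. edge weight 1 - |n_i . n_j|
    w = 1.0 - np.abs(np.einsum('ij,ij->i', normals[rows], normals[cols])) + 1e-8
    graph = csr_matrix((w, (rows, cols)), shape=(n, n))
    # 3. minimum spanning tree of the weighted graph
    mst = minimum_spanning_tree(graph)
    # 4. depth-first traversal, flipping each normal to match its parent
    order, pred = depth_first_order(mst, i_start=0, directed=False,
                                    return_predecessors=True)
    for node in order[1:]:
        if np.dot(normals[node], normals[pred[node]]) < 0:
            normals[node] = -normals[node]
    return normals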
If you know the viewpoint from which each point was captured, it can be used to orient the normals.
I assume that this is not the case, so given your data, which looks rather watertight and uniformly sampled, mesh reconstruction is promising.
The PCL library offers many alternatives in its surface module. For the sake of normal estimation, I would start with either:
ConcaveHull
Greedy projection triangulation
Although simple, they should be enough to produce a single coherent mesh.
Once you have a mesh, each triangle defines a normal (the cross product). It is important to note that a mesh isn't just a collection of independent faces. The faces are connected and this connectivity enforces a coherent orientation across the mesh.
pcl::PolygonMesh is a "half-edge data structure". This means that every triangle face is defined by an ordered set of vertices, which defines the orientation:
order of vertices => order of cross product => well-defined, unambiguous normals
You can either use the normals from the mesh (nearest neighbor), or calculate a low resolution mesh and just use it to orient the cloud.
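To make the winding argument concrete, here is how per-face normals fall out of the vertex order (a hypothetical NumPy helper, not PCL code):
import numpy as np

def face_normals(vertices, faces):
    # vertices: (N, 3) floats; faces: (M, 3) index triples in a consistent
    # winding order -- that order fixes the sign of each normal
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n, axis=1, keepdims=True)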

Interpolation of Polyhedron

Given a polyhedron defined by a matrix of 3-dimensional vertices and its faces (Delaunay triangles), I want to be able to create a smooth 3-D object.
Is there any software that has a built-in function that would allow me to do this?
If not, I have found a paper that seems to describe what I want, but I am unable to fully understand the math: http://graphics.berkeley.edu/papers/Turk-MIS-2002-10/Turk-MIS-2002-10.pdf
Here is an example of what I am looking for (image: a rabbit model).
One solution for "smoothing" geometry, if we state the problem a bit more formally, is to perform mean curvature flow on your mesh. Some related search terms: "curve-shortening flow", "mean curvature flow", "Willmore flow", "conformal curvature flow"...
(Image source: Keenan Crane; context and permission.)
"Smoothness of a surface or curve is very hard to define. (For an empirical test on what people perceive as smooth see http://www.levien.com/phd/thesis.pdf#page=23).
If you only care about perceived smoothness, for example, smoother appearance while rendering in high resolution etc., an easier approach would be Catmull-Clark subdivision scheme.
The geometric intuition is quite simple. In the case of a 2D curve, in every instance, every point on a curve moves according to some function of the curvature at that point. If we let the curve or surface move like this for some time, it will start smoothing out areas with high curvature more and more, eventually becoming a circle (or a sphere in 3d) and then collapse to a point. So for smoothing usually we have to preserve areas or volumes.
One way to define it is in terms of some energy, and our goal is to minimise this energy on the mesh. For example willmore flow minimises the willmore energy. Sometimes this process is called fairing.
I am not aware of a prepackaged library or tool, that's freely available and open source for curvature flow.
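To give a flavour of what such a tool would do: uniform Laplacian ("umbrella") smoothing is a crude discrete relative of mean curvature flow and fits in a few lines. A sketch with hypothetical inputs (note that it shrinks the mesh, since nothing here preserves volume):
import numpy as np

def laplacian_smooth(vertices, faces, steps=10, dt=0.5):
    # build vertex adjacency from the triangle faces
    nbrs = [set() for _ in range(len(vertices))]
    for a, b, c in faces:
        nbrs[a].update((b, c)); nbrs[b].update((a, c)); nbrs[c].update((a, b))
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(steps):
        # move each vertex a fraction dt toward the centroid of its neighbors;
        # this displacement approximates the uniform Laplacian, which points
        # roughly along the mean-curvature normal
        lap = np.array([v[list(nb)].mean(axis=0) - v[i]
                        for i, nb in enumerate(nbrs)])
        v += dt * lap
    return v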
Algorithms
2D only
K. Mikula, D. Sevcovic, "Tangentially stabilized Lagrangian algorithm for elastic curve evolution driven by intrinsic Laplacian of curvature" (pdf).
2D and 3D
https://www.youtube.com/watch?v=Jhqlmcms04M
Keenan Crane's page has more information on this and more examples too.
http://www.cs.cmu.edu/~kmcrane/Projects/ConformalWillmoreFlow/
2D and 3D (level set method)
https://math.berkeley.edu/~sethian/2006/level_set.html

What formula or algorithm can I use to draw a 3D Sphere without using OpenGL-like libs?

I know that there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
Then, those models go through hidden surface removal algorithm.
Am I correct?
If so, what formula or algorithm can I use to draw a 3D sphere?
I am using a low-level library named WinBGIm from the University of Colorado.
there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
These are modelling techniques, not rendering techniques. They allow you to mathematically define your mesh's geometry. How you render this data onto a 2D canvas is another story.
There are two fundamental approaches to rendering 3D models on a 2D canvas.
Ray Tracing
The basic idea of ray tracing is to cast a ray from the camera's origin through the point on the canvas whose colour needs to be determined, find which models it hits, pick the closest one, and determine how that point is lit to compute the colour there. The lighting is computed by tracing further rays from the hit point to all the light sources in the scene. Notice that this approach eliminates the need for separate hidden-surface-determination algorithms like back-face culling, the z-buffer, etc., since the ray casting itself resolves visibility.
There are packages, libraries, etc. that help you do this, though ray tracers are also commonly written from scratch as college-level projects. This approach takes more time to render (not to code), but the results are generally more pleasing than those of the approach below. It is therefore more popular for rendering non-interactive visuals like movies.
Rasterization
This approach takes the primitives (triangles and quads) that define the models in the scene, samples them at regular intervals (the screen pixels they cover) and writes the results to a colour buffer. Here hidden surfaces are usually eliminated using the Z-buffer: a buffer that stores the z-order of each fragment, where the closer fragment wins when writing to the colour buffer.
Rasterization is the more popular approach, with cheap hardware support available on most modern computers thanks to years of research and investment. Libraries like OpenGL and Direct3D are readily available to facilitate development. Although the results are less pleasing than ray tracing, it is faster to render and thus widely used in interactive, real-time rendering like games.
If you don't want to use those libraries, you have to do what is commonly known as software rendering, i.e. you end up doing yourself what these libraries do.
What formula or algorithm can I use to draw a 3D Sphere?
Depends on which of the above approaches you choose. If you simply rasterize a 3D sphere in 2D with an orthographic projection, all you have to do is draw a circle on the canvas.
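If you go the ray-tracing route instead, the sphere is the easiest primitive: substituting the ray p = t*d into |p - c|^2 = r^2 gives a quadratic in t. A minimal grayscale sketch with the camera at the origin and Lambertian shading (all values hypothetical):
import numpy as np

def render_sphere(w=200, h=200, r=0.6):
    center = np.array([0.0, 0.0, 3.0])  # sphere in front of the camera
    light = np.array([1.0, 1.0, -1.0])
    light /= np.linalg.norm(light)
    img = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            d = np.array([(x - w / 2) / (w / 2), (y - h / 2) / (h / 2), 1.0])
            d /= np.linalg.norm(d)    # ray direction through this pixel
            b = np.dot(d, center)     # quadratic: t^2 - 2*b*t + |c|^2 - r^2 = 0
            disc = b * b - np.dot(center, center) + r * r
            if disc >= 0:             # the ray hits the sphere
                t = b - np.sqrt(disc)            # nearest intersection
                normal = (t * d - center) / r    # unit surface normal
                img[y, x] = max(0.0, np.dot(normal, light))  # Lambert term
    return img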
If you are looking for hidden-line removal (drawing the edges rather than the inside of the faces), the solution is easy: "back-face culling".
Every edge of your model belongs to two faces. For every face you can compute the normal vector and check whether it faces the observer (by the sign of the dot product of the normal and the direction of the projection line); in other words, whether the observer is located in the outer half-space defined by the plane of the face. An edge is then wholly visible if and only if it belongs to at least one front face.
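The culling test itself is one cross product and one dot product per face; a sketch (hypothetical names):
import numpy as np

def front_facing(v0, v1, v2, view_dir):
    # the winding order defines the face normal; the face is front-facing
    # when that normal points against the projection direction
    return np.dot(np.cross(v1 - v0, v2 - v0), view_dir) < 0
# an edge is drawn iff at least one of its two adjacent faces is front-facing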
The usual discretization of the sphere is made by drawing equidistant parallels and meridians. It may be advantageous to adjust the spacing of the parallels so that all tiles have about the same area.
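On the equal-area adjustment: by Archimedes' hat-box theorem, parallels placed at uniformly spaced z cut the sphere into bands of equal area. A sketch (hypothetical helper):
import numpy as np

def sphere_grid(n_lat=16, n_lon=32, r=1.0, equal_area=True):
    if equal_area:
        z = np.linspace(-r, r, n_lat + 1)           # uniform z => equal-area bands
        theta = np.arccos(np.clip(z / r, -1.0, 1.0))
    else:
        theta = np.linspace(0.0, np.pi, n_lat + 1)  # equidistant parallels
    phi = np.linspace(0.0, 2.0 * np.pi, n_lon, endpoint=False)
    t, p = np.meshgrid(theta, phi, indexing='ij')
    return np.stack([r * np.sin(t) * np.cos(p),     # (n_lat+1, n_lon, 3) vertices
                     r * np.sin(t) * np.sin(p),
                     r * np.cos(t)], axis=-1)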

How does a graphics engine figure out how to place pixels to make a 3d image?

I was wondering what procedure a simple 3d program uses to draw 2d pixels so that they appear 3d. I'm really interested in this for drawing purposes since if a program can figure out how to use a flat screen to produce images with depth then maybe I could use those techniques in my drawing.
Are there any basic 3D engines out there I can look at, without any 2D-to-3D abstractions?
Two notions may interest you:
The perspective projection, which is the mathematical transformation that takes 3D points (or vertices) and the characteristics of your camera (position, orientation, frustum, ...) and gives you the 2D projection of the point on your chosen medium (screen). A sketch follows the links below.
Wikipedia - 3D Projection
StackOverflow - Transform GPS-Points to Screen-Points with Perspective Projection in Android (I made a detailed answer)
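The core of the projection is just a divide by depth; a pinhole-camera sketch for points already in camera space (names hypothetical):
import numpy as np

def project(points, f=1.0, width=640, height=480):
    # points: (N, 3) in camera space, +z pointing into the screen
    x = f * points[:, 0] / points[:, 2]
    y = f * points[:, 1] / points[:, 2]
    px = (x + 1.0) * 0.5 * width   # map [-1, 1] to pixel coordinates
    py = (1.0 - y) * 0.5 * height  # flip y: screen origin is top-left
    return np.stack([px, py], axis=1)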
The Painter's algorithm (since you ask about drawing-related techniques), a rendering method which sorts all the elements of your scene by depth after projection and draws them on your medium in order of decreasing depth, so that far objects are hidden behind closer ones, imitating how painters work. This algorithm has some limits (it is far from efficient in its basic implementation, and it cannot easily deal with elements that intersect or circularly overlap each other), so nowadays a more efficient method is usually used, Z-buffering, which resolves depth conflicts on a pixel-by-pixel basis. A sketch of the depth sort follows the links below.
Wikipedia - Painter's algorithm
Wikipedia - Z-buffering
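The sorting step of the Painter's algorithm is a one-liner once you have camera-space vertices; a sketch (hypothetical names):
import numpy as np

def painter_order(faces, verts_cam):
    # faces: (M, 3) vertex indices; depth = mean camera-space z per face
    depth = verts_cam[faces].mean(axis=1)[:, 2]
    return np.argsort(-depth)  # draw the farthest faces first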
By combining those notions, you can actually implement your own simple 3D engine (in the other StackOverflow thread linked above, I gave a link to an article I wrote about creating such an engine easily).
If you want to look at more complex engines and notions, you can take a look at the GPU Gems 3 by Nvidia for instance, or look at articles about OpenGL.
Hope it helped, bye !

What does 'Polygon' mean in terms of 3D Graphics?

An old Direct3D book says:
"...you can achieve an acceptable frame rate with hardware acceleration while displaying between 2000 and 4000 polygons per frame..."
What is one polygon in Direct3D? Do they mean one primitive (indexed or otherwise) or one triangle?
That book means triangles. Otherwise, what if I wanted 1000-sided polygons? Could I still achieve 2000-4000 such shapes per frame?
In practice, the only thing you'll want it to be is a triangle, because any polygon that is not a triangle is generally tessellated into triangles anyway (e.g., a quad consists of two triangles, et cetera). A basic triangulation (tessellation) algorithm for a convex polygon is really simple: pick one vertex as an anchor and turn it plus each pair of consecutive remaining vertices into a triangle, as sketched below.
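A sketch of that fan triangulation (hypothetical helper, valid for convex polygons):
def fan_triangulate(indices):
    # anchor at the first vertex and fan across the rest; assumes the
    # polygon is convex and its vertices are listed in winding order
    return [(indices[0], indices[i], indices[i + 1])
            for i in range(1, len(indices) - 1)]
# e.g. a quad [0, 1, 2, 3] becomes [(0, 1, 2), (0, 2, 3)]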
Here, a "polygon" refers to a triangle. All . However, as you point out, there are many more variables than just the number of triangles which determine performance.
Key issues that matter are:
The format of storage (indexed or not; list, fan, or strip)
The location of storage (host-memory vertex arrays, host-memory vertex buffers, or GPU-memory vertex buffers)
The mode of rendering (is the draw primitive command issued fully from the host, or via instancing)
Triangle size
Together, those variables can produce far more than a 2x variation in performance.
Similarly, the hardware on which the application is running may vary 10x or more in performance in the real world: a GPU (or integrated graphics processor) that was low-end in 2005 will perform 10-100x slower in any meaningful metric than a current top-of-the-line GPU.
All told, any recommendation that you use 2-4000 triangles is so ridiculously outdated that it should be entirely ignored today. Even low-end hardware today can easily push 100,000 triangles in a frame under reasonable conditions. Further, most visually interesting applications today are dominated by pixel shading performance, not triangle count.
General rules of thumb for achieving good triangle throughput today:
Use [indexed] triangle (or quad) lists
Store data in GPU-memory vertex buffers
Draw large batches with each draw primitives call (thousands of primitives)
Use triangles mostly >= 16 pixels on screen
Don't use the Geometry Shader (especially for geometry amplification)
Do all of those things, and any machine today should be able to render tens or hundreds of thousands of triangles with ease.
According to this page, a polygon in Direct3D is n-sided. In C#:
public static Mesh Polygon(
    Device device,
    float length,
    int sides
)
As others have already said, polygons here means triangles.
The main advantage of triangles is that, since 3 points define a plane, a triangle is coplanar by definition. This means that every point within the triangle is exactly defined as a linear combination of the polygon's points. More vertices aren't necessarily coplanar, nor do they define a unique surface.
An advantage that matters more in mechanical modeling than in graphics is that triangles are also rigid (undeformable).
