Need help to visualize atom contacts in a Delaunay triangulation - graphics

I am new to CGAL. I am working on a school project to compute the Delaunay triangulation of a protein structure. How can I visualize the DT structure in MeshLab? I tried using Poisson surface reconstruction, but PSR uses a constrained DT and adds new edges, which is not what I want.
I want to visualize the edge contacts between the 3D atom points in the Delaunay triangulation. Could anyone help me?

Poisson surface reconstruction does not use a constrained Delaunay triangulation; it defines a function whose 0-level set is meshed using the CGAL surface mesher.
If you want to compute the Delaunay triangulation of a point set, simply use the class Delaunay_triangulation_3. The best way to visualize the triangulation is to display its edges. I don't think MeshLab is able to display polylines, but it is pretty straightforward to modify one of the examples provided by CGAL, extract the edges of the triangulation, and generate for example a CGO that you can open in PyMOL in addition to your protein structure.
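For illustration, here is a minimal Python sketch of that idea, using scipy.spatial.Delaunay (which wraps Qhull) as a stand-in for CGAL's Delaunay_triangulation_3; the input file name and output script name are placeholders:

    import numpy as np
    from itertools import combinations
    from scipy.spatial import Delaunay

    points = np.load("atoms.npy")      # (n, 3) atom coordinates; placeholder name
    tri = Delaunay(points)

    # Collect the unique edges of all tetrahedra.
    edges = set()
    for simplex in tri.simplices:      # four vertex indices per tetrahedron
        for i, j in combinations(simplex, 2):
            edges.add((min(i, j), max(i, j)))

    # Emit a PyMOL script that draws the edges as a CGO line set.
    with open("dt_edges.py", "w") as f:
        f.write("from pymol.cgo import BEGIN, END, LINES, VERTEX\n")
        f.write("from pymol import cmd\n")
        f.write("obj = [BEGIN, LINES]\n")
        for i, j in sorted(edges):
            for p in (points[i], points[j]):
                f.write("obj += [VERTEX, %f, %f, %f]\n" % tuple(p))
        f.write("obj += [END]\n")
        f.write("cmd.load_cgo(obj, 'dt_edges')\n")

Running the generated script inside PyMOL (run dt_edges.py) overlays the triangulation edges on the loaded structure.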

Related

3D triangulation of the surface by points

I have an array of 3D points, and from them I need to build a triangulation of only the surface of the body, as shown in the figure. Which method is best to use? It would be great if you could explain how your proposed method works.
Delaunay triangulation didn't help.
If C++ is fine, you can find the advancing front surface reconstruction package in the CGAL library.
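CGAL's package is C++ only. If Python is acceptable, a rough sketch of the same goal (triangulating only the surface of a point set) is possible with Open3D's ball pivoting, which is a different but related reconstruction algorithm; the file names and radii here are placeholders:

    import open3d as o3d

    # Ball pivoting is not CGAL's advancing front algorithm, but it likewise
    # triangulates only the surface of the cloud; "points.ply" is a placeholder.
    pcd = o3d.io.read_point_cloud("points.ply")
    pcd.estimate_normals()   # ball pivoting needs normals

    # Ball radii should be on the order of the point spacing; tune per data set.
    radii = o3d.utility.DoubleVector([0.05, 0.1, 0.2])
    mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)
    o3d.io.write_triangle_mesh("surface.ply", mesh)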

Consistent normal calculation of a point cloud

Is there a library in python or c++ that is capable of estimating normals of point clouds in a consistent way?
In a consistent way I mean that the orientation of the normals is globally preserved over the surface.
For example, when I use python open3d package:
import open3d as o3d
downpcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(
    radius=4, max_nn=300))
I get inconsistent results, where some of the normals point inward while the rest point outward.
Many thanks.
UPDATE: GOOD NEWS!
The tangent plane algorithm is now implemented in Open3D!
See the source code and the documentation.
You can just call pcd.orient_normals_consistent_tangent_plane(k=15), where k is the knn graph parameter.
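A minimal end-to-end sketch of that call (the file name and parameter values are placeholders):

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("cloud.ply")   # placeholder file name
    pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(
        radius=4, max_nn=300))
    # Propagate one consistent orientation over a k-nearest-neighbor graph.
    pcd.orient_normals_consistent_tangent_plane(k=15)
    o3d.visualization.draw_geometries([pcd], point_show_normal=True)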
Original answer:
Like Mark said, if your point cloud comes from multiple depth images, then you can call open3d.geometry.orient_normals_towards_camera_location(pcd, camera_loc) before concatenating them together (assuming you're using the Python version of Open3D).
However, if you don't have that information, you can use the tangent plane algorithm (a sketch in code follows the steps below):
Build knn-graph for your point cloud.
The graph nodes are the points. Two points are connected if one is the other's k-nearest-neighbor.
Assign weights to the edges in the graph.
The weight associated with edge (i, j) is computed as 1 - |ni ⋅ nj|
Generate the minimal spanning tree of the resulting graph.
Root the tree at an initial node, then traverse the tree in depth-first order, assigning each node an orientation that is consistent with that of its parent.
Actually, the above algorithm comes from Section 3.3 of Hoppe's 1992 SIGGRAPH paper Surface Reconstruction from Unorganized Points. The algorithm is also open sourced.
AFAIK the algorithm does not guarantee a perfect orientation, but it should be good enough.
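For illustration, here is a compact numpy/scipy sketch of those steps, assuming points (n, 3) and unoriented unit normals (n, 3) as numpy arrays; note it only orients the component of the knn graph reachable from the root:

    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree, depth_first_order
    from scipy.spatial import cKDTree

    def orient_normals(points, normals, k=15):
        n = len(points)
        _, knn = cKDTree(points).query(points, k=k + 1)   # first hit is the point itself
        rows, cols, weights = [], [], []
        for i in range(n):
            for j in knn[i, 1:]:
                # Edge weight 1 - |ni . nj| is small where tangent planes agree;
                # the epsilon keeps zero-weight edges in the sparse graph.
                w = 1.0 - abs(float(np.dot(normals[i], normals[j]))) + 1e-8
                rows.append(i); cols.append(int(j)); weights.append(w)
        graph = coo_matrix((weights, (rows, cols)), shape=(n, n))
        mst = minimum_spanning_tree(graph)
        # Depth-first propagation from node 0: flip each normal so it
        # agrees with its parent in the tree.
        order, parents = depth_first_order(mst, i_start=0, directed=False)
        for i in order:
            p = parents[i]
            if p >= 0 and np.dot(normals[p], normals[i]) < 0:
                normals[i] = -normals[i]
        return normals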
If you know the viewpoint from which each point was captured, it can be used to orient the normals.
I assume that this is not the case, so given your situation, where the cloud seems rather watertight and uniformly sampled, mesh reconstruction is promising.
PCL library offers many alternatives in the surface module. For the sake of normal estimation, I would start with either:
ConcaveHull
Greedy projection triangulation
Although simple, they should be enough to produce a single coherent mesh.
Once you have a mesh, each triangle defines a normal (the cross product). It is important to note that a mesh isn't just a collection of independent faces. The faces are connected and this connectivity enforces a coherent orientation across the mesh.
pcl::PolygonMesh is a "half-edge data structure". This means that every triangle face is defined by an ordered set of vertices, which defines the orientation:
order of vertices => order of cross product => well-defined, unambiguous normals
You can either use the normals from the mesh (nearest neighbor), or calculate a low resolution mesh and just use it to orient the cloud.
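For instance, the normal implied by an ordered triangle is just a cross product (a minimal numpy sketch with made-up vertices):

    import numpy as np

    # Counter-clockwise vertex order (seen from outside) gives an outward
    # normal; swapping any two vertices flips it.
    v0 = np.array([0.0, 0.0, 0.0])
    v1 = np.array([1.0, 0.0, 0.0])
    v2 = np.array([0.0, 1.0, 0.0])
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)   # unit normal, here (0, 0, 1)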

VTK - create 3D model

I'm trying to create a 3D mask model from 3D coordinate points stored in a txt file. I use the Marching Cubes algorithm. It looks like it's not able to link individual points, and therefore holes are created in the model.
Steps (following https://lorensen.github.io/VTKExamples/site/Cxx/Modelling/MarchingCubes/):
First, load the 3D points from the file as vtkPolyData.
Then, use vtkVoxelModeller.
Pass the voxelModeller output to the MC algorithm and finally visualize.
Any ideas?
Thanks
The example takes a spherical mesh (a.k.a. a set of triangles forming a sealed 3D shape), converts it to a voxel representation (a 3D image where the voxels outside the mesh are black and those inside are not), then converts it back to a mesh using the Marching Cubes algorithm. In practice, the input and output of the example are very similar meshes.
In your case, you load the points and try to create a voxel representation of them. The problem is that your set of points is not sufficient to define a volume; they are not a sealed mesh, just a list of points.
In order to replicate the example you should do the following:
1) Build a 3D mesh from your points (you gave no information about what the points are/represent, so I can't help you much with this task). In other words, you need to tell how these points are connected to each other to form a 3D shape (vtkPolyData). VTK can't guess how your points are connected; you have to tell it.
2) Once you have a mesh, if you need a voxel representation (vtkImageData) of it, you can use vtkVoxelModeller or vtkImplicitModeller. At this point you can use VTK filters that need a vtkImageData as input.
3) Finally, in order to convert the voxels back to a mesh (vtkPolyData), you can use vtkMarchingCubes (or better, vtkFlyingEdges3D, which is a very similar algorithm but much faster).
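Assuming step 1 is done, steps 2 and 3 might look like this in Python (a sphere source stands in for your mesh; the resolution and isovalue are illustrative):

    import vtk

    # Stand-in for step 1: a sealed surface mesh (replace with the
    # vtkPolyData you build from your points).
    source = vtk.vtkSphereSource()
    source.Update()
    mesh = source.GetOutput()

    # Step 2: voxelize the mesh.
    voxelizer = vtk.vtkVoxelModeller()
    voxelizer.SetInputData(mesh)
    voxelizer.SetSampleDimensions(64, 64, 64)
    voxelizer.SetModelBounds(mesh.GetBounds())
    voxelizer.SetScalarTypeToFloat()
    voxelizer.SetMaximumDistance(0.1)

    # Step 3: convert the voxels back to a mesh.
    surface = vtk.vtkFlyingEdges3D()   # faster drop-in for vtkMarchingCubes
    surface.SetInputConnection(voxelizer.GetOutputPort())
    surface.SetValue(0, 0.5)
    surface.Update()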
Edit:
It is not clear what shape you want, but you can try to use vtkImageOpenClose3D, so the steps are:
First, load 3D points from file as vtkPolyData.
Then, use vtkVoxelModeller
Pass the voxelModeller output to the vtkImageOpenClose3D algorithm, then the vtkImageOpenClose3D output to the MC algorithm (change to vtkFlyingEdges3D), and finally visualize.
Example for vtkImageOpenClose3D:
https://www.vtk.org/Wiki/VTK/Examples/Cxx/Images/ImageOpenClose3D
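Continuing the sketch above, the open/close filter slots between the voxelizer and the isosurface stage (the kernel size and open/close values are illustrative):

    # Morphologically close small holes in the voxel volume before
    # extracting the isosurface.
    open_close = vtk.vtkImageOpenClose3D()
    open_close.SetInputConnection(voxelizer.GetOutputPort())
    open_close.SetOpenValue(0.0)
    open_close.SetCloseValue(1.0)
    open_close.SetKernelSize(5, 5, 5)

    surface.SetInputConnection(open_close.GetOutputPort())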

What's the purpose of a unit normal vector when creating a 3D shape?

I understand that to create a shape (let's say a 3D sphere, for example) I have to first find the vertex locations of the shape and second, use the parametric equation to create the x, y, z points of the triangle meshes. I am currently looking at a sample code to create shapes, and it appears that after using the parametric equation to find the vertices of the triangle meshes, unit normals to the sphere at the vertices are found.
I understand why regular vectors in the first step are used to create the 3D shape, and that a normal vector is perpendicular to the shape's surface, but I don't understand why the unit normal vectors at the vertices are needed to create the shapes. What's the purpose of finding the normals at the vertices?
I am not sure I totally understand your question, but one very important use for normals in computer graphics is calculating reflections. For instance, if you're writing a simple raytracer, Lambertian reflectance is quite easy to compute if you know the normal vector where your camera ray intersects a surface. Normals are similarly required for (off the top of my head) the majority of calculations involved in more complex rendering techniques.
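For example, a minimal numpy sketch of Lambertian diffuse shading (the vectors are made up):

    import numpy as np

    def lambert(normal, light_dir):
        # Diffuse intensity is the cosine between the unit surface normal
        # and the unit direction toward the light, clamped at zero.
        return max(0.0, float(np.dot(normal, light_dir)))

    n = np.array([0.0, 0.0, 1.0])                   # unit normal at the hit point
    l = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)    # unit vector toward the light
    print(lambert(n, l))                            # ~0.707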

Triangulated Irregular Networks from qhull

I wanted to create TINs from 3D points (about 7 million in every file) using qhull.
Can anyone suggest a place where I could see how to do this? Thanks!
I've never used QHull since it is hard to integrate as a library into an existing project. Try out Triangle; it is specialized for 2D and is very easy to use (it comes with an example of how to call it from other C code).
I can recommend a software package called Streaming Computation of Delaunay Triangulations. On a normal computer it can compute Delaunay triangulations for large, well-distributed data sets in 2D and 3D, which can be greatly accelerated by exploiting the natural spatial coherence in a stream of points.
In terms of performance:
We compute a billion-triangle terrain representation for the Neuse River system from 11.2 GB of LIDAR data in 48 minutes using only 70 MB of memory on a laptop.
You can check out this video explaining their method/software.
Wiki says,
A TIN comprises a triangular network of vertices, known as mass points, with associated coordinates in three dimensions connected by edges to form a triangular tessellation. Three-dimensional visualizations are readily created by rendering of the triangular facets. In regions where there is little variation in surface height, the points may be widely spaced, whereas in areas of more intense variation in height the point density is increased.
A TIN is typically based on a Delaunay triangulation, but its utility will be limited by the selection of input data points: well-chosen points will be located so as to capture significant changes in surface form, such as topographical summits, breaks of slope, ridges, valley floors, pits and cols.
MATLAB can generate 3-D and n-D Delaunay tessellations using Qhull.
3-dimensional Delaunay tessellation: tetramesh is used to plot the tetrahedra that form the corresponding simplices.
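If Python is an option, scipy.spatial.Delaunay is also backed by Qhull, and a TIN is just a 2D Delaunay triangulation of the x, y coordinates with z carried along. A minimal sketch (the file name is a placeholder):

    import numpy as np
    from scipy.spatial import Delaunay

    pts = np.loadtxt("points.xyz")   # (n, 3) rows of x, y, z; placeholder name
    tin = Delaunay(pts[:, :2])       # Qhull-backed 2D Delaunay of x, y
    triangles = tin.simplices        # (m, 3) vertex indices of the TIN facets

    # Each facet's 3D corners: pts[triangles] has shape (m, 3, 3).
    print(len(triangles), "triangles")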
