Creating a pp3 point pattern in spatstat for a cone-shaped point cloud

How can I create a 3D point pattern (pp3) in the spatstat package for a point cloud whose domain is not a box (for example, a cone or a tetrahedron)?

Currently it is only possible to use a three-dimensional box as the domain of a three-dimensional point pattern, and I don't expect this to change anytime soon.

Related

Most efficient and effective way to create a surface from 3D points

Say I had a point cloud with n points in 3D space (relatively densely packed together). What is the most efficient way to create a surface that contains every single point and lets me calculate values such as the normal and curvature at some point on the surface that was created? I also need to be able to create this surface as fast as possible (a few milliseconds, hopefully, working with Python), and it can be assumed that n < 1000.
There is no "most efficient and effective" way (this is true of any problem in any domain).
In the first place, the surface you have in mind is not uniquely defined mathematically.
A possible approach is by means of so-called alpha shapes, implemented either from a Delaunay tetrahedrization or by the ball-pivoting method. For other methods, look up "mesh reconstruction" or "surface reconstruction".
On the other hand, normals and curvature can be computed locally, from neighbor configurations, without reconstructing a surface (though there is an ambiguity in the orientation of the normals).
I would suggest Nina Amenta's Power Crust algorithm (link to code), or the MeshLab suite, which can compute the curvatures too.
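For instance, a minimal sketch of the alpha-shape route using Open3D's Python API; the file name and the alpha value are placeholders that need tuning to your point density:

import open3d as o3d

pcd = o3d.io.read_point_cloud("points.ply")  # placeholder input file
# Alpha-shape reconstruction, built internally from a Delaunay tetrahedrization
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha=0.03)
mesh.compute_vertex_normals()  # per-vertex normals on the reconstructed surface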

3D point cloud matching

I have a 3D point cloud and I would like to match different point clouds with each other for recognition purposes. Does OpenCV or TensorFlow do this for me? If yes, how?
Example:
src1 = pointCloud of object 1
src2 = pointCloud of object 2
compare(src1, src2)
Output: Both point clouds are of the same object or different objects.
I want to achieve something like this. Please help with some ideas or resources.
OpenCV Surface Matching can be used to detect and find the pose of a given point cloud within another point cloud.
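A rough sketch of that pipeline in Python, assuming an opencv-contrib build with the ppf_match_3d module; the file names and parameters are placeholders:

import cv2

model = cv2.ppf_match_3d.loadPLYSimple("object1.ply", 1)  # 1 = PLY includes normals
scene = cv2.ppf_match_3d.loadPLYSimple("object2.ply", 1)
detector = cv2.ppf_match_3d_PPF3DDetector(0.025, 0.05)  # relative sampling/distance steps
detector.trainModel(model)
results = detector.match(scene, 1.0 / 40.0, 0.05)  # candidate poses of model in scene
# Each result carries a pose, vote count, and residual; strong, low-residual
# matches suggest the two clouds contain the same object.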
Open3D has a 3D reconstruction module, but it is aimed at registering (finding the poses of) RGBD images and reconstructing a 3D object from them. One sub-step registers different point cloud fragments (finding the pose of each cloud) to combine them into a single point cloud for reconstruction, but I'm not sure whether that is useful for your task.
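Neither OpenCV nor TensorFlow gives you a one-call compare(), but here is a hedged sketch of the registration route using Open3D's ICP (recent versions), treating the fitness score as a similarity measure; the thresholds are placeholders, not anything standardized:

import numpy as np
import open3d as o3d

def compare(src1, src2, max_dist=0.02, fitness_threshold=0.9):
    result = o3d.pipelines.registration.registration_icp(
        src2, src1, max_dist, np.identity(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # fitness = fraction of points that found a correspondence after alignment
    return result.fitness > fitness_threshold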
There are also many 3D point cloud object detection methods that use neural networks, but you have to generate the data needed for training if your objects are not available in a standard dataset.

Consistent normal calculation of a point cloud

Is there a library in Python or C++ that is capable of estimating normals of point clouds in a consistent way?
By a consistent way, I mean that the orientation of the normals is globally consistent over the surface.
For example, when I use python open3d package:
downpcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=4, max_nn=300))
I get inconsistent results, where some of the normals point inward while the rest point outward.
Many thanks!
UPDATE: GOOD NEWS!
The tangent plane algorithm is now implemented in Open3D!
The source code and the documentation.
You can just call pcd.orient_normals_consistent_tangent_plane(k=15), where k is the knn-graph parameter.
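A minimal end-to-end sketch (the input file name is a placeholder):

import open3d as o3d

pcd = o3d.io.read_point_cloud("cloud.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=4, max_nn=300))
pcd.orient_normals_consistent_tangent_plane(k=15)  # propagate a consistent orientation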
Original answer:
Like Mark said, if your point cloud comes from multiple depth images, then you can call open3d.geometry.orient_normals_towards_camera_location(pcd, camera_loc) before concatenating them together (assuming you're using the Python version of Open3D).
However, if you don't have that information, you can use the tangent plane algorithm:
Build a knn-graph for your point cloud: the graph nodes are the points, and two points are connected if one is among the other's k nearest neighbors.
Assign weights to the edges in the graph: the weight of edge (i, j) is computed as 1 - |ni · nj|.
Generate the minimum spanning tree of the resulting graph.
Root the tree at an initial node, then traverse the tree in depth-first order, assigning each node an orientation that is consistent with that of its parent.
The above algorithm comes from Section 3.3 of Hoppe's 1992 SIGGRAPH paper, Surface Reconstruction from Unorganized Points; the algorithm is also open-sourced.
AFAIK the algorithm does not guarantee a perfect orientation, but it should be good enough.
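For reference, a sketch of that MST-based propagation in plain numpy/scipy (not the Open3D implementation, just an illustration), assuming points and unoriented normals are (N, 3) arrays and the knn-graph is connected:

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, depth_first_order

def orient_normals(points, normals, k=15):
    n = len(points)
    # knn graph: connect each point to its k nearest neighbors
    _, idx = cKDTree(points).query(points, k=k + 1)  # column 0 is the point itself
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    # Edge weight 1 - |ni . nj|: small where neighboring normals are near-parallel
    dots = np.abs(np.einsum('ij,ij->i', normals[rows], normals[cols]))
    weights = 1.0 - dots + 1e-8  # strictly positive so the MST keeps every edge
    graph = coo_matrix((weights, (rows, cols)), shape=(n, n))
    mst = minimum_spanning_tree(graph)
    mst = mst + mst.T  # symmetrize for an undirected traversal
    # Depth-first traversal from an arbitrary root, flipping each normal to
    # agree with its parent's orientation
    order, pred = depth_first_order(mst, i_start=0, directed=False)
    oriented = normals.copy()
    for node in order[1:]:
        if np.dot(oriented[node], oriented[pred[node]]) < 0:
            oriented[node] = -oriented[node]
    return oriented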
If you know the viewpoint from which each point was captured, it can be used to orient the normals.
I assume that this is not the case, so given your data, which seem to be sampled rather uniformly from a watertight surface, mesh reconstruction is promising.
The PCL library offers many alternatives in its surface module. For the sake of normal estimation, I would start with either:
ConcaveHull
Greedy projection triangulation
Although simple, they should be enough to produce a single coherent mesh.
Once you have a mesh, each triangle defines a normal (via the cross product of two of its edges). It is important to note that a mesh isn't just a collection of independent faces: the faces are connected, and this connectivity enforces a coherent orientation across the mesh.
In pcl::PolygonMesh, every triangle face is defined by an ordered set of vertex indices, and this ordering defines the orientation:
order of vertices => order of cross product => well-defined, unambiguous normals
You can either use the normals from the mesh (via nearest neighbors), or compute a low-resolution mesh and just use it to orient the cloud.
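As a small numpy illustration of how the vertex order fixes the normal:

import numpy as np

def triangle_normal(v0, v1, v2):
    # Right-hand rule: counter-clockwise vertex order (v0, v1, v2) yields the
    # outward normal; swapping any two vertices flips its direction.
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n)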

Does an algorithm exist to calculate the union of two watertight meshes?

I have two watertight models (meshes). I would like to generate a mesh that represents the intersection of these two models.
Does an algorithm exist for calculating the mesh that represents the intersection of two models? If so, can you provide (high level) details of the algorithm or a reference?
See this answer to a related problem.
For each mesh, an oracle function can be constructed that determines whether a query line segment intersects the surface (and where) as well as the location of the segment endpoints (inside / outside the solid). The two oracle functions can then be combined together to construct an oracle function for the intersection of the two solids bound by the meshes. This new oracle function can then be fed to surface meshing algorithms like Marching Cubes variants or Delaunay-based approaches (see 3D Surface Mesh Generation in the CGAL documentation) to reconstruct a mesh representation of the intersection.
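A coarse sketch of the oracle idea in Python, where inside_a and inside_b are hypothetical inside/outside tests for the two watertight solids (they could be built, e.g., with trimesh's containment queries), sampled on a grid and fed to skimage's marching cubes:

import numpy as np
from skimage import measure

def intersect_meshes(inside_a, inside_b, bounds_min, bounds_max, res=64):
    axes = [np.linspace(lo, hi, res) for lo, hi in zip(bounds_min, bounds_max)]
    grid = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1).reshape(-1, 3)
    # A point is inside the intersection iff it is inside both solids
    field = (inside_a(grid) & inside_b(grid)).astype(float).reshape(res, res, res)
    # Extract the 0.5 iso-surface; vertices come back in grid-index coordinates,
    # so rescaling to world coordinates is omitted here
    verts, faces, _, _ = measure.marching_cubes(field, level=0.5)
    return verts, faces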

How to merge two data sets of point clouds or polygons into one? (Merging, not appending)

I am developing a real-time scanner with which I can scan a surface in real time. Until now I can scan one patch of the surface and save it.
One patch means just one scan of the surface (point cloud or triangles).
What I want is to scan multiple patches in real time. For this purpose, I have to merge the previous patch with the current patch. But I don't know the standard way or algorithm to merge two patches, nor which stage is best for merging, for example before triangulation (point cloud merging) or after triangulation (mesh merging).
Merging means removing overlapping points or triangles.
My idea: if there are two point clouds, source and target, then using VTK, find the closest point in the target cloud for each source point, keep one of the pair, and discard the other. Is this a valid method for merging? It is just my idea.
But the problem is that the number of points in the source and target will be different.
How can I merge two patches using VTK? Kindly guide me.
Also, please suggest the standard and optimal way to achieve the real-time scanning task.
Case # 1:
i) Point Cloud Acquisition
ii) Register
iii) Merge
iv) Triangulate
Case # 2:
i) Point Cloud Acquisition
ii) Register
iii) Triangulate
iv) Merge
Case # 3:
i) Point Cloud Acquisition
ii) Triangulate
iii) Register
iv) Merge
Please guide me.
Thanks.
I put a note here because I have just been thinking about something similar.
Your suggested method (doing a nearest-neighbour search during the merge) does seem possible. The different sizes of the two clouds being merged are not a problem if you do a radius search at some desired resolution rather than a single-nearest-neighbour search.
To manage your case 1, you could try merging all the clouds and then downsampling with a voxel grid (e.g. pcl::VoxelGrid) before the triangulation; this would be the easiest way but may not be what you want.
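The same merge-then-downsample idea, sketched with Open3D's Python API instead of pcl::VoxelGrid; the file names and voxel size are placeholders:

import open3d as o3d

patches = [o3d.io.read_point_cloud(f) for f in ("patch1.ply", "patch2.ply")]
merged = patches[0] + patches[1]  # concatenate already-registered patches
# Voxel downsampling collapses overlapping points into one point per voxel
merged = merged.voxel_down_sample(voxel_size=0.005)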
The algorithm encapsulated in pcl::GreedyProjectionTriangulation is mostly described in the paper below [1]. In that paper the authors also describe an incremental mesh update procedure, which is a minor change to the algorithm (they remove triangles close to a new point and start the greedy triangulation again). As far as I know, this has not been implemented in PCL, but it shouldn't be too difficult; it would correspond to your case 2. However, the mesh you get out would depend on the order in which you merged the clouds. Because it is a time investment, I would suggest trying the point-based merging first.
[1] Marton, Z. C., R. B. Rusu, and M. Beetz. 2009. "On Fast Surface Reconstruction Methods for Large and Noisy Point Clouds." In IEEE International Conference on Robotics and Automation, 3218-3223. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5152628.
