Given a polyhedron defined by a matrix of 3D vertices and its faces (Delaunay triangles), I want to be able to create a smooth 3D object.
Is there any software with a built-in function that would allow me to do this?
If not, I have found a paper that seems to describe what I want, but I am unable to fully understand the math. http://graphics.berkeley.edu/papers/Turk-MIS-2002-10/Turk-MIS-2002-10.pdf.
Here is an example of what I am looking for.
(Image: Rabbit)
One solution for "smoothing" geometry, if we state the problem a bit more formally, is to perform mean curvature flow on your mesh. Here are some search terms: "curve-shortening flow", "mean curvature flow", "Willmore flow", "conformal curvature flow".
Image source: Keenan Crane. Context and permission
"Smoothness of a surface or curve is very hard to define. (For an empirical test on what people perceive as smooth see http://www.levien.com/phd/thesis.pdf#page=23).
If you only care about perceived smoothness, for example, smoother appearance while rendering in high resolution etc., an easier approach would be Catmull-Clark subdivision scheme.
The geometric intuition is quite simple. In the case of a 2D curve, at every instant, every point on the curve moves according to some function of the curvature at that point. If we let the curve or surface move like this for some time, it smooths out areas of high curvature more and more, eventually becoming a circle (or a sphere in 3D) and then collapsing to a point. So for smoothing we usually have to constrain the flow to preserve area or volume.
One way to define the flow is in terms of some energy, and our goal is to minimise this energy on the mesh. For example, Willmore flow minimises the Willmore energy. Sometimes this process is called fairing.
I am not aware of a prepackaged, freely available, open-source library or tool for curvature flow.
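That said, a toy version is short to write. Below is a minimal sketch of explicit Laplacian ("umbrella operator") smoothing, a crude discrete stand-in for mean curvature flow; the `verts`/`faces` names and the step size are illustrative, and note that this naive variant shrinks the mesh, so area/volume is not preserved.

```python
import numpy as np

def laplacian_smooth(verts, faces, n_iters=10, step=0.5):
    """verts: (n, 3) float array; faces: (m, 3) int array of triangle indices."""
    n = len(verts)
    # Build vertex adjacency from the triangle edges.
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    v = verts.astype(float).copy()
    for _ in range(n_iters):
        # Move each vertex toward the centroid of its neighbors.
        centroids = np.array([v[list(nb)].mean(axis=0) for nb in neighbors])
        v += step * (centroids - v)
    return v
```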
Algorithms
2D only
K. Mikula, D. Sevcovic, "Tangentially stabilized Lagrangian algorithm for elastic curve evolution driven by intrinsic Laplacian of curvature" (PDF)
2D and 3D
https://www.youtube.com/watch?v=Jhqlmcms04M
Keenan Crane's page has more information on this and more examples too.
http://www.cs.cmu.edu/~kmcrane/Projects/ConformalWillmoreFlow/
2D and 3D (level set method)
https://math.berkeley.edu/~sethian/2006/level_set.html
Related
Is there a library in Python or C++ that is capable of estimating normals of point clouds in a consistent way?
By "consistent" I mean that the orientation of the normals is globally coherent over the surface.
For example, when I use python open3d package:
downpcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=4, max_nn=300))
I get inconsistent results, where some of the normals point inward while the rest point outward.
Many thanks.
UPDATE: GOOD NEWS!
The tangent plane algorithm is now implemented in Open3D!
The source code and the documentation.
You can just call pcd.orient_normals_consistent_tangent_plane(k=15), where k is the kNN-graph parameter.
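A minimal end-to-end usage sketch (the file path is illustrative):

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("cloud.ply")  # illustrative path
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=4, max_nn=300))
# Propagate a globally consistent orientation along the tangent-plane MST.
pcd.orient_normals_consistent_tangent_plane(k=15)
```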
Original answer:
Like Mark said, if your point cloud comes from multiple depth images, then you can call open3d.geometry.orient_normals_towards_camera_location(pcd, camera_loc) before concatenating them together (assuming you're using the Python version of Open3D).
However, if you don't have that information, you can use the tangent plane algorithm:
1. Build a kNN graph for your point cloud. The graph nodes are the points; two points are connected if one is among the other's k nearest neighbors.
2. Assign weights to the edges in the graph. The weight of edge (i, j) is 1 - |ni ⋅ nj|, so nearly parallel normals give cheap edges.
3. Generate the minimum spanning tree of the resulting graph.
4. Root the tree at an initial node, then traverse the tree in depth-first order, assigning each node an orientation consistent with that of its parent.
The above algorithm comes from Section 3.3 of Hoppe's 1992 SIGGRAPH paper, Surface Reconstruction from Unorganized Points. The algorithm is also open sourced.
AFAIK the algorithm does not guarantee a perfect orientation, but it should be good enough.
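For the curious, a rough sketch of that propagation with SciPy (array names and the epsilon are illustrative; Hoppe additionally seeds the root normal to point away from the surface, which is omitted here):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, depth_first_order
from scipy.spatial import cKDTree

def orient_normals(points, normals, k=15):
    """points, normals: (n, 3) arrays; normals are unit length but unoriented."""
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)  # first hit is the point itself
    rows, cols, weights = [], [], []
    for i in range(n):
        for j in idx[i, 1:]:
            # Low weight when neighboring normals are nearly parallel;
            # a small epsilon keeps zero-weight edges in the sparse graph.
            rows.append(i)
            cols.append(j)
            weights.append(1.0 - abs(np.dot(normals[i], normals[j])) + 1e-9)
    graph = csr_matrix((weights, (rows, cols)), shape=(n, n))
    mst = minimum_spanning_tree(graph)
    order, pred = depth_first_order(mst + mst.T, i_start=0, directed=False)
    oriented = normals.copy()
    for node in order[1:]:                 # assumes a connected graph
        if np.dot(oriented[node], oriented[pred[node]]) < 0:
            oriented[node] = -oriented[node]  # flip to match the parent
    return oriented
```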
If you know the viewpoint from where each point was captured, it can be used to orient the normals.
I assume that this is not the case, so given your situation, where the cloud seems watertight and uniformly sampled, mesh reconstruction is promising.
The PCL library offers many alternatives in its surface module. For the sake of normal estimation, I would start with either:
ConcaveHull
Greedy projection triangulation
Although simple, they should be enough to produce a single coherent mesh.
Once you have a mesh, each triangle defines a normal (via the cross product of its edges). It is important to note that a mesh isn't just a collection of independent faces: the faces are connected, and this connectivity enforces a coherent orientation across the mesh.
pcl::PolygonMesh is a "half-edge data structure". This means that every triangle face is defined by an ordered set of vertices, which defines the orientation:
order of vertices => order of cross product => well-defined, unambiguous normals
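Concretely, with a consistent counter-clockwise winding (a sketch; names are illustrative):

```python
import numpy as np

def face_normal(v0, v1, v2):
    # CCW vertex order plus the right-hand rule gives an outward normal.
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n)
```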
You can either use the normals from the mesh (nearest neighbor), or calculate a low resolution mesh and just use it to orient the cloud.
I know that there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
Then, those models go through a hidden-surface-removal algorithm.
Am I correct?
That being the case, what formula or algorithm can I use to draw a 3D sphere?
I am using a low-level library named WinBGIm from Colorado University.
there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
These are modelling techniques and not rendering techniques. They allow you to mathematically define your mesh's geometry. How you render this data on to a 2D canvas is another story.
There are two fundamental approaches to rendering 3D models on a 2D canvas.
Ray Tracing
The basic idea of ray tracing is to shoot a ray from the camera's origin through the point on the canvas whose colour needs to be determined. Determine which models are hit by the ray, pick the closest hit, and work out how that point is lit to compute the colour there. This is done by tracing further rays from the hit point to all the light sources in the scene. Notice that this approach eliminates the need for hidden-surface-determination algorithms like back-face culling, the Z-buffer, etc., since hidden-surface removal is built into the ray casting itself.
There are packages, libraries, etc. that help you do this, though it's common for ray tracers to be written from scratch as college-level projects. This approach takes more time to render (not to code), but the results are generally more pleasing than those of the approach below. It is therefore more popular for non-interactive visuals like movies.
Rasterization
This approach takes the primitives (triangles and quads) that define the models in the scene, samples them at regular intervals (the screen pixels they cover), and writes the results to a colour buffer. Here hidden surfaces are usually eliminated using the Z-buffer: a buffer that stores the depth of the nearest fragment seen so far, and the closer fragment wins when writing to the colour buffer.
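A toy version of that depth test (buffer sizes are illustrative):

```python
import numpy as np

W, H = 320, 240
color = np.zeros((H, W, 3))
zbuf = np.full((H, W), np.inf)       # start infinitely far away

def write_fragment(x, y, z, rgb):
    if z < zbuf[y, x]:               # the closer fragment wins
        zbuf[y, x] = z
        color[y, x] = rgb
```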
Rasterization is the more popular approach, with cheap hardware support available on most modern computers thanks to the years of research and money that have gone into it. Libraries like OpenGL and Direct3D are readily available to facilitate development. Although the results are less pleasing than ray tracing's, it's faster to render and thus widely used in interactive, real-time rendering like games.
If you don't want to use those libraries, then you have to do what is commonly known as software rendering, i.e. you will end up doing what these libraries do.
What formula or algorithm can I use to draw a 3D Sphere?
That depends on which of the above you choose. If you simply rasterize a 3D sphere in 2D with an orthographic projection, all you have to do is draw a circle on the canvas.
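If you go the ray-tracing route instead, the relevant formula is the ray-sphere intersection: substitute the ray o + t·d into |p − c|² = r² and solve the resulting quadratic for t. A minimal sketch (assumes a normalized direction; names are illustrative):

```python
import numpy as np

def ray_sphere(origin, direction, center, radius):
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)        # quadratic coefficients, with a = 1
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                        # the ray misses the sphere
    t = (-b - np.sqrt(disc)) / 2.0         # nearer of the two hits
    return t if t >= 0 else None
```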
If you are looking for hidden lines removal (drawing the edges rather than the inside of the faces), the solution is easy: "back face culling".
Every edge of your model belongs to two faces. For every face you can compute the normal vector and check whether it faces the observer (by the sign of the dot product of the normal and the direction of the projection line); in other words, whether the observer is located in the outer half-space defined by the plane of the face. An edge is then wholly visible if and only if it belongs to at least one front face.
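A sketch of that test (assumes consistently wound triangles and a `view_dir` pointing from the observer into the scene; names are illustrative):

```python
import numpy as np

def front_facing(v0, v1, v2, view_dir):
    normal = np.cross(v1 - v0, v2 - v0)
    return np.dot(normal, view_dir) < 0    # normal points back at the observer

def edge_visible(face_a, face_b, verts, view_dir):
    # An edge is drawn iff at least one of its two faces is front-facing.
    return any(front_facing(*(verts[i] for i in f), view_dir)
               for f in (face_a, face_b))
```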
The usual discretizations of the sphere are made by drawing equidistant parallels and meridians. It may be advantageous to adjust the spacing of the parallels so that all tiles have about the same area.
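For the tessellation itself, a minimal UV-sphere vertex sketch (counts are illustrative; the equal-area adjustment mentioned above is omitted):

```python
import numpy as np

def uv_sphere_vertices(radius=1.0, n_parallels=16, n_meridians=32):
    verts = []
    for i in range(n_parallels + 1):
        theta = np.pi * i / n_parallels            # polar angle, pole to pole
        for j in range(n_meridians):
            phi = 2.0 * np.pi * j / n_meridians    # azimuth around the axis
            verts.append((radius * np.sin(theta) * np.cos(phi),
                          radius * np.sin(theta) * np.sin(phi),
                          radius * np.cos(theta)))
    return np.array(verts)
```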
I am looking for an algorithm that given two meshes could clip one using another.
The simplest form of this is clipping a mesh using a plane. I've already implemented that by following something similar to what is described here.
What it does is basically inspect all mesh vertices and triangles with respect to the plane (the plane's normal and a point on it are given). If a triangle is completely above the plane, it is left untouched. If it falls completely below the plane, it is discarded. If some of the triangle's edges intersect the plane, the intersection points with the plane are calculated and added as new vertices. Finally, a cap is generated to close the hole where the mesh was cut.
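For concreteness, the classification step looks roughly like this (a sketch; the plane is given by a unit `normal` and a `point` on it, and all names are illustrative):

```python
import numpy as np

def classify_triangle(tri, point, normal, eps=1e-9):
    # Signed distance of each vertex to the plane.
    d = [np.dot(v - point, normal) for v in tri]
    if all(x >= -eps for x in d):
        return "keep"      # entirely above: left untouched
    if all(x <= eps for x in d):
        return "discard"   # entirely below
    return "split"         # a crossing edge (d0, d1) hits at v0 + d0/(d0-d1)*(v1-v0)
```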
The problem is that the algorithm assumes that the plane is unlimited, therefore whatever is in its path is clipped. In the simplest form, I need an extension of this without the assumption of a plane of "infinite" size.
To clarify, imagine that we have a 3D model of a desk with two boxes on it. The boxes are adjacent (but not touching or stacked). The user will define a cutting plane of limited width and height underneath the first box and perform the cut. We end up with a desk model (mesh) with one box on it, and another box (mesh) that can be freely moved around and manipulated.
In the general form, I'd like the user to be able to define a bounding box for the box he/she wants to separate from the desk model and perform the cut using that bounding box.
If I could extend the algorithm I already have to an algorithm with limited-sized planes, that would be great for now.
What you're looking for are constructive solid geometry/boolean algorithms with arbitrary meshes. It's considerably more complex than slicing meshes by an infinite plane.
Among the earliest and simplest research in this area, and a good starting point, is Constructive Solid Geometry for Polyhedral Objects by Laidlaw, Trumbore, and Hughes:
http://cs.brown.edu/~jfh/papers/Laidlaw-CSG-1986/main.htm
More elaborate solutions extend upon this subject with a variety of data structures.
The real complexity of the operation lies in the slicing algorithm that slices one triangle against another. The nightmare of implementing robust CSG lies in numerical precision. With objects far more complex than a cube, it's easy to run into cases where a slice lands just barely next to a vertex (at which point you face the tough decision of whether to merge the new split vertex before carrying out more splits), where polygons are coplanar (or almost coplanar), and so on.
So I suggest initially erring on the side of very high-precision floating-point numbers, possibly even higher than double precision, and focusing on getting something working correctly and robustly. You can optimize later (a first pass should add an accelerator like an octree, k-d tree, or BVH); you'll avoid many headaches this way in your first iteration.
This is vastly simpler to implement at render time if you're writing a ray tracer rather than modeling software. With ray tracers, all you have to do for this kind of arbitrary clipping is treat an object that is subtracted from another as having its polygons flipped during culling. The problem is easy to solve robustly at the ray level, but quite a bit harder at the geometric level.
Another thing you can do to make your life much easier, if you can afford it, is to voxelize your objects, compute subtractions/additions/unions on the voxels, and then convert the voxels back into a mesh. This is much easier to make robust, but harder to do efficiently, and the voxel-to-polygon conversion can get quite involved if you want better results than what marching cubes provides.
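The voxel route makes the booleans themselves almost trivial, e.g. (grid size and shapes are illustrative):

```python
import numpy as np

a = np.zeros((64, 64, 64), dtype=bool)   # voxelized object A
b = np.zeros((64, 64, 64), dtype=bool)   # voxelized object B
a[8:40, 8:40, 8:40] = True
b[24:56, 24:56, 24:56] = True

union        = a | b
intersection = a & b
difference   = a & ~b                    # A minus B
# Convert back to a mesh with e.g. skimage.measure.marching_cubes.
```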
It's a really tough area to do extremely well and requires perseverance, hence the existence of dedicated libraries like this one: http://carve-csg.com/about.
If anyone is interested, there is currently a solution for this problem in the CGAL library. It allows clipping one triangular mesh using another mesh as the bounding volume. A usage example can be found here.
I have a list of cubic Bezier curves in 3D, such that the curves are connected to each other and close a cycle.
I am looking for a way to create a surface from the Bezier curves. Eventually I want to triangulate the surface and present it in a graphics application.
Is there an algorithm for surfacing a closed path of cubic bezier segments?
It looks like you only know part of the details of the surface (given by the Bezier curves) and you have to extrapolate the surface out of them. As a simple example, imagine a bunch of circles in 3D, each given by its center and radius, to be reconstructed into a sphere.
If this is the case, you can use level sets. With level sets, you define a set of input parameters that determine the force exerted by external factors on your surface and the 'tension' of the surface.
Crudely put, level sets define the behaviour of a surface as it expands (or contracts) over time. As it expands, it tries to maintain its smoothness while meeting other boundary conditions, like 'sticking' to the circles in this case. So if you want a sphere from a bunch of circles, the circles will exert a great force, while the surface will also be very tense.
PhysBAM has an open-source implementation of level sets.
CGAL and PCL also provide a host of methods that generate surfaces from things such as point sets and implicit surfaces. You may be able to adapt one of them for your use.
You can look into the algorithms they use if you want to implement one on your own. I think at least one of them uses the Poisson Surface Reconstruction algorithm.
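If you end up sampling your curves into points, Poisson reconstruction is nearly a one-liner in, for example, Open3D (a sketch; the file path and depth are illustrative, and the point cloud needs consistently oriented normals):

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("curve_samples.ply")  # illustrative path
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(k=15)   # Poisson needs oriented normals
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)
```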
I was wondering what procedure a simple 3D program uses to draw 2D pixels so that they appear 3D. I'm really interested in this for drawing purposes: if a program can figure out how to use a flat screen to produce images with depth, then maybe I could use those techniques in my own drawing.
Are there any basic 3D engines out there I can look at, without any 2D-to-3D abstractions?
Two notions may interest you:
The perspective projection, the mathematical transformation which takes 3D points (or vertices) and the characteristics of your camera (position, orientation, frustum, ...) and gives you the 2D projection of the point on your chosen medium (screen).
Wikipedia - 3D Projection
StackOverflow - Transform GPS-Points to Screen-Points with Perspective Projection in Android (I made a detailed answer)
The painter's algorithm (since you seem to ask for drawing-related techniques), a rendering method which sorts all the elements of your scene by depth after their projection and draws them on your medium in order of decreasing depth, to ensure a realistic output ("far objects hidden behind closer ones", imitating the painter's method). This algorithm has some limits, however (it is far from efficient in its basic implementation, and it can't easily handle elements that intersect or circularly overlap each other), so nowadays a more efficient method is usually used, Z-buffering, which resolves depth conflicts on a pixel-by-pixel basis.
Wikipedia - Painter's algorithm
Wikipedia - Z-buffering
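A bare-bones sketch of the first notion (camera at the origin looking down -z with focal length f; the names and the viewport mapping are illustrative):

```python
def project(x, y, z, f=1.0, width=800, height=600):
    # Perspective divide: farther points land closer to the center.
    sx = f * x / -z
    sy = f * y / -z
    # Map from normalized coordinates to pixel coordinates (y flipped).
    return ((sx + 1.0) * 0.5 * width,
            (1.0 - (sy + 1.0) * 0.5) * height)
```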
By combining those notions, you can actually implement your own simple 3D engine (in the other StackOverflow thread linked above, I gave a pointer to an article I wrote about creating such an engine easily).
If you want to look at more complex engines and notions, you can take a look at the GPU Gems 3 by Nvidia for instance, or look at articles about OpenGL.
Hope it helped, bye!