My question is similar to this question. I am using the python-igraph library to create an undirected graph. What I want to achieve is to untangle the graph as much as possible, so that the minimum number of edge crossings is achieved. Then I want to convert this clean layout to a 2D plane where I can read the coordinates of each vertex and no vertex overlaps any other vertex.
For my current graph I have generated the layout based on the Fruchterman-Reingold force-directed algorithm (as shown in the image).
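For reference, this is roughly how I generate the layout and read back the per-vertex coordinates (the random graph here is just a placeholder for my actual one):

import igraph as ig

g = ig.Graph.Erdos_Renyi(n=30, m=45)       # placeholder graph; mine is built elsewhere
layout = g.layout_fruchterman_reingold()   # the force-directed layout mentioned above
coords = layout.coords                     # list of [x, y] coordinates, one per vertex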
Can anyone give me some hints on how I can achieve that? Or can this not be solved in polynomial time, since finding the placement of vertices with the minimum number of edge crossings is an NP-hard problem?
Related
I am currently working on a hole detection problem in 3D point cloud data. I am referring to the paper "Detecting Holes in Point Set Surfaces" by Gerhard H. Bendels, Ruwen Schnabel and Reinhard Klein. One of the criteria mentioned is an angle criterion, in which we need to determine the angles between consecutive points among the radially nearest neighbors of a given point (found via a KD tree).
See image: angle between points.
I am using Open3D to extract a KD Tree but I believe it is giving me an unsorted list of points rather than a list of consecutive points.
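For reference, this is roughly how I query the neighbors (the file name and radius are placeholders); as far as I can tell, the returned indices are ordered by the search, not by angle around the query point:

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("cloud.ply")                       # placeholder input file
tree = o3d.geometry.KDTreeFlann(pcd)
k, idx, _ = tree.search_radius_vector_3d(pcd.points[0], 0.05)    # radius is a placeholder
neighbors = np.asarray(pcd.points)[idx[1:]]                      # skip the first index, which is the query point itself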
See image: list of nearest neighbors.
The point below '______' is the point of interest and the rest are its neighbors. Now my questions are:
How do I know which point is next to which point?
And if that's not possible to know, how can I find the angles as shown in the first image?
I just need the angles to find the boundary probability for each point, so an answer would really help me progress.
Thanks
What I've tried so far
I have tried generating vectors from the point of interest to all neighbors and calculating the angles between them using the dot product. But that seems wrong, because I believe I may be computing the dot product between the first and the third point rather than between consecutive points.
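A rough sketch of one way to order the neighbors, assuming a surface normal at the point of interest is known or estimated separately (all names here are placeholders): project the neighbors onto the tangent plane, sort them by polar angle, and take the gaps between consecutive angles as the angles from the first image.

import numpy as np

def angular_gaps(p, neighbors, normal):
    # Sort the neighbors of p by angle around the (assumed known) normal
    # and return the angular gaps between consecutive neighbors.
    normal = normal / np.linalg.norm(normal)
    # build an orthonormal basis (u, v) of the tangent plane at p
    u = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:                  # normal was parallel to the x-axis
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)

    rel = np.asarray(neighbors) - p               # vectors from p to its neighbors
    ang = np.sort(np.arctan2(rel @ v, rel @ u))   # polar angle of each projected neighbor
    gaps = np.diff(np.append(ang, ang[0] + 2 * np.pi))   # gaps between consecutive neighbors
    return gaps

My understanding is that the boundary probability is then driven by the largest of these gaps (a large gap suggests the point lies on a hole boundary).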
I would like to know if it is possible to do deviation analysis with MeshLab and transfer the result to vertex color on a mesh. To expand on those two ideas:
1st. Is it possible to do deviation analysis with MeshLab? I have a scanned mesh and I will compare it with an "ideal model". The difference between these two will generate (grey or color) scale information that represents the distance from the points of the scanned model to the "ideal" one.
2nd. I want to take this information (the color/grey grading that shows how distant the points are) and transfer it to vertex color information.
I don't know if that was clear, but if you know what deviation analysis means I think you get it. The difference is that I would like to generate a 3D mesh with the vertex color provided by this deviation analysis.
It seems that MeshLab can compare two models and can handle vertex colorizing, but I don't know if it is possible to work with real measurements, transfer this information to vertex color, and export a mesh that shows it.
If it's possible and you know how, just point me in some direction. I'm not familiar with MeshLab, and clicking here and there trying an impossible task can be very frustrating, so it would be good if someone could give me some tips.
Thanks.
Yes, MeshLab can compute deviation analysis between two similar surfaces (and the required alignment preprocessing too).
Estimating the deviation between two meshes means computing the Hausdorff distance.
There is a small tutorial on how to compute and visualize it in MeshLab here:
http://meshlabstuff.blogspot.com/2010/01/measuring-difference-between-two-meshes.html
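Outside MeshLab, the underlying idea can also be sketched in a few lines of Python, just to make the mapping from deviation to vertex color concrete. This uses nearest-vertex distance as a crude stand-in for the true point-to-surface distance, and the file names are placeholders:

import numpy as np
from scipy.spatial import cKDTree

scan_vertices = np.loadtxt('scan_vertices.txt')    # (N, 3), placeholder input
ideal_vertices = np.loadtxt('ideal_vertices.txt')  # (M, 3), placeholder input

dist, _ = cKDTree(ideal_vertices).query(scan_vertices)   # deviation per scanned vertex
grey = (dist / dist.max() * 255).astype(np.uint8)        # 0 = on the ideal surface
colors = np.stack([grey, grey, grey], axis=1)            # per-vertex RGB grey scale

In MeshLab itself, as far as I remember, the equivalent steps are the Hausdorff Distance filter followed by colorizing by vertex quality, which is what the linked tutorial walks through.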
So the software I am using accepts 3D objects in the form of contours or .stl files. The contours I have lie in z-planes (each plane has a unique z). I have had to modify the contours for my experiment, and now the contours do not have a unique z for each plane (they are now slightly angled with respect to the z = 0 plane).
The points represent the edges of the 3D object. What would be the best way to take this collection of points and create a .stl file?
I am relatively new to working with python and 3D objects, so any help, pointers or suggestions would be much appreciated.
Edit: I have the simplices and vertices from Delaunay(), but how do I proceed next?
The co-ordinates of all points are in this text file in the format "x y z".
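For context, a minimal sketch of the Delaunay step mentioned in the edit (the file name is a placeholder); note that a 3D Delaunay triangulation yields tetrahedra rather than the surface triangles an STL needs:

import numpy as np
from scipy.spatial import Delaunay

points = np.loadtxt('points.txt')        # placeholder file in "x y z" format
tri = Delaunay(points)
print(tri.simplices.shape)               # (n, 4): 3D Delaunay gives tetrahedra, not surface triangles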
So after seeking an answer for months and trying to use MeshLab and Blender, I finally stumbled across the answer using numpy-stl. I hope it will help others in a similar situation.
Here is the code to generate the .STL file:
import numpy as np
from stl import mesh

num_triangles = len(fin_list)   # fin_list: list of triangles, each holding three (x, y, z) vertices
data = np.zeros(num_triangles, dtype=mesh.Mesh.dtype)

for i in range(num_triangles):
    # assigning vertex coordinates into the structured numpy array was the major roadblock for me;
    # v1*, v2*, v3* are the coordinates of the i-th triangle's three vertices
    data["vectors"][i] = np.array([[v1x, v1y, v1z],
                                   [v2x, v2y, v2z],
                                   [v3x, v3y, v3z]])

m = mesh.Mesh(data)
m.save('filename.stl')
The three vertices that form a triangle go into the mesh as one 3x3 "vectors" entry; their winding order determines the surface normal. I just collected three such vertices that form a triangle and wrote them into the mesh. Since I had a regular array of points, it was easy to collect the triangles:
group_a = []   # "series a" triangles, one list per pair of adjacent contours
group_b = []   # "series b" triangles
for i in range(len(point_list) - 1):
    plane_a = []
    plane_b = []
    for j in range(len(point_list[i]) - 1):
        # series a triangle: one half of the quad between contour i and contour i+1
        tri_a = [point_list[i + 1][j], point_list[i][j + 1], point_list[i][j]]
        # series b triangle: the other half of the same quad
        tri_b = [point_list[i + 1][j], point_list[i + 1][j + 1], point_list[i][j + 1]]
        # load into the current plane
        plane_a.append(tri_a)
        plane_b.append(tri_b)
    group_a.append(plane_a)
    group_b.append(plane_b)
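For completeness, a guess at the glue that flattens these triangle groups into the fin_list consumed by the first snippet:

fin_list = []
for plane in group_a + group_b:
    fin_list.extend(plane)      # each entry: three (x, y, z) vertices of one triangle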
The rules for choosing triangles for creating a mesh are as follows:
The vertices must be arranged in a counter-clockwise direction.
Each triangle must share two vertices with adjacent triangles.
The normal direction must point out of the surface.
There were two more rules that I did not follow but it still worked in my case:
1. All coordinates must be positive (i.e. the model must lie in the first octant only).
2. All triangles must be arranged in increasing z-order.
Note: there are two kinds of .STL file format, binary and ASCII; numpy-stl writes out the binary format. More info on STL files can be found here.
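If ASCII output is needed instead, numpy-stl's save accepts a mode argument (from memory; check the numpy-stl docs for your version):

from stl import Mode
m.save('filename_ascii.stl', mode=Mode.ASCII)   # write ASCII instead of binary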
Hope this helps!
I would like to find two vertices of two meshes (1 vertex per mesh) that define the closest distance between them. Or the two triangles would be fine I guess.
However, I'm not sure how to search for this in CGAL's documentation; I'm sure this is doable with some existing tool (probably based on a 3D distance field and/or AABBs). Could I please get a hint (keywords/link) on what to look for?
I've been pointed to the Optimal Distances CGAL package, but it's not exactly what I want, since it outputs only the distance and the coordinates, so finding the vertex ID would be additional computational overhead.
I've already implemented collision detection with CGAL to find triangle-triangle intersections in a triangle soup, using AABB trees. I guess the solution should be somewhat close to this, although a simple soup with all my object triangles wouldn't do the job now.
The solution I found was this:
CGAL's Optimal Distances package can give an approximation of the closest distance between the convex hulls of two meshes, without explicitly computing the hulls. As a result one gets the shortest distance between these hulls, and the coordinates of the 2 points that lie on them and define this distance.
Then these coordinates can be used as a search query in kd-trees that contain the original vertices of the meshes, in order to find the closest vertices.
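As a sketch of that second step (shown here with scipy rather than CGAL; the vertex arrays and the two returned points are assumed inputs):

import numpy as np
from scipy.spatial import cKDTree

# vertices_a, vertices_b: (N, 3) arrays of the original mesh vertices (assumed inputs)
# p_a, p_b: the two closest points returned by the optimal-distances query (assumed inputs)
_, vid_a = cKDTree(vertices_a).query(p_a)   # index of the mesh-A vertex nearest to p_a
_, vid_b = cKDTree(vertices_b).query(p_b)   # index of the mesh-B vertex nearest to p_b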
If one mesh is non-convex, its convex hull is only a coarse approximation of the surface, so a convex decomposition might be necessary. In that case one would have to check the distance for each convex part and then take the shortest one.
Greetings,
We have a set of points which represent an intersection of a 3d body and a horizontal plane. We would like to detect the 2D shapes that represent the cross sections of the body. There can be one or more such shapes. We found articles that discuss how to operate on images using Hough Transform, but we may have thousands of such points, so converting to an image is very wasteful. Is there a simpler way to do this?
Thank you
In converting your 3D model to a set of points, you have thrown away the information required to find the intersection shapes. Walk the edge-face connectivity graph of your 3D model to find the edge-plane intersection points in order.
Assuming you have, or can construct, the 3D model topology (some number of vertices, edges between vertices, faces bounded by edges):
1. Iterate through the edge list until you find an edge that intersects the test plane, and add it to a list.
2. Pick one of the faces that share this edge.
3. Iterate through the other edges of that face to find the next intersecting edge, and add it to the list.
4. Repeat with the other face that shares the newly found edge, until you arrive back at the starting edge.
You've built an ordered list of edges that intersect the plane - it's trivial to linearly interpolate each edge to find the intersection points, in order, that form the intersection shape. Note that this process assumes that the face polygons are convex, which in your case they are.
If your volume is concave you may get multiple discrete intersection shapes, so repeat this process until all crossing edges have been examined.
There's some Java code that does this here.
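For illustration, a minimal Python sketch of the walk described above; the mesh data structures (vertices, edge_faces, face_edges) are assumed, and it presumes the plane passes through no vertex and each face is convex:

import numpy as np

def walk_section(vertices, edge_faces, face_edges, plane_point, plane_normal):
    # vertices: {vid: (x, y, z)}, edge_faces: {edge: [face, face]},
    # face_edges: {face: [edges]} -- illustrative, assumed structures
    d = {vid: np.dot(np.asarray(p) - plane_point, plane_normal)
         for vid, p in vertices.items()}                       # signed distance per vertex
    crosses = lambda e: d[e[0]] * d[e[1]] < 0                  # endpoints on opposite sides

    start = next(e for e in edge_faces if crosses(e))          # step 1: first crossing edge
    loop, edge, face = [start], start, edge_faces[start][0]    # step 2: pick one of its faces
    while True:
        # step 3: the other crossing edge of the current face
        edge = next(e for e in face_edges[face] if e != edge and crosses(e))
        if edge == start:                                      # back at the starting edge
            break
        loop.append(edge)
        face = next(f for f in edge_faces[edge] if f != face)  # step 4: hop to the other face

    # interpolate along each crossing edge to get the ordered intersection points
    return [tuple(np.asarray(vertices[a]) +
                  d[a] / (d[a] - d[b]) * (np.asarray(vertices[b]) - np.asarray(vertices[a])))
            for a, b in loop]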
The algorithm / code from the accepted answer does not work for complex special cases, when the plane intersects some vertices of a concave surface. In this case, "walking" the edge-face connectivity graph greedily could close some of the polygons too early.
What happens is that, because the plane intersects a vertex, at some point while walking the graph there are two possibilities for the next edge, and it matters which one is chosen.
A possible solution is to implement a graph traversal algorithm (for instance depth-first search), and choose the longest loop which contains the starting edge.
It looks like you wanted to combine intersection points back into connected figures using some detection or Hough Transform.
A much simpler and more robust way is to directly extract not just the intersection points but the contours of the 3D body where the plane cuts it.
To construct contours on a body given by a triangular mesh, define the value at each mesh vertex as the signed distance from the plane (positive on one side of the plane and negative on the other). The marching squares algorithm for isovalue = 0 can then be applied to extract the segments of the contours.
This algorithm works well even when the plane passes through a vertex or an edge of the mesh.
To better understand what the result of a plane section looks like, please take a look at this short video. Following the links there, one can find the implementation as well.
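For illustration, a minimal per-triangle sketch of this idea, assuming the mesh is given as an (N, 3) vertex array and an (M, 3) triangle index array; chaining the segments into closed contours and the vertex-on-plane special cases handled by the full implementation are left out:

import numpy as np

def plane_section_segments(vertices, triangles, plane_point, plane_normal):
    # Return a list of (p0, p1) segments where the plane cuts the mesh.
    d = (vertices - plane_point) @ plane_normal          # signed distance per vertex
    segments = []
    for tri in triangles:
        pts = []
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            if d[a] * d[b] < 0:                           # edge crosses the isovalue 0
                t = d[a] / (d[a] - d[b])
                pts.append(vertices[a] + t * (vertices[b] - vertices[a]))
        if len(pts) == 2:                                 # the triangle contributes one segment
            segments.append((pts[0], pts[1]))
    return segments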