I am trying to add curved arrows between two nodes in a DiGraph using the NetworkX library.
The documentation for the function nx.draw_networkx_edges mentions the argument connectionstyle. Setting connectionstyle='arc3, rad = 0.2' should yield curved connectors, but I am unable to reproduce this.
I have also looked into the matplotlib.patches module and called ConnectionStyle as described at https://matplotlib.org/stable/api/_as_gen/matplotlib.patches.ConnectionStyle.html, but I still get the same visual.
array_of_edges = list(zip(df.start_node, df.end_node, df.weight))
G.add_weighted_edges_from(array_of_edges)
nx.draw_networkx_edges(
    G, pos_nodes,
    edge_color='#5B174C',
    width=df['weight'] / 600,
    alpha=0.8,
    connectionstyle='arc3,rad=0.2')
This is what I am getting and this is what I require:
https://imgur.com/a/1TcPfY9
I had a similar issue a while back (I wanted to compare two states of the same network). I couldn't work out a solution using networkx so I made my own, the code for which can be found here.
You will have to convert your edge list into a square adjacency matrix, where the absence of a connection is denoted by a NaN, and non-NaN entries are interpreted as edge weights and mapped to edge colors. You can then call the module using
network_line_graph.draw(adjacency_matrix, node_order=None, arc_above=True)
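Building that adjacency matrix from the question's edge list might look roughly like this (a sketch; the DataFrame columns are assumed from the question):

import numpy as np
import pandas as pd

# toy edge list standing in for the question's df
df = pd.DataFrame({'start_node': [0, 1, 2],
                   'end_node':   [1, 2, 0],
                   'weight':     [1.0, 2.0, 3.0]})

nodes = sorted(set(df.start_node) | set(df.end_node))
index = {node: i for i, node in enumerate(nodes)}

# absent connections stay NaN; present connections carry the edge weight
adjacency_matrix = np.full((len(nodes), len(nodes)), np.nan)
for start, end, weight in zip(df.start_node, df.end_node, df.weight):
    adjacency_matrix[index[start], index[end]] = weight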
Note that if you don't specify the node order explicitly, the node order is optimised using recursive minimum flow cuts to place strongly connected subnetworks/nodes together (ideally you would minimize total arc length but that gets computationally expensive very quickly).
The API is pretty similar to networkx but if you do have any problems, please raise an issue on the github.
Given the start node and goal node in a graph, I want to find one simple path between these two nodes. I do not want the shortest path, but need any random simple path.
I tried using all_simple_paths from networkx, but this function seems to calculate all the simple paths before returning anything. This takes a long time to run.
Is there a way to find just one simple path?
Also, I would ideally like to make sure this path does not cross any "obstacles". These obstacles are a predefined set of nodes from the same graph. Is there a way to add in this constraint?
PS: I don't necessarily need to use networkx. The code I am writing is in Python.
You could treat this as a min cost network flow problem where your start node wants to send a unit of flow (demand = -1) to your goal node (demand = 1). You can set the edge capacities to 1 and you can set all the edge weights to 0 except for those around "obstacle" nodes. For those obstacle nodes you can set all the edges either coming into or going out of them to have a weight of 1. The algorithm will try to find any arbitrary path using only edges with weight 0, but will use weight 1 edges if no path exists with only weight 0 edges.
See the nx.min_cost_flow function. This function requires your graph to be a directed graph (nx.DiGraph), so convert it first if it's not already.
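A minimal sketch of that setup on a made-up graph (the node names and the obstacle set are purely illustrative):

import networkx as nx

G = nx.DiGraph()
edges = [('s', 'a'), ('a', 'b'), ('b', 't'), ('s', 'c'), ('c', 't')]
obstacles = {'c'}  # nodes to avoid if possible

for u, v in edges:
    # weight 1 on edges touching an obstacle, weight 0 everywhere else
    w = 1 if u in obstacles or v in obstacles else 0
    G.add_edge(u, v, capacity=1, weight=w)

G.nodes['s']['demand'] = -1  # start node supplies one unit of flow
G.nodes['t']['demand'] = 1   # goal node consumes one unit of flow

flow = nx.min_cost_flow(G)
path_edges = [(u, v) for u in flow for v, f in flow[u].items() if f > 0]
print(path_edges)  # e.g. [('s', 'a'), ('a', 'b'), ('b', 't')]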
I managed to solve this problem by using the RRT algorithm. It gives a random path between the source and destination nodes and also avoids obstacles.
Is there a library in Python or C++ that is capable of estimating normals of point clouds in a consistent way?
By "consistent" I mean that the orientation of the normals is globally preserved over the surface.
For example, when I use python open3d package:
downpcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(
    radius=4, max_nn=300))
I get inconsistent results, where some of the normals point inside while the rest point outside.
Many thanks!
UPDATE: GOOD NEWS!
The tangent plane algorithm is now implemented in Open3D!
The source code and the documentation.
You can just call pcd.orient_normals_consistent_tangent_plane(k=15), where k is the knn graph parameter.
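Putting it together with normal estimation (a sketch; the file name is just a placeholder):

import open3d as o3d

pcd = o3d.io.read_point_cloud('cloud.ply')  # placeholder file
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=4, max_nn=300))
pcd.orient_normals_consistent_tangent_plane(k=15)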
Original answer:
Like Mark said, if your point cloud comes from multiple depth images, then you can call open3d.geometry.orient_normals_towards_camera_location(pcd, camera_loc) before concatenating them together (assuming you're using the Python version of Open3D).
However, if you don't have that information, you can use the tangent plane algorithm (a code sketch follows the steps below):
Build knn-graph for your point cloud.
The graph nodes are the points. Two points are connected if one is the other's k-nearest-neighbor.
Assign weights to the edges in the graph.
The weight associated with edge (i, j) is computed as 1 - |ni ⋅ nj|, where ni and nj are the estimated normals at points i and j.
Generate the minimal spanning tree of the resulting graph.
Root the tree at an initial node, and traverse the tree in depth-first order, assigning each node an orientation that is consistent with that of its parent.
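A rough sketch of those steps in Python (assuming scikit-learn and networkx are available and the kNN graph is connected; the root normal is kept as-is, whereas the original paper additionally fixes its direction):

import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors

def orient_normals(points, normals, k=15):
    n = len(points)
    # 1. kNN graph: connect each point to its k nearest neighbours
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(points).kneighbors(points)

    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in idx[i, 1:]:  # skip the point itself
            # 2. edge weight 1 - |ni . nj|: small when normals are nearly parallel
            G.add_edge(i, int(j), weight=1.0 - abs(np.dot(normals[i], normals[j])))

    # 3. minimum spanning tree of the weighted graph
    mst = nx.minimum_spanning_tree(G)

    # 4. depth-first traversal, flipping each normal to agree with its parent
    oriented = normals.copy()
    for parent, child in nx.dfs_edges(mst, source=0):
        if np.dot(oriented[parent], oriented[child]) < 0:
            oriented[child] = -oriented[child]
    return oriented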
Actually, the above algorithm comes from Section 3.3 of Hoppe's 1992 SIGGRAPH paper Surface Reconstruction from Unorganized Points. The algorithm is also open sourced.
AFAIK the algorithm does not guarantee a perfect orientation, but it should be good enough.
If you know the viewpoint from where each point was captured, it can be used to orient the normals.
I assume that this is not the case - so given your situation, where the surface seems rather watertight and uniformly sampled, mesh reconstruction is promising.
The PCL library offers many alternatives in the surface module. For the sake of normal estimation, I would start with either:
ConcaveHull
Greedy projection triangulation
Although simple, they should be enough to produce a single coherent mesh.
Once you have a mesh, each triangle defines a normal (via the cross product of its edge vectors). It is important to note that a mesh isn't just a collection of independent faces. The faces are connected, and this connectivity enforces a coherent orientation across the mesh.
pcl::PolygonMesh is a "half-edge data structure". This means that every triangle face is defined by an ordered set of vertices, which defines the orientation:
order of vertices => order of the cross product operands => well-defined, unambiguous normals
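A toy illustration of that chain (swapping the vertex order flips the normal):

import numpy as np

v0, v1, v2 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])

normal = np.cross(v1 - v0, v2 - v0)   # (0, 0, 1) for this winding
flipped = np.cross(v2 - v0, v1 - v0)  # reversed order gives (0, 0, -1)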
You can either use the normals from the mesh (nearest neighbor), or calculate a low resolution mesh and just use it to orient the cloud.
I'm working on a 3D reconstruction system and want to generate a triangular mesh from the registered point cloud data using Python 3. My objects are not convex, so the marching cubes algorithm seems to be the solution.
I prefer to use an existing implementation of such a method, so I tried scikit-image and Open3D, but neither API accepts raw point clouds as input (note that I'm not an expert on those libraries). My attempts to convert my data failed and I'm running out of ideas, since the documentation does not clarify the input format of the functions.
These are my desired snippets, where pcd_to_volume is what I need.
scikit-image
import numpy as np
from skimage.measure import marching_cubes_lewiner
N = 10000
pcd = np.random.rand(N,3)
def pcd_to_volume(pcd, voxel_size):
    # TODO
volume = pcd_to_volume(pcd, voxel_size=0.05)
verts, faces, normals, values = marching_cubes_lewiner(volume, 0)
open3d
import numpy as np
import open3d
N = 10000
pcd = np.random.rand(N,3)
def pcd_to_volume(pcd, voxel_size):
    # TODO
volume = pcd_to_volume(pcd, voxel_size=0.05)
mesh = volume.extract_triangle_mesh()
I'm not able to find a way to properly write the pcd_to_volume function. I do not prefer one library over the other, so both solutions are fine with me.
Do you have any suggestions to properly convert my data? A point cloud is an Nx3 matrix with dtype=float.
Do you know another implementation [of the marching cubes algorithm] that works on raw point cloud data? I would prefer libraries like scikit-image and Open3D, but I will also take GitHub projects into account.
Do you know another implementation [of the marching cubes algorithm] that works on raw point cloud data?
Hoppe's paper Surface Reconstruction from Unorganized Points might contain the information you need, and it's open sourced.
And the latest Open3D seems to contain surface reconstruction algorithms such as alpha shapes, ball pivoting and Poisson reconstruction.
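For example, Poisson reconstruction can be called roughly like this (a sketch; the file name is a placeholder, and Poisson reconstruction needs consistently oriented normals):

import open3d as o3d

pcd = o3d.io.read_point_cloud('cloud.ply')  # placeholder file
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(k=15)

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)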
From what I know, marching cubes is usually used for extracting a polygonal mesh of an isosurface from a three-dimensional discrete scalar field (that's what you mean by volume). The algorithm does not work on raw point cloud data.
Hoppe's algorithm works by first generating a signed distance function field (an SDF volume), and then passing it to marching cubes. This can be seen as one implementation of your pcd_to_volume, and it's not the only way!
If the raw point cloud is all you have, then the situation is a little bit constrained. As you might see, the Poisson reconstruction and Screened Poisson reconstruction algorithms both implement pcd_to_volume in their own way (they are highly related). However, they need additional point normal information, and the normals have to be consistently oriented. (For consistent orientation you can read this question.)
While some Delaunay-based algorithms (which do not use marching cubes), like alpha shapes and this, may not need point normals as input, for surfaces with complex topology it's hard to get a satisfactory result due to the orientation problem. The graph cuts method can use visibility information to solve that.
Having said that, if your data comes from depth images, you will usually have visibility information, and you can use a TSDF to build a good surface mesh. Open3D has already implemented that.
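If that applies, TSDF integration might look roughly like this (a sketch; the intrinsics, extrinsics and frame list are placeholders, and the module path differs slightly between Open3D versions):

import open3d as o3d

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01, sdf_trunc=0.04,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

for color_path, depth_path, extrinsic in frames:  # placeholder list of frames
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.io.read_image(color_path), o3d.io.read_image(depth_path))
    volume.integrate(rgbd, intrinsic, extrinsic)  # extrinsic: 4x4 camera pose

mesh = volume.extract_triangle_mesh()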
Background:
I'm doing a polymer simulation, and I'm trying to use networkx to calculate how many chains are in the system. Molecules in the system correspond to nodes, and bonds correspond to connections between nodes.
What I have tried:
I used networkx.chain_decomposition to calculate the number of chains.
import networkx as nx
info = nx.chain_decomposition(G)
Issues:
I found that it only finds chains that form closed loops, such as A1-A2-A3-A1.
However, there are still many chains that are not closed, such as A1-A2-A3.
Is there an easy way to collect both types of chains? Thanks!
The function chain_decomposition is not what you think it is. From the docs:
The chain decomposition of a graph with respect a depth-first search tree is a set of cycles or paths derived from the set of fundamental cycles of the tree [...]
What you are probably looking for is the function number_connected_components.
See this link for details. This assumes that each connected component is a chain, i.e. that there are several disjoint subgraphs in your graph G, each corresponding to a (non-branching) polymer molecule. If that is not the case (the polymer is branched), then you need to do something a bit more clever. For example, you could compute all shortest paths between leaf nodes (atoms with a single bond).
You can find the leaf nodes by inspecting the degree of the nodes with list(G.degree) (leaves have degree 1), and then compute the shortest paths between all leaf pairs with all_shortest_paths.
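A small sketch of both suggestions on a toy graph (node names follow the question's A1/A2/A3 style):

import networkx as nx

# toy example: one linear molecule (A1-A2-A3) and one branched molecule
G = nx.Graph()
G.add_edges_from([('A1', 'A2'), ('A2', 'A3'),
                  ('B1', 'B2'), ('B2', 'B3'), ('B2', 'B4')])

print(nx.number_connected_components(G))  # 2 disjoint molecules

# leaf atoms have exactly one bond
leaves = [node for node, degree in G.degree if degree == 1]

# shortest paths between leaf pairs that belong to the same molecule
for i, u in enumerate(leaves):
    for v in leaves[i + 1:]:
        if nx.has_path(G, u, v):
            for path in nx.all_shortest_paths(G, u, v):
                print(path)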
To find cyclic molecules you can use chain_decomposition as before.
Back story: I'm creating a Three.js based 3D graphing library, similar to sigma.js but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing each node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling the sphere to a sphere of radius 5 around the point (0, 0, 0). Since the points are guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified, or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of/behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values on my dataset, the problem went away. Since my dataset is user-dependent, I need to determine an algorithm to generate these values dynamically.
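One possible way to derive those planes from the data's bounding sphere, sketched as plain math (Python used only for illustration; the camera position and point data are placeholders):

import numpy as np

points = np.random.rand(1000, 3) * 100        # placeholder point data
camera_pos = np.array([150.0, 150.0, 150.0])  # placeholder camera position

center = points.mean(axis=0)
radius = np.linalg.norm(points - center, axis=1).max()
dist = np.linalg.norm(camera_pos - center)

near = max(dist - radius, 0.1)  # keep the near plane strictly positive
far = dist + radius             # just far enough to contain all points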
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason is that, when the camera position is changing rapidly, different transforms get applied to different particles. So if a camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means that you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.