I have a weighted graph built with networkx, and the topology is highly meshed. I would like to extract a number of paths between two nodes, minimizing total distance.
To clarify: the dijkstra_path function finds the weighted shortest path between two nodes; I would like to get that, plus the second- and third-best shortest weighted paths between the same two nodes.
I tried using all_simple_paths and then sorting the paths by total distance, but it is extremely time-consuming once the network is meshed, at around 500 nodes.
Any thoughts on the matter? Thank you for your help!
Try networkx's shortest_simple_paths, which yields simple paths between two nodes in order of increasing total weight, so you can stop after the first few.
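For example, a minimal sketch (the toy graph and the "weight" attribute name are placeholders for your own data):

from itertools import islice

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1), ("b", "d", 1), ("a", "c", 2),
    ("c", "d", 2), ("a", "d", 5),
])

# shortest_simple_paths yields simple paths ordered by total weight
# (Yen's algorithm), so take the first k lazily instead of enumerating
# every simple path the way all_simple_paths does
k = 3
for path in islice(nx.shortest_simple_paths(G, "a", "d", weight="weight"), k):
    total = sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))
    print(path, total)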
Hello, I have a dynamic-programming-related question. How can I compute the shortest path in hops from a starting node to an ending node, under the constraint that the vertices and edges on the path have a value equal to or higher than a predefined one? For example, the highest data rate in a network. Could someone provide some pseudo-code or any thoughts? Thank you in advance.
Build a new graph from the given network that omits the vertices and edges whose value is less than the predefined one; then, from the start node, run a shortest-path algorithm to the end node on the new graph, such as BFS, Dijkstra (which is greedy, not dynamic programming), Bellman-Ford, etc. A sketch is below.
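A minimal networkx sketch of that idea, assuming the relevant value sits in a node/edge attribute named "value" (the attribute name and defaults are assumptions):

import networkx as nx

def constrained_hop_path(G, source, target, threshold, attr="value"):
    # keep only vertices that meet the threshold...
    keep = [n for n, d in G.nodes(data=True) if d.get(attr, 0) >= threshold]
    H = G.subgraph(keep).copy()
    # ...and drop the surviving edges that do not
    H.remove_edges_from([
        (u, v) for u, v, d in list(H.edges(data=True)) if d.get(attr, 0) < threshold
    ])
    # with no weight argument, shortest_path runs BFS, i.e. fewest hops
    return nx.shortest_path(H, source, target)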
So I was given x points generated randomly and need to find the shortest path through all of those points, where the endpoint is the final point of the given x. What would be the best method, given that the number of points could get very large?
Thank you
There are many shortest-path methods. Dijkstra is the father of the shortest path; you can use it as the trial-and-error starting point. A minimal sketch is below.
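For reference, a textbook Dijkstra sketch over a plain adjacency dict (the graph representation and the function shape are purely illustrative):

import heapq

def dijkstra(adj, source, target):
    # adj maps node -> list of (neighbor, weight) pairs
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return float("inf"), []  # unreachable
    # walk the predecessor chain back from the target
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]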
I need to find shortest paths that must pass through several given nodes and edges. A few details:
They should be shortest paths according to weights.
The include set can be ordered or unordered.
Graph size: 50,000 vertices and 4,500,000 edges.
Is there any way to find paths like this using ArangoDB?
I've tried K_SHORTEST_PATHS, but it is too slow in some cases.
Without a data set, this is tricky to test. Unfortunately, K_SHORTEST_PATHS is the only built-in way to take edge "weight" into account, unless you build something yourself. Also, neither SHORTEST_PATH method implements PRUNE, which is the best way to speed up graph traversal.
My suggestion would be to use a directed traversal (FOR v, e, p IN 1..9 INBOUND x ...), implementing both PRUNE and FILTER clauses to reduce the number of hops, and something like COLLECT path = p AGGREGATE weight = SUM(e.weight) to calculate each path's weight. A rough sketch is below.
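Putting those fragments together, a rough AQL sketch; the graph name, depth bound, weight attribute, and bind variables (@start, @target, @maxWeight) are all placeholders:

FOR v, e, p IN 1..9 INBOUND @start GRAPH "myGraph"
    // PRUNE cuts whole branches as soon as they are already too heavy
    PRUNE SUM(p.edges[*].weight) > @maxWeight
    FILTER v._id == @target
    LET weight = SUM(p.edges[*].weight)
    SORT weight ASC
    LIMIT 3
    RETURN { vertices: p.vertices[*]._key, weight }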
Based on the definition, I know that in a connected graph the closeness centrality of a node is calculated from the sum of the lengths of the shortest paths between that node and all other nodes in the graph, and I am using the networkx library to calculate this parameter in my code with the command below:
nx.closeness_centrality(G,i)
But I want to find the closeness centrality based on the shortest paths between node "i" and a predefined set of nodes, not all nodes.
How can I achieve this? Is it possible?
Old question, and you might have already found the answer, but I would run the command above for all the nodes and then pick out of the result the set of nodes with the keys you want to target.
Closeness centrality based on the shortest paths between a few selected nodes? If I understood correctly, then you must find the distances between those nodes individually and combine them:
closeness_centrality = sum(all_distances)**(-1)
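A minimal networkx sketch along those lines, computing the raw (unnormalized) closeness of a node against a chosen subset only; the helper name and the "weight" attribute are assumptions:

import networkx as nx

def closeness_to_subset(G, node, targets, weight="weight"):
    # shortest-path lengths from `node` to everything reachable
    lengths = nx.single_source_dijkstra_path_length(G, node, weight=weight)
    dists = [lengths[t] for t in targets if t in lengths and t != node]
    if not dists:
        return 0.0
    # reciprocal of the summed distances, as in the formula above
    return 1.0 / sum(dists)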
Given two variables with the same number of observations, you can apparently see in the scatter plot that they follow three linear regressions. How could you separate the observations into three groups with different linear fits?
There exist specialized clustering algorithms for this.
Google for "correlation clustering".
If they all pass through 0, then it may be easier to apply a suitable feature transformation to make them separable, so don't neglect preprocessing; it's the most important part.
I would calculate the slope of the segment between every pair of points, so with n points you get n(n-1)/2 slope values, and then use a clustering algorithm.
It is the same idea that lies behind the Theil–Sen estimator.
It just came to mind and seems worth a try; a sketch is below.
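A small Python sketch of that idea; using KMeans on the slopes (and the value of k itself) is an assumption on top of the suggestion above:

from itertools import combinations

import numpy as np
from sklearn.cluster import KMeans

def pairwise_slope_clusters(x, y, k=3):
    # slope of the segment through every pair of points
    slopes = []
    for i, j in combinations(range(len(x)), 2):
        if x[i] != x[j]:  # skip vertical segments
            slopes.append((y[j] - y[i]) / (x[j] - x[i]))
    slopes = np.asarray(slopes).reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10).fit(slopes)
    # the cluster centres approximate the slopes of the three lines
    return km.cluster_centers_.ravel(), km.labels_

Note that this recovers candidate slopes; assigning each original point to a line still needs a second step, e.g. by smallest residual.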
This seems to be a mixture of regressions. There are several packages for fitting one; FlexMix is one of them, though I did not find it entirely satisfying. I have put what I got and what I expected below.
I think I partly solved the problem. We can use the R package flexmix to achieve this, as the lowest panel shows. The package works fine on two other groups of data with known fits: the separation rate can reach as high as 90%, with fitted coefficients close to the known ones.
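For readers without R, a minimal Python analogue of a mixture-of-regressions fit (EM with per-component weighted least squares); this is an illustrative sketch of the same idea, not what flexmix actually implements:

import numpy as np

def em_mixture_regressions(x, y, k=3, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])   # intercept + slope design
    coefs = rng.normal(size=(k, 2))
    sigma = np.full(k, y.std() or 1.0)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E step: responsibility of each line for each point
        resid = y[:, None] - X @ coefs.T
        dens = pi * np.exp(-0.5 * (resid / sigma) ** 2) / sigma
        resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        # M step: weighted least squares per component
        for j in range(k):
            w = resp[:, j]
            coefs[j] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            sigma[j] = max(np.sqrt((w * (y - X @ coefs[j]) ** 2).sum() / w.sum()), 1e-6)
        pi = resp.mean(axis=0)
    # hard assignment: the most responsible component per point
    return coefs, resp.argmax(axis=1)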