I'm using the networkx library to find the shortest path between two nodes with Dijkstra's algorithm, as follows:
import networkx as nx
A = [[0, 100, 0, 0, 40, 0],
     [100, 0, 20, 0, 0, 70],
     [0, 20, 0, 80, 50, 0],
     [0, 0, 80, 0, 0, 30],
     [40, 0, 50, 0, 0, 60],
     [0, 70, 0, 30, 60, 0]]
print(nx.dijkstra_path(A, 0, 4))
In the code above I'm passing the matrix directly, but the library expects the graph to be built like this:
G = nx.Graph()
G.add_node(<node>)
G.add_edge(<node 1>, <node 2>)
It is very time consuming to build the graph with these commands. Is there any way to pass a weighted adjacency matrix directly to the dijkstra_path function?
First you need to convert your adjacency matrix to a numpy array with np.array. Then you can simply create your graph with from_numpy_matrix:
import networkx as nx
import numpy as np
A = [[0, 100, 0, 0, 40, 0],
     [100, 0, 20, 0, 0, 70],
     [0, 20, 0, 80, 50, 0],
     [0, 0, 80, 0, 0, 30],
     [40, 0, 50, 0, 0, 60],
     [0, 70, 0, 30, 60, 0]]
a = np.array(A)
G = nx.from_numpy_matrix(a)
print(nx.dijkstra_path(G, 0, 4))
Output:
[0, 4]
Side note: you can check the graph edges with the following code.
for edge in G.edges(data=True):
print(edge)
Output:
(0, 1, {'weight': 100})
(0, 4, {'weight': 40})
(1, 2, {'weight': 20})
(1, 5, {'weight': 70})
(2, 3, {'weight': 80})
(2, 4, {'weight': 50})
(3, 5, {'weight': 30})
(4, 5, {'weight': 60})
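Side note: in newer networkx releases (3.0 and later) from_numpy_matrix has been removed, and from_numpy_array is the replacement. A minimal sketch of the same example on those versions:
import networkx as nx
import numpy as np

A = [[0, 100, 0, 0, 40, 0],
     [100, 0, 20, 0, 0, 70],
     [0, 20, 0, 80, 50, 0],
     [0, 0, 80, 0, 0, 30],
     [40, 0, 50, 0, 0, 60],
     [0, 70, 0, 30, 60, 0]]

# build the weighted graph straight from the adjacency matrix
G = nx.from_numpy_array(np.array(A))
print(nx.dijkstra_path(G, 0, 4))  # [0, 4]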
Below is code that uses a for loop to assign lists to slices of rows in a main 2D list, but the output is not as expected.
qu = [[1, 2, 100], [2, 5, 100], [3, 4, 100]]
n = 5
a = [[0]*n]*len(qu)
for i in range(len(qu)):
    p = qu[i][0] - 1
    q = qu[i][1]
    a[i][p:q] = [qu[i][2]] * (q - p)
    print(a)
Output:
[[100, 100, 0, 0, 0], [100, 100, 0, 0, 0], [100, 100, 0, 0, 0]]
[[100, 100, 100, 100, 100], [100, 100, 100, 100, 100], [100, 100, 100, 100, 100]]
[[100, 100, 100, 100, 100], [100, 100, 100, 100, 100], [100, 100, 100, 100, 100]]
Expected output:
[[100, 100, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
[[100, 100, 0, 0, 0], [0, 100, 100, 100, 100], [0, 0, 0, 0, 0]]
[[100, 100, 0, 0, 0], [0, 100, 100, 100, 100], [0, 0, 100, 100, 0]]
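The likely culprit is that [[0]*n]*len(qu) builds len(qu) references to the same inner list, so a slice assignment through any row shows up in every row. A minimal sketch of a fix, keeping the rest of the code unchanged, is to build independent rows with a list comprehension:
qu = [[1, 2, 100], [2, 5, 100], [3, 4, 100]]
n = 5
a = [[0]*n for _ in range(len(qu))]  # independent rows, not aliases of one list
for i in range(len(qu)):
    p = qu[i][0] - 1
    q = qu[i][1]
    a[i][p:q] = [qu[i][2]] * (q - p)
    print(a)
This prints the expected output shown above.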
I have the below code in Python:
# dense to sparse
from numpy import array
from scipy.sparse import csr_matrix
# create dense matrix
A = array([[1, 0, 0, 1, 0, 0], [0, 0, 2, 0, 0, 1], [0, 0, 0, 2, 0, 0]])
print(A)
# convert to sparse matrix (CSR method)
S = csr_matrix(A)
print(S)
# reconstruct dense matrix
B = S.todense()
print(B)
In the above code, when I add the following statement:
print(B[0])
I get the following output:
[[1 0 0 1 0 0]]
How can I loop through the above values, i.e., 1, 0, 0, 1, 0, 0?
In [2]: import numpy as np
   ...: from scipy.sparse import csr_matrix
   ...: # create dense matrix
   ...: A = np.array([[1, 0, 0, 1, 0, 0], [0, 0, 2, 0, 0, 1], [0, 0, 0, 2, 0, 0]])
   ...: S = csr_matrix(A)
In [3]: A
Out[3]:
array([[1, 0, 0, 1, 0, 0],
[0, 0, 2, 0, 0, 1],
[0, 0, 0, 2, 0, 0]])
In [4]: S
Out[4]:
<3x6 sparse matrix of type '<class 'numpy.int64'>'
with 5 stored elements in Compressed Sparse Row format>
S.toarray(), or S.A for short, makes a dense ndarray:
In [5]: S.A
Out[5]:
array([[1, 0, 0, 1, 0, 0],
[0, 0, 2, 0, 0, 1],
[0, 0, 0, 2, 0, 0]])
todense makes an np.matrix object, which is always 2d:
In [6]: S.todense()
Out[6]:
matrix([[1, 0, 0, 1, 0, 0],
[0, 0, 2, 0, 0, 1],
[0, 0, 0, 2, 0, 0]])
In [7]: S.todense()[0]
Out[7]: matrix([[1, 0, 0, 1, 0, 0]])
In [9]: S.todense()[0][0]
Out[9]: matrix([[1, 0, 0, 1, 0, 0]])
To iterate by 'columns' we have to do something like:
In [10]: [S.todense()[0][:,i] for i in range(3)]
Out[10]: [matrix([[1]]), matrix([[0]]), matrix([[0]])]
In [11]: [S.todense()[0][0,i] for i in range(3)]
Out[11]: [1, 0, 0]
There is a shortcut for converting a 1d row np.matrix to a 1d ndarray:
In [12]: S.todense()[0].A1
Out[12]: array([1, 0, 0, 1, 0, 0])
Getting a 1d array from a "row" of an ndarray is simpler:
In [14]: S.toarray()[0]
Out[14]: array([1, 0, 0, 1, 0, 0])
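With a 1d array in hand, looping over the values is then straightforward; a minimal sketch, continuing the session:
In [15]: for v in S.toarray()[0]:
    ...:     print(v)
1
0
0
1
0
0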
np.matrix is generally discouraged, a remnant from a time when easing the transition from MATLAB was more important. Now the fact that sparse matrices are modeled on np.matrix (but not subclassed from it) is the main reason for keeping np.matrix around. Row and column sums of a sparse matrix return a dense matrix.
The following example is about indexing one array with another:
import numpy as np
labels = np.array([0, 1, 2, 0, 4])
image = np.array([[0, 0, 1, 1, 1],
[2, 2, 0, 0, 0],
[0, 0, 3, 0, 4]])
and labels[image] gives the following result:
array([[0, 0, 1, 1, 1],
[2, 2, 0, 0, 0],
[0, 0, 0, 0, 4]])
I am not clear on how labels[image] works. Can anyone explain?
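For what it's worth, this is numpy integer (fancy) indexing: every element of image is used as an index into labels, so the result has the shape of image with each value v replaced by labels[v]. A minimal sketch reproducing one entry:
import numpy as np

labels = np.array([0, 1, 2, 0, 4])
image = np.array([[0, 0, 1, 1, 1],
                  [2, 2, 0, 0, 0],
                  [0, 0, 3, 0, 4]])

# labels[image][i, j] is labels[image[i, j]] for every position (i, j)
print(labels[image][2, 2])   # 0, because image[2, 2] == 3 and labels[3] == 0
print(labels[image[2, 2]])   # the same lookup written out explicitly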
I have the following labels
>>> lab
array([3, 0, 3, 3, 1, 1, 2, 2, 3, 0, 1, 4])
I want to assign these labels to another numpy array, i.e.
>>> arr
array([[81, 1, 3, 87], # 3
[ 2, 0, 1, 0], # 0
[13, 6, 0, 0], # 3
[14, 0, 1, 30], # 3
[ 0, 0, 0, 0], # 1
[ 0, 0, 0, 0], # 1
[ 0, 0, 0, 0], # 2
[ 0, 0, 0, 0], # 2
[ 0, 0, 0, 0], # 3
[ 0, 0, 0, 0], # 0
[ 0, 0, 0, 0], # 1
[13, 2, 0, 11]]) # 4
and sum all rows that share the same label.
The output must be
array([[108,   7,   4, 117],  # 3
       [  0,   0,   0,   0],  # 0
       [  0,   0,   0,   0],  # 1
       [  0,   0,   0,   0],  # 2
       [ 13,   2,   0,  11]]) # 4
You could use groupby from pandas:
import pandas as pd
parr = pd.DataFrame(arr, index=lab)
parr.groupby(parr.index).sum()
     0  1  2    3
0    2  0  1    0
1    0  0  0    0
2    0  0  0    0
3  108  7  4  117
4   13  2  0   11
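If you need the result back as a plain numpy array rather than a DataFrame, calling to_numpy() (or .values) on the grouped result gives one; a minimal sketch, reusing arr and lab from the question:
import pandas as pd

parr = pd.DataFrame(arr, index=lab)
grouped = parr.groupby(parr.index).sum()
print(grouped.to_numpy())   # plain 5x4 ndarray, rows ordered by label 0..4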
numpy doesn't have a group_by function like pandas, but it does have a reduceat method that performs fast array reductions on groups of elements (rows). Its application in this case is a bit messy, though.
Start with our 2 arrays:
In [39]: arr
Out[39]:
array([[81, 1, 3, 87],
[ 2, 0, 1, 0],
[13, 6, 0, 0],
[14, 0, 1, 30],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[13, 2, 0, 11]])
In [40]: lbls
Out[40]: array([3, 0, 3, 3, 1, 1, 2, 2, 3, 0, 1, 4])
Find the indices that will sort lbls (and rows of arr) into contiguous blocks:
In [41]: I=np.argsort(lbls)
In [42]: I
Out[42]: array([ 1, 9, 4, 5, 10, 6, 7, 0, 2, 3, 8, 11], dtype=int32)
In [43]: s_lbls=lbls[I]
In [44]: s_lbls
Out[44]: array([0, 0, 1, 1, 1, 2, 2, 3, 3, 3, 3, 4])
In [45]: s_arr=arr[I,:]
In [46]: s_arr
Out[46]:
array([[ 2, 0, 1, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[81, 1, 3, 87],
[13, 6, 0, 0],
[14, 0, 1, 30],
[ 0, 0, 0, 0],
[13, 2, 0, 11]])
Find the boundaries of these blocks, i.e. where s_lbls jumps:
In [47]: J=np.where(np.diff(s_lbls))
In [48]: J
Out[48]: (array([ 1, 4, 6, 10], dtype=int32),)
Each change at position i means a new block starts at i+1, so shift by one and prepend the index of the start of the first block (see the reduceat docs):
In [49]: J1=[0]+(J[0]+1).tolist()
In [50]: J1
Out[50]: [0, 2, 5, 7, 11]
Apply add.reduceat:
In [51]: np.add.reduceat(s_arr,J1,axis=0)
Out[51]:
array([[ 2, 0, 1, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[108, 7, 4, 117],
[ 13, 2, 0, 11]], dtype=int32)
These are your numbers, sorted by lbls (for 0,1,2,3,4).
With reduceat you could apply other operations like maximum, product, etc.
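Putting the steps together, a compact sketch of the whole pipeline (the group_sum name is just for illustration):
import numpy as np

def group_sum(arr, lbls):
    # sort rows so that equal labels form contiguous blocks
    I = np.argsort(lbls)
    s_lbls, s_arr = lbls[I], arr[I, :]
    # a block starts at 0 and right after every position where the label changes
    starts = np.r_[0, 1 + np.where(np.diff(s_lbls))[0]]
    # row-wise sums per block, ordered by label value
    return np.add.reduceat(s_arr, starts, axis=0)

# np.maximum.reduceat or np.multiply.reduceat would give per-group max or product instead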
I have the following labels
>>> lab
array([2, 2, 2, 2, 2, 3, 3, 0, 0, 0, 0, 1])
I want to assign these labels to another numpy array, i.e.
>>> arr
array([[81, 1, 3, 87], # 2
[ 2, 0, 1, 0], # 2
[13, 6, 0, 0], # 2
[14, 0, 1, 30], # 2
[ 0, 0, 0, 0], # 2
[ 0, 0, 0, 0], # 3
[ 0, 0, 0, 0], # 3
[ 0, 0, 0, 0], # 0
[ 0, 0, 0, 0], # 0
[ 0, 0, 0, 0], # 0
[ 0, 0, 0, 0], # 0
[13, 2, 0, 11]]) # 1
and sum the rows of the 0th, 1st, 2nd, and 3rd groups?
If equal labels are contiguous, as in your example, then you may use np.add.reduceat:
>>> lab
array([2, 2, 2, 2, 2, 3, 3, 0, 0, 0, 0, 1])
>>> idx = np.r_[0, 1 + np.where(lab[1:] != lab[:-1])[0]]
>>> np.add.reduceat(arr, idx)
array([[110, 7, 5, 117], # 2
[ 0, 0, 0, 0], # 3
[ 0, 0, 0, 0], # 0
[ 13, 2, 0, 11]]) # 1
If they are not contiguous, use np.argsort to reorder the array and labels so that equal labels are next to each other:
>>> i = np.argsort(lab)
>>> lab, arr = lab[i], arr[i, :] # aligns array and labels such that labels
>>> lab # are sorted and equal labels are contiguous
array([0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 3, 3])
>>> idx = np.r_[0, 1 + np.where(lab[1:] != lab[:-1])[0]]
>>> np.add.reduceat(arr, idx)
array([[ 0, 0, 0, 0], # 0
[ 13, 2, 0, 11], # 1
[110, 7, 5, 117], # 2
[ 0, 0, 0, 0]]) # 3
Alternatively, use groupby from the pandas library:
>>> pd.DataFrame(arr).groupby(lab).sum().values
array([[ 0, 0, 0, 0],
[ 13, 2, 0, 11],
[110, 7, 5, 117],
[ 0, 0, 0, 0]])