I'm trying to create a social graph using NetworkX. In theory (as I think) everything should work, but in practice it works incorrectly.
I have information about some groups in the following format:
members = {'Group Name 1': [User 1 ID, User 2 ID, ...], ..., 'Group Name N': [User 1 ID, ..., User K ID]}
For example:
members = {'Group 1' : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
           'Group 2' : [10, 11, 12, 13, 14, 9],
           'Group 3' : [21, 22, 23, 24]}
As output I need a graph in which:
Vertices - social groups
Edges - the existence of common subscribers (user IDs)
Vertex size - user count
Distance between vertices - number of common subscribers (user IDs)
My code:
matrix = {}
for i in members:
    for j in members:
        if i != j:
            matrix[i+j] = len(set(members[i]) & set(members[j])) * 1.0 / min(len(set(members[i])), len(set(members[j])))
max_matrix = max(matrix.values())
min_matrix = min(matrix.values())
for i in matrix:
    matrix[i] = (matrix[i] - min_matrix) / (max_matrix - min_matrix)
g = networkx.Graph(directed=False)
for i in members:
    for j in members:
        if i != j:
            g.add_edge(i, j, weight=matrix[i+j])
members_count = {x: len(members[x]) for x in members}
max_value = max(members_count.values()) * 1.0
size = []
max_size = 900
min_size = 100
for node in g.nodes():
    size.append(((members_count[node] / max_value) * max_size + min_size) * 10)
import matplotlib.pyplot as plt

pos = networkx.spring_layout(g)
plt.figure(figsize=(20, 20))
networkx.draw_networkx(g, pos, node_size=size, width=0.5, font_size=8)
plt.axis('off')
plt.show()
BUT I can't understand why edges are drawn for groups which have no common IDs.
NetworkX uses weight only as an edge attribute; whether an edge exists or not does not depend on its weight. In other words, edges with weight 0 still count as edges, and the drawing function will display them.
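If you only want edges between groups that actually share subscribers, a minimal fix (a sketch reusing your matrix dict) is to skip pairs with an empty intersection when adding edges:

g = networkx.Graph()
for i in members:
    for j in members:
        # only connect groups that share at least one user ID
        if i != j and set(members[i]) & set(members[j]):
            g.add_edge(i, j, weight=matrix[i+j])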
I am trying to optimize a function that maximizes the correlation between two (pandas) time series arrays (X and Y). This is done by using three parameters (a, b, c) and a third time series array (Z). The Z array is used to reindex the values in the X array (based on the parameters a, b, c) in such a way as to maximize the correlation of the reindexed X array (Xnew) with the Y array.
Below is some pseudo-code to demonstrate what I am trying to do. I have attempted this using LMfit and scipy.optimize, but I am not sure how to make this task work in those packages. For example, in LMfit, if I try to minimize the MyOpt function (which passes back a single value of the correlation metric), it complains that I have more parameters than outputs. However, if I pass back the time series of the correlation metric (diff), the parameter values remain fixed at their input values.
I know the reindexing function I am using works, because the rather crude method in the code below gives significant changes in the mean (diff) metric passed back.
My knowledge of these optimization packages is not up to scratch for this job, so if anyone has a suggestion on how to tackle this, I would be grateful.
def GetNewIndex(Z, a, b, c):
    old_index = np.arange(0, len(Z))
    index_adj = some_func(a, b, c)
    new_index = old_index + index_adj
    max_old = np.max(old_index)
    new_index[new_index > max_old] = max_old
    new_index[new_index < 0] = 0
    return new_index
def MyOpt(params, X, Y, Z):
    a = params['A']
    b = params['B']
    c = params['C']
    # estimate lag (in samples) based on ambient RH
    new_index = GetNewIndex(Z, a, b, c)
    # assign old values to new locations and convert back to a pandas series
    Xnew = np.take(X.values, new_index)
    Xnew = pd.Series(Xnew, index=X.index)
    cc = Y.rolling(1201, center=True).corr(Xnew)
    cc = cc.interpolate(limit_direction='both', limit_area=None)
    diff = 1 - np.abs(cc)
    return np.mean(diff)
#==================================================
X = some long pandas time series data
Y = some long pandas time series data
Z = some long pandas time series data
As = [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2]
Bs = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]
Cs = [5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6]
outs = []
for A, B, C in zip(As, Bs, Cs):
    params = {'A': A, 'B': B, 'C': C}
    out = MyOpt(params, X, Y, Z)
    outs.append(out)
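For reference, here is a minimal sketch of how a scalar objective like MyOpt could be handed to scipy.optimize.minimize; the flat-vector wrapper, the starting point x0, and the choice of Nelder-Mead are my own assumptions, not a tested solution:

from scipy.optimize import minimize

def objective(p, X, Y, Z):
    # adapt MyOpt's dict-based signature to the flat parameter vector scipy expects
    params = {'A': p[0], 'B': p[1], 'C': p[2]}
    return MyOpt(params, X, Y, Z)

res = minimize(objective, x0=[1, 0, 5], args=(X, Y, Z), method='Nelder-Mead')
print(res.x)  # best-fit values for a, b, c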
I have two lists
l1 = [[['Arsenal F.C.']],[['Chelsea F.C.']],[['FC Barcelona']], [['FC Barcelona'], ['NFL']],[['Formula E']], [['Formula E'], ['NBA']], [['Hashtag United F.C.']],[['India National Cricket Team']], [['J1 League']],[['Liverpool F.C.']],[['Manchester United F.C.']], [['Manchester United F.C.'], ['LaLiga']], [['Manchester United F.C.'], ['LaLiga'], ['Real Madrid C.F.']]]
l2 = [2, 1, 5, 1, 4, 1, 1, 2, 1, 1, 3, 1, 1]
l1 has the team-name combinations and l2 has the frequency of occurrence of each combination. I want to visualize this with something like a bar chart where the x-axis has the team names and the y-axis has the respective frequencies. My code looks like the following:
fig, ax = plt.subplots(figsize=(30, 12))
_ = ax.set_title("combination searched")
_ = ax.bar(l1, l2)
_ = ax.set_xlabel("teams")
_ = ax.set_ylabel("No of times combination is searched")
plt.show()
I also wanted to get the teams as xticks, but I got the following error while plotting:
TypeError: the dtypes of parameters x (object) and width (float64) are incompatible
This is working for me now. The TypeError came from passing nested lists as the x values; converting each entry to a string gives matplotlib plain labels it can plot:
l3 = [str(v) for v in unique]   # `unique` holds the combinations (l1 above)
plt.figure(figsize=(50, 30))
plt.barh(l3, counts)            # `counts` holds the frequencies (l2 above)
plt.show()
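If the stringified nested lists make the tick labels hard to read, a variant (my own sketch, assuming l1 and l2 from the question) joins the team names first:

labels = [' + '.join(team[0] for team in combo) for combo in l1]
plt.figure(figsize=(12, 8))
plt.barh(labels, l2)
plt.xlabel("No of times combination is searched")
plt.show()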
I have an undirected graph and I'm looking for a way to remove the minimum weight edge from every node. I tried several methods but they all seem to fail.
Given a complete graph
>>> G = nx.complete_graph(n=5)
>>> for (u, v) in G.edges():
...     G.edges[u, v]['weight'] = random.randint(0, 10)
To take the minimum weight edge incident to a node and then remove it, you can do as follows:
>>> for u in G.nodes():
...     min_weight_edge = min(G.edges(u), key=lambda x: G.get_edge_data(x[0], x[1])["weight"])
...     G.remove_edge(*min_weight_edge)
...
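One caveat: because edges are removed while iterating, a node visited late in the loop may have no incident edges left, and min() then raises a ValueError on the empty edge view. A defensive variant (my own sketch) skips such nodes:

for u in G.nodes():
    incident = list(G.edges(u, data="weight"))
    if not incident:
        # every edge at u was already removed in an earlier iteration
        continue
    u_min, v_min, _ = min(incident, key=lambda e: e[2])
    G.remove_edge(u_min, v_min)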
First create a complete graph with random weights:
g = nx.complete_graph(5)
for (u, v, w) in g.edges(data=True):
    w['weight'] = random.randint(0, 10)
Option 1: iterate over the nodes and remove the minimum weight edge.
for n in g.nodes():
    min_weight = (-1, -1, float("inf"))
    for e in g.edges(nbunch=n, data="weight"):
        # print(e)
        if min_weight[2] > e[2]:
            min_weight = e
    print(min_weight)
    g.remove_edge(min_weight[0], min_weight[1])
Option 2: remove the edges at the end.
Only remove the edges at the end, checking whether each edge is already in the set of edges to be removed.
edges_to_remove = set()
for n in g.nodes():
    min_weight = (-1, -1, float("inf"))
    for e in g.edges(nbunch=n, data="weight"):
        # print(e)
        if min_weight[2] > e[2]:
            min_weight = e
    if (min_weight[1], min_weight[0]) not in edges_to_remove:
        print(min_weight)
        edges_to_remove.add((min_weight[0], min_weight[1]))
for e in edges_to_remove:
    g.remove_edge(*e)
Notice that these two solutions yield different results:
For the same graph:
Edges (u,v,weight) removed using Option 1:
(0, 3, 3)
(1, 2, 0)
(2, 3, 7)
(3, 1, 8)
(4, 1, 3)
Edges (u,v,weight) removed using Option 2:
(0, 3, 3)
(1, 2, 0)
(4, 1, 3)
The first option removes the smallest-weight edge for each node, depending on the iteration order: if a node's smallest-weight edge has already been removed, it removes the next smallest-weight edge instead. It always removes as many edges as there are nodes.
The second option only removes the smallest-weight edge for each node: if the smallest edge of a given node has already been removed, it does not remove any further edge for that node.
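For what it's worth, a compact variant of Option 2 (my own sketch) lets min() with a key find each node's lightest edge, while a set of sorted tuples handles the duplicate check:

edges_to_remove = {
    # sort each (u, v) pair so (u, v) and (v, u) collapse to one entry
    tuple(sorted(min(g.edges(n, data="weight"), key=lambda e: e[2])[:2]))
    for n in g.nodes()
}
g.remove_edges_from(edges_to_remove)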
I am trying to insert spacing between two specific bars but cannot find any easy way to do this. I can manually add a dummy row with 0 height to create an empty space, but that doesn't give me control of how wide the space should be. Is there a more programmatic method I can use to control the spacing between bars at any position?
Example Code:
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
mydict = {
    'Event': ['Running', 'Swimming', 'Biking', '', 'Hiking', 'Jogging'],
    'Completed': [2, 4, 3, 0, 7, 9],
    'Participants': [10, 20, 35, 0, 10, 20]}
df = pd.DataFrame(mydict).set_index('Event')
df = df.assign(Completion=(df.Completed / df.Participants) * 100)
plt.subplots(figsize=(5, 4))
print(df.index)
ax = sns.barplot(x=df.Completion, y=df.index, color="orange", orient='h')
plt.xticks(rotation=60)
plt.tight_layout()
plt.show()
Example DataFrame Output:
          Completed  Participants  Completion
Event
Running           2            10   20.000000
Swimming          4            20   20.000000
Biking            3            35    8.571429
                  0             0         NaN
Hiking            7            10   70.000000
Jogging           9            20   45.000000
Example output (blue arrows added outside of code to show where the empty row was added):
I think you can access the positions of the boxes and the names of the labels, then modify them. You may find a more general way depending on your use case, but this works for the given example.
import numpy as np

# define a function to add space starting at a specific label
def add_space_after(ax, label_shift='', extra_space=0):
    bool_space = False
    # get the position of the current ticks
    ticks_position = np.array(ax.get_yticks()).astype(float)
    # iterate over the boxes/labels
    for i, (patch, label) in enumerate(zip(ax.patches, ax.get_yticklabels())):
        # once the label that starts the shift is found
        if label.get_text() == label_shift:
            bool_space = True
        # reposition the boxes and the labels afterward
        if bool_space:
            patch.set_y(patch.get_y() + extra_space)
            ticks_position[i] += extra_space
    # in the case where spacing was added
    if bool_space:
        ax.set_yticks(ticks_position)
        ax.set_ylim([ax.get_ylim()[0] + extra_space, ax.get_ylim()[1]])
# note: no more blank row
mydict = {
    'Event': ['Running', 'Swimming', 'Biking', 'Hiking', 'Jogging'],
    'Completed': [2, 4, 3, 7, 9],
    'Participants': [10, 20, 35, 10, 20]}
df = pd.DataFrame(mydict).set_index('Event')
df = df.assign(Completion=(df.Completed / df.Participants) * 100)
ax = sns.barplot(x=df.Completion, y=df.index, color="orange", orient='h')
plt.xticks(rotation=60)
plt.tight_layout()
#use the function
add_space_after(ax, 'Hiking', 0.6)
plt.show()
I have a matrix (2d numpy ndarray, to be precise):
A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]])
And I want to roll each row of A independently, according to roll values in another array:
r = np.array([2, 0, -1])
That is, I want to do this:
print(np.array([np.roll(row, x) for row, x in zip(A, r)]))
[[0 0 4]
 [1 2 3]
 [0 5 0]]
Is there a way to do this efficiently? Perhaps using fancy indexing tricks?
Sure, you can do it using advanced indexing. Whether it is the fastest way probably depends on your array size (if your rows are large it may not be):
rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]]

# Always use a negative shift, so that column_indices are valid.
# (could also use a modulo operation)
r[r < 0] += A.shape[1]
column_indices = column_indices - r[:, np.newaxis]

result = A[rows, column_indices]
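As the comment hints, the in-place adjustment of r can also be replaced with a modulo operation, which leaves r untouched (a small variant of the same idea):

rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]]
# modulo maps negative shifts into the valid column range
column_indices = (column_indices - r[:, np.newaxis]) % A.shape[1]
result = A[rows, column_indices]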
numpy.lib.stride_tricks.as_strided stricks (abbrev pun intended) again!
Speaking of fancy indexing tricks, there's the infamous np.lib.stride_tricks.as_strided. The idea/trick is to take a sliced portion starting from the first column up to the second-to-last one and concatenate it at the end. This ensures that we can stride in the forward direction as needed to leverage np.lib.stride_tricks.as_strided, and thus avoid the need to actually roll back. That's the whole idea!
Now, in terms of actual implementation, we use scikit-image's view_as_windows to elegantly use np.lib.stride_tricks.as_strided under the hood. Thus, the final implementation would be:
from skimage.util.shape import view_as_windows as viewW

def strided_indexing_roll(a, r):
    # Concatenate with sliced version to cover all rolls
    a_ext = np.concatenate((a, a[:, :-1]), axis=1)
    # Get sliding windows; use advanced indexing to select appropriate ones
    n = a.shape[1]
    return viewW(a_ext, (1, n))[np.arange(len(r)), (n - r) % n, 0]
Here's a sample run:

In [327]: A = np.array([[4, 0, 0],
     ...:               [1, 2, 3],
     ...:               [0, 0, 5]])

In [328]: r = np.array([2, 0, -1])

In [329]: strided_indexing_roll(A, r)
Out[329]:
array([[0, 0, 4],
       [1, 2, 3],
       [0, 5, 0]])
Benchmarking
# @seberg's solution
def advindexing_roll(A, r):
    rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]]
    r[r < 0] += A.shape[1]
    column_indices = column_indices - r[:, np.newaxis]
    return A[rows, column_indices]
Let's do some benchmarking on an array with a large number of rows and columns:
In [324]: np.random.seed(0)
     ...: a = np.random.rand(10000, 1000)
     ...: r = np.random.randint(-1000, 1000, (10000))

# @seberg's solution
In [325]: %timeit advindexing_roll(a, r)
10 loops, best of 3: 71.3 ms per loop

# Solution from this post
In [326]: %timeit strided_indexing_roll(a, r)
10 loops, best of 3: 44 ms per loop
In case you want a more general solution (dealing with any shape and with any axis), I modified @seberg's solution:
def indep_roll(arr, shifts, axis=1):
    """Apply an independent roll for each dimension of a single axis.

    Parameters
    ----------
    arr : np.ndarray
        Array of any shape.
    shifts : np.ndarray
        How many positions to shift each slice. Shape: `(arr.shape[axis],)`.
    axis : int
        Axis along which elements are shifted.
    """
    arr = np.swapaxes(arr, axis, -1)
    all_idcs = np.ogrid[[slice(0, n) for n in arr.shape]]
    # Convert to a positive shift
    shifts[shifts < 0] += arr.shape[-1]
    all_idcs[-1] = all_idcs[-1] - shifts[:, np.newaxis]
    result = arr[tuple(all_idcs)]
    arr = np.swapaxes(result, -1, axis)
    return arr
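A quick usage sketch on the 2D example from the question; note that indep_roll modifies shifts in place, so I pass a copy:

A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]])
r = np.array([2, 0, -1])
print(indep_roll(A, r.copy(), axis=1))
# [[0 0 4]
#  [1 2 3]
#  [0 5 0]]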
I implemented a pure numpy.lib.stride_tricks.as_strided solution as follows:
from numpy.lib.stride_tricks import as_strided

def custom_roll(arr, r_tup):
    m = np.asarray(r_tup)
    arr_roll = arr[:, [*range(arr.shape[1]), *range(arr.shape[1]-1)]].copy()  # need `copy`
    strd_0, strd_1 = arr_roll.strides
    n = arr.shape[1]
    result = as_strided(arr_roll, (*arr.shape, n), (strd_0, strd_1, strd_1))
    return result[np.arange(arr.shape[0]), (n - m) % n]
A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]])
r = np.array([2, 0, -1])

out = custom_roll(A, r)

Out[789]:
array([[0, 0, 4],
       [1, 2, 3],
       [0, 5, 0]])
By using a fast Fourier transform we can apply a transformation in the frequency domain and then use the inverse fast Fourier transform to obtain the row shift.
So this is a pure numpy solution that takes only one line:
import numpy as np
from numpy.fft import fft, ifft

# The row-shift function using the fast Fourier transform
#   rshift(A, r), where A is a 2D array and r is the row-shift vector
def rshift(A, r):
    return np.real(ifft(fft(A, axis=1) * np.exp(2 * 1j * np.pi / A.shape[1] * r[:, None] * np.r_[0:A.shape[1]][None, :]), axis=1).round())
This applies a left shift, but we can simply negate the exponent of the exponential to turn the function into a right-shift function:
ifft(fft(...)*np.exp(-2*1j...)
It can be used like this:

# Example:
A = np.array([[1, 2, 3, 4],
              [1, 2, 3, 4],
              [1, 2, 3, 4]])
r = np.array([1, -1, 3])
print(rshift(A, r))
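Assuming the shift theorem applies as described, this should print each row left-shifted by the corresponding entry of r:

[[2. 3. 4. 1.]
 [4. 1. 2. 3.]
 [4. 1. 2. 3.]]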
Building on Divakar's excellent answer, you can apply this logic to a 3D array easily (which was the problem that brought me here in the first place). Here's an example - basically flatten your data, roll it, and reshape it after:
def applyroll_30(cube, threshold=25, offset=500):
    flattened_cube = cube.copy().reshape(cube.shape[0]*cube.shape[1], cube.shape[2])
    roll_matrix = calc_roll_matrix_flattened(flattened_cube, threshold, offset)
    rolled_cube = strided_indexing_roll(flattened_cube, roll_matrix, cube_shape=cube.shape)
    rolled_cube = rolled_cube.reshape(cube.shape[0], cube.shape[1], cube.shape[2])
    return rolled_cube
def calc_roll_matrix_flattened(cube_flattened, threshold, offset):
    """Calculates the number of positions along the time axis we need to shift
    elements in order to trigger the data.
    Returns a 1D numpy array of X*Y elements.
    """
    # argmax(...) finds the position in the cube (3d) where we are above threshold
    roll_matrix = np.argmax(cube_flattened > threshold, axis=1) + offset
    # ensure we don't have an index out of bounds
    roll_matrix[roll_matrix > cube_flattened.shape[1]] = cube_flattened.shape[1]
    return roll_matrix
from skimage.util.shape import view_as_windows as viewW

def strided_indexing_roll(cube_flattened, roll_matrix_flattened, cube_shape):
    # Negate the rolls, otherwise we shift in the wrong direction for my application
    roll_matrix_flattened = -1 * roll_matrix_flattened
    # Concatenate with sliced version to cover all rolls
    a_ext = np.concatenate((cube_flattened, cube_flattened[:, :-1]), axis=1)
    # Get sliding windows; use advanced indexing to select appropriate ones
    n = cube_flattened.shape[1]
    result = viewW(a_ext, (1, n))[np.arange(len(roll_matrix_flattened)), (n - roll_matrix_flattened) % n, 0]
    result = result.reshape(cube_shape)
    return result
Divakar's answer doesn't do justice to how much more efficient this is on a large cube of data. I've timed it on 400x400x2000 data formatted as int8. An equivalent for-loop takes ~5.5 seconds, Seberg's answer ~3.0 seconds, and strided_indexing_roll ~0.5 seconds.