networkx - can't remove_node from graph - python-3.x

I have a G = nx.DiGraph() whose nodes and edges are the following:
G.nodes() = ['10.2.110.1', '10.2.25.65', '10.2.94.87', '10.2.20.209', '10.2.6.206', '10.2.94.55', '10.2.182.10', '10.2.94.86', '10.2.20.2', '10.2.20.1', '10.2.94.94']
G.edges() = [('10.2.110.1', '10.2.20.2'), ('10.2.110.1', '10.2.20.1'), ('10.2.25.65', '10.2.6.206'), ('10.2.94.87', '10.2.94.55'), ('10.2.20.209', '10.2.110.1'), ('10.2.94.55', '10.2.20.209'), ('10.2.182.10', '10.2.182.10'), ('10.2.94.86', '10.2.94.87'), ('10.2.20.2', '10.2.25.65'), ('10.2.20.1', '10.2.182.10'), ('10.2.94.94', '10.2.94.86')]
The above produces the following topology.
As you can see, node_94 is green because it is the starting node. Both node_10 and node_206 are the farEnds.
I want to remove nodes from the graph depending on how many hops away they are from node_94's farEnds.
I have this function which tries to remove nodes depending on how far a node is from a given farEnd.
def getHopToNH(G):
    labelList = {}
    nodes = G
    for startNode in nodes.nodes():
        try:
            farInt = nx.get_node_attributes(nodes,'farInt')[startNode]
        except:
            farInt = 'NA'
        try:
            p = min([len(nx.shortest_path(nodes,source=startNode,target=end)) for end in farInt])
        except:
            p = 0
        if p < 7:
            labelList = {**labelList,**{str(startNode):'node_'+str(startNode).split(".")[3]}}
        else:
            nodes.remove_node(startNode)
    return labelList,nodes
However, when running that function, I get the following error:
File "trace_1_7.py", line 87, in getHopToNH
for startNode in nodes.nodes():
RuntimeError: dictionary changed size during iteration
The problem arises with the nodes.remove_node(startNode) call. If I remove that line, the code works fine and produces the plot you can see above.
How can I accomplish the removal based on the number of hops towards a farEnd?
Thanks!
Lucas

Since networkx internally represents graphs using dicts, when iterating over a graph's nodes we are iterating over the keys of a dictionary (the dictionary that maps each node to its attributes). Using remove_node changes the size of this dictionary, which is not allowed while we are iterating over its keys, hence the RuntimeError.
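The same error can be reproduced with a plain dict, independently of networkx:

d = {'a': 1, 'b': 2}
for k in d:
    if k == 'a':
        del d[k]   # RuntimeError: dictionary changed size during iteration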
To remove nodes, we maintain a list containing the nodes we want to remove, then remove the nodes in this list after the for loop.
def getHopToNH(G):
    labelList = {}
    nodes = G
    nodes_to_remove = []
    for startNode in nodes.nodes():
        try:
            farInt = nx.get_node_attributes(nodes,'farInt')[startNode]
        except:
            farInt = 'NA'
        try:
            p = min([len(nx.shortest_path(nodes,source=startNode,target=end)) for end in farInt])
        except:
            p = 0
        if p < 7:
            labelList = {**labelList,**{str(startNode):'node_'+str(startNode).split(".")[3]}}
        else:
            nodes_to_remove.append(startNode)
    nodes.remove_nodes_from(nodes_to_remove)
    return labelList,nodes
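As a side note (not part of the original answer), another common pattern is to iterate over a snapshot of the node view with list(G.nodes()), which also makes removal inside the loop safe. A minimal sketch with a hypothetical removal criterion:

import networkx as nx

G = nx.DiGraph()
G.add_edges_from([(1, 2), (2, 3), (3, 4)])

# list() copies the node view, so we are no longer iterating the live dict
for node in list(G.nodes()):
    if G.out_degree(node) == 0:   # example criterion only
        G.remove_node(node)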

Related

Display and save contents of a data frame with multi-dimensional array elements

I have created and updated a pandas dataframe to fill details of a section of an image and its corresponding features.
slice_sq_dim = 200
df_slice = pd.DataFrame({'Sample': str,
                         'Slice_ID': int,
                         'Slice_Array': [np.zeros((slice_sq_dim,slice_sq_dim))],
                         'Interface_Array': [np.zeros((slice_sq_dim,slice_sq_dim))],
                         'Slice_Array_Threshold': [np.zeros((slice_sq_dim,slice_sq_dim))]})
I added the individual elements of this dataframe by updating the value of each cell through row-by-row iteration. Once I have completed my dataframe (with around 200 rows), I cannot seem to display more than the first row of its contents. I assume this is due to the inclusion of multi-dimensional numpy arrays (image slices) as a component. I have also exported this data into a JSON file so that it can act as an input file during the next run. The following code shows exactly how I tried this and also how I fill my dataframe.
Slices_data_file = os.path.join(os.getcwd(), "Slices_dataframe.json")
if os.path.isfile(Slices_data_file):
    print("Using the saved data of slices from previous run..")
    df_slice = pd.read_json(Slices_data_file, orient='records')
else:
    print("No previously saved slice data found..")
    no_of_slices = 20
    for index, row in df_files.iterrows():  # df_files is the previous dataframe with image path details
        path = row['image_path']
        slices, slices_thresh, slices_interface = slice_image(path, slice_sq_dim, no_of_slices)
        # each of the output is a list of 20 image slices
        for n, arr in enumerate(slices):
            indx = (indx_row - 1) * no_of_slices + n
            df_slice.Sample[indx] = path
            df_slice.Slice_ID[indx] = n+1
            df_slice.Slice_Array[indx] = arr
            df_slice.Interface_Array[indx] = slices_interface[n]
            df_slice.Slice_Array_Threshold[indx] = slices_thresh[n]
    df_slice.to_json(Slices_data_file, orient='records')
I would like to do the following things:
Complete the dataframe with the possibility to add further columns of scalar values
View the dataframe normally, with multiple rows, and iterate over it using functions such as df_slice.iterrows(), which currently does not work
Save and reuse the database so as to avoid the repeated and time-consuming operations
Any advice or better suggestions?
After a while of searching, I found some topics that helped. pd.Series was very appropriate here. Also, I think there was a "SettingWithCopyWarning" that I chose to ignore somewhere in between. The final code is given below:
Slices_data_file = os.path.join(os.getcwd(), "Slices_dataframe.json")
if os.path.isfile(Slices_data_file):
    print("Using the saved data of slices from previous run..")
    df_slice = pd.read_json(Slices_data_file, orient = 'columns')
else:
    print("No previously saved slice data found..")
    Sample_col = []
    Slice_ID_col = []
    Slice_Array_col = []
    Interface_Array_col = []
    Slice_Array_Threshold_col = []
    no_of_slices = 20
    slice_sq_dim = 200
    df_slice = pd.DataFrame({'Sample': str,
                             'Slice_ID': int,
                             'Slice_Array': [],
                             'Interface_Array': [],
                             'Slice_Array_Threshold': []})
    for index, row in df_files.iterrows():
        path = row['image_path']
        slices, slices_thresh, slices_interface = slice_image(path, slice_sq_dim, no_of_slices)
        for n, arr in enumerate(slices):
            Sample_col.append(Image_Unique_ID)
            Slice_ID_col.append(n+1)
            Slice_Array_col.append(arr)
            Interface_Array_col.append(slices_interface[n])
            Slice_Array_Threshold_col.append(slices_thresh[n])
        print("Slicing -> ", Image_Unique_ID, " Complete")
    df_slice['Sample'] = pd.Series(Sample_col)
    df_slice['Slice_ID'] = pd.Series(Slice_ID_col)
    df_slice['Slice_Array'] = pd.Series(Slice_Array_col)
    df_slice['Interface_Array'] = pd.Series(Interface_Array_col)
    df_slice['Slice_Array_Threshold'] = pd.Series(Slice_Array_Threshold_col)
    df_slice.to_json(os.path.join(os.getcwd(), "Slices_dataframe.json"), orient='columns')
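One caveat worth noting: when the JSON file is read back with pd.read_json, the array columns typically come back as nested Python lists rather than numpy arrays, so a conversion step along these lines may be needed (a sketch, assuming the column names used above):

import numpy as np

for col in ['Slice_Array', 'Interface_Array', 'Slice_Array_Threshold']:
    df_slice[col] = df_slice[col].apply(np.array)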

Create dictionary with count of values from list

I'm trying to figure out how to create a dictionary with the key as the school and values the wins-losses-draws, based on each item in the list. For example, calling my_dict['Clemson'] would return the string "1-1-1"
"
team_score_list =[['Georgia', 'draw'], ['Duke', 'loss'], ['Virginia Tech', 'win'], ['Virginia', 'loss'], ['Clemson', 'loss'], ['Clemson', 'win'], ['Clemson', 'draw']]
The output for the above list should be the following dictionary:
{'Georgia': '0-0-1', 'Duke': '0-1-0', 'Virginia Tech': '1-0-0', 'Virginia': '0-1-0', 'Clemson': '1-1-1'}
For context, the original data comes from a CSV, where each line is in the form of Date,Opponent,Location,Points For,Points Against.
For example: 2016-12-31,Kentucky,Neutral,33,18.
I've managed to wrangle the data into the above list (albeit probably not in the most efficient manner), however just not exactly sure how to get this into the format above.
Any help would be greatly appreciated!
Not beautiful but this should work.
team_score_list = [
    ["Georgia", "draw"],
    ["Duke", "loss"],
    ["Virginia Tech", "win"],
    ["Virginia", "loss"],
    ["Clemson", "loss"],
    ["Clemson", "win"],
    ["Clemson", "draw"],
]

def gen_dict_lst(team_score_list):
    """Generates dict of lists based on team record"""
    team_score_dict = {}
    for team_record in team_score_list:
        if team_record[0] not in team_score_dict.keys():
            team_score_dict[team_record[0]] = [0, 0, 0]
        if team_record[1] == "win":
            team_score_dict[team_record[0]][0] += 1
        elif team_record[1] == "loss":
            team_score_dict[team_record[0]][1] += 1
        elif team_record[1] == "draw":
            team_score_dict[team_record[0]][2] += 1
    return team_score_dict

def convert_format(score_dict):
    """Formats each list as a string for output validation"""
    output_dict = {}
    for key, value in score_dict.items():
        new_val = []
        for index, x in enumerate(value):
            if index == 2:
                new_val.append(str(x))
            else:
                new_val.append(str(x) + "-")
        new_str = "".join(new_val)
        output_dict[key] = new_str
    return output_dict

score_dict = gen_dict_lst(team_score_list)
out_dict = convert_format(score_dict)
print(out_dict)
You can first make a dictionary and insert/increment the win, loss and draw counts while iterating over the list. Here I have shown a way that uses variable names matching the strings used for win, loss and draw, and then increments the corresponding value in the dictionary using globals()[...] (an idea taken from another answer).
dct={}
for i in team_score_list:
    draw=2
    win=0
    loss=1
    if i[0] in dct:
        dct[i[0]][globals()[i[1]]]+=1
    else:
        dct[i[0]]=[0,0,0]
        dct[i[0]][globals()[i[1]]]=1
You can then convert each list to a string using '-'.join(...) to get the format you want in the dictionary.
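A variant of the same idea that avoids globals() (my own sketch, not part of the answer above) maps the outcome strings to list indices explicitly:

outcome_index = {'win': 0, 'loss': 1, 'draw': 2}   # wins-losses-draws order
dct = {}
for team, outcome in team_score_list:
    dct.setdefault(team, [0, 0, 0])[outcome_index[outcome]] += 1
my_dict = {team: '-'.join(str(c) for c in counts) for team, counts in dct.items()}
# my_dict['Clemson'] -> '1-1-1'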
I now get what you mean:
You could do
a = dict()
f = lambda x,s: str(int(m[x]=='1' or j==s))
for (i,j) in team_score_list:
    m = a.get(i,'0-0-0')
    a[i] = f"{f(0,'win')}-{f(2,'draw')}-{f(4,'loss')}"
{'Georgia': '0-1-0',
'Duke': '0-0-1',
'Virginia Tech': '1-0-0',
'Virginia': '0-0-1',
'Clemson': '1-1-1'}
Now this is an answer only for this example. If you had more data, it would be better to use a list and then join at the end, e.g.
b = dict()
g = lambda x,s: str(int(m[x]) + (j==s))
for (i,j) in team_score_list:
    m = b.get(i,[0,0,0])
    b[i] = [g(0,"win"),g(1,"draw"),g(2,"loss")]
{key:'-'.join(val) for key,val in b.items()}
{'Georgia': '0-1-0',
'Duke': '0-0-1',
'Virginia Tech': '1-0-0',
'Virginia': '0-0-1',
'Clemson': '1-1-1'}

How to concatenate data frames from two different dictionaries into a new data frame in python?

This is my sample code
dataset_current=dataset_seq['Motor_Current_Average']
dataset_consistency=dataset_seq['Consistency_Average']

#technique with non-overlapping the values(for current)
dataset_slide=dataset_current.tolist()
from window_slider import Slider
import numpy
list = numpy.array(dataset_slide)
bucket_size = 336
overlap_count = 0
slider = Slider(bucket_size,overlap_count)
slider.fit(list)
empty_dictionary = {}
count = 0
while True:
    count += 1
    window_data = slider.slide()
    empty_dictionary['df_current%s'%count] = window_data
    empty_dictionary['df_current%s'%count] = pd.DataFrame(empty_dictionary['df_current%s'%count])
    empty_dictionary['df_current%s'%count] = empty_dictionary['df_current%s'%count].rename(columns={0: 'Motor_Current_Average'})
    if slider.reached_end_of_list(): break
locals().update(empty_dictionary)

#technique with non-overlapping the values(for consistency)
dataset_slide_consistency=dataset_consistency.tolist()
list = numpy.array(dataset_slide_consistency)
slider_consistency = Slider(bucket_size,overlap_count)
slider_consistency.fit(list)
empty_dictionary_consistency = {}
count_consistency = 0
while True:
    count_consistency += 1
    window_data_consistency = slider_consistency.slide()
    empty_dictionary_consistency['df_consistency%s'%count_consistency] = window_data_consistency
    empty_dictionary_consistency['df_consistency%s'%count_consistency] = pd.DataFrame(empty_dictionary_consistency['df_consistency%s'%count_consistency])
    empty_dictionary_consistency['df_consistency%s'%count_consistency] = empty_dictionary_consistency['df_consistency%s'%count_consistency].rename(columns={0: 'Consistency_Average'})
    if slider_consistency.reached_end_of_list(): break
locals().update(empty_dictionary_consistency)

import pandas as pd
output_current = {}
increment = 0
while True:
    increment += 1
    output_current['dataframe%s'%increment] = pd.concat([empty_dictionary_consistency['df_consistency%s'%count_consistency],empty_dictionary['df_current%s'%count]],axis=1)
My question is: I have two dictionaries, "empty_dictionary_consistency" and "empty_dictionary", each containing 79 data frames. I want to create a new data frame for each pair, concatenating df1 from empty_dictionary_consistency with df1 from empty_dictionary, and so on up to df79 from empty_dictionary_consistency with df79 from empty_dictionary. I tried using a while loop to increment the index, but it does not show any output.
output_current = {}
increment = 0
while True:
    increment += 1
    output_current['dataframe%s'%increment] = pd.concat([empty_dictionary_consistency['df_consistency%s'%count_consistency],empty_dictionary['df_current%s'%count]],axis=1)
Can anyone help me with this? How can I do it?
I am not near my computer now, so I cannot test the code, but it seems the problem is in the indices. In the last loop you increment a variable called 'increment' on every iteration, but you still use the index variables left over from the previous loops to look up the dictionaries you want to concatenate. Try changing the variable you use to index all the dictionaries to 'increment'.
And one more thing - I can't see where this loop is supposed to finish.
UPD
I mean this:
length = len(empty_dictionary_consistency)
increment = 0
while increment < length:
    increment += 1
    output_current['dataframe%s'%increment] = pd.concat([empty_dictionary_consistency['df_consistency%s'%increment],empty_dictionary['df_current%s'%increment]],axis=1)
While iterating over your dictionaries you should use the variable you increment as the index into all three dictionaries. And since you do not use a Slider object in this loop, you have to stop it yourself once the first dictionary is exhausted.
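Putting that together, a bounded loop along these lines should work (a sketch assuming, as in the question, that the keys run from df_consistency1/df_current1 up to df_consistency79/df_current79):

output_current = {}
for i in range(1, len(empty_dictionary_consistency) + 1):
    # pair up the i-th frame from each dictionary and concatenate column-wise
    output_current['dataframe%s' % i] = pd.concat(
        [empty_dictionary_consistency['df_consistency%s' % i],
         empty_dictionary['df_current%s' % i]],
        axis=1)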

Please help me to fix the "list index out of range" error

I wrote a program to calculate the ratio of the minor (under 20 years of age) population in each prefecture of Japan, and it keeps producing the error "list index out of range" at line 19: ratio =(agerange[1]+agerange[2]+agerange[3]+agerange[4])/population*100.0
Link to csv: https://drive.google.com/open?id=1uPSMpgHw0csRx1UgAJzRLit9p6NrztFY
f=open("population.csv","r")
header=f.readline()
header=header.rstrip("\r\n")
while True:
    line=f.readline()
    if line=="":
        break
    line=line.rstrip("\r\n")
    field=line.split(sep=",")
    population=0
    ratio=0
    agerange=[ "pref" ]
    for age in range(1, len(field)):
        agerange.append(int(field[age]))
        population+=int(field[age])
    ratio =(agerange[1]+agerange[2]+agerange[3]+agerange[4])/population*100.0
    print(field[0],ratio)
On line 17, I assume you meant to do the following:
ratio =(agerange[0]+agerange[1]+agerange[2]+agerange[3])/population*100.0
Next time, please describe your error in more detail.
What you could do instead is get the sums of populations in the required age ranges and then perform the ratio calculation.
In Python, you can use the map function to convert the values in an iterable to ints, and make that into a list.
Once you have the list, you can use the sum function on it, or a part of it.
So, I came up with:
f = open("population.csv","r")
header = f.readline()
header = header.rstrip("\r\n")
while True:
    line = f.readline()
    if line == "":
        break
    line = line.rstrip("\r\n")
    field = line.split(sep=",")
    popData = list(map(int, field[1:]))
    youngPop = sum(popData[:4])
    oldPop = sum(popData[4:])
    ratio = youngPop / (youngPop + oldPop)
    print(field[0].ljust(12), ratio)
f.close()
Which outputs (just showing a portion here):
Hokkaido 0.1544532130777903
Aomori 0.1564945226917058
Iwate 0.16108452950558214
Miyagi 0.16831683168316833
Akita 0.14357429718875503
Yamagata 0.16515426497277677
Fukushima 0.16586921850079744
(I don't really know Python, so there could be some "better" or more conventional way.)
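For comparison, roughly the same calculation using the csv module instead of manual splitting might look like this (a sketch assuming the same layout as population.csv: prefecture name first, then per-age-band counts, with the first four bands covering ages under 20):

import csv

with open("population.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)                      # skip the header row
    for row in reader:
        counts = [int(x) for x in row[1:]]
        young = sum(counts[:4])       # the first four age bands (under 20)
        print(row[0], young / sum(counts) * 100.0)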

Retrieving dict value via hardcoded key, works. Retrieving via computed key doesn't. Why?

I'm generating a common list of IDs by comparing two sets of IDs (the ID sets are from a dictionary, {ID: XML "RECORD" element}). Once I have the common list, I want to iterate over it and retrieve the value corresponding to the ID from a dictionary (which I'll write to disc).
When I compute the common ID list using my diff_comm_checker function, I'm unable to retrieve the dict value the ID corresponds to. It doesn't however fail with a KeyError. I can also print the ID out.
When I hard code the ID in as the common_id value, I can retrieve the dict value.
I.e.
common_ids = diff_comm_checker( list_1, list_2, "text")
# does nothing - no failures
common_ids = ['0603599998140032MB']
#gives me:
0603599998140032MB {'R': '0603599998140032MB'} <Element 'RECORD' at 0x04ACE788>
0603599998140032MB {'R': '0603599998140032MB'} <Element 'RECORD' at 0x04ACE3E0>
So I suspected there was some difference between the strings. I checked both the function output and compared it against the hard-coded values using:
print [(_id, type(_id), repr(_id)) for _id in common_ids][0]
I get exactly the same for both:
>>> ('0603599998140032MB', <type 'str'>, "'0603599998140032MB'")
I have also followed the advice of another question and used difflib.ndiff:
common_ids1 = diff_comm_checker( [x.keys() for x in to_write[0]][0], [x.keys() for x in to_write[1]][0], "text")
common_ids = ['0603599998140032MB']
print "\n".join(difflib.ndiff(common_ids1, common_ids))
>>> 0603599998140032MB
So again, doesn't appear that there's any difference between the two.
Here's a full, working example of the code:
from StringIO import StringIO
import xml.etree.cElementTree as ET
from itertools import chain, islice

def diff_comm_checker(list_1, list_2, text):
    """Checks 2 lists. If no difference, pass. Else return common set between two lists"""
    symm_diff = set(list_1).symmetric_difference(list_2)
    if not symm_diff:
        pass
    else:
        mismatches_in1_not2 = set(list_1).difference( set(list_2) )
        mismatches_in2_not1 = set(list_2).difference( set(list_1) )
        if mismatches_in1_not2:
            mismatch_logger(
                mismatches_in1_not2,"{}\n1: {}\n2: {}".format(text, list_1, list_2), 1, 2)
        if mismatches_in2_not1:
            mismatch_logger(
                mismatches_in2_not1,"{}\n2: {}\n1: {}".format(text, list_1, list_2), 2, 1)
    set_common = set(list_1).intersection( set(list_2) )
    if set_common:
        return sorted(set_common)
    else:
        return "no common set: {}\n".format(text)

def chunks(iterable, size=10):
    iterator = iter(iterable)
    for first in iterator:
        yield chain([first], islice(iterator, size - 1))

def get_elements_iteratively(file):
    """Create unique ID out of image number and case number, return it along with corresponding xml element"""
    tag = "RECORD"
    tree = ET.iterparse(StringIO(file), events=("start","end"))
    context = iter(tree)
    _, root = next(context)
    for event, record in context:
        if event == 'end' and record.tag == tag:
            xml_element_2 = ''
            xml_element_1 = ''
            for child in record.getchildren():
                if child.tag == "IMAGE_NUMBER":
                    xml_element_1 = child.text
                if child.tag == "CASE_NUM":
                    xml_element_2 = child.text
            r_id = "{}{}".format(xml_element_1, xml_element_2)
            record.set("R", r_id)
            yield (r_id, record)
            root.clear()

def get_chunks(file, chunk_size):
    """Breaks XML into chunks, yields dict containing unique IDs and corresponding xml elements"""
    iterable = get_elements_iteratively(file)
    for chunk in chunks(iterable, chunk_size):
        ids_records = {}
        for k in chunk:
            ids_records[k[0]]=k[1]
        yield ids_records

def create_new_xml(xml_list):
    chunk = 5000
    chunk_rec_ids_1 = get_chunks(xml_list[0], chunk)
    chunk_rec_ids_2 = get_chunks(xml_list[1], chunk)
    to_write = [chunk_rec_ids_1, chunk_rec_ids_2]
    ######################################################################################
    ### WHAT'S GOING HERE ??? WHAT'S THE DIFFERENCE BETWEEN THE OUTPUTS OF THESE TWO ? ###
    common_ids = diff_comm_checker( [x.keys() for x in to_write[0]][0], [x.keys() for x in to_write[1]][0], "create_new_xml - large - common_ids")
    #common_ids = ['0603599998140032MB']
    ######################################################################################
    for _id in common_ids:
        print _id
        for gen_obj in to_write:
            for kv_pair in gen_obj:
                if kv_pair[_id]:
                    print _id, kv_pair[_id].attrib, kv_pair[_id]

if __name__ == '__main__':
    xml_1 = """<?xml version="1.0"?><RECORDSET><RECORD><CASE_NUM>140032MB</CASE_NUM><IMAGE_NUMBER>0603599998</IMAGE_NUMBER></RECORD></RECORDSET>"""
    xml_2 = """<?xml version="1.0"?><RECORDSET><RECORD><CASE_NUM>140032MB</CASE_NUM><IMAGE_NUMBER>0603599998</IMAGE_NUMBER></RECORD></RECORDSET>"""
    create_new_xml([xml_1, xml_2])
The problem is not in the type or value of common_ids returned from diff_comm_checker. The problem is that the call to diff_comm_checker - or rather, constructing the arguments for that call - destroys the values of to_write.
If you try this you will see what I mean
common_ids = ['0603599998140032MB']
diff_comm_checker( [x.keys() for x in to_write[0]][0], [x.keys() for x in to_write[1]][0], "create_new_xml - large - common_ids")
This will give the erroneous behavior without using the return value from diff_comm_checker()
This is because to_write is a generator and the call to diff_comm_checker exhausts that generator. The generator is then finished/empty when used in the if-statement in the loop. You can create a list from a generator by using list:
chunk_rec_ids_1 = list(get_chunks(xml_list[0], chunk))
chunk_rec_ids_2 = list(get_chunks(xml_list[1], chunk))
But this may have other implications (memory usage...)
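A minimal illustration of that exhaustion behaviour:

gen = (n for n in range(3))
print(list(gen))   # [0, 1, 2]
print(list(gen))   # [] - the generator is already exhausted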
Also, what is the intention of this construct in diff_comm_checker?
if not symm_diff:
pass
In my opinion nothing will happen there regardless of whether symm_diff is empty or not.
