The task at hand, where I got stuck, is that I have to put the table content of a file into a dictionary-of-dictionaries structure.
The file contains something like this (first six lines of the ASCII file):
Name-----------|Alt name-------|------RA|-----DEC|-----z|---CR|----FX|---FX*|Error|---LX|--NH|ID-|Ref#----
RXCJ0000.1+0816 UGC12890 0.0295 8.2744 0.0396 0.26 5.80 5.39 12.4 0.37 5.9 1,3
RXCJ0001.9+1204 A2692 0.4877 12.0730 0.2033 0.08 1.82 1.81 17.9 3.24 5.1 1
RXCJ0004.9+1142 UGC00032 1.2473 11.7006 0.0761 0.17 3.78 3.68 12.7 0.93 5.3 2,4
RXCJ0005.3+1612 A2703 1.3440 16.2105 0.1164 0.24 4.96 4.94 11.8 2.88 3.7 B 2,5
RXCJ0006.3+1052 a) 1.5906 10.8677 0.1698 0.15 3.28 3.28 19.3 4.05 5.6 1
I can provide a file sample if necessary.
The following code works fine until it comes to storing each line-dict in a second dict.
#!/usr/bin/env python3
from collections import *
from re import *

obsrun = {}
objects = {}

re = compile('\d+.\d\d\d\d')

filename = 'test.asc'
with open(filename, 'r') as f:
    lines = f.readlines()
    for l in lines[2:]:
        # split the read line into a list
        o_bject = l.split()
        #print(o_bject)
        # iterate over each entry and populate the line-dictionary with values of interest
        # what's needed (in col of table): identifier, common name, right ascension, declination
        for k in o_bject:
            objects.__setitem__('id', o_bject[0])
            objects.__setitem__('common_name', o_bject[1])
            # sometimes the common name has blanks, multiple entries or replacements
            if re.match(o_bject[2]):
                objects.__setitem__('ra', float(o_bject[2]))
                objects.__setitem__('dec', float(o_bject[3]))
            else:
                objects.__setitem__('ra', float(o_bject[3]))
                objects.__setitem__('dec', float(o_bject[4]))
        # extract the identifier (name of the object) for use as key
        name = objects.get('id')
        #print(name)
        print(objects)  # *
        # as documented in http://stackoverflow.com/questions/1024847/add-to-a-dictionary-in-python
        obsrun[name] = objects
#print(obsrun)
# getting an ordered dictionary sorted by keys
OrderedDict(sorted(obsrun.items(), key=lambda t: t[0]))  # t[0] keys, t[1] values
What one can see from the console output is that the inner for-loop does what it's supposed to do. That's confirmed by the print(objects) at *.
But when it comes to storing the row-dicts as values in the second dict, it is populated with the same values everywhere. The keys are built correctly.
What I don't understand is that the print() call displays the correct content of "objects", but it is not stored into "obsrun" correctly.
Does the error lie in the dict-view nature, or what did I do wrong?
How should I improve the code?
Thanks in advance,
Christian
You created only one dictionary, so each time through the loop you are modifying the same one.
Move the line
objects = {}
into the for l in lines[2:]: loop. This will create a separate dict for each line of the file.
Also, using __setitem__ directly is unnecessary and makes the code harder to read. Change the lines from objects.__setitem__('id', o_bject[0]) to objects['id'] = o_bject[0].
It's worth pointing out that you don't really need a dict-of-dicts unless you are trying to look up the entries by name. (You don't explain much what the use case is, here.)
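Putting both changes together, a minimal sketch of the fixed loop might look like this (same test.asc layout as in the question; the regex is written as a raw string, but the logic is otherwise unchanged):
#!/usr/bin/env python3
import re
from collections import OrderedDict

coord = re.compile(r'\d+\.\d{4}')  # matches coordinate-like fields such as 0.0295

obsrun = {}
with open('test.asc') as f:
    for l in f.readlines()[2:]:
        fields = l.split()
        objects = {}  # a fresh dict for every line
        objects['id'] = fields[0]
        objects['common_name'] = fields[1]
        # if column 2 already looks like a coordinate, the common name was short
        if coord.match(fields[2]):
            objects['ra'], objects['dec'] = float(fields[2]), float(fields[3])
        else:
            objects['ra'], objects['dec'] = float(fields[3]), float(fields[4])
        obsrun[objects['id']] = objects

obsrun = OrderedDict(sorted(obsrun.items()))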
The one thing that leaps out from your code is that you're using __setitem__ a lot. I think maybe you are coming from C++ or Java, where dictionaries do not have built-in language support. In Python this is not the case: you can say d[key] = value to add an item to a dictionary.
Here's some code to create a list (array) of dictionaries. It would be pretty trivial to make Table a dictionary keyed on one of the fields. I'll leave that for you to figure out. :)
Alternatively, a list is much easier to iterate over than a dict, if your problem is going to be performing computations on the data. So if you have to add up or average up or find the min/max, you probably want this version.
#!/usr/bin/env python3
data = open('test.asc')
header = data.readline().replace('-', '')
Field_names = header.split('|')

Table = []
# Read in the remaining lines, one at a time
for line in data:
    fields = line.split()
    Table.append(dict(zip(Field_names, fields)))

from pprint import pprint
pprint(Table)
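As a quick example of the computations mentioned above, and assuming the header yields a 'z' (redshift) field once the dashes are stripped, the maximum redshift would be:
redshifts = [float(row['z']) for row in Table if 'z' in row]
print(max(redshifts))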
So you're saying that storing "objects" in obsrun just links to "objects" rather than copying its content? So I have to create a fresh inner dict for each line, since only a reference is stored.
You're right about __setitem__. I used it to make it clearer to myself what exactly I'm doing there.
I will try moving objects = {} into the inner for-loop.
Thanks for the answer. Will get back to report if that did the trick.
Update: That did it! Thanks so much, I really got stuck there, but I learned something important about dictionaries: in this case they are just linked, which saves memory too.
cheers,
Christian
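(For later readers, a tiny illustration of that linking behaviour; dict.copy() gives an independent shallow copy:)
d = {'ra': 0.0295}
alias = d        # no copy: both names point at the same dict
snap = d.copy()  # independent shallow copy

d['ra'] = 1.2473
print(alias['ra'])  # 1.2473 -- the alias sees the change
print(snap['ra'])   # 0.0295 -- the copy does not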
Related
I am using ObsPy's _read_segy function to read a SEGY file with the following line of code:
line_1=_read_segy('st1.segy')
However, I have a large number of files in a folder, as follows:
st1.segy
st2.segy
st3.segy
.
.
st700.segy
I want to use a for loop to read the data, but I am new, so can anyone help me in this regard?
Currently I am reading the data with repeated lines, as follows:
line_1 = _read_segy('st1.segy')
line_2 = _read_segy('st2.segy')
The next step is to display the SEGY data using matplotlib, and again I am using the following code on each line individually, which makes it far too repetitive. Can someone help me create a loop to display the data and save the figures?
import numpy as np
import matplotlib.pyplot as plt

data = np.stack(t.data for t in line_1.traces)
vm = np.percentile(data, 99)
plt.figure(figsize=(60, 30))
plt.imshow(data.T, cmap='seismic', vmin=-vm, vmax=vm, aspect='auto')
plt.title('Line_1')
plt.savefig('Line_1.png')
plt.show()
Your kind suggestions will help me a lot as I am a beginner in python programming.
Thank you
If you want to reduce code duplication, you can use functions. And if you want to do something repeatedly, you can use loops. So you can call a function in a loop if you want to do this for all files.
Now, for reading the files in a folder, you can use Python's glob module. Something like below:
import glob, os

def save_fig(in_file_name, out_file_name):
    line_1 = _read_segy(in_file_name)
    data = np.stack(t.data for t in line_1.traces)
    vm = np.percentile(data, 99)
    plt.figure(figsize=(60, 30))
    plt.imshow(data.T, cmap='seismic', vmin=-vm, vmax=vm, aspect='auto')
    plt.title(out_file_name)
    plt.savefig(out_file_name)

segy_files = list(glob.glob(segy_files_path + "/*.segy"))
for index, file in enumerate(segy_files):
    save_fig(file, "Line_{}.png".format(index + 1))
I have not added the other imports here, which you will know to add! segy_files_path is the folder where your files reside.
You just need to dynamically open the files in a loop. Fortunately they all follow the same naming pattern.
N = 700
for n in range(1, N + 1):  # files run from st1.segy to st700.segy
    line_n = _read_segy(f"st{n}.segy")  # Dynamic name.
    data = np.stack(t.data for t in line_n.traces)
    vm = np.percentile(data, 99)
    plt.figure(figsize=(60, 30))
    plt.imshow(data.T, cmap="seismic", vmin=-vm, vmax=vm, aspect="auto")
    plt.title(f"Line_{n}")
    plt.savefig(f"Line_{n}.png")  # save before show(), which clears the current figure
    plt.show()
    plt.close()  # Needed if you don't want to keep 700 figures open.
I'll focus on addressing the file looping, as you said you're new and I'm assuming simple loops are something you'd like to learn about (the first example is sufficient for this).
If you'd like an answer to your second question, it might be worth providing some example data, the output result (graph) of your current attempt, and a description of your desired output. If you provide that reproducible example and clear description of the problem you're having it'd be easier to answer.
Create a list (or other iterable) to hold the file names to read, and another container (maybe a dict) to hold the result of your read_segy.
files = ['st1.segy', 'st2.segy']
lines = {}  # creates an empty dictionary; dictionaries consist of key: value pairs

for f in files:  # f will first be 'st1.segy', then 'st2.segy'
    lines[f] = _read_segy(f)
As stated in the comment by @Guimoute, if you want to dynamically generate the file names, you can build them by appending integers to the base file name:
lines = {}  # creates an empty dictionary; dictionaries have key: value pairs
missing_files = []

for i in range(1, 701):
    f = f"st{i}.segy"  # gives "st1.segy" for i = 1
    try:  # in case one of the files is missing or can't be read
        lines[f] = _read_segy(f)
    except Exception:
        missing_files.append(f)  # store names of missing or unreadable files
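A quick check after the loop, reusing the names from the snippet above (and the .traces attribute from the question):
print(f"Read {len(lines)} files; {len(missing_files)} missing or unreadable")
for name, line in lines.items():
    print(name, len(line.traces))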
import pandas as pd
import nltk
import os

directory = os.listdir(r"C:\...")
x = []
num = 0
for i in directory:
    x.append(pd.read_fwf("C:\\..." + i))
    x[num] = x[num].to_string()
    num += 1  # advance the index so each file is converted
So, once I have a dictionary x = [] populated by read_fwf for each file in my directory:
I want to know how to make every single character lowercase. I am having trouble understanding the syntax and how it applies to a dictionary.
I want to define a filter that I can use to count occurrences of a list of words in this newly defined dictionary, e.g.,
list = [bus, car, train, aeroplane, tram, ...]
Edit: Quick unrelated question:
Is pd.read_fwf the best way to read .txt files? If not, what else could I use?
Any help is very much appreciated. Thanks
Edit 2: Sample data and output that I want:
Sample:
The Horncastle boar's head is an early seventh-century Anglo-Saxon
ornament depicting a boar that probably was once part of the crest of
a helmet. It was discovered in 2002 by a metal detectorist searching
in the town of Horncastle, Lincolnshire. It was reported as found
treasure and acquired for £15,000 by the City and County Museum, where
it is on permanent display.
Required output - changes everything in uppercase to lowercase:
the horncastle boar's head is an early seventh-century anglo-saxon
ornament depicting a boar that probably was once part of the crest of
a helmet. it was discovered in 2002 by a metal detectorist searching
in the town of horncastle, lincolnshire. it was reported as found
treasure and acquired for £15,000 by the city and county museum, where
it is on permanent display.
You shouldn't need to use pandas or dictionaries at all. Just use Python's built-in open() function:
# Open a file in read mode with a context manager
with open(r'C:\path\to\your\file.txt', 'r') as file:
    # Read the file into a string
    text = file.read()

# Use the string's lower() method to make everything lowercase
text = text.lower()
print(text)

# Split text by whitespace into a list of words
word_list = text.split()

# Get the number of elements in the list (the word count)
word_count = len(word_list)
print(word_count)
If you want, you can do it in the reverse order:
# Open a file in read mode with a context manager
with open(r'C:\path\to\your\file.txt', 'r') as file:
    # Read the file into a string
    text = file.read()

# Split text by whitespace into a list of words
word_list = text.split()

# Use a list comprehension to create a new list with lower() applied to each word
lowercase_word_list = [word.lower() for word in word_list]
print(lowercase_word_list)
Using a context manager for this is good since it automatically closes the file for you as soon as it goes out of scope (de-indented from the with statement block). Otherwise you would have to call open() and then remember to close the file yourself with file.close().
I think there are some other benefits to using context managers, but someone please correct me if I'm wrong.
I think what you are looking for is dictionary comprehension:
# Python 3
new_dict = {key: val.lower() for key, val in old_dict.items()}
# Python 2
new_dict = {key: val.lower() for key, val in old_dict.iteritems()}
items()/iteritems() gives you a list of tuples of the (keys, values) represented in the dictionary (e.g. [('somekey', 'SomeValue'), ('somekey2', 'SomeValue2')])
The comprehension iterates over each of these pairs, creating a new dictionary in the process. In the key: val.lower() section, you can do whatever manipulation you want to create the new dictionary.
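Since the x in the question is actually a list of strings rather than a dict, the list-comprehension equivalent plus a simple word count might look like this sketch (the document text and word list are stand-ins taken from the question):
# x as built in the question: one big string per file
x = ["The Horncastle boar's head is an early seventh-century Anglo-Saxon ornament ..."]

# Lowercase every document in the list
x_lower = [doc.lower() for doc in x]

# Count occurrences of each target word across all documents
# (naive whitespace tokenization; punctuation is not stripped)
targets = ['bus', 'car', 'train', 'aeroplane', 'tram']
counts = {word: sum(doc.split().count(word) for doc in x_lower) for word in targets}
print(counts)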
I'm working on a program that reads a .txt file of NBA team names, with five statistics for each team, into a dictionary. I was able to read the file into a dictionary with the correct key-value pairs, but I can't figure out how to return a table of minimum, maximum, and average values for each statistic across all teams. I've looked at other questions on the site, but I can't find anything pertaining to a key with multiple values per entry in the dictionary. Here's what the dictionary looks like for a few of the entries:
{'Golden State Warriors':[113.5, 107.5, 43.5, 0.503, 8.0], 'Houston Rockets':[112.4, 103.9, 43.5, 0.46, 8.5], ... : ...}
I need to make a table that displays the min, max, and average of each statistic across all entries:
PPG PPAG RPG FG SPG
MIN
MAX
AVG
I can do this kind of stuff with the statistics in the form of a list, but whenever I try to write lists into a dictionary, I get a TypeError. Would greatly appreciate suggestions, I've been stuck on this for hours.
Also, IMPORTANT NOTE: I cannot use lambda for this task, I am working on this for a project and my options are for loops, while loops, and the basic list functions and dictionary methods.
The following code should do the computation of the stats:
from statistics import mean

lists = list(dic.values())  # dic is your dictionary of team stats
output = {'MIN': [], 'MAX': [], 'AVG': []}

for i in range(0, len(lists[0])):
    output['MIN'].append(min(col[i] for col in lists))
    output['MAX'].append(max(col[i] for col in lists))
    output['AVG'].append(mean(col[i] for col in lists))
To print the results as a table, you may do the following:
print("\tPPG\tPPAG\tRPG\tFG\tSPG")
for stats in output:
print(stats, end='\t')
for i in range(0, cols):
print(round(output[stats][i], 3), end='\t')
print()
Hope it helps.
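With dic holding just the two sample teams from the question, this prints something like:
        PPG     PPAG    RPG     FG      SPG
MIN     112.4   103.9   43.5    0.46    8.0
MAX     113.5   107.5   43.5    0.503   8.5
AVG     112.95  105.7   43.5    0.481   8.25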
As long as the data for each team is in the same order in the dictionary, this will work well for your purpose. I started by forming a list of the dictionary values, which are themselves lists. Then you can zip the sublists together, which has the handy feature of grouping the same indices of the lists together. Now we can call max, min, and mean on each zipped stat! If you cannot import the statistics module, a list comprehension would also make an easy one-liner for the mean.
from statistics import mean

stats = {'Golden State Warriors': [113.5, 107.5, 43.5, 0.503, 8.0],
         'Houston Rockets': [112.4, 103.9, 43.5, 0.46, 8.5]}
values = [*zip(*stats.values())]

for value in values:
    print("max: {}\nmin: {}\nmean: {:0.3f}\n".format(max(value), min(value), mean(value)))
EDIT: note that if you try to zip lists that are not the same length, zip stops at the shortest list. This can be corrected by using itertools.zip_longest and providing a default value.
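A quick sketch of that fallback (the fillvalue of 0 is an arbitrary choice here and would skew min/mean if a team really had missing stats):
from itertools import zip_longest

a = [113.5, 107.5, 43.5]
b = [112.4, 103.9]  # one entry short

print(list(zip(a, b)))                       # stops at the shorter list
print(list(zip_longest(a, b, fillvalue=0)))  # pads the shorter list with 0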
I'm very new to Python (I usually write PHP). I want to understand how to store information in an associative array, and if you can explain the difference between "tuples", "arrays", "dictionaries", and "lists" that would be wonderful (I've tried reading different sources but I'm still not getting it).
So this is my code:
#!/usr/bin/python3.4
import csv
import string

nidless_keys = dict()
nidless_keys = ['test_string1', 'test_string2']  # these are the strings to
                                                 # be searched for in linesreader
data = {'type': [], 'id': []}  # here I want to store my information

with open('path/to/csv/file.csv', newline="") as csvfile:
    linesreader = csv.reader(csvfile, delimiter=',', quotechar="|")
    for row in linesreader:  # every line in this csv has a url like
                             # www.test.com/?test_string1&id=123456
        current_row_string = str(row)
        for needle in nidless_keys:
            current_needle = str(needle)
            if current_needle in current_row_string:
                data[current_needle[current_row_string[-8:]]) += 1  # I also need to
                # count, for every id, how many rows there are.
In conclusion, what I want to store is:
my_data_stored = [current_needle][current_row_string[-8:]]
where current_row_string[-8:] is the tail of a URL whose last digits are an ID.
So the structure should look like this at the end of the script:
test_string1 = 123456 = 20
             = 256468 = 15
test_string2 = 123155 = 10
Edit 1:
Which type do I need here to store the information?
Can you tell me how to fix this script?
It seems you want to count how many times an ID in combination with a test string occurs.
There can be multiple ID/count combinations associated with every test string.
This suggests that you should use a dictionary indexed by the test strings to store the results. In that dictionary I would suggest to store collections.Counter objects.
Normally you would have to add a special case that inserts an empty Counter whenever a key isn't yet in the results dictionary. This is such a common problem that the collections module provides a specialized form of dictionary for it, called defaultdict.
import collections
import csv

# Using a tuple for the keys so it cannot be accidentally modified
keys = ('test_string1', 'test_string2')
result = collections.defaultdict(collections.Counter)

with open('path/to/csv/file.csv', newline="") as csvfile:
    linesreader = csv.reader(csvfile, delimiter=',', quotechar="|")
    for row in linesreader:
        line = ','.join(row)  # re-join the fields so we can search the whole line
        for key in keys:
            if key in line:
                id = line[-6:]  # IDs are six digits in your example.
                # The first index is into the dict, the second into the Counter.
                result[key][id] += 1
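To display the counts in roughly the layout sketched in the question:
for key, counter in result.items():
    for id, count in counter.items():
        print(f"{key} = {id} = {count}")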
There is an even easier way, by using regular expressions.
Since you seem to treat every row in a CSV file as a string, there is little need to use the CSV reader, so I'll just read the whole file as text.
import re

with open('path/to/csv/file.csv') as datafile:
    text = datafile.read()

pattern = r'\?(.*)&id=(\d+)'
The pattern is a regular expression. This is a large topic in and of itself, so I'll only cover briefly what it does. (You might also want to check out the relevant HOWTO) At first glance it looks like complete gibberish, but it is actually a complete language.
It looks for two things in a line: anything between ? and &id=, and a sequence of digits after &id=.
I'll be using IPython to give an example.
(If you don't know it, check out IPython. It is great for trying things and see if they work.)
In [1]: import re
In [2]: pattern = r'\?(.*)&id=(\d+)'
In [3]: text = """www.test.com/?test_string1&id=123456
....: www.test.com/?test_string1&id=123456
....: www.test.com/?test_string1&id=234567
....: www.test.com/?foo&id=234567
....: www.test.com/?foo&id=123456
....: www.test.com/?foo&id=1234
....: www.test.com/?foo&id=1234
....: www.test.com/?foo&id=1234"""
The text variable points to the string which is a mock-up for the contents of your CSV file.
I am assuming that:
every URL is on its own line
ID's are a sequence of digits.
If these assumptions are wrong, this won't work.
Using findall to extract every match of the pattern from the text.
In [4]: re.findall(pattern, text)
Out[4]:
[('test_string1', '123456'),
('test_string1', '123456'),
('test_string1', '234567'),
('foo', '234567'),
('foo', '123456'),
('foo', '1234'),
('foo', '1234'),
('foo', '1234')]
The findall function returns a list of 2-tuples (that is key, ID pairs). Now we just need to count those.
In [5]: import collections
In [6]: result = collections.defaultdict(collections.Counter)
In [7]: intermediate = re.findall(pattern, text)
Now we fill the result dict from the list of matches that is the intermediate result.
In [8]: for key, id in intermediate:
....: result[key][id] += 1
....:
In [9]: print(result)
defaultdict(<class 'collections.Counter'>, {'foo': Counter({'1234': 3, '123456': 1, '234567': 1}), 'test_string1': Counter({'123456': 2, '234567': 1})})
So the complete code would be:
import collections
import re

with open('path/to/csv/file.csv') as datafile:
    text = datafile.read()

result = collections.defaultdict(collections.Counter)
pattern = r'\?(.*)&id=(\d+)'
intermediate = re.findall(pattern, text)

for key, id in intermediate:
    result[key][id] += 1
This approach has two advantages.
You don't have to know the keys in advance.
ID's are not limited to six digits.
A brief summary of the python data types you mentioned:
A dictionary is an associative array, aka hashtable.
A list is a sequence of values.
An array is essentially the same as a list, but limited to basic data types. My impression is that they only exist for performance reasons; I don't think I've ever used one. If performance is that critical to you, you probably don't want to use Python in the first place.
A tuple is a fixed-length sequence of values (whereas lists and arrays can grow).
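A quick sketch illustrating all four (the values are just examples):
point = (1.5906, 10.8677)          # tuple: fixed length, immutable
names = ['RXCJ0000.1', 'A2692']    # list: grows and shrinks, mixed types allowed
names.append('UGC00032')

from array import array
floats = array('d', [0.26, 0.08])  # array: basic datatypes only ('d' = double)

ids = {'A2692': 123456}            # dict: the associative array you asked about
print(ids['A2692'])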
Let's take them one by one.
Lists:
A list is a very basic data structure, similar to arrays in other languages in terms of the way we write them:
['a', 'b', 'c']
This is a list in Python, and it looks very similar to an array.
However, there is a big difference in how lists are used in Python compared to ordinary arrays.
Lists are heterogeneous in nature, which means we can store any kinds of data in one simultaneously:
ls = [1, 2, 'a', 'g', True]
As you can see, we have various kinds of data within a single, valid list.
One important thing about lists is that we can access the items using zero-based indices. So we can write:
print(ls[0], ls[3])
output: 1 g
Dictionary:
This data structure is similar to a hash map. It contains (key, value) pairs. An empty dictionary looks like:
dc = {}
Now, to store key-value pairs such as ('potato', 3) and ('tomato', 5), we can write:
dc['potato'] = 3
dc['tomato'] = 5
and the data is saved in the dictionary dc.
The important thing is that we can even store another data structure, like a list, as a value within a dictionary:
dc['list1'] = ls, where ls is the list defined above.
This shows the power of dictionaries.
In your case, you have defined a dictionary like this:
data = {'type': [], 'id': []}
This means that your dictionary consists of only two keys, each corresponding to a list, both empty for now.
Talking a bit about your script, the expression:
current_row_string[-8:]
doesn't make sense. The index should have been -6 instead of -8; that would give you the id part of the current row.
This part is the id and should have been stored in a variable, say:
id = current_row_string[-6:]
Further steps can be performed as shown in the answer given by Roland.
I am importing data from a file, which works correctly. I have appended the data from this file into three different lists: name, mark, and mark2. However, I don't understand how, or whether, I can make a new list called total_marks and append the calculation mark + mark2 to it. I've looked around for help on this but couldn't find much relating to it. The plan is to add the two lists together element by element and work out a percentage, where the total marks would be 150.
To add the two lists item by item:
combined = []
for m1, m2 in zip(mark, mark2):
    combined.append(m1 + m2)
The zip function returns a pair of items, one from each list, for each position in the lists:
https://docs.python.org/3/library/functions.html#zip
Then you can perform the final operation this way:
final = []
for m in combined:
    final.append(m / 150 * 100)
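Equivalently, once the loop versions make sense, both steps collapse to list comprehensions:
combined = [m1 + m2 for m1, m2 in zip(mark, mark2)]
final = [m / 150 * 100 for m in combined]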
As I said in my comment, I highly recommend that once you've gotten past the basics, you take the time to learn two libraries: pandas and xlwings. These will greatly help your ability to interact between Python and Excel. An operation like the one you have here becomes much simpler once you learn pandas dataframes.
Here is a better way, using pandas.
import pandas

df = pandas.read_csv('Classmarks.csv', index_col='student_name',
                     names=('student_name', 'mark1', 'mark2'), header=None)
df['combined'] = df['mark1'] + df['mark2']
df['final'] = df['combined'] / 150 * 100
print(df)
print(df)
You don't have to write any loops when using pandas. And you can then write the result back to a csv file:
df.to_csv('Classmarksout.csv')