Question
I am currently pulling data from Yahoo Finance. With the try and except below, the function stops once an error is reached. How can I continue past the except so the function pulls the remaining data for the rest of the stocks in the index?
index = sp500

def yhooKeyStats():
    try:
        for eachStock in index:
            isUrl = 'http://finance.yahoo.com/q/is?s='+eachStock+'+Income+Statement&annual'
            bsUrl = 'http://finance.yahoo.com/q/bs?s='+eachStock+'+Balance+Sheet&annual'
            cfUrl = 'http://finance.yahoo.com/q/cf?s='+eachStock+'+Cash+Flow&annual'
            def bsYhooStats(url):
                req = urllib.request.Request(url)
                resp = urllib.request.urlopen(req)
                respData = resp.read()
                dRespData = respData.decode('utf-8')
                gw = dRespData.split('Goodwill</td><td align="right">')[1].split('  ')[0]
                if len(gw) < 14:
                    gw = gw
                else:
                    gw = '-'
                return gw
            print(eachStock, bsYhooStats(bsUrl))
    except IndexError:
        pass

yhooKeyStats()
Output
MMM 7,050,000
ABT 10,067,000
ABBV 5,862,000
ACN 2,395,894
ACE -
ACT 24,521,500
ADT 3,738,000
AES 1,458,000
AET 10,613,200
AFL -
Just put the try/except inside the loop. One of several possibilities:
for eachStock in index:
    ...
    try:
        def bsYhooStats(url):
            ...
            return gw if len(gw) < 14 else '-'
        print(eachStock, bsYhooStats(bsUrl))
    except IndexError:
        pass
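Fleshed out, that might look like the sketch below. It keeps the names and the urllib usage from the question, but defines the helper once, outside the loop, so each failing stock is skipped individually (the index list and the scraping string are assumed from the snippet above):

import urllib.request

def bsYhooStats(url):
    # fetch the balance-sheet page and scrape the Goodwill figure
    resp = urllib.request.urlopen(urllib.request.Request(url))
    dRespData = resp.read().decode('utf-8')
    gw = dRespData.split('Goodwill</td><td align="right">')[1].split('  ')[0]
    return gw if len(gw) < 14 else '-'

for eachStock in index:
    bsUrl = 'http://finance.yahoo.com/q/bs?s=' + eachStock + '+Balance+Sheet&annual'
    try:
        print(eachStock, bsYhooStats(bsUrl))
    except IndexError:
        # this stock's page has no Goodwill row; skip it and keep going
        pass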
Related
I have an issue converting a chunked list into multiple dictionaries in order to send my requests in batches:
fd = open(filename, 'r')
sqlFile = fd.read()
fd.close()
commands = sqlFile.split(';')

for command in commands:
    try:
        c = conn.cursor()
        c.execute(command)
        # create a list with the query results in batches of size 100
        for batch in grouper(c.fetchall(), 100):
            # This is where the error occurs:
            result = [dict(zip([key[0] for key in c.description], i)) for i in batch]
            # TODO: Send the json with 100 items to API
    except RuntimeError:
        print('Error.')
The issue is that it only iterates through the batches once and then fails with the error below. There are actually 167 rows, so the first request should send 100 items and a second request should send the remaining 67.
TypeError: zip argument #2 must support iteration
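For what it's worth, if grouper here is the standard itertools recipe, that error is expected: the recipe pads the last, incomplete group with a fillvalue (None by default), so the second batch holds 67 real rows plus 33 None entries, and dict(zip(..., None)) blows up. A small illustration, assuming the stock recipe:

from itertools import zip_longest

def grouper(iterable, n, fillvalue=None):
    # standard itertools recipe: the last group is padded with fillvalue
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

rows = list(range(167))
batches = list(grouper(rows, 100))
print(len(batches))    # 2
print(batches[1][70])  # None -- padding, not a real row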
I solved the issue by making a dictionary right away with c.rowfactory = makeDictFactory(c):
def makeDictFactory(cursor):
    columnNames = [d[0] for d in cursor.description]
    def createRow(*args):
        return dict(zip(columnNames, args))
    return createRow

def getAndConvertDataFromDatabase(filename):
    fd = open(filename, 'r')
    sqlFile = fd.read()
    fd.close()
    commands = sqlFile.split(';')
    for command in commands:
        try:
            c = conn.cursor()
            c.execute(command)
            c.rowfactory = makeDictFactory(c)
            data = c.fetchall()
            for batch in [data[x:x+100] for x in range(0, len(data), 100)]:
                return postBody(json.dumps(batch, default=myconverter), dataList[filename])
        except RuntimeError:
            print('Error.')
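Note that the return inside the loop exits after the first batch, so only 100 items ever reach the API. If every batch should be posted, a loop without the early return would do it (a sketch; postBody, myconverter and dataList are the helpers from the snippet above):

for batch in [data[x:x+100] for x in range(0, len(data), 100)]:
    # post each 100-row chunk instead of returning after the first one
    postBody(json.dumps(batch, default=myconverter), dataList[filename])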
I have a weird problem with my function.
When I call _read_page on its own, it works fine.
When I run the code below in a loop, it works for a few iterations and then breaks: _read_page stops returning a value, so nothing is assigned to y. Yet _read_page itself raises no error; the next step, pd.concat, then fails with an error about a NoneType object.
Importantly, when I call _read_page again with the same parameters, it returns a value and y is assigned.
EDIT:
It stops at the line fin = req.json().
Do you know what the problem is?
Could it be caused by memory or something like that?
Thank you, and sorry for my English...
...
x_df = pd.DataFrame()
first_pass = True
y = {"total_count": 0}
continuous_count = 0
while first_pass or continuous_count < y["total_count"]:
    first_pass = False
    y = _read_page(pods_url=pods_url,
                   auth=auth,
                   data=data)
    x_df = pd.concat([x_df,
                      y["data"]["Data"]], ignore_index=False)
    scroll_id = y["data"]["ScrollId"][0]
    data["ScrolledFilterRequest"].update({"ScrollId": scroll_id})
    continuous_count = continuous_count + page_size
...
def _read_page(pods_url, auth, data, dataframe=True):
    try:
        url = pods_url + "api/v1/Catalog/NestScrollApi"
        req = requests.post(url=url, auth=auth, verify=False, json=data)
        if req.status_code != 200:
            df = pd.DataFrame()
            fin = {"TotalCount": 0}
        else:
            fin = req.json()
            df = pd.DataFrame(fin)
        return {"data": df,
                "total_count": fin["TotalCount"],
                "response": req.status_code}
    except Exception as e:
        print(e)
It turned out to be a MemoryError. The failure is in the line fin = req.json(): inside json, self.content.decode(encoding), **kwargs raises MemoryError.
The solution was to switch from 32-bit Python to 64-bit Python; after that it works without problems, without any change to the code.
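It is worth spelling out why the symptom looked so strange: the bare except Exception caught the MemoryError, printed it, and let _read_page fall off the end, implicitly returning None, which is the NoneType that pd.concat complained about. A narrower sketch of the same function (same names as above) lets real failures propagate instead of masking them:

def _read_page(pods_url, auth, data):
    url = pods_url + "api/v1/Catalog/NestScrollApi"
    req = requests.post(url=url, auth=auth, verify=False, json=data)
    if req.status_code != 200:
        # bad response: return an empty result rather than None
        return {"data": pd.DataFrame(), "total_count": 0, "response": req.status_code}
    fin = req.json()  # a MemoryError raised here now propagates to the caller
    return {"data": pd.DataFrame(fin),
            "total_count": fin["TotalCount"],
            "response": req.status_code}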
The Python program below asks the user for two Reddit usernames and compares their scores.
import json
from urllib import request

def obtainKarma(users_data):
    users_info = []
    for user_data in users_data:
        data = json.load(user_data)
        posts = data["data"]["children"]
        num_posts = len(posts)
        scores = []
        comments = []
        for post_id in range(num_posts):
            score = posts[post_id]["data"]["score"]
            comment = posts[post_id]["num_comments"]
            scores.append(score)
            comments.append(comment)
        users_info.append((scores,comments))
    user_id = 0
    for user_info in users_info:
        user_id+=1
        print("User"+str(user_id))
        for user_attr in user_info:
            print(user_attr)

def getUserInfo():
    count = 2
    users_data = []
    while count:
        count = count + 1
        username = input("Please enter username:\n")
        url = "https://reddit.com/user/"+username+".json"
        try:
            user_data = request.urlopen(url)
        except:
            print("No such user.\nRetry Please.\n")
            count = count + 1
            raise
        users_data.append(user_data)
        obtainKarma(users_data)

if __name__ == '__main__':
    getUserInfo()
However, when I run the program and enter a username, I get an error:
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 429: Too Many Requests
I looked for similar issues, but none of them solved this specific one. Going by the error, you might think the URL returns more data than some limit allows, but that seems absurd, since it is not that much data.
Thanks.
The problem seems to be resolved when you supply a User-Agent with your request.
import json
from urllib import request

def obtainKarma(users_data):
    users_info = []
    for user_data in users_data:
        data = json.loads(user_data) # I've changed 'json.load' to 'json.loads' because you want to parse a string, not a file
        posts = data["data"]["children"]
        num_posts = len(posts)
        scores = []
        comments = []
        for post_id in range(num_posts):
            score = posts[post_id]["data"]["score"]
            comment = posts[post_id]["data"]["num_comments"] # I think you forgot '["data"]' here, so I added it
            scores.append(score)
            comments.append(comment)
        users_info.append((scores,comments))
    user_id = 0
    for user_info in users_info:
        user_id+=1
        print("User"+str(user_id))
        for user_attr in user_info:
            print(user_attr)

def getUserInfo():
    count = 2
    users_data = []
    while count:
        count = count + 1
        username = input("Please enter username:\n")
        url = "https://reddit.com/user/"+username+".json"
        user_data = None
        try:
            req = request.Request(url)
            req.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)')
            resp = request.urlopen(req)
            user_data = resp.read().decode("utf-8")
        except Exception as e:
            print(e)
            print("No such user.\nRetry Please.\n")
            count = count + 1
            raise # why raise? --> Program will end if user is not found
        if user_data:
            print(user_data)
            users_data.append(user_data)
            obtainKarma(users_data)

if __name__ == '__main__':
    getUserInfo()
There were still other issues with your code:

You should not write json.load(user_data), because you are parsing a string, not a file, so I changed it to json.loads(user_data). The Python documentation for json.loads states:

Deserialize s (a str instance containing a JSON document) to a Python object using this conversion table.

In comment = posts[post_id]["num_comments"], I think you forgot to index on 'data', so I changed it to comment = posts[post_id]["data"]["num_comments"].

And why are you re-raising the exception in the except block? That ends the program, even though the next line suggests you expect it to keep going:

print("No such user.\nRetry Please.\n")
I am trying to create a calorie counter. The standard input goes like this:
python3 calories.txt < test.txt
Inside calories.txt, each food is in the following format: apples 500
The problem I am having is that the list never seems to reset to empty when I calculate the values for each person...
import sys

food = {}
eaten = {}
finished = {}
total = 0

#mappings
def calories(x):
    with open(x,"r") as file:
        for line in file:
            lines = line.strip().split()
            key = " ".join(lines[0:-1])
            value = lines[-1]
            food[key] = value

def calculate(x):
    a = []
    for keys,values in x.items():
        for c in values:
            try:
                a.append(int(food[c]))
            except:
                a.append(100)
        print("before",a)
        a = []
        total = sum(a) # Problem here
        print("after",a)
        print(total)

def main():
    calories(sys.argv[1])
    for line in sys.stdin:
        lines = line.strip().split(',')
        for c in lines:
            values = lines[0]
            keys = lines[1:]
            eaten[values] = keys
    calculate(eaten)

if __name__ == '__main__':
    main()
Edit - forgot to include what test.txt would look like:
joe,almonds,almonds,blue cheese,cabbage,mayonnaise,cherry pie,cola
mary,apple pie,avocado,broccoli,butter,danish pastry,lettuce,apple
sandy,zuchini,yogurt,veal,tuna,taco,pumpkin pie,macadamia nuts,brazil nuts
trudy,waffles,waffles,waffles,chicken noodle soup,chocolate chip cookie
How to make it easier on yourself:

When reading the calorie data, convert the calories to int() as soon as possible; that way there is no need to do it every time you want to sum something up.
Dictionaries have a .get(key, default) accessor, so "if food not found, use 100 as default" is a one-liner without try: ... except:.

This works for me, not using sys.stdin but supplying the second file as a file as well, instead of piping it into the program with <. I modified some of the parsing to strip whitespace, and calculate now returns a [(name, cal), ...] list of tuples. May it help you fix it to your liking:
import sys

food = {}
eaten = {}

def calories(x):
    with open(x,"r") as file:
        for line in file:
            lines = line.strip().split()
            key = " ".join(lines[0:-1])
            value = lines[-1].strip() # ensure no whitespace sneaks in
            food[key] = int(value)

def getCal(foodlist, defValueUnknown = 100):
    """Get sum / total calories of a list of ingredients; unknown ones cost 100."""
    return sum(food.get(x, defValueUnknown) for x in foodlist) # if unknown, assume 100

def calculate(x):
    a = []
    for name,foods in x.items():
        a.append((name, getCal(foods))) # append as (name, calories) tuple for each person
    return a

def main():
    calories(sys.argv[1])
    with open(sys.argv[2]) as f: # parse as file, not piped in via sys.stdin
        for line in f:
            lines = line.strip().split(',')
            values = lines[0].strip()
            keys = [x.strip() for x in lines[1:]] # ensure no whitespace sneaks in
            eaten[values] = keys
    calced = calculate(eaten) # calculate after all lines are read into the dict
    print(calced)

if __name__ == '__main__':
    main()
Output:
[('joe', 1400), ('mary', 1400), ('sandy', 1600), ('trudy', 1000)]
Using sys.stdin and piping just led to my console blinking and waiting for manual input; maybe that's VS related...
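For completeness, with the piping removed the script takes both filenames as arguments, so the invocation would be something like this (assuming the script is saved as, say, calories.py):

python3 calories.py calories.txt test.txt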
I'm new to Python and programming in general and need a little help with this (partially finished) function. It reads a text file containing rows of comma-delimited data (age, salary, education and so on). However, I've run into a problem from the outset: I don't know how to return the results.
My aim is to create dictionaries for each category, with each row sorted and tallied, e.g. 100 people over 50, 200 people under 50 and so on.
Am I in the correct ballpark?
file = "adultdata.txt"
def make_data(file):
try:
f = open(file, "r")
except IOError as e:
print(e)
return none
large_list = []
avg_age = 0
row_count_under50 = 0
row_count_over50 = 0
#create 2 dictionaries per category
employ_dict_under50 = {}
employ_dict_over50 = {}
for row in f:
edited_row = row.strip()
my_list = edited_row.split(",")
try:
#Age Category
my_list[0] = int(my_list[0])
#Work Category
if my_list[-1] == " <=50K":
if my_list[1] in employ_dict_under50:
employ_dict_under50[my_list[1]] += 1
else:
employ_dict_under50[my_list[1]] = 1
row_count_u50 += 1
else:
if my_list[1] in emp_dict_o50:
employ_dict_over50[my_list[1]] += 1
else:
employ_dict_over50[my_list[1]] = 1
row_count_o50 += 1
# Other categories here
print(my_list)
#print(large_list)
#return
# Ignored categories here - e.g. my_list[insert my list numbers here] = None
I do not have access to your file, but I had a go at correcting most of the errors in your code. These are the mistakes I found:

Your function make_data is essentially useless and puts everything out of scope; you can remove it entirely.
When reading from a file object f, read it line by line, for example with readline() or by iterating over the file.
It is also best to use a with statement when using IO resources like files.
Several variables in the inner loop were badly named and never defined (row_count_u50, emp_dict_o50).
You declared a try in the inner loop without an except clause. You can remove the try, because you are not trying to catch any error there.

These are quite basic errors related to general programming, so can I assume you're new to this? If that's the case, you should probably follow some more beginner tutorials online until you get a grasp of the commands needed for basic tasks.
Try comparing your code to this and see if you can understand what I'm trying to say:
file = "adultdata.txt"
large_list = []
avg_age = 0
row_count_under50 = 0
row_count_over50 = 0
#create 2 dictionaries per category
employ_dict_under50 = {}
employ_dict_over50 = {}
with open(file, "r") as f:
row = f.readline()
edited_row = row.strip()
my_list = edited_row.split(",")
#Age Category
my_list[0] = int(my_list[0])
#Work Category
if my_list[-1] == " <=50K":
if my_list[1] in employ_dict_under50:
employ_dict_under50[my_list[1]] += 1
else:
employ_dict_under50[my_list[1]] = 1
row_count_under50 += 1
else:
if my_list[1] in employ_dict_over50:
employ_dict_over50[my_list[1]] += 1
else:
employ_dict_over50[my_list[1]] = 1
row_count_over50 += 1
# Other categories here
print(my_list)
#print(large_list)
#return
I cannot say for certain if this code will work or not without your file but it should give you a head start.
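As a final side note, collections.Counter would shrink the if/else tallying in both versions. A sketch of the same counting under that approach (assuming the file variable from above):

from collections import Counter

employ_under50 = Counter()
employ_over50 = Counter()

with open(file, "r") as f:
    for row in f:
        my_list = [part.strip() for part in row.strip().split(",")]
        if my_list[-1] == "<=50K":          # parts are stripped, so no leading space
            employ_under50[my_list[1]] += 1 # Counter defaults missing keys to 0
        else:
            employ_over50[my_list[1]] += 1

row_count_under50 = sum(employ_under50.values())
row_count_over50 = sum(employ_over50.values())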