I'm trying to understand if it's possible to set a loop inside of a Try/Except call, or if I'd need to restructure to use functions. Long story short, after spending a few hours learning Python and BeautifulSoup, I managed to frankenstein some code together to scrape a list of URLs, pull that data out to CSV (and now update it to a MySQL db). The code is now working as planned, except that I occasionally run into a 10054 (connection reset) error, either because my VPN hiccups or because the source host server occasionally bounces me (I have a 30 second delay in my loop, but it still kicks me out on occasion).
I get the general idea of Try/Except structure, but I'm not quite sure how I would (or if I could) loop inside it to try again. My base code to grab the URL, clean it and parse the table I need looks like this:
for url in contents:
    print('Processing record', (num+1), 'of', len(contents))
    if url:
        print('Retrieving data from ', url[0])
        html = requests.get(url[0]).text
        soup = BeautifulSoup(html, 'html.parser')
        for span in soup('span'):
            span.decompose()
        trs = soup.select('div#collapseOne tr')
        if trs:
            print('Processing')
            for t in trs:
                for header, value in zip(t.select('td')[0], t.select('td:nth-child(2)')):
                    if num == 0:
                        headers.append(' '.join(header.split()))
                    values.append(re.sub(' +', ' ', value.get_text(' ', strip=True)))
After that, it's just processing the data to CSV and running an update SQL statement.
What I'd like to do: if the HTML request call fails, wait 30 seconds, try the request again, then process; or, if the retry fails X number of times, go ahead and exit the script (assuming at that point I have a full connection failure).
Is it possible to do something like that in line, or would I need to make the request statement into a function and set up a loop to call it? Have to admit I'm not familiar with how Python works with function returns yet.
You can add an inner loop for the retries and put your try/except block in that. Here is a sketch of what it would look like. You could put all of this into a function and put that function call in its own try/except block to catch other errors that cause the loop to exit.
Looking at the requests exception hierarchy, Timeout covers multiple recoverable exceptions and is a good start for what you may want to catch. Other things like SSLError aren't going to get better just because you retry, so skip them. You can go through the list to see what is reasonable for you.
import itertools
import time

import requests

# requests exceptions at
# https://requests.readthedocs.io/en/master/_modules/requests/exceptions/

for url in contents:
    print('Processing record', (num+1), 'of', len(contents))
    if url:
        print('Retrieving data from ', url[0])
        retry_count = itertools.count()
        # loop for retries
        while True:
            try:
                # get with timeout and convert http errors to exceptions
                resp = requests.get(url[0], timeout=10)
                resp.raise_for_status()
            # the things you want to recover from
            except requests.Timeout as e:
                if next(retry_count) <= 5:
                    print("timeout, wait and retry:", e)
                    time.sleep(30)
                    continue
                else:
                    print("timeout, exiting")
                    raise  # reraise exception to exit
            except Exception as e:
                print("unrecoverable error", e)
                raise
            break
        html = resp.text
etc…
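If you'd rather keep the retry logic out of the main loop, the same idea can be factored into a small helper function that returns the response (or re-raises after too many failures). A minimal sketch, with a hypothetical name get_with_retries and arbitrary retry/wait values:

import time
import requests

def get_with_retries(url, retries=5, wait=30, timeout=10):
    # Hypothetical helper: return a successful response,
    # or re-raise the last timeout after `retries` failed attempts.
    for attempt in range(retries + 1):
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            return resp
        except requests.Timeout as e:
            if attempt < retries:
                print("timeout, wait and retry:", e)
                time.sleep(wait)
            else:
                print("timeout, exiting")
                raise

# in the main loop the call becomes something like:
# html = get_with_retries(url[0]).text

The main loop then only ever sees a good response or an exception, which you can catch once at the top level if you want the script to exit cleanly.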
I've put together a little example to illustrate this, and yes, you can put loops inside try/except blocks.
from sys import exit

def example_func():
    try:
        while True:
            num = input("> ")
            try:
                int(num)
                if num == "10":
                    print("Let's go!")
                else:
                    print("Not 10")
            except ValueError:
                exit(0)
    except:
        exit(0)

example_func()
example_func()
This is a fairly simple program that takes input and if it's 10, then it says "Let's go!", otherwise it tells you it's not 10 (if it's not a valid value, it just kicks you out).
Notice that inside the while loop I put a try/except block, taking into account the necessary indentation. You can take this program as a model and use it in your favor.
Related
In the following piece of code, I am trying to extract some data from the IMDb site. I am iterating over the titles (tt000001, tt000002, etc.) stored in the csv file, putting the iterated value into the address and requesting the page. I am using proxies to avoid getting a ConnectionError, so I put the code in a try/except block so that if any problem surfaces it can just change the proxy and the program can continue without getting interrupted.
for i in sheet2.iter_cols(min_row=2, max_row=diff+2, min_col=1, max_col=1):
    for j in i:
        try:
            print("getting address")
            req = requests.get("https://www.imdb.com/title/" + str(j.value), proxies=pro, headers=headers)
            soup = bs4.BeautifulSoup(req.text, 'html.parser')
            x = soup.find('div', class_="title_wrapper")
            list1.append(x.h1.getText())
            print(list1)
        except:
            print("Proxy {} not working, changing it".format(pro))
            pro = oneproxypls()
            headers = {'User-Agent': ua.random}
        else:
            print("Written in the {} successfully".format(j.value))
The problem with this is that whenever it encounters an error, it changes the proxy but skips that iteration, sometimes two or more if the next proxy doesn't work either. So my question is: is there any way so that after changing the proxy it doesn't skip that iteration? Thanks in advance!
This should work: wrap the request for each title in a retry loop, so that when a proxy fails you change it and try the same title again instead of moving on to the next one:

for i in sheet2.iter_cols(min_row=2, max_row=diff+2, min_col=1, max_col=1):
    for j in i:
        while True:  # keep retrying the same title until it succeeds
            try:
                print("getting address")
                req = requests.get("https://www.imdb.com/title/" + str(j.value), proxies=pro, headers=headers)
                soup = bs4.BeautifulSoup(req.text, 'html.parser')
                x = soup.find('div', class_="title_wrapper")
                list1.append(x.h1.getText())
                print(list1)
            except:
                print("Proxy {} not working, changing it".format(pro))
                pro = oneproxypls()
                headers = {'User-Agent': ua.random}
            else:
                print("Written in the {} successfully".format(j.value))
                break  # success, move on to the next title

Note that a plain continue (or pass) at the end of the except block would not help here: it would still fall through to the next j, which is exactly the skipping you are seeing.
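If a title can keep failing no matter which proxy you use, an unbounded retry loop will spin forever, so you may want to cap the attempts. A sketch with a hypothetical max_retries (the other names come from the question's code):

max_retries = 5  # hypothetical cap per title; tune as needed

for i in sheet2.iter_cols(min_row=2, max_row=diff + 2, min_col=1, max_col=1):
    for j in i:
        for attempt in range(max_retries):
            try:
                req = requests.get("https://www.imdb.com/title/" + str(j.value),
                                   proxies=pro, headers=headers)
                soup = bs4.BeautifulSoup(req.text, 'html.parser')
                x = soup.find('div', class_="title_wrapper")
                list1.append(x.h1.getText())
            except Exception:
                print("Proxy {} not working, changing it".format(pro))
                pro = oneproxypls()
                headers = {'User-Agent': ua.random}
            else:
                print("Written in the {} successfully".format(j.value))
                break  # success, move on to the next title
        else:
            print("Giving up on {} after {} attempts".format(j.value, max_retries))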
After spending a lot of hours looking for a solution on Stack Overflow, I did not find a good solution to set a timeout for a block of code. There are approximations for setting a timeout for a function. Nevertheless, I would like to know how to set a timeout without having a function. Let's take the following code as an example:
print("Doing different things")
for i in range(0,10)
# Doing some heavy stuff
print("Done. Continue with the following code")
So, how would you break the for loop if it has not finished after X seconds? Just continue with the code (maybe saving a bool variable to note that the timeout was reached), despite the fact that the for loop did not finish properly.
I don't think this can be implemented efficiently without using functions, but have a look at this code:
import datetime as dt

print("Doing different things")

# store the timeout and the start time
time_out_after = dt.timedelta(seconds=60)
start_time = dt.datetime.now()

for i in range(10):
    if dt.datetime.now() > start_time + time_out_after:
        break
    else:
        pass  # Doing some heavy stuff

print("Done. Continue with the following code")
The problem: the timeout is only checked at the beginning of each loop cycle, so it may take longer than the specified timeout period to break out of the loop, and in the worst case it may never interrupt the loop at all, because it cannot interrupt code that never finishes an iteration.
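One way to actually interrupt the block without wrapping it in a function is a signal-based alarm. This is only a minimal sketch and comes with assumptions: it works only on Unix, only in the main thread, and it cannot interrupt a call into C code until that call returns:

import signal

class LoopTimeout(Exception):
    pass

def _raise_timeout(signum, frame):
    raise LoopTimeout()

print("Doing different things")

timed_out = False
signal.signal(signal.SIGALRM, _raise_timeout)  # install the handler
signal.alarm(60)  # deliver SIGALRM after 60 seconds
try:
    for i in range(10):
        pass  # Doing some heavy stuff
except LoopTimeout:
    timed_out = True  # remember that we bailed out early
finally:
    signal.alarm(0)  # cancel the alarm if the loop finished in time

print("Done. Continue with the following code")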
Update:
As the OP replied that they want a more efficient way, here is a proper way to do it, but using functions.
import asyncio

async def test_func():
    print('doing thing here, it will take a long time')
    await asyncio.sleep(3600)  # this emulates a heavy task with an actual sleep of one hour
    return 'yay!'  # this will not execute because the timeout occurs earlier

async def main():
    # Wait for at most 1 second
    try:
        result = await asyncio.wait_for(test_func(), timeout=1.0)  # call your function with a specific timeout
        # do something with the result
    except asyncio.TimeoutError:
        # when the timeout happens, the program breaks out of the test function and executes the code here
        print('timeout!')
    print('lets continue to do other things')

asyncio.run(main())
Expected output:
doing thing here, it will take a long time
timeout!
lets continue to do other things
Note: the timeout will now happen after exactly the time you specify; in this example, after one second.
You would replace this line:
await asyncio.sleep(3600)
with your actual task code.
Try it and let me know what you think. Thank you.
read asyncio docs:
link
Update 24/2/2019:
As the OP noted, asyncio.run was introduced in Python 3.7 and asked for an alternative for Python 3.6.
asyncio.run alternative for Python older than 3.7:
Replace
asyncio.run(main())
with this code for older versions (I think 3.4 to 3.6):
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()
You may try the following way:
import time

start = time.time()
for val in range(10):
    # some heavy stuff
    time.sleep(.5)
    if time.time() - start > 3:  # 3 is timeout in seconds
        print('loop stopped at', val)
        break  # stop the loop, or sys.exit() to stop the script
else:
    print('successfully completed')
I guess it is a fairly viable approach. The actual timeout will be greater than 3 seconds and depends on the execution time of a single step.
Okay, so I have created a DataStream object which is just a wrapper class around asyncio.Queue. I am passing it around all over and everything works fine up until the following functions. I am calling ensure_future to run two infinite loops: one that replicates the data in one DataStream object, and one that sends data to a websocket. Here is that code:
def start(self):
    # make sure that we set the event loop before we run our async requests
    print("Starting WebsocketProducer on ", self.host, self.port)
    RUNTIME_LOGGER.info(
        "Starting WebsocketProducer on %s:%i", self.host, self.port)
    # Get the event loop and add a task to it.
    asyncio.set_event_loop(self.loop)
    asyncio.get_event_loop().create_task(self._mirror_stream(self.data_stream))
    asyncio.ensure_future(self._serve(self.ssl_context))
And here is the method that is failing with the error 'Task was destroyed but it is pending!'. Keep in mind that if I do not include the lines with 'data_stream.get()' the function runs fine. I made sure the objects in both locations have the same memory address AND value for id(). If I print the data that comes from await self.data_stream.get(), I get the correct data. However, after that it seems to just return and break. Here is the code:
async def _mirror_stream(self):
    while True:
        stream_length = self.data_stream.length
        try:
            if stream_length > 1:
                for _ in range(0, stream_length):
                    data = await self.data_stream.get()
            else:
                data = await self.data_stream.get()
        except Exception as e:
            print(str(e))
        # If the data is null, keep the last known value
        if self._is_json_serializable(data) and data is not None:
            self.payload = json.dumps(data)
        else:
            RUNTIME_LOGGER.warning(
                "Mirroring stream encountered a Null payload in WebsocketProducer!")
        await asyncio.sleep(self.poll_rate)
The issue has been resolved by implementing my own async queue as a wrapper around the normal queue.Queue object. For some reason the application would only work if I 'await'ed queue.get(), even though it wasn't an asyncio.Queue object... Not entirely sure why this behaviour occurred, but the application is running well and still performing as if the queue were from the asyncio lib. Thanks to those who looked!
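For illustration, here is a minimal sketch of what such a wrapper might look like; the class name, poll interval, and length property are made up, since the actual implementation isn't shown above:

import asyncio
import queue

class AsyncQueueWrapper:
    # Hypothetical awaitable wrapper around a thread-safe queue.Queue.
    def __init__(self):
        self._q = queue.Queue()

    @property
    def length(self):
        return self._q.qsize()

    def put(self, item):
        self._q.put(item)

    async def get(self):
        # Poll the underlying queue without blocking the event loop.
        while True:
            try:
                return self._q.get_nowait()
            except queue.Empty:
                await asyncio.sleep(0.01)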
I am attempting to make a few thousand DNS queries. I have written my script to use python-adns. I have attempted to add threading and queues to ensure the script runs optimally and efficiently.
However, I can only achieve mediocre results. The responses are choppy/intermittent. They start and stop, and most times pause for 10 to 20 seconds.
import queue
import threading

import adns

tlock = threading.Lock()  # printing to screen

def async_dns(i):
    s = adns.init()
    for i in names:
        tlock.acquire()
        q.put(s.synchronous(i, adns.rr.NS)[0])
        response = q.get()
        q.task_done()
        if response == 0:
            dot_net.append("Y")
            print(i + ", is Y")
        elif response == 300:
            dot_net.append("N")
            print(i + ", is N")
        tlock.release()

q = queue.Queue()
threads = []

for i in range(100):
    t = threading.Thread(target=async_dns, args=(i,))
    threads.append(t)
    t.start()

print(threads)
I have spent countless hours on this. I would appreciate some input from experienced Pythonistas. Is this a networking issue? Can this bottleneck / these intermittent responses be solved by switching servers?
Thanks.
Without answers to the questions I asked in the comments above, I'm not sure how well I can diagnose the issue you're seeing, but here are some thoughts:
It looks like each thread is processing all names instead of just a portion of them.
Your Queue seems to be doing nothing at all.
Your lock seems to guarantee that you actually only do one query at a time (defeating the purpose of having multiple threads).
Rather than trying to fix up this code, might I suggest using multiprocessing.pool.ThreadPool instead? Below is a full working example. (You could use adns instead of socket if you want... I just couldn't easily get it installed and so stuck with the built-in socket.)
In my testing, I also sometimes see pauses; my assumption is that I'm getting throttled somewhere.
import itertools
from multiprocessing.pool import ThreadPool
import socket
import string

def is_available(host):
    print('Testing {}'.format(host))
    try:
        socket.gethostbyname(host)
        return False
    except socket.gaierror:
        return True

# Test the first 1000 three-letter .com hosts
hosts = [''.join(tla) + '.com' for tla in itertools.permutations(string.ascii_lowercase, 3)][:1000]

with ThreadPool(100) as p:
    results = p.map(is_available, hosts)

for host, available in zip(hosts, results):
    print('{} is {}'.format(host, 'available' if available else 'not available'))
As far as I can tell, my code works absolutely fine- though it probably looks a bit rudimentary and crude to more experienced eyes.
Objective:
Create a 'filter' that loops through a (large) range of possible ID numbers. Each ID should be tried as a log-in at the url website. If the ID is valid, it should be saved to hit_list.
Issue:
In large loops, the programme 'hangs' for indefinite periods of time. Although I have no evidence (no exception is thrown) I suspect this is a timeout issue (or rather, would be if timeout was specified)
Question:
I want to add a timeout- and then handle the timeout exception so that my programme will stop hanging. If this theory is wrong, I would also like to hear what my issue might be.
How to add a timeout is a question that has been asked before: here and here, but after spending all weekend working on this, I'm still at a loss. Put bluntly, I don't understand those answers.
What I've tried:
Create a try & except block in the id_filter function. The try is at r=s.get(url) and the exception is at the end of the function. I've read the requests docs in detail, here and here. This didn't work.
The more I read about futures, the more I'm convinced that catching errors has to be done in futures rather than in requests (as I did above). So I tried inserting a timeout in the brackets after boss.map, but as far as I could tell this had no effect; it seems too simple anyway.
So, to reiterate:
For large loops (50,000+) my programme tends to hang for an indefinite period of time (there is no exact point when this starts, though it's usually after 90% of the loop has been processed). I don't know why, but I suspect adding a timeout would throw an exception, which I could then catch. This theory may, however, be wrong. I have tried to add a timeout and handle other errors in the requests part, but to no effect.
-Python 3.5
My code:
import concurrent.futures as cf
import time

import requests
from bs4 import BeautifulSoup

hit_list = []
processed_list = []
startrange = 100050000
end_range = 100150000
loop_size = range(startrange, end_range)
workers = 70
chunks = 300
url = 'https://ndber.seai.ie/pass/ber/search.aspx'

def id_filter(_range):
    with requests.session() as s:
        s.headers.update({
            'user-agent': 'FOR MORE INFORMATION ABOUT THIS DATA COLLECTION PLEASE CONTACT ########'
        })
        r = s.get(url)
        time.sleep(.1)
        soup = BeautifulSoup(r.content, 'html.parser')
        viewstate = soup.find('input', {'name': '__VIEWSTATE'}).get('value')
        viewstategen = soup.find('input', {'name': '__VIEWSTATEGENERATOR'}).get('value')
        validation = soup.find('input', {'name': '__EVENTVALIDATION'}).get('value')
        for ber in _range:
            data = {
                'ctl00$DefaultContent$BERSearch$dfSearch$txtBERNumber': ber,
                'ctl00$DefaultContent$BERSearch$dfSearch$Bottomsearch': 'Search',
                '__VIEWSTATE': viewstate,
                '__VIEWSTATEGENERATOR': viewstategen,
                '__EVENTVALIDATION': validation,
            }
            y = s.post(url, data=data)
            if 'No results found' in y.text:
                pass  # print('Invalid ID', ber)
            else:
                hit_list.append(ber)
                print('Valid ID', ber)

if __name__ == '__main__':
    with cf.ThreadPoolExecutor(max_workers=workers) as boss:
        jobs = [loop_size[x: x + chunks] for x in range(0, len(loop_size), chunks)]
        boss.map(id_filter, jobs)
        # record data below
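For what it's worth, a minimal sketch of how a per-request timeout (and a bounded retry, along the lines of the first answer above) could be wrapped around the post call inside id_filter; the helper name and the retry/timeout values are arbitrary assumptions:

import time
import requests

def post_with_timeout(session, url, data, retries=3, timeout=10, wait=5):
    # Hypothetical helper: post with a timeout, retrying a few times before giving up.
    for attempt in range(retries):
        try:
            resp = session.post(url, data=data, timeout=timeout)
            resp.raise_for_status()
            return resp
        except requests.Timeout:
            if attempt == retries - 1:
                raise  # let the caller see the failure instead of hanging
            time.sleep(wait)

# inside the for loop in id_filter, the call would become something like:
# y = post_with_timeout(s, url, data)

With a timeout set, a stalled connection raises requests.Timeout instead of hanging indefinitely, which can then be caught and logged per chunk.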