python asyncio RuntimeWarning: coroutine was never awaited - python-3.x

I am trying to send many requests to a url (~50) concurrently.
from asyncio import Queue

import yaml
import asyncio
from aiohttp import ClientSession, TCPConnector


async def http_get(url, cookie):
    cookie = cookie.split('; ')
    cookie1 = cookie[0].split('=')
    cookie2 = cookie[1].split('=')

    cookies = {
        cookie1[0]: cookie1[1],
        cookie2[0]: cookie2[1]
    }

    async with ClientSession(cookies=cookies) as session:
        async with session.get(url, ssl=False) as response:
            return await response.json()
class FetchUtil:
    def __init__(self):
        self.config = yaml.safe_load(open('../config.yaml'))

    def fetch(self):
        asyncio.run(self.extract_objects())

    async def http_get_objects(self, object_type, limit, offset):
        path = '/path' + \
               '?query=&filter=%s&limit=%s&offset=%s' % (
                   object_type,
                   limit,
                   offset)
        return await self.http_get_domain(path)

    async def http_get_objects_limit(self, object_type, offset):
        result = await self.http_get_objects(
            object_type,
            self.config['object_limit'],
            offset
        )
        return result['result']

    async def http_get_domain(self, path):
        return await http_get(
            f'https://{self.config["domain"]}{path}',
            self.config['cookie']
        )

    async def call_objects(self, object_type, offset):
        result = await self.http_get_objects_limit(
            object_type,
            offset
        )
        return result

    async def extract_objects(self):
        calls = []
        object_count = (await self.http_get_objects(
            'PV', '1', '0'))['result']['count']
        for i in range(0, object_count, self.config['object_limit']):
            calls.append(self.call_objects('PV', str(i)))

        queue = Queue()
        for i in range(0, len(calls), self.config['call_limit']):
            results = await asyncio.gather(*calls[i:self.config['call_limit']])
            await queue.put(results)
After running this code using fetch as the entry point I get the following error message:
/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/events.py:88: RuntimeWarning: coroutine 'FetchUtil.call_objects' was never awaited
self._context.run(self._callback, *self._args)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
The program stops executing after asyncio.gather returns for the first time. I am having trouble understanding this message since I thought I had diligently made sure all functions were async tasks. The only function I didn't await was call_objects, since I wanted it to run concurrently.
This article (https://xinhuang.github.io/posts/2017-07-31-common-mistakes-using-python3-asyncio.html#org630d301) gives the following explanation:
This runtime warning can happen in many scenarios, but the cause are
same: A coroutine object is created by the invocation of an async
function, but is never inserted into an EventLoop.
I believed that was what I was doing when I called the async tasks with asyncio.gather.
I should note that when I put a print('url') inside http_get, it outputs the first 50 urls like I want; the problem seems to occur when asyncio.gather returns for the first time.

The posted code has a logic error: [i:self.config['call_limit']] should be [i:i + self.config['call_limit']].
It causes the error because the expression evaluates to an empty slice in iterations after the first one, causing some of the calls coroutines to never get passed to gather, and therefore never get awaited.
I don't actually understand why it didn't just keep executing the same requests many times instead of stopping with an error.
Because your i would get incremented, making it larger than call_limit in every loop iteration except the first. For example, assuming call_limit is 10, i will be 0 in the first iteration, and you'll await calls[0:10], so far so good. But in the next iteration, i will be 10, and you'll be awaiting calls[10:10], an empty slice. In the iteration after that you'll await calls[20:10] (also an empty slice, although logically it should be an error), then calls[30:10], again empty, and so on. Only the first iteration picks up actual members of the list.
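For reference, a minimal sketch of the corrected tail of extract_objects, keeping the names from the question's code:

        # inside extract_objects: batch the coroutines call_limit at a time
        queue = Queue()
        call_limit = self.config['call_limit']
        for i in range(0, len(calls), call_limit):
            # slice from i to i + call_limit so every batch advances,
            # and every coroutine in `calls` is eventually awaited by gather
            results = await asyncio.gather(*calls[i:i + call_limit])
            await queue.put(results)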

Related

asyncio.wait not returning on first exception

I have an AMQP publisher class with the following methods. on_response is the callback that is called when a consumer sends back a message to the RPC queue I set up, i.e. the self.callback_queue.name you see in the reply_to of the Message. publish publishes out to a direct exchange with a routing key that has multiple consumers (very similar to a fanout), and multiple responses come back. I create a number of futures equal to the number of responses I expect, and asyncio.wait for those futures to complete. As I get responses back on the queue and consume them, I set the results on the futures.
async def on_response(self, message: IncomingMessage):
    if message.correlation_id is None:
        logger.error(f"Bad message {message!r}")
        await message.ack()
        return

    body = message.body.decode('UTF-8')
    future = self.futures[message.correlation_id].pop()

    if hasattr(body, 'error'):
        future.set_execption(body)
    else:
        future.set_result(body)
    await message.ack()
async def publish(self, routing_key, expected_response_count, msg, timeout=None, return_partial=False):
    if not self.connected:
        logger.info("Publisher not connected. Waiting to connect first.")
        await self.connect()

    correlation_id = str(uuid.uuid4())
    futures = [self.loop.create_future() for _ in range(expected_response_count)]
    self.futures[correlation_id] = futures

    await self.exchange.publish(
        Message(
            str(msg).encode(),
            content_type="text/plain",
            correlation_id=correlation_id,
            reply_to=self.callback_queue.name,
        ),
        routing_key=routing_key,
    )

    done, pending = await asyncio.wait(futures, timeout=timeout, return_when=asyncio.FIRST_EXCEPTION)

    if not return_partial and pending:
        raise asyncio.TimeoutError(f'Failed to return all results for publish to {routing_key}')

    for f in pending:
        f.cancel()

    del self.futures[correlation_id]

    results = []
    for future in done:
        try:
            results.append(json.loads(future.result()))
        except json.decoder.JSONDecodeError as e:
            logger.error(f'Client did not return JSON!! {e!r}')
            logger.info(future.result())
    return results
My goal is to either wait until all futures are finished, or a timeout occurs. This is all working nicely at the moment. What doesn't work is that when I added return_when=asyncio.FIRST_EXCEPTION, asyncio.wait does not finish after the first call of future.set_exception(...) as I thought it would.
What do I need to do with the future so that when I get a response back and see that an error occurred on the consumer side (before the timeout, or even before other responses), the await asyncio.wait stops blocking? I was looking at the documentation and it says:
The function will return when any future finishes by raising an exception
when return_when=asyncio.FIRST_EXCEPTION. My first thought is that I'm not raising an exception in my future correctly, but I'm having trouble finding out exactly how I should do that. From the API documentation for the Future class, it looks like I'm doing the right thing.
When I created a minimum viable example, I realized I was actually doing things mostly right after all, and I had overlooked other errors causing this not to work. Here is my minimal example:
The most important change I had to do was to actually pass an Exception object (a subclass of BaseException) to the set_exception method.
import asyncio


async def set_after(future, t, body, raise_exception):
    await asyncio.sleep(t)
    if raise_exception:
        future.set_exception(Exception("problem"))
    else:
        future.set_result(body)
    print(body)


async def main():
    loop = asyncio.get_event_loop()
    futures = [loop.create_future() for _ in range(2)]

    asyncio.create_task(set_after(futures[0], 3, 'hello', raise_exception=True))
    asyncio.create_task(set_after(futures[1], 7, 'world', raise_exception=False))

    print(futures)

    done, pending = await asyncio.wait(futures, timeout=10, return_when=asyncio.FIRST_EXCEPTION)

    print(done)
    print(pending)

asyncio.run(main())
In this line of code, if hasattr(body, 'error'):, body was a string. I thought it was JSON at that point already. I should have been using 'error' in body as my condition in any case. Whoops!
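For illustration, a sketch of how the callback's error branch might look after those two fixes (names follow the question's code; the error payload format is an assumption):

async def on_response(self, message: IncomingMessage):
    if message.correlation_id is None:
        logger.error(f"Bad message {message!r}")
        await message.ack()
        return

    body = message.body.decode('UTF-8')
    future = self.futures[message.correlation_id].pop()

    # body is still a plain string here, so test for the substring
    # and wrap it in an Exception so FIRST_EXCEPTION can trigger.
    if 'error' in body:
        future.set_exception(Exception(body))
    else:
        future.set_result(body)
    await message.ack()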

How to manage sessions with aiohttp?

I'm using aiohttp with asyncio to make a batch of requests. My first approach was to create a session inside the fetch() function (which starts an asyncio.gather job), and then pass the session object around to the functions that perform the post requests (get_info):
def batch_starter(item_list):
    return_value = loop.run_until_complete(fetch(item_list))
    return return_value


async def fetch(item_list):
    async with aiohttp.ClientSession() as session:  # <- session started here
        results = await asyncio.gather(*[asyncio.ensure_future(get_info(session, item)) for item in item_list])


async def get_info(session, item):  # <- session passed to the function
    async with session.post("some_url", data={"id": item}) as resp:
        html = await resp.json()
        some_info = html.get('info')
        return some_info
but thanks to my confusion, I am now leaning towards instantiating the session right away once the script is imported, like below, at the top of the file:
import asyncio
import aiohttp
import json

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
session = aiohttp.ClientSession()  # <- session started at top of file


def batch_starter(item_list):
    return_value = loop.run_until_complete(fetch(item_list))
    return return_value


async def fetch(item_list):
    results = await asyncio.gather(*[asyncio.ensure_future(get_info(item)) for item in item_list])


async def get_info(item):
    async with session.post("some_url", data={"id": item}) as resp:  # <- session from outer scope is used
        html = await resp.json()
        some_info = html.get('info')
        return some_info
The docs explain that opening a session with every request is a "very bad" idea (obviously). But this is stated right after an example which apparently does exactly that (my first approach)? Which one of these is correct, and how is the session going to behave when it is used like in the second approach, at the top of the file? Wouldn't the session just stay open forever if I'm using the second approach?
The batch_starter() function is not going to be called a lot, but each call has 9000+ items in the item_list. I assumed the first approach was already reducing the number of sessions to 1 (per gather job), but apparently this is the "bad idea" example and needs to be corrected? The docs are a bit unclear about this...

Handling ensure_future and its missing tasks

I have a streaming application that almost continuously takes the data given as input, sends an HTTP request using that value, and does something with the returned value.
Obviously, to speed things up, I've used the asyncio and aiohttp libraries in Python 3.7 to get the best performance, but it becomes hard to debug given how fast the data moves.
This is what my code looks like:
'''
Gets the final requests
'''
async def apiRequest(info, url, session, reqType, post_data=''):
    if reqType:
        async with session.post(url, data=post_data) as response:
            info['response'] = await response.text()
    else:
        async with session.get(url + post_data) as response:
            info['response'] = await response.text()
    logger.debug(info)
    return info

'''
Loops through the batches and sends it for request
'''
async def main(data, listOfData):
    tasks = []
    async with ClientSession() as session:
        for reqData in listOfData:
            try:
                task = asyncio.ensure_future(apiRequest(**reqData))
                tasks.append(task)
            except Exception as e:
                print(e)
                exc_type, exc_obj, exc_tb = sys.exc_info()
                fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
                print(exc_type, fname, exc_tb.tb_lineno)
        responses = await asyncio.gather(*tasks)
        return responses  # list of APIResponses

'''
Streams data in and prepares batches to send for requests
'''
async def Kconsumer(data, loop, batchsize=100):
    consumer = AIOKafkaConsumer(**KafkaConfigs)
    await consumer.start()
    dataPoints = []
    async for msg in consumer:
        try:
            sys.stdout.flush()
            consumedMsg = loads(msg.value.decode('utf-8'))
            if consumedMsg['tid']:
                dataPoints.append(loads(msg.value.decode('utf-8')))
            if len(dataPoints) == batchsize or time.time() - startTime > 5:
                '''
                #1: The task below goes and sends HTTP GET requests in bulk using aiohttp
                '''
                task = asyncio.ensure_future(getRequests(data, dataPoints))
                res = await asyncio.gather(*[task])
                if task.done():
                    outputs = []
                    '''
                    #2: Does some ETL on the returned values
                    '''
                    ids = await asyncio.gather(*[doSomething(**{'tid': x['tid'],
                                                                'cid': x['cid'], 'tn': x['tn'],
                                                                'id': x['id'], 'ix': x['ix'],
                                                                'ac': x['ac'], 'output': to_dict(xmltodict.parse(x['response'], encoding='utf-8')),
                                                                'loop': loop, 'option': 1}) for x in res[0]])
                    simplySaveDataIntoDataBase(id)  # This is where I see some missing data in the database
                dataPoints = []
        except Exception as e:
            logger.error(e)
            logger.error(traceback.format_exc())
            exc_type, exc_obj, exc_tb = sys.exc_info()
            fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
            logger.error(str(exc_type) + ' ' + str(fname) + ' ' + str(exc_tb.tb_lineno))


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    asyncio.ensure_future(Kconsumer(data, loop, batchsize=100))
    loop.run_forever()
Does the ensure_future need to be awaited?
How does aiohttp handle requests that come a little later than the others? Shouldn't it hold the whole batch back instead of forgetting about it altogether?
Does the ensure_future need to be awaited?
Yes, and your code is doing that already. await asyncio.gather(*tasks) awaits the provided tasks and returns their results in the same order.
Note that await asyncio.gather(*[task]) doesn't make sense, because it is equivalent to await asyncio.gather(task), which is again equivalent to await task. In other words, when you need the result of getRequests(data, dataPoints), you can write res = await getRequests(data, dataPoints) without the ceremony of first calling ensure_future() and then calling gather().
In fact, you almost never need to call ensure_future yourself:
if you need to await multiple tasks, you can pass coroutine objects directly to gather, e.g. gather(coroutine1(), coroutine2()).
if you need to spawn a background task, you can call asyncio.create_task(coroutine(...))
How does aiohttp handle requests that come a little later than the others? Shouldn't it hold the whole batch back instead of forgetting about it altogether?
If you use gather, all requests must finish before any of them return. (That is not aiohttp policy, it's how gather works.) If you need to implement a timeout, you can use asyncio.wait_for or similar.
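To illustrate the points above, a minimal sketch of awaiting a coroutine directly, spawning it as a background task, and bounding it with a timeout (fetch_batch is a hypothetical stand-in for getRequests):

import asyncio


# Hypothetical coroutine standing in for getRequests(data, dataPoints).
async def fetch_batch(batch):
    await asyncio.sleep(0.1)  # pretend to do the HTTP work
    return [f"result for {item}" for item in batch]


async def consume(batch):
    # Awaiting the coroutine directly - no ensure_future/gather ceremony needed.
    res = await fetch_batch(batch)

    # Spawning it as a background task instead, so the caller can keep going.
    task = asyncio.create_task(fetch_batch(batch))
    # ... do other work here ...
    res_background = await task

    # Bounding the whole batch with a timeout, since gather itself waits for
    # every request to finish before returning anything.
    res_timed = await asyncio.wait_for(fetch_batch(batch), timeout=5)
    return res, res_background, res_timed


print(asyncio.run(consume(["a", "b", "c"])))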

How to gather task results in Trio?

I wrote a script that uses a nursery and the asks module to loop through and call an API based upon the loop variables. I get responses but don't know how to return the data like you would with asyncio.
I also have a question about limiting the API calls to 5 per second.
from datetime import datetime
import asks
import time
import trio

asks.init("trio")
s = asks.Session(connections=4)


async def main():
    start_time = time.time()
    api_key = 'API-KEY'
    org_id = 'ORG-ID'
    networkIds = ['id1', 'id2', 'idn']
    url = 'https://api.meraki.com/api/v0/networks/{0}/airMarshal?timespan=3600'
    headers = {'X-Cisco-Meraki-API-Key': api_key, 'Content-Type': 'application/json'}
    async with trio.open_nursery() as nursery:
        for i in networkIds:
            nursery.start_soon(fetch, url.format(i), headers)
    print("Total time:", time.time() - start_time)


async def fetch(url, headers):
    print("Start: ", url)
    response = await s.get(url, headers=headers)
    print("Finished: ", url, len(response.content), response.status_code)


if __name__ == "__main__":
    trio.run(main)
When I run nursery.start_soon(fetch...), I am printing data within fetch, but how do I return the data? I didn't see anything similar to the asyncio.gather(*tasks) function.
Also, I can limit the number of sessions to 1-4, which helps get below the 5 API calls per second limit, but I was wondering if there was a built-in way to ensure that no more than 5 API calls get made in any given second?
Returning data: pass the networkID and a dict to the fetch tasks:
async def main():
    …
    results = {}
    async with trio.open_nursery() as nursery:
        for i in networkIds:
            nursery.start_soon(fetch, url.format(i), headers, results, i)
    ## results are available here


async def fetch(url, headers, results, i):
    print("Start: ", url)
    response = await s.get(url, headers=headers)
    print("Finished: ", url, len(response.content), response.status_code)
    results[i] = response
Alternatively, create a trio.Queue to which you put the results; your main task can then read the results from the queue.
API limit: create a trio.Queue(10) and start a task along these lines:
async def limiter(queue):
    while True:
        await trio.sleep(0.2)
        await queue.put(None)
Pass that queue to fetch, as another argument, and call await limit_queue.get() before each API call.
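A sketch of how the limiter might be wired in, reusing limiter, s, networkIds, url and headers from the snippets above and the old trio.Queue API this answer relies on (deprecated in trio 0.9, see the note below):

async def main():
    results = {}
    limit_queue = trio.Queue(10)
    async with trio.open_nursery() as nursery:
        # background task that drips one token into the queue every 0.2 s
        nursery.start_soon(limiter, limit_queue)
        async with trio.open_nursery() as fetch_nursery:
            for i in networkIds:
                fetch_nursery.start_soon(fetch, url.format(i), headers, results, i, limit_queue)
        # all fetches are done; cancel the outer scope so the limiter stops too
        nursery.cancel_scope.cancel()
    return results


async def fetch(url, headers, results, i, limit_queue):
    await limit_queue.get()  # wait for a token: at most ~5 calls per second
    response = await s.get(url, headers=headers)
    results[i] = response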
Based on this answer, you can define the following function:
async def gather(*tasks):

    async def collect(index, task, results):
        task_func, *task_args = task
        results[index] = await task_func(*task_args)

    results = {}
    async with trio.open_nursery() as nursery:
        for index, task in enumerate(tasks):
            nursery.start_soon(collect, index, task, results)
    return [results[i] for i in range(len(tasks))]
You can then use trio in the exact same way as asyncio by simply patching trio (adding the gather function):
import trio
trio.gather = gather
Here is a practical example:
async def child(x):
    print(f"Child sleeping {x}")
    await trio.sleep(x)
    return 2 * x


async def parent():
    tasks = [(child, t) for t in range(3)]
    return await trio.gather(*tasks)


print("results:", trio.run(parent))
Technically, trio.Queue has been deprecated in trio 0.9. It has been replaced by trio.open_memory_channel.
Short example:
sender, receiver = trio.open_memory_channel(len(networkIds))

async with trio.open_nursery() as nursery:
    for i in networkIds:
        nursery.start_soon(fetch, sender, url.format(i), headers)
    async for value in receiver:
        # Do your job here
        pass
And in your fetch function you should call await sender.send(value) somewhere.
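For example, a minimal sketch of such a fetch (reusing the asks session s from the question):

async def fetch(sender, url, headers):
    response = await s.get(url, headers=headers)
    # hand the result back to the consuming loop through the channel
    await sender.send(response)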
When I run nursery.start_soon(fetch...), I am printing data within fetch, but how do I return the data? I didn't see anything similar to the asyncio.gather(*tasks) function.
You're asking two different questions, so I'll just answer this one. Matthias already answered your other question.
When you call start_soon(), you are asking Trio to run the task in the background and then keep going. This is why Trio is able to run fetch() several times concurrently. But because Trio keeps going, there is no way to "return" the result the way a Python function normally would: where would it even return to?
You can use a queue to let fetch() tasks send results to another task for additional processing.
To create a queue:
response_queue = trio.Queue()
When you start your fetch tasks, pass the queue as an argument, and send a sentinel to the queue when you're done:
async with trio.open_nursery() as nursery:
    for i in networkIds:
        nursery.start_soon(fetch, url.format(i), headers, response_queue)
await response_queue.put(None)
After you download a URL, put the response into the queue:
async def fetch(url, headers, response_queue):
    print("Start: ", url)
    response = await s.get(url, headers=headers)
    # Add responses to queue
    await response_queue.put(response)
    print("Finished: ", url, len(response.content), response.status_code)
With the changes above, your fetch tasks will put responses into the queue. Now you need to read responses from the queue so you can process them. You might add a new function to do this:
async def process(response_queue):
    async for response in response_queue:
        if response is None:
            break
        # Do whatever processing you want here.
You should start this process function as a background task before you start any fetch tasks so that it will process responses as soon as they are received.
Read more in the Synchronizing and Communicating Between Tasks section of the Trio documentation.

Await a method and assign a variable to the returned value with asyncio?

I'm using asyncio with requests to try to make a core-module asynchronous program. I've run into a difficulty when trying to do something like this:
import asyncio
import requests


async def main():
    await r = requests.get(URL)
What I thought this would do is wait for the get request to finish, then take the return value and put it in r, but this error happens:
File "prog.py", line 20
await r = requests.get(URL)
^
SyntaxError: can't assign to await expression
r = await requests.get(URL) doesn't seem to work either, giving
prog.py:31: RuntimeWarning: coroutine 'coroutine' was never awaited
coroutine(args)
Does anyone know how to do this?
How to use await?
await can only be used to await a coroutine - a special object returned by calling a function defined with async def:
import asyncio


async def test():
    return True


async def main():
    # test() returns coroutine:
    coro = test()
    print(coro)  # <coroutine object test at ...>

    # we can await for coroutine to get result:
    res = await coro
    print(res)  # True


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
Read also this answer about using asyncio.
Why doesn't await requests.get(URL) work?
Because requests.get is not a coroutine (it's not defined with async def); it's a regular function by nature.
If you want to make requests asynchronously, you should either use a dedicated async module like aiohttp for this, or wrap requests into a coroutine using threads. See code snippets here for both examples.
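A minimal sketch of both options, using https://example.com purely as an illustrative endpoint:

import asyncio
import aiohttp
import requests


async def fetch_with_aiohttp(url):
    # Option 1: a natively asynchronous HTTP client.
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.text()


async def fetch_with_requests_in_thread(url):
    # Option 2: run the blocking requests call in a worker thread
    # so it can be awaited without blocking the event loop.
    loop = asyncio.get_running_loop()
    resp = await loop.run_in_executor(None, requests.get, url)
    return resp.text


async def main():
    url = 'https://example.com'
    print((await fetch_with_aiohttp(url))[:60])
    print((await fetch_with_requests_in_thread(url))[:60])


asyncio.run(main())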

Resources