I have an async call, for example:
from httpx import AsyncClient, Response
client = AsyncClient()
my_call = client.get(f"{HOST}/api/my_method") # async call
And I want to pass it to some retry logic like
async def retry_http(http_call):
    count = 5
    status, response = None, None
    while count > 0:
        response: Response = await http_call
        if response.status_code == 200:
            break
        count -= 1
        if response.status_code in (502, 504):
            await asyncio.sleep(2)
        else:
            break
    if response.status_code != 200:
        return {
            "success": False,
            "result": {
                "error": f"Response Error",
                "response_code": response.status_code,
                "response": response.text,
            }
        }
    return response.json()
await retry_http(my_call)
but I got
RuntimeError
cannot reuse already awaited coroutine
Is there any way to make my_call a reusable coroutine?
It is not possible in Python - a co-routine, once created, has internal state that can't be easily duplicated - so once it runs, the internal state changes, including the internal line of code that is in execution, and there is no way to "rewind" that.
The simplest approach is to do as in @RyabchenkoAlexander's answer and accept the co-routine function and its parameters separately, creating the co-routine inside your retry function.
An alternative that is a nice Python idiom is to decorate the co-routine function - you make your retry_http a decorator instead, which wraps the underlying co-routine function in the retrying code.
Then, if the functions where you want this behavior are in your code, you can use the decorator syntax (@name prefixing the function definition) so that all calls will have the retry behavior, or you can apply it as a plain expression to get a new, retriable, co-routine function. Your final call could be:
result = await (retry_http(client.get) (f"{HOST}/api/my_method"))
(note the extra pair of parentheses around client.get, decorating it)
The decorator itself could be:
def retry_http(coro_func):
    async def wrapper(*args, **kw):
        # your original code - just replacing the await expression
        ...
        while count > 0:
            ...
            result = await coro_func(*args, **kw)
            ...
        ...
        return result
    return wrapper
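For completeness, here is a hedged sketch of the filled-in decorator, reusing the retry logic from the question verbatim (the 200/502/504 handling and the count of 5 are taken from there):

import asyncio
from functools import wraps

def retry_http(coro_func):
    @wraps(coro_func)
    async def wrapper(*args, **kw):
        count = 5
        response = None
        while count > 0:
            # a fresh coroutine is created on every attempt
            response = await coro_func(*args, **kw)
            if response.status_code == 200:
                break
            count -= 1
            if response.status_code in (502, 504):
                await asyncio.sleep(2)
            else:
                break
        if response.status_code != 200:
            return {
                "success": False,
                "result": {
                    "error": "Response Error",
                    "response_code": response.status_code,
                    "response": response.text,
                },
            }
        return response.json()
    return wrapper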
As for your original intent: it would actually be possible to introspect a coroutine object, its internal variables and passed parameters, to recreate a co-routine object that has not yet started - however, that would involve using introspection to locate the original callable, and making the call again - it would be cumbersome, could be slow, and for little gain. I will outline the requirements, nonetheless:
A co-routine object has the cr_code and cr_frame attributes - you'd need to retrieve the function associated with the code object in cr_code - probably using the garbage collector API, or recreate a new function re-using the same code object, by calling types.FunctionType with the same parameters - and the local and global variables can be retrieved from the frame object in cr_frame.
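Purely as an illustration of where those pieces live on a coroutine object (this only inspects, it does not attempt the full rebuild):

import types

async def demo(x, y=2):
    return x + y

coro = demo(1)                       # created, but never awaited
print(coro.cr_code.co_name)          # 'demo'
print(coro.cr_frame.f_locals)        # {'x': 1, 'y': 2} - the passed parameters
# A rebuild would go through types.FunctionType(coro.cr_code, coro.cr_frame.f_globals, ...)
coro.close()                         # silence the "never awaited" warning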
It can be fixed in the following way:
async def retry_http(http_call, *args, **kwargs):
    count = 5
    status, response = None, None
    while count > 0:
        response: Response = await http_call(*args, **kwargs)
        if response.status_code == 200:
            break
        count -= 1
        if response.status_code in (502, 504):
            await asyncio.sleep(2)
        else:
            break
    if response.status_code != 200:
        return {
            "success": False,
            "result": {
                "error": f"Response Error",
                "response_code": response.status_code,
                "response": response.text,
            }
        }
    return response.json()
client = AsyncClient()
await retry_http(client.get, f"{HOST}/api/my_method")
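Another option that works with this fix is functools.partial: it pre-binds the URL without creating the coroutine, so retry_http can call it fresh on every attempt:

from functools import partial

client = AsyncClient()
await retry_http(partial(client.get, f"{HOST}/api/my_method"))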
I am trying to send many requests to a url (~50) concurrently.
from asyncio import Queue
import yaml
import asyncio
from aiohttp import ClientSession, TCPConnector

async def http_get(url, cookie):
    cookie = cookie.split('; ')
    cookie1 = cookie[0].split('=')
    cookie2 = cookie[1].split('=')
    cookies = {
        cookie1[0]: cookie1[1],
        cookie2[0]: cookie2[1]
    }
    async with ClientSession(cookies=cookies) as session:
        async with session.get(url, ssl=False) as response:
            return await response.json()

class FetchUtil:
    def __init__(self):
        self.config = yaml.safe_load(open('../config.yaml'))

    def fetch(self):
        asyncio.run(self.extract_objects())

    async def http_get_objects(self, object_type, limit, offset):
        path = '/path' + \
            '?query=&filter=%s&limit=%s&offset=%s' % (
                object_type,
                limit,
                offset)
        return await self.http_get_domain(path)

    async def http_get_objects_limit(self, object_type, offset):
        result = await self.http_get_objects(
            object_type,
            self.config['object_limit'],
            offset
        )
        return result['result']

    async def http_get_domain(self, path):
        return await http_get(
            f'https://{self.config["domain"]}{path}',
            self.config['cookie']
        )

    async def call_objects(self, object_type, offset):
        result = await self.http_get_objects_limit(
            object_type,
            offset
        )
        return result

    async def extract_objects(self):
        calls = []
        object_count = (await self.http_get_objects(
            'PV', '1', '0'))['result']['count']
        for i in range(0, object_count, self.config['object_limit']):
            calls.append(self.call_objects('PV', str(i)))

        queue = Queue()
        for i in range(0, len(calls), self.config['call_limit']):
            results = await asyncio.gather(*calls[i:self.config['call_limit']])
            await queue.put(results)
After running this code using fetch as the entry point I get the following error message:
/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/events.py:88: RuntimeWarning: coroutine 'FetchUtil.call_objects' was never awaited
self._context.run(self._callback, *self._args)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
The program stops executing after asyncio.gather returns for the first time. I am having trouble understanding this message since I thought that I diligently made sure all functions were async tasks. The only function I didn't await was call_objects, since I wanted it to run concurrently.
https://xinhuang.github.io/posts/2017-07-31-common-mistakes-using-python3-asyncio.html#org630d301
in this article gives the following explanation:
This runtime warning can happen in many scenarios, but the cause are
same: A coroutine object is created by the invocation of an async
function, but is never inserted into an EventLoop.
I believed that was what I was doing when I called the async tasks with asyncio.gather.
I should note that when I put a print('url') inside http_get it outputs the first 50 urls like I want; the problem seems to occur when asyncio.gather returns for the first time.
The posted code has a logic error: [i:self.config['call_limit']] should be [i:i + self.config['call_limit']].
It causes the error because the expression evaluates to an empty slice in iterations after the first one, causing some of the coroutines in calls to never get passed to gather, and therefore never awaited.
I don't actually understand why it didn't just keep executing the same requests many times instead of stopping with an error.
Because your i would get incremented, making it larger than call_limit in every loop iteration except the first. For example, assuming call_limit is 10, i will be 0 in the first iteration, and you'll await calls[0:10], so far so good. But in the next iteration, i will be 10, and you'll be awaiting calls[10:10], an empty slice. In the iteration after that you'll await calls[20:10] (also an empty slice, although logically it should be an error), then calls[30:10], again empty, and so on. Only the first iteration picks up actual members of the list.
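A quick illustration of that with plain lists (call_limit assumed to be 10):

calls = list(range(25))
call_limit = 10

# Buggy version: the end of the slice never moves.
print([calls[i:call_limit] for i in range(0, len(calls), call_limit)])
# -> [[0, ..., 9], [], []]

# Fixed version: both ends advance with i.
print([calls[i:i + call_limit] for i in range(0, len(calls), call_limit)])
# -> [[0, ..., 9], [10, ..., 19], [20, ..., 24]]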
I have a streaming application that almost continuously takes the data given as input and sends an HTTP request using that value and does something with the returned value.
Obviously to speed things up I've used asyncio and aiohttp libraries in Python 3.7 to get the best performance, but it becomes hard to debug given how fast the data moves.
This is what my code looks like
'''
Gets the final requests
'''
async def apiRequest(info, url, session, reqType, post_data=''):
    if reqType:
        async with session.post(url, data=post_data) as response:
            info['response'] = await response.text()
    else:
        async with session.get(url + post_data) as response:
            info['response'] = await response.text()
    logger.debug(info)
    return info

'''
Loops through the batches and sends it for request
'''
async def main(data, listOfData):
    tasks = []
    async with ClientSession() as session:
        for reqData in listOfData:
            try:
                task = asyncio.ensure_future(apiRequest(**reqData))
                tasks.append(task)
            except Exception as e:
                print(e)
                exc_type, exc_obj, exc_tb = sys.exc_info()
                fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
                print(exc_type, fname, exc_tb.tb_lineno)
        responses = await asyncio.gather(*tasks)
        return responses  # list of APIResponses

'''
Streams data in and prepares batches to send for requests
'''
async def Kconsumer(data, loop, batchsize=100):
    consumer = AIOKafkaConsumer(**KafkaConfigs)
    await consumer.start()
    dataPoints = []
    async for msg in consumer:
        try:
            sys.stdout.flush()
            consumedMsg = loads(msg.value.decode('utf-8'))
            if consumedMsg['tid']:
                dataPoints.append(loads(msg.value.decode('utf-8')))
            if len(dataPoints) == batchsize or time.time() - startTime > 5:
                '''
                #1: The task below goes and sends HTTP GET requests in bulk using aiohttp
                '''
                task = asyncio.ensure_future(getRequests(data, dataPoints))
                res = await asyncio.gather(*[task])
                if task.done():
                    outputs = []
                    '''
                    #2: Does some ETL on the returned values
                    '''
                    ids = await asyncio.gather(*[doSomething(**{'tid': x['tid'],
                                                                'cid': x['cid'], 'tn': x['tn'],
                                                                'id': x['id'], 'ix': x['ix'],
                                                                'ac': x['ac'],
                                                                'output': to_dict(xmltodict.parse(x['response'], encoding='utf-8')),
                                                                'loop': loop, 'option': 1}) for x in res[0]])
                    simplySaveDataIntoDataBase(id)  # This is where I see some missing data in the database
                dataPoints = []
        except Exception as e:
            logger.error(e)
            logger.error(traceback.format_exc())
            exc_type, exc_obj, exc_tb = sys.exc_info()
            fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
            logger.error(str(exc_type) + ' ' + str(fname) + ' ' + str(exc_tb.tb_lineno))

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    asyncio.ensure_future(Kconsumer(data, loop, batchsize=100))
    loop.run_forever()
Does the ensure_future need to be awaited ?
How does aiohttp handle requests that come a little later than the others? Shouldn't it hold the whole batch back instead of forgetting about it altogether?
Does the ensure_future need to be awaited ?
Yes, and your code is doing that already. await asyncio.gather(*tasks) awaits the provided tasks and returns their results in the same order.
Note that await asyncio.gather(*[task]) doesn't make sense, because it is equivalent to await asyncio.gather(task), which is again equivalent to await task. In other words, when you need the result of getRequests(data, dataPoints), you can write res = await getRequests(data, dataPoints) without the ceremony of first calling ensure_future() and then calling gather().
In fact, you almost never need to call ensure_future yourself:
- if you need to await multiple tasks, you can pass coroutine objects directly to gather, e.g. gather(coroutine1(), coroutine2());
- if you need to spawn a background task, you can call asyncio.create_task(coroutine(...)) (a short sketch of both follows this list).
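Sketch, assuming Python 3.7+ (the coroutine names are placeholders):

import asyncio

async def coroutine1():
    return 1

async def coroutine2():
    return 2

async def main():
    # Awaiting several coroutines at once - no ensure_future needed.
    results = await asyncio.gather(coroutine1(), coroutine2())

    # Spawning a background task that runs while main keeps going.
    background = asyncio.create_task(coroutine2())
    # ... do other work here ...
    results.append(await background)   # collect its result later
    return results

print(asyncio.run(main()))   # [1, 2, 2]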
How does aiohttp handle requests that come a little later than the others? Shouldn't it hold the whole batch back instead of forgetting about it altogether?
If you use gather, all requests must finish before any of them return. (That is not aiohttp policy, it's how gather works.) If you need to implement a timeout, you can use asyncio.wait_for or similar.
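For example, a hedged sketch of a per-batch timeout around the getRequests call from the question (the 10-second value is arbitrary):

import asyncio

async def fetch_batch(data, dataPoints):
    try:
        # Give the whole batch 10 seconds before giving up on it.
        return await asyncio.wait_for(getRequests(data, dataPoints), timeout=10)
    except asyncio.TimeoutError:
        return None   # decide what a too-slow batch should mean for you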
I've a REST API running on Python 3.7 + Tornado 5, with postgresql as database, using aiopg with SQLAlchemy core (via the aiopg.sa binding). For the unit tests, I use py.test with pytest-tornado.
All the tests go ok as soon as no query to the database is involved, where I'd get this:
Runtime error: Task cb=[IOLoop.add_future..() at venv/lib/python3.7/site-packages/tornado/ioloop.py:719]> got Future attached to a different loop
The same code works fine out of the tests, I'm capable of handling 100s of requests so far.
This is part of an @auth decorator which will check the Authorization header for a JWT token, decode it and get the user's data and attach it to the request; this is the part for the query:
partner_id = payload['partner_id']
provided_scopes = payload.get("scope", [])
for scope in scopes:
    if scope not in provided_scopes:
        logger.error(
            'Authentication failed, scopes are not compliant - '
            'required: {} - '
            'provided: {}'.format(scopes, provided_scopes)
        )
        raise ForbiddenException(
            "insufficient permissions or wrong user."
        )

db = self.settings['db']
partner = await Partner.get(db, username=partner_id)

# The user is authenticated at this stage, let's add
# the user info to the request so it can be used
if not partner:
    raise UnauthorizedException('Unknown user from token')

p = Partner(**partner)
setattr(self.request, "partner_id", p.uuid)
setattr(self.request, "partner", p)
The .get() async method from Partner comes from the Base class for all models in the app. This is the .get method implementation:
@classmethod
async def get(cls, db, order=None, limit=None, offset=None, **kwargs):
    """
    Get one instance that will match the criteria
    :param db:
    :param order:
    :param limit:
    :param offset:
    :param kwargs:
    :return:
    """
    if len(kwargs) == 0:
        return None
    if not hasattr(cls, '__tablename__'):
        raise InvalidModelException()
    tbl = cls.__table__
    instance = None

    clause = cls.get_clause(**kwargs)
    query = (tbl.select().where(text(clause)))
    if order:
        query = query.order_by(text(order))
    if limit:
        query = query.limit(limit)
    if offset:
        query = query.offset(offset)

    logger.info(f'GET query executing:\n{query}')
    try:
        async with db.acquire() as conn:
            async with conn.execute(query) as rows:
                instance = await rows.first()
    except DataError as de:
        [...]
    return instance
The .get() method above will either return a model instance (row representation) or None.
It uses the db.acquire() context manager, as described in aiopg's doc here: https://aiopg.readthedocs.io/en/stable/sa.html.
As described in this same doc, the sa.create_engine() method returns a connection pool, so the db.acquire() just uses one connection from the pool. I'm sharing this pool to every request in Tornado, they use it to perform the queries when they need it.
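For reference, a minimal sketch of how such a pool is usually created and used with aiopg.sa (connection parameters are placeholders; setup_db is the helper name used in the fixtures below):

from aiopg.sa import create_engine

async def setup_db():
    # create_engine() returns an Engine backed by a connection pool
    return await create_engine(
        user='postgres',
        password='secret',
        database='mydb',
        host='localhost',
    )

async def fetch_one(db):
    async with db.acquire() as conn:      # borrow one connection from the pool
        result = await conn.execute('SELECT 42 AS answer')
        return await result.first()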
So this is the fixture I've set up in my conftest.py:
@pytest.fixture
async def db():
    dbe = await setup_db()
    return dbe

@pytest.fixture
def app(db, event_loop):
    """
    Returns a valid testing Tornado Application instance.
    :return:
    """
    app = make_app(db)
    settings.JWT_SECRET = 'its_secret_one'
    return app
I can't find an explanation of why this is happening; Tornado's doc and source makes it clear that asyncIO event loop is used as default, and by debugging it I can see the event loop is indeed the same one, but for some reason it seems to get closed or stopped abruptly.
This is one test that fails:
@pytest.mark.gen_test(timeout=2)
def test_score_returns_204_empty(app, http_server, http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = yield http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
This test fails as it returns 401 instead of 204, given the query on the auth decorator fails due to the RuntimeError, which returns then an Unauthorized response.
Any idea from the async experts here will be very appreciated, I'm quite lost on this!!!
Well, after a lot of digging, testing and, of course, learning quite a lot about asyncio, I made it work myself. Thanks for the suggestions so far.
The issue was that the event_loop from asyncio was not running; as @hoefling mentioned, pytest itself does not support coroutines, but pytest-asyncio brings such a useful feature to your tests. This is very well explained here: https://medium.com/ideas-at-igenius/testing-asyncio-python-code-with-pytest-a2f3628f82bc
So, without pytest-asyncio, your async code that needs to be tested will look like this:
def test_this_is_an_async_test():
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(my_async_function(param1, param2, param3))
    assert result == 'expected'
We use loop.run_until_complete() as, otherwise, the loop will never be running, as this is the way asyncio works by default (and pytest makes nothing to make it work differently).
With pytest-asyncio, your test works with the well-known async / await parts:
async def test_this_is_an_async_test(event_loop):
    result = await my_async_function(param1, param2, param3)
    assert result == 'expected'
pytest-asyncio in this case wraps the run_until_complete() call above, summarizing it heavily, so the event loop will run and be available for your async code to use it.
Please note: the event_loop parameter in the second case is not even necessary here, pytest-asyncio gives one available for your test.
On the other hand, when you are testing your Tornado app, you usually need to get a http server up and running during your tests, listening in a well-known port, etc., so the usual way goes by writing fixtures to get a http server, base_url (usually http://localhost:, with an unused port, etc etc).
pytest-tornado comes up as a very useful one, as it offers several of these fixtures for you: http_server, http_client, unused_port, base_url, etc.
Also worth mentioning, it provides a gen_test() pytest mark, which converts any standard test to use coroutines via yield, and even lets you assert that it will run within a given timeout, like this:
@pytest.mark.gen_test(timeout=3)
def test_fetch_my_data(http_client, base_url):
    result = yield http_client.fetch('/'.join([base_url, 'result']))
    assert len(result) == 1000
But, this way it does not support async / await, and actually only Tornado's ioloop will be available via the io_loop fixture (although Tornado's ioloop uses asyncio underneath by default from Tornado 5.0 on), so you'd need to combine both pytest.mark.gen_test and pytest.mark.asyncio, and in the right order! (which is where I went wrong).
Once I understood better what could be the problem, this was the next approach:
@pytest.mark.gen_test(timeout=2)
@pytest.mark.asyncio
async def test_score_returns_204_empty(http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = await http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
But this is utterly wrong, if you understand how Python's decorator wrappers work. With the code above, pytest-asyncio's coroutine is then wrapped in a pytest-tornado yield gen.coroutine, which won't get the event loop running... so my tests were still failing with the same problem. Any query to the database was returning a Future waiting for an event loop to be running.
My updated code once I caught my silly mistake:
@pytest.mark.asyncio
@pytest.mark.gen_test(timeout=2)
async def test_score_returns_204_empty(http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = await http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
In this case, the gen.coroutine is wrapped inside the pytest-asyncio coroutine, and the event_loop runs the coroutines as expected!
But there was still a minor issue that took me a little while to realize, too; pytest-asyncio's event_loop fixture creates a new event loop for every test, while pytest-tornado also creates a new IOLoop. And the tests were still failing, but this time with a different error.
The conftest.py file now looks like this; please note I've re-declared the event_loop fixture to use the event_loop from pytest-tornado io_loop fixture itself (please recall pytest-tornado creates a new io_loop on each test function):
@pytest.fixture(scope='function')
def event_loop(io_loop):
    loop = io_loop.current().asyncio_loop
    yield loop
    loop.stop()

@pytest.fixture(scope='function')
async def db():
    dbe = await setup_db()
    yield dbe

@pytest.fixture
def app(db):
    """
    Returns a valid testing Tornado Application instance.
    :return:
    """
    app = make_app(db)
    settings.JWT_SECRET = 'its_secret_one'
    yield app
Now all my tests work, I'm back a happy man and very proud of my now better understanding of the asyncio way of life. Cool!
I wrote a script that uses a nursery and the asks module to loop through and call an API based upon the loop variables. I get responses but don't know how to return the data like you would with asyncio.
I also have a question on limiting the APIs to 5 per second.
from datetime import datetime
import asks
import time
import trio

asks.init("trio")
s = asks.Session(connections=4)

async def main():
    start_time = time.time()
    api_key = 'API-KEY'
    org_id = 'ORG-ID'
    networkIds = ['id1', 'id2', 'idn']
    url = 'https://api.meraki.com/api/v0/networks/{0}/airMarshal?timespan=3600'
    headers = {'X-Cisco-Meraki-API-Key': api_key, 'Content-Type': 'application/json'}
    async with trio.open_nursery() as nursery:
        for i in networkIds:
            nursery.start_soon(fetch, url.format(i), headers)
    print("Total time:", time.time() - start_time)

async def fetch(url, headers):
    print("Start: ", url)
    response = await s.get(url, headers=headers)
    print("Finished: ", url, len(response.content), response.status_code)

if __name__ == "__main__":
    trio.run(main)
When I run nursery.start_soon(fetch...), I am printing data within fetch, but how do I return the data? I didn't see anything similar to the asyncio.gather(*tasks) function.
Also, I can limit the number of sessions to 1-4, which helps get down below the 5 API per second limit, but was wondering if there was a built in way to ensure that no more than 5 APIs get called in any given second?
Returning data: pass the networkID and a dict to the fetch tasks:
async def main():
    …
    results = {}
    async with trio.open_nursery() as nursery:
        for i in networkIds:
            nursery.start_soon(fetch, url.format(i), headers, results, i)
    ## results are available here

async def fetch(url, headers, results, i):
    print("Start: ", url)
    response = await s.get(url, headers=headers)
    print("Finished: ", url, len(response.content), response.status_code)
    results[i] = response
Alternately, create a trio.Queue to which you put the results; your main task can then read the results from the queue.
API limit: create a trio.Queue(10) and start a task along these lines:
async def limiter(queue):
    while True:
        await trio.sleep(0.2)
        await queue.put(None)
Pass that queue to fetch, as another argument, and call await limit_queue.get() before each API call.
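A hedged sketch of how fetch would consume those tokens (this assumes a trio version that still ships trio.Queue; see the note about memory channels further down):

async def fetch(url, headers, results, i, limit_queue):
    await limit_queue.get()               # wait for the limiter task to release a token
    print("Start: ", url)
    response = await s.get(url, headers=headers)
    print("Finished: ", url, len(response.content), response.status_code)
    results[i] = response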
Based on these answers, you can define the following function:
async def gather(*tasks):

    async def collect(index, task, results):
        task_func, *task_args = task
        results[index] = await task_func(*task_args)

    results = {}
    async with trio.open_nursery() as nursery:
        for index, task in enumerate(tasks):
            nursery.start_soon(collect, index, task, results)
    return [results[i] for i in range(len(tasks))]
You can then use trio in the exact same way as asyncio by simply patching trio (adding the gather function):
import trio
trio.gather = gather
Here is a practical example:
async def child(x):
    print(f"Child sleeping {x}")
    await trio.sleep(x)
    return 2*x

async def parent():
    tasks = [(child, t) for t in range(3)]
    return await trio.gather(*tasks)

print("results:", trio.run(parent))
Technically, trio.Queue has been deprecated in trio 0.9. It has been replaced by trio.open_memory_channel.
Short example:
sender, receiver = trio.open_memory_channel(len(networkIds))

async with trio.open_nursery() as nursery:
    for i in networkIds:
        nursery.start_soon(fetch, sender, url.format(i), headers)

    async for value in receiver:
        # Do your job here
        pass
And in your fetch function you should call await sender.send(value) somewhere.
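A hedged sketch of that fetch, reusing the session s from the question:

async def fetch(sender, url, headers):
    print("Start: ", url)
    response = await s.get(url, headers=headers)
    await sender.send(response)           # hands the result to the async-for loop

(For the async for loop over receiver to terminate, the send side eventually needs to be closed as well.)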
When I run nursery.start_soon(fetch...), I am printing data within fetch, but how do I return the data? I didn't see anything similar to the asyncio.gather(*tasks) function.
You're asking two different questions, so I'll just answer this one. Matthias already answered your other question.
When you call start_soon(), you are asking Trio to run the task in the background, and then keep going. This is why Trio is able to run fetch() several times concurrently. But because Trio keeps going, there is no way to "return" the result the way a Python function normally would. Where would it even return to?
You can use a queue to let fetch() tasks send results to another task for additional processing.
To create a queue:
response_queue = trio.Queue()
When you start your fetch tasks, pass the queue as an argument and send a sentinel to the queue when you're done:
async with trio.open_nursery() as nursery:
    for i in networkIds:
        nursery.start_soon(fetch, url.format(i), headers, response_queue)
await response_queue.put(None)
After you download a URL, put the response into the queue:
async def fetch(url, headers, response_queue):
    print("Start: ", url)
    response = await s.get(url, headers=headers)
    # Add responses to queue
    await response_queue.put(response)
    print("Finished: ", url, len(response.content), response.status_code)
With the changes above, your fetch tasks will put responses into the queue. Now you need to read responses from the queue so you can process them. You might add a new function to do this:
async def process(response_queue):
    async for response in response_queue:
        if response is None:
            break
        # Do whatever processing you want here.
You should start this process function as a background task before you start any fetch tasks so that it will process responses as soon as they are received.
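Putting the pieces together, a hedged sketch of what the main task could look like (again assuming a trio version that still provides trio.Queue, and reusing networkIds, url and headers from the question):

async def main():
    response_queue = trio.Queue(len(networkIds))   # old trio.Queue takes a capacity
    async with trio.open_nursery() as outer:
        # Start the consumer first so responses are processed as they arrive.
        outer.start_soon(process, response_queue)
        async with trio.open_nursery() as nursery:
            for i in networkIds:
                nursery.start_soon(fetch, url.format(i), headers, response_queue)
        # All fetch tasks are done once the inner nursery exits.
        await response_queue.put(None)             # sentinel: tells process to stop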
Read more in the Synchronizing and Communicating Between Tasks section of the Trio documentation.
Due to some unusual constraints, I need to synchronously wait for a callback URL from another service before returning a response. Currently I have something resembling:
ROUTE = '/operation'

async def post(self):
    ##SOME OPERATIONS##
    post_body = { 'callbackUrl' : 'myservice.com/cb' }
    response = await other_service.post('/endpoint')

    global my_return_value
    my_return_value = None
    while not my_return_value:
        pass
    return self.make_response(my_return_value)
Then I have a way to handle the callback URL something like:
ROUTE = '/cb'

async def post(self):
    ##OPERATIONS###
    global my_return_value
    my_return_value = some_value
    return web.json_response()
The problem with this code is that it gets trapped in that while loop forever, even if the callback URL gets invoked. I suspect there is a better way to do this, but I'm not sure how to go about it nor how to google for it. Any ideas?
Thanks in advance!
Just a quick scan, but I think you're trapped in
while not my_return_value:
    pass
Python will be trapped there and not have time to deal with the callback function. What you need is
while not my_return_value:
    await asyncio.sleep(1)
(or you can even do an asyncio.sleep(0) if you don't want the millisecond delay).
An even nicer way would be (and now I'm writing from memory, no guarantees...):
my_return_value = asyncio.get_event_loop().create_future()
await my_return_value
return self.make_response(my_return_value.result())

async def post(self):
    ##OPERATIONS###
    my_return_value.set_result(some_value)
    return web.json_response()
Note however that either way will break very much if there is ever more than one concurrent use of this system. It feels very fragile! Maybe even better:
ROUTE = '/operation'

my_return_value = {}

async def post(self):
    ##SOME OPERATIONS##
    token = "%016x" % random.SystemRandom().randint(0, 2**128)
    post_body = { 'callbackUrl' : 'myservice.com/cb?token='+token }
    response = await other_service.post('/endpoint')

    my_return_value[token] = asyncio.get_event_loop().create_future()
    await my_return_value[token]
    result = my_return_value[token].result()
    del my_return_value[token]
    return self.make_response(result)

async def post(self):
    ##OPERATIONS###
    token = self.arguments("token")
    my_return_value[token].set_result(some_value)
    return web.json_response()
Now cherry on top would be a timer that would cancel the future after a timeout and clean up the entry in my_return_value after a while if the callback does not happen. Also, if you're going with my last suggestion, don't call it my_return_value but something like callback_future_by_token...
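A hedged sketch of that timeout, replacing the bare await in the /operation handler above with asyncio.wait_for (30 seconds is arbitrary; callback_future_by_token is the renamed dict):

try:
    result = await asyncio.wait_for(callback_future_by_token[token], timeout=30)
except asyncio.TimeoutError:
    result = None                      # the callback never came; report an error instead
finally:
    del callback_future_by_token[token]
return self.make_response(result)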