Asyncio task creation with different accepted methods gives different results - python-3.x

Having the following code:
import asyncio

async def mycoro(number: int):
    print(f"Starting {number}")
    await asyncio.sleep(1)
    print(f"Finishing {number}")
    return str(number)

c = mycoro(7)
# task = asyncio.ensure_future(c)
task = asyncio.create_task(c)
loop = asyncio.get_event_loop()
loop.run_until_complete(task)
loop.close()
Why do I get sys:1: RuntimeWarning: coroutine 'mycoro' was never awaited
while:
import asyncio

async def mycoro(number: int):
    print(f"Starting {number}")
    await asyncio.sleep(1)
    print(f"Finishing {number}")
    return str(number)

c = mycoro(7)
task = asyncio.ensure_future(c)
# task = asyncio.create_task(c)
loop = asyncio.get_event_loop()
loop.run_until_complete(task)
loop.close()
runs as expected,
if, as per https://docs.python.org/3.9/library/asyncio-task.html#creating-tasks, both ways of creating tasks:
task = asyncio.create_task(coro())
task = asyncio.ensure_future(coro())
are accepted?
*Running with Python 3.9

The warning is just a side effect of the error that occurs before it, and that error is the real issue:
Traceback (most recent call last):
  File "/home/hniksic/Desktop/a.py", line 11, in <module>
    task = asyncio.create_task(c)
  File "/usr/lib/python3.8/asyncio/tasks.py", line 381, in create_task
    loop = events.get_running_loop()
RuntimeError: no running event loop
You can't call asyncio.create_task() outside of a running event loop; it is designed to be called from inside a coroutine to submit additional tasks to the event loop. The reason for that design is that submitting tasks to the "current" event loop (one returned by asyncio.get_event_loop()), as ensure_future does, is incompatible with asyncio.run(), which always creates a fresh event loop. In other words, if asyncio.create_task(c) at top-level submitted c to the "current" event loop, then asyncio.run(other_coroutine()) would ignore c because it runs a different event loop. To avoid confusion, asyncio.create_task() requires being called from inside a running event loop.
If you want to create tasks on a particular event loop before running it with run_until_complete(), you can use loop.create_task(c) instead. Making that change removes the error.
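For example, here is a minimal corrected version of the snippet (a sketch; it assumes Python 3.9, where asyncio.get_event_loop() at the top level still creates a loop if none exists):
import asyncio

async def mycoro(number: int):
    print(f"Starting {number}")
    await asyncio.sleep(1)
    print(f"Finishing {number}")
    return str(number)

loop = asyncio.get_event_loop()
# loop.create_task() schedules the coroutine on this specific loop,
# so no running loop is required at this point
task = loop.create_task(mycoro(7))
loop.run_until_complete(task)
loop.close()
On current Python you would usually just write asyncio.run(mycoro(7)) instead.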

Related

Running asyncio task concurrently and in background

So apologies, because I've seen this question asked a bunch, but having looked through all of those questions, none seems to fix my problem. My code looks like this:
TDSession = TDClient()
TDSession.grab_refresh_token()
q = queue.Queue(10)
asyncio.run(listener.startStreaming(TDSession, q))
while True:
    message = q.get()
    print('oh shoot!')
    print(message)
    orderEntry.placeOrder(TDSession=TDSession)
I have tried doing asyncio.create_task(listener.startStreaming(TDSession, q)), but the problem is I get
RuntimeError: no running event loop
sys:1: RuntimeWarning: coroutine 'startStreaming' was never awaited
which confused me because this seemed to work in "Can an asyncio event loop run in the background without suspending the Python interpreter?", which is what I'm trying to do.
The listener.startStreaming function looks like this:
async def startStreaming(TDSession, q):
    streamingClient = TDSession.create_streaming_session()
    streamingClient.account_activity()
    await streamingClient.build_pipeline()
    while True:
        message = await streamingClient.start_pipeline()
        message = parseMessage(message)
        if message != None:
            print('putting message into q')
            print( dict(message) )
            q.put(message)
Is there a way to make this work where I can run the listener in the background?
EDIT: I've tried this as well, but it only runs the consumer function, instead of running both at the same time
TDSession.grab_refresh_token()
q = queue.Queue(10)
loop = asyncio.get_event_loop()
loop.create_task(listener.startStreaming(TDSession, q))
loop.create_task(consumer(TDSession, q))
loop.run_forever()
As you found out, the asyncio.run function runs the given coroutine until it is complete. In other words, it waits for the coroutine returned by listener.startStreaming to finish before proceeding to the next line.
Using asyncio.create_task, on the other hand, requires the caller to already be running inside an asyncio event loop. From the docs:
The task is executed in the loop returned by get_running_loop(), RuntimeError is raised if there is no running loop in current thread.
What you need is to combine the two, by creating a function that's async, and then call create_task inside that async function.
For example:
async def main():
    TDSession = TDClient()
    TDSession.grab_refresh_token()
    q = asyncio.Queue(10)
    streaming_task = asyncio.create_task(listener.startStreaming(TDSession, q))
    while True:
        message = await q.get()
        print('oh shoot!')
        print(message)
        orderEntry.placeOrder(TDSession=TDSession)
    await streaming_task  # If you want to wait for `startStreaming` to complete after the while loop

if __name__ == '__main__':
    asyncio.run(main())
Edit: From your comment I realized you want to use the producer-consumer pattern, so I also updated the example above to use asyncio.Queue instead of a queue.Queue, in order for the thread to be able to jump between the producer (startStreaming) and the consumer (the while loop)
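As a side note, here is a stripped-down sketch of that producer/consumer hand-off with stand-in coroutines (producer and consumer below are hypothetical placeholders for startStreaming and the order-placing loop), showing why the awaitable asyncio.Queue matters: every await q.get() / await q.put() gives the other coroutine a chance to run:
import asyncio

async def producer(q: asyncio.Queue):
    # stand-in for startStreaming: keeps pushing messages onto the queue
    for i in range(3):
        await asyncio.sleep(1)        # simulate waiting on the stream
        await q.put(f"message {i}")

async def consumer(q: asyncio.Queue):
    # stand-in for the order-placing loop
    for _ in range(3):
        message = await q.get()       # yields control while the queue is empty
        print("got", message)

async def main():
    q = asyncio.Queue(10)
    await asyncio.gather(producer(q), consumer(q))

asyncio.run(main())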

How to check if a certain task is running?

I tried the following:
loop = asyncio.get_running_loop
task1 = asyncio.create_task(any_coroutine())
if task1 in asyncio.all_tasks(loop):
    do something...
but it never meets the if condition. Can someone help?
According to the documentation, create_task calls get_running_loop internally, so you don't have to get the event loop yourself - it assumes you're already running inside an event loop, and if you aren't, get_running_loop() raises the following error, which is what happens in your code:
RuntimeError: no running event loop
Instead, use the newer async/await style; the loop argument will be deprecated in Python 3.10, so dropping it is a better habit if you're learning asyncio.
The following example meets the condition in the if block correctly:
import asyncio

async def my_coro():
    await asyncio.sleep(5)

async def main():
    task = asyncio.create_task(my_coro())
    if task in asyncio.all_tasks():
        print('Task found!')
    else:
        print('Missed!')

if __name__ == '__main__':
    asyncio.run(main())
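As an addition to the answer above (not part of it): if you only need to know whether one specific task is still running, checking the task object directly is simpler than searching all_tasks():
import asyncio

async def my_coro():
    await asyncio.sleep(5)

async def main():
    task = asyncio.create_task(my_coro())
    if not task.done():
        print('Task is still pending or running')
    await task            # wait for it so it isn't cancelled at shutdown

asyncio.run(main())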

Event loop in asyncio is overflowing even though I am adding only 4 executions at a time

In the following code I am calling the getSEUPEvents() function 4 times per pass of the loop, then restart the loop for the next 4. Still, executions keep getting added to the loop. If the loop is global, can anyone suggest another strategy to call the function n times, by grouping or any other means?
def SEUPCustomers(featurecode, threshholdTime):
    # headers = buildHeaders()
    with open("ActiveCustomers.csv", "r") as f:
        SEUPCustomersList = []
        csvReader = csv.reader(f)
        tasks = []
        for row in csvReader:
            tasks.append(asyncio.ensure_future(getSEUPEvents(featurecode, row, threshholdTime, SEUPCustomersList)))
        for task in range(0, len(tasks), 4):
            loop = asyncio.get_event_loop()
            loop.run_until_complete(asyncio.wait(tasks[task:task+4]))
            loop.close()
ensure_future and run_until_complete don't work the way you expect them to. Here is what they do:
ensure_future schedules the awaitable to run in the main loop, effectively creating what could be called "background task", which will run whenever the main loop runs;
run_until_complete submits the given awaitable to the event loop and runs the event loop until that particular future completes.
So, if you submit 100 tasks to the event loop and then use run_until_complete to wait until one of them finishes, the loop will run all 100 tasks, and stop once the one whose completion you're interested in finishes.
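A small illustration of that behavior with toy coroutines (a sketch, not the question's code): run_until_complete waits only for the slowest task, but every task scheduled with ensure_future runs in the meantime:
import asyncio

async def job(i):
    await asyncio.sleep(0.1 * i)
    print(f"job {i} done")

loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(job(i), loop=loop) for i in range(5)]
# We only wait for the slowest job, yet all five print "done",
# because ensure_future already scheduled them on the loop.
loop.run_until_complete(tasks[-1])
loop.close()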
To write the code you wanted, you can simply avoid the ensure_future step:
def SEUPCustomers(featurecode, threshholdTime):
    # headers = buildHeaders()
    with open("ActiveCustomers.csv", "r") as f:
        SEUPCustomersList = []
        csvReader = csv.reader(f)
        coros = []
        for row in csvReader:
            coros.append(getSEUPEvents(featurecode, row, threshholdTime, SEUPCustomersList))
        loop = asyncio.get_event_loop()
        for i in range(0, len(coros), 4):
            loop.run_until_complete(asyncio.wait(coros[i:i+4]))
Also, loop.close() is incorrect if you plan to use the loop later on. If you call loop.close() at all, you should call it once you're completely done with the event loop.
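On newer Python versions you can also avoid managing the loop by hand; a roughly equivalent batched version (a sketch, not the original answer's code) would be:
import asyncio

async def run_in_batches(coros, batch_size=4):
    results = []
    for i in range(0, len(coros), batch_size):
        # run at most batch_size coroutines concurrently, then move to the next slice
        results += await asyncio.gather(*coros[i:i + batch_size])
    return results

# usage sketch, with coros built the same way as above:
# asyncio.run(run_in_batches(coros, 4))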

Why is this exception immediately raised from an asyncio Task?

My understanding from the documentation is that asyncio.Tasks, as an asyncio.Future subclass, will store exceptions raised in them and they can be retrieved at my leisure.
However, in this sample code, the exception is raised immediately:
import asyncio
async def bad_task():
    raise Exception()
async def test():
    loop = asyncio.get_event_loop()
    task = loop.create_task(bad_task())
    await task
    # I would expect to get here
    exp = task.exception()
    # but we never do because the function exits on line 3
loop = asyncio.get_event_loop()
loop.run_until_complete(test())
loop.close()
Example output (Python 3.6.5):
python3 ./test.py
Traceback (most recent call last):
  File "./test.py", line 15, in <module>
    loop.run_until_complete(test())
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
    return future.result()
  File "./test.py", line 9, in test
    await task
  File "./test.py", line 4, in bad_task
    raise Exception()
Exception
Is this a quirk of creating & calling tasks when already within async code?
await will raise any exception thrown by the task, because it's meant to make asynchronous code look almost exactly like synchronous code. If you want to catch them, you can use a normal try...except clause.
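For example, a minimal sketch of catching it at the await site:
import asyncio

async def bad_task():
    raise Exception("boom")

async def test():
    loop = asyncio.get_event_loop()
    task = loop.create_task(bad_task())
    try:
        await task                   # re-raises the exception from bad_task()
    except Exception as e:
        print("caught:", e)          # handled like any synchronous exception

asyncio.run(test())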
As Matti explained, exceptions raised by a coroutine are propagated to the awaiting site. This is intentional, as it ensures that errors do not pass silently by default. However, if one needs to do so, it is definitely possible to await a task's completion without immediately accessing its result/exception.
Here is a simple and efficient way to do so, by using a small intermediate Future:
async def test():
    loop = asyncio.get_event_loop()
    task = loop.create_task(bad_task())
    task_done = loop.create_future()  # you could also use asyncio.Event
    # Arrange for task_done to complete once task completes.
    task.add_done_callback(task_done.set_result)
    # Wait for the task to complete. Since we're not obtaining its
    # result, this won't raise no matter what bad_task() does...
    await task_done
    # ...and this will work as expected.
    exp = task.exception()
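Another commonly used option (an addition, not part of the original answer) is asyncio.wait, which waits for the task to finish without retrieving its result, so it never raises on the task's behalf:
async def test():
    loop = asyncio.get_event_loop()
    task = loop.create_task(bad_task())
    # wait() only waits; it does not consume the result, so nothing is raised here
    await asyncio.wait([task])
    exp = task.exception()           # inspect the stored exception at your leisure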

Using Asyncio subprocess in a pyramid view

I am trying to run an asyncio subprocess in a Pyramid view, but the view hangs and the async task appears to never complete. I can run this example outside of a Pyramid view and it works.
With that said, I originally tested using loop = asyncio.get_event_loop(), but this tells me RuntimeError: There is no current event loop in thread 'Dummy-2'.
There are certainly things I don't fully understand here, like maybe the view thread is different from the main thread, so get_event_loop doesn't work.
So does anybody know why my async task might not yield its result in this scenario? This is a naive example:
@asyncio.coroutine
def async_task(dir):
    # This task can be of varying length for each handled directory
    print("Async task start")
    create = asyncio.create_subprocess_exec(
        'ls',
        '-l',
        dir,
        stdout=asyncio.subprocess.PIPE)
    proc = yield from create
    # Wait for the subprocess exit
    data = yield from proc.stdout.read()
    exitcode = yield from proc.wait()
    return (exitcode, data)
@view_config(
    route_name='test_async',
    request_method='GET',
    renderer='json'
)
def test_async(request):
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    dirs = ['/tmp/1/', '/tmp/2/', '/tmp/3/']
    tasks = []
    for dir in dirs:
        tasks.append(asyncio.ensure_future(async_task(dir), loop=loop))
    loop.run_until_complete(asyncio.gather(*tasks))
    loop.close()
    return
You are invoking loop.run_until_complete in your view so clearly it is going to block until complete!
If you want to use asyncio with a WSGI app, then you need to do so in another thread. For example, you could spin up a thread that contains the event loop and executes your async code. WSGI code is all synchronous, so any async code must be done this way, with its own issues, or you can just live with it blocking the request thread like you're doing now.
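A minimal sketch of that background-thread approach (names are illustrative; async_task is the coroutine from the question, reused as-is):
import asyncio
import threading

# one event loop for the whole process, running forever in its own thread
background_loop = asyncio.new_event_loop()
threading.Thread(target=background_loop.run_forever, daemon=True).start()

def test_async(request):
    dirs = ['/tmp/1/', '/tmp/2/', '/tmp/3/']
    # submit coroutines to the background loop from this synchronous WSGI thread
    futures = [
        asyncio.run_coroutine_threadsafe(async_task(d), background_loop)
        for d in dirs
    ]
    # .result() still blocks this request thread until the work finishes;
    # drop it if you only want to fire off the work and return immediately
    return [f.result() for f in futures]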

Resources