My understanding from the documentation is that asyncio.Tasks, as an asyncio.Future subclass, will store exceptions raised in them and they can be retrieved at my leisure.
However, in this sample code, the exception is raised immediately:
import asyncio

async def bad_task():
    raise Exception()

async def test():
    loop = asyncio.get_event_loop()
    task = loop.create_task(bad_task())
    await task
    # I would expect to get here
    exp = task.exception()
    # but we never do, because the await above re-raises the exception

loop = asyncio.get_event_loop()
loop.run_until_complete(test())
loop.close()
Example output (Python 3.6.5):
python3 ./test.py
Traceback (most recent call last):
  File "./test.py", line 15, in <module>
    loop.run_until_complete(test())
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
    return future.result()
  File "./test.py", line 9, in test
    await task
  File "./test.py", line 4, in bad_task
    raise Exception()
Exception
Is this a quirk of creating & calling tasks when already within async code?
await will raise any exception thrown by the task, because it's meant to make asynchronous code look almost exactly like synchronous code. If you want to catch them, you can use a normal try...except clause.
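For example, a minimal sketch using the names from the question:

async def test():
    loop = asyncio.get_event_loop()
    task = loop.create_task(bad_task())
    try:
        await task
    except Exception as exc:
        # the exception raised inside bad_task() is caught here
        print(f"caught: {exc!r}")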
As Matti explained, exceptions raised by a coroutine are propagated to the awaiting site. This is intentional, as it ensures that errors do not pass silently by default. However, if one needs to do so, it is definitely possible to await a task's completion without immediately accessing its result/exception.
Here is a simple and efficient way to do so, by using a small intermediate Future:
async def test():
    loop = asyncio.get_event_loop()
    task = loop.create_task(bad_task())
    task_done = loop.create_future()  # you could also use asyncio.Event
    # Arrange for task_done to complete once task completes.
    task.add_done_callback(task_done.set_result)
    # Wait for the task to complete. Since we're not obtaining its
    # result, this won't raise no matter what bad_task() does...
    await task_done
    # ...and this will work as expected.
    exp = task.exception()
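Another option (a sketch, not from the original answer) is asyncio.wait(), which waits for completion but never retrieves results, so the exception stays stored on the task:

async def test():
    loop = asyncio.get_event_loop()
    task = loop.create_task(bad_task())
    # asyncio.wait() does not consume the task's result or exception
    await asyncio.wait([task])
    exp = task.exception()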
Related
Having the following code:
import asyncio

async def mycoro(number: int):
    print(f"Starting {number}")
    await asyncio.sleep(1)
    print(f"Finishing {number}")
    return str(number)

c = mycoro(7)
# task = asyncio.ensure_future(c)
task = asyncio.create_task(c)
loop = asyncio.get_event_loop()
loop.run_until_complete(task)
loop.close()
Why do I get sys:1: RuntimeWarning: coroutine 'mycoro' was never awaited
while:
import asyncio

async def mycoro(number: int):
    print(f"Starting {number}")
    await asyncio.sleep(1)
    print(f"Finishing {number}")
    return str(number)

c = mycoro(7)
task = asyncio.ensure_future(c)
# task = asyncio.create_task(c)
loop = asyncio.get_event_loop()
loop.run_until_complete(task)
loop.close()
runs as expected?
This is despite the fact that, as per https://docs.python.org/3.9/library/asyncio-task.html#creating-tasks, both ways of creating tasks:
task = asyncio.create_task(coro())
task = asyncio.ensure_future(coro())
are accepted.
(Running with Python 3.9.)
The warning is just a side effect of the error that precedes it, which is the real issue:
Traceback (most recent call last):
  File "/home/hniksic/Desktop/a.py", line 11, in <module>
    task = asyncio.create_task(c)
  File "/usr/lib/python3.8/asyncio/tasks.py", line 381, in create_task
    loop = events.get_running_loop()
RuntimeError: no running event loop
You can't call asyncio.create_task() outside of a running event loop; it is designed to be called from inside a coroutine to submit additional tasks to the event loop. The reason for that design is that submitting tasks to the "current" event loop (one returned by asyncio.get_event_loop()), as ensure_future does, is incompatible with asyncio.run(), which always creates a fresh event loop. In other words, if asyncio.create_task(c) at top-level submitted c to the "current" event loop, then asyncio.run(other_coroutine()) would ignore c because it runs a different event loop. To avoid confusion, asyncio.create_task() requires being called from inside a running event loop.
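For example, a sketch of the asyncio.run() style, reusing mycoro from the question:

import asyncio

async def main():
    # a loop is running here, so asyncio.create_task() is allowed
    task = asyncio.create_task(mycoro(7))
    return await task

asyncio.run(main())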
If you want to create tasks on a particular event loop before running it with run_until_complete(), you can use loop.create_task(c) instead. Making that change removes the error.
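A sketch of that variant, keeping the original top-level structure:

c = mycoro(7)
loop = asyncio.get_event_loop()
task = loop.create_task(c)  # bind the task to this particular loop
loop.run_until_complete(task)
loop.close()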
I have a problem understanding some of the limitations of using print inside an async function. Basically this is my code:
#!/usr/bin/env python

import sys
import asyncio
import aiohttp

async def amain(loop, args):
    session = aiohttp.ClientSession(loop=loop)
    try:
        # using session to fetch a large json file which is stored
        # in obj
        print(obj)  # for debugging purposes
    finally:
        await session.close()

def main():
    loop = asyncio.get_event_loop()
    res = 1
    try:
        res = loop.run_until_complete(amain(loop, args))
    except KeyboardInterrupt:
        # silence traceback when pressing ctrl+c
        pass
    loop.close()
    return res

if __name__ == '__main__':
    sys.exit(main())
If I execute this, the json object is printed on stdout and then the program suddenly dies with this error:
$ dwd-get-sensor-file ; echo $?
Traceback (most recent call last):
  File "/home/yanez/anaconda/py3/envs/mondas/bin/dwd-get-sensor-file", line 11, in <module>
    load_entry_point('mondassatellite', 'console_scripts', 'dwd-get-sensor-file')()
  File "/home/yanez/projects/mondassatellite/mondassatellite/mondassatellite/bin/dwd_get_sensor_file.py", line 75, in main
    res = loop.run_until_complete(amain(loop, args))
  File "/home/yanez/anaconda/py3/envs/mondas/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "/home/yanez/projects/mondassatellite/mondassatellite/mondassatellite/bin/dwd_get_sensor_file.py", line 57, in amain
    print(obj)
BlockingIOError: [Errno 11] write could not complete without blocking
1
The funny thing is that when I execute my code redirecting stdout to a file like this
$ dwd-get-sensor-file > output.txt ; echo $?
0
the exception doesn't happen and the whole output is correctly redirected to output.txt.
For testing purposes I converted the json object to a string, and instead of doing print(obj) I do sys.stdout.write(obj_as_str); then I get this exception:
BlockingIOError: [Errno 11] write could not complete without blocking
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>
I've searched for this BlockingIOError exception, but all the threads I find have something to do with network sockets or CI builds. However, I found one interesting GitHub comment:
The make: write error is almost certainly EAGAIN from stdout. Pretty much every command line tool expects stdout to be in blocking mode, and does not properly retry when in nonblocking mode.
So when I executed this
python -c 'import os,sys,fcntl; flags = fcntl.fcntl(sys.stdout, fcntl.F_GETFL); print(flags&os.O_NONBLOCK);'
I get 2048, which means blocking (or is this the other way round? I'm confused). After executing this
python -c 'import os,sys,fcntl; flags = fcntl.fcntl(sys.stdout, fcntl.F_GETFL); fcntl.fcntl(sys.stdout, fcntl.F_SETFL, flags&~os.O_NONBLOCK);'
I don't get the BlockingIOError exceptions anymore, but I don't like this solution.
So, my question is: how should we deal with writing to stdout inside an async function? If I know that I'm dealing with stdout, should I set stdout to non-blocking and revert it back when my program exits? Is there a specific strategy for this?
Give aiofiles a try, using the stdout FD as the file object.
aiofiles helps with this by introducing asynchronous versions of files that support delegating operations to a separate thread pool.
In terms of actually using aiofiles with an FD directly, you could probably extend the aiofiles.os module, using wrap(os.write).
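A rough sketch of that idea, assuming aiofiles.open() accepts an already-open file descriptor the way the builtin open() does (aprint is a hypothetical helper name):

import sys
import aiofiles

async def aprint(text):
    # the write is delegated to a thread pool, so a non-blocking stdout
    # no longer raises BlockingIOError inside the event loop
    async with aiofiles.open(sys.stdout.fileno(), mode="w", closefd=False) as out:
        await out.write(text + "\n")
        await out.flush()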
I am trying to create a TCP server in Python with Tornado.
My handle_stream method looks like this:
async def handle_stream(self, stream, address):
    while True:
        try:
            stream.read_until_close(streaming_callback=self._on_read)
        except StreamClosedError:
            break
In the _on_read method I am trying to read and process the data, but whenever a new client connects to the server it gives an AssertionError: Already reading error:
File "/.local/lib/python3.5/site-packages/tornado/iostream.py", line 525, in read_until_close
future = self._set_read_callback(callback)
File "/.local/lib/python3.5/site-packages/tornado/iostream.py", line 860, in _set_read_callback
assert self._read_future is None, "Already reading"
read_until_close asynchronously reads all data from the socket until it is closed. read_until_close has to be called only once, but the loop forces a second call, which is why you get the error:
on the first iteration, read_until_close sets streaming_callback and returns a Future that you could await or use later;
on the second iteration, read_until_close raises an exception, since you already set a callback on the first iteration.
read_until_close returns a Future object, and you can await it to make things work:
async def handle_stream(self, stream, address):
    try:
        await stream.read_until_close(streaming_callback=self._on_read)
    except StreamClosedError:
        pass  # do something
Hello, I have the following for my async loop:
async def start_process(restore_items, args, loop):
    with GlacierRestorer(args.temp_dir, args.error_log_bucket, loop) as restorer:
        restorer.initiate_restore_all(restore_items)
        tasks = []
        semaphore = asyncio.BoundedSemaphore(4)
        for item in restore_items:
            tasks.append(asyncio.ensure_future(restorer.transfer(item, semaphore)))
        await asyncio.gather(*tasks)

def main():
    args = get_args()
    restore_items = get_restore_items(args)
    for item in restore_items:
        print(item.source, ':', item.destination)
    try:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(start_process(restore_items, args, loop))
    except KeyboardInterrupt:
        pass
As my jobs and files get larger, I see that I keep getting a
socket.send() exception.
After reading the documentation, it seems to be coming from loop.run_until_complete.
The exception doesn't cause the program to crash, but it eventually bogs the program down so much that it gets stuck printing the exception.
How do I modify the current code to fix this?
run_until_complete only propagates the exception raised inside start_process. This means that if an exception happens at any point during start_process, and start_process doesn't catch it, run_until_complete(start_process()) will re-raise the same exception.
In your case the exception likely originally gets raised somewhere in restorer.transfer(). The call to gather returns the results of the coroutines, which includes raising an exception, if one occurred.
The exception doesn't cause the program to crash, but eventually bogs it down so much it gets stuck printing the exception. How do I modify the current code to fix this?
Ideally you would fix the cause of the exception - perhaps you are sending in too many requests at once, or you are using the GlacierRestorer API incorrectly. But some exceptions cannot be avoided, e.g. ones caused by a failing network. To ignore such exceptions, you can wrap the call to restorer.transfer in a separate coroutine:
async def safe_transfer(restorer, item, semaphore):
    try:
        return await restorer.transfer(item, semaphore)
    except socket.error as e:
        print(e)  # here you can choose not to print exceptions you
                  # don't care about if doing so bogs down the program
In start_process you would call this coroutine instead of restorer.transfer:
coros = []
for item in restore_items:
    coros.append(safe_transfer(restorer, item, semaphore))
await asyncio.gather(*coros)
Note that you don't need to call asyncio.ensure_future() to pass a coroutine to asyncio.gather; it will be called automatically.
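Alternatively, a sketch using gather's return_exceptions=True, which returns exceptions as results instead of raising them, so you can log and skip the failures yourself:

results = await asyncio.gather(*coros, return_exceptions=True)
for item, result in zip(restore_items, results):
    if isinstance(result, Exception):
        print(f"transfer of {item} failed: {result!r}")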
I have a pyspark streaming job doing something along these lines:
def printrddcount(rdd):
    c = rdd.count()
    print("{1}: Received an RDD of {0} rows".format("CANNOTCOUNT", datetime.now().isoformat()))
and then:
...
stream.foreachRDD(printrddcount)
From what I understand, the printrddcount function will be executed within the workers.
And, yes, I know it's a bad idea to do a print() within the worker. But that's not the point.
I'm pretty sure this very code was working until very recently.
(and it looked different, because the content of 'c' was actually printed in the print statement, rather than just computed and then thrown away...)
But now it seems that (all of a sudden?) rdd.count() has stopped working and is making my worker process die, saying:
UnpicklingError: NEWOBJ class argument has NULL tp_new
Full (well, Python-only) stack trace:
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/worker.py", line 163, in main
    func, profiler, deserializer, serializer = read_command(pickleSer, infile)
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/worker.py", line 54, in read_command
    command = serializer._read_with_length(file)
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/serializers.py", line 169, in _read_with_length
    return self.loads(obj)
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/serializers.py", line 454, in loads
    return pickle.loads(obj)
UnpicklingError: NEWOBJ class argument has NULL tp_new
The line where it fails is, indeed, the one saying rdd.count()
Any idea why rdd.count() would fail?
If something is supposed to be serialized, it should be the rdd, right?
Ok. I investigated a bit further.
There's nothing wrong with rdd.count()
The only thing wrong is that there is another transformation along the pipeline that somehow 'corrupts' (closes? invalidates? something along those lines) the rdd.
So, when it gets to the printrddcount function it cannot be serialized any more and gives the error.
The issue is within a code that looks like:
...
log = logging.getLogger(__name__)
...

def parse(parse_function):
    def parse_function_wrapper(event):
        try:
            log.info("parsing")
            new_event = parse_function(event)
        except ParsingFailedException as e:
            pass
        return new_event
    return parse_function_wrapper
and then:
stream = stream.map(parse(parse_event))
Now, the log.info call (I tried a lot of variations; initially the logging was inside an exception handler) is the one creating the issue.
Which leads me to say that, most probably, it is the logger object that cannot be serialized, for some reason.
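If that is the case, one common workaround (an assumption on my part, not something verified in the original code) is to acquire the logger inside the wrapped function, so that no logger object is captured in the closure Spark has to pickle:

import logging

def parse(parse_function):
    def parse_function_wrapper(event):
        # getting the logger here means it is looked up on the worker,
        # not pickled as part of the closure
        log = logging.getLogger(__name__)
        new_event = None
        try:
            log.info("parsing")
            new_event = parse_function(event)
        except ParsingFailedException:  # exception class from the original code
            pass
        return new_event
    return parse_function_wrapper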
Closing this thread myself, as it actually has nothing to do with rdd serialization, and most probably not even with pyspark.