I am trying to create a TCP server in Python with Tornado.
My handle_stream method looks like this:
async def handle_stream(self, stream, address):
    while True:
        try:
            stream.read_until_close(streaming_callback=self._on_read)
        except StreamClosedError:
            break
In the _on_read method I am trying to read and process the data, but whenever a new client connects to the server it fails with an AssertionError: Already reading error:
File "/.local/lib/python3.5/site-packages/tornado/iostream.py", line 525, in read_until_close
future = self._set_read_callback(callback)
File "/.local/lib/python3.5/site-packages/tornado/iostream.py", line 860, in _set_read_callback
assert self._read_future is None, "Already reading"
read_until_close asynchronously reads all data from the socket until it is closed. It must be called only once, but the while loop forces a second call, which is why you get the error:
on the first iteration, read_until_close sets streaming_callback and returns a Future that you can await or use later;
on the second iteration, read_until_close raises an exception since you already set a callback on the first iteration.
read_until_close returns a Future object, and you can await it to make things work:
async def handle_stream(self, stream, address):
    try:
        await stream.read_until_close(streaming_callback=self._on_read)
    except StreamClosedError:
        pass  # do something, e.g. log or clean up after the disconnect
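For context, here is a minimal end-to-end sketch of how this could look in a full TCPServer subclass. The class name and port are illustrative, and it assumes a Tornado release before 6.0, where streaming_callback is still supported:

from tornado.ioloop import IOLoop
from tornado.iostream import StreamClosedError
from tornado.tcpserver import TCPServer

class StreamingServer(TCPServer):
    async def handle_stream(self, stream, address):
        try:
            # Await the Future; _on_read is invoked for every chunk received.
            await stream.read_until_close(streaming_callback=self._on_read)
        except StreamClosedError:
            pass  # client disconnected

    def _on_read(self, data):
        print("received chunk:", data)

if __name__ == "__main__":
    server = StreamingServer()
    server.listen(8888)
    IOLoop.current().start()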
I have 2 programs.
The first (which could actually be written in any language, and therefore cannot be altered at all) looks like this:
#!/bin/env python3
import random

while True:
    s = input()                           # get input from stdin
    i = random.randint(0, len(s))         # process the input
    print(f"New output {i}", flush=True)  # prints processed input to stdout
It runs forever, reading something from stdin, processing it, and writing the result to stdout.
I am trying to write a second program in Python using the asyncio library.
It executes the first program as a subprocess, attempts to feed it input via its stdin, and retrieves the result from its stdout.
Here is my code so far:
#!/bin/env python3
import asyncio
import asyncio.subprocess as asp

async def get_output(process, input):
    out, err = await process.communicate(input)
    print(err)  # shows that the program crashes
    return out

    # other attempt to implement
    process.stdin.write(input)
    await process.stdin.drain()        # flush input buffer
    out = await process.stdout.read()  # program is stuck here
    return out

async def create_process(cmd):
    process = await asp.create_subprocess_exec(
        cmd, stdin=asp.PIPE, stdout=asp.PIPE, stderr=asp.PIPE)
    return process
async def run():
    process = await create_process("./test.py")
    out = await get_output(process, b"input #1")
    print(out)  # b'New output 4'
    out = await get_output(process, b"input #2")
    print(out)  # b''
    out = await get_output(process, b"input #3")
    print(out)  # b''
    out = await get_output(process, b"input #4")
    print(out)  # b''

async def main():
    await asyncio.gather(run())

asyncio.run(main())
I am struggling to implement the get_output function. It takes a bytestring as a parameter (as required by the input argument of the .communicate() method), writes it to the program's stdin, reads the response from its stdout, and returns it.
Right now, only the first call to get_output works properly. This is because the implementation of the .communicate() method calls the wait() method, effectively causing the program to terminate (which it isn't meant to). This can be verified by examining the value of err in the get_output function, which shows that the first program reached EOF. As a result, the other calls to get_output return an empty bytestring.
I have tried another way (even less successful), since the program gets stuck at the line out = await process.stdout.read(). I haven't figured out why.
My question is: how do I implement the get_output function to capture the program's output in (near) real time and keep it running? It doesn't have to use asyncio, but I have found this library to be the best one so far for that.
Thank you in advance!
If the first program is guaranteed to print only one line of output in response to the line of input that it has read, you can change await process.stdout.read() to await process.stdout.readline() and your second approach should work.
The reason it didn't work for you is that your run function has a bug: it never sends a newline to the child process. Because of that, the child process is stuck in input() and never responds. If you add \n at the end of the bytes literals you're passing to get_output, the code works correctly.
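Putting both fixes together, a sketch of the corrected helper might look like this (keeping the names and the create_process helper from the question, and assuming the child prints exactly one line of output per line of input):

async def get_output(process, input):
    process.stdin.write(input)
    await process.stdin.drain()             # flush the input buffer
    return await process.stdout.readline()  # read exactly one response line

async def run():
    process = await create_process("./test.py")
    for i in range(1, 5):
        out = await get_output(process, f"input #{i}\n".encode())
        print(out)  # e.g. b'New output 4\n'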
Example error handling function:
def read_file(filename):
    try:
        with open(filename, 'rb') as fd:
            x = fd.read()
    except FileNotFoundError as e:
        return e
    return x
I would call the function like so:
file = read_file("test.txt")
if file:
    # do something
Is there a more efficient/effective way to handle errors than using return multiple times?
It's very strange to catch e and then return it; why would a user of your function want the error to be returned instead of raised? Returning an error doesn't handle it, it just passes responsibility to the caller to handle the error. Letting the error be raised is a more natural way to make the caller responsible for handling it. So it makes sense not to catch the error at all:
def read_file(filename):
    with open(filename, 'rb') as fd:
        return fd.read()
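The caller then handles the error at the point where it has enough context to decide what to do, for example:

try:
    data = read_file("test.txt")
except FileNotFoundError:
    data = None  # or prompt for another filename, log the problem, etc.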
For your desired use-case where you want to write if file: to test whether the file existed, your read_file function could catch the error and return None, so that your if condition will be falsey:
def read_file(filename):
    try:
        with open(filename, 'rb') as fd:
            return fd.read()
    except FileNotFoundError:
        return None
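A caller can then test the return value directly:

data = read_file("test.txt")
if data:  # None (missing file) and b"" (empty file) are both falsey
    print(f"read {len(data)} bytes")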
However, this means that if the caller isn't aware that the function might return None, you'll get an error from using None where the file data was expected, instead of a FileNotFoundError, and it will be harder to identify the problem in your code.
If you do intend for your function to be called with a filename that might not exist, naming the function something like read_file_if_exists might be a better way to make clear that calling this function with a non-existent filename isn't considered an error.
Okay, so I created a DataStream object, which is just a wrapper class around asyncio.Queue. I am passing this around all over, and everything works fine up until the following functions. I am calling ensure_future to run two infinite loops: one that replicates the data in one DataStream object, and one that sends data to a websocket. Here is that code:
def start(self):
    # make sure that we set the event loop before we run our async requests
    print("Starting WebsocketProducer on ", self.host, self.port)
    RUNTIME_LOGGER.info(
        "Starting WebsocketProducer on %s:%i", self.host, self.port)
    # Get the event loop and add a task to it.
    asyncio.set_event_loop(self.loop)
    asyncio.get_event_loop().create_task(self._mirror_stream(self.data_stream))
    asyncio.ensure_future(self._serve(self.ssl_context))
And here is the method that is failing with the error 'Task was destroyed but it is pending!'. Keep in mind, if I do not include the lines with data_stream.get(), the function runs fine. I made sure the objects in both locations have the same memory address AND value for id(). If I print the data that comes from await self.data_stream.get(), I get the correct data. However, after that it seems to just return and break. Here is the code:
async def _mirror_stream(self):
    while True:
        stream_length = self.data_stream.length
        try:
            if stream_length > 1:
                for _ in range(0, stream_length):
                    data = await self.data_stream.get()
            else:
                data = await self.data_stream.get()
        except Exception as e:
            print(str(e))
        # If the data is null, keep the last known value
        if self._is_json_serializable(data) and data is not None:
            self.payload = json.dumps(data)
        else:
            RUNTIME_LOGGER.warning(
                "Mirroring stream encountered a Null payload in WebsocketProducer!")
        await asyncio.sleep(self.poll_rate)
The issue has been resolved by implementing my own async Queue, utilizing the normal queue.Queue object. For some reason the application would only work if I would 'await' queue.get(), even though it wasn't an asyncio.Queue object... Not entirely sure why this behavior occurred; however, the application is running well and still performing as if the Queue were from the asyncio lib. Thanks to those who looked!
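The wrapper itself isn't shown above, but a minimal sketch of an awaitable get() built on top of a plain queue.Queue might look something like this (class and attribute names are hypothetical):

import asyncio
import queue

class AsyncQueueWrapper:
    """Hypothetical stand-in for DataStream: awaitable get() over queue.Queue."""
    def __init__(self, poll_interval=0.01):
        self._queue = queue.Queue()
        self._poll_interval = poll_interval

    def put(self, item):
        self._queue.put(item)

    @property
    def length(self):
        return self._queue.qsize()

    async def get(self):
        # Poll the thread-safe queue without blocking the event loop.
        while True:
            try:
                return self._queue.get_nowait()
            except queue.Empty:
                await asyncio.sleep(self._poll_interval)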
Hello, I have the following for my async loop:
async def start_process(restore_items, args, loop):
    with GlacierRestorer(args.temp_dir, args.error_log_bucket, loop) as restorer:
        restorer.initiate_restore_all(restore_items)
        tasks = []
        semaphore = asyncio.BoundedSemaphore(4)
        for item in restore_items:
            tasks.append(asyncio.ensure_future(restorer.transfer(item, semaphore)))
        await asyncio.gather(*tasks)
def main():
    args = get_args()
    restore_items = get_restore_items(args)
    for item in restore_items:
        print(item.source, ':', item.destination)
    try:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(start_process(restore_items, args, loop))
    except KeyboardInterrupt:
        pass
As my job and files get larger, I see that I keep getting a
socket.send() exception
After reading the documentation, it seems to be coming from loop.run_until_complete.
The exception doesn't cause the program to crash, but it eventually bogs it down so much that it gets stuck printing the exception.
How do I modify the current code to fix this?
run_until_complete only propagates the exception raised inside start_process. This means that if an exception happens at any point during start_process, and start_process doesn't catch it, run_until_complete(start_process()) will re-raise the same exception.
In your case the exception likely originally gets raised somewhere in restorer.transfer(). The call to gather returns the results of the coroutines, which includes raising an exception, if one occurred.
The exception doesn't cause the program to crash, but eventually bogs it down so much it gets stuck printing the exception. How do I modify the current code to fix this?
Ideally you would fix the cause of the exception - perhaps you are sending in too many requests at once, or you are using the GlacierRestorer API incorrectly. But some exceptions cannot be avoided, e.g. ones caused by a failing network. To ignore such exceptions, you can wrap the call to restorer.transfer in a separate coroutine:
async def safe_transfer(restorer, item, semaphore):
    try:
        return await restorer.transfer(item, semaphore)
    except socket.error as e:
        print(e)  # here you can choose not to print exceptions you
                  # don't care about if doing so bogs down the program
In start_process you would call this coroutine instead of restorer.transfer:
coros = []
for item in restore_items:
    coros.append(safe_transfer(restorer, item, semaphore))
await asyncio.gather(*coros)
Note that you don't need to call asyncio.ensure_future() to pass a coroutine to asyncio.gather; it will be called automatically.
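If you would rather not wrap each call, gather itself can collect exceptions instead of re-raising them when called with return_exceptions=True; a brief alternative sketch (not part of the original code above):

results = await asyncio.gather(*coros, return_exceptions=True)
for item, result in zip(restore_items, results):
    if isinstance(result, Exception):
        print(f"transfer failed for {item}: {result}")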
My understanding from the documentation is that asyncio.Tasks, as an asyncio.Future subclass, will store exceptions raised in them and they can be retrieved at my leisure.
However, in this sample code, the exception is raised immediately:
import asyncio

async def bad_task():
    raise Exception()

async def test():
    loop = asyncio.get_event_loop()
    task = loop.create_task(bad_task())
    await task
    # I would expect to get here
    exp = task.exception()
    # but we never do because the function exits on line 3

loop = asyncio.get_event_loop()
loop.run_until_complete(test())
loop.close()
Example output (Python 3.6.5):
python3 ./test.py
Traceback (most recent call last):
  File "./test.py", line 15, in <module>
    loop.run_until_complete(test())
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
    return future.result()
  File "./test.py", line 9, in test
    await task
  File "./test.py", line 4, in bad_task
    raise Exception()
Exception
Is this a quirk of creating & calling tasks when already within async code?
await will raise any exception thrown by the task, because it's meant to make asynchronous code look almost exactly like synchronous code. If you want to catch them, you can use a normal try...except clause.
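For example, a small sketch of that try...except approach, reusing bad_task from the question:

async def test():
    task = asyncio.get_event_loop().create_task(bad_task())
    try:
        await task
    except Exception as e:
        print("bad_task raised:", e)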
As Matti explained, exceptions raised by a coroutine are propagated to the awaiting site. This is intentional, as it ensures that errors do not pass silently by default. However, if one needs to do so, it is definitely possible to await a task's completion without immediately accessing its result/exception.
Here is a simple and efficient way to do so, by using a small intermediate Future:
async def test():
    loop = asyncio.get_event_loop()
    task = loop.create_task(bad_task())
    task_done = loop.create_future()  # you could also use asyncio.Event
    # Arrange for task_done to complete once task completes.
    task.add_done_callback(task_done.set_result)
    # Wait for the task to complete. Since we're not obtaining its
    # result, this won't raise no matter what bad_task() does...
    await task_done
    # ...and this will work as expected.
    exp = task.exception()