How to make this code non-blocking with asyncio?

I'm trying to write non-blocking code that lets me create multiple clients to make requests to a server. However, I can't create more than one client at a time!
CLIENT.PY
import asyncio

PYTHONASYNCIODEBUG = 1

# ECHO CLIENT PROTOCOL
async def tcp_echo_client(message, loop):
    # Send request to server
    reader, writer = await asyncio.open_connection('127.0.0.1', 8888, loop=loop)
    print('Send: %r' % message)
    writer.write(message.encode())
    # Receive the information
    if message == '1':
        await asyncio.Task(read_position(reader))
    else:
        await asyncio.ensure_future(read_server(reader))
    # Close the connection
    print('Close the socket')
    writer.close()
# ASYNCIO COROUTINES TO REQUEST INFORMATION
async def read_server(reader):
    server_message = await reader.read()
    print(type(server_message))
    print('Received: %r' % server_message.decode())

async def read_position(reader):
    while True:
        print("I'm Here")
        server_message = await reader.read(50)
        position = server_message.split()
        print(position)
        print(type(position))
        print('Received: %r' % server_message.decode())
# FUNCTION THAT CREATES THE CLIENT
def main(message):
    '''This function creates the client'''
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(tcp_echo_client(message, loop))
    finally:
        pass

# This is how I create a new client
if __name__ == '__main__':
    message = '2'
    main(message)
    message = '3'
    main(message)
I want to create multiple clients, but the code blocks in the first main() when I send the message '1'. I don't know why the code blocks if I'm using asyncio. My server accepts multiple connections, because if I run this code separately I can do everything. The purpose of this is to create a client every time I click a button in my Kivy app, to send a request to the server.
This problem exists because I want to control a robot and do a lot of things simultaneously, but with blocking code I can't, because I get stuck.
Maybe it's a stupid question, but I only started coding two months ago and I haven't had any help.

Your main function doesn't "create the client", as its docstring claims. It creates the client and runs it to completion. This is why multiple invocations of main() result in serial execution. main() being a regular function, that's exactly what you'd expect, and asyncio doesn't change that. It's useful to remember that asyncio is single-threaded, so it can't do some "run in the background" magic unless you cooperate.
To cooperate, you need to tell asyncio to start both clients, and then await them in parallel:
async def main(messages):
    loop = asyncio.get_event_loop()
    # launch the coroutines in parallel
    tasks = [loop.create_task(tcp_echo_client(msg, loop)) for msg in messages]
    # don't exit until all of them are done
    await asyncio.gather(*tasks)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(['2', '3']))
Note that when awaiting your coroutines, you don't need to wrap them in asyncio.ensure_future() or asyncio.Task() - asyncio will handle that automatically. await read_position(reader) and await read_server(reader) would work just fine and have the same meaning as the longer versions.
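For example, the receive branch of tcp_echo_client can then simply read:

if message == '1':
    await read_position(reader)
else:
    await read_server(reader)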

Related

Asyncio with two loops, best practice

I have two infinite loops. Their processing is lightweight. I don't want them to block each other. Is using await asyncio.sleep(0) a good practice?
This is my code
import asyncio

async def loop1():
    while True:
        print("loop1")
        # pull data from kafka
        await asyncio.sleep(0)

async def loop2():
    while True:
        print("loop2")
        # send data to all clients using asyncio stream api
        await asyncio.sleep(0)

async def main():
    await asyncio.gather(loop1(), loop2())

asyncio.run(main())
Two (or many more) asyncio tasks will not block each other unless one of the tasks has a long synchronous operation inside.
Both of your tasks contain only network operations (Kafka and API requests), so neither of them will block the other.
When should you use asyncio.sleep(0)?
Imagine you have some long synchronous operation - calculations. Calculation is not an I/O operation.
This example is more of a good-to-know: if you have such operations in a real app, you should move them into loop.run_in_executor and use concurrent.futures.ProcessPoolExecutor as the executor. The example:
import asyncio

async def long_calc():
    """
    Some heavy CPU-bound task.
    Better to make it a sync function and move it to a ProcessPoolExecutor.
    """
    s = 0
    for _ in range(100):
        for i in range(1_000_000):
            s += i**2
        # comment out the next line and watch the result:
        # you'll get no "Working" messages,
        # that's why sleep(0.0) is used here
        await asyncio.sleep(0.0)
    return s

async def pinger():
    """Task which shows that the app is alive"""
    n = 0
    while True:
        await asyncio.sleep(1)
        print(f"Working {n}")
        n += 1

async def amain():
    """Main async function in this app"""
    # run pinger with asyncio.create_task since we want it
    # to run in parallel with long_calc and
    # we do not want to wait until it is finished.
    # If it were a thread, it would be called a daemon thread.
    asyncio.create_task(pinger())
    # await the result of the long task
    s = await long_calc()
    print(f"Done: {s}")

if __name__ == '__main__':
    asyncio.run(amain())
If you need me to provide you with a run_in_executor example - let me know.
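For reference, a minimal sketch of such a run_in_executor setup, assuming long_calc is rewritten as a plain sync function (here called long_calc_sync, a name chosen for this sketch) so it can be shipped to a worker process:

import asyncio
from concurrent.futures import ProcessPoolExecutor

def long_calc_sync():
    """Plain sync function - safe to run in another process."""
    s = 0
    for _ in range(100):
        for i in range(1_000_000):
            s += i ** 2
    return s

async def amain():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # The CPU-bound work runs in a worker process,
        # so the event loop stays free for other tasks.
        s = await loop.run_in_executor(pool, long_calc_sync)
    print(f"Done: {s}")

if __name__ == '__main__':
    asyncio.run(amain())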

Real time stdout redirect from a python function call to an async method

So I have a heavy, time-consuming function call, my_heavy_function, and I need to redirect its output to the web interface that calls it. I have a method to send messages to the web interface; let's call that method async push_message_to_user().
Basically, it's something like:
import time

def my_heavy_function():
    time_on = 0
    for i in range(20):
        time.sleep(1)
        print(f'time spend {time_on}')
        time_on = time_on + 1

async def push_message_to_user(message: str):
    # some lib implementation
    pass

if __name__ == "__main__":
    my_heavy_function()  # how to push prints to the UI?
Maybe there is a way of passing my_heavy_function(stdout_obj) and using that "stdout object" (a StringIO) to do something like stdout_object.push(f'time spend {time_on}'). I could do that, but what I can't do is change my_heavy_function() to an async version to call push_message_to_user() directly instead of print (it's used by other non-async routines).
What I would want is something like this (pseudocode):
with contextlib.redirect_output(my_prints):
    my_heavy_function()
    while my_prints.readable():
        # realtime push
        await push_message_to_user(my_prints)
Thanks!
Thanks to the comment from @HTF, I finally managed to solve the problem with janus. I copied the example from the repo and modified it to receive a variable number of messages (because I don't know how many iterations my_heavy_function() will use):
import asyncio
import janus
import time

def my_heavy_function(sync_q):
    for i in range(10):
        sync_q.put(i)
        time.sleep(1)
    sync_q.put('end')  # is there a more elegant way to do this?
    sync_q.join()

async def async_coro(async_q):
    while True:
        val = await async_q.get()
        print(f'arrived {val}')
        # send val to UI
        # await push_message_to_user(val)
        async_q.task_done()
        if val == 'end':
            break

async def main():
    queue = janus.Queue()
    loop = asyncio.get_running_loop()
    fut = loop.run_in_executor(None, my_heavy_function, queue.sync_q)
    await async_coro(queue.async_q)
    await fut
    queue.close()
    await queue.wait_closed()

asyncio.run(main())
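A possible extension of this idea, as a sketch: if my_heavy_function must stay a plain print-based function (as in the original question), its output can be captured with contextlib.redirect_stdout and a small file-like adapter that feeds the sync side of the janus queue. QueueWriter and run_captured below are hypothetical names invented for this sketch:

import contextlib

class QueueWriter:
    """Hypothetical file-like adapter: forwards each printed line to the sync queue."""
    def __init__(self, sync_q):
        self.sync_q = sync_q

    def write(self, text):
        if text.strip():  # print() also writes bare newlines; skip them
            self.sync_q.put(text)

    def flush(self):
        pass

def run_captured(sync_q):
    # run the untouched, print-based my_heavy_function() from the question
    with contextlib.redirect_stdout(QueueWriter(sync_q)):
        my_heavy_function()
    sync_q.put('end')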

running asyncio task concurrently and in background

So apologies, because I've seen this question asked a bunch of times, but having looked through all of the questions, none seem to fix my problem. My code looks like this:
TDSession = TDClient()
TDSession.grab_refresh_token()
q = queue.Queue(10)

asyncio.run(listener.startStreaming(TDSession, q))

while True:
    message = q.get()
    print('oh shoot!')
    print(message)
    orderEntry.placeOrder(TDSession=TDSession)
I have tried doing asyncio.create_task(listener.startStreaming(TDSession, q)), but the problem is I get

RuntimeError: no running event loop
sys:1: RuntimeWarning: coroutine 'startStreaming' was never awaited

which confused me, because this seemed to work in Can an asyncio event loop run in the background without suspending the Python interpreter?, which is what I'm trying to do.
The listener.startStreaming function looks like this:
async def startStreaming(TDSession, q):
    streamingClient = TDSession.create_streaming_session()
    streamingClient.account_activity()
    await streamingClient.build_pipeline()
    while True:
        message = await streamingClient.start_pipeline()
        message = parseMessage(message)
        if message != None:
            print('putting message into q')
            print(dict(message))
            q.put(message)
Is there a way to make this work where I can run the listener in the background?
EDIT: I've tried this as well, but it only runs the consumer function instead of running both at the same time:
TDSession.grab_refresh_token()
q = queue.Queue(10)

loop = asyncio.get_event_loop()
loop.create_task(listener.startStreaming(TDSession, q))
loop.create_task(consumer(TDSession, q))
loop.run_forever()
As you found out, the asyncio.run function runs the given coroutine until it is complete. In other words, it waits for the coroutine returned by listener.startStreaming to finish before proceeding to the next line.
Using asyncio.create_task, on the other hand, requires the caller to already be running inside an asyncio loop. From the docs:
The task is executed in the loop returned by get_running_loop(), RuntimeError is raised if there is no running loop in current thread.
What you need is to combine the two, by creating a function that's async and then calling create_task inside that async function.
For example:
async def main():
    TDSession = TDClient()
    TDSession.grab_refresh_token()
    q = asyncio.Queue(10)

    streaming_task = asyncio.create_task(listener.startStreaming(TDSession, q))
    while True:
        message = await q.get()
        print('oh shoot!')
        print(message)
        orderEntry.placeOrder(TDSession=TDSession)
    await streaming_task  # If you want to wait for `startStreaming` to complete after the while loop

if __name__ == '__main__':
    asyncio.run(main())
Edit: From your comment I realized you want to use the producer-consumer pattern, so I also updated the example above to use asyncio.Queue instead of queue.Queue, in order for the thread to be able to jump between the producer (startStreaming) and the consumer (the while loop).
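A minimal, self-contained sketch of that hand-off (with hypothetical producer and consumer coroutines standing in for startStreaming and the order loop) shows why the queue type matters: a blocking queue.Queue.get() would freeze the single-threaded event loop, while awaiting an asyncio.Queue lets the producer run whenever the consumer is waiting:

import asyncio

async def producer(q):
    for i in range(3):
        await asyncio.sleep(1)   # stands in for awaiting the stream
        await q.put(f'message {i}')

async def consumer(q):
    for _ in range(3):
        message = await q.get()  # yields to the producer while the queue is empty
        print('got', message)

async def main():
    q = asyncio.Queue(10)
    await asyncio.gather(producer(q), consumer(q))

asyncio.run(main())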

Python asyncio Protocol behaviour with multiple clients and infinite loop

I'm having difficulty understanding the behaviour of my altered echo server, which attempts to take advantage of python 3's asyncio module.
Essentially I have an infinite loop (let's say I want to stream some data from the server to the client indefinitely while the connection is open), e.g. MyServer.py:
#!/usr/bin/python3
import asyncio
import os
import time

class MyProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('Connection from {}'.format(peername))
        self.transport = transport

    def connection_lost(self, exc):
        asyncio.get_event_loop().stop()

    def data_received(self, data):
        i = 0
        while True:
            self.transport.write(b'>> %i' % i)
            time.sleep(2)
            i += 1

loop = asyncio.get_event_loop()
coro = loop.create_server(MyProtocol,
                          os.environ.get('MY_SERVICE_ADDRESS', 'localhost'),
                          os.environ.get('MY_SERVICE_PORT', 8100))
server = loop.run_until_complete(coro)
try:
    loop.run_forever()
except:
    loop.run_until_complete(server.wait_closed())
finally:
    loop.close()
Next when I connect with nc ::1 8100 and send some text (e.g. "testing") I get the following:
user@machine$ nc ::1 8100
*** Connection from ('::1', 58503, 0, 0) ***
testing
>> 1
>> 2
>> 3
^C
Now when I attempt to connect using nc again, I do not get any welcome message and after I attempt to send some new text to the server I get an endless stream of the following error:
user@machine$ nc ::1 8100
Is there anybody out there?
socket.send() raised exception
socket.send() raised exception
...
^C
Just to add salt to the wound, the socket.send() raised exception message continues to spam my terminal until I kill the python server process...
As I'm new to web technologies (been a desktop dinosaur for far too long!), I'm not sure why I am getting the above behaviour, and I haven't got a clue how to produce the intended behaviour, which loosely looks like this:
1. server starts
2. client 1 connects to server
3. server sends welcome message to client
4. client 1 sends an arbitrary message
5. server sends messages back to client 1 for as long as the client is connected
6. client 1 disconnects (let's say the cable is pulled out)
7. client 2 connects to server
8. Repeat steps 3-6 for client 2
Any enlightenment would be extremely welcome!
There are multiple problems with the code.
First and foremost, data_received never returns. At the transport/protocol level, asyncio programming is single-threaded and callback-based. Application code is scattered across callbacks like data_received, and the event loop runs the show, monitoring file descriptors and invoking the callbacks as needed. Each callback is only allowed to perform a short calculation, invoke methods on transport, and arrange for further callbacks to be executed. What the callback cannot do is take a lot of time to complete or block waiting for something. A while loop that never exits is especially bad because it doesn't allow the event loop to run at all.
This is why the code only spits out exceptions once the client disconnects: connection_lost is never called. It's supposed to be called by the event loop, and the never-returning data_received is not giving the event loop a chance to resume. With the event loop blocked, the program is unable to respond to other clients, and data_received keeps trying to send data to the disconnected client, and logs its failure to do so.
The correct way to express the idea can look like this:
def data_received(self, data):
    self.i = 0
    loop.call_soon(self.write_to_client)

def write_to_client(self):
    self.transport.write(b'>> %i' % self.i)
    self.i += 1
    loop.call_later(2, self.write_to_client)
Note how both data_received and write_to_client do very little work and quickly return. No calls to time.sleep(), and definitely no infinite loops - the "loop" is hidden inside the kind-of-recursive call to write_to_client.
This change reveals the second problem in the code. Its MyProtocol.connection_lost stops the whole event loop and exits the program. This renders the program unable to respond to the second client. The fix could be to replace loop.stop() with setting a flag in connection_lost:
def data_received(self, data):
    self._done = False
    self.i = 0
    loop.call_soon(self.write_to_client)

def write_to_client(self):
    if self._done:
        return
    self.transport.write(b'>> %i' % self.i)
    self.i += 1
    loop.call_later(2, self.write_to_client)

def connection_lost(self, exc):
    self._done = True
This allows multiple clients to connect.
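Assembled with the original connection_made, a complete runnable version of the fixed callback-based server could look like this (a sketch combining the snippets above):

import asyncio
import os

class MyProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        print('Connection from {}'.format(transport.get_extra_info('peername')))
        self.transport = transport
        self._done = False

    def connection_lost(self, exc):
        self._done = True  # stop writing to this client; don't stop the loop

    def data_received(self, data):
        self.i = 0
        loop.call_soon(self.write_to_client)

    def write_to_client(self):
        if self._done:
            return
        self.transport.write(b'>> %i' % self.i)
        self.i += 1
        loop.call_later(2, self.write_to_client)

loop = asyncio.get_event_loop()
coro = loop.create_server(MyProtocol,
                          os.environ.get('MY_SERVICE_ADDRESS', 'localhost'),
                          os.environ.get('MY_SERVICE_PORT', 8100))
server = loop.run_until_complete(coro)
loop.run_forever()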
Unrelated to the above issues, the callback-based code is a bit tiresome to write, especially when taking into account complicated code paths and exception handling. (Imagine trying to express nested loops with callbacks, or propagating an exception occurring inside a deeply embedded callback.) asyncio supports coroutine-based streams as an alternative to callback-based transports and protocols.
Coroutines allow writing natural-looking code that contains loops and looks like it contains blocking calls, which under the hood are converted into suspension points that enable the event loop to resume. Using streams, the code from the question would look like this:
async def talk_to_client(reader, writer):
    peername = writer.get_extra_info('peername')
    print('Connection from {}'.format(peername))
    data = await reader.read(1024)
    i = 0
    while True:
        writer.write(b'>> %i' % i)
        await writer.drain()
        await asyncio.sleep(2)
        i += 1

loop = asyncio.get_event_loop()
coro = asyncio.start_server(talk_to_client,
                            os.environ.get('MY_SERVICE_ADDRESS', 'localhost'),
                            os.environ.get('MY_SERVICE_PORT', 8100))
server = loop.run_until_complete(coro)
loop.run_forever()
talk_to_client looks very much like the original implementation of data_received, but without the drawbacks. At each point where it uses await the event loop is resumed if the data is not available. time.sleep(n) is replaced with await asyncio.sleep(n) which does the equivalent of loop.call_later(n, <resume current coroutine>). Awaiting writer.drain() ensures that the coroutine pauses when the peer cannot process the output it gets, and that it raises an exception when the peer has disconnected.

Asynchronously writing to console from stdin and other sources

I am trying to write some kind of renderer for the command line that should be able to print data from stdin and from another data source, using asyncio and blessed, which is an improved version of python-blessings.
Here is what I have so far:
import asyncio
from blessed import Terminal

@asyncio.coroutine
def render(term):
    while True:
        received = yield
        if received:
            print(term.bold + received + term.normal)

async def ping(renderer):
    while True:
        renderer.send('ping')
        await asyncio.sleep(1)

async def input_reader(term, renderer):
    while True:
        with term.cbreak():
            val = term.inkey()
            if val.is_sequence:
                renderer.send("got sequence: {0}.".format((str(val), val.name, val.code)))
            elif val:
                renderer.send("got {0}.".format(val))

async def client():
    term = Terminal()
    renderer = render(term)
    render_task = asyncio.ensure_future(renderer)
    pinger = asyncio.ensure_future(ping(renderer))
    inputter = asyncio.ensure_future(input_reader(term, renderer))
    done, pending = await asyncio.wait(
        [pinger, inputter, renderer],
        return_when=asyncio.FIRST_COMPLETED,
    )
    for task in pending:
        task.cancel()

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(client())
    asyncio.get_event_loop().run_forever()
asyncio.get_event_loop().run_forever()
For learning and testing purposes there is just a dumb ping that sends 'ping' each second, and another routine that should grab key inputs and also send them to my renderer.
But ping only appears once in the command line using this code, while input_reader works as expected. When I replace input_reader with a pong coroutine similar to ping, everything is fine.
This is how it looks when typing 'pong', even though it takes ten seconds to type it:
$ python async_term.py
ping
got p.
got o.
got n.
got g.
It seems like blessed is not built to work correctly with asyncio: inkey() is a blocking method. This will block any other coroutine.
You can use a loop with kbhit() and await asyncio.sleep() to yield control to other coroutines - but this is not a clean asyncio solution.
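Such a workaround could look like the following sketch, assuming blessed's kbhit() and inkey() with their timeout arguments:

async def input_reader(term, renderer):
    with term.cbreak():
        while True:
            # kbhit(timeout=0) polls the keyboard without blocking
            if term.kbhit(timeout=0):
                val = term.inkey(timeout=0)
                if val.is_sequence:
                    renderer.send("got sequence: {0}.".format((str(val), val.name, val.code)))
                elif val:
                    renderer.send("got {0}.".format(val))
            # yield control so ping() and render() keep running
            await asyncio.sleep(0.05)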
