Python 3.7 asyncio: start a webserver (FastAPI) and an aio_pika consumer

In my project I am trying to start a REST API (built with FastAPI and run with Hypercorn), and on startup I also want to start a RabbitMQ consumer (with aio_pika):
aio_pika offers a robust connection which automatically reconnects on failure. If I run the code below with hypercorn app:app, the consumer and the REST interface start correctly, but the reconnect from aio_pika no longer works. How can I achieve a production-stable RabbitMQ consumer and REST API in two different processes (or threads?)? My Python version is 3.7. Please note that I am actually a Java and Go developer, in case my approach is not the Python way :-)
import asyncio

import aio_pika
from fastapi import FastAPI

app = FastAPI()

@app.on_event("startup")
def startup():
    loop = asyncio.new_event_loop()
    asyncio.ensure_future(main(loop))

@app.get("/")
def read_root():
    return {"Hello": "World"}

async def main(loop):
    connection = await aio_pika.connect_robust(
        "amqp://guest:guest@127.0.0.1/", loop=loop
    )
    async with connection:
        queue_name = "test_queue"
        # Creating channel
        channel = await connection.channel()  # type: aio_pika.Channel
        # Declaring queue
        queue = await channel.declare_queue(
            queue_name,
            auto_delete=True
        )  # type: aio_pika.Queue
        async with queue.iterator() as queue_iter:
            # Cancel consuming after __aexit__
            async for message in queue_iter:
                async with message.process():
                    print(message.body)
                    if queue.name in message.body.decode():
                        break

With the help of @pgjones I managed to change the consumer startup to:

@app.on_event("startup")
def startup():
    loop = asyncio.get_event_loop()
    asyncio.ensure_future(main(loop))

Starting the job with asyncio.ensure_future and passing the current (rather than a new) event loop as an argument solved the issue.
It would be interesting if somebody has a different/better approach.
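One alternative, since FastAPI allows async startup hooks: make the hook itself a coroutine and schedule the consumer with asyncio.create_task, so you never touch the loop object at all. A minimal sketch (assuming the same main consumer as above, minus the explicit loop argument):

import asyncio

import aio_pika
from fastapi import FastAPI

app = FastAPI()

@app.on_event("startup")
async def startup():
    # The hook runs inside Hypercorn's already-running event loop,
    # so create_task schedules the consumer on that same loop.
    # Keep a reference to the task if you need to cancel it on shutdown.
    asyncio.create_task(main())

async def main():
    # connect_robust picks up the running loop by itself and keeps
    # its automatic reconnection behaviour.
    connection = await aio_pika.connect_robust("amqp://guest:guest@127.0.0.1/")
    ...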
Thanks!

Related

Using asyncio for doing A/B testing in Python

Let's say there's some API that's already running in production, and you've created another API which you want to A/B test using the incoming requests that hit the production API. (I am aware that people do traffic splits by keeping two different API versions around for A/B testing, etc.) Now I was wondering: is it possible to do something like this?
As soon as you get the incoming request for your production API, you fire off an async request to your new API, carry on with the rest of the production API's code, and just before returning the final response to the caller you check whether the result of that async task is available. If it is, you return it instead of the production API's response.
I am wondering what the best way to do something like this is. Do we write a decorator for it, or something else? I am a bit worried about the many edge cases that can come up if we use async here. Does anyone have any pointers on making the code, or the whole approach, better?
Thanks for your time!
Some pseudo-code for the approach above:

import asyncio

def call_old_api():
    pass

async def call_new_api():
    pass

async def main():
    task = asyncio.Task(call_new_api())
    oldResp = call_old_api()
    resp = await task
    if task.done():
        return resp
    else:
        task.cancel()  # maybe
        return oldResp

asyncio.run(main())
You can't just execute call_old_api() inside asyncio's coroutine. There's a detailed explanation why here. Please ensure you understand it, because depending on how your server works you may not be able to do what you want (running an async API on a sync server defeats the point of writing async code, for example).
If you understand what you're doing and you have an async server, you can call the old sync API in a thread and use a task to run the new API:
task = asyncio.Task(call_new_api())
oldResp = await in_thread(call_old_api)  # pass the function itself; in_thread runs it in an executor
if task.done():
    return task.result()  # keep in mind that task.result() may raise if the new API request failed, but that's probably ok for you
else:
    task.cancel()  # yes, but you should take care of the cancelling, see https://stackoverflow.com/a/43810272/1113207
    return oldResp
I think you can go even further: instead of always waiting for the old API to complete, you can run both APIs concurrently and return whichever finishes first (in case the new API works faster than the old one). With all the checks and suggestions above, it should look something like this:
import asyncio
import random
import time
from contextlib import suppress

def call_old_api():
    time.sleep(random.randint(0, 2))
    return "OLD"

async def call_new_api():
    await asyncio.sleep(random.randint(0, 2))
    return "NEW"

async def in_thread(func):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, func)

async def ensure_cancelled(task):
    task.cancel()
    with suppress(asyncio.CancelledError):
        await task

async def main():
    old_api_task = asyncio.Task(in_thread(call_old_api))
    new_api_task = asyncio.Task(call_new_api())
    done, pending = await asyncio.wait(
        [old_api_task, new_api_task], return_when=asyncio.FIRST_COMPLETED
    )
    if pending:
        for task in pending:
            await ensure_cancelled(task)
    finished_task = done.pop()
    res = finished_task.result()
    print(res)

asyncio.run(main())
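As a side note, on Python 3.9+ the hand-rolled in_thread helper can be replaced with the standard-library asyncio.to_thread, which performs the same run-in-executor dance internally:

# Python 3.9+: run the sync callable in the default thread pool
old_api_task = asyncio.create_task(asyncio.to_thread(call_old_api))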

Two coroutines running simultaneously using asyncio

I'm trying to make a program that reads in data as well as sends and receives data from a server through a websocket. The goal is to create synchronized lamps: there are two client lamps and one server, and when one of the lamps changes state it sends a request to the server and the server updates the other lamp. I'm currently stuck on the client code. I can establish a websocket connection to the server, read data from and send data to the server, and read in light data. The problem is running both of these tasks simultaneously. I'd like to do it asynchronously to avoid race conditions. I'm using Python 3.8 and asyncio.
Here is my websocket client code so far:
async def init_connection(message):
    global CONNECTION_OPEN
    global CLIENT_WS
    uri = WS_URI
    async with websockets.connect(uri) as websocket:
        CONNECTION_OPEN = True
        CLIENT_WS = websocket
        # send init message
        await websocket.send(message)
        while CONNECTION_OPEN:
            await handleMessages(websocket, message)
        await websocket.send(json.dumps({'type': MessageType.Close.name, 'message': USERNAME}))
        await websocket.close()
Here is my data-reading code so far:
async def calculate_idle(t):
    global STATE
    global prevColor
    x_arr = []
    y_arr = []
    z_arr = []
    while t >= 0:
        x, y, z = lis3dh.acceleration
        print("Current colors")
        print(accel_to_color(x, y, z))
        x_arr.append(x)
        y_arr.append(y)
        z_arr.append(z)
        newColor = accel_to_color(x, y, z)
        # remember prev color
        do_fade(prevColor, newColor)
        #strip.fill((int(a_x), int(a_y), int(a_z), 0))
        #strip.show()
        prevColor = newColor
        time.sleep(.2)
        t -= .2
    is_idle = is_lamp_idle(np.std(x_arr), np.std(y_arr), np.std(z_arr))
    if is_idle and STATE == "NOT IDLE" and CONNECTION_OPEN:
        STATE = "IDLE"
        print("Sending color")
        await asyncio.sleep(1)
    elif is_idle and CONNECTION_OPEN:
        # Check for data
        STATE = "IDLE"
        print("Receiving data")
        await asyncio.sleep(1)
    elif is_idle and not CONNECTION_OPEN:
        print("Idle and not connected")
        rainbow_cycle(0.001)  # rainbow cycle with 1ms delay per step
        await asyncio.sleep(1)
    else:
        STATE = "NOT IDLE"
        await asyncio.sleep(1)
        print("Is not idle")
Here is the code that is supposed to tie them together:
async def main():
    message = json.dumps({'type': "authentication", 'payload': {
        'username': 'user1', 'secret': SHARED_SECRET}})
    loop = asyncio.get_event_loop()
    start_light = asyncio.create_task(calculate_idle(3))
    await asyncio.gather(init_connection(message), start_light)

asyncio.run(main())
There are other functions, but the premise is that there's a websocket connection sending and receiving data, and another routine reading in light data. I also need to be able to read and set the current state of the lights, which is why I was using global variables. Currently, it reads the lights until it hits an await asyncio.sleep(1) in calculate_idle, then switches to the websocket code and hangs receiving data from the server. Ideally, it would alternate between reading the current state and checking for websocket messages, and if the state changes it would then send a websocket message.
How can I run both of these routines asynchronously and share data between them? Any help is appreciated!
Thanks to user4815162342's comments for helping narrow down the issue. My calculate_idle didn't have a while True loop, and after changing time.sleep(.2) to await asyncio.sleep(.2) I was able to read data from the server and the lights at the same time.
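For reference, a sketch of what the corrected loop structure looks like (placeholders stand in for the full sampling and idle-checking logic from the original function):

async def calculate_idle(t):
    while True:                      # keep sampling forever instead of returning after one window
        remaining = t
        while remaining >= 0:
            ...                      # read accelerometer, update colors
            await asyncio.sleep(.2)  # yields control so the websocket coroutine can run
            remaining -= .2
        ...                          # idle checks as before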

Run actions on Tornado main loop, after it starts

I'm creating a Python 3 Tornado web server that listens to an MQTT broker and, whenever it receives a new message, broadcasts it to the connected browsers through websockets. However, Tornado doesn't seem to like calls to its API from a thread other than the one running IOLoop.current(), and I can't figure out another solution...
I've already tried to write some code. I've put the whole MQTT client (in this case called the PMCU client) on a separate thread, which loops and listens for MQTT notifications.
import threading

import tornado.web
import tornado.websocket
from tornado.ioloop import IOLoop

websocket_clients = []

def on_pmcu_data(data):
    for websocket_client in websocket_clients:
        print("Sending websocket message")
        websocket_client.write_message(data)  # Here it gets stuck!
        print("Sent")

class WebSocketHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        websocket_clients.append(self)

    def on_close(self):
        websocket_clients.remove(self)

def make_app():
    return tornado.web.Application([
        (r'/ws', WebSocketHandler)
    ])

if __name__ == "__main__":
    main_loop = IOLoop().current()
    pmcu_client = PMCUClient(on_pmcu_data)
    threading.Thread(target=lambda: pmcu_client.listen("5.4.3.2")).start()
    app = make_app()
    app.listen(8080)
    main_loop.start()
However, as I said, calls to the Tornado API from outside the IOLoop.current() thread seem to block: the code above only prints "Sending websocket message".
My intent is to run websocket_client.write_message(data) on the IOLoop.current() event loop. But IOLoop.current().spawn_callback(lambda: websocket_client.write_message(data)) does not seem to work after IOLoop.current() has started. How could I achieve that?
I know that I have a huge misunderstanding of IOLoop, asyncio (on which it depends), and Python 3 async.
on_pmcu_data is being called in a separate thread but the websocket is controlled by Tornado's event loop. You can't write to a websocket from a thread unless you have access to the event loop.
You'll need to ask the IOLoop to write the data to websockets.
Solution 1:
For simple cases, if you don't want to change much in the code, you can do this:
if __name__ == "__main__":
    main_loop = IOLoop().current()
    on_pmcu_data_callback = lambda data: main_loop.add_callback(on_pmcu_data, data)
    pmcu_client = PMCUClient(on_pmcu_data_callback)
    ...
This should solve your problem.
Solution 2:
For more elaborate cases, you can pass the main_loop to the PMCUClient class and then use add_callback (or spawn_callback) to run on_pmcu_data.
Example:

if __name__ == "__main__":
    main_loop = IOLoop().current()
    pmcu_client = PMCUClient(on_pmcu_data, main_loop)  # also pass the main loop
    ...

Then in the PMCUClient class:

class PMCUClient:
    def __init__(self, on_pmcu_data, main_loop):
        ...
        self.main_loop = main_loop

    def listen(self, ...):
        ...
        self.main_loop.add_callback(self.on_pmcu_data, data)
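Since Tornado 5 the IOLoop is a thin wrapper around the asyncio event loop, so the same wiring can also be expressed with asyncio's thread-safe scheduling call; a sketch of Solution 1 in those terms:

import asyncio

if __name__ == "__main__":
    main_loop = asyncio.get_event_loop()  # the loop that IOLoop.current() wraps in Tornado 5+
    # call_soon_threadsafe is asyncio's thread-safe way to schedule a callback from another thread
    on_pmcu_data_callback = lambda data: main_loop.call_soon_threadsafe(on_pmcu_data, data)
    pmcu_client = PMCUClient(on_pmcu_data_callback)
    ...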

Stream producer and consumer with asyncio gather

I wrote a script for a socket server that simply listens for incoming connections and processes the incoming data. The chosen architecture is asyncio.start_server for the socket management and asyncio.Queue for passing the data between the producer and consumer coroutines. The problem is that the consume(q1) function is executed only once (at the first script startup) and never again afterwards. Is the line run_until_complete(asyncio.gather(...)) wrong?
import asyncio
import functools
import logging

async def handle_readnwrite(reader, writer, q1):  # Producer coroutine
    data = await reader.read(1024)
    message = data.decode()
    await writer.drain()
    await q1.put(message[3:20])
    await q1.put(None)
    writer.close()  # Close the client socket

async def consume(q1):  # Consumer coroutine
    while True:
        # wait for an item from the producer
        item = await q1.get()
        if item is None:
            logging.debug('None item')  # the producer emits None to indicate that it is done
            break
        do_something(item)

loop = asyncio.get_event_loop()
q1 = asyncio.Queue(loop=loop)
producer_coro = asyncio.start_server(functools.partial(handle_readnwrite, q1=q1), '0.0.0.0', 3000, loop=loop)
consumer_coro = consume(q1)
loop.run_until_complete(asyncio.gather(consumer_coro, producer_coro))
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass
loop.close()
handle_readnwrite always enqueues the None terminator, which causes consume to break (and therefore the coroutine to finish). If consume should keep running and process further messages, the None terminator must not be sent after each message.
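A minimal sketch of the fix: enqueue the sentinel only when the producer really is done (here it is simply dropped, since the server keeps accepting connections):

async def handle_readnwrite(reader, writer, q1):  # Producer coroutine
    data = await reader.read(1024)
    message = data.decode()
    await writer.drain()
    await q1.put(message[3:20])  # no None after each message, so the consumer keeps running
    writer.close()  # Close the client socket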

python3 asyncio ZeroMQ .connect() blocks

I'm trying to implement a REQ/REP pattern with python3 asyncio and ZeroMQ.
My client async function:
import zmq
import os
from time import time
import asyncio
import zmq.asyncio

print('Client %i' % os.getpid())

context = zmq.asyncio.Context(1)
loop = zmq.asyncio.ZMQEventLoop()
asyncio.set_event_loop(loop)

async def client():
    socket = context.socket(zmq.REQ)
    socket.connect('tcp://11.111.11.245:5555')
    while True:
        data = zmq.Message(str(os.getpid()).encode('utf8'))
        start = time()
        print('send')
        await socket.send(data)
        print('wait...')
        data = await socket.recv()
        print('recv')
        print(time() - start, data)

loop.run_until_complete(client())
As I understand it, the call to the socket.connect("tcp://11.111.11.245:5555") method is a blocking call.
How can I make a non-blocking connection call in my case?
As far as I understand the ZeroMQ API, the call to the .connect() method is not synchronous with building the real connection (unless blocking is introduced by the wrapper, the underlying API is non-blocking - ref. below).
The connection will not be performed immediately but as needed by ØMQ. Thus a successful invocation of zmq_connect() does not indicate that a physical connection was or can actually be established.
Ref.: ZeroMQ API - zmq_connect(3)
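As a side note, pyzmq 17+ integrates with the stock asyncio event loop, so the dedicated ZMQEventLoop is no longer needed; a sketch of the same client in that style:

import asyncio
import os
import zmq
import zmq.asyncio

context = zmq.asyncio.Context()

async def client():
    socket = context.socket(zmq.REQ)
    # connect() only registers the endpoint; ZeroMQ establishes the
    # physical connection in the background, so this returns immediately
    socket.connect('tcp://11.111.11.245:5555')
    await socket.send(str(os.getpid()).encode('utf8'))
    reply = await socket.recv()
    print(reply)

asyncio.run(client())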
