Asyncio's await reader.read() is waiting forever - python-3.x

I have a small server written like this:
import asyncio
import socket

async def handle_client(reader, writer):
    request = (await reader.read()).decode('utf8')  # should read until end of msg
    print(request)
    response = "thx"
    writer.write(response.encode('utf8'))
    await writer.drain()
    writer.close()

loop = asyncio.get_event_loop()
loop.create_task(asyncio.start_server(handle_client, socket.gethostname(), 8886))
loop.run_forever()
and a small client written like this:
import asyncio

async def tcp_echo_client(message):
    # my_ip is defined elsewhere in the client
    reader, writer = await asyncio.open_connection(my_ip, 8886)
    print(f'Send: {message!r}')
    writer.write(message.encode())
    await writer.drain()
    data = await reader.read()  # should read until end of msg
    print(f'Received: {data.decode()!r}')
    print('Close the connection')
    writer.close()
    await writer.wait_closed()

asyncio.run(tcp_echo_client("Hello World!"))
The client and the server start the communication but never finish it.
Why does the reader not recognise the end of the message?
If I write
request = (await reader.read(1024)).decode('utf8')
instead, it works, but I need to receive an arbitrarily large amount of data.
I tried to modify the server code like this:
while True:
    request = (await reader.read(1024)).decode('utf8')
    if not request:
        break
It receives all data blocks but still waits forever after the last block. Why?
How do I tell the reader on the server to stop listening and proceed in the code to send the answer?

TCP connections are stream-based, which means that when you write a "message" to a socket, the bytes will be sent to the peer without including a delimiter between messages. The peer on the other side of the connection can retrieve the bytes, but it needs to figure out on its own how to slice them into "messages". This is why reading the last block appears to hang: read() simply waits for the peer to send more data.
To enable retrieval of individual messages, the sender must frame or delimit each message. For example, the sender could simply close the connection after sending a message, which would allow the other side to read the message because it would be followed by the end-of-file indicator. However, that would allow the sender to send only one message, without the ability to read a response, because the socket would be closed.
A better option is for the writer to close only the writing side of the socket (such a partial close is sometimes referred to as a shutdown). In asyncio this is done with a call to write_eof:
writer.write(message.encode())
await writer.drain()
writer.write_eof()
Sent like this, the message will be followed by end-of-file and the read on the server side won't hang. While the client will be able to read the response, it will still be limited to sending only one message because further writes will be impossible on the socket whose writing end was closed.
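Putting the pieces together, the client from the question needs only the write_eof() call added; a sketch, with my_ip taken from the original question:

async def tcp_echo_client(message):
    reader, writer = await asyncio.open_connection(my_ip, 8886)
    writer.write(message.encode())
    await writer.drain()
    writer.write_eof()          # half-close: no more writes, reads still work
    data = await reader.read()  # the server's read() now sees EOF and replies
    print(f'Received: {data.decode()!r}')
    writer.close()
    await writer.wait_closed()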
To implement communication consisting of an arbitrary number of requests and responses, you need to frame each message. A simple way to do so is by prefixing each message with its length:
writer.write(struct.pack('<L', len(request)))
writer.write(request)
The receiver first reads the message size and then the message itself:
size, = struct.unpack('<L', await reader.readexactly(4))
request = await reader.readexactly(size)
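For reference, here is a minimal sketch of how these pieces might fit together over asyncio streams; the helper names send_msg and recv_msg are illustrative, not part of a standard API:

import asyncio
import struct

async def send_msg(writer, payload: bytes):
    # prefix each message with a 4-byte little-endian length
    writer.write(struct.pack('<L', len(payload)))
    writer.write(payload)
    await writer.drain()

async def recv_msg(reader) -> bytes:
    # read the length prefix, then exactly that many bytes
    size, = struct.unpack('<L', await reader.readexactly(4))
    return await reader.readexactly(size)

async def handle_client(reader, writer):
    request = await recv_msg(reader)
    print('received:', request.decode('utf8'))
    await send_msg(writer, b'thx')

With framing in place, both sides can exchange any number of messages over a single connection without relying on EOF.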

Related

Getting non-persistent messages from a nats.io server

My question is simple: this code sends an empty message to the subject chan.01.msg and gets whatever message is currently being broadcast, or prints nats: timeout. The request message itself also shows up on the subject (something like: Received a message on chan.01.msg _INBOX.<hash_my>.<salt_up>: b'') and is not desirable there. I do filter it out in the callback, but that really feels like the wrong way to do it.
Can I just pull messages with the desired subject?
async def msgcb(msg):
    """
    Message callback function
    """
    subject = msg.subject
    reply = msg.reply
    data = msg.data
    if len(data) > 0:
        print(f"Received a message on {subject} {reply}: {data}")

logging.debug("Prepare to subscribe")
sub = await nc.subscribe(subject="chan.01.msg", cb=msgcb)
logging.debug("loop process messages on subject")
while True:
    await asyncio.sleep(1)
    try:
        resp = await nc.request('chan.01.msg')
        print(resp)
    except Exception as e:
        print(e)
You are subscribing to the same subject you are publishing to, so you would naturally get the message back when sending a request. To avoid receiving messages that the same client produces, you can use the no_echo option on connect.
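A hedged sketch of connecting with no_echo using nats-py (the server URL is a placeholder; msgcb is the callback from the question):

import nats

async def run():
    # no_echo asks the server not to deliver messages published by this
    # same connection, so the client's own request no longer echoes back
    nc = await nats.connect("nats://localhost:4222", no_echo=True)
    await nc.subscribe("chan.01.msg", cb=msgcb)
    resp = await nc.request("chan.01.msg", b"", timeout=1)
    print(resp)

Note that no_echo only suppresses messages from the same connection; other subscribers on the subject still receive everything.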

asyncio.wait not returning on first exception

I have an AMQP publisher class with the following methods. on_response is the callback that is called when a consumer sends back a message to the RPC queue I set up, i.e. the self.callback_queue.name you see in the reply_to of the Message. publish publishes out to a direct exchange with a routing key that has multiple consumers (very similar to a fanout), and multiple responses come back. I create a number of futures equal to the number of responses I expect, and asyncio.wait for those futures to complete. As I get responses back on the queue and consume them, I set the results on the futures.
async def on_response(self, message: IncomingMessage):
    if message.correlation_id is None:
        logger.error(f"Bad message {message!r}")
        await message.ack()
        return
    body = message.body.decode('UTF-8')
    future = self.futures[message.correlation_id].pop()
    if hasattr(body, 'error'):
        future.set_execption(body)
    else:
        future.set_result(body)
    await message.ack()

async def publish(self, routing_key, expected_response_count, msg, timeout=None, return_partial=False):
    if not self.connected:
        logger.info("Publisher not connected. Waiting to connect first.")
        await self.connect()
    correlation_id = str(uuid.uuid4())
    futures = [self.loop.create_future() for _ in range(expected_response_count)]
    self.futures[correlation_id] = futures
    await self.exchange.publish(
        Message(
            str(msg).encode(),
            content_type="text/plain",
            correlation_id=correlation_id,
            reply_to=self.callback_queue.name,
        ),
        routing_key=routing_key,
    )
    done, pending = await asyncio.wait(futures, timeout=timeout, return_when=asyncio.FIRST_EXCEPTION)
    if not return_partial and pending:
        raise asyncio.TimeoutError(f'Failed to return all results for publish to {routing_key}')
    for f in pending:
        f.cancel()
    del self.futures[correlation_id]
    results = []
    for future in done:
        try:
            results.append(json.loads(future.result()))
        except json.decoder.JSONDecodeError as e:
            logger.error(f'Client did not return JSON!! {e!r}')
            logger.info(future.result())
    return results
My goal is to either wait until all futures are finished or a timeout occurs. This is all working nicely at the moment. What doesn't work is that when I added return_when=asyncio.FIRST_EXCEPTION, asyncio.wait does not finish after the first call to future.set_exception(...) as I thought it would.
What do I need to do with the future so that when I get a response back and see that an error occurred on the consumer side (before the timeout, or even before other responses), the await asyncio.wait will no longer be blocking? I was looking at the documentation and it says:
The function will return when any future finishes by raising an exception
when return_when=asyncio.FIRST_EXCEPTION. My first thought was that I'm not raising an exception in my future correctly, but I'm having trouble finding out exactly how I should do that. From the API documentation for the Future class, it looks like I'm doing the right thing.
When I created a minimal viable example, I realized I was actually doing things MOSTLY right after all, and I had glossed over other errors causing this not to work. Here is my minimal example:
The most important change I had to make was to actually pass an Exception object (a subclass of BaseException) to the set_exception method.
import asyncio

async def set_after(future, t, body, raise_exception):
    await asyncio.sleep(t)
    if raise_exception:
        future.set_exception(Exception("problem"))
    else:
        future.set_result(body)
        print(body)

async def main():
    loop = asyncio.get_event_loop()
    futures = [loop.create_future() for _ in range(2)]
    asyncio.create_task(set_after(futures[0], 3, 'hello', raise_exception=True))
    asyncio.create_task(set_after(futures[1], 7, 'world', raise_exception=False))
    print(futures)
    done, pending = await asyncio.wait(futures, timeout=10, return_when=asyncio.FIRST_EXCEPTION)
    print(done)
    print(pending)

asyncio.run(main())
In the line if hasattr(body, 'error'):, body was a string; I thought it had already been parsed as JSON at that point. I should have been using "error" in body as my condition in any case. Whoops!
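Putting both fixes together, a hedged sketch of the corrected callback (the RuntimeError wrapper is an illustrative choice, not from the original code):

async def on_response(self, message: IncomingMessage):
    body = message.body.decode('UTF-8')
    future = self.futures[message.correlation_id].pop()
    if "error" in body:
        # set_exception needs an actual exception instance, not a string
        future.set_exception(RuntimeError(body))
    else:
        future.set_result(body)
    await message.ack()

With an exception instance set on the future, asyncio.wait(..., return_when=asyncio.FIRST_EXCEPTION) returns as soon as that future completes.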

Two coroutines running simultaneously using asyncio

I'm trying to make a program that reads in data as well as sends and receives data from a server through a websocket. The goal is to create synchronized lamps where there are two client lamps and one server. When one of the lamps changes state it sends a request to the server and the server updates the other lamp. I'm currently stuck on the client code. I can establish a websocket connection to the server, read and send data to the server, and I can read in light data. I'm having an issue running both of these tasks simultaneously. I'd like to do it asynchronously to avoid race condition issues. I'm using python 3.8 and asyncio.
Here is my websocket client code so far:
async def init_connection(message):
    global CONNECTION_OPEN
    global CLIENT_WS
    uri = WS_URI
    async with websockets.connect(uri) as websocket:
        CONNECTION_OPEN = True
        CLIENT_WS = websocket
        # send init message
        await websocket.send(message)
        while CONNECTION_OPEN:
            await handleMessages(websocket, message)
        await websocket.send(json.dumps({'type': MessageType.Close.name, 'message': USERNAME}))
        await websocket.close()
Here is my data-reading code so far:
async def calculate_idle(t):
    global STATE
    global prevColor
    x_arr = []
    y_arr = []
    z_arr = []
    while t >= 0:
        x, y, z = lis3dh.acceleration
        print("Current colors")
        print(accel_to_color(x, y, z))
        x_arr.append(x)
        y_arr.append(y)
        z_arr.append(z)
        newColor = accel_to_color(x, y, z)
        # remember prev color
        do_fade(prevColor, newColor)
        #strip.fill((int(a_x), int(a_y), int(a_z), 0))
        #strip.show()
        prevColor = newColor
        time.sleep(.2)
        t -= .2
    is_idle = is_lamp_idle(np.std(x_arr), np.std(y_arr), np.std(z_arr))
    if is_idle and STATE == "NOT IDLE" and CONNECTION_OPEN:
        STATE = "IDLE"
        print("Sending color")
        await asyncio.sleep(1)
    elif is_idle and CONNECTION_OPEN:
        # Check for data
        STATE = "IDLE"
        print("Receiving data")
        await asyncio.sleep(1)
    elif is_idle and not CONNECTION_OPEN:
        print("Idle and not connected")
        rainbow_cycle(0.001)  # rainbow cycle with 1ms delay per step
        await asyncio.sleep(1)
    else:
        STATE = "NOT IDLE"
        await asyncio.sleep(1)
        print("Is not idle")
Here is the code that is supposed to tie them together:
async def main():
    message = json.dumps({'type': "authentication", 'payload': {
        'username': 'user1', 'secret': SHARED_SECRET}})
    loop = asyncio.get_event_loop()
    start_light = asyncio.create_task(calculate_idle(3))
    await asyncio.gather(init_connection(message), start_light)

asyncio.run(main())
There are other functions, but the premise is that there's a websocket connection sending and receiving data and another process reading in light data. I also need to be able to read and set the current state of the lights, which is why I was using global variables. Currently, it reads the lights until it hits an await asyncio.sleep(1) in calculate_idle, then switches to the websocket code and hangs receiving data from the server. Ideally, it would alternate between reading the current state and checking for websocket messages. If the state changes, it would then send a websocket message.
How can I run both of these routines asynchronously and share the data between them? Any help is appreciated!
Thanks to user4815162342's comments for helping narrow down the issue. My calculate_idle didn't have a while True loop, and I changed time.sleep(.2) to await asyncio.sleep(.2); after that I was able to read data from the server and the lights at the same time.
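A minimal sketch of the fixed sampling loop (the helper names are from the question; the idle-detection logic is elided):

async def calculate_idle(t):
    global prevColor
    while True:  # keep sampling forever instead of stopping after t seconds
        x, y, z = lis3dh.acceleration
        newColor = accel_to_color(x, y, z)
        do_fade(prevColor, newColor)
        prevColor = newColor
        # asyncio.sleep() yields to the event loop so init_connection can
        # run; time.sleep() would block the entire loop
        await asyncio.sleep(0.2)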

How to use REQ and REP in pyzmq with asyncio?

I'm trying to implement an asynchronous client and server using pyzmq and asyncio in python3.5. I've used the asyncio libraries provided by zmq. Below is my code for the client (requester.py) and server (responder.py). My requirement is to use only REQ and REP zmq sockets to achieve async client-server.
requester.py:
import asyncio
import zmq
import zmq.asyncio

async def receive():
    message = await socket.recv()
    print("Received reply ", "[", message, "]")
    return message

async def send(i):
    print("Sending request ", i, "...")
    request = "Hello:" + str(i)
    await socket.send(request.encode('utf-8'))
    print("sent:", i)

async def main_loop_num(i):
    await send(i)
    # Get the reply.
    message = await receive()
    print("Message :", message)

async def main():
    await asyncio.gather(*(main_loop_num(i) for i in range(1, 10)))

port = 5556
context = zmq.asyncio.Context.instance()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:%d" % port)
asyncio.get_event_loop().run_until_complete(asyncio.wait([main()]))
responder.py:
import asyncio
import zmq
import zmq.asyncio

async def receive():
    message = await socket.recv()
    print("Received message:", message)
    await asyncio.sleep(10)
    print("Sleep complete")
    return message

async def main_loop():
    while True:
        message = await receive()
        print("back to main loop")
        await socket.send(("World from %d" % port).encode('utf-8'))
        print("sent back")

port = 5556
context = zmq.asyncio.Context.instance()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:%d" % port)
asyncio.get_event_loop().run_until_complete(asyncio.wait([main_loop()]))
The output that I'm getting is:
requester.py:
Sending request 5 ...
sent: 5
Sending request 6 ...
Sending request 1 ...
Sending request 7 ...
Sending request 2 ...
Sending request 8 ...
Sending request 3 ...
Sending request 9 ...
Sending request 4 ...
responder.py:
Received message: b'Hello:5'
Sleep complete
back to main loop
sent back
From the output, I assume that the requester has sent multiple requests, but only the first one reached the responder. Also, the response the responder sent for the first request never made it back to the requester. Why does this happen? I have used async methods everywhere possible, yet the send() and recv() methods are still not behaving asynchronously. Is it possible to make async req-rep work without using any other sockets like ROUTER, DEALER, etc.?
ZMQ's REQ-REP sockets expect a strict order of one request, one reply, one request, one reply, ...
Your requester.py starts all of its requests in parallel:
await asyncio.gather(*(main_loop_num(i) for i in range(1,10)))
When sending the second request, ZMQ complains about this:
zmq.error.ZMQError: Operation cannot be accomplished in current state
Try changing your main function to send one request at a time:
async def main():
    for i in range(1, 10):
        await main_loop_num(i)
If you need to send several requests in parallel, then you can't use a REQ-REP socket pair; use, for example, a DEALER-REP socket pair instead.
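As a rough sketch (an assumption, not tested against the responder above), a DEALER client that keeps several requests in flight might look like this; note that a DEALER talking to a REP socket must send an empty delimiter frame before the payload:

import asyncio
import zmq
import zmq.asyncio

async def main():
    ctx = zmq.asyncio.Context.instance()
    socket = ctx.socket(zmq.DEALER)
    socket.connect("tcp://localhost:5556")
    for i in range(1, 10):
        # the empty first frame emulates the envelope REP expects
        await socket.send_multipart([b"", f"Hello:{i}".encode('utf-8')])
    for _ in range(1, 10):
        empty, reply = await socket.recv_multipart()
        print("Received reply:", reply)

asyncio.run(main())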

Asyncio server stops responding after the first request

I'm trying to write an asyncio-based server. The problem is that it stops responding after the first request.
My code is built upon this echo server template and this method of passing parameters to coroutines.
class MsgHandler:
    def __init__(self, mem):
        # here (mem: dict) I store received metrics
        self.mem = mem

    async def handle(self, reader, writer):
        # this coroutine handles requests
        data = await reader.read(1024)
        print('request:', data.decode('utf-8'))
        # read_msg returns an answer based on the request received.
        # My server closes the connection on every second request.
        # For the first one, everything works as intended,
        # so I don't think the problem is in read_msg()
        response = read_msg(data.decode('utf-8'), self.mem)
        print('response:', response)
        writer.write(response.encode('utf-8'))
        await writer.drain()
        writer.close()

def run_server(host, port):
    mem = {}
    msg_handler = MsgHandler(mem)
    loop = asyncio.get_event_loop()
    coro = asyncio.start_server(msg_handler.handle, host, port, loop=loop)
    server = loop.run_until_complete(coro)
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    server.close()
    loop.run_until_complete(server.wait_closed())
    loop.close()
On the client side, I either get an empty response or a ConnectionResetError (104, 'Connection reset by peer').
You are closing the writer with writer.close() in the handler, which closes the socket.
From the 3.9 docs on StreamWriter.close(): the method closes the stream and the underlying socket.
Also, if you didn't close the stream writer, you would still have to store it somewhere in order to keep receiving messages over that same connection.
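A hedged sketch of a handler that keeps the connection open and serves multiple requests (read_msg and self.mem are from the question):

async def handle(self, reader, writer):
    while True:
        data = await reader.read(1024)
        if not data:  # an empty read means the client closed the connection
            break
        response = read_msg(data.decode('utf-8'), self.mem)
        writer.write(response.encode('utf-8'))
        await writer.drain()
    writer.close()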
