Python Tornado send WebSocket messages from another thread

I want to use WebSockets in Python to keep web clients up to date with data that I am reading from a serial port using PySerial. I currently read the serial data continuously in a separate thread with the following code:
def read_from_port():
    while running:
        reading = ser.readline().decode()
        handle_data(reading)

thread = threading.Thread(target=read_from_port)
thread.daemon = True
thread.start()
I perform some processing on the serial data and then want to broadcast a message to all connected WebSocket clients if the calculated result differs from its previous value. For this I have set up the following code:
clients = []

def Broadcast(message):
    for client in clients:
        client.write_message(json.dumps(message).encode('utf8'))
    print("broadcasted")

worker.broadcast = Broadcast

class WSHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print('new connection')
        clients.append(self)

    def on_message(self, message):
        print('message received: %s' % message)
        response = handler.HandleRequest(message, self.write_message)

    def on_close(self):
        print('connection closed')
        clients.remove(self)

    def check_origin(self, origin):
        return True

application = tornado.web.Application([
    (r'/ws', WSHandler),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8765)
    myIP = socket.gethostbyname(socket.gethostname())
    print('*** Websocket Server Started at %s ***' % myIP)
    tornado.ioloop.IOLoop.instance().start()
I then want to use the "broadcast" method in the worker to broadcast a result. Calling this method from the worker thread produces the following error:
File "main.py", line 18, in Broadcast
client.write_message(message)
File "/usr/local/lib/python3.8/site-packages/tornado/websocket.py", line 342, in write_message
return self.ws_connection.write_message(message, binary=binary)
File "/usr/local/lib/python3.8/site-packages/tornado/websocket.py", line 1098, in write_message
fut = self._write_frame(True, opcode, message, flags=flags)
File "/usr/local/lib/python3.8/site-packages/tornado/websocket.py", line 1075, in _write_frame
return self.stream.write(frame)
File "/usr/local/lib/python3.8/site-packages/tornado/iostream.py", line 555, in write
future = Future() # type: Future[None]
File "/usr/local/Cellar/python#3.8/3.8.3_1/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/events.py", line 639, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'Thread-1'.
I understand the issue is that Tornado's write_message function is not thread-safe and that this error occurs because I am calling the function directly from the worker thread. As far as I can tell, the recommended way to write concurrent code with Tornado is through asyncio, but I think a threading approach is more appropriate here, where I have a loop that essentially runs in parallel constantly.
Unfortunately I know very little about asyncio and how threading is implemented in Python, so I would like to know the simplest way to send WebSocket messages from a different thread.

Reading the official documentation on combining asyncio and multithreading at https://docs.python.org/3/library/asyncio-dev.html#asyncio-multithreading gave me the necessary clue: this can be done quite elegantly with the call_soon_threadsafe function. The following code does the trick:
tornado.ioloop.IOLoop.configure("tornado.platform.asyncio.AsyncIOLoop")
io_loop = tornado.ioloop.IOLoop.current()
asyncio.set_event_loop(io_loop.asyncio_loop)

clients = []

def bcint(message):
    for client in clients:
        client.write_message(message)
    print("broadcasted")

def Broadcast(message):
    io_loop.asyncio_loop.call_soon_threadsafe(bcint, message)

worker.broadcast = Broadcast

class WSHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print('new connection')
        clients.append(self)

    def on_message(self, message):
        print('message received: %s' % message)
        response = handler.HandleRequest(message, self.write_message)

    def on_close(self):
        print('connection closed')
        clients.remove(self)

    def check_origin(self, origin):
        return True

application = tornado.web.Application([
    (r'/ws', WSHandler),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8765)
    myIP = socket.gethostbyname(socket.gethostname())
    print('*** Websocket Server Started at %s ***' % myIP)
    tornado.ioloop.IOLoop.current().start()
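For reference, Tornado's IOLoop.add_callback is documented as safe to call from other threads, so an alternative sketch (my addition, assuming the same global clients list and io_loop as above) avoids reaching into the underlying asyncio loop directly:

# Sketch: thread-safe broadcast using Tornado's own IOLoop.add_callback.
# Assumes `clients` and `io_loop` are defined as in the answer above.
def broadcast_threadsafe(message):
    def send_to_all():
        for client in clients:
            client.write_message(message)
        print("broadcasted")
    # add_callback may be called from any thread; send_to_all then runs
    # on the IOLoop's own thread, where write_message is safe.
    io_loop.add_callback(send_to_all)

worker.broadcast = broadcast_threadsafe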

A cleaner option is to use a message-queue library such as pyzmq to pass data from one thread to another.
For your use case a PUB/SUB model works well; a sample sketch follows. You can also use 'inproc' instead of 'tcp' as the transport, which reduces latency since you are communicating between threads in the same process.
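A minimal PUB/SUB sketch with pyzmq (my addition, illustrative rather than the original answer's code): the serial-reader thread publishes each result over an inproc socket, and a consumer thread subscribes and hands the messages to the WebSocket broadcast. The endpoint name, compute_next_result and the call to Broadcast are placeholders for your own code.

import threading
import zmq

context = zmq.Context.instance()

# Bind the PUB side before any subscriber connects (required for inproc).
pub = context.socket(zmq.PUB)
pub.bind("inproc://broadcast")               # illustrative endpoint name

def reader_thread():
    # Serial-reader thread: publish each calculated result.
    while True:
        result = compute_next_result()       # placeholder for your serial processing
        pub.send_string(result)

def forwarder_thread():
    # Consumer thread: receive results and hand them to the WebSocket broadcast.
    sub = context.socket(zmq.SUB)
    sub.connect("inproc://broadcast")
    sub.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to all messages
    while True:
        Broadcast(sub.recv_string())          # e.g. the thread-safe Broadcast above

threading.Thread(target=reader_thread, daemon=True).start()
threading.Thread(target=forwarder_thread, daemon=True).start()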

Wait for message using python's async protocol

Intro:
I am working on a TCP server that receives events over TCP. For this task, I decided to use the asyncio Protocol classes (yeah, maybe I should have used Streams), and the reception of events works fine.
Problem:
I need to be able to connect to the clients, so I create another "server" used to look up all my connected clients, and after finding the correct one, I use the Protocol class transport object to send a message and try to grab the response by reading a buffer variable that always has the last received message.
My problem is, after sending the message, I don't know how to wait for the response, so I always get the previous message from the buffer.
I will try to simplify the code to illustrate (please, keep in mind that this is an example, not my real code):
import asyncio
import time

CONN = set()

class ServerProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        CONN.add(self)

    def data_received(self, data):
        self.buffer = data
        # DO OTHER STUFF
        print(data)

    def connection_lost(self, exc=None):
        CONN.remove(self)

class ConsoleProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        # Get first value just to illustrate
        self.client = next(iter(CONN))

    def data_received(self, data):
        # Forward the message to the client
        self.client.transport.write(data)
        # wait a fraction of a second
        time.sleep(0.2)
        # forward the response of the client
        self.transport.write(self.client.buffer)

def main():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(
        loop.create_server(protocol_factory=ServerProtocol,
                           host='0.0.0.0',
                           port=6789))
    loop.run_until_complete(
        loop.create_server(protocol_factory=ConsoleProtocol,
                           host='0.0.0.0',
                           port=9876))
    try:
        loop.run_forever()
    except Exception as e:
        print(e)
    finally:
        loop.close()

if __name__ == '__main__':
    main()
This is not only my first experience writing a TCP server, but is also my first experience working with parallelism. So it took me days to realize that my sleep not only would not work, but I was locking the server while it "sleeps".
Any help is welcome.
time.sleep(0.2) is blocking and should not be used in async programming: it blocks the whole event loop, so if your program is running with 100 clients, the last client will be delayed by 0.2*99 seconds, which is not what you want.
The right way is to let the program wait 0.2 s without blocking, so the other concurrent clients are not delayed; we can use a thread for this.
import asyncio
import time
import threading

CONN = set()

class ServerProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        CONN.add(self)

    def data_received(self, data):
        self.buffer = data
        # DO OTHER STUFF
        print(data)

    def connection_lost(self, exc=None):
        CONN.remove(self)

class ConsoleProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        self.loop = asyncio.get_event_loop()
        # Get first value just to illustrate
        self.client = next(iter(CONN))

    def delay_thread(self):
        # Runs in a worker thread: wait without blocking the event loop,
        # then hand the write back to the loop thread.
        time.sleep(0.2)
        self.loop.call_soon_threadsafe(
            self.transport.write, self.client.buffer)

    def data_received(self, data):
        # Forward the message to the client
        self.client.transport.write(data)
        # wait a fraction of a second in a thread, then
        # forward the response of the client
        thread = threading.Thread(target=self.delay_thread, args=())
        thread.daemon = True
        thread.start()
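For completeness, here is a sketch (my addition, not from the answer above) of the same delayed forward done without a thread at all, by scheduling a coroutine that awaits asyncio.sleep so the wait stays on the event loop:

class ConsoleProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        # Get first value just to illustrate
        self.client = next(iter(CONN))

    def data_received(self, data):
        # Forward the message to the client
        self.client.transport.write(data)
        # Schedule the delayed forward as a task instead of blocking.
        asyncio.ensure_future(self._forward_later())

    async def _forward_later(self):
        await asyncio.sleep(0.2)  # non-blocking wait
        self.transport.write(self.client.buffer)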

Shutdown RPyC server from client

I've created a RPyC server. Connecting works, all my exposed methods work. Now I am looking to shut down the server from the client. Is this even possible? Security is not a concern as I am not worried about a rogue connection shutting down the server.
It is started with the following (which is blocking):
from rpyc import ThreadPoolServer
from service import MyService
t = ThreadPoolServer(MyService(), port=56565)
t.start()
Now I just need to shut it down. I haven't found any documentation on stopping the server.
You can add to your Service class the method:
# Needs these imports at module level: os, platform, signal, ctypes
def exposed_stop(self):
    pid = os.getpid()
    if platform.system() == 'Windows':
        PROCESS_TERMINATE = 1
        handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, pid)
        ctypes.windll.kernel32.TerminateProcess(handle, -1)
        ctypes.windll.kernel32.CloseHandle(handle)
    else:
        os.kill(pid, signal.SIGTERM)
This will make the service get its own PID and send SIGTERM to itself. There may be a better way of doing this hiding in some dark corner of the API, but I've found no better method.
If you want to do clean-up before your thread terminates, you can set up exit traps:
t = rpyc.utils.server.ThreadedServer(service, port=port, auto_register=True)
# Set up exit traps for graceful exit.
signal.signal(signal.SIGINT, lambda signum, frame: t.close())
signal.signal(signal.SIGTERM, lambda signum, frame: t.close())
t.start() # blocks thread
# SIGTERM or SIGINT was received and t.close() was called
print('Closing service.')
t = None
shutil.rmtree(tempdir)
# etc.
In case anybody is interested, I have found another way of doing it.
I'm just creating the server object in global scope and then adding an exposed method to close it.
import rpyc
from rpyc.utils.server import ThreadedServer

class MyService(rpyc.Service):
    def exposed_stop(self):
        server.close()

    def exposed_echo(self, text):
        print(text)

server = ThreadedServer(MyService, port=18812)

if __name__ == "__main__":
    print("server start")
    server.start()
    print("Server closed")
On the client side, you will have an EOF error due to the connection being remotely closed. So it's better to catch it.
import rpyc

c = rpyc.connect("localhost", 18812)
c.root.echo("hello")
try:
    c.root.stop()
except EOFError as e:
    print("Server was closed")
EDIT: I needed to be able to dynamically specify the server, so I came up with this. (Is it better? I don't know, but it works well. Be careful though: if you have multiple servers running this service, things could become weird.)
import rpyc
from rpyc.utils.server import ThreadedServer

class MyService(rpyc.Service):
    _server: ThreadedServer

    @staticmethod
    def set_server(inst: ThreadedServer):
        MyService._server = inst

    def exposed_stop(self):
        if self._server:
            self._server.close()

    def exposed_echo(self, text):
        print(text)

if __name__ == "__main__":
    server = ThreadedServer(MyService, port=18812)
    MyService.set_server(server)
    print("server start")
    server.start()
    print("Server closed")
PS: It is probably possible to avoid the EOF error by using asynchronous operations.
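As a hedged sketch of that idea (my addition, not tested against the server above): rpyc's async_ wrapper makes the remote call return an AsyncResult immediately instead of waiting for a reply, so the client is not blocked on a connection the server is about to tear down.

import rpyc

c = rpyc.connect("localhost", 18812)
c.root.echo("hello")

# Wrap the remote method; calling the wrapper returns an AsyncResult
# right away instead of blocking until a reply arrives.
stop_async = rpyc.async_(c.root.stop)
stop_async()
c.close()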

How to send data periodically with asyncio.Protocol subclass

I have an asyncio.Protocol subclass:
class MyProtocol(Protocol):
    def __init__(self, exit_future):
        self.exit_future = exit_future

    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        pass

    def eof_received(self):
        self.exit_future.set_result(True)

    def connection_lost(self, exc):
        self.exit_future.set_result(True)
and the network connection is created with:
while True:
    try:
        exit_future = Future(loop=loop)
        transport, protocol = await loop.create_connection(
            lambda: MyProtocol(exit_future), host, port)
        await exit_future
        transport.close()
    except:
        pass
Now the question is: how can I send some data when some external event occurs? For instance, when an asyncio.Queue becomes non-empty (so queue.get will not block), even though whatever fills that queue is not related to asyncio. What is the most correct way to call transport.write when something happens?
how can I send some data when some external event occurs?
The easiest way is to spawn a coroutine in connection_made and leave it to handle the event in the "background":
def connection_made(self, transport):
    self.transport = transport
    loop = asyncio.get_event_loop()
    self._interesting_events = asyncio.Queue()
    self.monitor = loop.create_task(self._monitor_impl())

def connection_lost(self, exc):
    self.exit_future.set_result(True)
    self.monitor.cancel()

async def _monitor_impl(self):
    while True:
        # this can also await asyncio.sleep() or whatever is needed
        event = await self._interesting_events.get()
        self.transport.write(...)
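One detail the question raises but the snippet above leaves implicit (my note, not part of the original answer): asyncio.Queue is not thread-safe, so if whatever fills the queue runs in another thread, it should not call put_nowait directly; hand the call to the loop instead.

# In the producer thread; `loop` is the event loop running the protocol
# and `protocol` is the connected MyProtocol instance.
loop.call_soon_threadsafe(protocol._interesting_events.put_nowait, event)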
Note that in the long run it might be worth it to replace create_connection with open_connection and use the streams API from the ground up. That way you can use coroutines all the way without worrying about the callback/coroutine impedance mismatch.
On an unrelated note, try followed by except: pass is an anti-pattern - consider catching a specific exception instead, or at least logging the exception.
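A rough sketch of that streams-based variant (my addition; the function name and the queue are illustrative): open_connection replaces create_connection, and the send loop simply awaits the queue.

async def run_client(host, port, events: asyncio.Queue):
    reader, writer = await asyncio.open_connection(host, port)
    try:
        while True:
            event = await events.get()   # wait for the external event
            writer.write(event)          # event is expected to be bytes
            await writer.drain()
    finally:
        writer.close()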

python program not exiting on exception using aiozmq

I am using aiozmq for a simple RPC program.
I have created a client and server.
When the server is running, the client runs just fine.
I have a timeout set in the client to raise an exception in the event that no server is reachable.
The client code is below. When I run it without the server running, I get an expected exception but the script doesn't actually return to the terminal. It still seems to be executing.
Could someone firstly explain how this is happening and secondly how to fix it?
import asyncio
from asyncio import TimeoutError
from aiozmq import rpc
import sys
import os
import signal
import threading
import traceback

# signal.signal(signal.SIGINT, signal.SIG_DFL)

async def client():
    print("waiting for connection..")
    client = await rpc.connect_rpc(
        connect='tcp://127.0.0.1:5555',
        timeout=1
    )
    print("got client")
    for i in range(100):
        print("{}: calling simple_add".format(i))
        ret = await client.call.simple_add(1, 2)
        assert 3 == ret
        print("calling slow_add")
        ret = await client.call.slow_add(3, 5)
        assert 8 == ret
    client.close()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.set_debug(True)
    future = asyncio.ensure_future(client())
    try:
        loop.run_until_complete(future)
    except TimeoutError:
        print("Timeout occurred...")
        future.cancel()
        loop.stop()
        # loop.run_forever()

        main_thread = threading.currentThread()
        for t in threading.enumerate():
            if t is main_thread:
                print("skipping main_thread...")
                continue
            print("Thread is alive? {}".format({True: 'yes',
                                                False: 'no'}[t.is_alive()]))
            print("Waiting for thread...{}".format(t.getName()))
            t.join()

        print(sys._current_frames())
        traceback.print_stack()
        for thread_id, frame in sys._current_frames().items():
            name = thread_id
            for thread in threading.enumerate():
                if thread.ident == thread_id:
                    name = thread.name
            traceback.print_stack(frame)

        print("exiting..")
        sys.exit(1)
        # os._exit(1)

    print("eh?")
The result of running the above is below. Note again that the program was still running; I had to Ctrl+C to exit.
> python client.py
waiting for connection..
got client
0: calling simple_add
Timeout occurred...
skipping main_thread...
{24804: <frame object at 0x00000000027C3848>}
File "client.py", line 54, in <module>
traceback.print_stack()
File "client.py", line 60, in <module>
traceback.print_stack(frame)
exiting..
^C
I also tried sys.exit() which also didn't work:
try:
    loop.run_until_complete(future)
except:
    print("exiting..")
    sys.exit(1)
I can get the program to die, but only if I use os._exit(1); sys.exit() doesn't seem to cut it. It doesn't appear that there are any other threads preventing the interpreter from dying (unless I'm mistaken?). What else could be stopping the program from exiting?

Delay opening an asyncio connection

Some of my django REST services have to connect to an asyncio server to get some information. So I'm working in a threaded environment.
While connecting, the open_connection() takes an unreasonable 2 seconds (almost exactly, always just a bit more).
Client code:
import asyncio
import datetime

def call():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

    @asyncio.coroutine
    def msg_to_mars():
        print("connecting", datetime.datetime.now())
        reader, writer = yield from asyncio.open_connection('localhost', 8888, loop=loop)
        print("connected", datetime.datetime.now())  # time reported here will be +2 seconds
        return None

    res = loop.run_until_complete(msg_to_mars())
    loop.close()
    return res

call()
Server code:
import asyncio

@asyncio.coroutine
def handle_connection(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    pass

loop = asyncio.get_event_loop()
asyncio.set_event_loop(loop)

# Each client connection will create a new protocol instance
coro = asyncio.start_server(handle_connection, '0.0.0.0', 8888, loop=loop)
server = loop.run_until_complete(coro)

# Serve requests until Ctrl+C is pressed
print('MARS Device server serving on {}'.format(server.sockets[0].getsockname()))
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass

server.close()
loop.run_until_complete(server.wait_closed())
loop.close()
Both are basically copied from the asyncio documentation samples for streamed communication, except for the additional assignment of an event loop for threading.
How can I make this delay go away?
Turns out, the problem was in Windows DNS resolution.
Changing the URL from my computer name to 127.0.0.1 immediately killed the delays.
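For illustration, in the sample client above the change amounts to passing the loopback address directly to open_connection:

# Bypass the slow Windows name lookup by using the loopback IP directly.
reader, writer = yield from asyncio.open_connection('127.0.0.1', 8888, loop=loop)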
