According to the docs, Pika is not thread-safe:
Pika does not have any notion of threading in the code. If you want to use Pika with threading, make sure you have a Pika connection per thread, created in that thread. It is not safe to share one Pika connection across threads, with one exception: you may call the connection method add_callback_threadsafe from another thread to schedule a callback within an active pika connection.
Let's say I have a subscriber which I have started using channel.start_consuming(). That thread will be blocked waiting for messages to arrive. These messages might be a long time apart (hours sometimes).
Surely if I want to safely / cleanly shut down the subscriber, I must do so from another thread? Or else how can I trigger the consumer to break out of blocking?
You could use connection.process_data_events() instead of just channel.start_consuming(). The advantage here is that you could do something like this to close a connection.
from time import sleep

def consume_messages(self):
    while self.running:
        self.connection.process_data_events()
        sleep(0.1)
    self.connection.close()
You would then just close the connection by setting self.running to False.
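If you would rather keep the blocking channel.start_consuming(), the add_callback_threadsafe exception quoted from the docs above gives another clean shutdown path: from the other thread, schedule channel.stop_consuming() to run on the connection's own thread. A minimal sketch, assuming the consumer stores its connection and channel as attributes:

def consume(self):
    # Runs in the consumer thread and blocks until stop_consuming() is called
    self.channel.start_consuming()

def request_shutdown(self):
    # Called from any other thread; the callback itself executes on the
    # connection's thread, which is why this is the one safe cross-thread call
    self.connection.add_callback_threadsafe(self.channel.stop_consuming)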
Related
I have a network application which is listening on multiple sockets.
To handle each socket individually, I use Python's threading.Thread module.
These sockets must be able to run tasks on packet reception without delaying any further packet reception from the socket handling thread.
To do so, I've declared the method(s) that run the previously mentioned tasks with the keyword async so I can run them asynchronously with asyncio.run(my_async_task(my_parameters)).
I have tested this approach on a single socket (running on the main thread) with great success.
But when I use multiple sockets (each one with its own independent handler thread), the following exception is raised:
ValueError: set_wakeup_fd only works in main thread
My question is the following: Is asyncio the appropriate tool for what I need? If it is, how do I run an async method from a thread that is not a main thread.
Most of my search results involve "event loops" and "awaiting" async results, which (if I understand them correctly) is not what I am looking for.
I am talking about sockets in this question to provide context but my problem is mostly about the behaviour of asyncio in child threads.
I can, if needed, write a short code sample to reproduce the error.
Thank you for the help!
Edit1, here is a minimal reproducible code example:
import asyncio
import threading
import time

# Handle a specific packet from any socket without interrupting the listening thread
async def handle_it(val):
    print("handled: {}".format(val))

# A class to simulate a threaded socket listener
class MyFakeSocket(threading.Thread):
    def __init__(self, val):
        threading.Thread.__init__(self)
        self.val = val  # Value for a fake received packet

    def run(self):
        for i in range(10):
            # The (fake) socket will sequentially receive [val, val+1, ... val+9]
            asyncio.run(handle_it(self.val + i))
            time.sleep(0.5)

# Entry point
sockets = MyFakeSocket(0), MyFakeSocket(10)
for socket in sockets:
    socket.start()
This is possibly related to the bug discussed here: https://bugs.python.org/issue34679
If so, this would be a problem with Python 3.8 on Windows. To work around it, you could try downgrading to Python 3.7, or skip asyncio.run() and create and run the event loop manually inside each thread, like:
loop = asyncio.new_event_loop()   # a loop owned by this thread
asyncio.set_event_loop(loop)
loop.run_until_complete(<your tasks>)
loop.close()
Otherwise, would you be able to run the code in a docker container? This might work for you and would then be detached from the OS behaviour, but is a lot more work!
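For reference, here is a sketch of that manual-loop approach applied to the reproducible example above; each thread creates and owns its own event loop instead of calling asyncio.run() (the structure mirrors the question, nothing else is assumed):

import asyncio
import threading
import time

async def handle_it(val):
    print("handled: {}".format(val))

class MyFakeSocket(threading.Thread):
    def __init__(self, val):
        threading.Thread.__init__(self)
        self.val = val

    def run(self):
        # One event loop per thread, created inside the thread that uses it
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            for i in range(10):
                loop.run_until_complete(handle_it(self.val + i))
                time.sleep(0.5)
        finally:
            loop.close()

sockets = MyFakeSocket(0), MyFakeSocket(10)
for socket in sockets:
    socket.start()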
Pika states it is not thread-safe and not to share one connection across multiple threads.
(I think) I am running one thread per connection, which should be OK, but the wording of other answers suggests that there may be a subtle difference between 'running one thread per connection' and 'running one connection per thread'.
My goal is to have a consumer that listens for RMQ messages and, when a message is received, does some work which takes time. The work logic itself is not multithreaded as it executes 'synchronously'. The exchange and queue were set up manually; I am just writing a consumer.
However, because the work takes time (URL calls), I currently have each callback create and start a single thread:
import random
import string
import threading
import time

import pika

class MyThread(threading.Thread):
    def run(self):
        name = random.choice(string.ascii_letters)
        print(f"Thread {name} execution started.")
        # Simulate URL calls
        time.sleep(random.randrange(1, 5))
        print(f"Thread {name} execution ended.")

class MyClass():
    def connect(self, url, queue):
        connection = pika.BlockingConnection(
            pika.connection.URLParameters(url)
        )
        channel = connection.channel()
        channel.basic_consume(queue=queue, on_message_callback=self.callback)
        # Infinite loop that waits for incoming messages
        channel.start_consuming()

    def callback(self, ch, method, properties, body):
        thread = MyThread()
        thread.start()
        # Not sure of this ACK and how to NACK
        ch.basic_ack(delivery_tag=method.delivery_tag)
When executing the program the threads complete in different orders which is what I expect.
Thread H execution started.
Thread E execution started.
Thread G execution started.
Thread j execution started.
Thread E execution ended.
Thread j execution ended.
Thread H execution ended.
Thread G execution ended.
My understanding is that the following implementation (which I am not using) would cause thread-safety issues:

def callback(self, ch, method, properties, body):
    thread1 = MyThread()
    thread2 = MyThread()
    thread3 = MyThread()
    thread1.start()
    thread2.start()
    thread3.start()
Is my implementation thread safe? If not, how would I implement a thread safe version?
EDIT: I added my implementation of an ACK; I'm not sure how to implement add_callback_threadsafe.
Your code appears fine. When you acknowledge a message from the thread spawned by the callback, be sure to use this method that uses add_callback_threadsafe.
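A minimal sketch of that pattern, assuming the worker thread is handed the blocking connection, the channel and the delivery tag (the parameter names here are illustrative, not from your code):

import functools
import threading
import time

class MyThread(threading.Thread):
    def __init__(self, connection, channel, delivery_tag):
        super().__init__()
        self.connection = connection
        self.channel = channel
        self.delivery_tag = delivery_tag

    def run(self):
        time.sleep(2)  # simulate the slow URL calls
        # Never touch the channel directly from this thread; instead ask the
        # connection's own thread to perform the ack
        ack = functools.partial(self.channel.basic_ack, delivery_tag=self.delivery_tag)
        self.connection.add_callback_threadsafe(ack)

def callback(self, ch, method, properties, body):
    # self.connection would need to be stored in connect() for this to work
    MyThread(self.connection, ch, method.delivery_tag).start()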
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
I am making a bot for telegram, this bot will use a database (SQLite3).
I am familiar with threads and locks and I know that it is safe to launch multiple threads that query the database.
My problem arises when I want to update/insert data.
Using Condition and Event from the threading module, I can prevent new threads from accessing the database while a thread is updating/inserting data.
What I haven't figured out is how to wait until all the threads that are currently accessing the database are done before updating/inserting data.
If I could get the count of a semaphore I would just wait for it to drop to 0, but since that is not possible, what approach should I use?
UPDATE: I can't use join() since I am using a telegram bot and create threads dynamically with each request to my bot, so when a thread is created I don't know whether I'll have to wait for it to end or not.
CLARIFICATION: join() can only be used if, at the start of a thread, you know whether you'll have to wait for it to end or not. Since I create a thread for each client request and I don't know in advance what they'll ask or when the request will be done, I can't know whether to use join() or not.
UPDATE2: Here is the code regarding the locks. I haven't finished the code regarding the database since I am more concerned with the locks and it doesn't seem relevant to the question.
lock = threading.Lock()
evLock = threading.Event()

def addBehaviours(dispatcher):
    evLock.set()
    # (2) Fetch the list of events
    events_handler = CommandHandler('events', events)
    dispatcher.add_handler(events_handler)
    # (3) Add a new event
    addEvent_handler = CommandHandler('addEvent', addEvent)
    dispatcher.add_handler(addEvent_handler)

# (2) Fetch the list of events
@run_async
def events(bot, update):
    evLock.wait()
    # fetchEvents()

# (3) Add a new event
@run_async
def addEvent(bot, update):
    with lock:
        evLock.clear()
        # addEvent()
        evLock.set()
You can use threading.Thread.join(). This will wait for a thread to end and only continue on when the thread is done.
Usage below:
import threading as thr

thread1 = thr.Thread()  # some thread to be waited for
thread2 = thr.Thread()  # something that runs after thread1 finishes

thread1.start()  # start up this thread
thread1.join()   # wait until this thread finishes
thread2.start()
...
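If the threads are created dynamically per request, one way to still use join() is to keep the spawned reader threads in a list and have the writer join a snapshot of that list before touching the database. A minimal sketch, with all names illustrative (new readers started after the snapshot would still need to be held back, e.g. with the Event from your question):

import threading

readers_lock = threading.Lock()
readers = []  # reader threads currently alive

def start_reader(task, *args):
    # Spawn a reader thread for an incoming request and remember it
    t = threading.Thread(target=task, args=args)
    with readers_lock:
        readers.append(t)
    t.start()
    return t

def write_to_db(do_write):
    # Wait for every reader spawned so far, then perform the write
    with readers_lock:
        current = list(readers)
    for t in current:
        t.join()
    do_write()  # the actual INSERT/UPDATE
    with readers_lock:
        for t in current:
            readers.remove(t)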
I'm putting together a client server app on RPi. It has a main thread which creates a comms thread to talk to an iOS device.
The main thread creates an asyncio event loop and a sendQ and a recvQ and passes them as args to the commsDelegate main method in the comms thread.
The trouble I'm having is that when the iOS device connects, it needs to receive unsolicited data from this Python app as soon as the data becomes available, and it needs to be able to send data up to the Python app. So send and receive need to be non-blocking.
There are great echo server tutorials out there. But little in terms of the server doing something useful with the data.
Can anyone assist me in getting asyncio to read my send queue and forward data as soon as the main thread has queued it? I have receive working great.
Main Thread creates a loop and starts the comms thread:
commsLoop = asyncio.new_event_loop()
commsMainThread = threading.Thread(target=CommsDelegate.commsDelegate, args=(commsInQ,commsOutQ,commsLoop,commsPort,), daemon=True)
commsMainThread.start()
Then, inside the CommsDelegate module, asyncio should run the loop with loop.run_forever(), with a server task reading from and writing to a socket stream and passing messages back up to the main thread via the queues.
Here's my code so far. I found that if I create a factory for the protocol, I can pass it the queues as args, so receipt of messages is all good now. When messages arrive from the client they are queued with put_nowait and the main thread receives them just fine.
I just need asyncio to handle the queue of outbound messages from the Main thread as they arrive on sendQ, so it can send them on to the connected client.
#!/usr/bin/env python3.6
import asyncio

class ServerProtocol(asyncio.Protocol):

    def __init__(self, loop, recvQ, sendQ):
        self.loop = loop
        self.recvQ = recvQ
        self.sendQ = sendQ

    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('Connection from {}'.format(peername))
        self.transport = transport

    def data_received(self, data):
        message = data.decode()
        print('Data received: {!r}'.format(message))
        self.recvQ.put_nowait(message.rstrip())

    # Needs work... I think the queue.get_nowait should be a co-ro maybe?
    def unknownAtTheMo(self):
        dataToSend = self.sendQ.get_nowait()
        print('Send: {!r}'.format(dataToSend))
        self.transport.write(dataToSend)

    # Needs work to close on request from client or server or exc...
    def handleCloseSocket(self):
        print('Close the client socket')
        self.transport.close()

# async co-routine to consume the send message Q from Main Thread
async def consume(sendQ):
    print("In consume coro")
    while True:
        outboundData = await self.sendQ.get()
        print("Consumed", outboundData)
        self.transport.write(outboundData.encode('ascii'))

def commsDelegate(recvQ, sendQ, loop, port):
    asyncio.set_event_loop(loop)

    # Connection coroutine - create a factory so the protocol receives the queues as args
    factory = lambda: ServerProtocol(loop, recvQ, sendQ)

    # Each client connection will create a new protocol instance
    connection = loop.run_until_complete(loop.create_server(factory, host='192.168.1.199', port=port))

    # Outgoing message queue handler
    consumer = asyncio.ensure_future(consume(sendQ))

    # Set up connection
    loop.run_until_complete(connection)

    # Wait until the connection is closed
    loop.run_forever()

    # Wait until the queue is empty
    loop.run_until_complete(sendQ.join())

    # Cancel the consumer
    consumer.cancel()

    # Let the consumer terminate
    loop.run_until_complete(consumer)

    # Close the connection
    connection.close()

    # Close the loop
    loop.close()
I send all data messages as JSON, and CommsDelegate performs the encode and decode then relays them as-is.
Update: asyncio thread seems to be working well for incoming traffic. Server receives json and relays it via a queue - non-blocking.
Once the send is working, I'll have a reusable blackbox server on a thread.
I can see two problems with your approach. First, all your clients are using the same recv and send queues, so there is no way the consume coroutine can know who to reply to.
The second issue has to do with your use of queues as a bridge between the synchronous and the asynchronous worlds. See this part of your code:
await self.sendQ.get()
If sendQ is a regular queue (from the queue module), this line will fail because Queue.get() is a blocking call that does not return an awaitable. On the other hand, if sendQ is an asyncio.Queue, the main thread won't be able to use sendQ.put because it is a coroutine. It would be possible to use put_nowait, but thread-safety is not guaranteed in asyncio. Instead, you'd have to use loop.call_soon_threadsafe:
loop.call_soon_threadsafe(sendQ.put_nowait, message)
In general, remember that asyncio is designed to run as the main application. It's supposed to run in the main thread, and communicate with synchronous code through a ThreadPoolExecutor (see loop.run_in_executor).
More information about multithreading in the asyncio documentation. You might also want to have a look at the asyncio stream API that provides a much nicer interface to work with TCP.
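To make the bridge concrete, here is a minimal sketch assuming sendQ is an asyncio.Queue owned by the comms loop and the main thread holds references to both the loop and the queue (all names are illustrative):

import asyncio

def main_thread_send(loop, sendQ, message):
    # Called from the main (synchronous) thread: hand the message over to the
    # asyncio thread without touching the queue directly
    loop.call_soon_threadsafe(sendQ.put_nowait, message)

async def consume(sendQ, transport):
    # Runs inside the asyncio thread: drain the queue and write to the client
    while True:
        outboundData = await sendQ.get()
        transport.write(outboundData.encode('ascii'))

# Wiring inside the comms thread once a connection exists (sketch):
# sendQ = asyncio.Queue()
# asyncio.ensure_future(consume(sendQ, transport))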
I have been fighting with Websockify for the last few days trying to make it work. There is no apparent documentation, so I end up doing things by trial & error.
I have a server which runs on two threads. One thread always sends and receives information while the second thread does other work. However I can't seem to make the two threads talk with each other.
#!/usr/bin/env python

from websocket import WebSocketServer
from threading import Thread
from time import sleep

class Server(WebSocketServer):
    a = 10

    def new_client(self):
        while True:
            sleep(1)
            print("Thread 2: ", self.a)

server = Server('', 9017)
Thread(target=server.start_server).start()

# Main thread continues
while 1:
    sleep(1)
    server.a += 2
    print("Main thread: ", server.a)
Output:
Main thread: 18
Thread 2: 16
Main thread: 20
Thread 2: 16
Main thread: 22
Thread 2: 16
Main thread: 24
Thread 2: 16
Obviously the two threads don't share the same attribute a. Why?
By default websockify spawns a new process for each new client connection (websockify connections tend to be long-lived so the process creation overhead isn't generally an issue). This provides some security isolation to reduce the risk that bugs in websockify can be exploited to allow one client to listen in or otherwise affect other client connections.
You can find the process creation code in the top_new_client method. There is an option called --run-once that will handle a single client in the same process; however, it is designed to exit the main loop in top_new_client after a single connection. You could remove the break statement in the self.run_once conditional check, but then you won't be able to connect more than one client at a time; perhaps that is sufficient for what you are trying to do.
I also have some unpushed in-progress code to switch WebSocketServer to be more like the HTTPServer class where you provide your own threading or multiprocessing mixin. If you think that might help, let me know and I can push that out to a branch.
Another option for your case would be to use some form of IPC communication to communicate between each client process and the parent process.
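As a sketch of that IPC option, one approach would be to keep the shared state in a multiprocessing.Value created before the server starts, so that forked client processes see the same underlying memory. This assumes websockify forks the client handler from the parent process; the layout mirrors the question and the names are otherwise unchanged:

from multiprocessing import Value
from threading import Thread
from time import sleep

from websocket import WebSocketServer

shared_a = Value('i', 10)  # lives in shared memory, visible to forked children

class Server(WebSocketServer):
    def new_client(self):
        while True:
            sleep(1)
            print("Thread 2: ", shared_a.value)

server = Server('', 9017)
Thread(target=server.start_server).start()

while True:
    sleep(1)
    with shared_a.get_lock():  # guard the read-modify-write
        shared_a.value += 2
    print("Main thread: ", shared_a.value)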