PyQt5: How to set a priority to a pyqtSignal? - python-3.x

1. Background info
I'm working in Python 3.7. PyQt5, the Python binding for Qt, lets you define and emit custom signals. For example:
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *

class MyClass(QObject):
    mysignal = pyqtSignal(str)

    def __init__(self):
        super().__init__()
        self.mysignal.connect(self.bar)

    def foo(self):
        self.mysignal.emit("foobar")

    @pyqtSlot(str)
    def bar(self, mystr):
        print("signal received: {0}".format(mystr))
2. The problem
PyQt starts an event loop in the main thread: it waits for incoming events on a queue and processes them one by one. Most of these events are user-initiated things like pushing a button, clicking something, ...
If you fire PyQt signals programmatically, as in the foo() function above, you also push events onto this queue (I think). That shouldn't be a big deal, unless you fire too many signals in a short burst. The queue gets overwhelmed and user events don't get processed in time. The user sees a frozen GUI. Yikes!
3. Solution
One way to tackle this problem could be to assign a low priority to programmatically fired PyQt signals. Is this possible? How?
If not, do you know other ways to solve the problem?

In the case of a direct connection (sender and receiver in the same thread), the slot is called immediately when you emit your signal.
So, in your example, you could replace the emit with a direct call to self.bar.
But if your slot takes too long, the event loop still has to wait before it can process user events.
If your UI freezes when your slot runs, that means you should move the work to another thread so the event loop can keep processing user events.
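For illustration, here is a minimal sketch of that approach (the Worker class and its do_work slot are hypothetical names, not part of the question): the heavy work is moved to a QThread, so the main event loop only pays the cost of posting queued events.

from PyQt5.QtCore import QObject, QThread, pyqtSignal, pyqtSlot

class Worker(QObject):
    done = pyqtSignal(str)

    @pyqtSlot(str)
    def do_work(self, mystr):
        # ... long-running processing happens here, off the GUI thread ...
        self.done.emit("processed: {0}".format(mystr))

class MyClass(QObject):
    mysignal = pyqtSignal(str)

    def __init__(self):
        super().__init__()
        self.thread = QThread()
        self.worker = Worker()
        self.worker.moveToThread(self.thread)
        # Cross-thread connections default to queued delivery, so emitting
        # mysignal from the GUI thread never blocks on the work itself.
        self.mysignal.connect(self.worker.do_work)
        self.worker.done.connect(self.bar)
        self.thread.start()

    @pyqtSlot(str)
    def bar(self, mystr):
        print("signal received: {0}".format(mystr))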

Related

ValueError when asyncio.run() is called in separate thread

I have a network application which is listening on multiple sockets.
To handle each socket individually, I use Python's threading.Thread module.
These sockets must be able to run tasks on packet reception without delaying any further packet reception from the socket handling thread.
To do so, I've declared the methods that run the previously mentioned tasks with the async keyword so I can run them asynchronously with asyncio.run(my_async_task(my_parameters)).
I have tested this approach on a single socket (running on the main thread) with great success.
But when I use multiple sockets (each one with its own independent handler thread), the following exception is raised:
ValueError: set_wakeup_fd only works in main thread
My question is the following: is asyncio the appropriate tool for what I need? If it is, how do I run an async method from a thread that is not the main thread?
Most of my search results involve "event loops" and "awaiting" async results, which (if I understand these results correctly) is not what I am looking for.
I am talking about sockets in this question to provide context but my problem is mostly about the behaviour of asyncio in child threads.
I can, if needed, write a short code sample to reproduce the error.
Thank you for the help!
Edit 1: here is a minimal reproducible code example:
import asyncio
import threading
import time

# Handle a specific packet from any socket without delaying the listening thread
async def handle_it(val):
    print("handled: {}".format(val))

# A class to simulate a threaded socket listener
class MyFakeSocket(threading.Thread):
    def __init__(self, val):
        threading.Thread.__init__(self)
        self.val = val  # value for a fake received packet

    def run(self):
        for i in range(10):
            # The (fake) socket sequentially receives val, val+1, ..., val+9
            asyncio.run(handle_it(self.val + i))
            time.sleep(0.5)

# Entry point
sockets = MyFakeSocket(0), MyFakeSocket(10)
for socket in sockets:
    socket.start()
This is possibly related to the bug discussed here: https://bugs.python.org/issue34679
If so, this is a problem with Python 3.8 on Windows. To work around it, you could try downgrading to Python 3.7, or skip asyncio.run() and create and drive the event loop manually, for example:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_until_complete(<your tasks>)
loop.close()
Otherwise, could you run the code in a Docker container? That would detach you from the OS behaviour, but it is a lot more work!
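As a sketch of that manual approach applied to the example above (assuming a Python build where an event loop can be created outside the main thread; the linked bug affected early 3.8 releases on Windows), each thread can own and drive its own loop:

import asyncio
import threading
import time

async def handle_it(val):
    print("handled: {}".format(val))

class MyFakeSocket(threading.Thread):
    def __init__(self, val):
        threading.Thread.__init__(self)
        self.val = val

    def run(self):
        # Create and register a loop for *this* thread instead of calling
        # asyncio.run(), which also performs main-thread-only setup.
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            for i in range(10):
                loop.run_until_complete(handle_it(self.val + i))
                time.sleep(0.5)
        finally:
            loop.close()

sockets = MyFakeSocket(0), MyFakeSocket(10)
for socket in sockets:
    socket.start()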

Process finishes but cannot be joined?

To accelerate a certain task, I'm subclassing Process to create a worker that will process data coming in as samples. A managing class feeds it data and reads the outputs (using two Queue instances). For asynchronous operation I'm using put_nowait and get_nowait. At the end I'm sending a special exit code to my process, upon which it breaks its internal loop. However... that never happens. Here's a minimal reproducible example:
import multiprocessing as mp

class Worker(mp.Process):
    def __init__(self, in_queue, out_queue):
        super(Worker, self).__init__()
        self.input_queue = in_queue
        self.output_queue = out_queue

    def run(self):
        while True:
            received = self.input_queue.get(block=True)
            if received is None:
                break
            self.output_queue.put_nowait(received)
        print("\tWORKER DEAD")

class Processor():
    def __init__(self):
        # prepare
        in_queue = mp.Queue()
        out_queue = mp.Queue()
        worker = Worker(in_queue, out_queue)
        # get to work
        worker.start()
        in_queue.put_nowait(list(range(10**5)))  # XXX
        # clean up
        print("NOTIFYING")
        in_queue.put_nowait(None)
        #out_queue.get()  # XXX
        print("JOINING")
        worker.join()

Processor()
This code never completes, hanging permanently like this:
NOTIFYING
JOINING
WORKER DEAD
Why?
I've marked two lines with XXX. In the first one, if I send less data (say, 10**4), everything will finish normally (processes join as expected). Similarly in the second, if I get() after notifying the workers to finish. I know I'm missing something but nothing in the documentation seems relevant.
Documentation mentions that
When an object is put on a queue, the object is pickled and a background thread later flushes the pickled data to an underlying pipe. This has some consequences [...] After putting an object on an empty queue there may be an infinitesimal delay before the queue’s empty() method returns False and get_nowait() can return without raising queue.Empty.
https://docs.python.org/3.7/library/multiprocessing.html#pipes-and-queues
and additionally that
whenever you use a queue you need to make sure that all items which have been put on the queue will eventually be removed before the process is joined. Otherwise you cannot be sure that processes which have put items on the queue will terminate.
https://docs.python.org/3.7/library/multiprocessing.html#multiprocessing-programming
This means that the behaviour you describe is probably caused by a race between self.output_queue.put_nowait(received) in the worker and joining the worker with worker.join() in Processor's __init__: as the quoted documentation says, a process that has put items on a queue will not terminate until the background feeder thread has flushed those items to the underlying pipe. With less data (10**4) the pipe's buffer can absorb everything, so the flush completes and the worker joins as expected; with more data the feeder blocks because nothing is reading from the pipe, and the worker never joins.
Uncommenting the out_queue.get() in the main process empties the queue, which allows joining. But since it is important for get() to return even when the queue happens to be empty, using a timeout is a way to wait out the race, e.g. out_queue.get(timeout=10).
It may also be important to protect the main routine, especially for Windows (python multiprocessing on windows, if __name__ == "__main__").
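A minimal sketch of that fix, reusing the Worker class from the question (the one-second timeout is an arbitrary choice): drain the output queue before joining, and guard the entry point.

import multiprocessing as mp
import queue  # only for the queue.Empty exception

class Processor():
    def __init__(self):
        in_queue = mp.Queue()
        out_queue = mp.Queue()
        worker = Worker(in_queue, out_queue)  # the Worker class from the question
        worker.start()
        in_queue.put_nowait(list(range(10**5)))
        print("NOTIFYING")
        in_queue.put_nowait(None)
        # Drain results *before* joining: the worker cannot exit until its
        # queue feeder thread has flushed every pending item into the pipe.
        while True:
            try:
                out_queue.get(timeout=1)
            except queue.Empty:
                break
        print("JOINING")
        worker.join()

if __name__ == "__main__":
    Processor()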

Adding item to python thread's queue from within a function with the thread

I'm building a robot head that listens and speaks via an AIML bot whilst a camera picks out faces with opencv2. I have various threads running so that it can look, listen, speak and move its mouth, eyes and entire head with servos simultaneously. The threads need to interact with each other and I have code that does this. This "listening" thread needs to pass data back to a main module. This is working fine:
class listen(threading.Thread):
    def __init__(self, workQueue, queueLock):
        super(listen, self).__init__()
        self.workQueue = workQueue
        self.queueLock = queueLock

    def run(self):  # "listening" thread
        with sr.Microphone() as source:
            # do lots of things......
            self.workQueue.put(some_data)  # put data in queue
My problem is that my "listening" thread calls a function say(), and it is from within this function that I need to do the workQueue.put() rather than within run() above. I cannot fathom the syntax for referencing self.workQueue.put(some_data) from within the say() function, or whether it is indeed possible. Hope this makes sense.
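For what it's worth, a minimal sketch of the usual fix (say() and its signature are hypothetical, reconstructed from the question): pass the queue, or the thread object itself, into the function as an argument rather than trying to reach self from inside it.

import threading
import speech_recognition as sr  # as implied by sr.Microphone() in the question

def say(text, work_queue):
    # ... speak the text ...
    work_queue.put(text)  # the same queue object the thread holds

class listen(threading.Thread):
    def __init__(self, workQueue, queueLock):
        super(listen, self).__init__()
        self.workQueue = workQueue
        self.queueLock = queueLock

    def run(self):
        with sr.Microphone() as source:
            # hand the queue to the helper that needs it
            say("hello", self.workQueue)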

Python Notify when all files have been transferred

I am using "watchdog" api to keep checking changes in a folder in my filesystem. Whatever files changes in that folder, I pass them to a particular function which starts threads for each file I pass them.
But watchdog, or any other filesystem watcher api (in my knowledge), notifies users file by file i.e. as the files come by, they notify the user. But I would like it to notify me a whole bunch of files at a time so that I can pass that list to my function and take use of multi-threading. Currently, when I use "watchdog", it notifies me one file at a time and I am only able to pass that one file to my function. I want to pass it many files at a time to be able to have multithreading.
One solution that comes to my mind is: you see when you copy a bunch of files in a folder, OS shows you a progress bar. If it would be possible for me to be notified when that progress bar is done, then it would be a perfect solution for my question. But I don't know if that is possible.
Also I know that watchdog is a polling API, and an ideal API for watching filesystem would be interrupt driven api like pyinotify. But I didn't find any API which was interrupt driven and also cross platform. iWatch is good, but only for linux, and I want something for all OS. So, if you have suggestions on any other API, please do let me know.
Thanks.
Instead of accumulating filesystem events, you could spawn a pool of worker threads which get tasks from a common queue. The watchdog thread could then put tasks in the queue as filesystem events occur. Done this way, a worker thread can start working as soon as an event occurs.
For example,
import logging
import queue
import threading
import time
import watchdog.observers as observers
import watchdog.events as events

logger = logging.getLogger(__name__)

SENTINEL = None  # could be put on the queue to tell workers to shut down

class MyEventHandler(events.FileSystemEventHandler):
    def __init__(self, task_queue):
        self.task_queue = task_queue

    def on_any_event(self, event):
        super().on_any_event(event)
        self.task_queue.put(event)

def process(task_queue):
    while True:
        event = task_queue.get()
        logger.info(event)

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG,
                        format='[%(asctime)s %(threadName)s] %(message)s',
                        datefmt='%H:%M:%S')
    task_queue = queue.Queue()
    num_workers = 4
    pool = [threading.Thread(target=process, args=(task_queue,))
            for i in range(num_workers)]
    for t in pool:
        t.daemon = True
        t.start()

    event_handler = MyEventHandler(task_queue)
    observer = observers.Observer()
    observer.schedule(
        event_handler,
        path='/tmp/testdir',
        recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
Running
% mkdir /tmp/testdir
% script.py
yields output like
[14:48:31 Thread-1] <FileCreatedEvent: src_path=/tmp/testdir/.#foo>
[14:48:32 Thread-2] <FileModifiedEvent: src_path=/tmp/testdir/foo>
[14:48:32 Thread-3] <FileModifiedEvent: src_path=/tmp/testdir/foo>
[14:48:32 Thread-4] <FileDeletedEvent: src_path=/tmp/testdir/.#foo>
[14:48:42 Thread-1] <FileDeletedEvent: src_path=/tmp/testdir/foo>
[14:48:47 Thread-2] <FileCreatedEvent: src_path=/tmp/testdir/.#bar>
[14:48:49 Thread-4] <FileCreatedEvent: src_path=/tmp/testdir/bar>
[14:48:49 Thread-4] <FileModifiedEvent: src_path=/tmp/testdir/bar>
[14:48:49 Thread-1] <FileDeletedEvent: src_path=/tmp/testdir/.#bar>
[14:48:54 Thread-2] <FileDeletedEvent: src_path=/tmp/testdir/bar>
Doug Hellmann has written an excellent set of tutorials (since edited into a book) which should help you get started:
on using Queue
the threading module
how to set up and use a pool of worker processes
how to set up a pool of worker threads
I didn't actually end up using a multiprocessing Pool or ThreadPool as discussed in the last two links, but you may find them useful anyway.

gtk_main() and unix sockets

I'm working on a chat application using C and low-level Unix sockets. I have succeeded in making the console version, but I want to make a GUI for the application.
I would like to use GTK for the GUI.
My problem is how to "synchronize" the socket and the GUI: I have to call gtk_main() as the last GTK statement, yet the application itself is an infinite loop. How can I update the GUI when a message comes in?
You are facing the problem that you have several event systems at once but only one thread. Gtk+ comes with its own event handler, which eventually boils down to a select() that wakes up on any user input or other Gtk event. You yourself want to handle networking with your own event handling, which typically consists of a select() on your socket(s) or using the sockets in blocking mode.
One solution is to integrate your events into the event loop of Gtk+.
You can make Gtk+ watch/select() your sockets and call a specific function when their state changes (data readable).
See the section "Creating new source types" on http://developer.gnome.org/glib/2.30/glib-The-Main-Event-Loop.html
Another solution would be to use Gtk+'s networking functionality.
Typically you don't want to do anything so special with the sockets that it cannot easily be wrapped with GLib IO Channels. See http://developer.gnome.org/glib/2.30/glib-IO-Channels.html
A third solution is to start a second thread that handles your networking, e.g. with POSIX threads or Gtk+'s threading functionality.
Separating the GUI from the worker part of your application is in general a good idea, although for a chat application it probably gives no benefit over the other solutions. See http://developer.gnome.org/glib/2.30/glib-Threads.html
Here is an example in Python with PyGObject, using GLib.IOChannel to add a watch in the GTK main event loop.
One watch is for listening to new connections.
The other for receiving data.
This is adapted from this pygtk example: http://rox.sourceforge.net/desktop/node/413.html
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GLib
from socket import socket

def listener(io, cond, sock):
    # Accept the new connection and watch it for incoming data
    conn = sock.accept()[0]
    GLib.io_add_watch(GLib.IOChannel(conn.fileno()), 0,
                      GLib.IOCondition.IN, handler, conn)
    return True

def handler(io, cond, sock):
    print(sock.recv(1000))
    return True

s = socket()
s.bind(('localhost', 50555))
s.listen()
GLib.io_add_watch(GLib.IOChannel(s.fileno()), 0, GLib.IOCondition.IN, listener, s)
Gtk.main()
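For the third approach (a dedicated networking thread), here is a hedged sketch, again with PyGObject: the worker thread must not touch GTK widgets directly, so it hands each update to the main loop with GLib.idle_add, one of the few GLib calls that is safe to make from another thread.

import threading
from socket import socket
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GLib

def network_thread(sock, label):
    # Blocking reads happen here, off the GUI thread
    while True:
        data = sock.recv(1000)
        if not data:
            break
        # Schedule the widget update on the GTK main loop; set_text runs
        # in the main thread the next time the loop is idle.
        GLib.idle_add(label.set_text, data.decode(errors="replace"))

win = Gtk.Window(title="chat")
win.connect("destroy", Gtk.main_quit)
label = Gtk.Label(label="waiting...")
win.add(label)
win.show_all()

conn = socket()
conn.connect(('localhost', 50555))
threading.Thread(target=network_thread, args=(conn, label), daemon=True).start()
Gtk.main()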
