pyqtSignals not emitted in QThread worker - multithreading

I have an implementation of a BackgroundTask object that looks like the following:
from PyQt5.QtCore import QObject, pyqtSignal

class BackgroundTask(QObject):
    '''
    A utility class that makes running long-running tasks in a separate thread easier

    :type task: callable
    :param task: The task to run
    :param args: positional arguments to pass to task
    :param kwargs: keyword arguments to pass to task

    .. warning :: There is one **MAJOR** restriction on task: it **cannot** interact with any Qt GUI objects,
        as doing so will cause the GUI to crash. This is a limitation of Qt's threading model, not of
        this class's design.
    '''
    finished = pyqtSignal()  #: Signal that is emitted when the task has finished running

    def __init__(self, task, *args, **kwargs):
        super(BackgroundTask, self).__init__()
        self.task = task      #: The callable that does the actual task work
        self.args = args      #: positional arguments passed to task when it is called
        self.kwargs = kwargs  #: keyword arguments passed to task when it is called
        self.results = None   #: After :attr:`finished` is emitted, this will contain whatever value :attr:`task` returned

    def runTask(self):
        '''
        Does the actual calling of :attr:`task`, in the form ``task(*args, **kwargs)``, and stores the returned value
        in :attr:`results`
        '''
        flushed_print('Running Task')  # flushed_print: print-and-flush helper defined elsewhere
        self.results = self.task(*self.args, **self.kwargs)
        flushed_print('Got My Results!')
        flushed_print('Emitting Finished!')
        self.finished.emit()

    def __repr__(self):
        return '<BackgroundTask(task = {}, args = {}, kwargs = {})>'.format(self.task, self.args, self.kwargs)

    @staticmethod
    def build_and_run_background_task(thread, finished_notifier, task, *args, **kwargs):
        '''
        Factory method that builds a :class:`BackgroundTask` and runs it on a thread in one call

        :type finished_notifier: callable
        :param finished_notifier: Callback that will be called when the task has completed its execution. Signature: ``func()``
        :rtype: :class:`BackgroundTask`
        :return: The created :class:`BackgroundTask` object, which will be running in its thread.
            Once finished_notifier has been called, the :attr:`results` attribute of the returned :class:`BackgroundTask` should contain
            the return value of the input task callable.
        '''
        flushed_print('Setting Up Background Task To Run In Thread')
        bg_task = BackgroundTask(task, *args, **kwargs)
        bg_task.moveToThread(thread)
        bg_task.finished.connect(thread.quit)
        thread.started.connect(bg_task.runTask)
        thread.finished.connect(finished_notifier)
        thread.start()
        flushed_print('Thread Started!')
        return bg_task
As my docstrings indicate, this should allow me to pass an arbitrary callable and its arguments to build_and_run_background_task, and upon completion of the task it should call the callable passed as finished_notifier and kill the thread. However, when I run it with the following as finished_notifier
def done():
    flushed_print('Done!')
I get the following output:
Setting Up Background Task To Run In Thread
Thread Started!
Running Task
Got My Results!
Emitting Finished!
And that's it. The finished_notifier callback is never executed, and the thread's quit method is never called, suggesting the finished signal in BackgroundTask isn't actually being emitted. If, however, I connect to finished directly and call runTask directly (not in a thread), everything works as expected. I'm sure I just missed something stupid, any suggestions?

Figured out the problem myself: I needed to call qApp.processEvents() at the point where another part of the application was waiting for this operation to finish. I had been testing on the command line as well, which had masked the same problem.
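For illustration, a minimal sketch of why the fix works (the wait loop below is hypothetical, not from my actual code): the thread that waits for the task must keep pumping the Qt event loop, or the queued finished signal is never delivered to it.

    from PyQt5.QtWidgets import qApp

    def wait_for_background_task(bg_task):
        done = []
        bg_task.finished.connect(lambda: done.append(True))
        while not done:
            # Without this call, the cross-thread (queued) signal delivery
            # never runs, and this loop spins forever.
            qApp.processEvents()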

Related

Dart: Store heavy object in an isolate and access its methods from the main isolate without re-instantiating it

Is it possible in Dart to instantiate a class in an isolate, and then send messages to this isolate to receive return values from its methods (instead of spawning a new isolate and re-instantiating the same class every time)? I have a class with a long initialization and heavy methods. I want to initialize it once and then access its methods without compromising the performance of the main isolate.
Edit: I mistakenly answered this question thinking Python rather than Dart. Snakes on the brain / snakes on a plane.
I am not familiar with Dart programming, but it would seem the concurrency model has a lot of similarities (isolated memory, message passing, etc.). I was able to find an example of two-way message passing with a Dart isolate. There's a little difference in how it gets set up, and the streams are a bit simpler than Python Queues, but in general the idea is the same.
Basically:
Create a port to receive data from the isolate
Create the isolate passing it the port it will send data back on
Within the isolate, create the port it will listen on, and send the other end of it back to main (so main can send messages)
Determine and implement a simple messaging protocol for remote method call on an object contained within the isolate.
This is basically duplicating what a multiprocessing.Manager class does, however it can be helpful to have a simplified example of how it can work:
from multiprocessing import Process, Lock, Queue
from time import sleep

class HeavyObject:
    def __init__(self, x):
        self._x = x
        sleep(5)  # heavy init

    def heavy_method(self, y):
        sleep(.2)  # medium-weight method
        return self._x + y

def HO_server(in_q, out_q):
    ho = HeavyObject(5)
    # msg format for remote method call: ("method_name", (arg1, arg2, ...), {"kwarg1": 1, "kwarg2": 2, ...})
    # pass None to exit worker cleanly
    for msg in iter(in_q.get, None):  # get a remote call message from the queue
        out_q.put(getattr(ho, msg[0])(*msg[1], **msg[2]))  # call the method with the args, and put the result back on the queue

class RMC_helper:  # remote method caller, for convenience
    def __init__(self, in_queue, out_queue, lock):
        self.in_q = in_queue
        self.out_q = out_queue
        self.l = lock
        self.method = None

    def __call__(self, *args, **kwargs):
        if self.method is None:
            raise Exception("no method to call")
        with self.l:  # isolate access to the queues so results don't pile up and get popped off in possibly the wrong order
            print("put to queue: ", (self.method, args, kwargs))
            self.in_q.put((self.method, args, kwargs))
            return self.out_q.get()

    def __getattr__(self, name):
        if not name.startswith("__"):
            self.method = name
            return self
        else:
            raise AttributeError(name)  # don't intercept dunder lookups (e.g. during pickling)

def child_worker(remote):
    print("child", remote.heavy_method(5))  # prints 10
    sleep(3)  # child works on something else
    print("child", remote.heavy_method(2))  # prints 7

if __name__ == "__main__":
    in_queue = Queue()
    out_queue = Queue()
    lock = Lock()  # lock is used so as not to confuse which reply goes to which request
    remote = RMC_helper(in_queue, out_queue, lock)
    Server = Process(target=HO_server, args=(in_queue, out_queue))
    Server.start()
    Worker = Process(target=child_worker, args=(remote, ))
    Worker.start()
    print("main", remote.heavy_method(3))  # this will *probably* start first due to startup time of the child
    Worker.join()
    with lock:
        in_queue.put(None)
    Server.join()
    print("done")

How to make a group from tasks with bind = True?

Task with bind=True
I have a celery task that runs a computation lasting a few seconds.
from celery import states

@celery.task(name="crunch.task", bind=True)
def crunch(self, data):
    try:
        pass
        # ... run computation with data here
    except Exception as exc:
        self.update_state(
            state=states.FAILURE,
            meta={"details": "error details here"}
        )
        raise exc
The important feature here is I'm using bind=True which passes the task into the function as the self parameter. This allows access to the task's update_state method which is great for error handling.
Group job
I now want to run this task in a batch job using celery.group.
#celery.task(name="batch.task", bind=True)
def batch(self, data_list):
try:
# HERE! IT SEEM WRONG TO PASS `SELF` INTO THE CHILD TASKS
job = group([crunch(self, data) for data in data_list]) # <-- here `self` should be created by celery!
job.async_apply()
except Exception as exc:
self.update_state(
state=states.FAILURE,
meta={"details": "error details here"}
)
raise exc
How can I make a celery.group composed of tasks with bind=True?
You need to work with signatures:
    crunch.si(data)
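For example, a sketch of the batch task built from signatures (.si() creates an immutable signature that ignores any parent result; use .s() if you want results chained):

    from celery import group

    @celery.task(name="batch.task", bind=True)
    def batch(self, data_list):
        # celery creates and binds `self` for each child task when it runs;
        # we only pass the data, never our own `self`
        job = group(crunch.si(data) for data in data_list)
        job.apply_async()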

Python's asyncio.Event() across different classes

I'm writing a Python program to interact with a device based on a CAN Bus. I'm using the python-can module successfully for this purpose. I'm also using asyncio to react to asynchronous events. I have written a "CanBusManager" class that is used by the "CanBusSequencer" class. The "CanBusManager" class takes care of generating/sending/receiving messages, and the CanBusSequencer drives the sequence of messages to be sent.
At some point in the sequence I want to wait until a specific message is received to "unlock" the remaining messages to be sent in the sequence. Overview in code:
main.py
async def main():
    event = asyncio.Event()
    sequencer = CanBusSequencer(event)
    task = asyncio.create_task(sequencer.doSequence())
    await task

asyncio.run(main(), debug=True)
canBusSequencer.py
from canBusManager import CanBusManager

class CanBusSequencer:
    def __init__(self, event):
        self.event = event
        self.canManager = CanBusManager(event)

    async def doSequence(self):
        for index, row in self.df_sequence.iterrows():
            if ...:
                self.canManager.sendMsg(...)
            else:
                self.canManager.sendMsg(...)
                await self.event.wait()
                self.event.clear()
canBusManager.py
import can

class CanBusManager():
    def __init__(self, event):
        self.event = event
        self.startListening()

    def startListening(self):
        self.msgNotifier = can.Notifier(self.canBus, self.receivedMsgCallback)

    def receivedMsgCallback(self, msg):
        if(msg == ...):
            self.event.set()
For now my program hangs at the await self.event.wait(), even though the relevant message is received and self.event.set() is executed. Running the program with debug=True reveals a
RuntimeError: Non-thread-safe operation invoked on an event loop other than the current one
that I don't really get. It has to do with the asyncio event loop somehow not being properly defined/managed. I'm coming from the C++ world and I'm currently writing my first large program with Python. Any guidance would be really appreciated :)
Your question doesn't explain how you arrange for receivedMsgCallback to be invoked.
If it is invoked by a classic "async" API which uses threads behind the scenes, then it will be invoked from outside the thread that runs the event loop. According to the documentation, asyncio primitives are not thread-safe, so invoking event.set() from another thread doesn't properly synchronize with the running event loop, which is why your program doesn't wake up when it should.
If you want to do anything asyncio-related, such as invoke Event.set, from outside the event loop thread, you need to use call_soon_threadsafe or equivalent. For example:
def receivedMsgCallback(self, msg):
    if msg == ...:
        self.loop.call_soon_threadsafe(self.event.set)
The event loop object should be made available to the CanBusManager object, perhaps by passing it to its constructor and assigning it to self.loop.
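A minimal sketch of that wiring (the extra constructor parameter is this answer's suggestion, not part of the original code):

    # main.py
    async def main():
        event = asyncio.Event()
        loop = asyncio.get_running_loop()        # the loop running in this thread
        sequencer = CanBusSequencer(event, loop)
        await sequencer.doSequence()

    # canBusManager.py
    class CanBusManager:
        def __init__(self, event, loop):
            self.event = event
            self.loop = loop                     # kept for thread-safe wake-ups
            self.startListening()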
On a side note, if you are creating a task only to await it immediately, you don't need a task in the first place. In other words, you can replace task = asyncio.create_task(sequencer.doSequence()); await task with the simpler await sequencer.doSequence().

Python: asyncio loops with threads

Could you tell me if this is a correct approach for building several independent async loops inside their own threads?
import asyncio
import threading

def init():
    print("Initializing Async...")
    global loop_heavy
    loop_heavy = asyncio.new_event_loop()
    start_loop(loop_heavy)

def start_loop(loop):
    thread = threading.Thread(target=loop.run_forever)
    thread.start()

def submit_heavy(task):
    future = asyncio.run_coroutine_threadsafe(task, loop_heavy)
    try:
        future.result()
    except Exception as e:
        print(e)

def stop():
    loop_heavy.call_soon_threadsafe(loop_heavy.stop)

async def heavy():
    print("3. heavy start %s" % threading.current_thread().name)
    await asyncio.sleep(3)  # or await asyncio.sleep(3, loop=loop_heavy) -- the loop= argument was removed in Python 3.10
    print("4. heavy done")
Then I am testing it with:
if __name__ == "__main__":
init()
print("1. submit heavy: %s" % threading.current_thread().name)
submit_heavy(heavy())
print("2. submit is done")
stop()
I am expecting to see 1->3->2->4 but in fact it is 1->3->4->2:
Initializing Async...
1. submit heavy: MainThread
3. heavy start Thread-1
4. heavy done
2. submit is done
I think I'm missing something in my understanding of async and threads.
The threads are different, so why does MainThread wait until the job inside Thread-1 is finished?
Why am I waiting inside MainThread until the job inside Thread-1 is finished?
Good question, why are you?
One possible answer is, because you actually want to block the current thread until the job is finished. This is one of the reasons to put the event loop in another thread and use run_coroutine_threadsafe.
The other possible answer is that you don't have to if you don't want. You can simply return from submit_heavy() the concurrent.futures.Future object returned by run_coroutine_threadsafe, and leave it to the caller to wait for the result (or check if one is ready) at their own leisure.
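A sketch of that variant, reusing the code above:

    def submit_heavy(task):
        # return the concurrent.futures.Future instead of blocking on it
        return asyncio.run_coroutine_threadsafe(task, loop_heavy)

    future = submit_heavy(heavy())
    print("2. submit is done")  # no longer waits for heavy() to finish
    result = future.result()    # block only when the result is actually needed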
Finally, if your goal is just to run a regular function "in the background" (without blocking the current thread), perhaps you don't need asyncio at all. Take a look at the concurrent.futures module, whose ThreadPoolExecutor allows you to easily submit a function to a thread pool and leave it to execute unassisted.
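A minimal sketch of that route (heavy_sync is a hypothetical blocking function; no asyncio involved):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def heavy_sync():
        time.sleep(3)  # stand-in for the blocking work
        return "heavy done"

    with ThreadPoolExecutor() as pool:
        future = pool.submit(heavy_sync)   # returns immediately
        print("main thread keeps running")
        print(future.result())             # wait only when the result is needed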
I will add one of the possible solutions that I found in the asyncio documentation.
I'm not sure it's the correct way, but it works as expected (the MainThread is not blocked by the execution of the child thread).
Running Blocking Code
Blocking (CPU-bound) code should not be called directly. For example, if a function performs a CPU-intensive calculation for 1 second, all concurrent asyncio Tasks and IO operations would be delayed by 1 second.
An executor can be used to run a task in a different thread or even in a different process to avoid blocking the OS thread with the event loop. See the loop.run_in_executor() method for more details.
Applying to my code:
import asyncio
import threading
import concurrent.futures
import multiprocessing
import time

def init():
    print("Initializing Async...")
    global loop, thread_executor_pool
    thread_executor_pool = concurrent.futures.ThreadPoolExecutor(max_workers=multiprocessing.cpu_count())
    loop = asyncio.get_event_loop()
    thread = threading.Thread(target=loop.run_forever)
    thread.start()

def submit_task(task, *args):
    loop.run_in_executor(thread_executor_pool, task, *args)

def stop():
    loop.call_soon_threadsafe(loop.stop)
    thread_executor_pool.shutdown()

def blocked_task(msg1, msg2):
    print("3. task start msg: %s, %s, thread: %s" % (msg1, msg2, threading.current_thread().name))
    time.sleep(3)
    print("4. task is done -->")

if __name__ == "__main__":
    init()
    print("1. --> submit task: %s" % threading.current_thread().name)
    submit_task(blocked_task, "a", "b")
    print("2. --> submit is done")
    stop()
Output:
Initializing Async...
1. --> submit task: MainThread
3. task start msg: a, b, thread: ThreadPoolExecutor-0_0
2. --> submit is done
4. task is done -->
Correct me if there are still any mistakes, or if it can be done in another way.

Gif shown inside a label doesn't update itself regularly in PyQt5

I have a loading widget that consists of two labels: one is the status label and the other is the label the animated gif will be shown in. If I call the show() method before the heavy stuff gets processed, the gif in the loading widget doesn't update itself at all. There's nothing wrong with the gif itself btw (no looping problems etc.). The main code (caller) looks like this:
self.loadingwidget = LoadingWidgetForm()
self.setCentralWidget(self.loadingwidget)
self.loadingwidget.show()
...
# heavy stuff
...
self.loadingwidget.hide()
The widget class:
from threading import Thread

from PyQt5.QtCore import Qt, QByteArray, QSize
from PyQt5.QtGui import QMovie
from PyQt5.QtWidgets import QWidget, QApplication
# SysUtils and LoadingWidget (the generated UI class) come from the project itself

class LoadingWidgetForm(QWidget, LoadingWidget):
    def __init__(self, parent=None):
        super().__init__(parent=parent)
        self.setupUi(self)
        self.setWindowFlags(self.windowFlags() | Qt.FramelessWindowHint)
        self.setAttribute(Qt.WA_TranslucentBackground)
        pince_directory = SysUtils.get_current_script_directory()  # returns current working directory
        self.movie = QMovie(pince_directory + "/media/loading_widget_gondola.gif", QByteArray())
        self.label_Animated.setMovie(self.movie)
        self.movie.setScaledSize(QSize(50, 50))
        self.movie.setCacheMode(QMovie.CacheAll)
        self.movie.setSpeed(100)
        self.movie.start()
        self.not_finished = True
        self.update_thread = Thread(target=self.update_widget)
        self.update_thread.daemon = True

    def showEvent(self, QShowEvent):
        QApplication.processEvents()
        self.update_thread.start()

    def hideEvent(self, QHideEvent):
        self.not_finished = False

    def update_widget(self):
        while self.not_finished:
            QApplication.processEvents()
As you can see, I tried to create a separate thread to avoid the workload, but it didn't make any difference. Then I tried my luck with the QThread class by overriding the run() method, but that also didn't work. Executing QApplication.processEvents() inside the heavy stuff does work, however. I also think I shouldn't be using separate threads; I feel like there should be a more elegant way to do this. The widget looks like this btw:
(screenshot: the loading widget, showing the animated gif next to a "Processing..." status label; a full version of the gif was attached in the original post)
Thanks in advance! Have a good day.
Edit: I can't move the heavy stuff to a different thread due to bugs in pexpect. Pexpect's spawn() method requires the spawned object and any operations related to it to stay in the same thread. I don't want to change the working flow of the whole program.
In order to update GUI animations, the main Qt loop (located in the main GUI thread) has to be running and processing events. The Qt event loop can only process a single event at a time, however because handling these events typically takes a very short time control is returned rapidly to the loop. This allows the GUI updates (repaints, including animation etc.) to appear smooth.
A common example is having a button to initiate loading of a file. The button press creates an event which is handled and passed off to your code (either via events directly, or via signals). Now the main thread is in your long-running code, and the event loop is stalled; it will stay stalled until the long-running job (e.g. the file load) is complete.
You're correct that you can solve this with threads, but you've gone about it backwards. You want to put your long-running code in a thread (not your call to processEvents). In fact, calling (or interacting with) the GUI from another thread is a recipe for a crash.
The simplest way to work with threads is to use QRunnable and QThreadPool. This allows for multiple execution threads. The following wall of code gives you a custom Worker class that makes it simple to handle this. I normally put this in a file threads.py to keep it out of the way:
import sys
import traceback

from PyQt5.QtCore import QObject, QRunnable, pyqtSignal, pyqtSlot

class WorkerSignals(QObject):
    '''
    Defines the signals available from a running worker thread.

    error
        `tuple` (exctype, value, traceback.format_exc())
    result
        `dict` data returned from processing
    '''
    finished = pyqtSignal()
    error = pyqtSignal(tuple)
    result = pyqtSignal(dict)

class Worker(QRunnable):
    '''
    Worker thread

    Inherits from QRunnable to handle worker thread setup, signals and wrap-up.

    :param callback: The function callback to run on this worker thread. Supplied args and
                     kwargs will be passed through to the runner.
    :type callback: function
    :param args: Arguments to pass to the callback function
    :param kwargs: Keywords to pass to the callback function
    '''
    def __init__(self, fn, *args, **kwargs):
        super(Worker, self).__init__()
        # Store constructor arguments (re-used for processing)
        self.fn = fn
        self.args = args
        self.kwargs = kwargs
        self.signals = WorkerSignals()

    @pyqtSlot()
    def run(self):
        '''
        Initialise the runner function with passed args, kwargs.
        '''
        # Retrieve args/kwargs here; and fire processing using them
        try:
            result = self.fn(*self.args, **self.kwargs)
        except:
            traceback.print_exc()
            exctype, value = sys.exc_info()[:2]
            self.signals.error.emit((exctype, value, traceback.format_exc()))
        else:
            self.signals.result.emit(result)  # Return the result of the processing
        finally:
            self.signals.finished.emit()  # Done
To use the above, you need a QThreadPool to handle the threads. You only need to create this once, for example during application initialisation.
threadpool = QThreadPool()
Now, create a worker by passing in the Python function to execute:
from .threads import Worker  # our custom Worker class

worker = Worker(fn=<Python function>)  # create a Worker object
Now attach signals to get back the result, or be notified of an error:
worker.signals.error.connect(<Python function error handler>)
worker.signals.result.connect(<Python function result handler>)
Then, to execute this Worker, you can just pass it to the QThreadPool.
threadpool.start(worker)
Everything will take care of itself, with the result of the work returned to the connected signal... and the main GUI loop will be free to do its thing!
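Putting it together, a minimal hypothetical usage (execute_this_fn stands in for your own long-running function; this assumes a QApplication event loop is running so the queued signals get delivered):

    import time
    from PyQt5.QtCore import QThreadPool

    def execute_this_fn(n):
        time.sleep(n)            # stand-in for the heavy work
        return {"slept": n}

    threadpool = QThreadPool()
    worker = Worker(fn=execute_this_fn, n=3)
    worker.signals.result.connect(print)                    # prints {'slept': 3}
    worker.signals.finished.connect(lambda: print("done"))
    threadpool.start(worker)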
