PyQt - QThread.sleep(...) in separate thread blocks UI - multithreading

First I'll give a short description of the user interface and its functions before diving into the problem and the code. Sorry in advance, but I am unable to provide the complete code (and even if I could, it's...a lot of lines :D).
Description of the UI and what it does
I have a custom QWidget, or to be more precise N instances of that custom widget, aligned in a grid layout. Each instance of the widget has its own QThread, which holds a worker QObject and a QTimer. In terms of UI components the widget contains two important elements - a QLabel, which visualizes a status, and a QPushButton, which either starts (by triggering a start() slot in the worker) or stops (by triggering a stop() slot in the worker) an external process. Both slots contain a 5 s delay and also disable the push button during their execution. The worker itself not only controls the external process (through the two slots mentioned above) but also checks whether the process is running via a status() slot, which is triggered by the QTimer every 1 s. As mentioned, both the worker and the timer live inside the thread! (I have double-checked that by printing the thread ID of the main thread (where the UI is) and of each worker - they are 100% different from the main one.)
In order to reduce the number of calls from the UI to the worker and vice versa, I decided to declare the _status attribute of my Worker class (which holds the state of the external process - inactive, running, error) as a Q_PROPERTY with a setter, a getter and a notify signal, the last being emitted from within the setter if and only if the value has changed from the old one. My previous design was much more signal/slot intensive, since the status was emitted literally every second.
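In isolation the pattern looks roughly like this (a minimal, self-contained sketch; the status constants and the property name procStatus are illustrative, not my actual code):
from PyQt5.QtCore import QObject, pyqtSignal, pyqtProperty

# Illustrative status codes; the real project uses a ProcStatus enum
INACTIVE, RUNNING, ERROR = 0, 1, 2

class Worker(QObject):
    statusChanged_signal = pyqtSignal(int)

    def __init__(self, parent=None):
        super().__init__(parent)
        self._status = INACTIVE

    def getStatus(self):
        return self._status

    def setStatus(self, status):
        # The notify signal fires only when the value actually changes
        if self._status == status:
            return
        self._status = status
        self.statusChanged_signal.emit(self._status)

    # Q_PROPERTY equivalent in PyQt: reads stay local to the worker,
    # the UI only reacts to the notify signal
    procStatus = pyqtProperty(int, fget=getStatus, fset=setStatus,
                              notify=statusChanged_signal)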
Now it's time for some code. I have reduced the code only to the parts which I deem to provide enough info and the location where the problem occurs:
Inside the QWidget
# ...
def createWorker(self):
    # Create thread
    self.worker_thread = QThread()
    # Create worker and connect the UI to it
    self.worker = None
    if self.pkg:
        self.worker = Worker(self.cmd, self.pkg, self.args)
    else:
        self.worker = Worker(cmd=self.cmd, pkg=None, args=self.args)
    # Trigger attempt to recover previous state of external process
    QTimer.singleShot(1, self.worker.recover)
    self.worker.statusChanged_signal.connect(self.statusChangedReceived)
    self.worker.block_signal.connect(self.block)
    self.worker.recover_signal.connect(self.recover)
    self.start_signal.connect(self.worker.start)
    self.stop_signal.connect(self.worker.stop)
    self.clear_error_signal.connect(self.worker.clear_error)
    # Create a timer which will trigger the status slot of the worker every 1s
    # (the status slot sends back status updates to the UI, see statusChangedReceived(self, status))
    self.timer = QTimer()
    self.timer.setInterval(1000)
    self.timer.timeout.connect(self.worker.status)
    # Connect the thread to the worker and timer
    self.worker_thread.finished.connect(self.worker.deleteLater)
    self.worker_thread.finished.connect(self.timer.deleteLater)
    self.worker_thread.started.connect(self.timer.start)
    # Move the worker and timer to the thread...
    self.worker.moveToThread(self.worker_thread)
    self.timer.moveToThread(self.worker_thread)
    # Start the thread
    self.worker_thread.start()
@pyqtSlot(int)
def statusChangedReceived(self, status):
    '''
    Update the UI based on the status of the running process
    :param status - status of the process started and monitored by the worker
    Following values for status are possible:
    - INACTIVE/FINISHED - visual indicator is set to INACTIVE icon; this state indicates that the process has stopped running (without error) or has never been started
    - RUNNING - visual indicator is set to RUNNING icon; this state indicates that the process was started successfully
    - FAILED_START - occurs if the attempt to start the process has failed
    - FAILED_STOP - occurs if the process wasn't stopped from the UI but externally (normal exit or crash)
    '''
    #print(' --- main thread ID: %d ---' % QThread.currentThreadId())
    if status == ProcStatus.INACTIVE or status == ProcStatus.FINISHED:
        pass  # ...
    elif status == ProcStatus.RUNNING:
        pass  # ...
    elif status == ProcStatus.FAILED_START:
        pass  # ...
    elif status == ProcStatus.FAILED_STOP:
        pass  # ...
@pyqtSlot(bool)
def block(self, block_flag):
    '''
    Enable/Disable the button which starts/stops the external process
    This slot is used to prevent the user from interacting with the UI while a start/stop procedure is being executed
    After the respective procedure has been completed the button will be enabled again
    :param block_flag - enable/disable flag for the button
    '''
    self.execute_button.setDisabled(block_flag)
# ...
Inside the Worker
# ...
@pyqtSlot()
def start(self):
    self.block_signal.emit(True)
    if not self.active and not self.pid:
        self.active, self.pid = QProcess.startDetached(self.cmd, self.args, self.dir_name)
        QThread.sleep(5)
        # Check if launching the external process was successful
        if not self.active or not self.pid:
            self.setStatus(ProcStatus.FAILED_START)
            self.block_signal.emit(False)
            self.cleanup()
            return
        self.writePidToFile()
        self.setStatus(ProcStatus.RUNNING)
    self.block_signal.emit(False)

@pyqtSlot()
def stop(self):
    self.block_signal.emit(True)
    if self.active and self.pid:
        try:
            kill(self.pid, SIGINT)
            QThread.sleep(5)  # <----------------------- UI freezes here
        except OSError:
            self.setStatus(ProcStatus.FAILED_STOP)
            self.cleanup()
        self.active = False
        self.pid = None
        self.setStatus(ProcStatus.FINISHED)
    self.block_signal.emit(False)

@pyqtSlot()
def status(self):
    if self.active and self.pid:
        running = self.checkProcessRunning(self.pid)
        if not running:
            self.setStatus(ProcStatus.FAILED_STOP)
            self.cleanup()
            self.active = False
            self.pid = None

def setStatus(self, status):
    if self._status == status:
        return
    #print(' --- main thread ID: %d ---' % QThread.currentThreadId())
    self._status = status
    self.statusChanged_signal.emit(self._status)
And now about my problem: I have noticed that the UI freezes ONLY when the stop() slot is triggered and the execution goes through QThread.sleep(5). I thought this should affect start() as well, but even with multiple instances of my widget (each controlling its own thread with a worker and timer living in it) all running at once, start() works as intended - the push button used to trigger the start() and stop() slots gets disabled for 5 seconds and then enabled again. When stop() is triggered this doesn't happen at all.
I really cannot explain this behaviour. What's even worse is that the status updates that I am emitting through the Q_PROPERTY setter self.setStatus(...) get delayed due to this freezing, which leads to some extra calls of my cleanup() function, which basically deletes a generated file.
Any idea what is going on here? The nature of a signal and slot is that once a signal is emitted, the slot connected to it is called right away. And since the UI runs in a different thread than the worker, I don't see why all this is happening.

I actually corrected the spot the problem was coming from in my question. In my original code I had forgotten the @ before the pyqtSlot() decorator of my stop() function. After adding it, everything works perfectly fine. I had NO IDEA that such a thing could cause such a huge problem!
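For completeness, this is the shape of the fix (a minimal sketch, not my full Worker class). I can't say for sure why the missing decorator makes the difference, but with it PyQt registers stop() as a true Qt slot, and the sleep then clearly runs in the worker thread instead of blocking the GUI:
from PyQt5.QtCore import QObject, QThread, pyqtSignal, pyqtSlot

class Worker(QObject):
    block_signal = pyqtSignal(bool)

    @pyqtSlot()              # <-- this decorator was the missing piece
    def stop(self):
        self.block_signal.emit(True)
        # ... stop the external process ...
        QThread.sleep(5)     # with the decorator the sleep stays in the worker thread
        self.block_signal.emit(False)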

Related

Running Python’s SimpleHTTPServer in a GUI

I am writing a GUI wrapper around Python's SimpleHTTPServer. The GUI uses tkinter; when I click on the OK button, it launches the web server.
The web server code is based on the example at https://docs.python.org/3/library/http.server.html. Part of it is:
with socketserver.TCPServer(("", PORT), Handler) as httpd:
    print("serving at port", PORT)
    httpd.serve_forever()
It all works as expected so far, but when httpd.serve_forever() runs, I get the spinning beachball and I can't close the window or quit. I can force quit and try again, which is not convenient. The server does do its job, however.
If I run the same code from the command line (a non-GUI version), I can easily interrupt it with Ctrl-C; I can catch that and exit more gracefully.
How can I interrupt the running server more politely from the GUI?
You will need to run the server in a thread or a separate process, since both the web server and the UI need separate event loops.
If you want the server to communicate with the tkinter program, you'll need to set up a queue. As a general rule, you should only access tkinter objects from the thread that they were created in. However, I think it's safe to send virtual events from a worker thread to the GUI thread, which you can use to cause the GUI thread to read data from the queue.
For example, here's a simple threaded server. It must be passed the host and port, a reference to the root window, and a reference to the queue where information can be sent to the GUI.
class ExampleServer(threading.Thread):
    def __init__(self, host, port, gui, queue):
        threading.Thread.__init__(self)
        self.host = host
        self.port = port
        self.gui = gui
        self.queue = queue
        self.daemon = True

    def run(self):
        print(f"Listening on http://{self.host}:{self.port}\n")
        server = HTTPServer((self.host, self.port), ExampleHandler)
        server.serve_forever()
In your request handler, you can push items on the queue and then generate an event on the root window. It might look something like this:
class ExampleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # handle the request
        ...
        # notify the UI
        self.queue.put("anything you want")
        self.gui.event_generate("<<DataAvailable>>")
Your gui also needs to take the queue as an argument and needs to set a binding to the virtual event. It might look something like this:
class ExampleGUI(tk.Tk):
    def __init__(self, queue):
        super().__init__()
        self.queue = queue
        # set up the rest of the GUI
        ...
        # bind to the virtual event
        self.bind("<<DataAvailable>>", self.poll_queue)

    def poll_queue(self, event):
        while not self.queue.empty():
            data = self.queue.get_nowait()
            # Do whatever you need to do with the data
            ...

    def start(self):
        self.mainloop()
Finally, you tie it all together with something like this:
if __name__ == "__main__":
    queue = queue.Queue()
    gui = ExampleGUI(queue)
    server = ExampleServer("localhost", 8910, gui, queue)
    server.start()
    gui.start()

How to implement custom timeout for function that connects to server

I want to establish a connection with a b0 client for Coppelia Sim using the Python API. Unfortunately, this connection function does not have a timeout and will run indefinitely if it fails to connect.
To counter that, I tried moving the connection to a separate process (multiprocessing) and checking after a couple of seconds whether the process is still alive. If it is, I kill the process and continue with the program.
This sort of works, as it does not block my program anymore; however, the process does not stop when the connection is successfully made, so it gets killed even when the connection succeeds.
How can I fix this and also write the b0client to the global variable?
def connection_function():
    global b0client
    b0client = b0RemoteApi.RemoteApiClient('b0RemoteApi_pythonClient', 'b0RemoteApi', 60)
    print('Success!')
    return 0

def establish_b0_connection(timeout):
    connection_process = multiprocessing.Process(target=connection_function)
    connection_process.start()
    # Wait for [timeout] seconds or until process finishes
    connection_process.join(timeout=timeout)
    # If thread is still active
    if connection_process.is_alive():
        print('[INITIALIZATION OF B0 API CLIENT FAILED]')
        # Terminate - may not work if process is stuck for good
        connection_process.terminate()
        # OR Kill - will work for sure, no chance for process to finish nicely however
        # connection_process.kill()
        connection_process.join()
        print('[CONTINUING WITHOUT B0 API CLIENT]')
        return False
    else:
        return True

if __name__ == '__main__':
    b0client = None
    establish_b0_connection(timeout=5)
    # Continue with the code, with or without connection.
    # ...
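One detail worth noting: a global assigned inside the child process is not visible in the parent, because processes do not share memory. A minimal sketch of one way to report the result back through a multiprocessing.Queue instead is below; the make_connection stand-in is purely illustrative and not part of the b0RemoteApi API.
import multiprocessing
import time

def make_connection():
    # Illustrative stand-in for the real (possibly never-returning) connect call.
    time.sleep(1)
    return object()

def connection_function(result_queue):
    client = make_connection()
    result_queue.put(True)   # report success to the parent process
    # The client object itself usually cannot be sent through the queue
    # (it is not picklable), so either keep using it inside this process
    # or reconnect in the parent once you know the server is reachable.

def establish_connection(timeout):
    result_queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=connection_function, args=(result_queue,))
    proc.start()
    proc.join(timeout=timeout)
    if proc.is_alive():
        proc.terminate()
        proc.join()
        return False
    return not result_queue.empty()

if __name__ == '__main__':
    print(establish_connection(timeout=5))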

How to shut down CherryPy if there are no incoming connections for a specified time?

I am using CherryPy to speak to an authentication server. The script runs fine if all the inputted information is fine. But if the user makes a mistake typing their ID, the internal HTTP error screen fires OK, but the server keeps running and nothing else in the script will run until the CherryPy engine is closed, so I have to kill the script manually. Is there some code I can put in the index along the lines of
if timer > 10 and connections == 0:
    close_cherrypy()  # I have a method for this already
I'm mostly a data mangler, so not used to web servers. Googling shows lots of hits for closing CherryPy when there are too many connections, but not when there have been no connections for a specified (short) time. I realise the point of a web server is usually to hang around waiting for connections, so this may be an odd case. All the same, any help is welcome.
Interesting use case. You can use the CherryPy plugin infrastructure to do something like that; take a look at this ActivityMonitor plugin implementation. It shuts down the server if it is not handling anything and hasn't seen any request in a specified amount of time (in this case 10 seconds).
Maybe you have to adjust the logic of how to shut it down, or do anything else, in the _verify method.
If you want to read a bit more about the publish/subscribe architecture, take a look at the CherryPy docs.
import time
import threading

import cherrypy
from cherrypy.process.plugins import Monitor


class ActivityMonitor(Monitor):
    def __init__(self, bus, wait_time, monitor_time=None):
        """
        bus: cherrypy.engine
        wait_time: Seconds since last request that we consider to be active.
        monitor_time: Seconds that we'll wait before verifying the activity.
                      If it is not defined, wait half the `wait_time`.
        """
        if monitor_time is None:
            # if monitor time is not defined, then verify half
            # the wait time since the last request
            monitor_time = wait_time / 2
        super().__init__(
            bus, self._verify, monitor_time, self.__class__.__name__
        )
        # use a lock to make sure the thread that triggers the before_request
        # and after_request does not collide with the monitor method (_verify)
        self._active_request_lock = threading.Lock()
        self._active_requests = 0
        self._wait_time = wait_time
        self._last_request_ts = time.time()

    def _verify(self):
        # verify that we don't have any active requests and
        # shutdown the server in case we haven't seen any activity
        # since self._last_request_ts + self._wait_time
        with self._active_request_lock:
            if (not self._active_requests and
                    self._last_request_ts + self._wait_time < time.time()):
                self.bus.exit()  # shutdown the engine

    def before_request(self):
        with self._active_request_lock:
            self._active_requests += 1

    def after_request(self):
        with self._active_request_lock:
            self._active_requests -= 1
            # update the last time a request was served
            self._last_request_ts = time.time()


class Root:
    @cherrypy.expose
    def index(self):
        return "Hello user: current time {:.0f}".format(time.time())


def main():
    # here is how to use the plugin:
    ActivityMonitor(cherrypy.engine, wait_time=10, monitor_time=5).subscribe()
    cherrypy.quickstart(Root())


if __name__ == '__main__':
    main()

Multiprocessing: share an object (socket) between processes

I would like to create a Process that stores many objects (connections to devices via sockets).
I have a GUI (PyQt5) that should show information about the progress of the processes and the status of the devices. An example tells more:
# Process1
def conf1():
    dev = some_signal_that_ask_about_dev("device1")
    conf_dev(dev)
    return_device("device1", dev)

# Process2
def conf2():
    dev = some_signal_that_ask_about_dev("device2")
    do_sth_withd_dev(dev)
    return_device("device2", dev)

# Process3
class DevicesHolder(object):
    def __init__(self):
        self.devices = {
            "device1": Device1("192.168.1.1", 8080),
            "device2": Device2("192.168.1.2", 8081)
        }

    def some_signal_that_ask_about_dev(self, dev_name):
        if self.devices[dev_name]:
            dev = self.devices[dev_name]
            # this device is taken by a process.
            # If the process takes the device and fails, the device should be recreated!
            self.devices[dev_name] = None
            return dev

    def return_device(self, dev_name, dev):
        self.devices[dev_name] = dev

    def get_status_of_devices(self):
        # Check connection to devices and return response
        pass

# Process 4:
# GUI:
get_status_of_devices()
So process1 and process2 do some work and send progress to the GUI. I would also like to have info about the device status.
Why not just create a local object in each process and send info from that process?
A process runs for a few seconds, while the app runs for minutes. I want to know that there is a connection problem before I press the start button, not have everything fail later because of the connection.
I think I am complicating a simple problem. Help me!
More info
Every process configures something else, but over the same connection.
I would like this to work as quickly as possible.
It will run on Linux, but I care about multi-platform support.
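Since socket connections generally cannot be pickled and handed between processes, one common pattern is to keep them in a single owner process and let the other processes talk to it through queues. Below is a minimal, self-contained sketch of that request/response loop; DummyDevice and the command strings are illustrative and not tied to any real device API.
import multiprocessing as mp

class DummyDevice:
    """Stand-in for a real socket-backed device (illustrative only)."""
    def __init__(self, host, port):
        self.host, self.port = host, port

    def execute(self, command):
        return "{}:{} ran {}".format(self.host, self.port, command)

def device_holder(requests, responses):
    # This process owns the device objects; the sockets never cross the
    # process boundary, only commands and results do.
    devices = {
        "device1": DummyDevice("192.168.1.1", 8080),
        "device2": DummyDevice("192.168.1.2", 8081),
    }
    while True:
        dev_name, command = requests.get()
        if dev_name is None:          # shutdown sentinel
            break
        responses.put(devices[dev_name].execute(command))

if __name__ == "__main__":
    requests, responses = mp.Queue(), mp.Queue()
    holder = mp.Process(target=device_holder, args=(requests, responses))
    holder.start()
    requests.put(("device1", "configure"))
    print(responses.get())            # result computed inside the holder process
    requests.put((None, None))
    holder.join()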

Freeze when using tkinter + pyhook. Two event loops and multithreading

I am writing a tool in Python 2.7 that registers the number of times the user presses a keyboard or mouse button. The number of clicks is displayed in a small black box in the top left of the screen. The program registers clicks even when another application is the active one.
It works fine except when I move the mouse over the box. The mouse then freezes for a few seconds, after which the program works again. If I then move the mouse over the box a second time, the mouse freezes again, but this time the program crashes.
I have tried commenting out pumpMessages() and then the program works. The problem looks a lot like the question pyhook+tkinter=crash, but no solution was given there.
Other answers have shown that there is a bug in the DLL files when using wx and pyhook together in Python 2.6. I don't know if that is relevant here.
My own thought is that it might have something to do with the two event loops running in parallel. I have read that tkinter isn't thread safe, but I can't see how I can make this program run in a single thread, since I need to have both pumpMessages() and mainloop() running.
To sum it up: Why does my program freeze on mouse over?
import pythoncom, pyHook, time, ctypes, sys
from Tkinter import *
from threading import Thread

print 'Welcome to APMtool. To exit the program press delete'

## Creating input hooks
# the function called when a MouseAllButtonsUp event is called
def OnMouseUpEvent(event):
    global clicks
    clicks += 1
    updateCounter()
    return True

# the function called when a KeyUp event is called
def OnKeyUpEvent(event):
    global clicks
    clicks += 1
    updateCounter()
    if (event.KeyID == 46):
        killProgram()
    return True

hm = pyHook.HookManager()  # create a hook manager
# watch for mouseUp and keyUp events
hm.SubscribeMouseAllButtonsUp(OnMouseUpEvent)
hm.SubscribeKeyUp(OnKeyUpEvent)
clicks = 0
hm.HookMouse()  # set the hook
hm.HookKeyboard()

## Creating the window
root = Tk()
label = Label(root, text='something', background='black', foreground='grey')
label.pack(pady=0)  # no space around the label
root.wm_attributes("-topmost", 1)  # always the top window
root.overrideredirect(1)  # removes the 'Windows 7' box around the label

## Starting a new thread to run pumpMessages() and mainloop() simultaneously
def startRootThread():
    root.mainloop()

def updateCounter():
    label.configure(text=clicks)

def killProgram():
    ctypes.windll.user32.PostQuitMessage(0)  # stops pumpMessages
    root.destroy()  # stops the root widget
    rootThread.join()
    print 'rootThread stopped'

rootThread = Thread(target=startRootThread)
rootThread.start()

pythoncom.PumpMessages()  # pumpMessages is an infinite loop waiting for events
print 'PumpMessages stopped'
I've solved this problem with multiprocessing:
- the main process handles the GUI (MainThread) and a thread that consumes messages from the second process
- a child process hooks all mouse/keyboard events and pushes them to the main process (via a Queue object)
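Roughly, the layout looks like this. This is only a sketch: the pyHook part is replaced by a timed stub so it runs anywhere, the names are made up, and it relies on event_generate being safe to call from a non-GUI thread, which (as noted elsewhere on this page) is believed but not guaranteed. It uses the Python 2 Tkinter import to match the question.
import multiprocessing
import threading
import time
import Tkinter as tk   # Python 2, matching the question

def hook_process(event_queue):
    # In the real program this child process would create the pyHook
    # HookManager, subscribe to mouse/keyboard events and call
    # pythoncom.PumpMessages(), putting one item on the queue per click.
    # Here a timed stub stands in for the hooks so the sketch runs anywhere.
    while True:
        time.sleep(1)
        event_queue.put(1)

def consume(event_queue, counter, root):
    # Thread inside the GUI process: it drains the queue and only pokes the
    # GUI with a virtual event instead of touching any widget directly.
    while True:
        event_queue.get()
        counter["clicks"] += 1
        root.event_generate("<<Clicked>>", when="tail")

if __name__ == "__main__":
    counter = {"clicks": 0}
    events = multiprocessing.Queue()

    root = tk.Tk()
    label = tk.Label(root, text="0", background="black", foreground="grey")
    label.pack()
    root.bind("<<Clicked>>", lambda e: label.configure(text=str(counter["clicks"])))

    hooks = multiprocessing.Process(target=hook_process, args=(events,))
    hooks.daemon = True
    hooks.start()

    consumer = threading.Thread(target=consume, args=(events, counter, root))
    consumer.daemon = True
    consumer.start()

    root.mainloop()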
From the information that Tkinter needs to run in the main thread and must not be called outside this thread, I found a solution:
My problem was that both PumpMessages and mainloop needed to run in the main thread. In order to both receive inputs and show a Tkinter label with the number of clicks, I need to switch between running PumpMessages and briefly running mainloop to update the display.
To make mainloop() quit itself I used:
root.after(100, root.quit)  # root is the name of the Tk()
root.mainloop()
so after 100 milliseconds root calls its quit method and breaks out of its own main loop.
To break out of PumpMessages I first found the ID of the main thread:
mainThreadId = win32api.GetCurrentThreadId()
I then used a new thread that sends WM_QUIT to the main thread (note: PostQuitMessage(0) only works if it is called in the main thread):
win32api.PostThreadMessage(mainThreadId, win32con.WM_QUIT, 0, 0)
It was then possible to create a while loop which alternates between PumpMessages and mainloop, updating the label text in between. Now that the two event loops are no longer running simultaneously, I have had no problems:
def startTimerThread():
    while True:
        win32api.PostThreadMessage(mainThreadId, win32con.WM_QUIT, 0, 0)
        time.sleep(1)

mainThreadId = win32api.GetCurrentThreadId()
timerThread = Thread(target=startTimerThread)
timerThread.start()

while programRunning:
    label.configure(text=clicks)
    root.after(100, root.quit)
    root.mainloop()
    pythoncom.PumpMessages()
Thank you to Bryan Oakley for information about Tkinter and Boaz Yaniv for providing the information needed to stop pumpMessages() from a subthread
Tkinter isn't designed to be run from any thread other than the main one. It might help to put the GUI in the main thread and put the call to PumpMessages in a separate thread. Though you have to be careful and not call any Tkinter functions from the other thread (except perhaps event_generate).
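A rough sketch of that arrangement, with the caveats above (whether event_generate is truly safe to call from the hook thread is not guaranteed, and the widget update happens only in the GUI thread via the bound virtual event):
import threading
import Tkinter as tk

clicks = [0]

def hook_thread(root):
    # The hooks are installed in the same thread that pumps the messages.
    import pythoncom, pyHook

    def on_any_event(event):
        clicks[0] += 1
        root.event_generate("<<Clicked>>", when="tail")
        return True            # let the event propagate further

    hm = pyHook.HookManager()
    hm.SubscribeMouseAllButtonsUp(on_any_event)
    hm.SubscribeKeyUp(on_any_event)
    hm.HookMouse()
    hm.HookKeyboard()
    pythoncom.PumpMessages()

root = tk.Tk()
label = tk.Label(root, text="0", background="black", foreground="grey")
label.pack()
root.bind("<<Clicked>>", lambda e: label.configure(text=str(clicks[0])))

pump = threading.Thread(target=hook_thread, args=(root,))
pump.daemon = True
pump.start()

root.mainloop()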
