I am trying to understand synchronization and have the following code using a reentrant lock:
import threading
from time import sleep, ctime, time

class show:
    lock = threading.RLock()

    def __init__(self):
        self.x = 0

    def increment(self):
        show.lock.acquire()
        print("x=", self.x)
        # show.lock.acquire()
        self.x += 1
        show.lock.release()

class mythread(threading.Thread):
    def __init__(self, aa):
        super().__init__(group=None)
        self.obj = aa

    def run(self):
        for i in range(0, 100):
            self.obj.increment()

ss = show()
ss1 = show()
one = mythread(ss)
two = mythread(ss)
one.start()
two.start()
Now if I run the code as above, things work fine and I get output from 0 to 199. But if I uncomment the line where we re-acquire the lock, the output is only 0 to 99. Why does this change? How does re-acquiring the lock change the output?
After uncommenting, one of the threads is blocked by the other, which still holds a hundred nested acquisitions of the lock on class show even after it terminates. You should always match the number of acquire and release calls, even when using recursive (aka reentrant) locks.
Check Wikipedia or the docs for the RLock definition. The latter say:
To unlock the lock, a thread calls its release() method.
acquire()/release() call pairs may be nested; only the final release()
(the release() of the outermost pair) resets the lock to unlocked and
allows another thread blocked in acquire() to proceed.
To avoid issues with missing lock releases, I recommend a context manager:
def increment(self):
    with show.lock:
        print("x=", self.x)
        self.x += 1
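Note that the nested acquire from your question is also fine once every acquire is matched by a release; a minimal sketch with nested with blocks on the same RLock (same show class as above):

def increment(self):
    with show.lock:
        print("x=", self.x)
        with show.lock:  # re-acquires the RLock; the recursion level drops back when this inner block exits
            self.x += 1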
I have multiple locks that lock different parts of my API.
To lock any method I do something like this:
import threading

class DoSomething:
    def __init__(self):
        self.lock = threading.Lock()

    def run(self):
        with self.lock:
            pass  # do stuff requiring lock here
And for most use cases this works just fine.
But I am unsure whether what I am doing works when multiple locks are required:
import threading

class DoSomething:
    def __init__(self):
        self.lock_database = threading.Lock()
        self.lock_logger = threading.Lock()

    def run(self):
        with self.lock_database and self.lock_logger:
            pass  # do stuff requiring both locks here
As it is, the code runs just fine, but I am unsure if it runs as I want it to.
My question is: are the locks obtained simultaneously, or is the first one acquired and only then the second?
In other words, is my previous code equivalent to the following?
with self.lock1:
    with self.lock2:
        pass  # do stuff here
As it is, the code currently works, but since the chances of my threads requiring the same lock simultaneously are extremely low to begin with, I may end up with a massive headache to debug later.
I am asking because I am very uncertain how to test my code to ensure it works as intended. I am equally interested in the answer and in knowing how I can test it myself (and not end up with the end users testing it for me).
Yes, you can do that, but beware of deadlocks. A deadlock occurs when one thread is unable to make progress because it needs to acquire a lock that some other thread is holding, but the second thread is unable to make progress because it wants the lock that the first thread already is holding.
One correction first: with self.lock_database and self.lock_logger: does not acquire both locks. The and expression is evaluated before the with statement, and since a Lock object is truthy it evaluates to self.lock_logger, so only that lock is acquired. To acquire both, write with self.lock_database, self.lock_logger:, which locks lock_database first and lock_logger second. If you can guarantee that any thread that locks them both always locks them in that same order, then you're safe; a deadlock can never happen that way. But if one thread locks lock_database before trying to lock lock_logger, and some other thread tries to grab them both in the opposite order, that's a deadlock waiting to happen.
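A minimal sketch of the corrected run() for the DoSomething class above:

def run(self):
    # "with a, b:" acquires a first, then b, and releases them in
    # reverse order on exit; it is equivalent to two nested with statements
    with self.lock_database, self.lock_logger:
        pass  # do stuff requiring both locks here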
Looks easy. And it is, except...
...In a more sophisticated program, where locks are attached to objects that are passed around to different functions, it may not be so easy, because one thread may call some foobar(a, b) function while another thread calls the same foobar() on the same two objects with the arguments swapped.
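One common defense is to impose a global acquisition order and sort the locks before taking them. A sketch under that assumption (Resource and foobar are hypothetical stand-ins):

import threading

class Resource:
    def __init__(self):
        self.lock = threading.Lock()

def foobar(a, b):
    # Sort by a stable key (here: id() of the lock object) so that
    # foobar(x, y) and foobar(y, x) acquire the locks in the same order.
    first, second = sorted((a, b), key=lambda r: id(r.lock))
    with first.lock:
        with second.lock:
            pass  # do stuff requiring both resources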
I am trying to display a loading gif on my PyQt5 QMainWindow while an intensive process is running. Rather than playing normally, the QMovie pauses. As far as I can tell, the event loop shouldn't be blocked as the intensive process is in its own QObject passed to its own QThread. Relevant code below:
QMainWindow:
class EclipseQa(QMainWindow):
    def __init__(self):
        QMainWindow.__init__(self)
        self.initUI()

    def initUI(self):
        ...
        self.loadingMovie = QMovie("./loading.gif")
        self.loadingMovie.setScaledSize(QSize(149, 43))
        self.statusLbl = QLabel(self)
        self.statusLbl.setMovie(self.loadingMovie)
        self.grid.addWidget(self.statusLbl, 6, 2, 2, 2, alignment=Qt.AlignCenter)
        self.statusLbl.hide()
        ...

    def startLoadingGif(self):
        self.statusLbl.show()
        self.loadingMovie.start()

    def stopLoadingGif(self):
        self.loadingMovie.stop()
        self.statusLbl.hide()

    def maskDose(self):
        self.startLoadingGif()
        # Set up thread and associated worker object
        self.thread = QThread()
        self.worker = DcmReadWorker()
        self.worker.moveToThread(self.thread)
        self.worker.finished.connect(self.thread.quit)
        self.worker.updateRd.connect(self.updateRd)
        self.worker.updateRs.connect(self.updateRs)
        self.worker.updateStructures.connect(self.updateStructures)
        self.worker.clearRd.connect(self.clearRd)
        self.worker.clearRs.connect(self.clearRs)
        self.thread.started.connect(lambda: self.worker.dcmRead(caption, fname[0]))
        self.thread.finished.connect(self.stopLoadingGif)
        self.thread.start()

    def showDoneDialog(self):
        ...
        self.stopLoadingGif()
        ...
Worker class:
class DoseMaskWorker(QObject):
    clearRd = pyqtSignal()
    clearRs = pyqtSignal()
    finished = pyqtSignal()
    startLoadingGif = pyqtSignal()
    stopLoadingGif = pyqtSignal()
    updateMaskedRd = pyqtSignal(str)

    def __init__(self, parent=None):
        QObject.__init__(self, parent)

    @pyqtSlot(name="maskDose")
    def maskDose(self, rd, rdName, rdId, rs, maskingStructure_dict):
        ...
        self.updateMaskedRd.emit(maskedRdName)
        self.finished.emit()
For brevity, '...' indicates code that I figured is probably not relevant.
Your use of a lambda to call the slot when the thread's started signal is emitted is likely causing it to execute in the main thread. There are a couple of things you need to do to fix this.
Firstly, your use of pyqtSlot does not contain the types of the arguments to the maskDose method. You need to update it so that it does. Presumably you also need to do this for the dcmRead method which you call from the lambda but haven't included in your code. See the documentation for more details.
In order to remove the use of the lambda, you need to define a new signal and a new slot within the EclipseQa class. The new signal should be defined such that it emits the required number of parameters for the dcmRead method, with the types correctly specified (documentation for this is also in the link above). This signal should be connected to the worker's dcmRead slot (make sure to do it after the worker object has been moved to the thread, or else you might run into this bug!). The slot should take no arguments and be connected to the thread's started signal. The code in the slot should simply emit your new signal with the appropriate arguments to be passed to dcmRead (e.g. like self.my_new_signal.emit(param1, param2)).
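A hedged sketch of that rewiring (dcmReadRequested and onThreadStarted are made-up names, and I'm assuming dcmRead takes two string arguments and that caption and fname are stored on self; adjust the signal types to match your actual method):

class EclipseQa(QMainWindow):
    dcmReadRequested = pyqtSignal(str, str)  # assumed argument types for dcmRead

    def maskDose(self):
        self.startLoadingGif()
        self.thread = QThread()
        self.worker = DcmReadWorker()
        self.worker.moveToThread(self.thread)
        # Connect to the worker's slot only after moveToThread()
        self.dcmReadRequested.connect(self.worker.dcmRead)
        self.thread.started.connect(self.onThreadStarted)
        self.thread.start()

    @pyqtSlot()
    def onThreadStarted(self):
        # A cross-thread signal emission queues the call in the worker's
        # thread, so dcmRead runs there instead of in the main thread.
        self.dcmReadRequested.emit(self.caption, self.fname)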
Note: You can check what thread any code is running in using the Python threading module (even when using QThreads) by printing threading.current_thread().name from the location you wish to check.
Note 2: If your thread is CPU bound rather than IO bound, you may still experience performance issues because of the Python GIL which only allows a single thread to execute at any one time (it will swap between the threads regularly though so code in both threads should run, just maybe not at the performance you expect). QThreads (which are implemented in C++ and are theoretically able to release the GIL) do not help with this because they are running your Python code and so the GIL is still held.
I have a background thread that main calls; the background thread can open a number of different scripts, but occasionally it will get into an infinite print loop like this.
In thing.py
from threading import Thread

import foo

thread_list = []

def main():
    thr = Thread(target=background)
    thr.start()
    thread_list.append(thr)

def background():
    getattr(foo, 'bar')()
    return
And then in foo.py
def bar():
    while True:
        print("stuff")
This is what it's supposed to do, but I want to be able to kill it when I need to. Is there a way for me to kill the background thread and all the functions it has called? I've tried putting flags in background to return when the flag goes high, but background is never able to check the flags since it's waiting for bar to return.
EDIT: foo.py is not my code, so I'm hesitant to edit it. Ideally I could do this without modifying foo.py, but if that's impossible to avoid, it's okay.
First of all, it is very difficult (if possible at all) to control threads from other threads, no matter what language you are using; killing a thread from outside is inherently unsafe. So what you do is create a shared object which both threads can freely access, and set a flag on it.
But luckily in Python each thread has its own Thread object which we can use:
from threading import Thread

import foo

def main():
    thr = Thread(target=background)
    thr.exit_requested = False
    thr.start()
    thread_list.append(thr)

def background():
    getattr(foo, 'bar')()
    return
And in foo:
import threading

def bar():
    th = threading.current_thread()
    # What happens when bar() is called from the main thread?
    # The commented code is not thread safe.
    # if not hasattr(th, 'exit_requested'):
    #     th.exit_requested = False
    while not th.exit_requested:
        print("stuff")
Although this will probably be hard to maintain and debug, so treat it more like a hack. A cleaner way would be to create a shared object and pass it around to all calls.
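A minimal sketch of that cleaner approach, using threading.Event as the shared object (this does assume you can change foo.py's bar() to accept it):

import threading

stop_event = threading.Event()  # shared flag visible to both threads

def bar(stop_event):
    # bar polls the shared flag instead of looping unconditionally
    while not stop_event.is_set():
        print("stuff")

thr = threading.Thread(target=bar, args=(stop_event,))
thr.start()

stop_event.set()  # request exit from the main thread
thr.join()        # wait for bar() to notice the flag and return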
I'm trying to create a thread for a GUI that wraps a long-running function. My problem is thus phrased in terms of PyQt and QThreads, but I imagine the same concept could apply to standard python threads too, and would appreciate any suggestions generally.
Typically, to allow a thread to be exited while running, I understand that including a "wants_to_end" flag that is periodically checked within the thread is a good practice - e.g.:
Pseudocode (in my thread):
def run(self):
    i = 0
    while (not self.wants_to_end) and (i < 100):
        function_step(i)  # some long-running function that includes many steps
        i += 1
However, as my GUI is to wrap a pre-written long-running function, I cannot simply insert such a "wants_to_end" flag poll into the long running code.
Is there another way to forcibly terminate my worker thread from my main GUI (i.e. enabling me to include a button in the GUI to stop the processing)?
My simple example case is:
class Worker(QObject):
    finished = pyqtSignal(object)  # the signal must declare an argument to carry the result

    def __init__(self, parent=None, **kwargs):
        super().__init__(parent)
        self.kwargs = kwargs

    @pyqtSlot()
    def run(self):
        result = SomeLongComplicatedProcess(**self.kwargs)
        self.finished.emit(result)
with usage within my MainWindow GUI:
self.thread = QThread()
self.worker = Worker(arg_a=1, arg_b=2)
self.worker.finished.connect(self.doSomethingInGUI)
self.worker.moveToThread(self.thread)
self.thread.started.connect(self.worker.run)
self.thread.start()
If the long-running function blocks, the only way to forcibly stop the thread is via its terminate() method (it may be necessary to call wait() as well). However, there is no guarantee that this will always work, and the docs also state the following:
Warning: This function is dangerous and its use is discouraged. The
thread can be terminated at any point in its code path. Threads can be
terminated while modifying data. There is no chance for the thread to
clean up after itself, unlock any held mutexes, etc. In short, use
this function only if absolutely necessary.
A much cleaner solution is to use a separate process, rather than a separate thread. In python, this could mean using the multiprocessing module. But if you aren't familiar with that, it might be simpler to run the function as a script via QProcess (which provides signals that should allow easier integration with your GUI). You can then simply kill() the worker process whenever necessary. However, if that solution is somehow unsatisfactory, there are many other IPC approaches that might better suit your requirements.
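For illustration, a hedged sketch of the multiprocessing route (the names here are mine, and SomeLongComplicatedProcess stands for the blocking function from the question):

import multiprocessing

def worker(queue, kwargs):
    # Runs in a separate process, so the GUI process can kill it safely
    result = SomeLongComplicatedProcess(**kwargs)
    queue.put(result)

# (on platforms that spawn rather than fork, guard this setup
# with: if __name__ == "__main__":)
queue = multiprocessing.Queue()
proc = multiprocessing.Process(target=worker, args=(queue, {'arg_a': 1, 'arg_b': 2}))
proc.start()

# A stop button in the GUI could then simply do:
proc.terminate()  # forcibly ends the worker process without harming the GUI
proc.join()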
I'm using the following template to recreate threads that I need to run indefinitely.
I want to know if this template is scalable in terms of memory. Are threads destroyed properly?
import threading
import time

class aLazyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        time.sleep(10)
        print("I do not want to work :(")

class aWorkerThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        time.sleep(1)
        print("I want to work!!!!!!!")

threadA = aLazyThread()
threadA.start()

threadB = aWorkerThread()
threadB.start()

while True:
    if not threadA.is_alive():
        threadA = aLazyThread()
        threadA.start()
    if not threadB.is_alive():
        threadB = aWorkerThread()
        threadB.start()
The thing that bothers me is the following picture, taken in Eclipse, which shows debug info; it seems that the threads are stacking up.
I see nothing wrong with the image: there's the main thread and the 2 threads that you created (according to the code, 3 threads are supposed to be running at any time).
Like any other Python object, threads are garbage collected when they're no longer used; e.g. in your main while loop, when you instantiate the class (say, aLazyThread), the old threadA value is destroyed (maybe not exactly at that point, but shortly after).
The main while loop could also use a sleep (e.g. time.sleep(1)); otherwise it will consume the processor, uselessly checking whether the other threads are running.
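For example, the supervisor loop from the question with that one change:

while True:
    if not threadA.is_alive():
        threadA = aLazyThread()
        threadA.start()
    if not threadB.is_alive():
        threadB = aWorkerThread()
        threadB.start()
    time.sleep(1)  # yield the CPU between liveness checks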