Locking a method in Python? - python-3.x

Here is my problem: I'm using the APScheduler library to add scheduled jobs to my application. I have multiple jobs executing the same code at the same time, but with different parameters. The problem occurs when these jobs access the same method at the same time, which causes my program to work incorrectly.
I want to know if there is a way to lock a method in Python 3.4 so that only one thread may access it at a time. If so, could you please post a simple code example? Thanks.

You can use Python's basic locking mechanism:

from threading import Lock

lock = Lock()
...
def foo():
    lock.acquire()
    try:
        # only one thread can execute the code here
        ...
    finally:
        lock.release()  # release the lock
Or, with a context manager:

def foo():
    with lock:
        # only one thread can execute the code here
        ...
For more details see Python 3 Lock Objects and Thread Synchronization Mechanisms in Python.
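If you want to guard a whole function without repeating the locking boilerplate, a decorator can take care of it. A minimal sketch, assuming a module-level lock (the synchronized name is illustrative, not from any library):

import threading
from functools import wraps

def synchronized(lock):
    """Serialize all calls to the decorated function on the given lock."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                return func(*args, **kwargs)
        return wrapper
    return decorator

lock = threading.Lock()

@synchronized(lock)
def foo():
    # only one scheduler job can execute this body at a time
    ...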

Related

Python3 : can I lock multiple locks simultaneously?

I have multiple locks that lock different parts of my API.
To lock any method I do something like this:

import threading

class DoSomething:
    def __init__(self):
        self.lock = threading.Lock()

    def run(self):
        with self.lock:
            # do stuff requiring the lock here
            ...
And for most use cases this works just fine.
But I am unsure whether what I am doing works when I require multiple locks:

import threading

class DoSomething:
    def __init__(self):
        self.lock_database = threading.Lock()
        self.lock_logger = threading.Lock()

    def run(self):
        with self.lock_database and self.lock_logger:
            # do stuff requiring both locks here
            ...
As it is, the code runs just fine, but I am unsure whether it does what I want.
My question is: are the locks acquired simultaneously, or is the first one acquired and only then the second?
Is my previous code equivalent to the following?
with self.lock1:
    with self.lock2:
        # do stuff here
As it is, the code currently works, but since the chances of my threads requiring the same lock simultaneously are extremely low to begin with, I may end up with a massive headache to debug later.
I am asking because I am very uncertain how to test my code to ensure it works as intended. I am equally interested in having the answer and in knowing how to test it myself (rather than ending up with the end users testing it for me).
Yes, you can acquire multiple locks, but there are two things to watch out for. First, with self.lock_database and self.lock_logger: does not do what you want: the and expression is evaluated before the with statement runs, and since a Lock object is always truthy it evaluates to self.lock_logger, so only that one lock is acquired. Use nested with blocks, or the equivalent single-line form with self.lock_database, self.lock_logger:, which acquires them left to right.
Second, beware of deadlocks. A deadlock occurs when one thread is unable to make progress because it needs to acquire a lock that some other thread is holding, while the second thread is unable to make progress because it wants the lock that the first thread is already holding.
The corrected example locks lock_database first and lock_logger second. If you can guarantee that any thread that locks them both will always lock them in that same order, then you're safe: a deadlock can never happen that way. But if one thread locks lock_database before trying to lock lock_logger, and some other thread tries to grab them both in the opposite order, that's a deadlock waiting to happen.
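For reference, here is a sketch of the corrected run(), using the single-line form that is equivalent to nesting the with blocks:

def run(self):
    # acquires lock_database, then lock_logger; releases in reverse order
    with self.lock_database, self.lock_logger:
        # do stuff requiring both locks here
        ...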
Looks easy. And it is, except...
...in a more sophisticated program, where locks are attached to objects that are passed around to different functions, it may not be so easy: one thread may call some foobar(a, b) function while another thread calls the same foobar() on the same two objects, except with the arguments swapped.
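One common defense is to impose a global acquisition order, for example by sorting locks by id() before taking them. A minimal sketch (the acquire_in_order helper is illustrative, not a standard-library function):

import threading
from contextlib import ExitStack, contextmanager

@contextmanager
def acquire_in_order(*locks):
    # Every thread sorts the locks the same way, so foobar(a, b) and
    # foobar(b, a) take them in the same order and cannot deadlock.
    with ExitStack() as stack:
        for lock in sorted(locks, key=id):
            stack.enter_context(lock)
        yield

lock_a = threading.Lock()
lock_b = threading.Lock()

def foobar(x, y):
    with acquire_in_order(x, y):
        ...  # safe even if another thread swaps the arguments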

When using asyncio.run, how do I submit the threadpool executor to the event loop?

In the Python documentation, it states:
Application developers should typically use the high-level asyncio
functions, such as asyncio.run(), and should rarely need to reference
the loop object or call its methods.
Consider also using the asyncio.run() function instead of using lower
level functions to manually create and close an event loop.
If I need to use asyncio and a ThreadPoolExecutor, how would I submit the executor to the event loop?
Normally you could do:
import asyncio
import concurrent.futures

# Create a limited thread pool.
executor = concurrent.futures.ThreadPoolExecutor(
    max_workers=3,
)

event_loop = asyncio.get_event_loop()
try:
    event_loop.run_until_complete(
        run_blocking_tasks(executor)
    )
finally:
    event_loop.close()
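With asyncio.run() you no longer touch the loop object outside the coroutine; instead you fetch the running loop from inside it with asyncio.get_running_loop() and hand the executor to loop.run_in_executor(). A minimal sketch, where blocking_task and the coroutine body are hypothetical stand-ins for the real work:

import asyncio
import concurrent.futures
import time

def blocking_task(n):
    time.sleep(1)  # stand-in for blocking work
    return n

async def run_blocking_tasks(executor):
    # asyncio.run() created the loop; look it up from inside the coroutine.
    loop = asyncio.get_running_loop()
    tasks = [loop.run_in_executor(executor, blocking_task, i) for i in range(6)]
    return await asyncio.gather(*tasks)

executor = concurrent.futures.ThreadPoolExecutor(max_workers=3)
print(asyncio.run(run_blocking_tasks(executor)))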

Can I do work in another thread while waiting for subprocess Popen

I have a Python 3.7 project.
It uses a library which calls out to a shell script via subprocess.Popen.
I am wondering: if I were to put the library calls in a separate thread, would I be able to do work in the main thread while waiting for the result from Popen in the other thread?
There is an answer here https://stackoverflow.com/a/33352871/202168 which says:
The way Python threads work with the GIL is with a simple counter.
With every 100 byte codes executed the GIL is supposed to be released
by the thread currently executing in order to give other threads a
chance to execute code. This behavior is essentially broken in Python
2.7 because of the thread release/acquire mechanism. It has been fixed in Python 3.
Either way, that does not sound particularly hopeful for what I want to do. It sounds like, if the "library calls" thread has not hit the 100-bytecode trigger point when the call to Popen.wait is made, it probably will not switch to my other thread and the whole app will wait for the subprocess?
Maybe this info is wrong, however.
Here is another answer https://stackoverflow.com/a/16262657/202168 which says:
...the interpreter can always release the GIL; it will give it to some
other thread after it has interpreted enough instructions, or
automatically if it does some I/O. Note that since recent Python 3.x,
the criteria is no longer based on the number of executed
instructions, but on whether enough time has elapsed.
This sounds more hopeful, since presumably communicating with the subprocess involves I/O and might therefore allow a context switch, letting my main thread proceed in the meantime (or perhaps just the time elapsed while blocked in wait would cause a context switch).
I am aware of https://docs.python.org/3/library/asyncio-subprocess.html which explicitly solves this problem, but I am calling a 3rd-party library which just uses plain subprocess.Popen.
Can anyone confirm if the "subprocess calls in a separate thread" idea is likely to be useful to me, in Python 3.7 specifically?
I had time to make an experiment, so I will answer my own question...
I set up two files:
mainthread.py
#!/usr/bin/env python
import subprocess
import threading
import time

def run_busyproc():
    print(f'{time.time()} Starting busyprocess...')
    subprocess.run(["python", "busyprocess.py"])
    print(f'{time.time()} busyprocess done.')

if __name__ == "__main__":
    thread = threading.Thread(target=run_busyproc)
    print("Starting thread...")
    thread.start()
    while thread.is_alive():
        print(f"{time.time()} Main thread doing its thing...")
        time.sleep(0.5)
    print("Thread is done (?)")
    print("Exit main.")
and busyprocess.py:
#!/usr/bin/env python
from time import sleep

if __name__ == "__main__":
    for _ in range(100):
        print("Busy...")
        sleep(0.5)
    print("Done")
Running mainthread.py from the command line, I can see the context switching that you would hope for: the main thread is able to do work while waiting on the result of the subprocess:
Starting thread...
1555970578.20475 Main thread doing its thing...
1555970578.204679 Starting busyprocess...
Busy...
1555970578.710308 Main thread doing its thing...
Busy...
1555970579.2153869 Main thread doing its thing...
Busy...
1555970579.718168 Main thread doing its thing...
Busy...
1555970580.2231748 Main thread doing its thing...
Busy...
1555970580.726122 Main thread doing its thing...
Busy...
1555970628.009814 Main thread doing its thing...
Done
1555970628.512945 Main thread doing its thing...
1555970628.518155 busyprocess done.
Thread is done (?)
Exit main.
Good news everybody, python threading works :)

Terminating a QThread that wraps a function (i.e. can't poll a "wants_to_end" flag)

I'm trying to create a thread for a GUI that wraps a long-running function. My problem is thus phrased in terms of PyQt and QThreads, but I imagine the same concept could apply to standard python threads too, and would appreciate any suggestions generally.
Typically, to allow a thread to be exited while running, I understand that including a "wants_to_end" flag that is periodically checked within the thread is a good practice - e.g.:
Pseudocode (in my thread):
def run(self):
    i = 0
    while (not self.wants_to_end) and (i < 100):
        function_step(i)  # some long-running function comprising many steps
        i += 1
However, as my GUI is to wrap a pre-written long-running function, I cannot simply insert such a "wants_to_end" flag poll into the long-running code.
Is there another way to forcibly terminate my worker thread from my main GUI (i.e. enabling me to include a button in the GUI to stop the processing)?
My simple example case is:
from PyQt5.QtCore import QObject, pyqtSignal, pyqtSlot

class Worker(QObject):
    finished = pyqtSignal(object)  # the signal must declare an argument to carry the result

    def __init__(self, parent=None, **kwargs):
        super().__init__(parent)
        self.kwargs = kwargs

    @pyqtSlot()
    def run(self):
        result = SomeLongComplicatedProcess(**self.kwargs)
        self.finished.emit(result)
with usage within my MainWindow GUI:
self.thread = QThread()
self.worker = Worker(arg_a=1, arg_b=2)
self.worker.finished.connect(self.doSomethingInGUI)
self.worker.moveToThread(self.thread)
self.thread.started.connect(self.worker.run)
self.thread.start()
If the long-running function blocks, the only way to forcibly stop the thread is via its terminate() method (it may also be necessary to call wait()). However, there is no guarantee that this will always work, and the docs also state the following:
Warning: This function is dangerous and its use is discouraged. The
thread can be terminated at any point in its code path. Threads can be
terminated while modifying data. There is no chance for the thread to
clean up after itself, unlock any held mutexes, etc. In short, use
this function only if absolutely necessary.
A much cleaner solution is to use a separate process, rather than a separate thread. In python, this could mean using the multiprocessing module. But if you aren't familiar with that, it might be simpler to run the function as a script via QProcess (which provides signals that should allow easier integration with your GUI). You can then simply kill() the worker process whenever necessary. However, if that solution is somehow unsatisfactory, there are many other IPC approaches that might better suit your requirements.
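For illustration, a minimal multiprocessing sketch along those lines, where SomeLongComplicatedProcess stands in for the asker's pre-written function and the result comes back through a Queue:

import multiprocessing as mp

def worker(queue, **kwargs):
    # Runs in a separate process, so killing it cannot corrupt the GUI process.
    result = SomeLongComplicatedProcess(**kwargs)
    queue.put(result)

queue = mp.Queue()
proc = mp.Process(target=worker, args=(queue,), kwargs={'arg_a': 1, 'arg_b': 2})
proc.start()

# ...later, from the GUI's "stop" button handler:
proc.terminate()  # forcibly ends the worker; the GUI's own memory is untouched
proc.join()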

Python thread never starts if run() contains yield from

Python 3.4. I'm trying to make a server using the websockets module (I was previously using regular sockets, but wanted to make a JavaScript client) when I ran into an issue: it expects async code, at least if the examples are to be trusted, which I haven't used before. Threading simply does not work: if I run the following code, bar is never printed, whereas if I comment out the line with yield from, it works as expected. So yield is probably doing something I don't quite understand, but why is it never even executed? Should I install Python 3.5?
import threading

class SampleThread(threading.Thread):
    def __init__(self):
        super(SampleThread, self).__init__()
        print("foo")

    def run(self):
        print("bar")
        yield from var2

thread = SampleThread()
thread.start()
This is not the correct way to handle multithreading. Because it contains yield from, your run is a generator function: calling it merely creates a generator object without executing the body, which is why bar is never printed. Thread.run should be a plain method, neither a generator nor a coroutine. It should also be noted that the asyncio event loop is, by default, only set up for the main thread: any call to asyncio.get_event_loop() in a new thread (without first setting one with asyncio.set_event_loop()) will throw an exception.
Before looking at running the event loop in a new thread, you should first consider whether you really need the loop running in its own thread. The loop has built-in executor support via loop.run_in_executor(): it takes an executor from concurrent.futures (either a ThreadPoolExecutor or a ProcessPoolExecutor) and provides a non-blocking way of running threads and processes directly from the loop object, so the results can be awaited (with Python 3.5 syntax).
That being said, if you want to run your event loop from another thread, you can do it like this:

import asyncio
import threading

class LoopThread(threading.Thread):
    def __init__(self):
        super().__init__()
        self.loop = asyncio.new_event_loop()

    def run(self):
        asyncio.set_event_loop(self.loop)
        self.loop.run_forever()

    def stop(self):
        self.loop.call_soon_threadsafe(self.loop.stop)
From here, you still need to devise a thread-safe way of creating tasks, etc. Some of the code in this thread is usable, although I did not have a lot of success with it: python asyncio, how to create and cancel tasks from another thread
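One such thread-safe hand-off is asyncio.run_coroutine_threadsafe(), which submits a coroutine to a loop running in another thread and returns a concurrent.futures.Future. A minimal sketch using the LoopThread class above (some_coro is illustrative):

import asyncio

async def some_coro():
    await asyncio.sleep(1)
    return 42

loop_thread = LoopThread()
loop_thread.start()

# Schedule the coroutine on the other thread's loop; the call is thread-safe
# and returns a concurrent.futures.Future.
future = asyncio.run_coroutine_threadsafe(some_coro(), loop_thread.loop)
print(future.result())  # blocks until the coroutine finishes

loop_thread.stop()
loop_thread.join()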
