Is it possible to interrupt the main thread using a child thread? - python-3.x

I have a function that computes something and then writes it to a PDF. However, this process takes time, so we implemented a stop button that lets the user cancel the process.
I tried using threading.Event(), but the function I am working with is a non-looping function, which means I can't regularly check whether the event is set. (Hard-coding the application by scattering multiple checks through the function would work, but that idea was not accepted.)
def generate_data(self, event):
    self.thread = threading.Thread(target=self.check_event)
    self.thread.start()
    ...

def check_event(self):
    while True:
        if self.event.is_set():
            self.controller.enable_run_btn_and_tab()
            return
        time.sleep(1)
My idea is to create another thread that regularly checks whether the event is set, so I can return and exit the function. However, the above code for check_event(self) only exits the child thread, not the thread running generate_data(self).
Is there some code, or some modification to my code, that would let the child thread stop the main thread?
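For what it's worth, the standard library does offer a way for a child thread to interrupt the main thread: _thread.interrupt_main() raises KeyboardInterrupt in the main thread (though only once the main thread is executing Python bytecode again, so a blocking C call may delay it). A minimal sketch, where long_running_computation is a hypothetical stand-in for the PDF work:
import _thread
import threading

stop_event = threading.Event()  # set by the stop button

def watcher():
    stop_event.wait()            # block until the user presses stop
    _thread.interrupt_main()     # raise KeyboardInterrupt in the main thread

threading.Thread(target=watcher, daemon=True).start()

try:
    long_running_computation()   # hypothetical stand-in for the PDF work
except KeyboardInterrupt:
    print("Computation aborted by the user")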

Related

Python (3.7) asyncio, worker task with while loop and a custom signal handler

I'm trying to understand the pattern for indefinitely running asyncio Tasks
and the difference that a custom loop signal handler makes.
I create workers using loop.create_task() so that they run concurrently.
In my regular workers' code, I poll for data and act accordingly when data is there.
I'm trying to handle the shutdown process gracefully on a signal.
When a signal is delivered - I again create_task() with the shutdown function, so that currently running tasks continue, and shutdown gets executed in next iteration of the event loop.
Now, when a single worker's while loop doesn't actually do any I/O or await anything, it prevents the signal handler from being executed: the coroutine never ends and never gives control back, so no other task can run.
When I don't attach a custom signal handler to the loop and run this program, the signal is delivered and the program stops. I assume it's the main thread that stops the loop itself.
This is obviously different from trying to schedule a (new) shutdown task on a running loop, because that running loop is stuck in a single coroutine which is blocked in a while loop and doesn't give back any control or time for other tasks.
Is there any standard pattern for such cases?
Do I need to await asyncio.sleep() if there's no work to do? Should I replace the while loop with something else (e.g. rescheduling the work function itself)?
If range(5) is replaced with range(1, 5), then every worker awaits asyncio.sleep; but if even one of them does not, everything gets blocked. How should this case be handled, and is there a standard approach?
The code below illustrates the problem.
import asyncio
import signal

async def shutdown(loop, sig=None):
    print("SIGNAL", sig)
    tasks = [t for t in asyncio.all_tasks()
             if t is not asyncio.current_task()]
    [t.cancel() for t in tasks]
    results = await asyncio.gather(*tasks, return_exceptions=True)
    # handle_task_results(results)
    loop.stop()

async def worker(intval):
    print("start", intval)
    while True:
        if intval:
            print("#", intval)
            await asyncio.sleep(intval)

loop = asyncio.get_event_loop()
for sig in {signal.SIGINT, signal.SIGTERM}:
    loop.add_signal_handler(
        sig,
        lambda s=sig: asyncio.create_task(shutdown(loop, sig=s)))
workers = [loop.create_task(worker(i)) for i in range(5)]  # this range
loop.run_forever()
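For what it's worth, a common way out (a sketch, not necessarily the asker's accepted fix) is to make sure every iteration of the loop yields to the event loop even when there is no work, e.g. with await asyncio.sleep(0). A drop-in replacement for worker() in the snippet above:
async def worker(intval):
    print("start", intval)
    while True:
        if intval:
            print("#", intval)
            await asyncio.sleep(intval)
        else:
            # yield control for one event-loop iteration without delaying,
            # so signal handlers and other tasks get a chance to run
            await asyncio.sleep(0)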

How to schedule a function to operate as a background task

Apologies for my poor phrasing but here goes.
I need to execute a function every thirty minutes while other tasks are running, but I have no idea how to do this or how to phrase it for Google. My goal is to modify my script so that it operates (without a UI) like the task manager program, with background services, programs, utilities, etc.
I have tried to create this by timing each function and writing functions that execute other functions, but no matter what I try, everything runs sequentially, as any ordinary script would.
An example of this would include the following.
def function_1():
    """Perform operations"""
    pass

def function_2():
    """Perform operations"""
    pass

def executeAllFunctions():
    function_1()
    function_2()
How can I initialize function_1 as a background task whilst function_2 is executed in a normal manner?
There is an excellent answer here.
The main idea is to run an async coroutine in a forever loop inside a thread.
In your case, you have to define function_1 as a coroutine, use a wrapper function to run it inside the thread, and then create the thread.
A sample example, heavily inspired by the answer in the link but adapted to your question:
import asyncio
import threading

async def function_1():
    while True:
        do_stuff()
        await asyncio.sleep(1800)  # thirty minutes between runs

def wrapper(loop):
    asyncio.set_event_loop(loop)
    loop.run_until_complete(function_1())

def function_2():
    do_stuff()

def launch():
    loop = asyncio.new_event_loop()
    # daemon=True because a Python thread cannot be killed from outside
    # (threading.Thread has no exit() method); a daemon thread simply dies
    # with the main program after function_2 returns
    t = threading.Thread(target=wrapper, args=(loop,), daemon=True)  # create the thread
    t.start()  # launch the thread
    function_2()
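For completeness, the same every-thirty-minutes behaviour can be had without asyncio at all. A minimal sketch using only the threading module (the helper names here are my own, not from the original answer):
import threading
import time

def periodic(fn, interval_seconds):
    """Call fn() every interval_seconds in a background daemon thread."""
    def loop():
        while True:
            fn()
            time.sleep(interval_seconds)
    threading.Thread(target=loop, daemon=True).start()

def plain_function_1():
    """A plain (non-async) stand-in for function_1."""
    print("background work")

periodic(plain_function_1, 1800)  # every thirty minutes
function_2()                      # the rest of the script runs normally meanwhile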

Process finishes but cannot be joined?

To accelerate a certain task, I'm subclassing Process to create a worker that will process data coming in samples. Some managing class will feed it data and read the outputs (using two Queue instances). For asynchronous operation I'm using put_nowait and get_nowait. At the end I'm sending a special exit code to my process, upon which it breaks its internal loop. However... it never happens. Here's a minimal reproducible example:
import multiprocessing as mp

class Worker(mp.Process):
    def __init__(self, in_queue, out_queue):
        super(Worker, self).__init__()
        self.input_queue = in_queue
        self.output_queue = out_queue

    def run(self):
        while True:
            received = self.input_queue.get(block=True)
            if received is None:
                break
            self.output_queue.put_nowait(received)
        print("\tWORKER DEAD")

class Processor():
    def __init__(self):
        # prepare
        in_queue = mp.Queue()
        out_queue = mp.Queue()
        worker = Worker(in_queue, out_queue)
        # get to work
        worker.start()
        in_queue.put_nowait(list(range(10**5)))  # XXX
        # clean up
        print("NOTIFYING")
        in_queue.put_nowait(None)
        #out_queue.get() # XXX
        print("JOINING")
        worker.join()

Processor()
This code never completes, hanging permanently like this:
NOTIFYING
JOINING
WORKER DEAD
Why?
I've marked two lines with XXX. On the first one, if I send less data (say, 10**4), everything finishes normally (the process joins as expected). Similarly on the second: if I get() the result after notifying the worker to finish, it also joins. I know I'm missing something, but nothing in the documentation seems relevant.
The documentation mentions that
When an object is put on a queue, the object is pickled and a background thread later flushes the pickled data to an underlying pipe. This has some consequences [...] After putting an object on an empty queue there may be an infinitesimal delay before the queue’s empty() method returns False and get_nowait() can return without raising queue.Empty.
https://docs.python.org/3.7/library/multiprocessing.html#pipes-and-queues
and additionally that
whenever you use a queue you need to make sure that all items which have been put on the queue will eventually be removed before the process is joined. Otherwise you cannot be sure that processes which have put items on the queue will terminate.
https://docs.python.org/3.7/library/multiprocessing.html#multiprocessing-programming
This means that the behaviour you describe is probably caused by the interplay between self.output_queue.put_nowait(received) in the worker and joining the worker with worker.join() in the Processor's __init__: as long as an item is still sitting in the queue's buffer, waiting for the background feeder thread to flush it into the pipe, the worker process cannot be joined. With less data the flush happens in time and everything finishes fine.
Uncommenting the out_queue.get() in the main process empties the queue, which allows joining. But since get() blocks when the queue is already empty, using a timeout may be an option to wait out the race condition, e.g. out_queue.get(timeout=10).
It may also be important to protect the main routine, especially on Windows (see "python multiprocessing on windows, if __name__ == '__main__'").
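A minimal sketch of the drain-before-join fix suggested above, changing only the cleanup section of the example:
# clean up
print("NOTIFYING")
in_queue.put_nowait(None)
# Drain the output queue *before* joining, so the worker's background
# feeder thread can flush its buffer and let the process exit.
result = out_queue.get(timeout=10)
print("JOINING")
worker.join()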

Python: how does a thread wait for another thread to end before resuming its execution?

I am making a bot for Telegram, and this bot will use a database (SQLite3).
I am familiar with threads and locks, and I know that it is safe to launch multiple threads that make queries to the database.
My problem arises when I want to update/insert data.
Using Condition and Event from the threading module, I can prevent new threads from accessing the database while a thread is updating/inserting data.
What I haven't figured out is how to wait until all the threads that are accessing the database are done before updating/inserting data.
If I could get the count of a semaphore, I would just wait for it to drop to 0; but since that is not possible, what approach should I use?
UPDATE: I can't use join(), since I am using a Telegram bot and create threads dynamically with each request to my bot; therefore, when a thread is created, I don't know whether I'll have to wait for it to end or not.
CLARIFICATION: join() can only be used if, at the start of a thread, you know whether you'll have to wait for it to end. Since I create a thread for each client request, and I don't know in advance what they'll ask or when the request will be done, I can't know whether to use join() or not.
UPDATE 2: Here is the code regarding the locks. I haven't finished the code regarding the database, since I am more concerned with the locks and it doesn't seem relevant to the question.
lock = threading.Lock()
evLock = threading.Event()

def addBehaviours(dispatcher):
    evLock.set()
    # (2) Fetch the list of events
    events_handler = CommandHandler('events', events)
    dispatcher.add_handler(events_handler)
    # (3) Add a new event
    addEvent_handler = CommandHandler('addEvent', addEvent)
    dispatcher.add_handler(addEvent_handler)

# (2) Fetch the list of events
@run_async
def events(bot, update):
    evLock.wait()
    # fetchEvents()

# (3) Add a new event
@run_async
def addEvent(bot, update):
    with lock:
        evLock.clear()
        # addEvent()
        evLock.set()
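For the counting problem described in the question (waiting until the number of active readers drops to zero), a reader counter guarded by a threading.Condition is a standard pattern. A minimal sketch with names of my own choosing, not the asker's final code:
import threading

class ReadWriteGate:
    """Allow concurrent readers; a writer waits until no readers are active."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def start_read(self):
        with self._cond:
            self._readers += 1

    def end_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def write(self, update_fn):
        with self._cond:
            while self._readers > 0:
                self._cond.wait()   # effectively waiting for the count to hit 0
            update_fn()             # no reader is active, and new readers are
                                    # blocked on the condition's lock meanwhile
Each query handler would call start_read()/end_read() around its reads and write() around updates. Note that readers arriving while a writer waits can still starve it; a production version would also gate new readers while a write is pending.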
You can use threading.Thread.join(). This will wait for a thread to end and only continue on when the thread is done.
Usage below:
import threading as thr

thread1 = thr.Thread()  # some thread to be waited for
thread2 = thr.Thread()  # something that runs after thread1 finishes

thread1.start()  # start up thread1
thread1.join()   # wait until thread1 finishes
thread2.start()
...

Terminating a QThread that wraps a function (i.e. can't poll a "wants_to_end" flag)

I'm trying to create a thread for a GUI that wraps a long-running function. My problem is thus phrased in terms of PyQt and QThreads, but I imagine the same concept could apply to standard python threads too, and would appreciate any suggestions generally.
Typically, to allow a thread to be exited while running, I understand that including a "wants_to_end" flag that is periodically checked within the thread is a good practice - e.g.:
Pseudocode (in my thread):
def run(self):
    i = 0
    while (not self.wants_to_end) and (i < 100):
        function_step(i)  # where this is some long-running function that includes many steps
        i += 1
However, as my GUI is to wrap a pre-written long-running function, I cannot simply insert such a "wants_to_end" flag poll into the long running code.
Is there another way to forcibly terminate my worker thread from my main GUI (i.e. enabling me to include a button in the GUI to stop the processing)?
My simple example case is:
from PyQt5.QtCore import QObject, pyqtSignal, pyqtSlot

class Worker(QObject):
    finished = pyqtSignal(object)  # the signal must declare an argument to emit the result

    def __init__(self, parent=None, **kwargs):
        super().__init__(parent)
        self.kwargs = kwargs

    @pyqtSlot()
    def run(self):
        result = SomeLongComplicatedProcess(**self.kwargs)
        self.finished.emit(result)
with usage within my MainWindow GUI:
self.thread = QThread()
self.worker = Worker(arg_a=1, arg_b=2)
self.worker.finished.connect(self.doSomethingInGUI)
self.worker.moveToThread(self.thread)
self.thread.started.connect(self.worker.run)
self.thread.start()
If the long-running function blocks, the only way to forcibly stop the thread is via its terminate() method (it may also be necessary to call wait()). However, there is no guarantee that this will always work, and the docs also state the following:
Warning: This function is dangerous and its use is discouraged. The thread can be terminated at any point in its code path. Threads can be terminated while modifying data. There is no chance for the thread to clean up after itself, unlock any held mutexes, etc. In short, use this function only if absolutely necessary.
A much cleaner solution is to use a separate process, rather than a separate thread. In python, this could mean using the multiprocessing module. But if you aren't familiar with that, it might be simpler to run the function as a script via QProcess (which provides signals that should allow easier integration with your GUI). You can then simply kill() the worker process whenever necessary. However, if that solution is somehow unsatisfactory, there are many other IPC approaches that might better suit your requirements.
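A minimal sketch of the multiprocessing variant (the function and argument names are placeholders, not taken from the question):
import multiprocessing as mp

def long_task(**kwargs):
    ...  # placeholder for SomeLongComplicatedProcess(**kwargs)

# e.g. in the GUI's run-button handler:
proc = mp.Process(target=long_task, kwargs={'arg_a': 1, 'arg_b': 2})
proc.start()

# later, in the stop-button handler:
proc.terminate()  # forcibly ends the worker process, leaving the GUI intact
proc.join()       # reap the terminated process
Passing the result back would need a multiprocessing.Queue or Pipe, or the QProcess signals mentioned above.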
