Run Twisted reactor from a Thread - multithreading

When I run the reactor from a thread in a synchronous Python program, the Twisted code is never called.
To work around this, I had to add a sleep:
def _reactor_thread(self):
    if not self.reactor.running:
        self.reactor.run(installSignalHandlers=0)

def _start_thread(self):
    self.client_thread = Thread(target=self._reactor_thread,
                                name="mine")
    self.client_thread.setDaemon(True)
    self.client_thread.start()
    from time import sleep
    sleep(0.5)
What is the best way to do this instead of calling sleep?

This can be done with Crochet, which takes care of running the reactor in a background thread for you, or by using reactor.addSystemEventTrigger to signal once the reactor has actually started.
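A minimal sketch of the event-trigger approach (the start_reactor_thread name and the Event wiring are illustrative, not from the question): block on a threading.Event that is set once the reactor is up, instead of sleeping for an arbitrary interval.

import logging

from threading import Thread, Event
from twisted.internet import reactor

def start_reactor_thread():
    started = Event()
    # The 'after'/'startup' phase fires once the reactor is actually running.
    reactor.addSystemEventTrigger('after', 'startup', started.set)
    thread = Thread(
        target=lambda: reactor.run(installSignalHandlers=False),
        name="reactor",
        daemon=True,
    )
    thread.start()
    started.wait()  # replaces the arbitrary sleep(0.5)
    return thread

Crochet's setup() does essentially this bookkeeping (and more) for you.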

Related

How to use ThreadPoolExecutor inside a gunicorn process?

I am running FastAPI app with gunicorn with the following config:
bind = 0.0.0.0:8080
worker_class = "uvicorn.workers.UvicornWorker"
workers = 3
loglevel = ServerConfig.LOG_LEVEL.lower()
max_requests = 1500
max_requests_jitter = 300
timeout = 120
Inside this app, I am running a task (not very long-running) every 0.5 seconds through a job scheduler and doing some processing on the data.
In that job scheduler, I am calling the perform method (see the code below):
class BaseQueueConsumer:
    def __init__(self, threads: int):
        self._threads = threads
        self._executor = ThreadPoolExecutor(max_workers=1)

    def perform(self, param1, param2, param3) -> None:
        futures = []
        for _ in range(self._threads):
            futures.append(
                self._executor.submit(
                    BaseQueueConsumer.consume, param1, param2, param3
                )
            )
        for future in futures:
            future.done()

    @staticmethod
    def consume(param1, param2, param3) -> None:
        ...  # Doing some work here
The problem is that whenever this app is under high load, I get the following error:
cannot schedule new futures after shutdown
My guess is that the gunicorn worker restarts every 1500 requests (max_requests) and the tasks that were already submitted are causing this issue.
What I am not able to understand is this: any threads the ThreadPoolExecutor starts inside a gunicorn worker should end when that worker is terminated, but that is not what happens.
Can someone explain this behaviour and suggest a way to end the gunicorn worker gracefully without these ThreadPoolExecutor tasks causing errors?
I am using Python 3.8 and gunicorn 0.15.0.
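One mitigation, as a sketch rather than a definitive fix (wait is from concurrent.futures; the consumer object, the scheduler, and the FastAPI shutdown-hook wiring are illustrative): block on the submitted futures, since future.done() only polls and returns immediately, and stop the scheduler and executor cleanly before the worker is recycled.

from concurrent.futures import ThreadPoolExecutor, wait

class BaseQueueConsumer:
    def __init__(self, threads: int):
        self._threads = threads
        self._executor = ThreadPoolExecutor(max_workers=threads)

    def perform(self, param1, param2, param3) -> None:
        futures = [
            self._executor.submit(BaseQueueConsumer.consume, param1, param2, param3)
            for _ in range(self._threads)
        ]
        wait(futures)  # blocks until the batch finishes, unlike future.done()

    def close(self) -> None:
        # Refuse new submissions and drain in-flight tasks.
        self._executor.shutdown(wait=True)

    @staticmethod
    def consume(param1, param2, param3) -> None:
        ...  # work here

# Hypothetical wiring: stop the job scheduler first, then the executor,
# so nothing can submit to a closed pool when max_requests recycles
# the worker.
@app.on_event("shutdown")
def _drain():
    scheduler.shutdown(wait=True)  # assumes an APScheduler-style scheduler
    consumer.close()

The ordering matters: the 0.5-second job must stop firing before the executor shuts down, otherwise the next tick raises the same "cannot schedule new futures after shutdown" error.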

Getting `BrokenProcessPool` error in a `concurrent.futures` example

The example I am running is mentioned in this PyMOTW3 link. I am reproducing the code here:
from concurrent import futures
import os

def task(n):
    return (n, os.getpid())

ex = futures.ProcessPoolExecutor(max_workers=2)
results = ex.map(task, range(5, 0, -1))
for n, pid in results:
    print('ran task {} in process {}'.format(n, pid))
As per the source, I am supposed to get the following output:
ran task 5 in process 40854
ran task 4 in process 40854
ran task 3 in process 40854
ran task 2 in process 40854
ran task 1 in process 40854
Instead, I'm getting a long traceback whose concluding line is:
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
I am using a Windows machine and running Python 3.9. All the other examples run fine. What is going wrong here?
I've finally been able to resolve the issue, which seems to be Windows-specific. Following a related Stack Overflow post, I used the if __name__ == "__main__" idiom. The modified code is:
from concurrent import futures
import os

def task(n):
    return (n, os.getpid())

def main():
    ex = futures.ProcessPoolExecutor(max_workers=2)
    results = ex.map(task, range(5, 0, -1))
    for n, pid in results:
        print('ran task {} in process {}'.format(n, pid))

if __name__ == '__main__':
    main()
It worked, although I did not understand why at first: on Windows, multiprocessing spawns worker processes by re-importing the main module, so without the guard each child re-executes the executor-creating code at import time, which breaks the pool.

How to kill a QProcess instance using os.kill()?

Problem
Recently, while using PyQt6's QProcess, I tried to use os.kill() to kill a QProcess instance. (The reason I want os.kill() instead of QProcess.kill() is that I want to send a CTRL_C_EVENT signal when killing the process.) Even though I use the correct pid (acquired by calling QProcess.processId()), the signal seems to be sent to all processes unexpectedly.
Code
Here's my code:
from PyQt6.QtCore import QProcess
import os
import time
import signal

process_a = QProcess()
process_a.start("python", ['./test.py'])
pid_a = process_a.processId()
print(f"pid_a = {pid_a}")

process_b = QProcess()
process_b.start("python", ['./test.py'])
pid_b = process_b.processId()
print(f"pid_b = {pid_b}")

os.kill(pid_a, signal.CTRL_C_EVENT)
try:
    time.sleep(1)
except KeyboardInterrupt:
    print("A KeyboardInterrupt should not be caught here.")

process_a.waitForFinished()
process_b.waitForFinished()
print(f"process_a: {process_a.readAll().data().decode('gbk')}")
print(f"process_b: {process_b.readAll().data().decode('gbk')}")
and ./test.py is simple:
import time
time.sleep(3)
print("Done")
What I'm expecting
pid_a = 19956
pid_b = 28468
process_a:
process_b: Done
What I've got
pid_a = 28040
pid_b = 23708
A KeyboardInterrupt should not be caught here.
process_a:
process_b:
Discussion
I don't know whether this is a bug or a misuse. It seems that signal.CTRL_C_EVENT is sent to all processes. So, how do I kill a single QProcess instance with CTRL_C_EVENT correctly?
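On Windows, os.kill() with signal.CTRL_C_EVENT goes through the GenerateConsoleCtrlEvent API, and a CTRL+C event cannot be targeted at a single process: every process attached to the same console receives it, which matches what you observed. The usual workaround, sketched here with the standard library's subprocess instead of QProcess (a swapped-in technique, since the fix hinges on the CREATE_NEW_PROCESS_GROUP creation flag, which QProcess does not expose from Python): start the child in its own process group and send CTRL_BREAK_EVENT.

import signal
import subprocess
import sys

# Give the child its own console process group (Windows-only flag).
proc = subprocess.Popen(
    [sys.executable, './test.py'],
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,
)

# CTRL_C_EVENT cannot be delivered to a single group, so CTRL_BREAK_EVENT
# is the usual substitute; the child receives it as SIGBREAK.
proc.send_signal(signal.CTRL_BREAK_EVENT)
proc.wait()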

Ways of splitting CPU-Bound tasks to avoid blocking other asyncio tasks

I'm pretty new to asyncio, so correct me if my terminology is wrong.
Suppose I have a CPU-bound task that runs a for loop.
I have another task that pings every second.
If we start both, the CPU-bound task blocks the pinging task.
How do I split the CPU-bound task so that it doesn't block the pinging task?
Currently I only have a naive way of doing this, which is to divide the CPU-bound task into 1000 pieces and sleep for a second after finishing each piece.
Any better solutions to this?
Thanks.
import time
import asyncio

async def ping():
    print(f'ping - {time.time()}')
    while True:
        await asyncio.sleep(1)
        print(f'ping - {time.time()}')

async def process():
    result = 0.0
    print(f'process - {time.time()}')
    for i in range(100000000):
        result += i * i
    print(f'process - {time.time()}')
    return result

async def run():
    task1 = asyncio.create_task(ping())
    task2 = asyncio.create_task(process())
    await asyncio.gather(task1, task2)

asyncio.run(run())
The output of the above code follows:
ping - 1659977867.4815755
process - 1659977867.4816022
process - 1659977874.5827787
ping - 1659977874.5828712
ping - 1659977875.5845447
ping - 1659977876.5862029
ping - 1659977877.5878963
ping - 1659977878.5894375
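A common alternative to manual chunking, sketched below (reusing ping() from the question; process_sync is an illustrative name): hand the CPU-bound loop to loop.run_in_executor so the event loop stays free. A ProcessPoolExecutor is used rather than a thread pool because pure-Python arithmetic holds the GIL either way.

import asyncio
from concurrent.futures import ProcessPoolExecutor

def process_sync():
    # The same CPU-bound loop, as a plain function so it can be pickled.
    result = 0.0
    for i in range(100000000):
        result += i * i
    return result

async def run():
    loop = asyncio.get_running_loop()
    ping_task = asyncio.create_task(ping())
    with ProcessPoolExecutor() as pool:
        # The loop keeps servicing ping() while another process crunches.
        result = await loop.run_in_executor(pool, process_sync)
    ping_task.cancel()
    return result

if __name__ == '__main__':  # required for ProcessPoolExecutor on Windows
    asyncio.run(run())

For work that releases the GIL (NumPy, I/O), asyncio.to_thread or a ThreadPoolExecutor would be enough; for pure Python, periodically awaiting asyncio.sleep(0) inside the loop is the lightweight version of the chunking you described.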

PySide QtCore.QThreadPool and QApplication.quit() causes hangs?

I want to use Qt's QThreadPool, but it seems to hang my application if the workers in the queue do not finish before QApplication.quit() is called. Can anyone tell me if I'm doing something wrong in the reduced test case below?
import logging
log = logging.getLogger(__name__)

import sys
from PySide import QtCore
import time

class SomeWork(QtCore.QRunnable):
    def __init__(self, sleepTime=1):
        super(SomeWork, self).__init__()
        self.sleepTime = sleepTime

    def run(self):
        time.sleep(self.sleepTime)
        print "work", QtCore.QThread.currentThreadId()

def _test(argv):
    logging.basicConfig(level=logging.NOTSET)
    app = QtCore.QCoreApplication(argv)
    pool = QtCore.QThreadPool.globalInstance()

    TASK_COUNT = int(argv[1]) if len(argv) > 1 else 1

    mainThread = QtCore.QThread.currentThreadId()
    print "Main thread: %s" % (mainThread,)
    print "Max thread count: %s" % (pool.maxThreadCount(),)
    print "Work count: %s" % (TASK_COUNT,)

    for i in range(TASK_COUNT):
        pool.start(SomeWork(1))

    def boom():
        print "boom(); calling app.quit()"
        app.quit()
    QtCore.QTimer.singleShot(2000, boom)

    #import signal
    #signal.signal(signal.SIGINT, signal.SIG_DFL)

    return app.exec_()

if __name__ == '__main__':
    sys.exit(_test(sys.argv))
To be clear, this is the output I get:
(env)root#localhost:# python test_pool.py 1
Main thread: 3074382624
Max thread count: 1
Work count: 1
work 3061717872
boom(); calling app.quit()
(env)root#workshop:/home/workshop/workshop/workshop# python test_pool.py 20
Main thread: 3074513696
Max thread count: 1
Work count: 20
work 3060783984
boom(); calling app.quit()
And it hangs forever on the second command, but not the first.
Thanks for any help you may have.
EDIT:
To be clear, I expect that if app.quit() is called while tasks are still queued in the thread pool, they do not run; already-running threads should run to completion, and then the application should close.
This example fails on a Windows machine as well.
This example works on the same Windows machine when using PyQt4.
Adding this to _test() just before the exec_() call fixes the hang, although all the queued threads still run:
def waitForThreads():
    print "Waiting for thread pool"
    pool.waitForDone()
app.aboutToQuit.connect(waitForThreads)
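If queued-but-not-started runnables should also be skipped, which is the behaviour the edit describes, one sketch under stated assumptions (the shared cancelled event is illustrative, and QThreadPool.clear() is not available in Qt 4-era PySide) is a cancellation flag that each runnable checks before doing its work:

import threading

cancelled = threading.Event()

class SomeWork(QtCore.QRunnable):
    def __init__(self, sleepTime=1):
        super(SomeWork, self).__init__()
        self.sleepTime = sleepTime

    def run(self):
        if cancelled.is_set():
            return  # queued after quit was requested; skip the work
        time.sleep(self.sleepTime)
        print "work", QtCore.QThread.currentThreadId()

def waitForThreads():
    cancelled.set()       # queued runnables become no-ops
    pool.waitForDone()    # already-running ones finish
app.aboutToQuit.connect(waitForThreads)

This keeps waitForDone() from blocking on work that was never meant to run.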
