I tried to run the multi-threaded Python program below using the Python 3.4.3 interpreter.
My expectation is that after all items in dish_queue have been retrieved and processed (that's the purpose of task_done(), right?), dish_queue will no longer block the program, so it can exit normally.
The result is that after the line Drying desert <Thread(Thread-2, started 140245865154304)> is printed, the program never exits, regardless of whether the line dish_queue.join() is commented out or not.
It seems the main thread is stuck in the statement washer(dishes, dish_queue)? Can anybody explain to me why?
$ cat threading_dish.py
import threading, queue
import time

def washer(dishes, dishqueue):
    for dish in dishes:
        time.sleep(5)
        print("washing", dish, threading.current_thread())
        time.sleep(5)
        dishqueue.put(dish)

def dryer(dishqueue):
    while True:
        dish = dishqueue.get()
        print("Drying", dish, threading.current_thread())
        #time.sleep(10)
        dishqueue.task_done()

dish_queue = queue.Queue()
for n in range(2):
    dryer_thread = threading.Thread(target=dryer, args=(dish_queue,))
    dryer_thread.start()

dishes = ['salad', 'bread', 'entree', 'desert']
washer(dishes, dish_queue)
#dish_queue.join()
$ python3 threading_dish.py
washing salad <_MainThread(MainThread, started 140245895784256)>
Drying salad <Thread(Thread-1, started 140245873547008)>
washing bread <_MainThread(MainThread, started 140245895784256)>
Drying bread <Thread(Thread-2, started 140245865154304)>
washing entree <_MainThread(MainThread, started 140245895784256)>
Drying entree <Thread(Thread-1, started 140245873547008)>
washing desert <_MainThread(MainThread, started 140245895784256)>
Drying desert <Thread(Thread-2, started 140245865154304)>
By comparison, if I run the multiprocessing counterpart of the program, it exits normally after the last printout.
Is there any difference between the multi-threaded version and the multiprocessing version that results in the opposite behaviour?
$ cat multiprocessing_dishes.py
import multiprocessing as mp
import time

def washer(dishes, output):
    for dish in dishes:
        print('washing', dish, 'dish', mp.current_process())
        output.put(dish)

def dryer(input):
    while True:
        dish = input.get()
        print('Drying', dish, 'dish', mp.current_process())
        time.sleep(5)
        input.task_done()

dishqueue = mp.JoinableQueue()
dryerproc = mp.Process(target=dryer, args=(dishqueue,))
dryerproc.daemon = True
dryerproc.start()

dishes = ['xxx', 'asa', 'aass']
washer(dishes, dishqueue)
dishqueue.join()
$ python3 multiprocessing_dishes.py
washing xxx dish <_MainProcess(MainProcess, started)>
washing asa dish <_MainProcess(MainProcess, started)>
washing aass dish <_MainProcess(MainProcess, started)>
Drying xxx dish <Process(Process-1, started daemon)>
Drying asa dish <Process(Process-1, started daemon)>
Drying aass dish <Process(Process-1, started daemon)>
$
https://docs.python.org/3.4/library/queue.html
Queue.get() blocks, so once nothing is being put into the queue any more, your dryer threads block in get() forever. Because they are ordinary (non-daemon) threads, the interpreter waits for them to finish before exiting, which is why the program hangs; the multiprocessing version exits because its dryer process was created with daemon=True.
You need to signal to the dryer threads in some way that the washing is finished.
You can achieve this by querying a shared variable in the while loop's condition, or alternatively by enqueuing a poison pill for each thread. A poison pill is a predetermined value that lets a consuming thread know that it needs to terminate.
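A minimal sketch of the poison-pill approach, adapted to the dish example above; the None sentinel and the two dryer threads are illustrative choices, not the only way to do it:
import threading, queue

POISON = None  # sentinel value; anything the washer never produces will do

def dryer(dishqueue):
    while True:
        dish = dishqueue.get()
        if dish is POISON:          # poison pill received: stop this thread
            dishqueue.task_done()
            break
        print("Drying", dish, threading.current_thread())
        dishqueue.task_done()

dish_queue = queue.Queue()
threads = [threading.Thread(target=dryer, args=(dish_queue,)) for _ in range(2)]
for t in threads:
    t.start()

for dish in ['salad', 'bread', 'entree', 'desert']:
    dish_queue.put(dish)

for _ in threads:                   # one pill per dryer thread
    dish_queue.put(POISON)

for t in threads:
    t.join()                        # the dryers now finish, so the program can exit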
Related
I am curious why threads started in a Python script keep running even after the last statement of the script has executed (which, I believe, means the script has completed).
I have shared the code I am talking about below. Any insights on this would be helpful:
======================================================================================
import time
import threading
start=time.perf_counter()
def do_something():
    print("Waiting for a sec...")
    time.sleep(60)
    print("Waiting is over!!!")
mid1=time.perf_counter()
t1=threading.Thread(target=do_something)
t2=threading.Thread(target=do_something)
mid2=time.perf_counter()
t1.start()
mid3=time.perf_counter()
t2.start()
finish=time.perf_counter()
print(start,mid1,mid2,mid3,finish)
What output do you see? This is what I see:
Waiting for a sec...
Waiting for a sec...
95783.4201273 95783.4201278 95783.4201527 95783.4217046 95783.4219945
Then it's quiet for a minute, and displays:
Waiting is over!!!
Waiting is over!!!
and then the script ends.
That's all as expected. As part of shutting down, the interpreter waits for all running threads to complete (unless they were created with daemon=True, which you should probably avoid until you know exactly what you're doing). You told your threads to sleep for 60 seconds before finishing, and that's what they did.
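To make the two behaviours concrete, here is a small sketch (the 5-second sleep is just a placeholder) contrasting an explicit join() with a daemon thread, which the interpreter does not wait for:
import threading
import time

def do_something():
    print("Working...")
    time.sleep(5)
    print("Work is over!!!")

# Option 1: wait explicitly; the interpreter would wait anyway, but join()
# lets you control where in the main thread the wait happens.
t1 = threading.Thread(target=do_something)
t1.start()
t1.join()            # main thread blocks here until t1 finishes
print("t1 finished")

# Option 2: a daemon thread does not keep the interpreter alive; it is
# killed abruptly at shutdown, so its final print may never appear.
t2 = threading.Thread(target=do_something, daemon=True)
t2.start()
print("exiting without waiting for t2")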
I have a Python 3.7 project.
It uses a library which uses subprocess.Popen to call out to a shell script.
I am wondering: if I were to put the library calls in a separate thread, would I be able to do work in the main thread while waiting for the result from Popen in the other thread?
There is an answer here https://stackoverflow.com/a/33352871/202168 which says:
The way Python threads work with the GIL is with a simple counter.
With every 100 byte codes executed the GIL is supposed to be released
by the thread currently executing in order to give other threads a
chance to execute code. This behavior is essentially broken in Python
2.7 because of the thread release/acquire mechanism. It has been fixed in Python 3.
Either way, this does not sound particularly hopeful for what I want to do. It sounds like if the "library calls" thread has not hit the 100-bytecode trigger point when the call to Popen.wait is made, then it probably will not switch to my other thread and the whole app will wait for the subprocess?
Maybe this info is wrong, however.
Here is another answer https://stackoverflow.com/a/16262657/202168 which says:
...the interpreter can always release the GIL; it will give it to some
other thread after it has interpreted enough instructions, or
automatically if it does some I/O. Note that since recent Python 3.x,
the criteria is no longer based on the number of executed
instructions, but on whether enough time has elapsed.
This sounds more hopeful, since presumably communicating with the subprocess involves I/O and might therefore allow a context switch so that my main thread can proceed in the meantime. (Or perhaps just the time elapsed while blocked in the wait would cause a context switch.)
I am aware of https://docs.python.org/3/library/asyncio-subprocess.html which explicitly solves this problem, but I am calling a 3rd-party library which just uses plain subprocess.Popen.
Can anyone confirm if the "subprocess calls in a separate thread" idea is likely to be useful to me, in Python 3.7 specifically?
I had time to make an experiment, so I will answer my own question...
I set up two files:
mainthread.py
#!/usr/bin/env python
import subprocess
import threading
import time

def run_busyproc():
    print(f'{time.time()} Starting busyprocess...')
    subprocess.run(["python", "busyprocess.py"])
    print(f'{time.time()} busyprocess done.')

if __name__ == "__main__":
    thread = threading.Thread(target=run_busyproc)
    print("Starting thread...")
    thread.start()
    while thread.is_alive():
        print(f"{time.time()} Main thread doing its thing...")
        time.sleep(0.5)
    print("Thread is done (?)")
    print("Exit main.")
and busyprocess.py:
#!/usr/bin/env python
from time import sleep

if __name__ == "__main__":
    for _ in range(100):
        print("Busy...")
        sleep(0.5)
    print("Done")
Running mainthread.py from the command line, I can see the context switching you would hope for: the main thread is able to do work while waiting on the result of the subprocess:
Starting thread...
1555970578.20475 Main thread doing its thing...
1555970578.204679 Starting busyprocess...
Busy...
1555970578.710308 Main thread doing its thing...
Busy...
1555970579.2153869 Main thread doing its thing...
Busy...
1555970579.718168 Main thread doing its thing...
Busy...
1555970580.2231748 Main thread doing its thing...
Busy...
1555970580.726122 Main thread doing its thing...
Busy...
1555970628.009814 Main thread doing its thing...
Done
1555970628.512945 Main thread doing its thing...
1555970628.518155 busyprocess done.
Thread is done (?)
Exit main.
Good news everybody, python threading works :)
When using multiprocessing.Pool in python 3.6 or 3.7 with maxtasksperchild=1, I noticed that some processes spawned by the pool are hanging and do not quit, even though the callback to their tasks was already executed. As a result, Pool.join() will block forever, even though all tasks are finished. In the process tree, running but idle child processes can be seen. The problem does not occur if maxtasksperchild=None.
The problem seems to be related to what the callback precisely does. The docs point out that the callback "should return immediately", as it will block other threads managing the pool.
A minimal example to reproduce this behavior on my machine is as follows. (Give it a few tries, or increase the number of tasks, if it does not block forever.)
from multiprocessing import Pool
from os import getpid
from random import random
from time import sleep

def do_stuff():
    pass

def cb(arg):
    sleep(random())  # can be replaced with print('foo')

p = Pool(maxtasksperchild=1)
number_of_tasks = 100  # the value may depend on your machine -- for mine, 20 is sufficient to trigger the behavior

for i in range(number_of_tasks):
    p.apply_async(do_stuff, callback=cb)

p.close()
print("joining ... (this should take just seconds)")
print("use the following command to watch the process tree:")
print("    watch -n .2 pstree -at -p %i" % getpid())
p.join()
Contrary to what I expected, p.join() in the last line will block forever even though do_stuff and cb were both called 100 times.
I am aware that sleep(random()) is in violation of the docs, but is print() also taking 'too long'? The way the docs are written suggests that a non-blocking callback function is required for performance and efficiency, and does not make clear that a 'slow' callback function can break the pool entirely.
Is print() forbidden in any multiprocessing.Pool callback function? (How to replace that functionality? What is "returning immediately", what is not?)
If yes, should the python documentation be updated to make this clear?
If yes, is it good python practice to rely on "fast" execution of python threads? Does this violate the rule that one should not make assumptions on execution order of threads?
Should I report this to the python bug tracker?
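Not an answer to the bug question, but as a workaround sketch: the slow work can be kept out of the callback entirely by collecting the AsyncResult objects and handling the results in the parent process; handle_result below is an illustrative name, not part of the original code:
from multiprocessing import Pool
from random import random
from time import sleep

def do_stuff():
    return 42

def handle_result(value):
    sleep(random())  # the slow part now runs in the main process, not in the pool's result-handler thread

if __name__ == '__main__':
    p = Pool(maxtasksperchild=1)
    results = [p.apply_async(do_stuff) for _ in range(100)]  # no callback passed
    p.close()
    for r in results:
        handle_result(r.get())   # get() blocks until that task's result is ready
    p.join()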
I have an array of data to handle and a handler that takes a long time to execute (1-2 minutes) and uses a lot of memory for its calculations.
raw = ['a', 'b', 'c']

def handler(r):
    # do something long
    pass
Since handler requires a lot of memory, I want to execute it in a separate subprocess and kill it after execution to release the memory. Something like the following snippet:
from multiprocessing import Process

for r in raw:
    process = Process(target=handler, args=(r,))
    process.start()
The problem is that this approach immediately starts len(raw) processes, and that's not good.
Also, the subprocesses don't need to exchange any kind of data; they just need to run one after another.
Therefore it would be great to run a few processes at a time and start a new one once an existing one finishes.
How could it be implemented (if it's even possible)?
To run your processes sequentially, just join each process within the loop:
from multiprocessing import Process

for r in raw:
    process = Process(target=handler, args=(r,))
    process.start()
    process.join()
That way you're sure that only one process is running at a time (no concurrency).
That's the simplest way. To run more than one process while limiting the number running at the same time, you can use a multiprocessing.Pool object and apply_async.
I've built a simple example which computes the square of the argument and simulates heavy processing:
from multiprocessing import Pool
import time

def target(r):
    time.sleep(5)
    return r * r

raw = [1, 2, 3, 4, 5]

if __name__ == '__main__':
    with Pool(3) as p:  # 3 processes at a time
        reslist = [p.apply_async(target, (r,)) for r in raw]
        for result in reslist:
            print(result.get())
Running this I get:
<5 seconds wait, time to compute the results>
1
4
9
<5 seconds wait, 3 processes max can run at the same time>
16
25
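For comparison, a sketch of the same example using Pool.map, which also caps the pool at 3 worker processes and returns the results as an ordered list:
from multiprocessing import Pool
import time

def target(r):
    time.sleep(5)
    return r * r

raw = [1, 2, 3, 4, 5]

if __name__ == '__main__':
    with Pool(3) as p:              # still only 3 processes at a time
        print(p.map(target, raw))   # [1, 4, 9, 16, 25], blocks until all tasks finish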
I need to make some network calls for data in my program. I intend to call them in parallel, but not all of them need to complete.
What I have right now is
thread1 = makeNetworkCallThread()
thread1.start()
thread2 = makeLongerNetworkCallThread()
thread2.start()

thread1.join()
foo = thread1.getData()
thread2.join()
if conditionOn(foo):
    foo = thread2.getData()

# continue with code
The problem with this is that even if the shorter network call succeeds, I still have to wait for the time it takes the longer network call to complete.
What will happen if I move the thread2.join() inside the if statement? The join method might never get called. Will that cause some problems with stale threads, etc.?
thread2 will still continue to run (subject to the caveats of the GIL, but since it is a network call that's probably not a concern) whether join is called or not. The difference is whether the main context waits for the thread to end before going on to do other things - if you're able to continue processing without that longer network call completing, then there should be no issues.
Do keep in mind that the program will not actually end (the interpreter will not exit) until all threads have completed. Depending on the latency of this long network call relative to the run time of the rest of your program (in the event you don't wait), it might appear that the program has reached its end but it won't actually exit until the network call wraps up. Consider this silly example:
# Python 2.7
import threading
import time
import logging

def wasteTime(sec):
    logging.info('Thread to waste %d seconds started' % sec)
    time.sleep(sec)
    logging.info('Thread to waste %d seconds ended' % sec)

if __name__ == '__main__':
    logging.basicConfig(format='%(asctime)s %(message)s', level=logging.INFO)
    t1 = threading.Thread(target=wasteTime, args=(2,))
    t2 = threading.Thread(target=wasteTime, args=(10,))
    t1.start()
    t2.start()
    t1.join()
    logging.info('Main context done')
This is the logging output:
$ time python test.py
2015-01-15 09:32:12,239 Thread to waste 2 seconds started
2015-01-15 09:32:12,239 Thread to waste 10 seconds started
2015-01-15 09:32:14,240 Thread to waste 2 seconds ended
2015-01-15 09:32:14,241 Main context done
2015-01-15 09:32:22,240 Thread to waste 10 seconds ended
real 0m10.026s
user 0m0.015s
sys 0m0.010s
Note that although the main context reached its end after 2 seconds (the amount of time it took for thread1 to complete), the program doesn't completely exit until thread2 has completed (ten seconds after the start of execution). In situations like this (particularly if the output is being logged), it's my opinion that it is better to explicitly call join at some point and to state explicitly in your logs that this is what the program is doing, so that it doesn't look to the user/operator like it has hung. For my silly example, that might look like adding lines like these to the end of the main context:
    logging.info('Waiting for thread 2 to complete')
    t2.join()
Which will generate somewhat less mysterious log output:
$ time python test.py
2015-01-15 09:39:18,979 Thread to waste 2 seconds started
2015-01-15 09:39:18,979 Thread to waste 10 seconds started
2015-01-15 09:39:20,980 Thread to waste 2 seconds ended
2015-01-15 09:39:20,980 Main context done
2015-01-15 09:39:20,980 Waiting for thread 2 to complete
2015-01-15 09:39:28,980 Thread to waste 10 seconds ended
real 0m10.027s
user 0m0.015s
sys 0m0.010s
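Applied back to the original pseudocode (the thread classes and helper functions from the question are assumed to exist), that advice might look like the following sketch:
thread1 = makeNetworkCallThread()
thread1.start()
thread2 = makeLongerNetworkCallThread()
thread2.start()

thread1.join()
foo = thread1.getData()
if conditionOn(foo):
    thread2.join()      # only wait for the longer call when its result is needed
    foo = thread2.getData()

# ... continue with code ...

# Before exiting, make the remaining wait explicit so the program doesn't
# appear to hang; joining an already-finished thread is harmless.
print('Waiting for the longer network call to complete')
thread2.join()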