I am learning multi-threading in Python and used the following code from this link:
http://www.tutorialspoint.com/python/python_multithreading.htm
The Pythonwin IDE hangs (not responding) for more than 30 minutes. Please help me if there is some problem in the code.
import thread
import time

# Define a function for the thread
def print_time(threadName, delay):
    count = 0
    while count < 5:
        time.sleep(delay)
        count += 1
        print "%s: %s" % (threadName, time.ctime(time.time()))

# Create two threads as follows
try:
    thread.start_new_thread(print_time, ("Thread-1", 2,))
    thread.start_new_thread(print_time, ("Thread-2", 4,))
except:
    print "Error: unable to start thread"

while 1:
    pass
Remove the last piece of code:
while 1:
    pass
It makes your code run forever, so your IDE won't respond anymore.
If you want to wait for these threads to finish running, you can add time.sleep(35) at the end instead.
The program will never finish.
Your program will print the data five times for each thread and then just hang forever on the main thread.
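Rather than guessing a sleep duration, the main thread can wait exactly as long as the workers need; a minimal sketch of the same program rewritten with the higher-level threading module (same Python 2 style as the question):

import threading
import time

def print_time(threadName, delay):
    # same worker as before: print the time five times
    count = 0
    while count < 5:
        time.sleep(delay)
        count += 1
        print "%s: %s" % (threadName, time.ctime(time.time()))

t1 = threading.Thread(target=print_time, args=("Thread-1", 2))
t2 = threading.Thread(target=print_time, args=("Thread-2", 4))
t1.start()
t2.start()
# join() blocks only until each worker finishes, so the main thread
# exits cleanly instead of spinning in "while 1: pass"
t1.join()
t2.join()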
I hope that title is clear. Please consider the below:
import subprocess
import time

for i in range(0, 20, 5):
    print(i)
    time.sleep(3)
    process1 = subprocess.Popen(["G:\\mydir\\myfile.exe"])
    process1.wait()
    time.sleep(3)
    process2 = subprocess.Popen(["G:\\mydir\\myfile.exe"])
    process2.wait()
    time.sleep(3)
    process3 = subprocess.Popen(["G:\\mydir\\myfile.exe"])
    process3.wait()
...the purpose of this code should be:
1) Iterate from 0 to 20 in steps of 5.
2) For each pass of the loop, open three instances of an executable that will do some stuff.
3) Once all three have finished executing and closed, move on to the next iteration of the loop.
...the idea is that there will never be more than three instances open, but always at least one during each pass of the loop.
However, with the above code, each of the three processes waits for the previous one to end. So there are still three .exe instances per loop, but there is never more than one open at any one time.
What do I need to do to get the desired behaviour?
Thanks
Instead of waiting for each subprocess directly after creating it, wait for all subprocesses at the end of the loop. Both can be done in a nested loop.
import subprocess
import time

for i in range(0, 20, 5):
    print(i)
    child_processes = []
    # open all subprocesses
    for _ in range(3):
        time.sleep(3)
        child_processes.append(subprocess.Popen(["G:\\mydir\\myfile.exe"]))
    # wait on all subprocesses
    for child_process in child_processes:
        child_process.wait()
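If the 3-second stagger between launches isn't needed, the same launch-all-then-wait pattern compresses further; a sketch reusing the question's hypothetical G:\mydir\myfile.exe path, with a return-code check added:

import subprocess

for i in range(0, 20, 5):
    print(i)
    # launch all three instances up front ...
    children = [subprocess.Popen(["G:\\mydir\\myfile.exe"]) for _ in range(3)]
    # ... then wait for every one of them before the next iteration
    for child in children:
        if child.wait() != 0:
            print("a child exited with status", child.returncode)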
I have this script:
import subprocess

p = subprocess.Popen(["myProgram.exe"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE)

while True:
    out, _ = p.communicate(input().encode())
    print(out.decode())
which works fine until the second input, where I get:
ValueError: Cannot send input after starting communication
Is there a way to have multiple messages sent between the parent and child process on Windows?
[EDIT]
I don't have access to the source code of myProgram.exe.
It is an interactive command line application returning results from queries.
Running myProgram.exe < in.txt > out.txt works fine with in.txt containing:
query1;
query2;
query3;
Interacting with another running process via stdin/stdout
To simulate the use case where a Python script starts an interactive command line process and sends/receives text over stdin/stdout, a primary script starts another Python process that runs a simple interactive loop.
This can also be applied to cases where a Python script needs to start another process and just read its output as it arrives, without any interactivity beyond that.
primary script
import subprocess
import threading
import queue
import time

if __name__ == '__main__':

    def enqueue_output(outp, q):
        # runs in a separate thread: forward every line the child
        # writes to stdout into the queue
        for line in iter(outp.readline, ''):
            q.put(line)
        outp.close()

    q = queue.Queue()

    p = subprocess.Popen(["/usr/bin/python", "/test/interact.py"],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         # stderr=subprocess.STDOUT,
                         bufsize=1,
                         encoding='utf-8')

    th = threading.Thread(target=enqueue_output, args=(p.stdout, q))
    th.daemon = True
    th.start()

    for i in range(4):
        print("dir()", file=p.stdin)
        print(f"Iteration ({i}) Parent received: {q.get()}", end='')
        # p.stdin.write("dir()\n")
        # while q.empty():
        #     time.sleep(0)
        # print(f"Parent: {q.get_nowait()}")
interact.py script
if __name__ == '__main__':
    for i in range(2):
        cmd = raw_input()
        print("Iteration (%i) cmd=%s" % (i, cmd))
        result = eval(cmd)
        print("Iteration (%i) result=%s" % (i, str(result)))
output
Iteration (0) Parent received: Iteration (0) cmd=dir()
Iteration (1) Parent received: Iteration (0) result=['__builtins__', '__doc__', '__file__', '__name__', '__package__', 'cmd', 'i']
Iteration (2) Parent received: Iteration (1) cmd=dir()
Iteration (3) Parent received: Iteration (1) result=['__builtins__', '__doc__', '__file__', '__name__', '__package__', 'cmd', 'i', 'result']
This Q&A was leveraged to simulate non-blocking reads from the target process: https://stackoverflow.com/a/4896288/7915759
This method provides a way to check for output without blocking in the main thread; q.empty() will tell you whether any data has arrived. You can also use blocking calls: q.get() blocks indefinitely, and q.get(timeout=2) blocks for up to the given number of seconds, which may be a float smaller than one.
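As a self-contained sketch of those retrieval options (the queue here stands in for the one the reader thread fills):

import queue

q = queue.Queue()

# non-blocking: returns immediately or raises queue.Empty
try:
    line = q.get_nowait()
except queue.Empty:
    line = None

# blocking with a timeout: wait up to half a second for the child's output
try:
    line = q.get(timeout=0.5)
except queue.Empty:
    print("no output from the child yet")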
Text-based interaction between the processes could be done without the thread and queue, but this implementation gives more options for how to retrieve the data coming back.
The Popen() parameters bufsize=1 and encoding='utf-8' make it possible to use <stdout>.readline() from the primary script and set the encoding to an ASCII-compatible codec understood by both processes (1 is not the size of the buffer; it is a symbolic value indicating line buffering).
With this configuration, both processes can simply use print() to send text to each other. It should be compatible with many interactive text-based command line tools.
I'm facing a problem with threads: I have a function which creates 10 threads to do a task. If a keyboard interrupt occurs, those created threads keep executing, and I would like to stop them and revert the changes.
The following code snippet is the sample approach:
def store_to_db(self, keys_size, master_key, action_flag, key_status):
    threads = []
    for iteration in range(10):
        t = threading.Thread(target=self.store_worker, args=())
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

def store_worker(self):
    print "DOING"
The idea to make this work is:
you need a "thread pool" in which each thread checks whether its do_run attribute is falsy.
you need a "sentinel thread" outside that pool which checks the thread status in the pool and sets the do_run attribute of the pool threads on demand.
Example code:
import threading
import random
import time
import msvcrt as ms

def main_logic():
    # take 10 worker threads
    threads = []
    for i in range(10):
        t = threading.Thread(target=lengthy_process_with_brake, args=(i,))
        # start and append
        t.start()
        threads.append(t)
    # start the thread which allows you to stop all threads defined above
    s = threading.Thread(target=sentinel, args=(threads,))
    s.start()
    # join worker threads
    for t in threads:
        t.join()

def sentinel(threads):
    # this one runs while the threads in "threads" are running;
    # a key press tells them to stop
    while True:
        # threads that are still running
        running = [x for x in threads if x.is_alive()]
        # if a key has been pressed
        if ms.kbhit():
            # tell the threads to stop
            for t in running:
                t.do_run = False
        # if all threads have stopped, exit the loop
        if not running:
            break
        # you don't want a high cpu load for nothing
        time.sleep(0.05)

def lengthy_process_with_brake(worker_id):
    # grab the current thread
    t = threading.current_thread()
    # start msg
    print(f"{worker_id} STARTED")
    # exit condition
    zzz = random.random() * 20
    stop_time = time.time() + zzz
    # imagine an iteration here like "for item in items:"
    while time.time() < stop_time:
        # the brake
        if not getattr(t, "do_run", True):
            print(f"{worker_id} IS ESCAPING")
            return
        # the task
        time.sleep(0.03)
    # exit msg
    print(f"{worker_id} DONE")

main_logic()
This solution does not 'kill' the threads, it just tells them to stop iterating (or whatever it is they do).
EDIT:
I just noticed that "Keyboard exception" was in the title, not "any key". Keyboard exception handling is a bit different, but the point is almost the same: you tell the thread to return if a condition is met.
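A minimal sketch of that approach, assuming the stop condition is a shared threading.Event set by the main thread when it catches KeyboardInterrupt (the worker body is a stand-in for real work):

import threading
import time

stop_event = threading.Event()

def worker(worker_id):
    stop_time = time.time() + 10          # simulate up to 10 seconds of work
    while time.time() < stop_time:
        # the brake: return as soon as the main thread sets the event
        if stop_event.is_set():
            print(f"{worker_id} IS ESCAPING")
            return
        time.sleep(0.03)                  # the task
    print(f"{worker_id} DONE")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
try:
    # poll instead of a bare join() so Ctrl+C is handled promptly
    while any(t.is_alive() for t in threads):
        time.sleep(0.05)
except KeyboardInterrupt:
    stop_event.set()                      # tell every worker to return
for t in threads:
    t.join()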
import thread
import time

# Define a function for the thread
def Print_Time(threadname, delay):
    count = 0
    while count < 5:
        time.sleep(delay)
        count += 1
        print "%s %s" % (threadname, time.ctime(time.time()))

# create two threads
try:
    thread.start_new_thread(Print_Time("Thread1", 2))
    thread.start_new_thread(Print_Time("Thread2", 4))
except:
    print "Error: unable to start thread"
When I run this code, thread one is spawned and prints five times in the output. The moment thread two is to be spawned, an exception happens.
Please help me resolve this exception. Thanks in advance.
Why do you use try-except and discard the exception, which would give you more information about what happened? Your own error message Error: unable to start thread obviously won't tell you what went wrong.
try:
    pass # ...
except Exception, e:
    print(e) # start_new_thread expected at least 2 arguments, got 1
When you drop the try or print out the exception, you see that something is wrong with the way you passed the arguments to start_new_thread. The documentation shows how the parameters are passed:
thread.start_new_thread(function, args[, kwargs])
# Wrong - you execute Print_Time yourself
# and pass its return value to start_new_thread
thread.start_new_thread(Print_Time("Thread1", 2))

# Correct
thread.start_new_thread(Print_Time, ("Thread1", 2))
thread.start_new_thread(Print_Time, ("Thread2", 4))
Keep in mind that your main thread may finish before the threads you started do. Wait for them with a timer, or better, use the threading module, which helps you wait on the threads you started.
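For example, a minimal sketch of the same program on top of threading (keeping the question's Python 2 syntax); join() keeps the main thread alive until both workers are done:

import threading
import time

def Print_Time(threadname, delay):
    count = 0
    while count < 5:
        time.sleep(delay)
        count += 1
        print "%s %s" % (threadname, time.ctime(time.time()))

t1 = threading.Thread(target=Print_Time, args=("Thread1", 2))
t2 = threading.Thread(target=Print_Time, args=("Thread2", 4))
t1.start()
t2.start()
# wait for both workers instead of letting the main thread exit first
t1.join()
t2.join()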
I'm running Jython 2.5.3 on Ubuntu 12.04 with the OpenJDK 64-bit 1.7.0_55 JVM.
I'm trying to create a simple threaded application to optimize data processing and loading. I have populator threads that read records from a database and mangle them a bit before putting them onto a queue. The queue is read by consumer threads that store the data in a different database. Here is the outline of my code:
import sys
import time
import threading
import Queue

class PopulatorThread(threading.Thread):
    def __init__(self, mod, mods, queue):
        super(PopulatorThread, self).__init__()
        self.mod = mod
        self.mods = mods
        self.queue = queue

    def run(self):
        # Create db connection
        # ...
        try:
            # Select one segment of records using 'id % mods = mod'
            # Process these records & slap them onto the queue
            # ...
            pass
        except:
            con.rollback()
            raise
        finally:
            print "Made it to 'finally' in populator %d" % self.mod
            con.close()

class ConsumerThread(threading.Thread):
    def __init__(self, mod, queue):
        super(ConsumerThread, self).__init__()
        self.mod = mod
        self.queue = queue

    def run(self):
        # Create db connection
        # ...
        try:
            while True:
                item = self.queue.get()
                if not item: break
                # Put records from the queue into
                # a different database
                # ...
                self.queue.task_done()
        except:
            con.rollback()
            raise
        finally:
            print "Made it to 'finally' in consumer %d" % self.mod
            con.close()

def main(argv):
    tread1Count = 3
    tread2Count = 4

    # This is the notefactsselector data queue
    nfsQueue = Queue.Queue()

    # Start consumer/writer threads
    j = 0
    treads2 = []
    while j < tread2Count:
        treads2.append(ConsumerThread(j, nfsQueue))
        treads2[-1].start()
        j += 1

    # Start reader/populator threads
    i = 0
    treads1 = []
    while i < tread1Count:
        treads1.append(PopulatorThread(i, tread1Count, nfsQueue))
        treads1[-1].start()
        i += 1

    # Wait for reader/populator threads
    print "Waiting to join %d populator threads" % len(treads1)
    i = 0
    for tread in treads1:
        "Waiting to join a populator thread %d" % i
        tread.join()
        i += 1

    # Add one sentinel value to queue for each write thread
    print "Adding sentinel values to end of queue"
    for tread in treads2:
        nfsQueue.put(None)

    # Wait for consumer/writer threads
    print "Waiting to join consumer/writer threads"
    for tread in treads2:
        print "Waiting on a consumer/writer"
        tread.join()

    # Wait for Queue
    print "Waiting to join queue with %d items" % nfsQueue.qsize()
    nfsQueue.join()
    print "Queue has been joined"

if __name__ == '__main__':
    main(sys.argv)
I have simplified the database implementation somewhat to save space.
When I run the code, the populator and consumer threads seem to reach the end, since I get the "Made it to 'finally' in ..." messages.
I get the "Waiting to join n populator threads" message, but not the "Waiting to join a populator thread n" messages.
I get the "Waiting to join consumer/writer threads" message as well as each of the "Waiting on a consumer/writer" messages I expect.
I get the "Waiting to join queue with 0 items" message I expect, but not the "Queue has been joined" message; apparently the program is blocking while waiting for the queue, and it never terminates.
I suspect I have my thread initializations or thread joins in the wrong order somehow, but I have little experience with concurrent programming, so my intuitions about how to do things aren't well developed. I can find plenty of Python/Jython examples of queues populated by while loops and read by threads, but none so far of queues populated by one set of threads and read by a different set.
The populator and consumer threads appear to finish.
The program seems to be blocking at the end, waiting for the Queue object to terminate.
Thanks to anyone who has suggestions and lessons for me!
Are you calling task_done() on each item in the queue when you are done processing it? If you don't tell the queue explicitly that each task is done, it'll never return from join().
PS: You don't see "Waiting to join a populator thread %d" because you forgot the print in front of it :)
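To make that concrete, here is a small self-contained sketch (Python 2 / Jython style, the database work elided) in which every get() is balanced by a task_done(), sentinels included, so join() can return:

import threading
import Queue

def consumer(q):
    while True:
        item = q.get()
        try:
            if item is None:
                break              # sentinel: this consumer is done
            # ... write the record to the other database here ...
        finally:
            # balance every get() with a task_done(), the sentinel
            # included, otherwise q.join() below waits forever
            q.task_done()

q = Queue.Queue()
consumers = [threading.Thread(target=consumer, args=(q,)) for _ in range(4)]
for t in consumers:
    t.start()
for record in range(10):       # stand-in for the populators' records
    q.put(record)
for _ in consumers:
    q.put(None)                 # one sentinel per consumer
q.join()                       # returns once every item is marked done
for t in consumers:
    t.join()
print "Queue has been joined"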