I created two threads and executed them in parallel, but astonishingly it took more time (33.5 secs) than sequential execution (29.4 secs). Please advise: what am I doing wrong?
import timeit
from threading import Thread

def write_File(fName):
    start = timeit.default_timer()
    print('writing to {}!\n'.format(fName))
    with open(fName, 'a') as f:
        for i in range(0, 10000000):
            f.write("aadadadadadadadadadadadada" + str(i))
    end = timeit.default_timer()
    print(end - start)
    print('Fn exit!')

start = timeit.default_timer()
t1 = Thread(target=write_File, args=('test.txt',))
t1.start()
t2 = Thread(target=write_File, args=('test1.txt',))
t2.start()
t2.join()
end = timeit.default_timer()
print(end - start)
input('enter to exit')
You aren't doing anything wrong. You fell victim to Python's global interpreter lock (GIL). Only one thread can execute Python bytecode at a time, so under the hood of a CPython program even multiple cores have to share one instance of the Python interpreter.
Python threads switch when one goes to sleep or is waiting on I/O, so you would see a performance benefit for tasks such as:
import socket
import threading

def do_connect():
    s = socket.socket()
    s.connect(('python.org', 80))  # blocking network I/O drops the GIL

for i in range(2):
    t = threading.Thread(target=do_connect)
    t.start()
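For the CPU-and-disk-heavy file writing in the question, the usual way to get real parallelism is separate processes rather than threads. Below is a minimal sketch, assuming the same two output files and a shortened loop; it is not a guaranteed speedup, since both processes still share the same disk.

import timeit
from multiprocessing import Process

def write_file(fname):
    # same idea as write_File above, shortened for illustration
    with open(fname, 'a') as f:
        for i in range(1000000):
            f.write("aadadadadadadadadadadadada" + str(i))

if __name__ == '__main__':
    start = timeit.default_timer()
    procs = [Process(target=write_file, args=(name,)) for name in ('test.txt', 'test1.txt')]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(timeit.default_timer() - start)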
Running the code below, I noticed that executor.submit(printer, i) is called for each value of i in range(100) before even the first process finishes. However, since I have set max_workers=3, only three processes can run at a time. Say the program starts and processes for values zero through two are running; at this moment, what happens to the executor.submit(printer, i) called for values three through ninety-nine? And if the answer is "they're stored in memory", is there a way I can calculate how much memory each pending process might take?
import time
from concurrent.futures import ProcessPoolExecutor

def printer(i):
    print(i)
    end_time = time.time() + 1
    while time.time() < end_time:
        pass

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=3) as executor:
        for i in range(100):
            print(i)
            executor.submit(printer, i)
Also, would it be the same if I were to use executor.map(printer, range(100)) instead of the loop?
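One way to convince yourself of what submit() does is the small experiment below, under my own assumptions (sleeper stands in for printer, and the counts are approximate): every submit() call returns a lightweight Future object immediately, and the pending work items simply wait inside the executor until one of the three worker processes frees up.

import time
from concurrent.futures import ProcessPoolExecutor

def sleeper(i):
    time.sleep(1)
    return i

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=3) as executor:
        t0 = time.time()
        futures = [executor.submit(sleeper, i) for i in range(100)]
        # all 100 submit() calls return almost instantly; the futures and their
        # arguments are what sits in memory while they wait for a free worker
        print("submitted 100 tasks in {:.3f}s".format(time.time() - t0))
        print("finished so far:", sum(f.done() for f in futures))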
I'm not very good at programming, but I'm currently writing a multiplication practice program for my brother and was wondering whether there is a way to make him answer within a certain amount of time or else he fails the question. Here is my code:
import random

F = 1
while F == 1:
    x = random.randint(1, 10)
    y = random.randint(1, 10)
    Result = y * x
    print(y, "*", x)
    Input = int(input())
    if Result == Input:
        print("correct")
    else:
        print("Wrong, correct result:", Result)
I hope this is good enough. I would appreciate any help! Thanks a lot in advance.
You can use the threading module to create a separate timer thread. If that timer thread has already finished by the time the answer comes in, the time limit has expired and the program can tell him he was too late.
Here's the solution:
import random
from threading import Thread
from time import sleep

def timer():
    sleep(10)  # the thread finishes 10 seconds after the question is asked

if __name__ == '__main__':
    while True:
        x = random.randint(1, 10)
        y = random.randint(1, 10)
        Result = y * x
        print(y, "*", x)
        timer_thread = Thread(target=timer)  # sub-thread that tracks the time limit
        timer_thread.start()
        Input = int(input())
        if not timer_thread.is_alive():  # the timer already expired before the answer came in
            print('You got late, Failed')
            break
        if Result == Input:
            print("correct")
        else:
            print("Wrong, correct result:", Result)
If you call time.sleep() on your main thread, your whole program hangs for that period, so instead I created a new thread that runs completely independently of the main thread and doesn't block it.
You can define your own countdown using Python's time module.
For example:
import time

def timer(t):  # t is the length of the countdown in seconds
    while t:
        mins, secs = divmod(t, 60)
        countdown = '{:02d}:{:02d}'.format(mins, secs)
        print(countdown, end='\r')
        time.sleep(1)
        t = t - 1
    print("Time's Up")
I am trying to start a thread to listen for incoming messages from a socket, so it contains an infinite loop. But when I try to close the GUI, it hangs and does not close. Here is a more simplified version without any GUI:
import threading, time, sys

def f(x):
    while True:
        time.sleep(0.5)
        print(x)

timer = threading.Timer(0.1, f, ("some text",))
timer.start()
time.sleep(2)
print("time to stop")
sys.exit()
As you can see, the sys.exit() line won't end all the threads (the main thread and the thread started by the timer).
Now I was wondering how to kill that specific thread started by the timer.
Thank you for your help.
I finally found a solution. We can use a global variable to end the endless loop inside the thread and therefore let it close.
import threading, time

def f(x):
    global z
    while True:
        time.sleep(0.5)
        print(x)
        if not z:
            break

z = True
timer = threading.Timer(0.1, f, ("some text",))
timer.start()
time.sleep(2)
print("time to stop")
z = False
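A slightly cleaner variant of the same idea, if it helps, is a threading.Event instead of a bare global flag; this is just a sketch of the same stop-the-loop pattern:

import threading, time

stop_event = threading.Event()

def f(x):
    while not stop_event.is_set():
        time.sleep(0.5)
        print(x)

timer = threading.Timer(0.1, f, ("some text",))
timer.start()
time.sleep(2)
print("time to stop")
stop_event.set()  # the loop sees the flag on its next iteration and exits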
I have the following Python program using threads. I am unable to understand why it does not terminate after execution. Please suggest possible reasons and how to overcome this problem. Here is the code:
import time
from threading import *

lock1 = Lock()

def func(string):
    for i in range(5):
        lock1.acquire()
        print(string)
        lock1.release()
        time.sleep(0.1)

t1 = Thread(target=func, args=('Hello from t1',))
t2 = Thread(target=func, args=('Hello from t2',))
t1.start()
t2.start()
print(t1.name)
The reason is simple: it is not ending because it never makes an explicit exit from the main thread. Moreover, it may run in IDLE but will not run the same way from a shell.
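If the goal is simply to make the main thread wait for both workers before it finishes, one common pattern (a sketch, not necessarily the only fix) is to join them explicitly at the end of the script above:

t1.join()  # wait for the first worker to finish its five prints
t2.join()  # then wait for the second
print("both threads finished")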
I wrote a script that uses 2 queues and 3 types of worker: producer, consumer (CPU-bound task), writer (I need to write the results sequentially).
This is the simplified version of my code:
from queue import Queue
from threading import Thread

def compute_single_score(data):
    # do lots of calculations
    return 0.0

def producer(out_q, data_to_compute):
    while stuff:
        data = data_to_compute.popitem()
        out_q.put(data)
    out_q.put(_sentinel)

def consumer(in_q, out_q):
    while True:
        data = in_q.get()
        if data is _sentinel:
            in_q.put(_sentinel)
            break
        out_q.put([data[0], compute_single_score(*data)])
        in_q.task_done()

def writer(in_q):
    while True:
        data = in_q.get()
        if data is _sentinel:
            in_q.put(_sentinel)
            break
        in_q.task_done()

if __name__ == '__main__':
    _sentinel = object()
    jobs_queue = Queue()
    scores_queue = Queue()
    t1 = Thread(target=producer, args=(jobs_queue, data_to_compute,))
    t2 = Thread(target=consumer, args=(jobs_queue, scores_queue,))
    t3 = Thread(target=consumer, args=(jobs_queue, scores_queue,))
    t4 = Thread(target=consumer, args=(jobs_queue, scores_queue,))
    t5 = Thread(target=consumer, args=(jobs_queue, scores_queue,))
    t6 = Thread(target=consumer, args=(jobs_queue, scores_queue,))
    t7 = Thread(target=consumer, args=(jobs_queue, scores_queue,))
    t8 = Thread(target=consumer, args=(jobs_queue, scores_queue,))
    t9 = Thread(target=writer, args=(scores_queue,))
    t1.start(); t2.start(); t3.start(); t4.start(); t5.start(); t6.start(); t7.start(); t8.start(); t9.start()
    jobs_queue.join()
    scores_queue.join()
    print('File written')
It immediately prints out 'File written' instead of waiting for the queues to be empty. Consequently the script doesn't exit even though all the calculations are performed. Two threads seem to remain active.
Thanks a lot for your support.
It does wait for the queues to be empty. But since the puts happen in other threads, the main thread reaches the .join() lines faster than the .put() calls happen, so when it gets there the queues are still empty.
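A quick way to see this, as a tiny standalone demo: Queue.join() only blocks while there are unfinished tasks, so joining a queue that nothing has been put on yet returns immediately.

from queue import Queue

q = Queue()
q.join()  # nothing was ever put(), so there are no unfinished tasks and this returns at once
print("joined an empty queue without blocking")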
Now, I'm not sure what you are trying to achieve, simply because the producer has a while stuff loop; I assume you want to continue processing while that condition is true. In particular, you have to wait until the t1 thread quits, i.e.:
t1.start(); t2.start(); t3.start(); t4.start(); t5.start(); t6.start(); t7.start(); t8.start(); t9.start()
t1.join() # <-- this is important
jobs_queue.join()
scores_queue.join()
print('File written')
Otherwise you won't be able to synchronize it.
Side note 1: due to the GIL there is no point in creating CPU-bound threads. If your threads are not doing any I/O (and here they aren't), the work will perform better single-threaded. At the very least, the multiple consumer threads are pointless.
Side note 2: Do not chain all those starts with semicolons on one line; it's not Pythonic. Instead do this:
threads = []
threads.append(Thread(target=producer, args=(jobs_queue, data_to_compute,)))
threads.append(Thread(target=writer, args=(scores_queue,)))
for i in range(10):
    threads.append(Thread(target=consumer, args=(jobs_queue, scores_queue,)))

for t in threads:
    t.start()

threads[0].join()
Side note 3: You should handle the case when the queues are empty. data = in_q.get() will block forever, meaning your script won't quit (unless the threads are marked as daemons). You should do, for example:
try:
    data = in_q.get(timeout=1)
except queue.Empty:
    # handle the empty queue here, perhaps quit if t1 is not alive
    # otherwise just continue the loop
    if not t1.is_alive():  # <-- you have to pass t1 to the thread
        break
    else:
        continue
and then join all threads at the end of the main thread (see side note 2):
for t in threads:
    t.start()

for t in threads:
    t.join()

print('File written')
And now you don't even have to join queues.
This is the code I used in the end (according to the requirements illustrated before):
from multiprocessing import JoinableQueue
from multiprocessing import Process

def compute_single_score(data):
    # do lots of calculations
    return 0.0

def producer(out_q, data_to_compute):
    while stuff:
        data = data_to_compute.popitem()
        out_q.put(data)

def consumer(in_q, out_q):
    while True:
        try:
            data = in_q.get(timeout=5)
        except:
            break
        out_q.put([data[0], compute_single_score(*data)])
        in_q.task_done()

def writer(in_q):
    while True:
        try:
            data = in_q.get(timeout=5)
        except:
            break
        # write
        in_q.task_done()

if __name__ == '__main__':
    jobs_queue = JoinableQueue()
    scores_queue = JoinableQueue()
    processes = []
    processes.append(Process(target=producer, args=(jobs_queue, data_to_compute,)))
    processes.append(Process(target=writer, args=(scores_queue,)))
    for i in range(10):
        processes.append(Process(target=consumer, args=(jobs_queue, scores_queue,)))
    for p in processes:
        p.start()
    processes[1].join()
    scores_queue.join()
    print('File written')
I hope it will be of help to somebody else.