Python: simple parallel example not working - python-3.x

I am trying to learn about multiprocessing in Python and I was having a look at this tutorial.
According to it, the following code should run in about 1 second:
import multiprocessing
import time

def do_something():
    print('Sleeping...')
    time.sleep(1.0)
    print('Done sleeping!')

if __name__ == '__main__':
    start = time.perf_counter()
    p1 = multiprocessing.Process(target=do_something)
    p2 = multiprocessing.Process(target=do_something)
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    finish = time.perf_counter()
    print('Time', finish - start)
but when I run it, I get an execution time of more than 2 seconds.
Sleeping...
Sleeping...
Done sleeping!
Done sleeping!
Time 2.1449603
Why is that happening? What am I missing?
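A likely culprit (an assumption; the post does not say which OS or how the script was launched) is per-process start-up overhead: on Windows, multiprocessing spawns a fresh interpreter for each Process, which can add a sizeable fraction of a second per child. A minimal sketch to measure that overhead separately from the sleep:

```python
import multiprocessing
import time

def noop():
    pass

def spawn_overhead():
    # Time just starting and joining an empty process; with the
    # "spawn" start method (the default on Windows) this launches a
    # fresh interpreter, which is not free.
    start = time.perf_counter()
    p = multiprocessing.Process(target=noop)
    p.start()
    p.join()
    return time.perf_counter() - start

if __name__ == '__main__':
    print('Per-process overhead:', spawn_overhead())
```

If the overhead measured here is large, a total of roughly 2 seconds is explained by start-up cost plus the 1-second concurrent sleep, not by the sleeps running sequentially.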

Related

I keep getting an error in Python 3.9. Does anyone have a solution? It's not taking in "import time" and "Lock".

from threading import Thread, Lock, current_thread
from queue import Queue
import time

def worker(q, Lock):
    while True:
        value = q.get()
        # processing...
        print(f'in {current_thread().name} got {value}')
        q.task_done()

if __name__ == "__main__":
    q = Queue()
    num_threads = 10
    for i in range(num_threads):
        thread = Thread(target=worker)
        thread.daemon = True
        thread.start()
    for i in range(1, 21):
        q.put(i)
    q.join()
    print('end main')
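One way to make the snippet above run (a sketch of the likely intent, not the original author's fix) is to actually pass the queue to each worker via args= and drop the unused Lock parameter, which was shadowing the imported name anyway:

```python
from threading import Thread, current_thread
from queue import Queue

def worker(q):
    # Block on the queue; daemon threads die with the main thread.
    while True:
        value = q.get()
        print(f'in {current_thread().name} got {value}')
        q.task_done()

def run(num_threads=10, num_items=20):
    q = Queue()
    for _ in range(num_threads):
        # args=(q,) is what the original Thread(...) call was missing.
        Thread(target=worker, args=(q,), daemon=True).start()
    for i in range(1, num_items + 1):
        q.put(i)
    q.join()  # returns once every item has been task_done()'d
    return num_items

if __name__ == '__main__':
    run()
    print('end main')
```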

How to control thread execution in Python3

The Python function below executes every 30 seconds. I would like to stop the thread after 1 hour (60 minutes) of total run time. How can I achieve this? Or is polling a better approach?
import time, threading
import datetime

def hold():
    print(datetime.datetime.now())
    threading.Timer(30, hold).start()

if __name__ == '__main__':
    hold()
You could simply make use of the time module, in the following way:
import time, threading
import datetime

def hold():
    start = time.time()
    while True:
        print(datetime.datetime.now())
        # Sleep for 30 secs
        time.sleep(30)
        # Check whether 1 hr has passed since the start of execution
        end = time.time()
        if end - start >= 3600:
            print(datetime.datetime.now())
            break

if __name__ == '__main__':
    # Initiating the thread
    thread1 = threading.Thread(target=hold)
    thread1.start()
    thread1.join()
    print("Thread execution complete")
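An alternative sketch that avoids fixed sleeps: a threading.Event lets the main thread cancel the loop at any moment, and Event.wait doubles as the 30-second pause (the interval and total here are parameters I've added for illustration):

```python
import datetime
import threading
import time

def hold(stop_event, interval=30.0, total=3600.0):
    # Event.wait() sleeps for up to `interval` seconds but returns
    # True immediately if stop_event is set, so the loop can be
    # cancelled without waiting out the full pause.
    deadline = time.monotonic() + total
    while time.monotonic() < deadline:
        print(datetime.datetime.now())
        if stop_event.wait(interval):
            break  # cancelled from outside

if __name__ == '__main__':
    stop = threading.Event()
    t = threading.Thread(target=hold, args=(stop,))
    t.start()
    # From anywhere, at any time: stop.set() ends the loop early.
    stop.set()
    t.join()
    print("Thread execution complete")
```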

Issue with running multiple cplex models in parallel using multiprocessing

I want to solve multiple CPLEX models simultaneously using Python multiprocessing. I understand that the basic example of multiprocessing in Python is something like:
from multiprocessing import Process

def func1():
    '''some code'''

def func2():
    '''some code'''

if __name__ == '__main__':
    p1 = Process(target=func1)
    p1.start()
    p2 = Process(target=func2)
    p2.start()
    p1.join()
    p2.join()
The structure of my script is as follows:
def Model1(**kwargs):
    '''cplex model written with docplex'''
    return model

def Model2(**kwargs):
    '''cplex model written with docplex'''
    return model

def Generate_pool1(**kwargs):
    cpx = mdl.get_cplex()
    cpx.parameters.parallel.set(1)
    cpx.parameters.threads.set(5)
    cpx.parameters.emphasis.mip.set(4)
    cpx.parameters.simplex.tolerances.markowitz.set(0.999)
    cpx.parameters.simplex.tolerances.optimality.set(1e-9)
    cpx.parameters.simplex.tolerances.feasibility.set(1e-9)
    cpx.parameters.mip.pool.intensity.set(4)
    cpx.parameters.mip.pool.absgap.set(1e75)
    cpx.parameters.mip.pool.relgap.set(1e75)
    cpx.populatelim = 50
    numsol = cpx.solution.pool.get_num()
    return numsol

def Generate_pool2(**kwargs):
    cpx = mdl.get_cplex()
    cpx.parameters.parallel.set(1)
    cpx.parameters.threads.set(5)
    cpx.parameters.emphasis.mip.set(4)
    cpx.parameters.simplex.tolerances.markowitz.set(0.999)
    cpx.parameters.simplex.tolerances.optimality.set(1e-9)
    cpx.parameters.simplex.tolerances.feasibility.set(1e-9)
    cpx.parameters.mip.pool.intensity.set(4)
    cpx.parameters.mip.pool.absgap.set(1e75)
    cpx.parameters.mip.pool.relgap.set(1e75)
    cpx.populatelim = 50
    numsol = cpx.solution.pool.get_num()
    return numsol

def main():
    for i in range(len(data) - 1):
        m1 = Model1(data[i])
        m2 = Model2(data[i + 1])
        p1 = Process(target=Generate_pool1, args=(m1, i))
        p1.start()
        p2 = Process(target=Generate_pool2, args=(m2, i + 1))
        p2.start()
        p1.join()
        p2.join()
When I run this code, the cplex part doesn't work. The console keeps running but nothing happens, and the process never finishes on its own; I have to interrupt it from the keyboard every time. My machine has 32 virtual cores and runs Spyder on Windows 10.
With docplex you may find an example at https://www.linkedin.com/pulse/making-optimization-simple-python-alex-fleischer/ and at
https://github.com/PhilippeCouronne/docplex_contribs/blob/master/docplex_contribs/src/zoomontecarlo2.py
which uses
https://github.com/PhilippeCouronne/docplex_contribs/blob/master/docplex_contribs/src/process_pool.py
which in turn relies on
    import concurrent.futures
    from concurrent.futures import ProcessPoolExecutor
This example relies on docplex.
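The pattern those links build on can be reduced to a minimal, docplex-free sketch; solve_model here is a stand-in I've invented for a real model build-and-solve:

```python
from concurrent.futures import ProcessPoolExecutor

def solve_model(index):
    # Stand-in for building and solving one docplex/CPLEX model;
    # returns a fake "number of solutions" for illustration.
    return index * index

def run_pool(indices):
    # Each call runs in its own worker process, so per-process
    # CPLEX state and the GIL are not shared between solves.
    with ProcessPoolExecutor(max_workers=2) as pool:
        return list(pool.map(solve_model, indices))

if __name__ == '__main__':
    print(run_pool([1, 2, 3]))
```

A pool also avoids hand-managing Process objects in a loop, which is where the original script passes its arguments positionally into Process's name parameter instead of args=.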

Python 3 threads performance issue

I created two threads and executed them in parallel, but astonishingly it took more time (33.5 secs) than sequential execution (29.4 secs). Please advise: what am I doing wrong?
import timeit
from threading import Thread

def write_File(fName):
    start = timeit.default_timer()
    print('writing to {}!\n'.format(fName))
    with open(fName, 'a') as f:
        for i in range(0, 10000000):
            f.write("aadadadadadadadadadadadada" + str(i))
    end = timeit.default_timer()
    print(end - start)
    print('Fn exit!')

start = timeit.default_timer()
t1 = Thread(target=write_File, args=('test.txt',))
t1.start()
t2 = Thread(target=write_File, args=('test1.txt',))
t2.start()
t1.join()
t2.join()
end = timeit.default_timer()
print(end - start)
input('enter to exit')
You aren't doing anything wrong. You fell victim to Python's global interpreter lock (GIL): only one thread can use the interpreter at a time, so under the hood of a CPython program all threads share a single interpreter instance, whatever the core count. Python threads switch when one goes to sleep or awaits I/O, so you would see a performance benefit for tasks such as:
import socket
import threading

def do_connect():
    s = socket.socket()
    s.connect(('python.org', 80))  # blocks on I/O and drops the GIL

for i in range(2):
    t = threading.Thread(target=do_connect)
    t.start()
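The same effect can be demonstrated without touching the network, since time.sleep also releases the GIL; a sketch:

```python
import threading
import time

def sleeper(seconds):
    # time.sleep releases the GIL, so many threads can sleep at once.
    time.sleep(seconds)

def run_threads(n, seconds):
    start = time.perf_counter()
    threads = [threading.Thread(target=sleeper, args=(seconds,))
               for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == '__main__':
    # Ten 1-second sleeps finish in about 1 second, not 10.
    print(run_threads(10, 1.0))
```

CPU-bound work like the file-writing loop above gets no such overlap, which is why the threaded version was slower than running the two calls back to back.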

Pass parameters in threads in Python

I want to pass a parameter between two functions, but it doesn't work properly.
from multiprocessing import Process

a = False

def func1():
    a = True

def func2():
    if a:
        print("hi")

if __name__ == '__main__':
    p1 = Process(target=func1)
    p1.start()
    p2 = Process(target=func2)
    p2.start()
    p1.join()
    p2.join()
Any suggestion would be appreciated.
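The likely issue is twofold: each Process gets its own copy of the module globals, so func1's assignment is invisible to func2, and inside func1 the bare a = True only rebinds a local name anyway. A sketch using multiprocessing.Value as genuinely shared state (an alternative approach, not from the original thread):

```python
from multiprocessing import Process, Value

def func1(flag):
    flag.value = 1  # write to shared memory, visible across processes

def func2(flag):
    if flag.value:
        print("hi")

if __name__ == '__main__':
    a = Value('i', 0)  # a shared int; 0 plays the role of False
    p1 = Process(target=func1, args=(a,))
    p1.start()
    p1.join()  # ensure func1 has written before func2 reads
    p2 = Process(target=func2, args=(a,))
    p2.start()
    p2.join()
```

For richer data than a single flag, a multiprocessing.Queue or Pipe is the usual way to send values between processes.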
