I'm finding it difficult to figure out how to use ipyparallel from jupyter lab to execute two functions in parallel. Could someone please give me an example of how this should be done? For example, running these two functions at the same time:
import time

def foo():
    print('foo')
    time.sleep(5)

def bar():
    print('bar')
    time.sleep(10)
So first you will need to ensure that ipyparallel is installed and an ipcluster is running (see the ipyparallel documentation for instructions).
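If you are on a recent version of ipyparallel (7 or later), the cluster can also be started directly from Python; a rough sketch, assuming that newer API:

# Sketch: start a local cluster with two engines from Python (ipyparallel >= 7).
# On older versions, run `ipcluster start -n 2` in a terminal instead.
import ipyparallel as ipp

cluster = ipp.Cluster(n=2)
rc = cluster.start_and_connect_sync()  # returns a Client connected to the engines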
Once you have done that, here is some adapted code that will run your two functions in parallel:
from ipyparallel import Client

rc = Client()

def foo():
    import time
    time.sleep(5)
    return 'foo'

def bar():
    import time
    time.sleep(10)
    return 'bar'

res1 = rc[0].apply(foo)
res2 = rc[1].apply(bar)
results = [res1, res2]

while not all(map(lambda ar: ar.ready(), results)):
    pass

print(res1.get(), res2.get())
N.B. I removed the print statements because you can't call back from the child process into the parent Jupyter session in order to print, but we can of course return a result. I block here until both results are complete, but you could instead print the results as they become available.
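For example, a small sketch of that variant, using only the ready()/get() calls already shown above:

import time

# Print each result as soon as its engine finishes,
# instead of blocking until both are done.
pending = [res1, res2]
while pending:
    for ar in list(pending):
        if ar.ready():
            print(ar.get())
            pending.remove(ar)
    time.sleep(0.1)  # small pause so we don't spin at 100% CPU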
Related
I have the below simple functions:
import time

def foo():
    print(f'i am working come back in 5mins')
    time.sleep(300)

def boo():
    print(f' boo!')

def what_ever_function():
    print(f'do whatever function user input at run time.')
What I wish to do is execute foo() and then immediately execute boo() or what_ever_function() without having to wait for 300 seconds for foo() to finish.
Imagine a workflow in IPython:
>>> foo()
i am working come back in 5mins
>>> boo()
boo!
The idea is that after executing foo(), I can use the prompt to run another function immediately, whatever that function may be, without having to wait 300 seconds for foo() to finish.
I already tried googling:
https://docs.python.org/3/library/asyncio.html
and
https://docs.python.org/3/library/threading.html#
But still couldn't achieve the above task.
Any pointer or help please?
Thanks
If you use asyncio, you should use asyncio.sleep instead of time.sleep, because time.sleep would block the asyncio event loop. Here is a working example:
import asyncio

async def foo():
    print("Waiting...")
    await asyncio.sleep(5)
    print("Done waiting!")

async def bar():
    print("Hello, world!")

async def main():
    t1 = asyncio.create_task(foo())
    await asyncio.sleep(1)
    t2 = asyncio.create_task(bar())
    await t1
    await t2

if __name__ == "__main__":
    asyncio.run(main())
In this example, foo and bar run concurrently: bar executes while foo is still waiting for its sleep to finish.
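If the one-second stagger before starting bar isn't important, asyncio.gather expresses the same concurrency more compactly; a minimal variant of the example above:

import asyncio

async def main():
    # Schedule foo and bar together and wait for both to finish.
    await asyncio.gather(foo(), bar())

asyncio.run(main())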
I'm trying to write a program that interfaces with hardware via pyserial according to this diagram: https://github.com/kiyoshi7/Intrument/blob/master/Idea.gif . My problem is that I don't know how to tell the child process to run a method.
I tried reducing my problem down to its essence: I want to be able to call the method request() from the main script. I just don't know how to handle two-way communication like this; in the examples using a queue I only see data being shared, or I can't understand the examples.
import multiprocessing
from time import sleep

class spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max
        self.Update()

    def request(self, x):
        print("{} was requested.".format(x))

    def Update(self):
        while True:
            print("Spawned {} of {}".format(self._number, self._max))
            sleep(2)

if __name__ == '__main__':
    p = multiprocessing.Process(target=spawn, args=(1, 1))
    p.start()
    sleep(5)
    p.request(2)  # here I'm trying to run the method I want
Update, thanks to Carcigenicate:
import multiprocessing
from time import sleep
from operator import methodcaller

class Spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max
        # Don't call update here

    def request(self, x):
        print("{} was requested.".format(x))

    def update(self):
        while True:
            print("Spawned {} of {}".format(self._number, self._max))
            sleep(2)

if __name__ == '__main__':
    spawn = Spawn(1, 1)  # Create the object as normal
    p = multiprocessing.Process(target=methodcaller("update"), args=(spawn,))  # Run the loop in the process
    p.start()

    while True:
        sleep(1.5)
        spawn.request(2)  # Now you can reference the "spawn"
You're going to need to rearrange things a bit. I would not do the long-running (infinite) work in the constructor. That's generally poor practice, and it's complicating things here. I would instead initialize the object, then run the loop in a separate process:
import multiprocessing
from time import sleep
from operator import methodcaller

class Spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max
        # Don't call update here

    def request(self, x):
        print("{} was requested.".format(x))

    def update(self):
        while True:
            print("Spawned {} of {}".format(self._number, self._max))
            sleep(2)

if __name__ == '__main__':
    spawn = Spawn(1, 1)  # Create the object as normal
    p = multiprocessing.Process(target=methodcaller("update"), args=(spawn,))  # Run the loop in the process
    p.start()
    spawn.request(2)  # Now you can reference the "spawn" object to do whatever you like
Unfortunately, since Process requires that its target argument be picklable, you can't just use a lambda wrapper like I originally had (whoops). I'm using operator.methodcaller to create a picklable wrapper. methodcaller("update") returns a function that calls update on whatever is given to it, and then we give it spawn to call it on.
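A tiny illustration of what methodcaller builds (using request here so the call returns immediately):

from operator import methodcaller

# methodcaller("request", 2) builds a picklable callable f such that
# f(obj) is the same as obj.request(2).
f = methodcaller("request", 2)
f(spawn)  # prints "2 was requested."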
You could also create a wrapper function using def:
def wrapper():
    spawn.update()

. . .

p = multiprocessing.Process(target=wrapper)  # Run the loop in the process
But that only works if it's feasible to have wrapper as a global function. You may need to play around to find out what works best, or use a multiprocessing library that doesn't require pickleable tasks.
Note, please use proper Python naming conventions. Class names start with capitals, and method names are lowercase. I fixed that up in the code I posted.
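One more caveat, not covered in the original answer: once the process has started, the parent and child each have their own copy of spawn, so calling spawn.request(2) in the parent does not reach the copy whose update() loop is running in the child. If you do need the parent to trigger request() inside the child, which is the two-way communication the question asks about, a command queue is a common pattern. A rough sketch of that idea:

import multiprocessing
from time import sleep

class Spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max

    def request(self, x):
        print("{} was requested.".format(x))

    def update(self, commands):
        # Runs in the child process: do the periodic work and
        # service any requests the parent has queued up.
        while True:
            print("Spawned {} of {}".format(self._number, self._max))
            while not commands.empty():
                self.request(commands.get())
            sleep(2)

if __name__ == '__main__':
    commands = multiprocessing.Queue()
    spawn = Spawn(1, 1)
    # In Python 3 a bound method of a picklable object can itself be pickled,
    # so spawn.update works as the target; otherwise fall back to methodcaller.
    p = multiprocessing.Process(target=spawn.update, args=(commands,), daemon=True)
    p.start()
    sleep(5)
    commands.put(2)  # the child's update() loop will pick this up and call request(2)
    sleep(5)         # give the child time to process it before the demo exits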
I'm getting familiar with Python's asyncio: asynchronous programming in Python, coroutines, etc.
I want to be able to execute several coroutines with my own custom-made event loop.
I'm curious whether I can write my own event loop without importing asyncio at all.
I want to be able to execute several coroutines with my own custom-made event loop.
The asyncio event loop is well-tested and can be easily extended to acknowledge non-asyncio events. If you describe the actual use case, it might be easier to help. But if your goal is to learn about async programming and coroutines, read on.
I'm curious whether I can write my own event loop without importing asyncio at all.
It's definitely possible - asyncio itself is just a library, after all - but it will take some work for your event loop to be useful. See this excellent talk by David Beazley where he demonstrates writing an event loop in front of a live audience. (Don't be put off by David using the older yield from syntax - await works exactly the same way.)
OK, so I found an example somewhere (sorry, I don't remember where, no link) and changed it a little bit.
An event loop and coroutines without even importing asyncio:
import datetime
import heapq
import types
import time

class Task:
    def __init__(self, wait_until, coro):
        self.coro = coro
        self.waiting_until = wait_until

    def __eq__(self, other):
        return self.waiting_until == other.waiting_until

    def __lt__(self, other):
        return self.waiting_until < other.waiting_until

class SleepingLoop:
    def __init__(self, *coros):
        self._new = coros
        self._waiting = []

    def run_until_complete(self):
        # Start all the coroutines.
        for coro in self._new:
            wait_for = coro.send(None)
            heapq.heappush(self._waiting, Task(wait_for, coro))
        # Keep running until there is no more work to do.
        while self._waiting:
            now = datetime.datetime.now()
            # Get the coroutine with the soonest resumption time.
            task = heapq.heappop(self._waiting)
            if now < task.waiting_until:
                # We're ahead of schedule; wait until it's time to resume.
                delta = task.waiting_until - now
                time.sleep(delta.total_seconds())
                now = datetime.datetime.now()
            try:
                # It's time to resume the coroutine.
                wait_until = task.coro.send(now)
                heapq.heappush(self._waiting, Task(wait_until, task.coro))
            except StopIteration:
                # The coroutine is done.
                pass

@types.coroutine
def async_sleep(seconds):
    now = datetime.datetime.now()
    wait_until = now + datetime.timedelta(seconds=seconds)
    actual = yield wait_until
    return actual - now

async def countdown(label, total_seconds_wait, *, delay=0):
    print(label, 'waiting', delay, 'seconds before starting countdown')
    delta = await async_sleep(delay)
    print(label, 'starting after waiting', delta)
    while total_seconds_wait:
        print(label, 'T-minus', total_seconds_wait)
        waited = await async_sleep(1)
        total_seconds_wait -= 1
    print(label, 'lift-off!')

def main():
    loop = SleepingLoop(countdown('A', 5, delay=0),
                        countdown('B', 3, delay=2),
                        countdown('C', 4, delay=1))
    start = datetime.datetime.now()
    loop.run_until_complete()
    print('Total elapsed time is', datetime.datetime.now() - start)

if __name__ == '__main__':
    main()
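The trick that makes this work without asyncio is the @types.coroutine decorator: it turns a plain generator into something an async def function can await, so the value yielded by async_sleep travels out to SleepingLoop via coro.send(), and whatever the loop sends back becomes the result of the await. A stripped-down illustration of just that handshake:

import types

@types.coroutine
def suspend(value):
    # Yield `value` out to whoever is driving the coroutine,
    # then return whatever that driver sends back in.
    return (yield value)

async def demo():
    reply = await suspend(42)
    return reply

coro = demo()
print(coro.send(None))       # 42: the value handed up to the driver
try:
    coro.send("resumed")     # resume the coroutine with a reply
except StopIteration as stop:
    print(stop.value)        # "resumed": demo()'s return value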
I'm trying to run an asynchronous external command in Python 3 from a Qt application. Before, I was using a multiprocessing thread to do it without freezing the Qt application. But now I would like to do it with a QThread, to be able to pickle and pass a Qt window as an argument to some other functions (not presented here). I did this and tested it successfully on Windows, but when I tried the application on Linux I got the following error: RuntimeError: Cannot add child handler, the child watcher does not have a loop attached
From that point I tried to isolate the problem, and I obtained the (as minimal as I could make it) example below that replicates the problem.
Of course, as I mentioned before, if I replace QThreadPool with a list of multiprocessing threads, this example works well. I also noticed something that astonished me: if I uncomment the line rc = subp([sys.executable,"./HelloWorld.py"]) in the last part of the example, it also works. I can't explain why.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

## IMPORTS ##
from functools import partial
from PyQt5 import QtCore
from PyQt5.QtCore import QThreadPool, QRunnable, QCoreApplication
import sys
import asyncio.subprocess

# Global variables
Qpool = QtCore.QThreadPool()

def subp(cmd_list):
    """ """
    if sys.platform.startswith('linux'):
        new_loop = asyncio.new_event_loop()
        asyncio.set_event_loop(new_loop)
    elif sys.platform.startswith('win'):
        new_loop = asyncio.ProactorEventLoop()  # for subprocess' pipes on Windows
        asyncio.set_event_loop(new_loop)
    else:
        print('[ERROR] OS not available for encoding... EXIT')
        sys.exit(2)
    rc, stdout, stderr = new_loop.run_until_complete(get_subp(cmd_list))
    new_loop.close()
    if rc != 0:
        print('Exit not zero ({}): {}'.format(rc, sys.exc_info()[0]))  # , exc_info=True)
    return rc, stdout, stderr

async def get_subp(cmd_list):
    """ """
    print('subp: ' + ' '.join(cmd_list))
    # Create the subprocess, redirect the standard output into a pipe
    create = asyncio.create_subprocess_exec(*cmd_list, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
    proc = await create
    # read child's stdout/stderr concurrently (capture and display)
    try:
        stdout, stderr = await asyncio.gather(
            read_stream_and_display(proc.stdout),
            read_stream_and_display(proc.stderr))
    except Exception:
        proc.kill()
        raise
    finally:
        rc = await proc.wait()
        print(" [Exit {}] ".format(rc) + ' '.join(cmd_list))
    return rc, stdout, stderr

async def read_stream_and_display(stream):
    """ """
    async for line in stream:
        print(line, flush=True)

class Qrun_from_job(QtCore.QRunnable):
    def __init__(self, job, arg):
        super(Qrun_from_job, self).__init__()
        self.job = job
        self.arg = arg

    def run(self):
        code = partial(self.job)
        code()

def ThdSomething(job, arg):
    testRunnable = Qrun_from_job(job, arg)
    Qpool.start(testRunnable)

def testThatThing():
    rc = subp([sys.executable, "./HelloWorld.py"])

if __name__ == '__main__':
    app = QCoreApplication([])
    # rc = subp([sys.executable,"./HelloWorld.py"])
    ThdSomething(testThatThing, 'tests')
    sys.exit(app.exec_())
with the HelloWorld.py file:
#!/usr/bin/env python3
import sys

if __name__ == '__main__':
    print('HelloWorld')
    sys.exit(0)
Therefore I have two questions: how can I make this example work properly with QThread? And why does a previous call to an asynchronous task (a call to the subp function) change the stability of the example on Linux?
EDIT
Following the advice of @user4815162342, I tried run_coroutine_threadsafe with the code below. But it is not working and returns the same error, i.e. RuntimeError: Cannot add child handler, the child watcher does not have a loop attached. I also tried replacing the threading call with its equivalent from the multiprocessing module; with that one, the subp command is never launched.
The code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

## IMPORTS ##
import sys
import asyncio.subprocess
import threading
import multiprocessing

# at top-level
loop = asyncio.new_event_loop()

def spin_loop():
    asyncio.set_event_loop(loop)
    loop.run_forever()

def subp(cmd_list):
    # submit the task to asyncio
    fut = asyncio.run_coroutine_threadsafe(get_subp(cmd_list), loop)
    # wait for the task to finish
    rc, stdout, stderr = fut.result()
    return rc, stdout, stderr

async def get_subp(cmd_list):
    """ """
    print('subp: ' + ' '.join(cmd_list))
    # Create the subprocess, redirect the standard output into a pipe
    proc = await asyncio.create_subprocess_exec(*cmd_list, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
    # read child's stdout/stderr concurrently (capture and display)
    try:
        stdout, stderr = await asyncio.gather(
            read_stream_and_display(proc.stdout),
            read_stream_and_display(proc.stderr))
    except Exception:
        proc.kill()
        raise
    finally:
        rc = await proc.wait()
        print(" [Exit {}] ".format(rc) + ' '.join(cmd_list))
    return rc, stdout, stderr

async def read_stream_and_display(stream):
    """ """
    async for line in stream:
        print(line, flush=True)

if __name__ == '__main__':
    threading.Thread(target=spin_loop, daemon=True).start()
    # multiprocessing.Process(target=spin_loop, daemon=True).start()
    print('thread passed')
    rc = subp([sys.executable, "./HelloWorld.py"])
    print('end')
    sys.exit(0)
As a general design principle, it's unnecessary and wasteful to create new event loops only to run a single coroutine. Instead, create one event loop, run it in a separate thread, and use it for all your asyncio needs by submitting tasks to it with asyncio.run_coroutine_threadsafe.
For example:
# at top-level
loop = asyncio.new_event_loop()

def spin_loop():
    asyncio.set_event_loop(loop)
    loop.run_forever()

asyncio.get_child_watcher().attach_loop(loop)

threading.Thread(target=spin_loop, daemon=True).start()

# ... the rest of your code ...
With this in place, you can easily execute any asyncio code from any thread whatsoever using the following:
def subp(cmd_list):
    # submit the task to asyncio
    fut = asyncio.run_coroutine_threadsafe(get_subp(cmd_list), loop)
    # wait for the task to finish
    rc, stdout, stderr = fut.result()
    return rc, stdout, stderr
Note that you can use add_done_callback to be notified when the future returned by asyncio.run_coroutine_threadsafe finishes, so you might not need a thread in the first place.
Note that all interaction with the event loop should go either through the aforementioned run_coroutine_threadsafe (when submitting coroutines) or through loop.call_soon_threadsafe when you need the event loop to call an ordinary function. For example, to stop the event loop, you would invoke loop.call_soon_threadsafe(loop.stop).
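For instance, a sketch of the callback variant (the future returned by run_coroutine_threadsafe is a concurrent.futures.Future, so it has add_done_callback):

def on_done(fut):
    rc, stdout, stderr = fut.result()
    print("subprocess finished with exit code", rc)

# Submit without blocking the calling thread; on_done runs when the coroutine finishes.
fut = asyncio.run_coroutine_threadsafe(get_subp([sys.executable, "./HelloWorld.py"]), loop)
fut.add_done_callback(on_done)

# And to shut the loop down later, from any thread:
# loop.call_soon_threadsafe(loop.stop)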
I suspect that what you are doing is simply unsupported - according to the documentation:
To handle signals and to execute subprocesses, the event loop must be run in the main thread.
As you are trying to execute a subprocess, I do not think running a new event loop in another thread works.
Thing is, Qt already has an event loop, and what you really need is to convince asyncio to use it. That means that you need an event loop implementation that provides the "event loop interface for asyncio" implemented on top of "Qt's event loop".
I believe that asyncqt provides such an implementation. You may want to try to use QEventLoop(app) in place of asyncio.new_event_loop().
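A rough, untested sketch of what that could look like with asyncqt (package and class names as I remember them; check the asyncqt documentation):

import sys
import asyncio
from PyQt5.QtWidgets import QApplication
from asyncqt import QEventLoop   # pip install asyncqt

app = QApplication(sys.argv)
loop = QEventLoop(app)           # an asyncio event loop driven by Qt's event loop
asyncio.set_event_loop(loop)

# Now asyncio coroutines (including subprocesses) can be scheduled
# on the same loop that runs the Qt GUI, e.g.:
# asyncio.ensure_future(get_subp([sys.executable, "./HelloWorld.py"]))

with loop:
    loop.run_forever()           # takes the place of app.exec_()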
I am following the principles laid down in this post to safely output the results, which will eventually be written to a file. Unfortunately, the code only prints 1 and 2, not 3 to 6.
import os
import argparse
import pandas as pd
import multiprocessing
from multiprocessing import Process, Queue
from time import sleep

def feed(queue, parlist):
    for par in parlist:
        queue.put(par)
    print("Queue size", queue.qsize())

def calc(queueIn, queueOut):
    while True:
        try:
            par = queueIn.get(block=False)
            res = doCalculation(par)
            queueOut.put((res))
            queueIn.task_done()
        except:
            break

def doCalculation(par):
    return par

def write(queue):
    while True:
        try:
            par = queue.get(block=False)
            print("response:", par)
        except:
            break

if __name__ == "__main__":
    nthreads = 2
    workerQueue = Queue()
    writerQueue = Queue()
    considerperiod = [1, 2, 3, 4, 5, 6]
    feedProc = Process(target=feed, args=(workerQueue, considerperiod))
    calcProc = [Process(target=calc, args=(workerQueue, writerQueue)) for i in range(nthreads)]
    writProc = Process(target=write, args=(writerQueue,))
    feedProc.start()
    feedProc.join()
    for p in calcProc:
        p.start()
    for p in calcProc:
        p.join()
    writProc.start()
    writProc.join()
On running the code it prints,
$ python3 tst.py
Queue size 6
response: 1
response: 2
Also, is it possible to ensure that the write function always outputs 1,2,3,4,5,6 i.e. in the same order in which the data is fed into the feed queue?
The problem is the task_done() call: a plain multiprocessing.Queue has no task_done() method (only multiprocessing.JoinableQueue does), so the call raises AttributeError, the bare except catches it, and each worker breaks out of its loop after handling a single item, which is why only 1 and 2 are printed. If you remove that call it works, but then the loop only ends because queueIn.get(block=False) eventually throws an exception when the queue is empty. That might be just enough for your use case; a better way, though, is to use sentinels (as suggested in the multiprocessing docs, see the last example there). Here's a little rewrite so your program uses sentinels:
import os
import argparse
import multiprocessing
from multiprocessing import Process, Queue
from time import sleep

def feed(queue, parlist, nthreads):
    for par in parlist:
        queue.put(par)
    for i in range(nthreads):
        queue.put(None)
    print("Queue size", queue.qsize())

def calc(queueIn, queueOut):
    while True:
        par = queueIn.get()
        if par is None:
            break
        res = doCalculation(par)
        queueOut.put((res))

def doCalculation(par):
    return par

def write(queue):
    while not queue.empty():
        par = queue.get()
        print("response:", par)

if __name__ == "__main__":
    nthreads = 2
    workerQueue = Queue()
    writerQueue = Queue()
    considerperiod = [1, 2, 3, 4, 5, 6]
    feedProc = Process(target=feed, args=(workerQueue, considerperiod, nthreads))
    calcProc = [Process(target=calc, args=(workerQueue, writerQueue)) for i in range(nthreads)]
    writProc = Process(target=write, args=(writerQueue,))
    feedProc.start()
    feedProc.join()
    for p in calcProc:
        p.start()
    for p in calcProc:
        p.join()
    writProc.start()
    writProc.join()
A few things to note:
the sentinel is a None put into the queue. Note that you need one sentinel for every worker process.
for the write function you don't need the sentinel handling, as there is only one consumer and no concurrency to worry about. (If you used the empty()-then-get() approach in your calc function you could run into a problem: with only one item left in the queue, both workers might see empty() as False at the same time, both call get(), and one of them would block forever.)
you don't need to put feed and write into processes; just call them from your main function, as you don't want to run them in parallel anyway.
how can I have the same order in output as in input? [...] I guess multiprocessing.map can do this
Yes, map keeps the order. Here is your program rewritten into something simpler (you don't need the workerQueue and writerQueue), with random sleeps added to prove that the output is still in order:
from multiprocessing import Pool
import time
import random

def calc(val):
    time.sleep(random.random())
    return val

if __name__ == "__main__":
    considerperiod = [1, 2, 3, 4, 5, 6]
    with Pool(processes=2) as pool:
        print(pool.map(calc, considerperiod))
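If you want to start writing results to the file as they complete, while still preserving the input order, Pool.imap (rather than map) is one option; a small sketch along the same lines:

from multiprocessing import Pool
import time
import random

def calc(val):
    time.sleep(random.random())
    return val

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        # imap yields results lazily, but still in the order of the inputs.
        for res in pool.imap(calc, [1, 2, 3, 4, 5, 6]):
            print("response:", res)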