Using asyncio to wait for results from subprocess - python-3.x

My Python script contains a loop that uses subprocess to run commands outside the script. Each subprocess is independent. I listen for the returned message in case there's an error; I can't ignore the result of the subprocess. Here's the script without asyncio (I've replaced my computationally expensive call with sleep):
from subprocess import PIPE  # https://docs.python.org/3/library/subprocess.html
import subprocess

def go_do_something(index: int) -> None:
    """
    This function takes a long time
    Nothing is returned
    Each instance is independent
    """
    process = subprocess.run(["sleep", "2"], stdout=PIPE, stderr=PIPE, timeout=20)
    stdout = process.stdout.decode("utf-8")
    stderr = process.stderr.decode("utf-8")
    if "error" in stderr:
        print("error for " + str(index))
    return

def my_long_func(val: int) -> None:
    """
    This function contains a loop
    Each iteration of the loop calls a function
    Nothing is returned
    """
    for index in range(val):
        print("index = " + str(index))
        go_do_something(index)

# run the script
my_long_func(3)  # launch three tasks
I think I could use asyncio to speed this up, since the Python script is just waiting on the external subprocesses to complete. I don't think threading or multiprocessing is necessary, though either could also result in faster execution. Using a task queue (e.g., Celery) is another option.
I tried implementing the asyncio approach, but am missing something since the following attempt doesn't change the overall execution time:
import asyncio
from subprocess import PIPE  # https://docs.python.org/3/library/subprocess.html
import subprocess

async def go_do_something(index: int) -> None:
    """
    This function takes a long time
    Nothing is returned
    Each instance is independent
    """
    process = subprocess.run(["sleep", "2"], stdout=PIPE, stderr=PIPE, timeout=20)
    stdout = process.stdout.decode("utf-8")
    stderr = process.stderr.decode("utf-8")
    if "error" in stderr:
        print("error for " + str(index))
    return

def my_long_func(val: int) -> None:
    """
    This function contains a loop
    Each iteration of the loop calls a function
    Nothing is returned
    """
    # https://docs.python.org/3/library/asyncio-eventloop.html
    loop = asyncio.get_event_loop()
    tasks = []
    for index in range(val):
        task = go_do_something(index)
        tasks.append(task)
    # https://docs.python.org/3/library/asyncio-task.html
    tasks = asyncio.gather(*tasks)
    loop.run_until_complete(tasks)
    loop.close()
    return

my_long_func(3)  # launch three tasks
If I want to monitor the output of each subprocess but not wait while each subprocess runs, can I benefit from asyncio? Or does this situation require something like multiprocessing or Celery?

Try executing the commands using asyncio instead of subprocess.
Define a run() function:
import asyncio

async def run(cmd: str):
    proc = await asyncio.create_subprocess_shell(
        cmd,
        stderr=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE
    )
    stdout, stderr = await proc.communicate()
    print(f'[{cmd!r} exited with {proc.returncode}]')
    if stdout:
        print(f'[stdout]\n{stdout.decode()}')
    if stderr:
        print(f'[stderr]\n{stderr.decode()}')
Then you may call it as you would call any async function:
asyncio.run(run('sleep 2'))
#=>
['sleep 2' exited with 0]
The example was adapted from the official asyncio subprocess documentation.
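To run several of these commands concurrently, as the question requires, the coroutines can be gathered. A minimal sketch reusing the run() helper above:

async def main():
    # the three 2-second sleeps overlap, so this takes about 2 seconds, not 6
    await asyncio.gather(*(run('sleep 2') for _ in range(3)))

asyncio.run(main())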

@ronginat pointed me to https://asyncio.readthedocs.io/en/latest/subprocess.html, which I was able to adapt to my situation:
import asyncio

async def run_command(*args):
    # Create the subprocess
    process = await asyncio.create_subprocess_exec(
        *args,
        # stdout must be a pipe to be accessible as process.stdout
        stdout=asyncio.subprocess.PIPE)
    # Wait for the subprocess to finish
    stdout, stderr = await process.communicate()
    # Return stdout
    return stdout.decode().strip()

async def go_do_something(index: int) -> str:
    print('index=', index)
    res = await run_command('sleep', '2')
    return res

def my_long_func(val: int) -> None:
    task_list = []
    for indx in range(val):
        task_list.append(go_do_something(indx))
    loop = asyncio.get_event_loop()
    commands = asyncio.gather(*task_list)
    reslt = loop.run_until_complete(commands)
    print(reslt)
    loop.close()

my_long_func(3)  # launch three tasks
The total time of execution is just over 2 seconds even though there are three sleeps of duration 2 seconds. And I get the stdout from each subprocess.
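Note that on Python 3.7+ the explicit event-loop handling (get_event_loop / run_until_complete / close) can be replaced by asyncio.run(). A minimal sketch of the same structure, reusing go_do_something from above:

async def my_long_func(val: int) -> None:
    # schedule all the coroutines and wait for them together
    results = await asyncio.gather(*(go_do_something(i) for i in range(val)))
    print(results)

asyncio.run(my_long_func(3))  # still finishes in about 2 seconds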

Related

Call the same subprocess python function several times

I need to process-parallelize some computations that are done several times.
So the subprocess Python function has to stay alive between two calls.
In a perfect world I would need something like this:
class Computer:
    def __init__(self, x):
        self.x = x
        # Creation of quite heavy python objects that cannot be pickled !!

    def call(self, y):
        return self.x + y

process = Computer(4)  ## NEED MAGIC HERE to keep "call" alive in a subprocess !!
print(process.call(1))   # prints 5 (=4+1)
print(process.call(12))  # prints 16 (=4+12)
I can follow this answer and communicate via asyncio.subprocess.PIPE, but in my actual use case:
- the call argument is a list of lists of integers
- the call answer is a list of strings
So it would be nice to avoid serializing/deserializing the arguments and return values by hand.
Any ideas on how to keep the function call "alive" and ready to receive new calls?
Here is an answer, based on this one, but:
- several subprocesses are created
- each subprocess has its own identifier
- their calls are parallelized
- a small layer allows the exchange of JSON instead of plain byte strings.
hello.py
#!/usr/bin/python3
# This is the task to be done.
# A task consists of receiving a json assumed to be
# {"vector": [...]}
# and returning a json with the length of the vector and
# the worker id.
import sys
import time
import json

ident = sys.argv[1]
while True:
    str_data = input()
    data = json.loads(str_data)
    command = data.get("command", None)
    if command == "quit":
        answer = {"comment": "I'm leaving",
                  "my id": ident}
        print(json.dumps(answer), end="\n")
        sys.exit(1)
    time.sleep(1)  # simulates 1s of heavy work
    answer = {"size": len(data['vector']),
              "my id": ident}
    print(json.dumps(answer), end="\n")
main.py
#!/usr/bin/python3
import json
from subprocess import Popen, PIPE
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor

dprint = print

def create_proc(arg):
    cmd = ["./hello.py", arg]
    process = Popen(cmd, stdin=PIPE, stdout=PIPE)
    return process

def make_call(proc, arg):
    """Make the call in a thread."""
    str_arg = json.dumps(arg)
    txt = bytes(str_arg + '\n', encoding='utf8')
    proc.stdin.write(txt)
    proc.stdin.flush()
    b_ans = proc.stdout.readline()
    s_ans = b_ans.decode('utf8')
    j_ans = json.loads(s_ans)
    return j_ans

def search(executor, procs, data):
    jobs = [executor.submit(make_call, proc, data) for proc in procs]
    answer = []
    for job in concurrent.futures.as_completed(jobs):
        got_ans = job.result()
        answer.append(got_ans)
    return answer

def main():
    n_workers = 50
    idents = [f"{i}st" for i in range(0, n_workers)]
    executor = ThreadPoolExecutor(n_workers)
    # Create `n_workers` subprocesses waiting for data to work with.
    # The subprocesses are all different because they receive different
    # "initialization" ids.
    procs = [create_proc(ident) for ident in idents]
    data = {"vector": [1, 2, 23]}
    answers = search(executor, procs, data)  # takes about 1s instead of 50s (one per worker)
    for answer in answers:
        print(answer)
    search(executor, procs, {"command": "quit"})

main()
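An alternative that avoids the hand-rolled JSON framing entirely is concurrent.futures.ProcessPoolExecutor (Python 3.7+) with an initializer: the heavy, unpicklable object is built once per worker process, and only the plain arguments and return values (lists, strings) are pickled automatically. A sketch reusing the Computer class from the question; the helper names here are illustrative:

from concurrent.futures import ProcessPoolExecutor

_worker_obj = None  # one instance per worker process

def _init_worker(x):
    # Runs once in each worker process: build the heavy object here.
    global _worker_obj
    _worker_obj = Computer(x)

def _call(y):
    # Only y and the return value cross the process boundary, never the Computer itself.
    return _worker_obj.call(y)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4, initializer=_init_worker, initargs=(4,)) as pool:
        print(list(pool.map(_call, [1, 12])))  # [5, 16]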

Asyncio: a big list of Tasks, each sequentially combining run_in_executor and a standard coroutine

I need to handle a list of 2500 IP addresses from a CSV file, so I need to create_task from a coroutine 2500 times. Inside every coroutine, I first need a fast check of whether IP:PORT is reachable via the Python "socket" module; that is a synchronous function, so it wants to be in loop.run_in_executor(). Second, if IP:PORT is open, I need to connect to the socket via asyncssh.connect() to run some bash commands, and that is a standard asyncio coroutine. Then I need to collect the results of these bash commands into another CSV file.
Additionally, there is an issue on Linux: the system cannot open more than 1024 connections at the same time. I think it may be solved by splitting the work into lists of 1000 with an asyncio.sleep(1) between them, or something like that.
I expected my tasks to be executed 1000 at a time per second, but only about 20 run per second. Why?
A small working code snippet with comments:
#!/usr/bin/env python3
import asyncio
import csv
import time
from pathlib import Path
import asyncssh
import socket
from concurrent.futures import ThreadPoolExecutor as Executor

PARALLEL_SESSIONS_COUNT = 1000
LEASES_ALL = Path("ip_list.csv")
PORT = 22
TIMEOUT = 1
USER = "testuser1"
PASSWORD = "123"

def is_open(ip, port, timeout):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((ip, int(port)))
        s.shutdown(socket.SHUT_RDWR)
        return {"result": True, "error": "NoErr"}
    except Exception as ex:
        return {"result": False, "error": str(ex)}
    finally:
        s.close()

def get_leases_list():
    # Minimal csv content:
    # the header must contain "IPAddress";
    # every other line is a concrete IP address.
    result = []
    with open(LEASES_ALL, newline="") as csvfile_1:
        reader_1 = csv.DictReader(csvfile_1)
        result = list(reader_1)
    return result

def split_list(some_list, sublist_count):
    result = []
    while len(some_list) > sublist_count:
        result.append(some_list[:sublist_count])
        some_list = some_list[sublist_count:]
    result.append(some_list)
    return result

async def do_single_host(one_lease_dict):  # Function for each Task
    # Firstly
    IP = one_lease_dict["IPAddress"]
    loop = asyncio.get_event_loop()
    socket_check = await loop.run_in_executor(None, is_open, IP, PORT, TIMEOUT)
    print(socket_check, IP)
    # Secondly
    if socket_check["result"] == True:
        async with asyncssh.connect(host=IP, port=PORT, username=USER, password=PASSWORD, known_hosts=None) as conn:
            result = await conn.run("uname -r", check=True)
            print(result.stdout, end="")  # Just print, without writing to a file at this point.

def aio_root():
    leases_list = get_leases_list()
    list_of_lists = split_list(leases_list, PARALLEL_SESSIONS_COUNT)
    r = []
    loop = asyncio.get_event_loop()
    for i in list_of_lists:
        for j in i:
            task = loop.create_task(do_single_host(j))
            r.append(task)
    group = asyncio.wait(r)
    loop.run_until_complete(group)  # At this line only ~20 tasks execute per second. Can't understand why :(
    loop.close()

def main():
    aio_root()

if __name__ == '__main__':
    main()
loop.run_in_executor signature:
awaitable loop.run_in_executor(executor, func, *args)
The default ThreadPoolExecutor is used if executor is None.
ThreadPoolExecutor documentation:
Changed in version 3.5: If max_workers is None or not given, it will default to the number of processors on the machine, multiplied by 5, assuming that ThreadPoolExecutor is often used to overlap I/O instead of CPU work and the number of workers should be higher than the number of workers for ProcessPoolExecutor.
Changed in version 3.8: Default value of max_workers is changed to min(32, os.cpu_count() + 4). This default value preserves at least 5 workers for I/O bound tasks. It utilizes at most 32 CPU cores for CPU bound tasks which release the GIL. And it avoids using very large resources implicitly on many-core machines.
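So with executor=None, every is_open() call is funneled through a default thread pool with only a few dozen workers at most, which is why only about 20 checks finish per second. A small self-contained sketch of the fix, passing an explicitly sized ThreadPoolExecutor to run_in_executor (blocking_probe here is just a stand-in for the is_open() check):

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_probe(i):
    time.sleep(1)  # stand-in for the 1-second socket timeout
    return i

async def main():
    loop = asyncio.get_running_loop()
    executor = ThreadPoolExecutor(max_workers=1000)  # sized for the desired concurrency
    results = await asyncio.gather(
        *(loop.run_in_executor(executor, blocking_probe, i) for i in range(1000)))
    print(len(results), "probes finished")

asyncio.run(main())  # finishes in roughly 1 second instead of tens of seconds

An asyncio.Semaphore around the asyncssh.connect() calls is a complementary way to stay under the 1024-connection limit mentioned in the question.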

Python multiprocessing with async functions

I built a websocket server, a simplified version of it is shown below:
import websockets, subprocess, asyncio, json, re, os, sys
from multiprocessing import Process

def docker_command(command_words):
    return subprocess.Popen(
        ["docker"] + command_words,
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)

async def check_submission(websocket: object, submission: dict):
    exercise = submission["exercise"]
    with docker_command(["exec", "-w", "badkan", "grade_exercise", exercise]) as proc:
        for line in proc.stdout:
            print("> " + line)
            await websocket.send(line)

async def run(websocket, path):
    submission_json = await websocket.recv()  # returns a string
    submission = json.loads(submission_json)  # converts the string to a python dict
    ####
    await check_submission(websocket, submission)

websocketserver = websockets.server.serve(run, '0.0.0.0', 8888, origins=None)
asyncio.get_event_loop().run_until_complete(websocketserver)
asyncio.get_event_loop().run_forever()
It works fine when there is only a single user at a time. But, when several users try to use the server, the server processes them serially so later users have to wait a long time.
I tried to convert it to a multiprocessing server by replacing the line marked with "####" ("await check_submission...") with:
p = Process(target=check_submission, args=(websocket, submission,))
p.start()
But it did not work: I got "RuntimeWarning: coroutine 'check_submission' was never awaited", and I did not see any output coming through the websocket.
I also tried to replace these lines with:
loop = asyncio.get_event_loop()
loop.set_default_executor(ProcessPoolExecutor())
await loop.run_in_executor(None, check_submission, websocket, submission)
but got a different error: "can't pickle asyncio.Future objects".
How can I build this multi-processing websocket server?
Here is my example; asyncio.run() worked for me to start an async function from multiple processes:
import asyncio
import time
import traceback
from multiprocessing import Process

# Base, FlowExecutor, FlowResult and FLOW_EXCEPT are the author's own
# classes/constants and are not shown here.

class FlowConsumer(Base):
    def __init__(self):
        pass

    async def run(self):
        self.logger("start consumer process")
        while True:
            # get flow from queue
            flow = {}
            # call flow executor to get the result
            executor = FlowExecutor(flow)
            rtn = FlowResult()
            try:
                rtn = await executor.run()
            except Exception as e:
                self.logger("flow run except:{}".format(traceback.format_exc()))
                rtn.status = FLOW_EXCEPT
                rtn.msg = str(e)
            self.logger("consumer flow finish,result:{}".format(rtn.dict()))
            time.sleep(1)

    def process(self):
        asyncio.run(self.run())

processes = []
consumer_proc_count = 3
# start multiple consumer processes
for _ in range(consumer_proc_count):
    # old version
    # p = Process(target=FlowConsumer().run)
    p = Process(target=FlowConsumer().process)
    p.start()
    processes.append(p)
for p in processes:
    p.join()
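A minimal, self-contained sketch of the same pattern (the names here are illustrative, not from the code above): the Process target is a plain synchronous function, and each process starts its own event loop with asyncio.run(), so no coroutine ever needs to be pickled:

import asyncio
from multiprocessing import Process

async def work(name: str) -> None:
    await asyncio.sleep(1)  # stand-in for real async work
    print(f"{name} done")

def entry(name: str) -> None:
    # runs inside the child process and owns its own event loop
    asyncio.run(work(name))

if __name__ == "__main__":
    procs = [Process(target=entry, args=(f"consumer-{i}",)) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()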
The problem is that subprocess.Popen is not async, so check_submission blocks the event loop while waiting for the next line of docker output.
You don't need to use multiprocessing at all; since you are blocking while waiting on a subprocess, you just need to switch from subprocess to asyncio.subprocess:
async def docker_command(command_words):
    return await asyncio.subprocess.create_subprocess_exec(
        *(["docker"] + command_words),
        stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.STDOUT)

async def check_submission(websocket: object, submission: dict):
    exercise = submission["exercise"]
    proc = await docker_command(["exec", "-w", "badkan", "grade_exercise", exercise])
    async for line in proc.stdout:
        print(b"> " + line)
        await websocket.send(line)
    await proc.wait()

Why does running an asyncio subprocess in a thread seem unstable on Linux?

I am trying to run an asynchronous external command in python3 from a Qt application. Previously I was using a multiprocessing thread to do it without freezing the Qt application. But now I would like to do it with a QThread, to be able to pickle and pass a Qt window as an argument to some other functions (not presented here). I did it and tested it successfully on my Windows OS, but when I tried the application on my Linux OS I got the following error: RuntimeError: Cannot add child handler, the child watcher does not have a loop attached
From that point I tried to isolate the problem, and I obtained the minimal example below (as minimal as I could make it) that replicates the problem.
Of course, as mentioned before, if I replace QThreadPool with a list of multiprocessing threads, this example works fine. I also noticed something that astonished me: if I uncomment the line rc = subp([sys.executable,"./HelloWorld.py"]) in the last part of the example, it also works. I can't explain why.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

## IMPORTS ##
from functools import partial
from PyQt5 import QtCore
from PyQt5.QtCore import QThreadPool, QRunnable, QCoreApplication
import sys
import asyncio.subprocess

# Global variables
Qpool = QtCore.QThreadPool()

def subp(cmd_list):
    """ """
    if sys.platform.startswith('linux'):
        new_loop = asyncio.new_event_loop()
        asyncio.set_event_loop(new_loop)
    elif sys.platform.startswith('win'):
        new_loop = asyncio.ProactorEventLoop()  # for subprocess' pipes on Windows
        asyncio.set_event_loop(new_loop)
    else:
        print('[ERROR] OS not available for encoding... EXIT')
        sys.exit(2)
    rc, stdout, stderr = new_loop.run_until_complete(get_subp(cmd_list))
    new_loop.close()
    if rc != 0:
        print('Exit not zero ({}): {}'.format(rc, sys.exc_info()[0]))  # , exc_info=True)
    return rc, stdout, stderr

async def get_subp(cmd_list):
    """ """
    print('subp: ' + ' '.join(cmd_list))
    # Create the subprocess, redirect the standard output into a pipe
    create = asyncio.create_subprocess_exec(*cmd_list, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
    proc = await create
    # read the child's stdout/stderr concurrently (capture and display)
    try:
        stdout, stderr = await asyncio.gather(
            read_stream_and_display(proc.stdout),
            read_stream_and_display(proc.stderr))
    except Exception:
        proc.kill()
        raise
    finally:
        rc = await proc.wait()
        print(" [Exit {}] ".format(rc) + ' '.join(cmd_list))
    return rc, stdout, stderr

async def read_stream_and_display(stream):
    """ """
    async for line in stream:
        print(line, flush=True)

class Qrun_from_job(QtCore.QRunnable):
    def __init__(self, job, arg):
        super(Qrun_from_job, self).__init__()
        self.job = job
        self.arg = arg

    def run(self):
        code = partial(self.job)
        code()

def ThdSomething(job, arg):
    testRunnable = Qrun_from_job(job, arg)
    Qpool.start(testRunnable)

def testThatThing():
    rc = subp([sys.executable, "./HelloWorld.py"])

if __name__ == '__main__':
    app = QCoreApplication([])
    # rc = subp([sys.executable,"./HelloWorld.py"])
    ThdSomething(testThatThing, 'tests')
    sys.exit(app.exec_())
with the HelloWorld.py file:
#!/usr/bin/env python3
import sys

if __name__ == '__main__':
    print('HelloWorld')
    sys.exit(0)
Therefore I have two questions: how do I make this example work properly with QThread? And why does a previous call to an asynchronous task (via the subp function) change the stability of the example on Linux?
EDIT
Following the advice of @user4815162342, I tried run_coroutine_threadsafe with the code below. But it does not work and returns the same error, i.e. RuntimeError: Cannot add child handler, the child watcher does not have a loop attached. I also tried to replace the threading call with its equivalent from the multiprocessing module; with that one, the subp command is never launched.
The code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

## IMPORTS ##
import sys
import asyncio.subprocess
import threading
import multiprocessing

# at top-level
loop = asyncio.new_event_loop()

def spin_loop():
    asyncio.set_event_loop(loop)
    loop.run_forever()

def subp(cmd_list):
    # submit the task to asyncio
    fut = asyncio.run_coroutine_threadsafe(get_subp(cmd_list), loop)
    # wait for the task to finish
    rc, stdout, stderr = fut.result()
    return rc, stdout, stderr

async def get_subp(cmd_list):
    """ """
    print('subp: ' + ' '.join(cmd_list))
    # Create the subprocess, redirect the standard output into a pipe
    proc = await asyncio.create_subprocess_exec(*cmd_list, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
    # read the child's stdout/stderr concurrently (capture and display)
    try:
        stdout, stderr = await asyncio.gather(
            read_stream_and_display(proc.stdout),
            read_stream_and_display(proc.stderr))
    except Exception:
        proc.kill()
        raise
    finally:
        rc = await proc.wait()
        print(" [Exit {}] ".format(rc) + ' '.join(cmd_list))
    return rc, stdout, stderr

async def read_stream_and_display(stream):
    """ """
    async for line in stream:
        print(line, flush=True)

if __name__ == '__main__':
    threading.Thread(target=spin_loop, daemon=True).start()
    # multiprocessing.Process(target=spin_loop, daemon=True).start()
    print('thread passed')
    rc = subp([sys.executable, "./HelloWorld.py"])
    print('end')
    sys.exit(0)
As a general design principle, it's unnecessary and wasteful to create a new event loop only to run a single coroutine. Instead, create one event loop, run it in a separate thread, and use it for all your asyncio needs by submitting tasks to it with asyncio.run_coroutine_threadsafe.
For example:
# at top-level
loop = asyncio.new_event_loop()

def spin_loop():
    asyncio.set_event_loop(loop)
    loop.run_forever()

asyncio.get_child_watcher().attach_loop(loop)
threading.Thread(target=spin_loop, daemon=True).start()

# ... the rest of your code ...
# ... the rest of your code ...
With this in place, you can easily execute any asyncio code from any thread whatsoever using the following:
def subp(cmd_list):
    # submit the task to asyncio
    fut = asyncio.run_coroutine_threadsafe(get_subp(cmd_list), loop)
    # wait for the task to finish
    rc, stdout, stderr = fut.result()
    return rc, stdout, stderr
Note that you can use add_done_callback to be notified when the future returned by asyncio.run_coroutine_threadsafe finishes, so you might not need a thread in the first place.
Note that all interaction with the event loop should go either through the afore-mentioned run_coroutine_threadsafe (when submitting coroutines) or through loop.call_soon_threadsafe when you need the event loop to call an ordinary function. For example, to stop the event loop, you would invoke loop.call_soon_threadsafe(loop.stop).
I suspect that what you are doing is simply unsupported - according to the documentation:
To handle signals and to execute subprocesses, the event loop must be run in the main thread.
As you are trying to execute a subprocess, I do not think running a new event loop in another thread works.
Thing is, Qt already has an event loop, and what you really need is to convince asyncio to use it. That means that you need an event loop implementation that provides the "event loop interface for asyncio" implemented on top of "Qt's event loop".
I believe that asyncqt provides such an implementation. You may want to try to use QEventLoop(app) in place of asyncio.new_event_loop().
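For reference, a sketch of how that might look, assuming asyncqt's documented QEventLoop API (untested against the Qt example above):

import sys
import asyncio
from PyQt5.QtCore import QCoreApplication
from asyncqt import QEventLoop  # third-party package providing an asyncio loop on top of Qt

app = QCoreApplication(sys.argv)
loop = QEventLoop(app)   # asyncio event loop driven by Qt's event loop
asyncio.set_event_loop(loop)

async def hello():
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print('HelloWorld')",
        stdout=asyncio.subprocess.PIPE)
    out, _ = await proc.communicate()
    print(out.decode().strip())

with loop:
    loop.run_until_complete(hello())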

Capturing the exit status of a Tornado Subprocess

Scenario: First, I need to update the status in the DB to 'pending' and at the same time return the status to the user. Then a subprocess runs in the background; it takes 30 seconds, as I have put time.sleep(30) in dummy.py. After that, I have to update the status in the DB to 'completed'. I am trying to make non-blocking functions using Tornado.
My question: I detect whether the Subprocess has finished by using yield. If the yielded result is 0, I assume the Subprocess has completed. I know something is not right with my logic. What is the correct way to detect that a (Tornado) Subprocess has finished?
My current code is:
class MainHandler(tornado.web.RequestHandler):
    @coroutine
    def get(self, id):
        print("TORNADO ALERT")
        self.write("Pending")
        # If ID in DB, UPDATE DB
        # Update Status to Pending
        self.flush()
        res = yield self._work()
        self.write(res)

    @coroutine
    def _work(self):
        p = Subprocess(['python', 'dummy.py'])
        f = Future()
        p.set_exit_callback(f.set_result)
        h = yield f
        print(">>> ", h)
        if h == 0:
            print("DB Updated")
            # Update Status to Completed
        raise Return(" Completed ")
My imports are as follows:
from tornado.concurrent import Future
from tornado.process import Subprocess
Use Subprocess.wait_for_exit() (which returns a Future) instead of Subprocess.set_exit_callback(). This can then be used in a coroutine with
async def f():
    p = Subprocess(cmd)
    await p.wait_for_exit()
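For completeness, a sketch of how the handler from the question might look with this approach on Tornado 5+ (raise_error=False makes wait_for_exit() resolve with the exit code instead of raising on a nonzero status; the DB calls remain placeholders):

import tornado.web
from tornado.process import Subprocess

class MainHandler(tornado.web.RequestHandler):
    async def get(self, id):
        self.write("Pending")
        # Update status in DB to 'pending' here
        await self.flush()
        res = await self._work()
        self.write(res)

    async def _work(self):
        p = Subprocess(['python', 'dummy.py'])
        rc = await p.wait_for_exit(raise_error=False)
        if rc == 0:
            # Update status in DB to 'completed' here
            return " Completed "
        return " Failed "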
