Writing to subprocess.Popen.stdin not Executing Correctly - python-3.x

Why do the "subprocess.Popen.stdin.write" commands seem to fail?
#!/usr/bin/env python3
# coding=utf-8
import os
import subprocess
bash = subprocess.Popen(['bash'],
                        stdout=subprocess.PIPE, stdin=subprocess.PIPE,
                        stderr=subprocess.PIPE, shell=True)  # shell=True: see edit 2
bash.stdin.write(b'echo foo\n')
print(bash.stdout.readline())
bash.stdin.write(b'echo bar\n')
print(bash.stdout.readline())
Edit: It blocks at the first subprocess.Popen.stdout.readline(), probably because there are no lines in subprocess.Popen.stdout. It is supposed to print out:
foo
bar
Edit 2: This still hangs.

This answer to my question works as long as you run GNU/Linux (maybe other platforms too; I haven't tested) and install pexpect (pip install pexpect).
#!/usr/bin/env python3
# coding=utf-8
"""
Python command line software control example
"""
import pexpect
encode = 'UTF-8'
bash = pexpect.spawn('/usr/bin/env bash', encoding=encode)
bash.sendline('echo foo')
print(bash.readline())
bash.sendline('echo bar')
print(bash.readline())
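
For completeness, one way to stay with plain subprocess pipes is to flush stdin after each write. This is a minimal sketch of my own (assuming the hang in the original code comes from Python's buffered pipe writes), not part of the answer above:
#!/usr/bin/env python3
import subprocess

# shell=True is dropped here: the program we want to run really is bash itself
bash = subprocess.Popen(['bash'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
bash.stdin.write(b'echo foo\n')
bash.stdin.flush()  # push the buffered bytes through the pipe so bash sees them
print(bash.stdout.readline())
bash.stdin.write(b'echo bar\n')
bash.stdin.flush()
print(bash.stdout.readline())
bash.stdin.close()
bash.wait()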

Related

Calling a program with command line parameters with Pytest

This is my first time using Pytest. I have a program that is called with command line parameters, as in:
$ myprog -i value_a -o value_b
I am not sure how to use Pytest to test the output of this program. Given values of value_a and value_b, I expect a certain output that I want to test.
The Pytest examples that I see all refer to testing functions, for instance if there is a function such as:
import pytest

def add_nums(x, y):
    return x + y

def test_add_nums():
    ret = add_nums(2, 2)
    assert ret == 4
But I am not sure how to call my program using Pytest rather than just testing individual functions. Do I need to use os.system() and call my program that way? In my program I am using the argparse module.
The solution is based on the monkeypatch fixture. In the example below, myprog reads a number from the file myprog_input.txt, adds 2 to it, and stores the result in myprog_output.txt.
Program under test
cat myprog.py
#!/usr/bin/python3.9
import argparse
import hashlib

def main():
    parser = argparse.ArgumentParser(description='myprog')
    parser.add_argument('-i')
    parser.add_argument('-o')
    args = parser.parse_args()
    with open(args.i) as f:  # 'with' closes the file; no explicit close needed
        input_data = int(f.read())
    output_data = input_data + 2
    with open(args.o, "w") as fo:
        fo.write(str(output_data) + '\n')
    with open(args.o) as fot:
        data = fot.read().encode()  # read entire file back as bytes
    readable_hash = hashlib.sha256(data).hexdigest()
    return readable_hash

if __name__ == '__main__':
    print(main())
Test
cat test_myprog.py
#!/usr/bin/python3.9
import sys
import myprog

def test_myprog(monkeypatch):
    with monkeypatch.context() as m:
        m.setattr(sys, 'argv', ['myprog', '-i', 'myprog_input.txt', '-o', 'myprog_output.txt'])
        assert myprog.main() == 'f0b5c2c2211c8d67ed15e75e656c7862d086e9245420892a7de62cd9ec582a06'
Input file
cat myprog_input.txt
3
Running the program
myprog.py -i myprog_input.txt -o myprog_output.txt
f0b5c2c2211c8d67ed15e75e656c7862d086e9245420892a7de62cd9ec582a06
Testing the program
pytest test_myprog.py
============================================= test session starts =============================================
platform linux -- Python 3.9.5, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/<username>/py
plugins: hypothesis-6.23.1
collected 1 item
test_myprog.py . [100%]
============================================== 1 passed in 0.04s ==============================================
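
If you would rather exercise the script end to end as a real child process instead of importing it, a subprocess-based test is another option. This is a sketch under the assumption that myprog.py sits in the working directory (the test name and temp file names are my own illustration, not part of the answer above):
import subprocess

def test_myprog_cli(tmp_path):
    # run the script as a child process, using pytest's tmp_path fixture
    infile = tmp_path / "in.txt"
    outfile = tmp_path / "out.txt"
    infile.write_text("3")
    subprocess.run(
        ["python3", "myprog.py", "-i", str(infile), "-o", str(outfile)],
        capture_output=True, text=True, check=True)
    # myprog adds 2 to the input, so the output file should contain 5
    assert outfile.read_text() == "5\n"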

Capturing runpy.run_module stdout

I'm unable to capture the stdout of runpy.run_module into a variable using StringIO.
To demonstrate the problem, I created a script called runpy_test.py (code below) that takes an arg switch:
0 = do not redirect stdout.
1 = redirect using StringIO, capture into a variable, print the variable.
Console Output
(base) PS C:\Users\justi\Documents> python .\runpy_test.py 0
pip 20.0.2 from C:\ProgramData\Anaconda3\lib\site-packages\pip (python 3.6)
(base) PS C:\Users\justi\Documents> python .\runpy_test.py 1
(base) PS C:\Users\justi\Documents>
I was expecting python .\runpy_test.py 1 to print pip 20.0.2 from C:\ProgramData\Anaconda3\lib\site-packages\pip (python 3.6), but as you can see from the console capture above, I'm getting nothing.
runpy_test.py
import io
import sys
import runpy
import copy

capture_stdout = bool(sys.argv[1] == "1")
if capture_stdout:
    _stdout = sys.stdout
    sys.stdout = io.StringIO()
_argv = copy.deepcopy(sys.argv)
sys.argv = ['', '-V']
runpy.run_module("pip", run_name="__main__")
sys.argv = _argv
if capture_stdout:
    result = sys.stdout.getvalue()
    sys.stdout = _stdout
    print(f"result: {result}")
I'm guessing sys.stdout is not being correctly re-initialised before I print, because of something related to runpy.run_module, but I'm not really sure how to debug this. Any ideas would be great; solutions even better.
My environment is Python 3.6.10 using conda 4.8.3.
Thanks in advance.
Using subprocess.check_output instead of runpy.run_module solved my problem.
See Installing python module within code
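A minimal sketch of that replacement (my own illustration of the idea, assuming pip is available to the same interpreter):
import subprocess
import sys

# run 'pip -V' in a child interpreter and capture whatever it prints
output = subprocess.check_output([sys.executable, "-m", "pip", "-V"],
                                 universal_newlines=True)
print(f"result: {output}")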
You can use capsys in the pytest framework:
import runpy

def test_main(capsys):
    runpy.run_module(
        "helloworld",
        init_globals=None,
        run_name="__main__",
        alter_sys=False)
    captured = capsys.readouterr()
    assert captured.out == "Hello, World!"  # note: if helloworld uses print(), the output will end with '\n'

Python subprocess for docker-compose

I have an interesting set of requirements that I am trying to meet using the Python subprocess module and docker-compose. The whole setup is possible in a single docker-compose file, but due to a requirement this is what I would like to set up:
1. call docker-compose using a Python subprocess to activate the test servers
2. print all the stdout of the docker-compose run
3. as soon as the test server is up and running via docker-compose, call the testing scripts for that server
This is what my docker-compose.py looks like:
import subprocess
from subprocess import PIPE
import os
from datetime import datetime

class MyLog:
    def my_log(self, message):
        date_now = datetime.today().strftime('%d-%m-%Y %H:%M:%S')
        print("{0} || {1}".format(date_now, message))

class DockercomposeRun:
    log = MyLog()

    def __init__(self):
        dir_name, _ = os.path.split(os.path.abspath(__file__))
        self.dirname = dir_name

    def run_docker_compose(self, filename):
        command_name = ["docker-compose", "-f", self.dirname + filename, "up"]
        popen = subprocess.Popen(command_name, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True)
        return popen
Now in my test.py, as soon as stdout goes blank I would like to break out of the printing loop and run the rest of the tests in test.py:
docker_compose_run = DockercomposeRun()
rc = docker_compose_run.run_docker_compose('/docker-compose.yml.sas-viya-1')
for line in iter(rc.stdout.readline, ''):
    print(line, end='')
    if line == '':
        break
rc.stdout.close()  # the original had popen.stdout.close(), but popen is not defined here
# start here actual test cases
.......
But for me the loop is never broken, even though the stdout of docker-compose goes blank after the server is up and running, and so the test cases are never executed.
Is this the right approach, and how can I achieve this?
I think the issue here is that you are not running docker-compose in detached mode, so it blocks the application run. Can you try adding "-d" to command_name?
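A sketch of that suggestion (my own illustration, not code from the answer): with up -d the command returns once the containers have been started instead of streaming their logs, so the call no longer blocks.
import subprocess
from subprocess import PIPE

def run_docker_compose(self, filename):
    # hypothetical detached-mode variant of the method above
    command_name = ["docker-compose", "-f", self.dirname + filename, "up", "-d"]
    # check=True raises CalledProcessError if docker-compose fails to start
    return subprocess.run(command_name, stdout=PIPE, stderr=PIPE,
                          universal_newlines=True, check=True)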

How to set shell=True for a Subprocess.run with a concurrent.future Pool Threading Executor

I am trying to use concurrent.futures multithreading in Python with subprocess.run to launch an external Python script, but I am having some trouble with the shell=True part of subprocess.run().
Here is an example of the external code, let's call it test.py:
#!/usr/bin/env python3
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-x', '--x_nb', required=True, help='the x number')
    parser.add_argument('-y', '--y_nb', required=True, help='the y number')
    args = parser.parse_args()
    print('result is {} when {} multiplied by {}'.format(int(args.x_nb) * int(args.y_nb),
                                                         args.x_nb,
                                                         args.y_nb))
In my main Python script I have:
#!/usr/bin/env python3
import subprocess
import concurrent.futures
import threading
...
args_list = []
for i in range(10):
    cmd = './test.py -x {} -y 2 '.format(i)
    args_list.append(cmd)

# just as an example, this line works fine
subprocess.run(args_list[0], shell=True)

# this multithreading is not working
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    executor.map(subprocess.run, args_list)
The problem here is that I can't pass the shell=True option to executor.map.
I have already tried, without success:
args_list = []
for i in range(10):
    cmd = './test.py -x {} -y 2 '.format(i)
    args_list.append((cmd, eval('shell=True')))
or
args_list = []
for i in range(10):
    cmd = './test.py -x {} -y 2 '.format(i)
    args_list.append((cmd, 'shell=True'))
Does anyone have an idea how to solve this problem?
I don't think the map method can call a function with keyword args directly, but there are two simple solutions to your issue.
Solution 1: Use a lambda to set the extra keyword argument you want
The lambda is basically a small function that calls your real function, passing the arguments through. This is a good solution if the keyword arguments are fixed.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    executor.map(lambda args: subprocess.run(args, shell=True), args_list)
Solution 2: Use executor.submit to submit the functions to the executor
The submit method lets you specify args and keyword args to the target function.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    for args in args_list:
        executor.submit(subprocess.run, args, shell=True)
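A third option in the same spirit (my addition, not part of the original answer) is functools.partial, which pre-binds the keyword argument into a new callable that map can use directly:
import concurrent.futures
import subprocess
from functools import partial

# partial bakes shell=True into the callable, so map only supplies each command
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    executor.map(partial(subprocess.run, shell=True), args_list)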

Capturing all output from subprocess in python3

I want to capture into variables all the output that the subprocess prints. Here is my code:
#!/usr/bin/env python3
import subprocess  # Subprocess management
import sys  # System-specific parameters and functions

try:
    args = ["svn", "info", "/directory/that/does/not/exist"]
    output = subprocess.check_output(args).decode("utf-8")
except subprocess.CalledProcessError as e:
    error = "CalledProcessError: %s" % str(e)
except:
    error = "except: %s" % str(sys.exc_info()[1])
else:
    pass
This script still prints this into the terminal:
svn: E155007: '/directory/that/does/not/exist' is not a working copy
How can I capture this into a variable?
check_output only captures stdout, NOT stderr (according to https://docs.python.org/3.6/library/subprocess.html#subprocess.check_output).
In order to capture stderr you should use:
>>> subprocess.check_output(
... "ls non_existent_file; exit 0",
... stderr=subprocess.STDOUT, ...)
I recommend reading the docs prior to asking here by the way.
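Applied to the svn snippet above, the change might look like this (a sketch; redirecting stderr into the captured stream is the key point):
import subprocess

args = ["svn", "info", "/directory/that/does/not/exist"]
try:
    # merge stderr into the captured output so nothing leaks to the terminal
    output = subprocess.check_output(args, stderr=subprocess.STDOUT).decode("utf-8")
except subprocess.CalledProcessError as e:
    # e.output holds everything the command printed, including the svn error
    error = e.output.decode("utf-8")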