I have a Python 3.7 script that takes a YAML file as input and processes it according to the instructions within. The YAML file I am using for unit testing looks like this:
...
tasks:
- echo '1'
- echo '2'
- echo '3'
- echo '4'
- echo '5'
The script loops over tasks and runs each one using an os.system() call.
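The loop itself is roughly this (a simplified sketch, not the exact code):
import os

def run_tasks(tasks):
    # Simplified sketch: each task is one shell command from the YAML file.
    for task in tasks:
        os.system(task)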
Manual testing indicates that the output is as expected:
1
2
3
4
5
But I can't make it work in my unit test. Here's how I'm trying to capture the output:
from application import application
from io import StringIO
import unittest
from unittest.mock import patch
class TestApplication(unittest.TestCase):
    def test_application_tasks(self):
        expected = ['1', '2', '3', '4', '5']
        with patch('sys.stdout', new=StringIO()) as fakeOutput:
            application.parse_event('some event')  # print() is called here within parse_event()
            self.assertEqual(fakeOutput.getvalue().strip().split(), expected)
When running python3 -m unittest discover -s tests, all I get is AssertionError: Lists differ: [] != ['1', '2', '3', '4', '5'].
I also tried using with patch('sys.stdout', new_callable=StringIO) as fakeOutput: instead, but to no avail.
Another thing I tried was self.assertEqual(fakeOutput.getvalue(), '1\n2\n3\n4\n5'), and here is what unittest outputs:
AssertionError: '' != '1\n2\n3\n4\n5'
+ 1
+ 2
+ 3
+ 4
+ 5
Obviously, the script works and outputs the right result, but fakeOutput does not capture it.
Using patch as a decorator does not work either:
from application import application
from io import StringIO
import unittest
from unittest.mock import patch
class TestApplication(unittest.TestCase):
    @patch('sys.stdout', new_callable=StringIO)
    def test_application_tasks(self, fakeOutput):
        expected = ['1', '2', '3', '4', '5']
        application.parse_event('some event')  # print() is called here within parse_event()
        self.assertEqual(fakeOutput.getvalue().strip().split(), expected)
This outputs exactly the same error: AssertionError: Lists differ: [] != ['1', '2', '3', '4', '5']
os.system runs a new process. If you monkey-patch sys.stdout, this affects the current process but has no effect on any new processes.
Consider:
import sys
from os import system
from io import StringIO

# Redirect this process's stdout to an in-memory buffer.
capture = sys.stdout = StringIO()
system("echo Hello")         # the child writes to *its own* stdout, not ours
sys.stdout = sys.__stdout__  # restore the real stdout
print(repr(capture.getvalue()))  # -> '' (nothing was captured)
Nothing is captured because only the child process has written to its stdout. Nothing has written to the stdout of your Python process.
Generally, avoid os.system. Instead, use the subprocess module, which will let you capture output from the process that is run.
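For example, a minimal sketch of capturing a command's output with subprocess, so a test can assert on a return value instead of patching sys.stdout:
import subprocess

# Run the command in a child process and capture its stdout in this process.
result = subprocess.run("echo Hello", shell=True, stdout=subprocess.PIPE)
print(result.stdout.decode("utf-8").strip())  # -> Hello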
Thank you, Jean-Paul Calderone. I realized only after posting the question that os.system() creates a completely separate process, so I needed to tackle the problem differently :)
To actually be able to test my code, I had to rewrite it using subprocess instead of os.system(). In the end, I went with subprocess_run_result = subprocess.run(task, shell=True, stdout=subprocess.PIPE) and then got the result with subprocess_run_result.stdout.strip().decode("utf-8").
In the tests, I just create an instance of the class and call a method that runs the tasks in a subprocess.
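A minimal sketch of that pattern (the class and method names here are illustrative, not the actual refactor):
import subprocess

class Application:
    def run_task(self, task):
        # Run one task in a child process and return its captured output.
        result = subprocess.run(task, shell=True, stdout=subprocess.PIPE)
        return result.stdout.strip().decode("utf-8")

# In a test, the return value can be asserted on directly:
app = Application()
assert app.run_task("echo 1") == '1'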
My whole refactored code and tests are here in this commit if anyone would like to take a look.
Your solution is fine; just use getvalue, like so:
from io import StringIO
from unittest.mock import patch

with patch("sys.stdout", new_callable=StringIO) as f:
    print("Foo")
    r = f.getvalue()
print("r: {r!r} ;".format(r=r))
r: "Foo" ;
I have this script, but I only want to start three Perl processes at a time. Once those three are done, the script should start the next three. At the moment, all processes are started in parallel. Unfortunately, I don't know what to do. Can someone help me?
My script:
import json, os
import subprocess
from subprocess import Popen, PIPE

# Note: 'list' shadows the built-in list type here.
list = open('list.txt', 'r')

procs = []
for dirs in list:
    args = ['perl', 'test.pl', '-a', dirs]
    proc = subprocess.Popen(args)
    procs.append(proc)

for proc in procs:
    proc.wait()
list.txt:
dir1
dir2
dir3
dir4
dir5
dir6
dir7
dir8
dir9
dir10
dir11
test.pl
$com=$ARGV[0];
$dirs=$ARGV[1];
print "$com $dirs";
sleep(5);
Use Python's concurrent.futures module: it provides a 'process pool' that will automatically keep only that many worker processes running, and start new tasks as the older ones complete.
As the target function, use a simple Python function that opens your external process and waits synchronously for the result: a function containing the lines currently inside your for loop.
Using concurrent.futures, your code might look like this:
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

mylist = open('list.txt', 'r')

def worker(dirs):
    # Runs one perl process and blocks until it has finished.
    args = ['perl', 'test.pl', '-a']
    proc = subprocess.run(args + [dirs])

executor = ThreadPoolExecutor(3)  # 3 is max_workers.
# ProcessPoolExecutor could be an option, but you don't need
# it - the `perl` process will run in another process anyway.

procs = []
for dirs in mylist:
    proc = executor.submit(worker, dirs)
    procs.append(proc)

for proc in as_completed(procs):
    try:
        result = proc.result()
    except Exception as exc:
        # handle any error that may have been raised in the worker
        pass
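As a side note, ThreadPoolExecutor also works as a context manager (with ThreadPoolExecutor(3) as executor:), which waits for all submitted tasks to finish on exit; the explicit as_completed loop is only needed if you want to inspect results or errors as they complete.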
I'm trying to run a function after exiting npyscreen, but after trying a few things I am still stuck: the app just exits npyscreen and returns to the bash prompt. The function is supposed to start a watchdog/rsync watch folder waiting for files to back up.
#!/usr/bin/env python
# encoding: utf-8
import npyscreen as np
from nextscript import want_to_run_this_function
class WuTangClan(np.NPSAppManaged):
    def onStart(self):
        self.addForm('MAIN', FormMc, name="36 Chambers")

class FormMc(np.ActionFormExpandedV2):
    def create(self):
        self.rza_gfk = self.add(np.TitleSelectOne, max_height=4, name="Better MC:", value=[0], values=["RZA", "GhostFace Killah"], scroll_exit=True)

    def after_editing(self):
        if self.rza_gfk.value == [0]:
            want_to_run_this_function()
            self.parentApp.setNextForm(None)
        else:
            self.parentApp.setNextForm(None)

if __name__ == "__main__":
    App = WuTangClan()
    App.run()
I'm not sure I understood correctly what you want.
For executing any kind of bash command, I like to use the subprocess module: it has the Popen constructor, which you can use to run anything you could run from a shell.
E.g., on Windows:
import subprocess
process = subprocess.Popen(['ipconfig','/all'])
On a Unix-like system:
import subprocess
process = subprocess.Popen(['ip','a'])
If you have a .py file, you can pass parameters as if you were running it from the terminal, e.g.:
import subprocess
process = subprocess.Popen(['python3','sleeper.py'])
You can even retrieve the process's pid and kill it whenever you want; you can look at the subprocess module documentation here.
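For example, a small sketch reusing the sleeper.py example above (terminate and wait are standard Popen methods):
import subprocess

process = subprocess.Popen(['python3', 'sleeper.py'])
print(process.pid)   # pid of the child process
process.terminate()  # ask the child to stop (SIGTERM on Unix)
process.wait()       # reap the child so it does not linger as a zombie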
I am trying to pass arguments from a pytest testcase to a module being tested. For example, using the main.py from Python boilerplate, I can run it from the command line as:
$ python3 main.py
usage: main.py [-h] [-f] [-n NAME] [-v] [--version] arg
main.py: error: the following arguments are required: arg
$ python3 main.py xx
hello world
Namespace(arg='xx', flag=False, name=None, verbose=0)
Now I am trying to do the same with pytest, with the following test_sample.py:
(NOTE: the main.py requires command line arguments. But these arguments need to be hardcoded in a specific test, they should not be command line arguments to pytest. The pytest testcase only needs to send these values as command line arguments to main.main().)
import main

def test_case01():
    main.main()
    # I don't know how to pass 'xx' to main.py,
    # so for now I just have one test with no arguments
and running the test as:
pytest -vs test_sample.py
This fails with error messages, since main.py exits when the required argument is missing. I tried to look at other answers for a solution but could not use them. For example, 42778124 suggests creating a separate file run.py, which is not desirable here. And 48359957 and 40880259 seem to deal more with command-line arguments for pytest itself, rather than with passing command-line arguments to the main code.
I don't need pytest to take command-line arguments; the arguments can be hardcoded inside a specific test. But these arguments need to be passed as arguments to the main code. Can you give me a test_sample.py that calls main.main() with some arguments?
If you can't modify the signature of the main method, you can use the monkeypatching technique to temporarily replace the arguments with the test data. Example: imagine writing tests for the following program:
import argparse

def main():
    parser = argparse.ArgumentParser(description='Greeter')
    parser.add_argument('name')
    args = parser.parse_args()
    return f'hello {args.name}'

if __name__ == '__main__':
    print(main())
When running it from the command line:
$ python greeter.py world
hello world
To test the main function with some custom data, monkeypatch sys.argv:
import sys
import greeter

def test_greeter(monkeypatch):
    with monkeypatch.context() as m:
        m.setattr(sys, 'argv', ['greeter', 'spam'])
        assert greeter.main() == 'hello spam'
When combined with the parametrizing technique, this makes it easy to test different arguments without modifying the test function:
import sys
import pytest
import greeter

@pytest.mark.parametrize('name', ['spam', 'eggs', 'bacon'])
def test_greeter(monkeypatch, name):
    with monkeypatch.context() as m:
        m.setattr(sys, 'argv', ['greeter', name])
        assert greeter.main() == 'hello ' + name
Now you get three tests, one for each of the arguments:
$ pytest -v test_greeter.py
...
test_greeter.py::test_greeter[spam] PASSED
test_greeter.py::test_greeter[eggs] PASSED
test_greeter.py::test_greeter[bacon] PASSED
A good practice might be to structure the code like this, instead of parsing the arguments inside the main method:
# main.py
import argparse

def main(arg1):
    return arg1

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='My awesome script')
    parser.add_argument('word', help='a word')
    args = parser.parse_args()
    main(args.word)
This way, your main function can easily be tested in pytest:
import main

def test_case01():
    main.main(your_hardcoded_arg)
I am not sure you can invoke a Python script itself from a test except by using the os module, which might not be good practice.
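If you do want to exercise the script end to end, here is a hedged sketch using subprocess instead of the os module (assuming the boilerplate main.py from the question is in the current directory):
import subprocess
import sys

def test_script_end_to_end():
    # Run main.py as a child process, the same way the shell would.
    result = subprocess.run([sys.executable, 'main.py', 'xx'],
                            stdout=subprocess.PIPE)
    assert result.returncode == 0
    assert b'hello world' in result.stdout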
I'm using the subprocess module to run a bash command. I want to display the result in real time, including when no new line is added but the output is still modified.
I'm using Python 3. My code runs with subprocess, but I'm open to any other module. I have some code that returns a generator for every new line added:
import subprocess
import shlex

def run(command):
    process = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
    while True:
        line = process.stdout.readline().rstrip()
        if not line:
            break
        yield line.decode('utf-8')

cmd = 'ls -al'
for l in run(cmd):
    print(l)
The problem comes with commands of the form rsync -P file.txt file2.txt for example, which shows a progress bar.
For example, we can start by creating a big file in bash:
base64 /dev/urandom | head -c 1000000000 > file.txt
Then try using Python to display the rsync command's output:
cmd = 'rsync -P file.txt file2.txt'
for l in run(cmd):
    print(l)
With this code, the progress bar is only printed at the end of the process, but I want to print the progress in real time.
From this answer, you can disable buffering when printing in Python:
You can skip buffering for a whole python process using "python -u"
(or #!/usr/bin/env python -u etc) or by setting the environment
variable PYTHONUNBUFFERED.
You could also replace sys.stdout with some other stream like wrapper
which does a flush after every call.
Something like this (not really tested) might work...but there are
probably problems that could pop up. For instance, I don't think it
will work in IDLE, since sys.stdout is already replaced with some
funny object there which doesn't like to be flushed. (This could be
considered a bug in IDLE though.)
>>> class Unbuffered:
...     def __init__(self, stream):
...         self.stream = stream
...     def write(self, data):
...         self.stream.write(data)
...         self.stream.flush()
...     def __getattr__(self, attr):
...         return getattr(self.stream, attr)
...
>>> import sys
>>> sys.stdout = Unbuffered(sys.stdout)
>>> print('Hello')
Hello
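For the rsync case specifically, readline() in the question's run() generator blocks until a newline arrives, while rsync redraws its progress line using carriage returns. One possible adaptation (a sketch along the same unbuffered idea, not the quoted answer's code) is to read the child's stdout in small raw chunks and forward them immediately:
import shlex
import subprocess
import sys

def run_unbuffered(command):
    # bufsize=0 gives an unbuffered pipe, so read() returns whatever
    # bytes are available instead of waiting for a full line.
    process = subprocess.Popen(shlex.split(command),
                               stdout=subprocess.PIPE, bufsize=0)
    while True:
        chunk = process.stdout.read(64)
        if not chunk:
            break
        sys.stdout.write(chunk.decode('utf-8', errors='replace'))
        sys.stdout.flush()
    process.wait()

run_unbuffered('rsync -P file.txt file2.txt')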
So I was trying to do a project based on argparse, and I actually copied all the code below from sentdex, who has a channel on YouTube.
But for some reason my code doesn't work and his does.
I'd be really happy if someone helped me, because it's really frustrating.
import argparse
import sys

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--x', type=float, default=1.0,
                        help='What is the first number?')
    parser.add_argument('--y', type=float, default=1.0,
                        help='What is the second number?')
    parser.add_argument('--operation', type=str, default='sub',
                        help='What operation? (add, sub)')
    args = parser.parse_args()
    sys.stdout.write(str(calc(args)))

def calc(args):
    operation = args.operation
    x = args.x
    y = args.y
    if operation == 'add':
        return x + y
    elif operation == 'sub':
        return x - y

if __name__ == '__main__':
    main()
#console:
--x=2 --y=4 --operation=sub
File "<ipython-input-1-f108b29d54dc>", line 1
--x=2 --y=4 --operation=sub
^
SyntaxError: can't assign to operator
argparse parses sys.argv, which is populated when the script is run from the command line. But you're typing the arguments directly into IPython, which tries to evaluate --x=2 --y=4 --operation=sub as Python code and fails, because you can't assign to an expression like --x.
You should either run this as a script from the command line, or modify args=parser.parse_args() to:
args=parser.parse_args(['--x', '2', '--y', '4', '--operation', 'sub'])
if you just want to test it without running the script from the command line.
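For completeness, assuming the script is saved as calc.py, running it from a regular shell works as expected:
$ python3 calc.py --x=2 --y=4 --operation=sub
-2.0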