Parametrizing Pytest fixtures with unique parameters - python-3.x

I'm trying to run the same set of tests on multiple fixtures, and for the fixtures to run through multiple different inputs, with the inputs being unique to each fixture.
My current code reduces down to something similar to this:
FN_A_FILES = ['ab/x.txt', 'ab/y.txt', 'ab/z.txt']
FN_B_FILES = ['abcd/x.txt', 'abcd/y.txt', 'abcd/z.txt']

@pytest.fixture
def foo(request, fn_a, fn_b):
    return request.getfixturevalue(request.param)

@pytest.fixture(scope='session', params=FN_A_FILES)
def fn_a(request):
    file_path = request.param[:3]
    file_name = request.param[3:]
    return [file_path, file_name]

@pytest.fixture(scope='session', params=FN_B_FILES)
def fn_b(request):
    file_path = request.param[:5]
    file_name = request.param[5:]
    return [file_path, file_name]

@pytest.mark.parametrize('foo', ['fn_a', 'fn_b'], indirect=True)
def test_foo(foo):
    assert '/' in foo[0]
    assert '.txt' in foo[1]
What I want is for test_foo to test:
fn_a('ab/x.txt')
fn_a('ab/y.txt')
fn_a('ab/z.txt')
fn_b('abcd/x.txt')
fn_b('abcd/y.txt')
fn_b('abcd/z.txt')
As it stands, the code above seems to run many more tests than necessary; I think it's generating every combination of (FN_A_FILES, FN_B_FILES), and then some on top of that; I can't quite make sense of the numbers in my head.
In my non-abstracted code I have three lists of files, two with 3 files each and one with 1. I have three fixtures (one per list) and one test function that requests them. Somehow running this adds up to 27 different tests, whereas I only want 7 (two fixtures with different sets of 3 inputs, and one fixture with 1 input).
Does anyone know how to set this up so that only the intended tests are run (6 in the simplified example above)?
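For what it's worth, the blow-up is the cartesian product pytest builds over every parametrized fixture in the test's fixture closure: foo requests both fn_a and fn_b, so the simplified example collects 2 (indirect foo params) × 3 (fn_a) × 3 (fn_b) = 18 tests, and the real code 3 × 3 × 3 × 1 = 27.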

I would dodge fixtures here and instead go for a simple data-driven approach, using a generator (yield) to parametrize the test, like this:
import pytest

FN_A_FILES = ['ab/x.txt', 'ab/y.txt', 'ab/z.txt']
FN_B_FILES = ['abcd/x.txt', 'abcd/y.txt', 'abcd/z.txt']

def test_data():
    for entry in FN_A_FILES:
        yield [entry[:3], entry[3:]]
    for entry in FN_B_FILES:
        yield [entry[:5], entry[5:]]

def bar(q):
    return f"{q[0]}-{q[1]}"

@pytest.mark.parametrize('foo', test_data(), ids=bar)
def test_foo(foo):
    assert '/' in foo[0]
    assert '.txt' in foo[1]
ids=bar is used to dynamically set test names to filenames from test data.
Running this test creates the following tests:
test_foo[ab/-x.txt]
test_foo[ab/-y.txt]
test_foo[ab/-z.txt]
test_foo[abcd/-x.txt]
test_foo[abcd/-y.txt]
test_foo[abcd/-z.txt]
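If you would rather keep the parametrization somewhere central instead of flattening it into the test module, a hedged alternative sketch (not from the original answer) is the pytest_generate_tests hook in conftest.py, which builds exactly the six (path, name) pairs; test_foo then takes foo as a plain argument, with the foo/fn_a/fn_b fixtures and the parametrize mark removed:

# conftest.py -- sketch assuming the same FN_A_FILES / FN_B_FILES lists as above
FN_A_FILES = ['ab/x.txt', 'ab/y.txt', 'ab/z.txt']
FN_B_FILES = ['abcd/x.txt', 'abcd/y.txt', 'abcd/z.txt']

def pytest_generate_tests(metafunc):
    # Only parametrize tests that actually take a 'foo' argument.
    if 'foo' in metafunc.fixturenames:
        params = [[f[:3], f[3:]] for f in FN_A_FILES] + [[f[:5], f[5:]] for f in FN_B_FILES]
        metafunc.parametrize('foo', params, ids=lambda p: f'{p[0]}-{p[1]}')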

Related

Parameterized fixture with pytest-datafiles

I have a Python function that processes different types of files, for which I want to set up a testing scheme. For each of the different file types it can handle I have a test file. I'd like to use pytest-datafiles so the tests automatically get performed on copies in a tmpdir. I'm trying to set up a parametrized fixture, similar to @pytest.fixture(params=[...]), so that the test function automatically gets invoked for each test file. How do I achieve this?
I tried the code below, but my datafiles are not copied to the tmpdir, and the test collection fails because the test_files() fixture does not yield any output. I'm quite new to pytest, so possibly I don't fully understand how it works.
@pytest.fixture(params=[1, 2])
@pytest.mark.datafiles('file1.txt', 'file2.txt')
def test_files(request, datafiles):
    for testfile in datafiles.listdir():
        yield testfile

@pytest.fixture(params=['expected_output1', 'expected_output2'])
def expected_output(request):
    return request.param

def my_test_function(test_files, expected_output):
    assert myFcn(test_files) == expected_output
After reading up on fixtures and marks I conclude that the way I tried to use pytest.mark.datafiles is probably not possible. Instead I used the built-in tmpdir functionality in pytest, as demonstrated below. (Also, the fact that I named my fixture function test_files() may have messed things up since pytest would recognize it as a test function.)
import os
import shutil
import pytest

testFileNames = {1: 'file1.txt', 2: 'file2.txt'}
expectedOutputs = {1: 'expected_output1', 2: 'expected_output2'}

@pytest.fixture(params=[1, 2])
def testfiles(request, tmpdir):
    shutil.copy(testFileNames[request.param], tmpdir)
    return os.path.join(tmpdir, testFileNames[request.param])

@pytest.fixture(params=[1, 2])
def expected_output(request):
    return expectedOutputs[request.param]

def my_test_function(testfiles, expected_output):
    assert myFcn(testfiles) == expected_output
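One thing worth noting about the solution above (an observation added here, not part of the original post): because testfiles and expected_output are parametrized independently, pytest again builds the cross product, 2 × 2 = 4 tests, two of which pair a file with the wrong expected output. A sketch that keeps each file paired with its expected output by parametrizing a single fixture over the shared key:

import os
import shutil
import pytest

testFileNames = {1: 'file1.txt', 2: 'file2.txt'}
expectedOutputs = {1: 'expected_output1', 2: 'expected_output2'}

@pytest.fixture(params=[1, 2])
def case(request, tmpdir):
    # Copy the input file into tmpdir and hand back (copied path, expected output) as a pair.
    shutil.copy(testFileNames[request.param], tmpdir)
    copied = os.path.join(tmpdir, testFileNames[request.param])
    return copied, expectedOutputs[request.param]

def test_my_function(case):
    testfile, expected = case
    assert myFcn(testfile) == expected  # myFcn is the function under test from the question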

Using Pytest fixture with different inputs in different data rather than same data

I have a pytest fixture:
@pytest.fixture(params=PW.getCases())
def getData(self, request):
    return request.param
This returns a series of test data, one entry at a time. I have a test like:
def test_Sample(self, getData):
    for case in range(casecount):
        casePage.caseList1()[case].click()
Here the same test executes for every param in the series, but is there a way I can increment the param for each increment in case?
I have tried using scope on the fixture, but with no success.
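One possible direction (a hypothetical sketch of mine, reusing the undefined PW and casePage names from the question) is to carry the case index along with each data row, so every generated test handles exactly one case instead of looping inside the test body:

import pytest

# PW.getCases() and casePage are assumed to come from the question's own code.
CASES = list(enumerate(PW.getCases()))  # [(0, data0), (1, data1), ...]

@pytest.fixture(params=CASES, ids=lambda c: f'case{c[0]}')
def indexed_data(request):
    return request.param

def test_sample(indexed_data):
    case_index, data = indexed_data
    casePage.caseList1()[case_index].click()  # one case per generated test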

How to capture the iteration number dynamically inside a test while using pytest-repeat

I'm executing my Selenium script multiple times by using pytest-repeat. I need to capture the iteration number during execution and make use of it.
I explored pytest.mark, pytest.collect and pytest.Collector.
class Testone():
    @pytest.fixture()
    def setup(self):
        ...

    @pytest.mark.repeat(RowCount)
    def test_create_eq(self, setup):
        # Need to capture the iteration number here.
        ...
I think there should be an easier and more straightforward way than what I describe below. pytest-repeat has a fixture __pytest_repeat_step_number which I hoped could provide the current step number for the test, but it did not.
request.node.name provides the name of the test function generated by pytest-repeat, and it contains the step number, which can be extracted for your purpose.
import pytest

class Testone():
    @pytest.fixture()
    def setup(self):
        pass

    @pytest.mark.repeat(4)
    def test_create_eq(self, setup, request):
        current_step = request.node.name.split('[')[1].split('-')[0]  # string form; parse to int, if required
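To illustrate the parsing (my example values, assuming pytest-repeat generates ids of the form "<step>-<count>", which is what the split above relies on):

name = "test_create_eq[2-4]"  # what request.node.name would look like on the second of four repeats
current_step = int(name.split('[')[1].split('-')[0])
assert current_step == 2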

Pytest user input simulation

I am brand new to pytest and trying to work my way through it.
I'm currently programming a small CLI game which will require multiple user inputs in a row, and I cannot figure out how to do that.
I read a bunch of solutions but did not manage to make it work.
Here is my code:
class Player:
    def __init__(self):
        self.set_player_name()
        self.set_player_funds()

    def set_player_name(self):
        self.name = str(input("Player, what's your name?\n"))

    def set_player_funds(self):
        self.funds = int(input("How much money do you want?\n"))
I simply want to automate the user input for those two requests.
(i.e. a test that would input "Bob" and then assert player.name == "Bob".)
Can someone help out with that?
Thank you!
Quite an elegant way to test inputs is to mock input by using the monkeypatch fixture. To handle multiple inputs, a lambda and an iterator object can be used.
def test_set_player_name(monkeypatch):
    # provided inputs
    name = 'Tranberd'
    funds = 100
    # creating iterator object
    answers = iter([name, str(funds)])
    # using lambda statement for mocking
    monkeypatch.setattr('builtins.input', lambda name: next(answers))

    player = Player()
    assert player.name == name
    assert player.funds == funds
monkeypatch on docs.pytest.org.
Iterator object on wiki.python.org.
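A note on the lambda above: input() passes its prompt string to the replacement, which is why the lambda takes one (ignored) argument. As a variant sketch (assuming the Player class from the question), the fake answers can also be keyed on the prompt text so they don't depend on call order:

def test_player_by_prompt(monkeypatch):
    # Map each prompt string from Player to the answer the test should supply.
    answers = {
        "Player, what's your name?\n": 'Bob',
        "How much money do you want?\n": '250',
    }
    monkeypatch.setattr('builtins.input', lambda prompt: answers[prompt])

    player = Player()
    assert player.name == 'Bob'
    assert player.funds == 250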

Test-Driven Development in Python

How do I write a test for the default behavior (of a method) of printing a range that we give it? Below is my attempt; the code is pasted from my implementation file and the test case file.
# app/logic.py
class FizzBuzzService:
    def print_number(self, num):
        for i in range(num):
            print(i, end=' ')

# test file
import unittest
from app.logic import FizzBuzzService

class FizzBuzzServiceTestCases(unittest.TestCase):
    def setUp(self):
        """
        Create an instance of fizz_buzz_service
        """
        self.fizzbuzz = FizzBuzzService()

    def test_it_prints_a_number(self):
        """
        Test for the default behavior of printing the range that we give
        fizz_buzz_service
        """
        number_range = range(10)
        self.assertEqual(self.fizzbuzz.print_number(10), print(*number_range))
For me, at least, TDD is about finding a good design as much as it's about testing. As you've seen, testing for things like printed output is hard.
Printing like this is known as a side effect: put simply, the method is doing something not based solely on its input parameter. My solution would be to make print_number less side-effecty and then test it like that. If you need to print, you can write another function higher up that prints the output of print_number but contains no meaningful logic beyond that, and that doesn't really need testing. Here's an example with your code changed to not have a side effect (it's one of several possible alternatives):
class FizzBuzzService:
    def print_number(self, num):
        for i in range(num):
            yield i

import unittest

class FizzBuzzServiceTestCases(unittest.TestCase):
    def setUp(self):
        """
        Create an instance of fizz_buzz_service
        """
        self.fizzbuzz = FizzBuzzService()

    def test_it_prints_a_number(self):
        """
        Test for the default behavior of printing the range that we give
        fizz_buzz_service
        """
        number_range = range(10)
        output = []
        for x in self.fizzbuzz.print_number(10):
            output.append(x)
        self.assertEqual(list(number_range), output)  # compare as lists; range(10) != [0, ..., 9]
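The "function higher up that prints" mentioned above could be a thin wrapper like the sketch below (the name print_numbers is mine, not from the original answer); it just unpacks the generator into print and holds no logic worth testing:

def print_numbers(self, num):
    # Thin, untested wrapper (added to FizzBuzzService) around the generator version above.
    print(*self.print_number(num))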
You need to capture standard output in your tests to do that:
import sys
from io import StringIO

def test_it_prints_a_number(self):
    initial_stdout = sys.stdout
    sys.stdout = StringIO()
    self.fizzbuzz.print_number(10)
    value = sys.stdout.getvalue()
    sys.stdout = initial_stdout
    self.assertEqual(value, ' '.join(str(i) for i in range(10)) + ' ')
As you can see, it's really messy, so I'd highly recommend against it. Tests written based on string contents, especially standard output, are utterly fragile. Besides, the whole point of TDD is to write well-designed, isolated code that is easily testable. If your code is difficult to test, that is a sure indication that there's a problem in your design.
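A somewhat tidier variant of the same idea (a sketch added here, not part of the original answer) is contextlib.redirect_stdout, which restores stdout automatically:

import io
from contextlib import redirect_stdout

def test_it_prints_a_number(self):
    buffer = io.StringIO()
    with redirect_stdout(buffer):
        self.fizzbuzz.print_number(10)
    # print_number uses print(i, end=' '), so the captured text is "0 1 2 ... 9 "
    self.assertEqual(buffer.getvalue(), ' '.join(str(i) for i in range(10)) + ' ')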
How about you divide your code into two parts, one that produces the numbers and needs to be tested, and another that just prints them?
def get_numbers(self, num):
    return range(num)

def print_number(self, num):
    print(*self.get_numbers(num))

# Now you can easily test the get_numbers method.
Now, if you really want to test the printing functionality itself, then the better way would be to use mocking.
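If you do go the mocking route, a minimal sketch (my addition, assuming the original printing version of print_number that calls print(i, end=' ')) is to patch builtins.print and assert on the recorded calls:

from unittest import mock

def test_it_prints_each_number(self):
    with mock.patch('builtins.print') as fake_print:
        self.fizzbuzz.print_number(3)
    # print_number calls print(i, end=' ') once per value in range(3)
    fake_print.assert_has_calls([mock.call(i, end=' ') for i in range(3)])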
