How to efficiently unit test a function with multiple input calls? - python-3.x

I have a unit test that works as intended, however I feel that this is not the best way to test multiple inputs with pytest. It definitely violates the DRY principle. I think there's a better way to go about this but I can't figure out what. I'm also not sure what to actually do with the mock. It's not used, but it has to be there (see the 'mock_choice' parameter in the test function in the code below).
I thought perhaps looping through the calls would work, but that didn't work as intended. I really couldn't figure out another way besides using side_effect and calling the function four times to make sure I get the return values I intended.
Function To Test
def export_options():
    while True:
        try:
            choice = int(input("\nPlease make a selection"))
            if choice in [option for option in range(1, 5)]:
                return choice  # This is what I'm testing
            else:
                print("\nNot a valid selection\n")
        except ValueError:
            print("Please enter an integer")
Test Function
@mock.patch('realestate.app.user_inputs.input', side_effect=[1, 2, 3, 4])
def test_export_options_valid_choice(mock_choice):  # mock_choice needs to be here but isn't used!
    export_option = user_inputs.export_options()
    assert export_option == 1
    export_option = user_inputs.export_options()
    assert export_option == 2
    export_option = user_inputs.export_options()
    assert export_option == 3
    export_option = user_inputs.export_options()
    assert export_option == 4
The test works: it passes, and the function returns all values between 1 and 4. However, because the code is very repetitive, I would like to know if there's a better way to test multiple input calls, as I would like to apply the same approach to future tests.

You can use a for loop to avoid repeating code.
@mock.patch('realestate.app.user_inputs.input')
def test_export_options_valid_choice(self, mock_choice):
    for response in [1, 2, 3, 4]:
        with self.subTest(response=response):
            mock_choice.return_value = response
            export_option = user_inputs.export_options()
            self.assertEqual(export_option, response)
            # Not in the original version, but other tests might need it.
            mock_choice.assert_called_once_with("\nPlease make a selection")
            mock_choice.reset_mock()
The subtest context manager will tell you which input failed.
Is the only way to do something like this with subtests using the unittest module? I know pytest doesn't support subtests natively, but I was hoping there was a similar type of hack.
You can certainly loop without using subtests, but you may have difficulty telling which input failed. More generally, instead of a loop, you can call a common helper function for each test.
For pytest in particular, you can use the @pytest.mark.parametrize decorator to automate this.
@pytest.mark.parametrize('response', [1, 2, 3, 4])
@mock.patch('realestate.app.user_inputs.input')
def test_export_options_valid_choice(mock_choice, response):
    mock_choice.return_value = response
    export_option = user_inputs.export_options()
    assert export_option == response
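Each parametrized case runs as its own test, so a failure report names the offending input (e.g. test_export_options_valid_choice[3]), which gives much the same diagnostic benefit as subTest. As a sketch of an alternative, the same test can be written without mock at all, using pytest's built-in monkeypatch fixture to patch the builtin input (this assumes the same user_inputs module as above):

@pytest.mark.parametrize('response', [1, 2, 3, 4])
def test_export_options_valid_choice(monkeypatch, response):
    # Replace the builtin input() for the duration of this test;
    # the lambda ignores the prompt and returns the parametrized value.
    monkeypatch.setattr('builtins.input', lambda prompt: response)
    assert user_inputs.export_options() == response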

Related

Python recursion: why is no new object instantiated?

The following is a simplified example of a problem I ran into recently, and even though I circumvented it by rewriting my code, I still do not understand why it happened.
Consider the following python script:
class Configer(object):
    def __init__(self, pops_cfg = {}):
        self.pops_cfg = pops_cfg

    def add_pop(self, name, gs):
        pop_cfg = {}
        pop_cfg['gs'] = gs
        if name in self.pops_cfg:
            self.pops_cfg[name].update(pop_cfg)
        else:
            self.pops_cfg[name] = pop_cfg

    def get(self):
        return self.pops_cfg

def get_config(name):
    if name == 'I':
        c = Configer()
        c.add_pop('I', 100)
    elif name == 'IE':
        c = get_config('I')
        c.add_pop('E', 200)
    else:
        raise
    return c
with these test cases:
# test 1
a = get_config('I')
print(a, id(a), a.get())
# <__main__.Configer object at 0x7fa83615c580> 140360438629760 {'I': {'gs': 100}}
# test 2
b = get_config('IE')
print(b, id(b), b.get())
# <__main__.Configer object at 0x7fa83603f610> 140360437462544 {'I': {'gs': 100}, 'E': {'gs': 200}}
# test 3
c = get_config('I')
print(c, id(c), c.get())
# <__main__.Configer object at 0x7fa83603c280> 140360437449344 {'I': {'gs': 100}, 'E': {'gs': 200}}
The first and second tests behave as I expect, adding two dictionaries with keys I and [I,E], respectively. However, the last one, which is identical to the first, surprisingly (to me) has both keys. I understand what is happening, but don't understand why.
What is happening
Even the second test has a flaw: in it, the internal get_config('I') call does not produce a fresh configuration. Instead, it updates the one created earlier for the first test, a = get_config('I'). Similarly, the last test does not start from a new configuration but updates the previously created one, i.e. the variable c has both I and E among its pops_cfg keys after the second test executes.
What I don't understand
I'm curious to know why the behavior described in the last paragraph is happening. In particular, I thought that, the way the if/elif statements are written, both the name == 'I' and name == 'IE' cases would construct a fresh base case (associated with name == 'I') and then update it as needed. That's not happening, though.
Can you please enlighten me?
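For what it's worth, this looks like the classic mutable-default-argument pitfall: the {} in def __init__(self, pops_cfg = {}) is evaluated once, when the def statement runs, so every Configer() created without an explicit argument shares that single dictionary, even though each call does create a distinct instance (note the different ids in the output above). A minimal sketch of the usual fix, using None as a sentinel:

class Configer(object):
    def __init__(self, pops_cfg=None):
        # A new dict is created per instance, instead of sharing the one
        # bound to the default argument at definition time.
        self.pops_cfg = {} if pops_cfg is None else pops_cfg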

Is there a way to use `pytest-snapshot` with an unsortable list?

I want to implement a test that looks like this:
def test_some_function(snapshot):
    my_unsortable_list = my_function()
    assert my_unsortable_list == snapshot
However, the my_unsortable_list variable is of type List[Dict] and there is no easy way to sort it.
Also, the values inside my_unsortable_list are always the same, but their order is random.
Therefore the assertion assert my_unsortable_list == snapshot fails when we run this test.
I was wondering if there is a way to ignore the snapshot order and do something like this:
assert len(my_unsortable_list) == len(snapshot) # Compare number of elements in lists
assert all([x in my_unsortable_list for x in snapshot]) # Assert all elements are in both lists
The code above fails. Is there a way to use advanced features of pytest-snapshot so that this works?
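One possible workaround (a sketch, not a built-in pytest-snapshot feature): give the list a deterministic order before comparing, by sorting on a canonical serialization of each dict. This assumes the dicts are JSON-serializable; my_function is the hypothetical function from the question, and the == snapshot style follows the test above.

import json

def canonical(items):
    # Dicts aren't orderable directly, but their sorted-key JSON dumps are,
    # so sorting on them yields a stable order regardless of input order.
    return sorted(items, key=lambda d: json.dumps(d, sort_keys=True))

def test_some_function(snapshot):
    my_unsortable_list = my_function()
    assert canonical(my_unsortable_list) == snapshot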

Parameterized fixture with pytest-datafiles

I have a Python function that processes different types of files, for which I want to set up a testing scheme. For each of the different file types it can handle, I have a test file. I'd like to use pytest-datafiles so the tests automatically get performed on copies in a tmpdir. I'm trying to set up a parameterized fixture, similar to @pytest.fixture(params=[...]), so that the test function automatically gets invoked for each test file. How do I achieve this?
I tried the code below, but my datafiles are not copied to the tmpdir, and the test collection fails because the test_files() fixture does not yield any output. I'm quite new to pytest, so possibly I don't fully understand how it works.
@pytest.fixture(params=[1, 2])
@pytest.mark.datafiles('file1.txt', 'file1.txt')
def test_files(request, datafiles):
    for testfile in datafiles.listdir():
        yield testfile

@pytest.fixture(params=['expected_output1', 'expected_output2'])
def expected_output(request):
    return request.param

def my_test_function(test_files, expected_output):
    assert myFcn(test_files) == expected_output
After reading up on fixtures and marks I conclude that the way I tried to use pytest.mark.datafiles is probably not possible. Instead I used the built-in tmpdir functionality in pytest, as demonstrated below. (Also, the fact that I named my fixture function test_files() may have messed things up since pytest would recognize it as a test function.)
testFileNames = {1: 'file1.txt', 2: 'file2.txt'}
expectedOutputs = {1: 'expected_output1', 2: 'expected_output2'}

@pytest.fixture(params=[1, 2])
def testfiles(request, tmpdir):
    shutil.copy(testFileNames[request.param], tmpdir)
    return os.path.join(tmpdir, testFileNames[request.param])

@pytest.fixture(params=[1, 2])
def expected_output(request):
    return expectedOutputs[request.param]

def my_test_function(testfiles, expected_output):
    assert myFcn(testfiles) == expected_output
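A side observation (mine, not part of the original answer): two independently parametrized fixtures are combined as a cross product, so the code above generates four combinations, including mismatched pairs like file1.txt with expected_output2. If each file belongs with one expected output, one way to keep them paired is to parametrize a single fixture over tuples; myFcn is the hypothetical function under test from the question.

import os
import shutil
import pytest

cases = [('file1.txt', 'expected_output1'), ('file2.txt', 'expected_output2')]

@pytest.fixture(params=cases, ids=[fname for fname, _ in cases])
def case(request, tmpdir):
    fname, expected = request.param
    shutil.copy(fname, str(tmpdir))  # copy the test file into the tmpdir
    return os.path.join(str(tmpdir), fname), expected

def test_my_function(case):
    path, expected = case
    assert myFcn(path) == expected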

How to get a pytest fixture return value in autouse mode?

I am new to pytest. In the sample code below, how can I get the A() object in the test_one function when the fixture is in autouse mode?
import pytest
import time

class A:
    def __init__(self):
        self.abc = 12

@pytest.fixture(autouse=True)
def test_foo():
    print('connecting')
    yield A()
    print('disconnect')

def test_one():
    # how can I get the A() object?
    print([locals()])
    assert 1 == 1
You can always add the fixture as a parameter despite the autouse:

def test_one(test_foo):
    print(test_foo)
    assert 1 == 1
If you don't want to use the fixture parameter for some reason, you have to save the object elsewhere to be able to access it from your test:

a = None

@pytest.fixture(autouse=True)
def test_foo():
    global a
    a = A()
    yield
    a = None

def test_one():
    print(a)
    assert 1 == 1
This could be made a little better by using a test class and putting a in a class variable to avoid the global, but the first variant is still the preferred one, as it localizes the definition of the object.
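A rough sketch of that class-based variant (my illustration, not from the original answer), reusing the A class from the question: the fixture stores the object in a class attribute instead of a global.

class TestWithFixture:
    a = None

    @pytest.fixture(autouse=True)
    def setup_a(self):
        # Store the object on the class so tests can reach it via self.a.
        TestWithFixture.a = A()
        yield
        TestWithFixture.a = None

    def test_one(self):
        print(self.a)
        assert 1 == 1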
Apart from that, there is no real point in yielding an object you don't have access to. You may want to consider whether autouse is the right option for your use case. Autouse is often used for stateless setup / teardown.
If your use case is to do some setup/teardown regardless (as suggested by the connect/disconnect comments), and give optional access to an object, this is ok, of course.

How can I test a loop with multiple input calls?

I'm trying to test a function that depends on multiple user inputs to return some value.
I've already looked at multiple answers here, but none resolved my problem. I saw things with parametrize, mock, and monkeypatch, but none helped. I think that's largely because I didn't clearly understand the concepts behind what was being done, so I couldn't adapt them to my problem. I saw a suggestion to use an external file for this, but I don't want to depend on that. I'm using pytest and Python 3.7.3.
The function that I want to test is something like this
def function():
    usr_input = input('please enter a number: ')
    while True:
        if int(usr_input) >= 5:  # re-prompt while the value is not less than 5
            usr_input = input('please, enter a value less than 5: ')
        else:
            break
    return int(usr_input)  # return as int so callers can compare numerically
I want to know how I can pass two input values to test the function when the first entered value is not valid. Example: send the values 6 and 2, make an assert expecting the value 2, and pass the test. My other tests look like this:
def test_input(monkeypatch):
    monkeypatch.setattr('builtins.input', lambda x: 6)
    test = function()
    assert test == 2
but, for this case, it loops forever. Is it possible to do this only with parametrize or other simple code?
EDIT
I added an int() in my "if", as wim pointed out in the accepted answer, just to prevent any confusion for future readers. I'm assuming the cast is possible.
Two problems here. First, you need to convert the input into a number, otherwise the comparison fails because it compares a string with a number (the pre-edit code compared usr_input, a string, against 5). Note that the real input will never return a number, only a string.
Once you've cleared that up, you can monkeypatch input with a callable that can return different values when called:
def fake_input(the_prompt):
    prompt_to_return_val = {
        'please enter a number: ': '6',
        'please, enter a value less than 5: ': '2',
    }
    val = prompt_to_return_val[the_prompt]
    return val

def test_input(monkeypatch):
    monkeypatch.setattr('builtins.input', fake_input)
    test = function()
    assert test == 2
If you install the plugin pytest-mock, you can do this more easily with the mock API:
def test_input(mocker):
    mocker.patch('builtins.input', side_effect=["6", "2"])
    test = function()
    assert test == 2
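For completeness, the same side_effect idea works without the pytest-mock plugin, using the standard library's unittest.mock directly (a sketch; function is the code under test above). Each call to input() consumes the next value from the side_effect list, so the first prompt gets "6" and the retry gets "2".

from unittest import mock

def test_input_stdlib():
    with mock.patch('builtins.input', side_effect=["6", "2"]):
        assert function() == 2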
