monkeypatch in python, leaking mocks to other tests causing them to fail - python-3.x

I am monkeypatching other function calls while writing pytest unit tests, as below:
from _pytest.monkeypatch import MonkeyPatch
from third_party import ThirdParty

def test_my_func():
    resp1 = "resp1"
    monkeypatch = MonkeyPatch()

    def mock_other_method(*args, **kwargs):
        return resp1

    monkeypatch.setattr(ThirdParty, "other_method", mock_other_method)
    assert ThirdParty().other_method() == "resp1"
    # Some assertions

def test_my_func2():
    monkeypatch = MonkeyPatch()
    expected_result = "This is expected"
    result_third_party = ThirdParty().other_method()
    assert result_third_party == expected_result
where third_party.py has:
class ThirdParty:
    def other_method(self):
        return "This is expected"
These tests run fine when run independently (I just wrote them, so there might be some syntax errors). But when I run them together with pytest -v, the second test fails. The reason is that calling other_method now returns the mocked mock_other_method, and since the response is different the assertion fails. Please suggest a solution to this.

monkeypatch is a pytest fixture and as such is not supposed to be imported. Instead, you have to request it as an argument of the test function. Pytest loads all fixtures at test start and looks them up by name, so the correct usage would be:
from third_party import ThirdParty
# no import from pytest internal modules!

def test_my_func(monkeypatch):
    resp1 = "resp1"

    def mock_other_method(*args, **kwargs):
        return resp1

    monkeypatch.setattr(ThirdParty, "other_method", mock_other_method)
    assert ThirdParty().other_method() == resp1
The monkeypatch fixture has function scope, meaning that the patching is reverted automatically after each test function.
Note that using the internal pytest API (e.g. importing from _pytest) is discouraged, both because it may change with a new version and because there are more convenient and safer ways to use the same features (not least because those ways are documented). You should never have to handle fixture cleanup yourself if you use a fixture provided by pytest or a pytest plugin - it is too easy to forget the cleanup and end up with unwanted side effects.

I found a solution by adding monkeypatch.undo() at the end of every test. This prevents the patch from leaking into other tests.
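For reference, a minimal sketch of that undo() approach, assuming the same ThirdParty class from the question (the fixture-based version above is still the recommended route):

from _pytest.monkeypatch import MonkeyPatch
from third_party import ThirdParty

def test_my_func():
    monkeypatch = MonkeyPatch()
    monkeypatch.setattr(ThirdParty, "other_method", lambda *args, **kwargs: "resp1")
    try:
        assert ThirdParty().other_method() == "resp1"
    finally:
        # revert all patches so later tests see the real other_method again
        monkeypatch.undo()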

Related

Class wide mock in pytest (for all methods in the whole TestClass)

I am unit testing my new library, which is basically a database interface. Our apps use it to access our database. That means I want to test all its methods, but I do not want the DB commands to be executed for real; I only check that they are called with the correct arguments.
For that purpose, I am mocking the database library. This is the actual code that DOES work:
import pytest
from unittests.conftest import PyTestConfig as Tconf
from my_lib.influx_interface import InfluxInterface

class TestInfluxInterface:
    def test_connect(self, mocker):
        """
        Create InfluxConnector object and call connect()
        check if __init__() arguments are passed / used correctly
        """
        influx_client = mocker.patch('my_lib.influx_interface.InfluxDBClient', autospec=True)
        test_connector = InfluxInterface(Tconf.test_id)
        # Call connect with no input (influx_client should be called with no arguments)
        test_connector.connect()
        influx_client.assert_called_once()
        influx_client.reset_mock()
        # Call connect with custom correct input (influx_client should be called with custom values)
        test_connector.connect(Tconf.custom_conf)
        influx_client.assert_called_once_with(url=Tconf.custom_conf["url"], token=Tconf.custom_conf["token"],
                                              org=Tconf.custom_conf["org"], timeout=Tconf.custom_conf["timeout"],
                                              debug=Tconf.custom_conf["debug"])
        influx_client.reset_mock()
        # Call connect with incorrect input (influx_client should be called with default values)
        test_connector.connect(Tconf.default_conf)
        influx_client.assert_called_once_with(url=Tconf.default_conf["url"], token=Tconf.default_conf["token"],
                                              org=Tconf.default_conf["org"], timeout=Tconf.default_conf["timeout"],
                                              debug=Tconf.default_conf["debug"])
Now, what I do next is add more methods to the TestInfluxInterface class, which will test the rest of the code - one test method for each method in my library. That is how I usually do it.
The problem is that this part of the code:
influx_client = mocker.patch('my_lib.influx_interface.InfluxDBClient', autospec=True)
test_connector = InfluxInterface(Tconf.test_id)
will be the same for every test method, so I would be copy-pasting it over and over. As you can already see, that is not a good solution.
In unittest, I would do this:
import unittest
import unittest.mock as mock
from unittests.conftest import PyTestConfig as Tconf
from my_lib.influx_interface import InfluxInterface

@mock.patch('my_lib.influx_interface.InfluxDBClient', autospec=True)
class TestInfluxInterface:
    def setUp(self):
        self.test_connector = InfluxInterface(Tconf.test_id)

    def test_connect(self, influx_client):
        """
        Create InfluxConnector object and call connect()
        check if __init__() arguments are passed / used correctly
        """
        # Call connect with no input (influx_client should be called with no arguments)
        self.test_connector.connect()
        influx_client.assert_called_once()
Then in each test method, I would use self.test_connector to call whatever method I want to test and check that it calls the correct influx_client method with the correct parameters.
However, we are moving from unittest to pytest, and while I am wrapping my head around the pytest docs, reading up on fixtures, mockers, etc., I cannot figure out how to do this correctly.
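One way to avoid the repetition with pytest is to move the patching and the connector construction into fixtures. This is only a sketch under the assumption that the module paths from the question (my_lib.influx_interface, unittests.conftest) are correct, not a verified solution:

import pytest
from unittests.conftest import PyTestConfig as Tconf
from my_lib.influx_interface import InfluxInterface

class TestInfluxInterface:
    @pytest.fixture
    def influx_client(self, mocker):
        # pytest-mock undoes the patch automatically after each test
        return mocker.patch('my_lib.influx_interface.InfluxDBClient', autospec=True)

    @pytest.fixture
    def test_connector(self, influx_client):
        # depends on influx_client, so the patch is in place before construction
        return InfluxInterface(Tconf.test_id)

    def test_connect(self, influx_client, test_connector):
        test_connector.connect()
        influx_client.assert_called_once()

Moving the two fixtures into conftest.py would make them available to every test module, which is roughly the pytest equivalent of a shared setUp().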

python mock find where method was called

I am working on a project that has some pytests; in one of the tests I have the following lines:
mocked_class = Mock()
assert mocked_class.send.call_count == 1
Now I cannot find the place in the code where the send method is being called.
I tried to add
mocked_class.send = my_method
and put prints there or set a breakpoint, but it did not work.
So it seems that I am missing something.
The tests run on Python 3.8 with
import pytest
from mock import Mock
How can I find who calls this method? Any other help with debugging this is also welcome.
send might not exist in your code at all, since it is part of a Mock instance and you can call arbitrary methods on a Mock instance:
from unittest.mock import Mock

def test_1():
    mocked_class = Mock()
    mocked_class.wasda()
    assert mocked_class.wasda.call_count == 1

test_1()
There are plenty of examples in the docs: https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock
Your code should also raise an AssertionError, since there are no calls to send before the assert statement.
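If you do need to track down the caller, one option (not part of the original answer) is to attach a side_effect that prints the current stack whenever send is called; the mocked_class name below is taken from the question:

import traceback
from unittest.mock import Mock

def print_caller(*args, **kwargs):
    # prints the full call chain that reached send()
    traceback.print_stack()

mocked_class = Mock()
mocked_class.send.side_effect = print_caller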

Is it possible to inherit setup() and tearDown() methods?

I work with automation primarily in C# / Java and have been looking into Python for its speed.
In C#, I can write a class that implements a WebDriver instance, along with [SetUp] and [TearDown] methods. Then, every class containing test cases can inherit from this class, so I do not need to write my SetUp and TearDown for every test class I write.
The other benefit of the SetUp / TearDown fixture is that I can use the same WebDriver instance throughout all tests. SetUp creates the WebDriver instance and passes it into the test case class, where the test can use it to initialize PageObjects, perform clicks, etc. When the test is finished, the WebDriver instance is passed back to TearDown for cleanup. This is highly efficient and easy to use.
My issue: I do not understand the Python best-practice on how to implement this functionality.
I have read through the Python unittest docs here and read up on Python multiple inheritance here with little luck. I have also read this SO discussion here, but it is 10+ years old and contains many polarizing opinions. I did use that discussion as guidance to get started, but I do not want to blindly write code without understanding it.
I am hung up on the part about how to actually inherit setUp(), tearDown(), and my webdriver instance. I don't want to declare a new webdriver instance, and re-write setUp() and tearDown() methods for every single test class, as this seems inefficient.
Here's what I've tried:
This is the SetUp / TearDown fixture, which is meant to handle setup and teardown for ALL test cases and also keep track of a singleton WebDriver instance.
Project Directory Structure:
base_test_fixture.py
from selenium import webdriver
import unittest

class BaseTestFixture(unittest.TestCase):
    driver = None

    def setUp(self):
        print("Running SetUp")
        self.driver = webdriver.Chrome()

    def tearDown(self):
        print("Running Teardown")
        self.driver.close()
        self.driver.quit()
Here is test_webdriver.py:
import unittest
import BaseTestFixture

class TestWebDriver(BaseTestFixture.SetUpTearDownFixture):
    def test_should_start_webdriver(self):
        super.setUp()
        print("Running test 1")
        super.driver.get("https://www.google.com")
        assert "Google" in self.driver.title
        super.tearDown()

    def test_should_navigate_to_stackoverflow(self):
        super.setUp()
        print("Running test 2")
        super.driver.get("https://www.stackoverflow.com")
        assert "Stack Overflow" in self.driver.title
        super.teardown()

if __name__ == '__main__':
    unittest.main()
Here's the error my class declaration is showing: AttributeError: module 'BaseTestFixture' has no attribute 'SetUpTearDownFixture'
Is it possible to implement a single WebDriver, setUp(), and tearDown() for all Python test cases?
You are very close. The Python convention is that your module should be named with underscores, so I would rename BaseTestFixture.py to base_test_fixture.py, and the class in the module would be the CamelCase version of the module name.
That would give us base_test_fixture.py:
from selenium import webdriver
from unittest import TestCase
class BaseTestFixture(TestCase):
and test_web_driver.py:
import unittest
from base_test_fixture import BaseTestFixture
class TestWebDriver(BaseTestFixture):
If you're still having trouble, the problem may be in the directory structure of your package, so share that with us by editing your question above to show the layout of your directories and files.
Also, within your tests, since the test class inherits self.driver, you just refer to it as self.driver (no super.).
Also, setUp() and tearDown() are called automatically by unittest, so you don't have to call them explicitly.
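Putting those points together, a corrected test_webdriver.py might look roughly like this (a sketch based on the answer's suggestions; the URL and title check are carried over from the question):

import unittest
from base_test_fixture import BaseTestFixture

class TestWebDriver(BaseTestFixture):
    def test_should_start_webdriver(self):
        # setUp() has already run, so the inherited driver is available as self.driver
        self.driver.get("https://www.google.com")
        assert "Google" in self.driver.title
        # tearDown() runs automatically after the test

if __name__ == '__main__':
    unittest.main()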

Creating custom 'test' command to run a test suite for a Flask application

We are extending the Flask CLI with some custom commands. The command test is one of them:
# run.py: specified by FLASK_APP
import os
import unittest
# create_app is the application factory (import not shown here)

# This is the Flask application object
app = create_app(os.getenv('FLASK_ENV') or 'default')

@app.cli.command()
def test():
    """Run the unit tests."""
    tests = unittest.TestLoader().discover('tests')
    test_runner = unittest.TextTestRunner()
    test_runner.run(tests)
However, a typical test (using Python's built-in unittest module) looks like this, based on the style described here:
# some-tests.py: unittest-based test case
class SomeTestCase(unittest.TestCase):
    def setUp(self):
        self.app = create_app('testing')
        self.app_context = self.app.app_context()
        self.app_context.push()

    def tearDown(self):
        self.app_context.pop()

    def test_load(self):
        pass
I am clearly hitting an anti-pattern here: I have initialized a Flask object with the default (development) configuration because I need it for the @app.cli.command() decorator, which all happens in run.py. However, once the setUp function in some-tests.py runs, I somehow have to obtain a Flask object that uses the testing configuration, e.g. by recreating the Flask app with the testing configuration, which is what happens now.
I would like some pointers on how to implement a flask test CLI command in which only one Flask object is created and reused across the various test cases, without having to explicitly set the environment to testing before running flask test on the command line.
I'm not sure if this answer will suit your requirements, but this is how I would try to approach the problem. Unfortunately, if you want to use the default CLI interface in Flask, you need to call create_app just to invoke the flask test command. What you can do is use pytest instead. It allows you to create fixtures that can be shared across multiple test cases. For example, in your tests package create a file named conftest.py and declare some default fixtures like this:
import pytest
# create_app and _db are assumed to be imported from your application package

@pytest.fixture
def app():
    return create_app('testing')

@pytest.fixture
def client(app):
    return app.test_client()

@pytest.fixture
def database(app):
    _db.app = app
    with app.app_context():
        _db.create_all()

    yield _db

    _db.session.close()
    _db.drop_all()
Then in your test case file (e.g. test_login.py) you can use those fixtures like this:
# Notice argument names are the same as the names of our fixtures.
# You don't need to import fixtures into this file - pytest will
# automatically recognize fixtures for you.
def test_registration(app, client):
    response = client.post(
        '/api/auth/login',
        json={
            'username': 'user1',
            'password': '$dwq3&jNYGu'
        })
    assert response.status_code == 200
    json_data = response.get_json()
    assert json_data['access_token']
    assert json_data['refresh_token']
The best thing about this approach is that you don't need to create setUp and tearDown methods. Then you can create a test CLI command for your app:
import pytest

@app.cli.command()
def test():
    '''
    Run tests.
    '''
    pytest.main(['--rootdir', './tests'])
And call it like this: flask test.
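One possible refinement, not from the original answer: pytest.main() returns an exit code, so the command can pass it on to the shell, which is useful if flask test runs in CI. A sketch, assuming the same app object created in run.py:

import sys
import pytest

@app.cli.command()
def test():
    '''
    Run tests and exit with pytest's status code.
    '''
    sys.exit(pytest.main(['--rootdir', './tests']))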

patching boto3.resource and resource.Table method for unit testing

Here is my class:
# Production.py
import boto3
class Production(object):
resource = boto3.resource('dynamodb', 'us-west-2')
def __init__(self):
self.table = Production.resource.Table('employee')
I am trying to test that resource.Table is called with the argument 'employee'. I wrote a unit test for it:
def test_init():
    with patch('Production.boto3.resource') as resource:
        mock_resource = MagicMock()
        resource.return_value = mock_resource
        pd = Production()
        resource.assert_called_with('dynamodb', 'us-west-2')
        table = resource.return_value.Table.return_value
        table.assert_called_with('employee')

test_init()
But it doesn't seem to work... Can someone help me with how to test this?
When you patch an object, it mocks all of its methods for you. So (I didn't test the code, but) I think just:
def test_resource_is_called_with_correct_params():
    with patch('Production.boto3') as mock_boto:
        Production()
        mock_boto.resource.assert_called_once_with('dynamodb', 'us-west-2')
will do the first part of your test. I would then test the init function separately in another test, which is clearer and simpler (generally, aim to test one thing per test):
def test_table_is_called_with_correct_params():
    with patch('Production.boto3') as mock_boto:
        Production()
        mock_resource = mock_boto.resource.return_value
        mock_resource.Table.assert_called_once_with('employee')
I would say a couple of things about this though:
It's nice to group your tests into a class, which keeps them organised. Also, when you subclass TestCase you get a bunch of methods that come with it, such as self.assertDictEqual, that give good, meaningful output and work well with test runners like nose2. So do something like:
class TestProduction(unittest.TestCase):
    def test1(self):
        pass

    def test2(self):
        pass
The stuff you are testing here is basically hard-coded, so these tests are not really meaningful - you are just testing that the language works. I would learn to test behaviour rather than implementation ... So what do you want your class to do? Sit down and think about it first before you write the class. Then you can write the specs out and use them to design your tests before you even start coding.
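As a side note (my own observation, not something the answer states): because resource is a class attribute, boto3.resource() runs when Production.py is imported, so a patch applied inside the test may come too late for it. A hedged alternative is to patch the attribute itself, so that __init__ uses a mock when it calls resource.Table('employee'):

import unittest
from unittest.mock import MagicMock, patch
from Production import Production

class TestProduction(unittest.TestCase):
    def test_table_is_requested_with_table_name(self):
        # replace the already-created class attribute for the duration of the test
        with patch.object(Production, 'resource', MagicMock()) as mock_resource:
            Production()
            mock_resource.Table.assert_called_once_with('employee')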
