I have to set different environment variables for my test cases.
My assumption was that once a test case completes, monkeypatch removes the env variables from os.environ. But it does not. How do I set and revert the environment variables for each test?
Here is my simplified test case code with monkeypatch lib.
import os
import unittest
import time
from _pytest.monkeypatch import MonkeyPatch


class Test_Monkey_Patch_Env(unittest.TestCase):
    def setUp(self):
        print("Setup")

    def test_1(self):
        monkeypatch = MonkeyPatch()
        monkeypatch.setenv("TESTVAR1", "This env value is persistent")

    def test_2(self):
        # I expected the env var TESTVAR1 set in test_1 using monkeypatch
        # not to persist across test cases. But it does.
        print(os.environ["TESTVAR1"])

    def tearDown(self):
        print("tearDown")


if __name__ == '__main__':
    unittest.main()
Output:
Setup
tearDown
.Setup
This env value is persistent
tearDown
.
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
This expands a bit on the (correct) answer given by @MaNKuR.
The reason it does not work as expected is that MonkeyPatch is not designed to be used this way in pytest; instead, the monkeypatch fixture is used, which does the cleanup on leaving the test scope. To get the same cleanup in unittest, you can do it in tearDown, as shown in that answer, though I would use the less specific undo for this:
from _pytest.monkeypatch import MonkeyPatch

class Test_Monkey_Patch_Env(unittest.TestCase):
    def setUp(self):
        self.monkeypatch = MonkeyPatch()

    def tearDown(self):
        self.monkeypatch.undo()
This reverts any changes made by using the MonkeyPatch instance, not just the specific setenv call shown above, and is therefore more secure.
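For comparison, here is a minimal sketch of the pytest-native approach mentioned above (test names are illustrative, and it assumes TESTVAR1 is not otherwise set in the environment): the built-in monkeypatch fixture reverts the setenv call automatically when each test finishes, so no tearDown is needed.
import os

def test_sets_env(monkeypatch):
    monkeypatch.setenv("TESTVAR1", "only visible inside this test")
    assert os.environ["TESTVAR1"] == "only visible inside this test"

def test_env_is_reverted():
    # TESTVAR1 was removed again when test_sets_env finished
    assert "TESTVAR1" not in os.environ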
Additionally, MonkeyPatch can be used as a context manager (via MonkeyPatch.context()). This comes in handy if you want to use it in just one or a couple of tests. In that case you can write:
...
    def test_1(self):
        with MonkeyPatch.context() as mp:
            mp.setenv("TESTVAR1", "This env value is not persistent")
            do_something()
The cleanup in this case is done on exiting the context manager.
To remove an env variable set by the monkeypatch module, it seems you have to call the delenv method.
You can call delenv after you are done setting and testing the env variable with setenv, but I think tearDown would be the right place for that delenv call.
def tearDown(self):
    print("tearDown")
    monkeypatch.delenv('TESTVAR1', raising=False)
I have not tested the code, but it should give you a fair idea of what needs to be done.
EDIT: Improved the code to use setUp & tearDown more effectively. test_2 should raise a KeyError since the env variable has been deleted.
import os
import unittest
import time
from _pytest.monkeypatch import MonkeyPatch


class Test_Monkey_Patch_Env(unittest.TestCase):
    def setUp(self):
        self.monkeypatch = MonkeyPatch()
        print("Setup")

    def test_1(self):
        self.monkeypatch.setenv("TESTVAR1", "This env value is persistent")

    def test_2(self):
        # TESTVAR1 set in test_1 was removed again in tearDown,
        # so this lookup should raise a KeyError.
        print(os.environ["TESTVAR1"])

    def tearDown(self):
        print("tearDown")
        self.monkeypatch.delenv("TESTVAR1", raising=False)


if __name__ == '__main__':
    unittest.main()
Cheers!
Related
Is there a way to automatically set up a clean db/table when running pytest unit test cases along with peewee?
Currently what I'm doing is shown below:
@pytest.fixture(autouse=True)
def before_after(tmpdir):
    """Fixture to execute before and after a test is run"""
    # Before:
    MyTable.create_table(safe=True)
    MyOtherTable.create_table(safe=True)
    # MyTable.truncate_table()
    # MyOtherTable.truncate_table()

    yield  # test

    # After:
    # MyTable.truncate_table()
    # MyOtherTable.truncate_table()
    MyTable.drop_table(safe=True)
    MyOtherTable.drop_table(safe=True)
The table data persists across the tests. So is there any other way to run each test in its own isolated environment other than creating and dropping tables between tests?
You can begin a transaction at the beginning of each testcase and then roll it back after each case is over.
With regular Python unittest, you might do something like:
class BaseTestCase(unittest.TestCase):
    def setUp(self):
        self.txn = db.transaction().__enter__()

    def tearDown(self):
        self.txn.rollback()
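If you are using pytest fixtures rather than unittest, a rough equivalent could be an autouse fixture. This is only a sketch, assuming db is your peewee database object; the fixture name is made up:
import pytest

@pytest.fixture(autouse=True)
def isolated_transaction():
    # Wrap each test in a transaction and roll it back afterwards,
    # so no test leaves data behind for the next one.
    txn = db.transaction().__enter__()
    yield
    txn.rollback()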
I read the documentation at docs.pytest.org
I'm not sure about the meaning of the statement: yield smtp_connection
Can someone please explain what yield does, and whether it's mandatory?
First of all, it's not mandatory!
yield marks the point at which the test body runs, so you can set up your test with a pre-condition before it and a post-condition after it. For this we can use conftest.py:
import pytest

@pytest.fixture
def set_up_pre_and_post_conditions():
    print("Pre condition")
    yield  # the body of the test executes here
    print("Post condition")
Our test, stored for example in test.py:
def test(set_up_pre_and_post_conditions):
    print("Body of test")
So, let's launch it: pytest test.py -v -s
Output:
test.py::test Pre condition
Body of test
PASSEDPost condition
This is not the full functionality of yield, just an example; I hope it helps.
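To connect this back to the yield smtp_connection line from the question: a fixture can also yield an object, and that object is what gets injected into the test; anything after the yield runs as teardown. A minimal sketch along the lines of the example in the pytest documentation (host, port, and test name are illustrative):
import smtplib
import pytest

@pytest.fixture
def smtp_connection():
    connection = smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
    yield connection    # the test receives this object
    connection.close()  # teardown: runs after the test finishes

def test_ehlo(smtp_connection):
    response, _ = smtp_connection.ehlo()
    assert response == 250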
I am currently running a unittest script which successfully passes the various specified tests, but with a nagging ImportWarning message in the console:
...../lib/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
return f(*args, **kwds)
....
----------------------------------------------------------------------
Ran 7 tests in 1.950s
OK
The script is run with this main function:
if __name__ == '__main__':
    unittest.main()
I have read that warnings can be suppressed when the script is called like this:
python -W ignore:ImportWarning -m unittest testscript.py
However, is there a way of specifying this warning filter in the script itself, so that I don't have to pass -W ignore:ImportWarning every time the test script is run?
Thanks in advance.
To programmatically prevent such warnings from showing up, adjust your code as follows:
import warnings

if __name__ == '__main__':
    with warnings.catch_warnings():
        warnings.simplefilter('ignore', category=ImportWarning)
        unittest.main()
Source: https://stackoverflow.com/a/40994600/328469
Update:
@billjoie is certainly correct. If the OP chooses to make answer 52463661 the accepted answer, I am OK with that. I can confirm that the following is effective at suppressing such warning messages at run time using Python versions 2.7.11, 3.4.3, 3.5.4, 3.6.5, and 3.7.1:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import unittest
import warnings


class TestPandasImport(unittest.TestCase):
    def setUp(self):
        warnings.simplefilter('ignore', category=ImportWarning)

    def test_01(self):
        import pandas  # noqa: E402
        self.assertTrue(True)

    def test_02(self):
        import pandas  # noqa: E402
        self.assertFalse(False)


if __name__ == '__main__':
    unittest.main()
However, I think that the OP should consider doing some deeper investigation into the application code targets of the unit tests, and try to identify the specific package import or operation which is causing the actual warning, and then suppress the warning as closely as possible to the location in code where the violation takes place. This will obviate the suppression of warnings throughout the entirety of one's unit test class, which may be inadvertently obscuring warnings from other parts of the program.
Outside the unit test, somewhere in the application code:
with warnings.catch_warnings():
    warnings.simplefilter('ignore', category=ImportWarning)
    # import pandas
    # or_ideally_the_application_code_unit_that_imports_pandas()
It could take a bit of work to isolate the specific spot in the code that is either causing the warning or leveraging third-party software which causes the warning, but the developer will obtain a clearer understanding of the reason for the warning, and this will only improve the overall maintainability of the program.
I had the same problem, and starting my unittest script with a warnings.simplefilter() statement, as described by Nels, did not work for me. According to this source, this is because:
[...] as of Python 3.2, the unittest module was updated to use the warnings module default filter when running tests, and [...] resets to the default filter before each test, meaning that any change you may think you are making scriptwide by using warnings.simplefilter(“ignore”) at the beginning of your script gets overridden in between every test.
This same source recommends renewing the filter inside each test function, either directly or with an elegant decorator. A simpler solution is to define the warnings filter inside unittest's setUp() method, which runs right before each test.
import unittest
import warnings


class TestSomething(unittest.TestCase):
    def setUp(self):
        warnings.simplefilter('ignore', category=ImportWarning)
        # Other initialization stuff here

    def test_a(self):
        # Test assertion here.
        pass


if __name__ == '__main__':
    unittest.main()
I had the same warning in PyCharm for one test when using unittest. This warning disappeared when I stopped trying to import a library during the test (I moved the import to the top, where it's supposed to be). I know the request was for suppression, but this would also make it disappear if it's only happening in a select number of tests.
Solutions using setUp suppress warnings for all methods within the class. If you don't want to suppress them for all methods, you can use a decorator.
From Neural Dump:
import warnings

def ignore_warnings(test_func):
    def do_test(self, *args, **kwargs):
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            test_func(self, *args, **kwargs)
    return do_test
Then you can use it to decorate a single test method in your test class:
class TestClass(unittest.TestCase):
    @ignore_warnings
    def test_do_something_without_warning(self):
        self.assertEqual(whatever)

    def test_something_else_with_warning(self):
        self.assertEqual(whatever)
I have a (python3) package that has completely different behaviour depending on how it's init()ed (perhaps not the best design, but rewriting is not an option). The module can only be init()ed once, a second time gives an error. I want to test this package (both behaviours) using py.test.
Note: the nature of the package makes the two behaviours mutually exclusive, there is no possible reason to ever want both in a singular program.
I have several test_xxx.py modules in my test directory. Each module will init the package in the way it needs (using fixtures). Since py.test starts the Python interpreter once, running all test modules in one py.test run fails.
Monkey-patching the package to allow a second init() is not something I want to do, since there is internal caching etc that might result in unexplained behaviour.
Is it possible to tell py.test to run each test module in a separate Python process (thereby not being influenced by inits in another test module)?
Is there a way to reliably reload a package (including all sub-dependencies, etc)?
Is there another solution (I'm thinking of importing and then unimporting the package in a fixture, but this seems excessive)?
To reload a module, try using reload() from the importlib library.
Example:
from importlib import reload
import some_lib
#do something
reload(some_lib)
Also, launching each test in a new process is viable, but multiprocessed code is kind of painful to debug.
Example
import some_test
from multiprocessing import Manager, Process

# Create a new return value holder, in this case a list
manager = Manager()
return_value = manager.list()

# Create the new process
process = Process(target=some_test.some_function, args=(arg, return_value))

# Start the process
process.start()

# Wait for the process to finish
process.join()

# You can now use your return value as if it were a normal list,
# as long as it was assigned in your subprocess
Delete all your module imports, and also your test imports that import your modules:
import sys

for key in list(sys.modules.keys()):
    if key.startswith("your_package_name") or key.startswith("test"):
        del sys.modules[key]
You can use this as a fixture by configuring a fixture in your conftest.py file with the @pytest.fixture decorator.
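A minimal sketch of what that conftest.py fixture could look like (the package name is a placeholder):
# conftest.py
import sys
import pytest

@pytest.fixture(autouse=True)
def unimport_package():
    yield
    # After each test, drop the package (and the test modules) from the
    # module cache so the next test triggers a fresh import.
    for key in list(sys.modules.keys()):
        if key.startswith("your_package_name") or key.startswith("test"):
            del sys.modules[key]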
I once had a similar problem, quite bad design though...
import sys
import importlib
import pytest


@pytest.fixture()
def module_type1():
    mod = importlib.import_module('example')
    mod._init(10)
    yield mod
    del sys.modules['example']


@pytest.fixture()
def module_type2():
    mod = importlib.import_module('example')
    mod._init(20)
    yield mod
    del sys.modules['example']


def test1(module_type1):
    pass


def test2(module_type2):
    pass
The example/__init__.py had something like this:
def _init(val):
    if 'sample' in globals():
        logger.info(f'example already imported, val{sample}')
    else:
        globals()['sample'] = val
        logger.info(f'importing example with val : {val}')
output:
importing example with val : 10
importing example with val : 20
No clue as to how complex your package is, but if it's just global variables, then this probably helps.
I had the same problem and found three solutions:
1. reload(some_lib)
2. Patch the SUT: since the imported method is a key and value in the SUT's namespace, you can patch it there. For example, if you use f2 of m2 in m1, you can patch m1.f2 instead of m2.f2 (see the sketch below).
3. Import the module and use module.function.
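A minimal sketch of option 2, with hypothetical modules m1 and m2 (m1 does from m2 import f2, so the name has to be patched where it is looked up):
# m2.py
def f2():
    return "real value"

# m1.py
from m2 import f2

def use_f2():
    return f2()

# test_m1.py
from unittest import mock
import m1

def test_use_f2():
    # Patch the name where it is used (m1.f2), not where it is defined (m2.f2)
    with mock.patch("m1.f2", return_value="mocked value"):
        assert m1.use_f2() == "mocked value"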
Print can be mocked in the following way:
import unittest
from unittest import mock


class TestSomething(unittest.TestCase):
    @mock.patch('builtins.print')
    def test_method(self, print_):
        some_other_module.print_something()
However, this means that print can then not be used in the Python debug console (pydev debugger) or in the unit test method itself. This is rather inconvenient.
Is there a way to mock print only in some_other_module instead of in the testing module as well?
A way to sidestep this is to swap the use of print in the test module with some other function which just calls print, which I can do if there turns out to be no better solution.
@michele's "final solution" has an even cleaner alternative which works in my case:
from unittest import TestCase
from unittest.mock import patch

import module_under_test


class MyTestCase(TestCase):
    @patch('module_under_test.print', create=True)
    def test_something(self, print_):
        module_under_test.print_something()
        print_.assert_called_with("print something")
Yes you can! ... but only because you are using Python 3. In Python 3, print is a function, and you can override it without changing its name. To get to the final solution I'll describe it step by step, ending with a flexible and non-intrusive approach.
Instrument Module
The trick is to add a line like this at the top of the module you are going to test:
print = print
Now you can patch just your module's print. I wrote a test case where mock_print_module.py is:
print = print

def print_something():
    print("print something")
And the test module (I'm using autospec=True just to catch errors like a misspelled mock_print.asser_called_with):
from unittest import TestCase
from unittest.mock import patch

import mock_print_module


class MyTestCase(TestCase):
    @patch("mock_print_module.print", autospec=True)
    def test_something(self, mock_print):
        mock_print_module.print_something()
        mock_print.assert_called_with("print something")
I don't want to change my module, I just want to patch print without losing its functionality
You can patch "builtins.print" without losing the print functionality simply by using patch's side_effect attribute:
@patch("builtins.print", autospec=True, side_effect=print)
def test_somethingelse(self, mock_print):
    mock_print_module.print_something()
    mock_print.assert_called_with("print something")
Now you can trace your print calls without losing logging or the pydev debugger. The drawback of this approach is that you have to fight through a lot of noise to find the print calls you are interested in. Moreover, you cannot choose which modules are patched and which are not.
The two approaches don't work together
You cannot use both together: if you use print = print in your module, you save builtins.print into the module's print variable at module load time. Then, when you patch builtins.print, the module still uses the original saved function.
If you want the option of using both, you must wrap the original print rather than just saving a reference to it. One way to implement this is to use the following instead of print = print:
import builtins

print = lambda *args, **kwargs: builtins.print(*args, **kwargs)
The Final Solution
Do we really need to modify the original module in order to patch all the print calls in it? No, we can do it without changing the module under test at all. The only thing we need is to inject a local print function into the module to override the builtins one, and we can do that from the test module instead of the module under test. My example becomes:
from unittest import TestCase
from unittest.mock import patch

import mock_print_module
import builtins

mock_print_module.print = lambda *args, **kwargs: builtins.print(*args, **kwargs)


class MyTestCase(TestCase):
    @patch("mock_print_module.print", autospec=True)
    def test_something(self, mock_print):
        mock_print_module.print_something()
        mock_print.assert_called_with("print something")

    @patch("builtins.print", autospec=True, side_effect=print)
    def test_somethingelse(self, mock_print):
        mock_print_module.print_something()
        mock_print.assert_called_with("print something")
and mock_print_module.py can be the clean original version with just:
def print_something():
    print("print something")