setup/teardown table when using peewee in pytest - python-3.x

Is there a way to automatically setup a clean db/table when running pytest unit test cases along with peewee?
Currently what I'm doing is shown below:
@pytest.fixture(autouse=True)
def before_after(tmpdir):
    """Fixture to execute before and after a test is run"""
    # Before:
    MyTable.create_table(safe=True)
    MyOtherTable.create_table(safe=True)
    # MyTable.truncate_table()
    # MyOtherTable.truncate_table()
    yield  # test
    # After:
    # MyTable.truncate_table()
    # MyOtherTable.truncate_table()
    MyTable.drop_table(safe=True)
    MyOtherTable.drop_table(safe=True)
The table data persists across the tests. So is there any other way to run each test in its own isolated environment other than creating and dropping tables between tests?

You can begin a transaction at the start of each test case and then roll it back after the case is over.
With regular Python unittest, you might do something like:
class BaseTestCase(unittest.TestCase):
    def setUp(self):
        self.txn = db.transaction().__enter__()

    def tearDown(self):
        self.txn.rollback()
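Since the question uses pytest, the same idea can be written as an autouse fixture. The following is only a rough sketch, assuming db is your peewee database object and the tables are created once up front; peewee's atomic() context manager exposes a rollback() method:

import pytest

@pytest.fixture(autouse=True)
def db_transaction():
    # Wrap every test in a transaction ...
    with db.atomic() as txn:
        yield  # run the test
        # ... and undo whatever the test wrote before the block exits.
        txn.rollback()

With something like this in conftest.py, the create_table()/drop_table() calls only need to run once per session instead of once per test.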

Related

Python unittest framework, unfortunately no built-in timeout possibility

I'm using the Python unittest framework to perform unit tests. The Python version I'm using is 3.6, on Windows.
My production code is currently a little unstable (I added some new functionality) and tends to hang up in some internal asynchronous loops. I'm currently working on fixing those hangups.
However, tracking down those bugs is hindered by the corresponding test cases hanging up, too.
I would simply like the corresponding test cases to stop after e.g. 500 ms if they don't run through, and be marked as FAILED, so that all the other test cases can continue.
Unfortunately, the unittest framework does not support timeouts (if only I had known in advance ...). Therefore I'm searching for a workaround. If some package adds that missing functionality to the unittest framework, I'd be willing to try it. What I don't want is for my production code to rely on too many non-standard packages; for the unit tests this would be OK.
I'm a little lost on how to add such functionality to the unit tests.
Therefore I just tried out some code from here: How to limit execution time of a function call?. Since it was said somewhere that threading should not be used to implement timeouts, I tried using multiprocessing, too.
Please note that the solutions proposed in How to specify test timeout for python unittest? do not work either. They are designed for Linux (using SIGALRM).
import multiprocessing
# import threading
import time
import unittest


class InterruptableProcess(multiprocessing.Process):
# class InterruptableThread(threading.Thread):
    def __init__(self, func, *args, **kwargs):
        super().__init__()
        self._func = func
        self._args = args
        self._kwargs = kwargs
        self._result = None

    def run(self):
        self._result = self._func(*self._args, **self._kwargs)

    @property
    def result(self):
        return self._result


class timeout:
    def __init__(self, sec):
        self._sec = sec

    def __call__(self, f):
        def wrapped_f(*args, **kwargs):
            it = InterruptableProcess(f, *args, **kwargs)
            # it = InterruptableThread(f, *args, **kwargs)
            it.start()
            it.join(self._sec)
            if not it.is_alive():
                return it.result
            # it.terminate()
            raise TimeoutError('execution expired')
        return wrapped_f


class MyTests(unittest.TestCase):
    def setUp(self):
        # some initialization
        pass

    def tearDown(self):
        # some cleanup
        pass

    @timeout(0.5)
    def test_XYZ(self):
        # looong running
        self.assertEqual(...)
The code behaves very differently using threads vs. processes.
In the first case it runs through but continues executing the function despite the timeout (which is unwanted). In the second case it complains about unpicklable objects.
In both cases I would like to know how to do proper cleanup, e.g. call the unittest.TestCase.tearDown method on timeout from within the decorator class.

Monkeypatch setenv value persists across test cases in python unittest

I have to set different environment variables for my test cases.
My assumption was that once a test case completes, monkeypatch would remove the env variables from os.environ. But it does not. How do I set and revert the environment variables for each test?
Here is my simplified test case code with the monkeypatch lib:
import os
import unittest
import time

from _pytest.monkeypatch import MonkeyPatch


class Test_Monkey_Patch_Env(unittest.TestCase):
    def setUp(self):
        print("Setup")

    def test_1(self):
        monkeypatch = MonkeyPatch()
        monkeypatch.setenv("TESTVAR1", "This env value is persistent")

    def test_2(self):
        # I expected the env TESTVAR1 set in test_1 using monkeypatch
        # not to persist across test cases. But it does.
        print(os.environ["TESTVAR1"])

    def tearDown(self):
        print("tearDown")


if __name__ == '__main__':
    unittest.main()
Output:
Setup
tearDown
.Setup
This env value is persistent
tearDown
.
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
This expands a bit on the (correct) answer given by @MaNKuR.
The reason it does not work as expected is that MonkeyPatch is not designed to be used this way - in pytest, the monkeypatch fixture is used instead, which does the cleanup on leaving the test scope. To do the cleanup in unittest, you can put it in tearDown, as shown in the other answer - though I would use the less specific undo for this:
from _pytest.monkeypatch import MonkeyPatch


class Test_Monkey_Patch_Env(unittest.TestCase):
    def setUp(self):
        self.monkeypatch = MonkeyPatch()

    def tearDown(self):
        self.monkeypatch.undo()
This reverts any changes made by using the MonkeyPatch instance, not just the specific setenv call shown above, and is therefore more secure.
Additionally, you can get a MonkeyPatch context manager via its context() method. This comes in handy if you want to use it in just one or a couple of tests. In that case you can write:
...
    def test_1(self):
        with MonkeyPatch().context() as mp:
            mp.setenv("TESTVAR1", "This env value is not persistent")
            do_something()
The cleanup in this case is done on exiting the context manager.
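For comparison, in plain pytest (without unittest.TestCase) the built-in monkeypatch fixture does this cleanup automatically when each test ends, which is what the beginning of this answer refers to. A minimal sketch, assuming TESTVAR1 is not otherwise set in the environment:

import os


def test_1(monkeypatch):
    monkeypatch.setenv("TESTVAR1", "only visible inside this test")
    assert os.environ["TESTVAR1"] == "only visible inside this test"


def test_2():
    # The fixture undid the change when test_1 finished.
    assert "TESTVAR1" not in os.environ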
To remove an env variable set via the monkeypatch module, it seems you have to call the delenv method.
You can call delenv after you are done setting and testing the env variable with setenv, but I think tearDown would be the right place to put that delenv call.
def tearDown(self):
    print("tearDown")
    monkeypatch.delenv('TESTVAR1', raising=False)
I have not tested the code, but it should give you a fair idea of what needs to be done.
EDIT: Improved the code to use setUp and tearDown more efficiently. test_2 should raise a KeyError since the env variable has been deleted.
import os
import unittest
import time

from _pytest.monkeypatch import MonkeyPatch


class Test_Monkey_Patch_Env(unittest.TestCase):
    def setUp(self):
        self.monkeypatch = MonkeyPatch()
        print("Setup")

    def test_1(self):
        self.monkeypatch.setenv("TESTVAR1", "This env value is persistent")

    def test_2(self):
        # TESTVAR1 set in test_1 no longer exists here, because tearDown
        # deleted it after test_1 finished.
        print(os.environ["TESTVAR1"])  # Should raise KeyError

    def tearDown(self):
        print("tearDown")
        self.monkeypatch.delenv("TESTVAR1", raising=False)


if __name__ == '__main__':
    unittest.main()
Cheers!

Meaning of yield within fixture in pytest

I read the documentation at docs.pytest.org
I'm not sure about the meaning of the statement: yield smtp_connection
Can someone please explain what yield does, and whether it's mandatory?
First of all, it's not mandatory. The code before the yield runs as setup, the test body executes at the yield point, and the code after the yield runs as teardown. For example, you can give your tests pre- and post-conditions via conftest.py:
import pytest


@pytest.fixture
def set_up_pre_and_post_conditions():
    print("Pre condition")
    yield  # the test itself executes here
    print("Post condition")
Our test, stored for example in test.py:
def test(set_up_pre_and_post_conditions):
    print("Body of test")
So, let's launch it: pytest test.py -v -s
Output:
test.py::test Pre condition
Body of test
PASSEDPost condition
This is not the full functionality of yield, just an example; I hope it's helpful.
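Coming back to the yield smtp_connection line from the question: a fixture can also yield a value, which is what the test receives as its argument, and anything after the yield runs as teardown. A small sketch, using a stand-in object rather than a real SMTP connection:

import pytest


@pytest.fixture
def smtp_connection():
    connection = object()  # stand-in for a real smtplib.SMTP(...) object
    yield connection       # this value is passed to the test
    # Teardown runs here after the test, e.g. connection.close().


def test_connection_exists(smtp_connection):
    assert smtp_connection is not None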

How to suppress ImportWarning in a python unittest script

I am currently running a unittest script which successfully passes the various specified tests, but with a nagging ImportWarning message in the console:
...../lib/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
return f(*args, **kwds)
....
----------------------------------------------------------------------
Ran 7 tests in 1.950s
OK
The script is run with this main function:
if __name__ == '__main__':
    unittest.main()
I have read that warnings can be suppressed when the script is called like this:
python -W ignore:ImportWarning -m unittest testscript.py
However, is there a way of specifying this ignore-warning in the script itself, so that I don't have to pass -W ignore:ImportWarning every time the test script is run?
Thanks in advance.
To programmatically prevent such warnings from showing up, adjust your code as follows:
import warnings

if __name__ == '__main__':
    with warnings.catch_warnings():
        warnings.simplefilter('ignore', category=ImportWarning)
        unittest.main()
Source: https://stackoverflow.com/a/40994600/328469
Update:
@billjoie is certainly correct. If the OP chooses to make answer 52463661 the accepted answer, I am OK with that. I can confirm that the following is effective at suppressing such warning messages at run-time using python versions 2.7.11, 3.4.3, 3.5.4, 3.6.5, and 3.7.1:
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import unittest
import warnings


class TestPandasImport(unittest.TestCase):
    def setUp(self):
        warnings.simplefilter('ignore', category=ImportWarning)

    def test_01(self):
        import pandas  # noqa: E402
        self.assertTrue(True)

    def test_02(self):
        import pandas  # noqa: E402
        self.assertFalse(False)


if __name__ == '__main__':
    unittest.main()
However, I think that the OP should consider doing some deeper investigation into the application code targets of the unit tests, and try to identify the specific package import or operation which is causing the actual warning, and then suppress the warning as closely as possible to the location in code where the violation takes place. This will obviate the suppression of warnings throughout the entirety of one's unit test class, which may be inadvertently obscuring warnings from other parts of the program.
Outside the unit test, somewhere in the application code:
with warnings.catch_warnings():
    warnings.simplefilter('ignore', category=ImportWarning)
    # import pandas
    # or_ideally_the_application_code_unit_that_imports_pandas()
It could take a bit of work to isolate the specific spot in the code that is either causing the warning or leveraging third-party software which causes the warning, but the developer will obtain a clearer understanding of the reason for the warning, and this will only improve the overall maintainability of the program.
I had the same problem, and starting my unittest script with a warnings.simplefilter() statement, as described by Nels, did not work for me. According to this source, this is because:
[...] as of Python 3.2, the unittest module was updated to use the warnings module default filter when running tests, and [...] resets to the default filter before each test, meaning that any change you may think you are making scriptwide by using warnings.simplefilter(“ignore”) at the beginning of your script gets overridden in between every test.
This same source recommends renewing the filter inside each test function, either directly or with an elegant decorator. A simpler solution is to define the warnings filter inside unittest's setUp() method, which runs right before each test.
import unittest
import warnings


class TestSomething(unittest.TestCase):
    def setUp(self):
        warnings.simplefilter('ignore', category=ImportWarning)
        # Other initialization stuff here

    def test_a(self):
        # Test assertion here.
        pass


if __name__ == '__main__':
    unittest.main()
I had the same warning in PyCharm for one test when using unittest. This warning disappeared when I stopped trying to import a library during the test (I moved the import to the top, where it's supposed to be). I know the request was for suppression, but this also makes it disappear if it's only happening in a select number of tests.
Solutions with setUp suppress warnings for all test methods within the class. If you don't want to suppress warnings for all of them, you can use a decorator.
From Neural Dump:
import warnings


def ignore_warnings(test_func):
    def do_test(self, *args, **kwargs):
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            test_func(self, *args, **kwargs)
    return do_test
Then you can use it to decorate a single test method in your test class:
class TestClass(unittest.TestCase):
    @ignore_warnings
    def test_do_something_without_warning(self):
        self.assertEqual(whatever)

    def test_something_else_with_warning(self):
        self.assertEqual(whatever)

restart python (or reload modules) in py.test tests

I have a (python3) package that has completely different behaviour depending on how it's init()ed (perhaps not the best design, but rewriting is not an option). The module can only be init()ed once; a second init() gives an error. I want to test this package (both behaviours) using py.test.
Note: the nature of the package makes the two behaviours mutually exclusive; there is no possible reason to ever want both in a single program.
I have several test_xxx.py modules in my test directory. Each module inits the package in the way it needs (using fixtures). Since py.test starts the Python interpreter once, running all test modules in one py.test run fails.
Monkey-patching the package to allow a second init() is not something I want to do, since there is internal caching etc that might result in unexplained behaviour.
Is it possible to tell py.test to run each test module in a separate Python process (thereby not being influenced by inits in another test module)?
Is there a way to reliably reload a package (including all sub-dependencies, etc)?
Is there another solution (I'm thinking of importing and then unimporting the package in a fixture, but this seems excessive)?
To reload a module, try using reload() from the importlib library.
Example:
from importlib import reload

import some_lib

# do something

reload(some_lib)
Also, launching each test in a new process is viable, but multiprocessed code is kind of painful to debug.
Example:
import some_test
from multiprocessing import Manager, Process

# create a new return value holder, in this case a list
manager = Manager()
return_value = manager.list()

# create a new process
process = Process(target=some_test.some_function, args=(arg, return_value))

# start the process
process.start()

# wait for the process to finish
process.join()

# you can now use your return value as if it were a normal list,
# as long as it was assigned in your subprocess
Delete all of your package's modules, and also the test modules that import them, from sys.modules:
import sys

for key in list(sys.modules.keys()):
    if key.startswith("your_package_name") or key.startswith("test"):
        del sys.modules[key]
You can use this as a fixture by defining it in your conftest.py file with the @pytest.fixture decorator.
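A rough sketch of what that could look like in conftest.py (your_package_name is a placeholder for the real package name):

# conftest.py
import sys

import pytest


@pytest.fixture(autouse=True)
def fresh_package_import():
    yield  # run the test first
    # Afterwards, forget the package so the next test re-imports and
    # re-inits it from scratch.
    for key in list(sys.modules.keys()):
        if key.startswith("your_package_name"):
            del sys.modules[key]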
I once had a similar problem (quite bad design, though):
import importlib
import sys

import pytest


@pytest.fixture()
def module_type1():
    mod = importlib.import_module('example')
    mod._init(10)
    yield mod
    del sys.modules['example']


@pytest.fixture()
def module_type2():
    mod = importlib.import_module('example')
    mod._init(20)
    yield mod
    del sys.modules['example']


def test1(module_type1):
    pass


def test2(module_type2):
    pass
The example/__init__.py had something like this:
import logging

logger = logging.getLogger(__name__)


def _init(val):
    if 'sample' in globals():
        logger.info(f'example already imported, val: {sample}')
    else:
        globals()['sample'] = val
        logger.info(f'importing example with val : {val}')
Output:
importing example with val : 10
importing example with val : 20
No clue as to how complex your package is, but if it's just global variables, then this probably helps.
I had the same problem, and found three solutions:
1. reload(some_lib)
2. Patch the SUT: because the imported function becomes a name in the SUT's own namespace, you can patch it there. For example, if m1 uses f2 from m2, you can patch m1.f2 instead of m2.f2 (see the sketch after this list).
3. Import the module and use module.function instead of importing the function directly.
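A small sketch of the second option, assuming hypothetical modules m1 and m2 where m1 does from m2 import f2 and calls it inside a function use_it():

# test_m1.py
from unittest import mock

import m1


def test_use_it_with_patched_f2():
    # Patch the name where it is looked up (m1.f2), not where it is
    # defined (m2.f2), because m1 holds its own reference to f2.
    with mock.patch("m1.f2", return_value="fake"):
        assert m1.use_it() == "fake"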
