py.test run tests in specific testSuite - parameterized

I'm new to py.test. So far I like what I see and want to integrate it into our CI process.
Currently we use a different kind of parameterization scheme for our tests which I will explain briefly:
instead of parameterizing per-test, we parameterize per class
Say params is a tuple of tuples, each representing a different set of parameters.
For each such tuple we create a different instance of some TestCaseWithParameters, which is a unittest.TestCase class. Something like this:
for test_parameters in params:
    parameterized_test_suite.addTest(
        ParametrizedTestCase.parametrize(TestCaseWithParameters, param=test_parameters))
Each of these instances is injected with self.params and runs all of its test functions with those different params.
This means that if we have hundreds of tuples in params and TestSomethingWithParameters has dozens of tests, there are a lot of tests in total.
My question: How would I go about translating this to py.test?
I've read this article about the pytest_generate_tests hook, but it seems it injects dependency per test function, and I need it per TestCase...
The simplest way would be to tell py.test to run the specific parameterized_test_suite I already create, but I did not find a way to do so...
A different way would be to do a similar dependency-injection at TestCase class level, but I have not found a way to do that either.

You can easily parametrize whole classes using the @pytest.mark.parametrize marker:
import pytest

@pytest.mark.parametrize('n', [0, 1])
class TestFoo:
    def test_42(self, n):
        assert n == 42

    def test_7(self, n):
        assert n == 7
See the documentation on the parametrize marker for details on how to pass in multiple arguments etc., and also have a look at how to apply markers to classes and modules for more information on this.
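If you want to stay closer to your per-class scheme, the pytest_generate_tests hook can also parametrize based on the class rather than on each function individually, via metafunc.cls. A minimal sketch, assuming each test class carries a hypothetical params attribute holding your parameter tuples (the attribute name and values here are made up):

# conftest.py -- sketch, not part of the answer above
def pytest_generate_tests(metafunc):
    cls = metafunc.cls
    if cls is not None and getattr(cls, "params", None):
        # every test function in the class gets one invocation per tuple
        metafunc.parametrize("param", cls.params)

# test_something.py
class TestSomethingWithParameters:
    params = ((1, 2), (3, 4))  # made-up parameter tuples

    def test_sum_is_positive(self, param):
        assert sum(param) > 0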

Related

run python unittest case strictly sequentially?

I would like to run the individual tests (functions) of my Python unittest Class in sequential order and not in parallel. I can tell it is parallel because the first function/test writes a record into the TinyDB and another function/test - which fails - needs to read that new record and make an existence test.
So, how do I enforce the strict sequential test execution?
If that is NOT possible, can I enforce the strict sequential processing in creating multiple tests? (I would dislike to do so, because I would like to have a 1:1 relationship of modules and their test_modules.)
Answer for unittest
Strict execution I could realize by creating a master test py file; I named it run_all_tests.py.
The modules have separate classes, and I trigger them one by one, and likewise the functions.
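A minimal sketch of such a run_all_tests.py, assuming hypothetical test classes TestWrite and TestRead that must run in exactly that order (the modules and test names are made up):

# run_all_tests.py -- sketch; the imported modules and test names are hypothetical
import unittest
from test_write import TestWrite
from test_read import TestRead

def suite():
    s = unittest.TestSuite()
    # tests run in exactly the order they are added here
    s.addTest(TestWrite("test_insert_record"))
    s.addTest(TestRead("test_record_exists"))
    return s

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite())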
Switching to pytest and fixtures
Anyhow, I dislike that shortcoming: there is no sophisticated/declarative way to control the sequence at the function level. Thus I switched to pytest.
First of all I like that there is a command-line option that shows the sequence. That confirms what we are expecting:
pytest --setup-show test_myfunction.py
On top of that, you may apply the @pytest.fixture() decorator to a method that is run beforehand. This does not primarily help us with the sequence; rather, it reminds us to write independent tests, where the @pytest.fixture()-decorated function is passed as an argument to the test function. This way one has a deliberate fixture for a single function. Do NOT mistake that for the setUp() method of unittest: setUp() runs before every single test method, every one of them, whereas setUpClass() runs once before any test function is invoked.
For those who still want declarative order, you find it here: https://pypi.org/project/pytest-order/, https://github.com/pytest-dev/pytest-order
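A minimal sketch of the fixture-as-argument pattern described above (the fixture and test names are made up):

import pytest

@pytest.fixture()
def db_record():
    # runs only for the tests that explicitly request it
    return {"name": "example"}  # placeholder setup

def test_record_exists(db_record):
    # the test receives the fixture's return value as an argument
    assert db_record["name"] == "example"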

How can I get deterministic hash values for class objects?

I have an application running in Python 3.9.4 where I store class objects in sets (along with many other kinds of objects). I'm getting non-deterministic behavior even when PYTHONHASHSEED=0 because class objects get non-deterministic hash codes. I assume that's because class objects' hash codes come from their addresses in memory.
For example, here are two runs of a little test program, where Before and Equation are classes:
print(hash(Before), hash(Equation), hash(int))
304555224 304593057 271715397
print(hash(Before), hash(Equation), hash(int))
326601328 293027788 273337413
How can I get Python to generate deterministic hash values for class objects?
Is there a metaclass or something that I could monkey-patch so that all class objects, even int, get a hash function that I specify?
Hash for classes is deterministic within the same process. Yes, in CPython it is memory based - but then you can't simply "move" a class object to another memory address using Python code.
If you happen to use some serialization/de-serialization transforms with the classes, the de-serialized objects will ordinarily be new objects, distinct from the original ones, and therefore will hash differently.
For the record: I could not reproduce the behavior you state in the question: within the same process, the hashes for the class objects stay the same.
If you are calculating the hashes in different processes, though, they will differ. So, although you don't mention multiprocessing, I assume that is your working case.
Then, indeed, implementing proper __hash__ and __eq__ methods on the metaclass can give you stable, across-process hashing - but you can't do that with built-in classes such as int: those are built in native code and can't be changed on the Python side. On the other hand, even though the hash numbers shown differ for these built-in classes, whatever you are using to serialize/deserialize your classes (that is what Python does for communicating data across processes, even if you don't do any explicit de/serializing) will resolve a built-in class back to the very same object in every process anyway.
That brings us to the point that, while it is straightforward to add __eq__ and __hash__ methods to a metaclass for your classes, it would be better to ensure that deserializing always yields the same object (with the same id). Hash stability, as you put it, could possibly ensure you always get an equal class, but that depends on how you write your code: it is a bit tricky to retrieve the object that is already inside a set when all you can check is containment of another instance that matches it. The most straightforward way is to build an identity dictionary out of the set and then use the value:
my_registry_dict = {element: element for element in my_registry_set}
my_class = my_registry_dict[incoming_class]
With this in mind, we can have a custom metaclass that not only adds __eq__ and __hash__ (you have to pick which attributes of the classes you want to compare for equality; class.__qualname__ can be a simple and functional attribute to use) but also customizes the __new__ method, so that deserializing the same class a second time will always reuse the first class object defined in the current process (i.e. ensuring the "singleton" behavior Python classes enjoy outside corner cases like yours seems to be):
class Meta(type):
    registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if cls not in mcls.registry:
            mcls.registry[cls] = cls
        else:
            # reuse the previously created class
            cls = mcls.registry[cls]
        return cls

    def __hash__(cls):
        # when working with metaclasses, using the name `cls` instead of `self`
        # helps remind us that we are dealing with instances that are
        # actually classes
        return hash(cls.__qualname__)

    def __eq__(cls, other):
        return cls.__qualname__ == other.__qualname__
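A quick sanity check of the behavior this aims at, using a made-up Before class similar to the one in the question (a sketch, not the asker's actual code):

class Before(metaclass=Meta):
    pass

# simulate a "second arrival" of the same class, e.g. after deserialization
Before2 = Meta("Before", (), {})

print(Before2 is Before)               # True: the registry returns the first object
print(hash(Before) == hash("Before"))  # True: the hash depends only on the qualname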

Given a Hypothesis Strategy, can I get the minimal example?

I'd like to get the minimum for a complicated class, for which I have already written a strategy.
Is it possible to ask hypothesis to simply give me the minimum example for a given strategy?
Context here is writing a strategy and a default minimum (for use in @hypothesis.example) - surely the information for the latter is already contained in the former?
import dataclasses

import hypothesis
from hypothesis import strategies


@dataclasses.dataclass
class Foo:
    bar: int
    # Foo has many more attributes which are strategised over...

    @classmethod
    def strategy(cls):
        return strategies.builds(cls, bar=strategies.integers())

    @classmethod
    def minimal(cls):
        # this is the call I am looking for
        return hypothesis.minimal(cls.strategy())
Found the answer.
Use hypothesis.find, whose docstring says:
"Returns the minimal example from the given strategy specifier that matches the predicate function condition."
So we want the following code for the above example:
minimal = hypothesis.find(
    specifier=Foo.strategy(),
    condition=lambda _: True,
)
You're correct that hypothesis.find(s, lambda _: True) is the way to do this.
Context here is writing a strategy and a default minimum (for use in @hypothesis.example()) - surely the information for the latter is already contained in the former?
It's not just contained in it, but generating examples will almost always start by generating the minimal example* - because if that fails, we can stop immediately! So you might not need to do this at all ;-)
(the rare exceptions are strategies with complicated filters, where the attempted-minimal example is rejected by a filter - and we just move on to reasonably-minimal examples rather than shrinking to find the minimal one)
I'd also note that st.builds() has native support for dataclasses, so you can omit the strategies (or pass hypothesis.infer) for any argument where you don't have more specific constraints than "any instance of the type". For example, there's no need to pass bar=st.integers() above!
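A sketch of what that looks like, relying on builds() inferring a strategy for bar from Foo's type annotation:

from hypothesis import find, strategies as st

# builds() infers a strategy for `bar` from the `bar: int` annotation
foo_strategy = st.builds(Foo)

# the minimal example for the strategy; integers shrink towards 0
smallest = find(foo_strategy, lambda _: True)
print(smallest)  # Foo(bar=0)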

Mockito ArgumentCaptor capturing multiple times in multithreaded code

I am trying to create a unit test for my multithreaded code.
My current code snippet is like this:
verify(someObject, times(2)).someMethod(captor.capture());
List<SomeObject> list = captor.getAllValues();
assertThat(list.get(0)).isEqualTo(...
assertThat(list.get(1)).isEqualTo(...
Now someMethod is called in two separate threads, so the order of captured arguments is nondeterministic. I was wondering if there was a way to assert these arguments without any particular order.
Of course I could write a custom Comparator and sort the list beforehand, but I was wondering if there was a simpler way than this.
Thanks!
Simply check that the list contains the elements, independently of the order:
assertThat(list, hasItem(...));
assertThat(list, hasItem(...));

Basic unit testing for Fortran in a locked-down environment

What would be a sensible approach to adding some basic unit tests to a large body of existing (Fortran 90) code that is developed solely on a locked-down system, where there is no opportunity to install any 3rd party framework? I'm pretty much limited to standard Linux tools. At the moment the code base is tested at the full system level using a very limited set of tests, but this is extremely time consuming (multiple days to run), and so is rarely used during development.
Ideally I'm looking to be able to incrementally add targeted testing to key systems, rather than completely overhaul the whole code base in a single attempt.
Take the example module below, and assume an implementation of assert-type macros as detailed in Assert in Fortran:
MODULE foo
CONTAINS
  FUNCTION bar() RESULT (some_output)
    INTEGER :: some_output
    some_output = 0
  END FUNCTION bar
END MODULE foo
A couple of possible methods spring to mind, but there may be technical or administrative challenges to implementing these of which I am not aware:
Separate test module for each module, as below, and have a single main test runner to call each function within each module
MODULE foo_test
CONTAINS
  SUBROUTINE bar_test()
    ! ....
  END SUBROUTINE bar_test
END MODULE foo_test
A similar approach to the above, but with individual executables for each test. The obvious benefit is that a single failure will not terminate all tests, but it may be harder to manage a large set of test executables, and it may require large amounts of extra code.
Use the preprocessor to include main program(s) containing tests within each module, e.g. gfortran can preprocess Fortran 90 with C/C++-style macros (e.g. #define SUBNAME(x) s ## x), and use a build script to automatically build and run the test mains stored between preprocessor delimiters in the main code file.
I have tried using some of the existing Fortran frameworks (as documented in "Why the unit test frameworks in Fortran rely on Ruby instead of Fortran itself?"), but for this particular project there is no possibility of installing additional tools on the system I am using.
In my opinion the assert mechanisms are not the main concern for unit tests of Fortran. As mentioned in the answer you linked, there exist several unit test frameworks for Fortran, like funit and FRUIT.
However, I think, the main issue is the resolution of dependencies. You might have a huge project with many interdependent modules, and your test should cover one of the modules using many others. Thus, you need to find these dependencies and build the unit test accordingly. Everything boils down to compiling executables, and the advantages of assertions are very limited, as you anyway will need to define your tests and do the comparisons yourself.
We are building our Fortran applications with Waf, which comes with a unit testing utility itself. Now, I don't know if this would be possible for you to use, but the only requirement is Python, which should be available on almost any platform. One shortcoming is that the tests rely on a return code, which is not easily obtained from Fortran, at least not in a portable way before Fortran 2008, which recommends providing stop codes in the return code. So I modified the checking for success in our projects: instead of checking the return code, I expect the test to write some string, and check for that in the output:
def summary(bld):
    """
    Get the test results from the last line of output::

        Fortran applications can not return arbitrary return codes in
        a standardized way; instead we use the last line of output to
        decide the outcome of a test: it has to state "PASSED" to count
        as a successful test.
        Otherwise it is considered as a failed test. Non-zero return codes
        that might still happen are also considered as failures.

    Display an execution summary::

        def build(bld):
            bld(features='cxx cxxprogram test', source='main.c', target='app')
            from waflib.extras import utest_results
            bld.add_post_fun(utest_results.summary)
    """
    from waflib import Logs
    import sys
    lst = getattr(bld, 'utest_results', [])
    # Check for the PASSED keyword in the last line of stdout, to
    # decide on the actual success/failure of the test.
    nlst = []
    for (f, code, out, err) in lst:
        ncode = code
        if not code:
            if sys.version_info[0] > 2:
                lines = out.decode('ascii').splitlines()
            else:
                lines = out.splitlines()
            if lines:
                ncode = lines[-1].strip() != 'PASSED'
            else:
                ncode = True
        nlst.append([f, ncode, out, err])
    lst = nlst
    # ...the rest of the function then prints the execution summary from lst via Logs
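Outside Waf, the same PASSED-on-the-last-line convention could be checked with nothing but the standard library; a hypothetical stand-alone sketch (the build/utests directory layout and executable names are made up):

# run_unit_tests.py -- hypothetical runner, no third-party tools required
import glob
import subprocess
import sys

failed = []
for exe in sorted(glob.glob('build/utests/*_test')):
    proc = subprocess.run([exe], capture_output=True, text=True)
    lines = proc.stdout.splitlines()
    ok = proc.returncode == 0 and lines and lines[-1].strip() == 'PASSED'
    print(('PASS ' if ok else 'FAIL ') + exe)
    if not ok:
        failed.append(exe)

sys.exit(1 if failed else 0)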
Also, I add tests by convention: in the build script just a directory has to be provided, and all files within that directory ending with _test.f90 are assumed to be unit tests, which we then try to build and run:
def utests(bld, use, path='utests'):
    """
    Define the unit tests from the programs found in the utests directory.
    """
    from waflib import Options
    for utest in bld.path.ant_glob(path + '/*_test.f90'):
        nprocs = search_procs_in_file(utest.abspath())
        if int(nprocs) > 0:
            bld(
                features = 'fc fcprogram test',
                source = utest,
                use = use,
                ut_exec = [Options.options.mpicmd, '-n', nprocs,
                           utest.change_ext('').abspath()],
                target = utest.change_ext(''))
        else:
            bld(
                features = 'fc fcprogram test',
                source = utest,
                use = use,
                target = utest.change_ext(''))
You can find unit tests defined like that in the Aotus library, where they are utilized in the wscript via:
from waflib.extras import utest_results
utest_results.utests(bld, 'aotus')
It is then also possible to build only subsets from the unit tests, for example by running
./waf build --target=aot_table_test
in Aotus. Our test coverage is a little meagre, but I think this infrastructure actually fares pretty well. A test can simply make use of all the modules in the project, and it can be easily compiled without further ado.
Now I don't know whether this is suitable for you or not, but I would think more about the integration of your tests into your build environment than about the assertion stuff. It definitely is a good idea to have a test routine in each module, which can then be easily called from a test program. I would try to aim for one executable per module you want to test, where each of these modules can of course contain several tests.
