Run Python unittest cases strictly sequentially? - python-3.x

I would like to run the individual tests (functions) of my Python unittest class in sequential order and not in parallel. I can tell they are running in parallel because the first function/test writes a record into the TinyDB, and another function/test, which fails, needs to read that new record and check that it exists.
So, how do I enforce strictly sequential test execution?
If that is not possible, can I enforce strictly sequential processing by creating multiple test files? (I would rather not do so, because I would like to keep a 1:1 relationship between modules and their test_modules.)

Answer for unittest
I could achieve strict execution order by creating a master test .py file; I named it run_all_tests.py.
The modules keep their separate classes, and I trigger them one by one, and likewise their functions.
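A minimal sketch of what such a run_all_tests.py could look like, assuming two hypothetical test modules test_write.py and test_read.py with TestCase classes WriteTests and ReadTests (all module, class, and method names here are illustrative, not from the question):
import unittest

from test_write import WriteTests
from test_read import ReadTests

def suite():
    # A TestSuite runs its entries in exactly the order they were added,
    # so the writing test is guaranteed to finish before the reading test.
    s = unittest.TestSuite()
    s.addTest(WriteTests("test_insert_record"))
    s.addTest(ReadTests("test_record_exists"))
    return s

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite())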
Switching to pytest and fixtures
Anyhow, I dislike that shortcoming: there is no sophisticated/declarative way to control the sequence at the function level. Thus I switched to pytest.
First of all, I like that there is a command-line option that shows the setup sequence; that confirms what we are expecting:
pytest --setup-show test_myfunction.py
On top of that, you may apply the @pytest.fixture() decorator to a method that runs beforehand. This does NOT necessarily help us with the sequence in the first place. Rather, it reminds us to write independent tests, where one uses the @pytest.fixture()-annotated method as an argument to the test function. This way one has a deliberate fixture for a single function. Do NOT mistake that for the setUp() method of unittest: setUp() runs before every single test method. Every one. And setUpClass() runs once before any test function is invoked.
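As a rough illustration of such an independent test, here is a minimal sketch; the TinyDB usage and all names (db_with_record, test_record_exists, the "alice" record) are hypothetical and not taken from the question:
import pytest
from tinydb import TinyDB, Query

@pytest.fixture()
def db_with_record(tmp_path):
    # tmp_path is pytest's built-in temporary-directory fixture.
    db = TinyDB(str(tmp_path / "test.json"))
    db.insert({"name": "alice"})
    yield db     # the test runs at this point
    db.close()   # teardown after the test finishes

def test_record_exists(db_with_record):
    # The test requests its own setup explicitly, so it no longer depends
    # on another test having written the record first.
    assert db_with_record.search(Query().name == "alice")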
For those who still want a declarative order, you can find it here: https://pypi.org/project/pytest-order/, https://github.com/pytest-dev/pytest-order

Related

Why do we need to write lambda step definitions inside the constructor in Cucumber JVM 8?

In the new Cucumber JVM 8 I see lots of examples of hooks and step definitions using lambdas, but they are all written inside a constructor. Is there any reason we need to write them inside the constructor, or can we write step definitions and hooks using lambda expressions outside the constructor?
When you write a step definition, it has to be registered in the LambdaGlueRegistry. You can find the details in the io.cucumber.java8.En default implementations.
Hence you have to execute that code somehow. The simplest way is to execute it from the constructor, since Cucumber instantiates all classes under the glued packages on each scenario run.
Theoretically you can use the same code to register the definitions at any other point. The only thing you have to make sure of is that the registry has been initialized and that your code is reachable from the Cucumber entry point.

How can I compute the number of functions annotated as #[test] in a crate at compile time?

Background
I am writing integration tests using the guidelines in the Rust Book.
A requirement is to be able to run setup and teardown code at two levels:
Test level: Before and after each test.
Crate level: Before and after all tests in the crate.
The basic idea is as follows:
I wrote a TestRunner class.
Each integration test file (crate) declares a lazy-static singleton instance.
Each #[test]-annotated function passes a test function to the run_test() method of the TestRunner.
With this design it is straightforward to implement Test level setup and teardown. The run_test() method handles that.
It is also no problem to implement the setup part of the Crate level, because the TestRunner knows when run_test() is called for the first time.
The Problem
The remaining problem is how to get the TestRunner to execute the Crate level teardown after the last test has run.
As tests may run in any order (or even in parallel), the TestRunner cannot know when it is running the last test.
What I Have Tried
I used the ctor crate and the #[dtor] annotation to mark a function as one that will run before the process ends. This function calls the Crate level teardown function.
This is an unsatisfactory solution because of this issue which documents the limitations of what can be done in a dtor function.
A Step in the Right Direction
I propose to pass a test_count argument to the TestRunner's constructor function.
As the TestRunner then knows how many test functions there are, it can call the Crate level teardown function after the last one completes. (There are some thread-safety issues to handle, but they are manageable.)
The Missing Link
Clearly, the above approach is error-prone, as it depends on the developer updating the test_count argument every time she adds or removes a test or marks one as ignored.
The Remaining Problem
I would therefore like to be able to detect the number of tests in the crate at compile time without any manual intervention by the developer.
I am not familiar enough with Rust macros to write such a macro myself, but I assume it is possible.
Maybe it is even possible to use the test crate object model for this (See https://doc.rust-lang.org/test/struct.TestDesc.html)
Can anyone suggest a clean way to detect (at compile time) the number of tests that will run in the crate so I can pass it to the constructor of the TestRunner?

Why is there no assertAll in Python's unittest?

With some frequency, I find myself using assertTrue with all. It seems like it would be nicer to get an element-wise report of failures, which one might expect from an assertAll method. However, this (or an equivalent) does not seem to be in unittest. Has it just not been implemented, or has it been ruled out (say, because one may simply call assertTrue in a loop instead)?
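For illustration, a minimal sketch of the pattern described above, alongside unittest's subTest, which does report each failing element individually; the values list is made up for the example:
import unittest

class AllVersusElementwise(unittest.TestCase):
    values = [2, 4, 5, 8]   # made-up data; 5 violates the "even" expectation

    def test_all_in_one(self):
        # One combined assertion: the failure report does not say which element broke it.
        self.assertTrue(all(v % 2 == 0 for v in self.values))

    def test_elementwise(self):
        # subTest keeps running after a failure and reports every offending element.
        for v in self.values:
            with self.subTest(v=v):
                self.assertEqual(v % 2, 0)

if __name__ == "__main__":
    unittest.main()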

Mockito ArgumentCaptor capturing multiple times in multithreaded code

I am trying to create a unit test for my multithreaded code.
My current code snippet is like this:
verify(someObject, times(2)).someMethod(captor.capture());
List<SomeObject> list = captor.getAllValues();
assertThat(list.get(0)).isEqualTo(...
assertThat(list.get(1)).isEqualTo(...
Now, someMethod is called from two separate threads, so the order of the captured arguments is nondeterministic. I was wondering whether there is a way to assert these arguments without relying on any particular order.
Of course I could write a custom Comparator and sort the list beforehand, but I was wondering if there was a simpler way than this.
Thanks!
Simply check that the list contains the expected elements, independently of their order:
assertThat(list, hasItem(...));
assertThat(list, hasItem(...));

Basic unit testing for Fortran in a locked-down environment

What would be a sensible approach to adding some basic unit tests to a large body of existing (Fortran 90) code that is developed solely on a locked-down system, where there is no opportunity to install any 3rd-party framework? I'm pretty much limited to standard Linux tools. At the moment the code base is tested at full-system level using a very limited set of tests, but this is extremely time consuming (multiple days to run), and so it is rarely used during development.
Ideally I am looking to incrementally add targeted testing to key systems, rather than completely overhauling the whole code base in a single attempt.
Taking the example module below, and assuming an implementation of assert-type macros as detailed in Assert in Fortran:
MODULE foo
CONTAINS
  FUNCTION bar() RESULT (some_output)
    INTEGER :: some_output
    some_output = 0
  END FUNCTION bar
END MODULE foo
A couple of possible methods spring to mind, but there may be technical or administrative challenges to implementing these of which I am not aware:
Separate test module for each module, as below, and have a single main test runner to call each function within each module
MODULE foo_test
CONTAINS
  SUBROUTINE bar_test()
    ! ....
  END SUBROUTINE bar_test
END MODULE foo_test
A similar approach to the above, but with individual executables for each test. The obvious benefit is that a single failure will not terminate all tests, but it may be harder to manage a large set of test executables, and it may require a large amount of extra code.
Use the preprocessor to include main program(s) containing the tests within each module, e.g. in gfortran Fortran 90 with C/C++-style macros (e.g. #define SUBNAME(x) s ## x), and use a build script to automatically build and run the test mains stored between preprocessor delimiters in the main code file.
I have tried using some of the existing Fortran frameworks (as documented in "Why do the unit test frameworks in Fortran rely on Ruby instead of Fortran itself?"), but for this particular project there is no possibility of installing additional tools on the system I am using.
In my opinion the assert mechanisms are not the main concern for unit tests of Fortran. As mentioned in the answer you linked, there exist several unit test frameworks for Fortran, like funit and FRUIT.
However, I think the main issue is the resolution of dependencies. You might have a huge project with many interdependent modules, and your test should cover one of the modules, which uses many others. Thus, you need to find these dependencies and build the unit test accordingly. Everything boils down to compiling executables, and the advantages of assertions are very limited, as you will need to define your tests and do the comparisons yourself anyway.
We are building our Fortran applications with Waf, which comes with a unit-testing utility itself. Now, I don't know whether this would be possible for you to use, but its only requirement is Python, which should be available on almost any platform. One shortcoming is that the tests rely on a return code, which is not easily obtained from Fortran in a portable way, at least not before Fortran 2008, which recommends providing STOP codes as the return code. So I modified the check for success in our projects: instead of checking the return code, I expect the test to write some string, and I check for that in the output:
def summary(bld):
    """
    Get the test results from the last line of output::

        Fortran applications can not return arbitrary return codes in
        a standardized way, instead we use the last line of output to
        decide the outcome of a test: It has to state "PASSED" to count
        as a successful test.
        Otherwise it is considered as a failed test. Non-zero return codes
        that might still happen are also considered as failures.

    Display an execution summary::

        def build(bld):
            bld(features='cxx cxxprogram test', source='main.c', target='app')
            from waflib.extras import utest_results
            bld.add_post_fun(utest_results.summary)
    """
    from waflib import Logs
    import sys
    lst = getattr(bld, 'utest_results', [])
    # Check for the PASSED keyword in the last line of stdout, to
    # decide on the actual success/failure of the test.
    nlst = []
    for (f, code, out, err) in lst:
        ncode = code
        if not code:
            if sys.version_info[0] > 2:
                lines = out.decode('ascii').splitlines()
            else:
                lines = out.splitlines()
            if lines:
                ncode = lines[-1].strip() != 'PASSED'
            else:
                ncode = True
        nlst.append([f, ncode, out, err])
    lst = nlst
    # Print the execution summary based on the adjusted result codes
    # (mirrors the stock Waf unit-test summary).
    if lst:
        Logs.pprint('CYAN', 'execution summary')
        total = len(lst)
        tfail = len([x for x in lst if x[1]])
        Logs.pprint('CYAN', '  tests that pass %d/%d' % (total - tfail, total))
        for (f, code, out, err) in lst:
            if not code:
                Logs.pprint('CYAN', '    %s' % f)
        Logs.pprint('CYAN', '  tests that fail %d/%d' % (tfail, total))
        for (f, code, out, err) in lst:
            if code:
                Logs.pprint('CYAN', '    %s' % f)
Also, I add tests by convention: in the build script just a directory has to be provided, and all files within that directory ending in _test.f90 are assumed to be unit tests, which we then try to build and run:
def utests(bld, use, path='utests'):
    """
    Define the unit tests from the programs found in the utests directory.
    """
    from waflib import Options
    for utest in bld.path.ant_glob(path + '/*_test.f90'):
        nprocs = search_procs_in_file(utest.abspath())
        if int(nprocs) > 0:
            bld(
                features = 'fc fcprogram test',
                source = utest,
                use = use,
                ut_exec = [Options.options.mpicmd, '-n', nprocs,
                           utest.change_ext('').abspath()],
                target = utest.change_ext(''))
        else:
            bld(
                features = 'fc fcprogram test',
                source = utest,
                use = use,
                target = utest.change_ext(''))
You can find unit tests defined like that in the Aotus library, where they are utilized in the wscript via:
from waflib.extras import utest_results
utest_results.utests(bld, 'aotus')
It is then also possible to build only subsets of the unit tests, for example by running
./waf build --target=aot_table_test
in Aotus. Our test coverage is a little meagre, but I think this infrastructure actually fares pretty well. A test can simply make use of all the modules in the project, and it can easily be compiled without further ado.
Now, I don't know whether this is suitable for you or not, but I would think more about the integration of your tests into your build environment than about the assertion stuff. It definitely is a good idea to have a test routine in each module, which can then easily be called from a test program. I would try to aim for one executable per module you want to test, where each of these can of course contain several tests.
