Here is a link to a project and output that you can use to reproduce the problem I describe below.
I'm using coverage with tox against multiple versions of python. My tox.ini file looks something like this:
[tox]
envlist =
    py27
    py34

[testenv]
deps =
    coverage
commands =
    coverage run --source=modules/ -m pytest
    coverage report -m
My problem is that coverage will run using only one version of python (in my case, py27), not both py27 and py34. This is a problem whenever I have code execution dependent on the python version, e.g.:
def add(a, b):
    import sys
    if sys.version.startswith('2.7'):
        print('2.7')
    if sys.version.startswith('3'):
        print('3')
    return a + b
Running coverage against the above code incorrectly reports that line 6 ("print('3')") is "Missing" for both py27 and py34. It should only be Missing for py27.
I know why this is happening: coverage is installed on my base OS (which uses python2.7). Thus, when tox is run, it notices that coverage is already installed and inherits coverage from the base OS rather than installing it in the virtualenv it creates.
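(As a quick sanity check, you can ask each env's interpreter where it imports coverage from; the paths below assume tox's default .tox/<envname> layout:

.tox/py27/bin/python -c "import coverage; print(coverage.__file__)"
.tox/py34/bin/python -c "import coverage; print(coverage.__file__)"

If the py34 check fails or points at a system site-packages directory rather than the virtualenv, that confirms coverage was never installed into the env.)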
This is fine and dandy for py27, but causes incorrect results in the coverage report for py34. I have a hacky, temporary work-around: I require a slightly earlier version of coverage (relative to the one installed on my base OS) so that tox will be forced to install a separate copy of coverage in the virtualenv. E.g.
[testenv]
deps =
    coverage==4.0.2
    pytest==2.9.0
    py==1.4.30
I don't like this workaround, but it's the best I've found for now. Any suggestions on a way to force tox to install the current version of coverage in its virtualenvs, even when I already have it installed on my base OS?
I came upon this problem today, but couldn't find an easy answer. So, for future reference, here is the solution that I came up with.
Create an envlist that contains each version of Python that will be tested and a custom env for cov.
For all versions of Python, set the COVERAGE_FILE environment variable to store the .coverage file in {envdir}.
For the cov env I use two commands.
coverage combine that combines the reports, and
coverage html to generate the report and, if necessary, fail the test.
Create a .coveragerc file that contains a [paths] section listing the source= locations.
The first line is where the actual source code is found.
The subsequent lines are the subpaths that will be collapsed onto it by coverage combine.
tox.ini:
[tox]
envlist=py27,py36,py35,py34,py33,cov

[testenv]
deps=
    pytest
    pytest-cov
    pytest-xdist
setenv=
    py{27,36,35,34,33}: COVERAGE_FILE={envdir}/.coverage
commands=
    py{27,36,35,34,33}: python -m pytest --cov=my_project --cov-report=term-missing --no-cov-on-fail
    cov: /usr/bin/env bash -c '{envpython} -m coverage combine {toxworkdir}/py*/.coverage'
    cov: coverage html --fail-under=85
.coveragerc:
[paths]
source=
    src/
    .tox/py*/lib/python*/site-packages/
The most peculiar part of the configuration is the invocation of coverage combine. Here's a breakdown of the command:
tox does not perform shell glob expansion on {toxworkdir}/py*/.coverage, so we need to invoke a shell (bash -c) to get the necessary expansion.
If one were so inclined, one could type out all the paths individually and skip these hoops (see the sketch after this list), but that would mean maintaining an explicit .coverage path for every pyNN env.
/usr/bin/env bash -c '...' ensures that we get the correct version of bash. Using the full path to env avoids the need for setting whitelist_externals.
'{envpython} -m coverage ...' ensures that we invoke the correct python and coverage for the cov env.
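For reference, the hand-maintained alternative mentioned above might look roughly like this (one explicit path per env, mirroring the envlist; every new pyNN env would have to be added by hand):

commands=
    cov: coverage combine {toxworkdir}/py27/.coverage {toxworkdir}/py36/.coverage {toxworkdir}/py35/.coverage {toxworkdir}/py34/.coverage {toxworkdir}/py33/.coverage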
NOTE: The unfortunate downside of this solution is that the cov env depends on the prior invocation of the py{27,36,35,34,33} envs, which has some undesirable side effects.
My suggestion would be to invoke cov only as part of a full tox run.
Never invoke tox -e cov on its own, because either:
it will likely fail due to a missing .coverage file, or
it could give bizarre results (combining data from differing test runs).
If you must invoke it as a subset (tox -epy27,py36,cov), then wipe out the .tox directory first (rm -rf .tox) to avoid the missing .coverage file problem.
I don't understand why tox wouldn't install coverage in each virtualenv properly. You should get two different coverage reports, one for py27 and one for py34. A nicer option might be to produce one combined report: use coverage run -p to record separate data for each run, and then coverage combine to combine them before reporting.
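A minimal sketch of that combined-report approach, assuming the modules/ layout from the question and that tox runs all commands from the project root (so the per-run .coverage.* data files land in the same directory):

[tox]
envlist = py27,py34,report

[testenv]
deps =
    coverage
    pytest
commands =
    coverage run -p --source=modules/ -m pytest

[testenv:report]
deps =
    coverage
commands =
    coverage combine
    coverage report -m

The report env name is arbitrary; coverage combine folds the .coverage.* files into a single .coverage before coverage report runs.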
The question: Is there a way in autotools to build my code and unit tests without running the unit tests?
I have a code base that uses autotools; running make check compiles the code and runs the unit tests. I have a portable Singularity container in which I want to build and test the code on a Slurm cluster. I am able to do something like
./configure MPI_LAUNCHER="srun --mpi=pmi2"
singularity exec -B ${PWD} container.sif envscript.sh "make check"
which will run an environment setup script (envscript.sh) and build the code. When it gets to the unit tests, it hangs. I think this is because it's trying to run srun --mpi=pmi2 inside the container and not on the host. Is there a way to get this to work with this setup? Can I build the library and then just build the unit tests without running them, and then run the tests in a second step? I imagine something like this:
./configure MPI_LAUNCHER="srun --mpi=pmi2 singularity exec -B ${PWD} container.sif envscript.sh"
singularity exec -B ${PWD} container.sif envscript.sh "make buildtests"
make check
I don't even think this would work, though, because our tests are set up with -n for the number of cores for each test, like this:
mpirun -n test_cores ./test.sh
So subbing in the srun singularity command would put the -n after singularity. If anyone has any idea, please let me know.
The question: Is there a way in autotools to build my code and unit tests without running the unit tests?
None of the standard makefile targets provided by Automake cover this.
In fact, the behavior of not building certain targets until make check is specifically requested by the Makefile.am author: designating those targets in a check_PROGRAMS, check_LIBRARIES, etc. variable (and not anywhere else) has exactly that effect. If you change each check_FOO to noinst_FOO, then all the targets named by those variables should be built by make / make all. Of course, if the build system already uses noinst_ variables for other purposes, then you'll need to perform some merging.
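To make the distinction concrete, here is a tiny illustrative Makefile.am fragment (all names are made up):

# Built and installed by plain 'make' / 'make install':
bin_PROGRAMS = myprog
myprog_SOURCES = main.c

# Built only when 'make check' runs, and never installed:
check_PROGRAMS = test_foo
test_foo_SOURCES = test_foo.c
TESTS = test_foo

# Listing test_foo here instead would make plain 'make' build it,
# while still keeping it out of 'make install':
# noinst_PROGRAMS = test_foo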
BUT YOU PROBABLY DON'T WANT TO DO THAT.
Targets designated (only) in check_FOO or noinst_FOO variables are not installed to the system by make install, and often they depend on data and layout provided by the build environment. If you're not going to run them in the build environment, then you should plan not to run them at all.
Additionally, if you're performing your build inside the container because the container is the target execution environment then it is a huge red flag that the tests are not running successfully inside the container. There is every reason to think that the misbehavior of the tests will also manifest when you try to run the installed software for your intended purposes. That's pretty much the point of automated testing.
On the other hand, if you're building inside the container for some kind of build environment isolation purpose, then successful testing outside the container combined with incorrect behavior of the tests inside would indicate at minimum that the container does not provide an environment adequately matched to the target execution environment. That should undercut your confidence in the test results obtained outside. Validation tests intended to run against the installed software are a thing, to be sure, but they are a different thing than build-time tests.
I need to install a Python plugin (a simple Python file) before starting tests with pytest. I have used entry_points in setup.py. My problem is a bit complex, so let's work through an example first and come back to the actual problem later.
There are two packages: one is core and the other is mypackage.
core provides the functionality to add a plugin with the group name 'my.plugin'.
core package logic
from importlib_metadata import entry_points

def plugin_method(some_data):
    plugins = entry_points()['my.plugin']
    loaded_plugins = []
    for p in plugins:
        loaded_plugins.append(p.load())
    # Does some processing on the data and decides which method to call from the plugin
    # call the chosen plugin method
    return result
mypackage logic
setup.py
setup(
    ...
    entry_points={'my.plugin': 'plugin1= plugin1.logic'},
    ...
)
logic.py
def method1(method_data):
    print('method1 called')
    return 1

def method2(method_data):
    print('method2 called')
    return 2
main.py
def method_uses_plugin():
    # create data
    plugin_method(data)  # plugin_method comes from the core package
The plugin works fine. :)
Problem
I have written a test case for the method_uses_plugin method. It works fine if I have installed mypackage on my machine, but it fails if the package is not installed (as in the Jenkins pipeline 🙁).
We don't usually install the package to run test cases because test cases should use source code directly.
We might need to do something with pytest to register the plugin in entry_points. I have tried many links but nothing worked.
My use case is a bit complex, but a similar question can be found here.
There are two use cases for running the tests against the actual source code.
On your local machine
If you want to test the source code while working, you can simply install your package in editable mode with the command:
pip install -e .
Documentation of -e from the man page:
-e,--editable <path/url>
Install a project in editable mode (i.e. setuptools "develop mode") from a local project path or a VCS url.
This will link the package to the location of the code (.), meaning that any change made to the source code will be reflected in the installed package.
In Continuous Integration (CI)
As your CI is running in a Docker container, you can simply copy the source code into it, install it with pip install ., and finally run pytest.
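In other words, the CI step can boil down to something like this (run from the repository root inside the container; exact commands depend on your pipeline):

pip install .      # installs mypackage, which registers its entry_points
python -m pytest   # the 'my.plugin' group is now discoverable via entry_points()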
If all else fails you can try to convert your code into an executable and use batch commands to run pip install for as many packages as you need and then run your program. I believe in Jenkins you can run batch files as an administrator.
Invoke pip install from batch file
Run Batch file as an administrator in Jenkins
I have certain files in a directory named benchmarks and I want to get code coverage by running these source files.
I have tried using the --source flag in the following ways, but it doesn't work.
coverage3 run --source=benchmarks
coverage3 run --source=benchmarks/
On running, I always get Nothing to do.
Thanks
coverage run works like python: if you would run a file with python myprog.py, then you can use coverage run myprog.py.
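So, assuming the benchmarks directory contains ordinary runnable scripts (the file name below is just a placeholder), the invocation would look like:

coverage3 run --source=benchmarks benchmarks/some_benchmark.py
coverage3 report -m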
My Python 3 project is organized as follows:
/myproject
    /.venv -> stores virtual environment
    /src
        /common
            __init__.py
            utility.py
        /tests
            /common
                __init__.py
                test_utility.py --> tests utility.py
            __init__.py
        __init__.py
    requirements.txt
(The project has many more files and tests, which I've elided for brevity.)
I have this set up in IntelliJ IDEA 2017.1.4. Generally speaking, everything works fine. I have src set up as the content root and a virtual environment pointed to use the python libraries in .venv, as configured in requirements.txt. When I right click on test_utility.py in the IDE and select 'Run Unittests...' it executes and runs all the tests, with successes and failures as you'd expect. This also happens if I right click on common and select 'Run Unittests...'. However, if I right click on tests and do the same all tests report:
ImportError: No module named 'common.utility'
The test runner reports it is executing the following:
Launching unittests with arguments python -m unittest discover -s /Users/nford/repos/myproject/src/tests -t /Users/nford/repos/myproject/src/tests in /Users/nford/repos/myproject/src/tests
If I run this in the command line (with the virtual environment on) it also fails. However, if I run this:
python -m unittest discover -s /Users/nford/repos/myproject/src/tests -t /Users/nford/repos/myproject/src
It succeeds. I don't see a way to tell the IDE to do that, however. Nor does it seem necessary for the other test runs. For instance, this works on the command line:
python -m unittest discover -s /Users/nford/repos/myproject/src/tests/common -t /Users/nford/repos/myproject/src/tests/common
It is only at this top level that the problem appears, meaning that I can't use the IDE to run a complete test run.
I have never built software from source, so forgive my complete ignorance on this subject. I was trying to determine what the difference would be between installing Node.js from the OS X installer pkg vs. unzipping the 64-bit binaries file (from the Node.js downloads page) and moving it to /usr/local/ (then making sure permissions are correct, of course). Below is the output from running a diff between the unzipped binaries directory path [binaries] and the installation path [install].
Can someone please explain what the test files are, as well as the other differences in doing what I described above? Using the output to explain would be helpful to me.
Only in [binaries]: ChangeLog
Only in [binaries]: LICENSE
Only in [binaries]: README.md
Binary files [binaries]/bin/node and [install]/bin/node differ
diff -r [binaries]/bin/npm [install]/bin/npm
1,2c1
< #!/bin/sh
< // 2>/dev/null; exec "`dirname "$0"`/node" "$0" "$@"
---
> #!/usr/bin/env node
diff -r [binaries]/include/node/config.gypi [install]/include/node/config.gypi
10c10
< 'node_prefix': '/',
---
> 'node_prefix': '',
diff -r [binaries]/lib/node_modules/npm/bin/npm-cli.js [install]/lib/node_modules/npm/bin/npm-cli.js
1,2c1
< #!/bin/sh
< // 2>/dev/null; exec "`dirname "$0"`/node" "$0" "$@"
---
> #!/usr/bin/env node
Only in [install]/lib/node_modules/npm/node_modules/ansicolors: test
Only in [install]/lib/node_modules/npm/node_modules/ansistyles: test
Only in [install]/lib/node_modules/npm/node_modules/block-stream: test
Only in [install]/lib/node_modules/npm/node_modules/char-spinner: test
Only in [install]/lib/node_modules/npm/node_modules/child-process-close: test
Only in [install]/lib/node_modules/npm/node_modules/chmodr: test
Only in [install]/lib/node_modules/npm/node_modules/cmd-shim: test
Only in [install]/lib/node_modules/npm/node_modules/columnify/node_modules/wcwidth: test
Only in [install]/lib/node_modules/npm/node_modules/fstream-npm/node_modules/fstream-ignore: test
Only in [install]/lib/node_modules/npm/node_modules/github-url-from-username-repo: test
Only in [install]/lib/node_modules/npm/node_modules/glob: test
Only in [install]/lib/node_modules/npm/node_modules/graceful-fs: test
Only in [install]/lib/node_modules/npm/node_modules/ini: test
Only in [install]/lib/node_modules/npm/node_modules/init-package-json/node_modules/promzard: test
Only in [install]/lib/node_modules/npm/node_modules/init-package-json: test
Only in [install]/lib/node_modules/npm/node_modules/lockfile: test
Only in [install]/lib/node_modules/npm/node_modules/lru-cache: test
Only in [install]/lib/node_modules/npm/node_modules/minimatch/node_modules/sigmund: test
Only in [install]/lib/node_modules/npm/node_modules/minimatch: test
Only in [install]/lib/node_modules/npm/node_modules/mkdirp/node_modules/minimist: test
Only in [install]/lib/node_modules/npm/node_modules/mkdirp: test
Only in [install]/lib/node_modules/npm/node_modules/nopt: test
Only in [install]/lib/node_modules/npm/node_modules/npm-install-checks: test
Only in [install]/lib/node_modules/npm/node_modules/npm-registry-client: test
Only in [install]/lib/node_modules/npm/node_modules/npm-user-validate: test
Only in [install]/lib/node_modules/npm/node_modules/npmconf/node_modules/config-chain/node_modules/proto-list: test
Only in [install]/lib/node_modules/npm/node_modules/npmconf/node_modules/config-chain: test
Only in [install]/lib/node_modules/npm/node_modules/npmconf: test
Only in [install]/lib/node_modules/npm/node_modules/npmlog: test
Only in [install]/lib/node_modules/npm/node_modules/once: test
Only in [install]/lib/node_modules/npm/node_modules/osenv: test
Only in [install]/lib/node_modules/npm/node_modules/read/node_modules/mute-stream: test
Only in [install]/lib/node_modules/npm/node_modules/read: test
Only in [install]/lib/node_modules/npm/node_modules/read-installed: test
Only in [install]/lib/node_modules/npm/node_modules/read-package-json/node_modules/normalize-package-data: test
Only in [install]/lib/node_modules/npm/node_modules/read-package-json: test
Only in [install]/lib/node_modules/npm/node_modules/request/node_modules/bl: test
Only in [install]/lib/node_modules/npm/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream: test
Only in [install]/lib/node_modules/npm/node_modules/request/node_modules/hawk/node_modules/boom: test
Only in [install]/lib/node_modules/npm/node_modules/request/node_modules/hawk/node_modules/cryptiles: test
Only in [install]/lib/node_modules/npm/node_modules/request/node_modules/hawk/node_modules/hoek: test
Only in [install]/lib/node_modules/npm/node_modules/request/node_modules/hawk/node_modules/sntp: test
Only in [install]/lib/node_modules/npm/node_modules/request/node_modules/hawk: test
Only in [install]/lib/node_modules/npm/node_modules/request/node_modules/node-uuid: test
Only in [install]/lib/node_modules/npm/node_modules/request/node_modules/qs: test
Only in [install]/lib/node_modules/npm/node_modules/retry: test
Only in [install]/lib/node_modules/npm/node_modules/rimraf: test
Only in [install]/lib/node_modules/npm/node_modules/semver: test
Only in [install]/lib/node_modules/npm/node_modules/tar: test
Only in [install]/lib/node_modules/npm/node_modules/text-table: test
Only in [install]/lib/node_modules/npm: test
General differences
Since pre-built binaries can be tested before being deployed (zipped), they don't have to include the tests in the zip so that you can test them again after download.
The OS X (universal) installer will do various checks and, depending on your system setup, will compile/download/install different files.
After those steps it will verify the installed files using the test files, which basically contain a bit of code that tells node how to test the actual module.
The OS X binaries, however, have basically already been "installed" (on a build machine) and then just zipped up;
something built on OS X will work on any OS X system with a similar build.
If you compile/build the code yourself, however, testing it can be quite useful for detecting potential build errors.
The other small changes would have to do with the (slightly) different way the application was compiled/built on your system, in comparison to their build system.
"source"
Binary differences
The binary differences seem to be only the declaration (in the binaries file) that it's a shell script, followed by a commented-out line.
The node_prefix value seems to be used only by the installer, according to this GitHub issue comment.