How to use coverage run --source={dir_name} on Ubuntu 14.04

I have some source files in a directory named benchmarks and I want to get code coverage when running them.
I have tried passing the --source flag in the following ways, but neither works:
coverage3 run --source=benchmarks
coverage3 run --source=benchmarks/
Either way, the only output I get is Nothing to do.
Thanks

coverage run works like python: if you would run a file with python myprog.py, then you can use coverage run myprog.py. The --source option only limits which files are measured; without a script or module to execute, coverage has nothing to run, hence Nothing to do.
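A sketch of a working invocation (the entry-point names run_bench.py and benchmarks.run_all are hypothetical; substitute whatever actually drives your benchmarks):

```shell
# Execute a script while restricting measurement to the benchmarks/ directory
coverage3 run --source=benchmarks run_bench.py

# ...or execute a module instead of a file
coverage3 run --source=benchmarks -m benchmarks.run_all

# then report on what was measured
coverage3 report -m
```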

Autotools build code and unit tests in a singularity container

The question: Is there a way in autotools to build my code and unit tests without running the unit tests?
I have a code base that uses autotools; running make check compiles the code and runs the unit tests. I have a portable Singularity container in which I want to build and test the code on a Slurm cluster. I am able to do something like
./configure MPI_LAUNCHER="srun --mpi=pmi2"
singularity exec -B ${PWD} container.sif envscript.sh "make check"
which runs an environment setup script (envscript.sh) and builds the code. When it gets to the unit tests, it hangs. I think this is because it tries to run srun --mpi=pmi2 inside the container rather than on the host. Is there a way to get this to work with this setup? Can I build the library and then build the unit tests without running them, and then run the tests in a second step? I imagine something like this:
./configure MPI_LAUNCHER="srun --mpi=pmi2 singularity exec -B ${PWD} container.sif envscript.sh"
singularity exec -B ${PWD} container.sif envscript.sh "make buildtests"
make check
I don't even think this would work, though, because our tests are set up with -n giving the number of cores for each test, like this:
mpirun -n test_cores ./test.sh
So substituting in the srun singularity command would put the -n after singularity. If anyone has any ideas, please let me know.
The question: Is there a way in autotools to build my code and unit tests without running the unit tests?
None of the standard makefile targets provided by Automake provide for this.
In fact, the behavior of not building certain targets until make check is specifically requested by the Makefile.am author: designating those targets in a check_PROGRAMS, check_LIBRARIES, etc. variable (and nowhere else) has exactly that effect. If you change each check_FOO variable to noinst_FOO, then all the targets named by those variables will be built by make / make all. Of course, if the build system already uses noinst_ variables for other purposes, you'll need to merge the lists.
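As an illustration (the target name test_foo is hypothetical), the change in Makefile.am would look like:

```makefile
## Before: built only when "make check" is requested
# check_PROGRAMS = test_foo

## After: built by plain "make" / "make all", but still never installed
noinst_PROGRAMS = test_foo
test_foo_SOURCES = test_foo.c
```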
BUT YOU PROBABLY DON'T WANT TO DO THAT.
Targets designated (only) in check_FOO or noinst_FOO variables are not installed by make install, and they often depend on data and layout provided by the build environment. If you're not going to run them in the build environment, then you should plan not to run them at all.
Additionally, if you're performing your build inside the container because the container is the target execution environment, then it is a huge red flag that the tests are not running successfully inside the container. There is every reason to think that the misbehavior of the tests will also manifest when you try to run the installed software for your intended purposes. That's pretty much the point of automated testing.
On the other hand, if you're building inside the container for some kind of build-environment isolation purpose, then successful testing outside the container combined with incorrect behavior of the tests inside would indicate, at minimum, that the container does not provide an environment adequately matched to the target execution environment. That should undercut your confidence in the test results obtained outside it. Validation tests intended to run against the installed software are a thing, to be sure, but they are a different thing from build-time tests.

Running multiple requests in parallel via Postman Newman to load test the API

I'm not a Node.js developer, but I installed Postman's Newman CLI just to be able to load test my API.
I want to take advantage of a simple idea suggested at the link below to run multiple API requests in parallel from a batch file. Source: https://community.getpostman.com/t/how-can-i-run-simultaneous-request-parallely/3797/2
Due to my lack of knowledge of Node.js console commands, I'm failing to run the script file. What's the right syntax for running this batch/text file containing a list of Postman collections?
I tried:
As a developer at the link above suggested, I created a file myfile.txt containing:
newman run c:\path...\collection.json -e c:\path...\staging.json &
newman run c:\path...\collection.json -e c:\path...\staging.json &
newman run c:\path...\collection.json -e c:\path...\staging.json
Then I ran the file with:
newman run c:\path...\myfile.txt
Fail.
Then tried running the file this way:
node c:\path...\myfile.txt
No luck.
Then I tried adding #!/bin/bash at the top of the file and running it the same way, but with a .sh extension.
Still no luck.
How can I run my simultaneous API calls from the file?
The recommendations in the post you're referring to are about Bash; I don't know why you're talking about Node.js.
The recommendations in that post are all for sequential execution of multiple tests.
You won't be able to apply any of those instructions because, judging from the c:\ drive in your paths, you seem to be using Windows.
If you want to kick off several parallel instances of the newman process in the Windows cmd.exe interpreter, it makes sense to use the start command, like this:
Create a file myfile.cmd
Put the following lines in it:
start newman run c:\path...\collection.json -e c:\path...\staging.json
start newman run c:\path...\collection.json -e c:\path...\staging.json
start newman run c:\path...\collection.json -e c:\path...\staging.json
However, I would rather recommend going for a specialized load testing tool. There is a variety of free and open-source load testing solutions that have no problem running API tests in parallel and that give you nice tables and charts at the end of the test; I fail to see how you're going to analyze the results of your "load test" with Postman/Newman.

Passing multiple tags (or/and) via -Dcucumber.options does not trigger my tests

I've got a bunch of scenarios tagged #ABC and #DEF under the module 'TEST'.
I'm able to run tests with multiple tags using the following command (the older syntax, soon to be deprecated):
mvn clean test -pl TEST -Dcucumber.options="--tags #ABC,#DEF"
but not
mvn clean test -pl TEST -Dcucumber.options="--tags '#ABC or #DEF'"
Any ideas?
I've also tried switching the quotes around, but it still doesn't work; no tests are triggered.
What I've tried:
-Dcucumber.options='--tags #ABC or #DEF'
-Dcucumber.options='--tags "#ABC or #DEF"'
-Dcucumber.options="--tags '#ABC or #DEF'"
-Dcucumber.options="--tags '(#ABC or #DEF)'"
Thanks a lot!
I don't think you need the single quotes. Could you try mvn clean test -pl TEST -Dcucumber.options="--tags #ABC or #DEF"?
Note: for more information, see the Cucumber documentation on tag expressions.
In my environments (Windows 10 and CentOS 7), the third way you listed works for me.
-Dcucumber.options="--tags '#ABC or #DEF'"
My tests are their own module in our application project, so I execute a Java class to run things (rather than maven test). For example, on CentOS I have a script with the line
java -ea -Dcucumber.options="${cucumberTags}" regressionTests.AutomatedTest >> $logFile.log
I kick off the tests using a script, and it can be a little tricky building the "cucumberTags" string and getting a literal single quote before the first and after the last tag, but this is definitely my working format.
-Dennis

Coverage in tox for multiple python versions

Here is a link to a project and output that you can use to reproduce the problem I describe below.
I'm using coverage with tox against multiple versions of python. My tox.ini file looks something like this:
[tox]
envlist =
    py27
    py34

[testenv]
deps =
    coverage
commands =
    coverage run --source=modules/ -m pytest
    coverage report -m
My problem is that coverage will run using only one version of python (in my case, py27), not both py27 and py34. This is a problem whenever I have code execution dependent on the python version, e.g.:
def add(a, b):
    import sys
    if sys.version.startswith('2.7'):
        print('2.7')
    if sys.version.startswith('3'):
        print('3')
    return a + b
Running coverage against the above code incorrectly reports that line 6 ("print('3')") is "Missing" for both py27 and py34. It should be Missing only for py27.
I know why this is happening: coverage is installed on my base OS (which uses Python 2.7). Thus, when tox runs, it notices that coverage is already installed and inherits it from the base OS rather than installing it into the virtualenv it creates.
This is fine and dandy for py27 but causes incorrect results in the coverage report for py34. I have a hacky, temporary workaround: I require a slightly earlier version of coverage (relative to the one installed on my base OS), so that tox is forced to install a separate copy of coverage in the virtualenv, e.g.:
[testenv]
deps =
    coverage==4.0.2
    pytest==2.9.0
    py==1.4.30
I don't like this workaround, but it's the best I've found for now. Any suggestions on a way to force tox to install the current version of coverage in its virtualenvs, even when I already have it installed on my base OS?
I came upon this problem today, but couldn't find an easy answer. So, for future reference, here is the solution that I came up with.
Create an envlist that contains each version of Python to be tested, plus a custom cov env.
For all versions of Python, set the COVERAGE_FILE environment variable so that the .coverage file is stored in {envdir}.
For the cov env, use two commands:
coverage combine, which combines the per-env data files, and
coverage html, which generates the report and, if necessary, fails the build.
Create a .coveragerc file containing a [paths] section that lists the source= locations.
The first line is where the actual source code is found.
The subsequent lines are the subpaths that will be folded together by coverage combine.
tox.ini:
[tox]
envlist = py27,py36,py35,py34,py33,cov

[testenv]
deps =
    pytest
    pytest-cov
    pytest-xdist
setenv =
    py{27,36,35,34,33}: COVERAGE_FILE={envdir}/.coverage
commands =
    py{27,36,35,34,33}: python -m pytest --cov=my_project --cov-report=term-missing --no-cov-on-fail
    cov: /usr/bin/env bash -c '{envpython} -m coverage combine {toxworkdir}/py*/.coverage'
    cov: coverage html --fail-under=85
.coveragerc:
[paths]
source =
    src/
    .tox/py*/lib/python*/site-packages/
The most peculiar part of the configuration is the invocation of coverage combine. Here's a breakdown of the command:
tox does not perform shell expansion of {toxworkdir}/py*/.coverage, so we need to invoke a shell (bash -c) to get the necessary glob expansion.
If one were so inclined, you could type out all the paths individually and skip these hoops, but that would add maintenance and a .coverage file dependency for each pyNN env.
/usr/bin/env bash -c '...' ensures we get the correct version of bash. Using the full path to env avoids the need to set whitelist_externals.
'{envpython} -m coverage ...' ensures that we invoke the correct python and coverage for the cov env.
NOTE: The unfortunate drawback of this solution is that the cov env depends on the prior invocation of py{27,36,35,34,33}, which has some not-so-desirable side effects.
My suggestion would be to invoke cov only through a full tox run.
Never invoke tox -ecov on its own, because either:
it will likely fail due to a missing .coverage file, or
it could give bizarre results (combining data from differing test runs).
If you must invoke it on a subset (tox -epy27,py36,cov), then wipe out the .tox directory first (rm -rf .tox) to avoid the missing .coverage file problem.
I don't understand why tox wouldn't install coverage in each virtualenv properly; you should get two different coverage reports, one for py27 and one for py34. A nicer option might be to produce one combined report: use coverage run -p to record separate data for each run, and then coverage combine to merge them before reporting.
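A minimal sketch of that combined-report setup, reusing the envlist and modules/ layout from the question (the report env name is illustrative):

```ini
[tox]
envlist = py27,py34,report

[testenv]
deps =
    coverage
    pytest
commands =
    coverage run -p --source=modules/ -m pytest

[testenv:report]
deps = coverage
commands =
    coverage combine
    coverage report -m
```

Note that coverage run -p writes machine- and process-suffixed data files (.coverage.*), so all runs need to share a working directory for coverage combine to find them.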

Running py3to2 on Windows 7

I've tried my best for an hour, but I just don't understand the code lingo well enough to get py3to2 to work. I have a script written in Python 3 that I want to convert to Python 2. I downloaded and unzipped py3to2 from here:
https://bitbucket.org/amentajo/lib3to2/overview
This is all the readme says about running it:
Usage
Run "./3to2" to convert stdin ("-"), files or directories given as
arguments. By default, the tool outputs a unified diff-formatted patch on
standard output and a "what was changed" summary on standard error, but the
"-w" option can be given to write back converted files, creating
".bak"-named backup files.
If you are root, you can also install with "./setup.py build" and
"./setup.py install" ("make install" does this for you).
Do I need to run Python? The command line? I'm lost. Has anyone done this? Thanks.
Do you know how to use pip?
Just type C:/[your Python folder]/Scripts/pip install 3to2
Then go to the Scripts folder inside the Python folder and rename 3to2 to 3to2.py.
Then type C:/[your Python folder]/python.exe C:/[your Python folder]/Scripts/3to2.py -w Path/To/The/Python/File
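Put together, and assuming a Python 2.7 install at C:\Python27 (the install path and script name here are hypothetical), the whole sequence from a cmd prompt looks like this; passing the installed 3to2 script to python.exe directly also avoids the rename step, since it is an ordinary Python file:

```shell
C:\Python27\Scripts\pip install 3to2
C:\Python27\python.exe C:\Python27\Scripts\3to2 -w C:\path\to\myscript.py
```

The -w flag writes the converted file in place and leaves a .bak backup next to it, as the readme describes.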