I am trying to split my pytest suite across jobs in a GitLab stage to reduce the time it takes to run. However, I am having difficulty getting a full coverage report. I am unable to use pytest-xdist or pytest-parallel due to the way our database is set up.
Build:
  stage: Build
  script:
    - *setup-build
    - *build
    - touch banana.xml # where I write the code coverage collected by pytest-cov
    - *push
  artifacts:
    paths:
      - banana.xml
    reports:
      cobertura: banana.xml

Unit Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    - docker-compose run $DOCKER_IMG_NAME pytest -m unit --cov-report xml:banana.xml --cov=app --cov-append;
  needs:
    - Build

Validity Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    - docker-compose run $DOCKER_IMG_NAME pytest -m validity --cov-report xml:banana.xml --cov=app --cov-append;
  needs:
    - Build
After these two stages run (Build: 1 job, Test: 2 jobs), I go to download the banana.xml file from GitLab, but there's nothing in it, even though the jobs say Coverage XML written to file banana.xml.
Am I missing something about how to get the total coverage written to an artifact file when splitting marked tests across jobs in a GitLab pipeline stage?
If you want to combine the coverage reports of several different jobs, you have to add another stage that runs after your tests. Here is an example that works for me:
# You need to define the Test stage before the Coverage stage
stages:
  - Test
  - Coverage

# Your first test job
unit_test:
  stage: Test
  script:
    - COVERAGE_FILE=.coverage.unit coverage run --rcfile=.coveragerc -m pytest ./unit
  artifacts:
    paths:
      - .coverage.unit

# Your second test job, which will run in parallel
validity_test:
  stage: Test
  script:
    - COVERAGE_FILE=.coverage.validity coverage run --rcfile=.coveragerc -m pytest ./validity
  artifacts:
    paths:
      - .coverage.validity

# Your coverage job, which combines the coverage data from the two test jobs and generates a report
coverage:
  stage: Coverage
  script:
    - coverage combine --rcfile=.coveragerc
    - coverage report
    - coverage xml -o coverage.xml
  coverage: '/\d+\%\s*$/'
  artifacts:
    reports:
      cobertura: coverage.xml
You also need to create a .coveragerc file in your repository with the following content, to specify that coverage.py should use relative file paths, because your tests were run on different GitLab runners and their full paths therefore don't match:
[run]
relative_files = True
source =
    ./
Note: in your case it's better to use the coverage command directly (so coverage run -m pytest instead of plain pytest) because it provides more options, and it's what pytest-cov uses under the hood anyway.
The issue in your file is that you start by creating an empty file, try to generate a report from it (which produces nothing, since the file is empty), and then pass it to both test jobs, each of which overwrites it with its own local coverage data that is then never used.
You need to do it the other way around, as shown in my example: run the tests first, then, in a later stage, collect the coverage data from both test jobs and generate a single report from it.
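Applied to the marker-based, docker-compose setup from the question, the reworked pipeline could look roughly like this. This is only a sketch: it assumes a Coverage stage is added after Test, and that the docker-compose service mounts the project directory, so that the .coverage.* data files written inside the container end up in the runner's workspace:

Unit Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    # -e passes the data-file name into the container; coverage run replaces pytest --cov
    - docker-compose run -e COVERAGE_FILE=.coverage.unit $DOCKER_IMG_NAME coverage run --rcfile=.coveragerc -m pytest -m unit
  artifacts:
    paths:
      - .coverage.unit
  needs:
    - Build

Validity Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    - docker-compose run -e COVERAGE_FILE=.coverage.validity $DOCKER_IMG_NAME coverage run --rcfile=.coveragerc -m pytest -m validity
  artifacts:
    paths:
      - .coverage.validity
  needs:
    - Build

Coverage:
  stage: Coverage
  script:
    # merges the .coverage.unit and .coverage.validity artifacts downloaded from the Test jobs
    - coverage combine --rcfile=.coveragerc
    - coverage xml -o banana.xml
  artifacts:
    reports:
      cobertura: banana.xml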
I have a .NET solution with multiple projects, and for each project I have a separate test project. Currently, whenever I add a new project, I add a separate test project for it and have to manually add a new test job to the pipeline's test step.
I want to write a test step that runs all test projects in parallel, but without me having to manually add new jobs. Recently, I discovered that GitLab has a parallel:matrix keyword, which seems like a step in the right direction. I am already working on using it instead of having separate implementations of a reusable script, but if possible I also want to dynamically find the tests in my test folder.
Current reusable test script:
.test: &test
  allow_failure: false
  dependencies:
    - build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  script:
    - echo ${TEST_NAME}
    - echo ${RESULT_FILE_NAME}
    - dotnet test --no-restore ./Tests/${TEST_NAME} -l "JUnit;LogFilePath=../../TestResults/${RESULT_FILE_NAME}.xml"
Example implementation:
Test1:
  <<: *test
  stage: test
  variables:
    TEST_NAME: "test1"
    RESULT_FILE_NAME: "test1_results"
  artifacts:
    paths:
      - ./TestResults/
What I'm trying to achieve:
test:
  stage: test
  dependencies:
    - build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  before_script:
    - TEST_NAMES=["test1", "test2"] # want to find these dynamically
  script:
    - ls
    - echo ${TEST_NAME}
    - echo ${RESULT_FILE_NAME}
    - dotnet test --no-restore ./Tests/${TEST_NAME} -l "JUnit;LogFilePath=../../TestResults/${TEST_NAME}.xml"
  parallel:
    matrix:
      - TEST_NAME: TEST_NAMES
My current test step (added as exp_test until it can fully replace test), where I expect 2 parallel test jobs to run, but instead only 1 runs, named after the variable itself instead of using the variable as an array.
I found an answer on here that suggests dynamically creating a child pipeline YAML, but I want to see if it's possible to use parallel:matrix for this.
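For what it's worth, parallel:matrix only expands values written literally in the YAML; as far as I know, it will not resolve a shell variable like TEST_NAMES into an array, which is why a single job runs with the literal variable name. The literal form that does fan out (the job layout mirrors the question; this does not solve the dynamic-discovery part) would be:

test:
  stage: test
  dependencies:
    - build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  script:
    - dotnet test --no-restore ./Tests/${TEST_NAME} -l "JUnit;LogFilePath=../../TestResults/${TEST_NAME}.xml"
  parallel:
    matrix:
      - TEST_NAME: ["test1", "test2"] # literal list: expands into one job per entry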
I have a lot of different Android flavors of one app to build, so I want to split the build up into different YAML files. I currently have my base file .gitlab-ci.yml:
image: alvrme/alpine-android:android-29-jdk11

variables:
  GIT_SUBMODULE_STRATEGY: recursive

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
  - chmod +x ./gradlew

cache:
  key: "$CI_COMMIT_REF_NAME"
  paths:
    - .gradle/

stages:
  - test
  - staging
  - production
  - firebaseUpload
  - slack

include:
  - local: '/.gitlab/bur.yml'
  - local: '/.gitlab/vil.yml'
  - local: '/.gitlab/kom.yml'
I am currently trying to build 3 different flavors, but I don't know why only the last included YAML file gets executed; the first 2 are ignored.
/.gitlab/bur.yml:

unitTests:
  stage: test
  script:
    - ./gradlew testBurDevDebugUnitTest

/.gitlab/vil.yml:

unitTests:
  stage: test
  script:
    - ./gradlew testVilDevDebugUnitTest

/.gitlab/kom.yml:

unitTests:
  stage: test
  script:
    - ./gradlew testKomDevDebugUnitTest
What you observe looks like the expected behavior: your three files .gitlab/{bur,vil,kom}.yml contain the same job name, unitTests, so each include overrides the specification of that job. As a result, you only get one unitTests job in the end, with the specification from the last YAML file.
Thus, the simplest fix is to give each job a unique name, e.g.:
unitTests-kom:
  stage: test
  script:
    - ./gradlew testKomDevDebugUnitTest
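The same rename applies to the other two include files, so each flavor gets its own job in the test stage:

# /.gitlab/bur.yml
unitTests-bur:
  stage: test
  script:
    - ./gradlew testBurDevDebugUnitTest

# /.gitlab/vil.yml
unitTests-vil:
  stage: test
  script:
    - ./gradlew testVilDevDebugUnitTest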
I have two simple stages (build and test), and I want the jobs in the pipeline to run sequentially. Specifically, I want the test job not to start until the build job has passed completely.
My GitLab file:
stages:
  - build
  - test

build:
  stage: build
  script:
    - mvn clean package
  only:
    - merge_requests

test:
  stage: test
  services:
  script:
    - mvn verify
    - mvn jacoco:report
  artifacts:
    reports:
      junit:
        - access/target/surefire-reports/TEST-*.xml
    paths:
      - access/target/site/jacoco
    expire_in: 1 week
  only:
    - merge_requests
Can I add

needs:
  - build

in the test stage?
Given the simplicity of your build file, I do not think you actively need needs. According to the documentation, stages are executed sequentially anyway.
The pitfall you are in right now is the only reference: the build stage runs for every branch and therefore ignores merge requests. If you extend the only directive of your build job, you might get the result you are looking for:
build:
  stage: build
  script:
    - mvn clean package
  only:
    - merge_requests
    - master # might be main, develop, etc., whatever your long-living branches are
This way the job is not triggered for each branch, but only for merge requests and the long-living branches; see the only documentation. Now the execution is attached not to the branch but to the merge request, and you should have your expected outcome (at least, that is what I assume).
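To answer the needs question directly: yes, the syntax is valid, and it becomes useful once a pipeline grows beyond two stages, because a job with needs starts as soon as the listed jobs have passed instead of waiting for the whole previous stage. A minimal sketch with the job names from the question:

test:
  stage: test
  needs:
    - build # start as soon as build passes; also fetches build's artifacts
  script:
    - mvn verify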
I am using a multi-project pipeline to separate my end-to-end tests from my main application code. The full end-to-end suite can take a significant amount of time to run, so I've nicely broken it into various groupings using pytest and its mark feature.
I'd like to be able to run specific scenarios from within the pipeline by setting each of these scenarios to when: manual. Unfortunately, as soon as I add this, the child pipeline reports failure to the parent and progress stops. I can manually run each section, as expected, but even then success is not reported back to the parent pipeline.
This is an example of the pipeline: the Integration Tests step has reported a failure, and I've manually run the Fast Tests job from the downstream pipeline. It passed, and as the only job in that pipeline, the entire downstream pipeline passed. Yet the parent still reports failure, so Deploy doesn't run.
If I remove when: manual from the downstream pipeline, Integration Tests runs the full test suite, passes, and Deploy moves on as expected.
This is the parent project pipeline:
image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
- export PYTHONPATH=${PYTHONPATH}:./src
- python -c "import sys;print(sys.path)"
stages:
- Static Analysis
- Local Tests
- Integration Tests
- Deploy
mypy:
stage: Static Analysis
script:
- mypy .
flake8:
stage: Static Analysis
script:
- flake8 --max-line-length=88
pytest-smoke:
stage: Local Tests
script:
- pytest -m smoke
pytest-unit:
stage: Local Tests
script:
- pytest -m unittest
pytest-slow:
stage: Local Tests
script:
- pytest -m slow
pytest-fast:
stage: Local Tests
script:
- pytest -m fast
int-tests:
stage: Integration Tests
trigger:
project: andy/gitlab-integration-testing-integration-tests
strategy: depend
deploy:
stage: Deploy
when: manual
script:
- echo "Deployed!"
The end-to-end tests pipeline looks like this:
image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
- export PYTHONPATH=${PYTHONPATH}:./src
- python -c "import sys;print(sys.path)"
stages:
- Fast Tests
pytest-smoke:
stage: Fast Tests
when: manual
script:
- pytest -m smoke
How can I selectively (manually) run downstream jobs and have success reported back to the parent pipeline? Without when: manual on that last job of the end-to-end (downstream) pipeline, it behaves exactly as I want. But in my real-world pipeline, I don't want to run the full end-to-end suite on everything; usually it is selected scenarios.
I am currently running GitLab Enterprise Edition 13.2.2-ee.
I'm using pytest with GitLab, and I'm wondering if there's a way to automatically parse test results in the pipeline, so that I don't have to manually search the terminal output for the names of failed tests. TeamCity has such a feature through teamcity-messages.
Does anybody know if such a feature is available for GitLab as well?
Test summary in Merge Request view
GitLab supports parsing and rendering test results from a JUnit report file. The reserved keyword for that is artifacts:reports:junit. Here is an example CI config that generates a JUnit report on a pytest run and makes it available to GitLab:
stages:
  - test

test:
  stage: test
  script:
    - pytest --junitxml=report.xml
  artifacts:
    reports:
      junit: report.xml
GitLab then renders the parsed results directly in the Merge Request view.
More info (and examples for other languages) can be found in the GitLab docs: JUnit test reports.
Preview feature: test summary in the Pipeline view
On the doc page linked above, you can also find a preview feature that adds an extra Tests card to the pipeline view.
This feature has been available since GitLab 12.5 and currently has to be explicitly enabled by an admin via the :junit_pipeline_view feature flag.
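For reference, on a self-managed instance such feature flags are typically flipped from the GitLab Rails console (a sketch; assumes shell access on the GitLab server):

# on the GitLab server
sudo gitlab-rails console
> Feature.enable(:junit_pipeline_view)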
Edit: your case
To sum up, I would rework the pytest invocation command and add the reports section to artifacts in the .gitlab-ci.yml:
test:
  script:
    - pytest -vv
      --cov=${ROOT_MODULE}
      --cov-branch
      --cov-report term-missing
      --cov-report xml:artifacts/coverage.xml
      --junitxml=artifacts/junit.xml
  artifacts:
    paths:
      - artifacts/coverage.xml
      - artifacts/junit.xml # if you want the JUnit report to be also downloadable
    reports:
      junit: artifacts/junit.xml
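If you also want the coverage picked up by GitLab (as in the cobertura examples earlier on this page), the same reports section can declare the coverage file as well; this is a sketch, assuming your GitLab version supports artifacts:reports:cobertura:

  artifacts:
    reports:
      junit: artifacts/junit.xml
      cobertura: artifacts/coverage.xml # parsed for merge request coverage visualization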