teamcity-messages for gitlab?

I'm using pytest with Gitlab and I'm wondering if there's a way to automatically parse test results in the pipeline, so that I don't have to manually search the terminal output for the names of failed tests. Teamcity has such a feature by using teamcity-messages.
Does anybody know if such a feature is available for gitlab as well?

Test summary in Merge Request view
Gitlab supports parsing and rendering test results from a JUnit report file. The reserved word for that is artifacts:reports:junit. Here is an example CI config that generates a JUnit report on a pytest run and makes it available to Gitlab:
stages:
  - test

test:
  stage: test
  script:
    - pytest --junitxml=report.xml
  artifacts:
    reports:
      junit: report.xml
Here is what the results would look like rendered in the Merge Request view:
More info (and examples for other languages) can be found in Gitlab docs: JUnit test reports.
Preview feature: test summary in the Pipeline view
On the doc page linked above, you can also find a preview feature of an extra Tests card in the pipeline view:
This feature has been available since GitLab 12.5 and currently has to be explicitly enabled by an admin via the :junit_pipeline_view feature flag.
Edit: your case
To sum up, I would rework the pytest invocation command and add the reports section to artifacts in the .gitlab-ci.yml:
test:
  script:
    - pytest -vv
      --cov=${ROOT_MODULE}
      --cov-branch
      --cov-report term-missing
      --cov-report xml:artifacts/coverage.xml
      --junitxml=artifacts/junit.xml
  artifacts:
    paths:
      - artifacts/coverage.xml
      - artifacts/junit.xml  # if you want the JUnit report to also be downloadable
    reports:
      junit: artifacts/junit.xml
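As an optional aside (my addition, not part of the original answer; the exact keyword depends on your GitLab version): since the job above already writes a Cobertura XML, the same file can be registered as a coverage report so GitLab annotates covered lines in the Merge Request diff. Older releases use artifacts:reports:cobertura; since GitLab 14.10 a sketch of the syntax is:

  artifacts:
    reports:
      junit: artifacts/junit.xml
      # hedged addition: feed the same coverage XML to GitLab
      coverage_report:
        coverage_format: cobertura
        path: artifacts/coverage.xml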


Changing Gitlab SAST json report names

Issue
Note: My CI contains a code complexity checker which can be ignored. This question is mainly focused on SAST.
I have recently set up a SAST pipeline for one of my Gitlab projects. The Gitlab-ce and Gitlab-runner instances are self-hosted. When the SAST scan completes, the downloaded artifacts / json reports all have the same name, gl-sast-report.json. In this example, the artifacts bandit-sast and semgrep-sast both produce gl-sast-report.json when downloaded.
SAST configuration
stages:
  - CodeScan
  - CodeComplexity

sast:
  stage: CodeScan
  tags:
    - sast

code_quality:
  stage: CodeComplexity
  artifacts:
    paths: [gl-code-quality-report.json]
  services:
  tags:
    - cq-sans-dind

include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Code-Quality.gitlab-ci.yml
Completed SAST results
End Goal
1. If possible, how could I change the name of the artifacts for bandit-sast and semgrep-sast?
2. If question one is possible, does this mean I have to manually specify each analyser for various projects? Currently, based on my .gitlab-ci.yml, the SAST analysers are automatically detected based on the project language.
If you're using the pre-built SAST images, this isn't possible, even if you run the docker command manually like so:
docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
When using these SAST (and DAST) images, the report file will always have the name given in the docs. However, if you run the docker command manually as above, you can rename the file before it's uploaded as an artifact; it will still have the same JSON structure/content.
Run License Scanning Analyzer:
  stage: sast
  script:
    - docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
    - mv gl-license-scanning-report.json license-scanning-report.json
  artifacts:
    reports:
      license_scanning: license-scanning-report.json
The only way to change the json structure/content is to implement the SAST tests manually without using the provided images at all. You can see all the available SAST analyzers in this Gitlab repo.
For the License Finder analyzer as an example, the Dockerfile says the entrypoint for the image is the run.sh script.
You can see on line 20 of run.sh that it sets the name of the file to 'gl-license-scanning-report.json'. Changing the name is already achievable by running the docker image manually, so that alone doesn't really help; however, we can see that the actual analyzing comes from the scan_project function, which you could replicate.
So while it is possible to manually run these analyzers without the pre-built images, it will be much more difficult to get them to work.
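For illustration only, the renaming pattern from the license-finder example could in principle be applied to one of the analyzers from the question. Treat this as a hedged sketch: the bandit image path follows the same gitlab-org/security-products convention shown above, and the target file name is made up, so verify both against the analyzer's documentation before relying on it:

Run Bandit Analyzer:
  stage: CodeScan
  script:
    - docker run --volume "$PWD":/code --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/bandit:latest
    # rename before upload; the JSON structure itself is unchanged
    - mv gl-sast-report.json bandit-sast-report.json
  artifacts:
    paths:
      - bandit-sast-report.json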

Append pytest coverage to file in Gitlab CI artifacts

I am trying to split my pytests in a gitlab stage to reduce the time it takes to run them. However, I am having difficulties getting the full coverage report. I am unable to use pytest-xdist or pytest-parallel due to the way our database is set up.
Build:
  stage: Build
  script:
    - *setup-build
    - *build
    - touch banana.xml  # where I write the code coverage collected by pytest-cov
    - *push
  artifacts:
    paths:
      - banana.xml
    reports:
      cobertura: banana.xml
Unit Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    - docker-compose run $DOCKER_IMG_NAME pytest -m unit --cov-report xml:banana.xml --cov=app --cov-append;
  needs:
    - Build

Validity Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    - docker-compose run $DOCKER_IMG_NAME pytest -m validity --cov-report xml:banana.xml --cov=app --cov-append;
  needs:
    - Build
After these two stages run (Build: 1 job, Test: 2 jobs), I go to download the banana.xml file from Gitlab, but there's nothing in it, even though the jobs say "Coverage XML written to file banana.xml".
Am I missing something with how to get the total coverage written to an artifact file when splitting up marked tests in a gitlab pipeline stage?
If you want to combine the coverage reports of several different jobs, you will have to add another stage that runs after your tests. Here is a working example:
# You need to define the Test stage before the Coverage stage
stages:
  - Test
  - Coverage

# Your first test job
unit_test:
  stage: Test
  script:
    - COVERAGE_FILE=.coverage.unit coverage run --rcfile=.coveragerc -m pytest ./unit
  artifacts:
    paths:
      - .coverage.unit

# Your second test job, which will run in parallel
validity_test:
  stage: Test
  script:
    - COVERAGE_FILE=.coverage.validity coverage run --rcfile=.coveragerc -m pytest ./validity
  artifacts:
    paths:
      - .coverage.validity

# Your coverage job, which combines the coverage data from the two test jobs and generates a report
coverage:
  stage: Coverage
  script:
    - coverage combine --rcfile=.coveragerc
    - coverage report
    - coverage xml -o coverage.xml
  coverage: '/\d+\%\s*$/'
  artifacts:
    reports:
      cobertura: coverage.xml
You also need to create a .coveragerc file in your repository with the following content, to tell coverage.py to use relative file paths. Your tests were run on different gitlab runners, so their full paths don't match:
[run]
relative_files = True
source =
    ./
Note: in your case it's better to use the coverage command directly (so coverage run -m pytest instead of pytest) because it provides more options, and it's what pytest uses under the hood anyway.
The issue in your file is that you start by creating an empty file, try to generate a report from it (which won't generate anything, since the file is empty), then pass it over to both test jobs, which each overwrite it with their local coverage report and then never use it.
You need to do it the other way around, as shown in my example: run the tests first, and in a later stage collect the coverage data from both test jobs and generate a report from it.
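As a possible refinement (my addition, not part of the answer above): the coverage job can declare its dependencies explicitly with needs, so it starts as soon as both test jobs finish instead of waiting for the whole Test stage, and fetches exactly their artifacts:

coverage:
  stage: Coverage
  needs:
    - unit_test    # .coverage.unit artifact
    - validity_test  # .coverage.validity artifact
  script:
    - coverage combine --rcfile=.coveragerc
    - coverage report
    - coverage xml -o coverage.xml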

Gitlab CI not showing Cobertura code coverage visualization

I am trying to show test coverage visualization in Gitlab for our monorepo, as described here: Test Coverage Visualization.
We are using a self-managed gitlab runner with the docker+machine executor hosted on AWS EC2 instances. We are using Gitlab SaaS. The job from the gitlab-ci.yml is below
sdk:
  stage: test
  needs: []
  artifacts:
    when: always
    reports:
      cobertura: /builds/path/to/cobertura/coverage/cobertura-coverage.xml
  <<: *main
  <<: *tests
  <<: *rules
  <<: *tags
The line in the script that runs the tests and outputs code coverage...
- npm run test -- --watchAll=false --coverage --coverageReporters=cobertura
The artifact gets saved just fine and looks normal when I download it, but I don't get any visualization as described in the documentation linked above. I just updated the gitlab runner to v14.0.0 thinking that might be the problem; it wasn't.
I don't have any sort of regex pattern set up, as from my understanding that is only for printing the coverage to stdout.
I'm sure I am overlooking something trivial and I really need a sanity check here as I have already spent way more time on this than I can afford.
The issue was that the regex pattern needed to be set in the repository settings. I had experimented with adding a regex pattern, but it hadn't worked by the time I posted this question because the regex pattern I was using was not correct.
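For anyone hitting the same wall, a hedged example of such a pattern: for Jest's text coverage summary, a commonly cited regex is the one below (your output format may differ, and a text reporter must actually print the summary for it to match). It can be set under Settings > CI/CD > General pipelines, or directly on the job with the coverage keyword:

sdk:
  # matches the total percentage in Jest's "All files" summary row
  coverage: '/All files[^|]*\|[^|]*\s+([\d\.]+)/'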

Create two versions of my software in Gitlab CI

I am setting up Gitlab CI for the first time, and I want it to create two releases for each commit. My .gitlab-ci.yml file looks like this:
stages:
  - compile
  - test
  - build release

compile apps:
  stage: compile
  script:
    - scons
  artifacts:
    paths:
      - deliverables/

check version:
  stage: test
  script:
    - check_version.sh

build releasefile:
  stage: build release
  script:
    - build_release.sh
  artifacts:
    paths:
      - release/
For my second version, I want to run scons in compile apps with a flag (scons --special) and then run all subsequent jobs on those deliverables as well. My deliverables are named the same for both versions, and if I just create jobs for both the normal and the special version, my "check version" job will check the normal version twice. My options:
1. Create a really long pipeline that runs everything of the normal version and then everything of the special version. I don't like this solution: it looks hideous and can make errors less visible when the pipeline is expanded later.
2. Change my scons and shell scripts.
3. Create two pipelines on each commit, one with a Gitlab CI flag and one without (I don't know how to do this).
4. Create a "split" pipeline that only uses stuff from the job that it is based on (I don't know how to do this).
For the last case, my pipeline would look something like this:
-----+------ Compile normal ------ Check version ----- Build releasefile
     |
     +------ Compile special ----- Check version ----- Build releasefile
I would prefer option 3 or 4 and I've been looking at Directed Acyclic Graph Pipelines, but I can't get those to work in the way I want. Is there a way to do either of these?
You can do this by creating a DAG pipeline with needs. If you don't use needs (or the older dependencies), all artifacts from previous stages will be downloaded, which in this case is problematic due to the overlap in the artifact folders / names.
You can also use extends to avoid duplication in your job declarations. The full .gitlab-ci.yml config could be something like this:
stages:
  - compile
  - test
  - build release

compile apps:
  stage: compile
  script:
    - scons
  artifacts:
    paths:
      - deliverables/

compile apps special:
  extends:
    - compile apps
  script:
    - scons --special

check version:
  stage: test
  script:
    - check_version.sh
  needs:
    - compile apps

check version special:
  extends:
    - check version
  needs:
    - compile apps special

build releasefile:
  stage: build release
  script:
    - build_release.sh
  artifacts:
    paths:
      - release/
  needs:
    - compile apps
    - check version

build releasefile special:
  extends:
    - build releasefile
  needs:
    - compile apps special
    - check version special
extends works well in this context because it doesn't combine YAML list items, but instead overwrites them (keys with different names get merged, on the other hand). So in this case, the whole script and needs declarations are overwritten by the inheriting job.
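To make that merge behavior concrete, here is a minimal, hypothetical illustration (the job names and values are made up, not taken from the question):

.base:
  script:
    - echo one
  variables:
    FOO: "1"

child:
  extends: .base
  script:        # a list value replaces the inherited list wholesale
    - echo two
  variables:     # a hash value is merged key by key
    BAR: "2"

# child runs only `echo two`, but has both FOO=1 and BAR=2 set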

when: manual in downstream pipeline causes upstream to report failure

I am using a multi-project pipeline to separate my end-to-end tests from my main application code. My end-to-end tests, if I run the full suite, can take a significant amount of time. I've nicely broken them into various groupings using pytest and its mark feature.
I'd like to be able to run specific scenarios from within the pipeline now by setting each of these different scenarios to when: manual. Unfortunately, as soon as I add this, the child pipeline reports failure to the parent and progress stops. I can manually run each section, as expected, but even then success is not reported back to the parent pipeline.
This is an example of the pipeline. The Integration Tests step has reported a failure, and I've manually run the Fast Tests from the downstream pipeline. They passed, and as the only job in the pipeline, the entire downstream pipeline passes. Yet the parent still reports failure, so Deploy doesn't get run.
If I remove when: manual from the downstream pipeline, Integration Tests will run the full test suite, pass, and Deploy will move on as expected.
This is the parent project pipeline.
image: "python:3.7"

before_script:
  - python --version
  - pip install -r requirements.txt
  - export PYTHONPATH=${PYTHONPATH}:./src
  - python -c "import sys;print(sys.path)"

stages:
  - Static Analysis
  - Local Tests
  - Integration Tests
  - Deploy

mypy:
  stage: Static Analysis
  script:
    - mypy .

flake8:
  stage: Static Analysis
  script:
    - flake8 --max-line-length=88

pytest-smoke:
  stage: Local Tests
  script:
    - pytest -m smoke

pytest-unit:
  stage: Local Tests
  script:
    - pytest -m unittest

pytest-slow:
  stage: Local Tests
  script:
    - pytest -m slow

pytest-fast:
  stage: Local Tests
  script:
    - pytest -m fast

int-tests:
  stage: Integration Tests
  trigger:
    project: andy/gitlab-integration-testing-integration-tests
    strategy: depend

deploy:
  stage: Deploy
  when: manual
  script:
    - echo "Deployed!"
The end to end tests pipeline looks like this:
image: "python:3.7"

before_script:
  - python --version
  - pip install -r requirements.txt
  - export PYTHONPATH=${PYTHONPATH}:./src
  - python -c "import sys;print(sys.path)"

stages:
  - Fast Tests

pytest-smoke:
  stage: Fast Tests
  when: manual
  script:
    - pytest -m smoke
How can I selectively (manually) run downstream jobs and report success back to the parent pipeline? Without when: manual in that last step of the end-to-end (downstream) pipeline, it performs exactly as I want. But in the real-world pipeline I have, I don't want to run the end-to-end tests on everything; usually it is selected scenarios.
I am currently running: GitLab Enterprise Edition 13.2.2-ee
