I am using a multi-project pipeline to separate my end-to-end tests from my main application code. My end-to-end tests, if I run the full suite, can take a significant amount of time. I’ve broken them down nicely into various groupings using pytest and its mark feature.
I’d now like to run specific scenarios from within the pipeline by setting each of these scenarios to when: manual. Unfortunately, as soon as I add this, the child pipeline reports failure to the parent and progress stops. I can manually run each section, as expected, but even then success is not reported back to the parent pipeline.
This is an example of the pipeline. The Integration Tests step has reported a failure, and I’ve manually run the Fast Tests job from the downstream pipeline. It passed, and since it is the only job in that pipeline, the entire downstream pipeline passes. Yet the parent still reports failure, so Deploy doesn’t get run.
If I remove when: manual from the downstream pipeline, Integration Tests runs the full test suite, passes, and Deploy moves on as expected.
This is the parent project’s pipeline:
image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
- export PYTHONPATH=${PYTHONPATH}:./src
- python -c "import sys;print(sys.path)"
stages:
- Static Analysis
- Local Tests
- Integration Tests
- Deploy
mypy:
stage: Static Analysis
script:
- mypy .
flake8:
stage: Static Analysis
script:
- flake8 --max-line-length=88
pytest-smoke:
stage: Local Tests
script:
- pytest -m smoke
pytest-unit:
stage: Local Tests
script:
- pytest -m unittest
pytest-slow:
stage: Local Tests
script:
- pytest -m slow
pytest-fast:
stage: Local Tests
script:
- pytest -m fast
int-tests:
stage: Integration Tests
trigger:
project: andy/gitlab-integration-testing-integration-tests
strategy: depend
deploy:
stage: Deploy
when: manual
script:
- echo "Deployed!"
The end-to-end tests pipeline looks like this:
image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
- export PYTHONPATH=${PYTHONPATH}:./src
- python -c "import sys;print(sys.path)"
stages:
- Fast Tests
pytest-smoke:
stage: Fast Tests
when: manual
script:
- pytest -m smoke
How can I selectively (manually) run downstream jobs and have success reported back to the parent pipeline? Without when: manual on that last job in the end-to-end (downstream) pipeline, it behaves exactly as I want. But in my real-world pipeline I don’t want to run the end-to-end tests on every change; usually it is only selected scenarios.
I am currently running: GitLab Enterprise Edition 13.2.2-ee
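One pattern, which matches the follow-up configuration shown later in this thread, is to move when: manual up to the trigger job in the parent and mark it allow_failure: false, so the parent pipeline blocks at the Integration Tests stage instead of continuing or failing, and the downstream pipeline itself contains no manual jobs. A minimal sketch, reusing the project path from above:

int-tests:
  stage: Integration Tests
  when: manual           # decide in the parent whether to run this downstream suite at all
  allow_failure: false   # block the parent pipeline here instead of letting it continue
  trigger:
    project: andy/gitlab-integration-testing-integration-tests
    strategy: depend

With one manual trigger job per scenario, the choice of what to run is made in the parent, and whatever downstream pipeline is triggered runs to completion, so strategy: depend can report a clean success back.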
Related
I have a .NET solution with multiple projects, and for each project I have a separate test project. Currently, whenever I add a new project, I add a separate test project for it and have to manually add a new test job to the pipeline's test step.
I want to write a test step that runs all test projects in parallel, without my having to manually add a new test each time. Recently, I discovered that GitLab has a parallel:matrix keyword, which seems like a step in the right direction. I am already working on using it instead of having separate implementations of a re-usable script, but if possible I also want to find the tests in my test folder dynamically.
Current re-usable test script:
.test: &test
allow_failure: false
dependencies:
- build
image: mcr.microsoft.com/dotnet/sdk:6.0
script:
- echo ${TEST_NAME}
- echo ${RESULT_FILE_NAME}
- dotnet test --no-restore ./Tests/${TEST_NAME} -l "JUnit;LogFilePath=../../TestResults/${RESULT_FILE_NAME}.xml"
Example implementation:
Test1:
  <<: *test
  stage: test
  variables:
    TEST_NAME: "test1"
    RESULT_FILE_NAME: "test1_results"
  artifacts:
    paths:
      - ./TestResults/
What I'm trying to achieve:
test:
  stage: test
  dependencies:
    - build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  before_script:
    - TEST_NAMES=("test1" "test2")  # want to find these dynamically
  script:
    - ls
    - echo ${TEST_NAME}
    - echo ${RESULT_FILE_NAME}
    - dotnet test --no-restore ./Tests/${TEST_NAME} -l "JUnit;LogFilePath=../../TestResults/${TEST_NAME}.xml"
  parallel:
    matrix:
      - TEST_NAME: TEST_NAMES
My current test step (added as exp_test until it can fully replace test), where I expect two parallel tests to run; instead it runs only one job, named after the variable, rather than expanding the variable into an array.
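As far as I understand parallel:matrix, the matrix values are evaluated when the pipeline is created, before any before_script runs, so a shell variable cannot be expanded into an array there; the values have to be written out literally in the YAML (or the configuration has to be generated, as in the child-pipeline sketch further down). A hedged sketch with the names from above written inline:

test:
  stage: test
  dependencies:
    - build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  script:
    - dotnet test --no-restore ./Tests/${TEST_NAME} -l "JUnit;LogFilePath=../../TestResults/${TEST_NAME}.xml"
  parallel:
    matrix:
      - TEST_NAME: ["test1", "test2"]  # literal list; not populated from a runtime variable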
I found one answer on here that suggests dynamically creating a child pipeline YAML, but I want to see if it's possible to use parallel:matrix for this.
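For completeness, a rough sketch of that dynamic child-pipeline idea, assuming each directory under ./Tests is one test project; the job names, the .pre stage, and the child-tests.yml file name are all made up for illustration:

generate-test-jobs:
  stage: .pre                       # built-in stage that runs before everything else
  image: mcr.microsoft.com/dotnet/sdk:6.0
  script:
    - |
      for t in $(ls Tests); do
        cat >> child-tests.yml << EOF
      ${t}:
        image: mcr.microsoft.com/dotnet/sdk:6.0
        script:
          - dotnet test --no-restore ./Tests/${t} -l "JUnit;LogFilePath=../../TestResults/${t}.xml"
      EOF
      done
  artifacts:
    paths:
      - child-tests.yml

run-tests:
  stage: test
  trigger:
    include:
      - artifact: child-tests.yml
        job: generate-test-jobs
    strategy: depend

The generated child-tests.yml ends up with one job per test project, and the trigger job runs it as a child pipeline, so new test projects are picked up without touching the pipeline definition.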
I created a specific runner for my GitLab project, and the pipeline is taking too long to run.
It mainly gets stuck in the Cypress test: after "All Specs passed" it will not move forward.
stages:
  - build
  - test

build:
  stage: build
  image: gradle:jdk11
  script:
    - gradle --no-daemon build
  artifacts:
    paths:
      - build/distributions
    expire_in: 1 day
    when: always

junit-test:
  stage: test
  image: gradle:jdk11
  dependencies: []
  script:
    - gradle test
  timeout: 5m

cypress-test:
  stage: test
  image: registry.gitlab.com/sahajsoft/gurukul2022/csv-parser-srijan:latestSrigin2
  dependencies:
    - build
  script:
    - unzip -q build/distributions/csv-parser-srijan-1.0-SNAPSHOT.zip -d build/distributions
    - sh build/distributions/csv-parser-srijan-1.0-SNAPSHOT/bin/csv-parser-srijan &
    - npm install --save-dev cypress-file-upload
    - npx cypress run --browser chrome
A recommended approach is to try to replicate your script (from your pipeline) locally, on your own computer.
That will let you check:
how long those commands take;
whether there is any interactive step, where a command expects user input and waits on stdin.
The second point would explain why, in an unattended environment like a pipeline, "it will not move forward".
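If a blocking command does turn out to be the cause, a small, hedged tweak to the job itself can at least turn an endless hang into a visible failure; the timeout value below is only an example, and the stdin redirect just guarantees nothing can sit waiting for user input:

cypress-test:
  stage: test
  timeout: 30m   # example value: fail the job instead of hanging indefinitely
  image: registry.gitlab.com/sahajsoft/gurukul2022/csv-parser-srijan:latestSrigin2
  dependencies:
    - build
  script:
    - unzip -q build/distributions/csv-parser-srijan-1.0-SNAPSHOT.zip -d build/distributions
    - sh build/distributions/csv-parser-srijan-1.0-SNAPSHOT/bin/csv-parser-srijan &
    - npm install --save-dev cypress-file-upload
    - npx cypress run --browser chrome < /dev/null   # nothing here should wait on stdin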
I am trying to split my pytest runs within a GitLab stage to reduce the time they take. However, I am having difficulty getting the full coverage report. I am unable to use pytest-xdist or pytest-parallel because of the way our database is set up.
Build:
  stage: Build
  script:
    - *setup-build
    - *build
    - touch banana.xml # where I write the code coverage collected by pytest-cov
    - *push
  artifacts:
    paths:
      - banana.xml
    reports:
      cobertura: banana.xml

Unit Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    - docker-compose run $DOCKER_IMG_NAME pytest -m unit --cov-report xml:banana.xml --cov=app --cov-append;
  needs:
    - Build

Validity Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    - docker-compose run $DOCKER_IMG_NAME pytest -m validity --cov-report xml:banana.xml --cov=app --cov-append;
  needs:
    - Build
After these two stages run (Build with 1 job, Test with 2 jobs), I go to download the banana.xml file from GitLab, but there's nothing in it, even though the jobs say Coverage XML written to file banana.xml.
Am I missing something about how to get the total coverage written to an artifact file when splitting marked tests across jobs in a GitLab pipeline stage?
If you want to combine the coverage reports of several different jobs, you will have to add another stage that runs after your tests. Here is an example that works for me:
# You need to define the Test stage before the Coverage stage
stages:
  - Test
  - Coverage

# Your first test job
unit_test:
  stage: Test
  script:
    - COVERAGE_FILE=.coverage.unit coverage run --rcfile=.coveragerc -m pytest ./unit
  artifacts:
    paths:
      - .coverage.unit

# Your second test job, which will run in parallel
validity_test:
  stage: Test
  script:
    - COVERAGE_FILE=.coverage.validity coverage run --rcfile=.coveragerc -m pytest ./validity
  artifacts:
    paths:
      - .coverage.validity

# Your coverage job, which will combine the coverage data from the two test jobs and generate a report
coverage:
  stage: Coverage
  script:
    - coverage combine --rcfile=.coveragerc
    - coverage report
    - coverage xml -o coverage.xml
  coverage: '/\d+\%\s*$/'
  artifacts:
    reports:
      cobertura: coverage.xml
You also need to create a .coveragerc file in your repository with the following content, telling coverage.py to use relative file paths, because your tests run on different GitLab runners and their full paths don't match:
[run]
relative_files = True
source =
    ./
Note: in your case it's better to use the coverage command directly (so coverage run -m pytest instead of pytest), because it provides more options, and it's what pytest-cov uses under the hood anyway.
The issue in your file is that you start by creating an empty file and generating a report from it (which contains nothing, since the file is empty), then pass it to both test jobs, which each overwrite it separately with their own coverage report and then never use it.
You need to do it the other way round, as shown in my example: run the tests first, and in a later stage collect the coverage data from both test jobs and generate a report from it.
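Optionally, the coverage job can list the two test jobs under needs; this is a sketch of the same job with that addition, which makes the dependency explicit and lets the job start as soon as both test jobs (and their .coverage.* artifacts) are available:

coverage:
  stage: Coverage
  needs:
    - unit_test
    - validity_test
  script:
    - coverage combine --rcfile=.coveragerc
    - coverage report
    - coverage xml -o coverage.xml
  coverage: '/\d+\%\s*$/'
  artifacts:
    reports:
      cobertura: coverage.xml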
I have two simple stages (build and test), and I want the jobs in the pipeline to run sequentially.
Specifically, I want the test job not to run until the build job has passed completely.
My .gitlab-ci.yml file:
stages:
  - build
  - test

build:
  stage: build
  script:
    - mvn clean package
  only:
    - merge_requests

test:
  stage: test
  services:
  script:
    - mvn verify
    - mvn jacoco:report
  artifacts:
    reports:
      junit:
        - access/target/surefire-reports/TEST-*.xml
    paths:
      - access/target/site/jacoco
    expire_in: 1 week
  only:
    - merge_requests
Can I add
needs:
- build
in the test stage?
Given the simplicity of your build file, I do not think you actively need needs. According to the documentation, stages are executed sequentially.
The pitfall you are in right now is the only reference. The build stage will run for any branch and, because of that, ignore merge requests. If you add an only directive to your build job, you might get the result you are looking for, like:
build:
  stage: build
  script:
    - mvn clean package
  only:
    - merge_requests
    - master # might be main, develop, etc., whatever your long-lived branches are
This way it will not be triggered for each branch but only for merge requests and the long-lived branches; see the only documentation. Now the execution is not assigned to the branch but to the merge request, and you will have your expected outcome (at least, that is what I assume).
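If you still want to make the ordering explicit, adding needs to the test job, as asked about above, is harmless here; a sketch based on the jobs in the question:

test:
  stage: test
  needs:
    - build   # explicit dependency; with only two stages this matches the default stage ordering
  script:
    - mvn verify
    - mvn jacoco:report
  only:
    - merge_requests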
I have a .gitlab-ci.yml that looks like this:
image: "python:3.7"
.python-tag:
tags:
- python
before_script:
- python --version
- pip install -r requirements.txt
- export PYTHONPATH=${PYTHONPATH}:./src
- python -c "import sys;print(sys.path)"
stages:
- Static Analysis
- Local Tests
- Integration Tests
- Deploy
mypy:
stage: Static Analysis
extends:
- .python-tag
script:
- mypy .
pytest-smoke:
stage: Local Tests
extends:
- .python-tag
script:
- pytest -m smoke
int-tests-1:
stage: Integration Tests
when: manual
allow_failure: false
trigger:
project: tests/gitlab-integration-testing-integration-tests
strategy: depend
int-tests-2:
stage: Integration Tests
when: manual
allow_failure: false
trigger:
project: tests/gitlab-integration-testing-integration-tests
strategy: depend
deploy:
stage: Deploy
extends:
- .python-tag
script:
- echo "Deployed!"
The Integration Tests stage has multiple jobs in it that take a decent chunk of time to run. It is unusual for all of the integration tests to need to run, which is why we put a manual flag on these; the specific ones needed are then run manually.
How do I make it so that the Deploy stage requires that one or more of the jobs in Integration Tests has passed? I can either require all of them, as I do now, or none of them, by removing allow_failure: false from the integration test jobs.
I want to require that at least one has passed.
If each job generates an artifact only when the job is successful:
artifacts:
  paths:
    - success.txt
script:
  # generate the success.txt file
you should be able to test whether the file exists in the next stage.
You need to add this (below) in the next stage to be able to read the file:
artifacts:
  paths:
    - success.txt
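A sketch of that idea end to end. Note that the int-tests-* jobs in the question are trigger jobs, which cannot run a script or upload artifacts themselves, so this assumes the integration tests run in ordinary jobs; the test command and marker file below are illustrative only:

int-tests-1:
  stage: Integration Tests
  when: manual
  script:
    - pytest -m integration_scenario_1   # hypothetical command standing in for the real suite
    - touch success.txt                  # only reached if the tests above passed
  artifacts:
    paths:
      - success.txt

deploy:
  stage: Deploy
  script:
    - test -f success.txt                # fail the deploy unless at least one marker file was produced
    - echo "Deployed!"

Any integration job that ran and passed uploads success.txt, so the deploy job's check succeeds as soon as at least one of them has passed; if none were run (or none passed), the check fails and the deployment does not happen.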