How to dynamically set parallel:matrix in GitLab YAML?

I have a .NET solution with multiple projects, and each project has a separate test project. Currently, whenever I add a new project, I add a test project for it and have to manually add a new job to the pipeline's test stage.
I want to write a test job that runs all test projects in parallel without me having to register each new one manually. Recently, I discovered that GitLab has a parallel:matrix keyword, which seems like a step in the right direction. I am already working on using it instead of separate implementations of a reusable script, but if possible I also want to discover the test projects in my test folder dynamically.
Current re-usable test script:
.test: &test
  allow_failure: false
  dependencies:
    - build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  script:
    - echo ${TEST_NAME}
    - echo ${RESULT_FILE_NAME}
    - dotnet test --no-restore ./Tests/${TEST_NAME} -l "JUnit;LogFilePath=../../TestResults/${RESULT_FILE_NAME}.xml"
Example implementation:
Test1:
  <<: *test
  stage: test
  variables:
    TEST_NAME: "test1"
    RESULT_FILE_NAME: "test1_results"
  artifacts:
    paths:
      - ./TestResults/
What I'm trying to achieve:
test:
  stage: test
  dependencies:
    - build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  before_script:
    - TEST_NAMES=["test1", "test2"] # want to find these dynamically
  script:
    - ls
    - echo ${TEST_NAME}
    - echo ${RESULT_FILE_NAME}
    - dotnet test --no-restore ./Tests/${TEST_NAME} -l "JUnit;LogFilePath=../../TestResults/${TEST_NAME}.xml"
  parallel:
    matrix:
      - TEST_NAME: TEST_NAMES
In my current test job (added as exp_test until it can fully replace test), I expected 2 parallel tests to run, but instead only 1 runs, using the literal name of the variable rather than expanding it as an array.
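For reference, the literal form of parallel:matrix, with the values written directly into the YAML, does fan out into one job per value. A trimmed-down sketch of the job above:

test:
  stage: test
  image: mcr.microsoft.com/dotnet/sdk:6.0
  script:
    - dotnet test --no-restore ./Tests/${TEST_NAME} -l "JUnit;LogFilePath=../../TestResults/${TEST_NAME}.xml"
  parallel:
    matrix:
      - TEST_NAME: ["test1", "test2"] # literal values: GitLab creates one job per entry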
I found an answer on here that suggests dynamically creating a child pipeline YAML, but I want to see whether it's possible to use parallel:matrix for this.
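For reference, that dynamic child pipeline workaround could look roughly like the sketch below. The generator job name, the single test stage, and the assumption that every test project sits in its own directory under ./Tests are illustrative, not taken from the question.

generate-tests:
  stage: test
  image: mcr.microsoft.com/dotnet/sdk:6.0
  script:
    # Emit a child pipeline whose parallel:matrix lists the discovered test projects.
    - |
      {
        echo 'run-tests:'
        echo '  image: mcr.microsoft.com/dotnet/sdk:6.0'
        echo '  script:'
        echo '    - dotnet test --no-restore ./Tests/${TEST_NAME} -l "JUnit;LogFilePath=../../TestResults/${TEST_NAME}.xml"'
        echo '  parallel:'
        echo '    matrix:'
        echo '      - TEST_NAME:'
        for d in Tests/*/; do echo "          - $(basename "$d")"; done
      } > child-tests.yml
  artifacts:
    paths:
      - child-tests.yml

run-tests-child:
  stage: test
  needs:
    - generate-tests
  trigger:
    include:
      - artifact: child-tests.yml
        job: generate-tests
    strategy: depend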

Related

Gitlab Ci include local only executes last

I have a lot of different Android flavors of one app to build, so I want to split the build up into different YAML files. This is my base file, .gitlab-ci.yml:
image: alvrme/alpine-android:android-29-jdk11

variables:
  GIT_SUBMODULE_STRATEGY: recursive

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
  - chmod +x ./gradlew

cache:
  key: "$CI_COMMIT_REF_NAME"
  paths:
    - .gradle/

stages:
  - test
  - staging
  - production
  - firebaseUpload
  - slack

include:
  - local: '/.gitlab/bur.yml'
  - local: '/.gitlab/vil.yml'
  - local: '/.gitlab/kom.yml'
I am currently trying to build 3 different flavors, but I don't know why only the last included YAML file gets executed; the first 2 are ignored.
/.gitlab/bur.yml
unitTests:
  stage: test
  script:
    - ./gradlew testBurDevDebugUnitTest

/.gitlab/vil.yml
unitTests:
  stage: test
  script:
    - ./gradlew testVilDevDebugUnitTest

/.gitlab/kom.yml
unitTests:
  stage: test
  script:
    - ./gradlew testKomDevDebugUnitTest
What you observe looks like the expected behavior:
Your three files .gitlab/{bur,vil,kom}.yml contain the same job name unitTests.
So, each include overrides the specification of this job.
As a result, you only get 1 unitTests job in the end, with the specification from the last YAML file.
Thus, the simplest fix would be to change this job name, e.g.:
unitTests-kom:
  stage: test
  script:
    - ./gradlew testKomDevDebugUnitTest
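A possible layout once each file has its own job name, sketched under the assumption that the three flavor jobs only differ in the Gradle task, is to factor the shared parts into a hidden template in the base file and extend it from each include:

# .gitlab-ci.yml (or a shared include) defines the hidden template:
.unitTests:
  stage: test

# /.gitlab/bur.yml
unitTests-bur:
  extends: .unitTests
  script:
    - ./gradlew testBurDevDebugUnitTest

# /.gitlab/vil.yml
unitTests-vil:
  extends: .unitTests
  script:
    - ./gradlew testVilDevDebugUnitTest

# /.gitlab/kom.yml
unitTests-kom:
  extends: .unitTests
  script:
    - ./gradlew testKomDevDebugUnitTest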

Is there any way to dynamically edit a variable in one job and then pass it to a trigger/bridge job in Gitlab CI?

I need to pass a file path to a trigger job where the file path is found within a specified json file in a separate job. Something along the lines of this...
stages:
  - run_downstream_pipeline

variables:
  FILE_NAME: default_file.json

.get_path:
  stage: run_downstream_pipeline
  needs: []
  only:
    - schedules
    - triggers
    - web
  script:
    - apt-get install jq
    - FILE_PATH=$(jq '.file_path' $FILE_NAME)

run_pipeline:
  extends: .get_path
  variables:
    PATH: $FILE_PATH
  trigger:
    project: my/project
    branch: staging
    strategy: depend
I can't seem to find any workaround for this, as using extends won't work since GitLab won't allow a script section in a trigger job.
I thought about using the GitLab API trigger method, but I want the status of the downstream pipeline to actually show up in the pipeline UI, and I want the upstream pipeline to depend on the status of the downstream pipeline, which from my understanding is not possible when triggering via the API.
Any advice would be appreciated. Thanks!
You can use artifacts:reports:dotenv for setting variables dynamically for subsequent jobs.
stages:
  - one
  - two

my_job:
  stage: "one"
  script:
    - FILE_PATH=$(jq '.file_path' $FILE_NAME) # In the script, get the file path.
    - echo "FILE_PATH=${FILE_PATH}" >> variables.env # Add the value to a dotenv file.
  artifacts:
    reports:
      dotenv: "variables.env"

example:
  stage: two
  script: "echo $FILE_PATH"

another_job:
  stage: two
  trigger:
    project: my/project
    branch: staging
    strategy: depend
Variables in the dotenv file will automatically be present for jobs in subsequent stages (or that declare needs: for the job).
You can also pull artifacts into child pipelines, in general.
But be warned: you probably don't want to override the PATH variable, since that's a special variable the shell uses to locate executables.
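If the downstream pipeline itself needs the value, the trigger job can also forward it explicitly through its variables section. A minimal sketch, assuming the downstream pipeline expects a variable named DOWNSTREAM_FILE_PATH (that name is illustrative):

another_job:
  stage: two
  needs:
    - job: my_job
      artifacts: true # make the dotenv report from my_job available here
  variables:
    DOWNSTREAM_FILE_PATH: $FILE_PATH # resolved from my_job's dotenv, passed downstream
  trigger:
    project: my/project
    branch: staging
    strategy: depend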

Append pytest coverage to file in Gitlab CI artifacts

I am trying to split my pytest tests across jobs in a GitLab stage to reduce the time it takes to run them. However, I am having difficulty getting the full coverage report. I am unable to use pytest-xdist or pytest-parallel due to the way our database is set up.
Build:
  stage: Build
  script:
    - *setup-build
    - *build
    - touch banana.xml # where I write the code coverage collected by pytest-cov
    - *push
  artifacts:
    paths:
      - banana.xml
    reports:
      cobertura: banana.xml

Unit Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    - docker-compose run $DOCKER_IMG_NAME pytest -m unit --cov-report xml:banana.xml --cov=app --cov-append;
  needs:
    - Build

Validity Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    - docker-compose run $DOCKER_IMG_NAME pytest -m validity --cov-report xml:banana.xml --cov=app --cov-append;
  needs:
    - Build
After these two stages run (Build: 1 job, Test: 2 jobs), I go to download the banana.xml file from GitLab, but there's nothing in it, even though the jobs say "Coverage XML written to file banana.xml".
Am I missing something with how to get the total coverage written to an artifact file when splitting up marked tests in a gitlab pipeline stage?
If you want to combine the coverage reports of several different jobs, you have to add another stage that runs after your tests. Here is a working example:
# You need to define the Test stage before the Coverage stage
stages:
  - Test
  - Coverage

# Your first test job
unit_test:
  stage: Test
  script:
    - COVERAGE_FILE=.coverage.unit coverage run --rcfile=.coveragerc -m pytest ./unit
  artifacts:
    paths:
      - .coverage.unit

# Your second test job, which will run in parallel
validity_test:
  stage: Test
  script:
    - COVERAGE_FILE=.coverage.validity coverage run --rcfile=.coveragerc -m pytest ./validity
  artifacts:
    paths:
      - .coverage.validity

# Your coverage job, which combines the coverage data from the two test jobs and generates a report
coverage:
  stage: Coverage
  script:
    - coverage combine --rcfile=.coveragerc
    - coverage report
    - coverage xml -o coverage.xml
  coverage: '/\d+\%\s*$/'
  artifacts:
    reports:
      cobertura: coverage.xml
You also need to create a .coveragerc file in your repository with the following content, to tell coverage.py to use relative file paths. Your tests run on different GitLab runners, so their full paths don't match:
[run]
relative_files = True
source =
    ./
Note: in your case it's better to use the coverage command directly (so coverage run -m pytest instead of pytest) because it provides more options, and it's what pytest-cov uses under the hood anyway.
The issue in your file is that you start by creating an empty file, try to generate a report from it (which produces nothing, since the file is empty), and then pass it to both test jobs, which each overwrite it with their own local coverage report and never combine them.
You need to do it the other way around, as shown in my example: run the tests first, and in a later stage collect the coverage data from both test jobs and generate a report from it.
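As an optional refinement (my own addition, not required for the fix), the coverage job can declare needs on the two test jobs so it starts as soon as both finish and only downloads their artifacts:

coverage:
  stage: Coverage
  needs:
    - unit_test
    - validity_test
  script:
    - coverage combine --rcfile=.coveragerc # merges the .coverage.* data files into one
    - coverage report
    - coverage xml -o coverage.xml
  coverage: '/\d+\%\s*$/'
  artifacts:
    reports:
      cobertura: coverage.xml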

Create two versions of my software in Gitlab CI

I am setting up Gitlab CI for the first time, and I want it to create two releases for each commit. My .gitlab-ci.yml file looks like this:
stages:
  - compile
  - test
  - build release

compile apps:
  stage: compile
  script:
    - scons
  artifacts:
    paths:
      - deliverables/

check version:
  stage: test
  script:
    - check_version.sh

build releasefile:
  stage: build release
  script:
    - build_release.sh
  artifacts:
    paths:
      - release/
For my second version, I want to run scons in compile apps with a flag (scons --special) and then run all subsequent jobs on those deliverables as well. My deliverables are named the same for both versions, and if I just create jobs for both the normal and the special version, my "check version" job will check the normal version twice. My options:
1. Create a really long pipeline that runs everything for the normal version and then everything for the special version. I don't like this solution; it looks hideous and can make errors less visible when the pipeline is expanded later.
2. Change my scons and shell scripts.
3. Create two pipelines on each commit, one with a GitLab CI flag and one without (I don't know how to do this).
4. Create a "split" pipeline in which each job only uses artifacts from the job it is based on (I don't know how to do this).
For the last case, my pipeline would look something like this:
-----+------ Compile normal ----- Check version ----- Build releasefile
     |
     +------ Compile special ----- Check version ----- Build releasefile
I would prefer option 3 or 4 and I've been looking at Directed Acyclic Graph Pipelines, but I can't get those to work in the way I want. Is there a way to do either of these?
You can do this by creating a DAG pipeline with needs. If you don't use needs (or the older dependencies), all artifacts from previous stages will be downloaded, which in this case is problematic due to the overlap in the artifact folders / names.
You can also use extends to avoid duplication in your job declarations. The full .gitlab-ci.yml config could be something like this:
stages:
  - compile
  - test
  - build release

compile apps:
  stage: compile
  script:
    - scons
  artifacts:
    paths:
      - deliverables/

compile apps special:
  extends:
    - compile apps
  script:
    - scons --special

check version:
  stage: test
  script:
    - check_version.sh
  needs:
    - compile apps

check version special:
  extends:
    - check version
  needs:
    - compile apps special

build releasefile:
  stage: build release
  script:
    - build_release.sh
  artifacts:
    paths:
      - release/
  needs:
    - compile apps
    - check version

build releasefile special:
  extends:
    - build releasefile
  needs:
    - compile apps special
    - check version special
extends works well in this context because it doesn't combine YAML list items; it overwrites them (keys with different names would be merged, on the other hand). So in this case, the whole script and needs declarations are overridden by the inheriting job.
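To illustrate how that merge plays out (my reading of the extends behavior, not part of the config above), "compile apps special" ends up equivalent to writing it out in full:

compile apps special:
  stage: compile # inherited: scalar keys are merged in from "compile apps"
  script:
    - scons --special # the script list is replaced wholesale, not appended to
  artifacts:
    paths:
      - deliverables/ # inherited, since the special job doesn't redefine artifacts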

when: manual in downstream pipeline causes upstream to report failure

I am using a multi-project pipeline to separate my end-to-end tests from my main application code. My end-to-end tests, if I run the full suite, can take a significant amount of time. I've nicely broken them into various groupings using pytest and its mark feature.
I'd like to be able to run specific scenarios from within the pipeline now by setting each of these different scenarios to when: manual. Unfortunately, as soon as I add this, the child pipeline reports failure to the parent and progress stops. I can manually run each section, as expected, but even then success is not reported back to the parent pipeline.
This is an example of the pipeline. The Integration Tests step has reported a failure, and I’ve manually run the Fast Tests from the downstream pipeline. It has passed, and as the only job in the pipeline the entire downstream pipeline passes. Yet the parent still reports failure so Deploy doesn’t get run.
If I remove when: manual from the downstream pipeline, Integration Tests will run the full test suite, pass, and Deploy will move on as expected.
This is the parent project pipeline.
image: "python:3.7"

before_script:
  - python --version
  - pip install -r requirements.txt
  - export PYTHONPATH=${PYTHONPATH}:./src
  - python -c "import sys;print(sys.path)"

stages:
  - Static Analysis
  - Local Tests
  - Integration Tests
  - Deploy

mypy:
  stage: Static Analysis
  script:
    - mypy .

flake8:
  stage: Static Analysis
  script:
    - flake8 --max-line-length=88

pytest-smoke:
  stage: Local Tests
  script:
    - pytest -m smoke

pytest-unit:
  stage: Local Tests
  script:
    - pytest -m unittest

pytest-slow:
  stage: Local Tests
  script:
    - pytest -m slow

pytest-fast:
  stage: Local Tests
  script:
    - pytest -m fast

int-tests:
  stage: Integration Tests
  trigger:
    project: andy/gitlab-integration-testing-integration-tests
    strategy: depend

deploy:
  stage: Deploy
  when: manual
  script:
    - echo "Deployed!"
The end to end tests pipeline looks like this:
image: "python:3.7"

before_script:
  - python --version
  - pip install -r requirements.txt
  - export PYTHONPATH=${PYTHONPATH}:./src
  - python -c "import sys;print(sys.path)"

stages:
  - Fast Tests

pytest-smoke:
  stage: Fast Tests
  when: manual
  script:
    - pytest -m smoke
How can I selectively (manually) run downstream jobs and report success back to the parent pipeline? Without when: manual on that last job in the end-to-end (downstream) pipeline, it performs exactly as I want. But in the real-world pipeline I have, I don't want to run the end-to-end tests for every change; usually it is selected scenarios.
I am currently running: GitLab Enterprise Edition 13.2.2-ee