GitLab CI: only the last local include gets executed

I have a lot of different Android flavors to build for one app, so I want to split the build across different YAML files. This is my base file, .gitlab-ci.yml:
image: alvrme/alpine-android:android-29-jdk11

variables:
  GIT_SUBMODULE_STRATEGY: recursive

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
  - chmod +x ./gradlew

cache:
  key: "$CI_COMMIT_REF_NAME"
  paths:
    - .gradle/

stages:
  - test
  - staging
  - production
  - firebaseUpload
  - slack

include:
  - local: '/.gitlab/bur.yml'
  - local: '/.gitlab/vil.yml'
  - local: '/.gitlab/kom.yml'
I am currently trying to build 3 different flavors, but I don't know why only the last included YAML file gets executed; the first two are ignored.
/.gitlab/bur.yml:

unitTests:
  stage: test
  script:
    - ./gradlew testBurDevDebugUnitTest

/.gitlab/vil.yml:

unitTests:
  stage: test
  script:
    - ./gradlew testVilDevDebugUnitTest

/.gitlab/kom.yml:

unitTests:
  stage: test
  script:
    - ./gradlew testKomDevDebugUnitTest

What you observe is the expected behavior:
Your three files .gitlab/{bur,vil,kom}.yml all define the same job name, unitTests.
So each include overrides the specification of this job.
As a result, you end up with a single unitTests job, with the specification from the last included YAML file.
Thus, the simplest fix is to give each job a unique name, e.g. for kom.yml:
unitTests-kom:
  stage: test
  script:
    - ./gradlew testKomDevDebugUnitTest
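(bur.yml and vil.yml would get unitTests-bur and unitTests-vil in the same way.)
If the per-flavor files stay this similar, an alternative worth considering is to drop the includes and fan a single job out per flavor with parallel:matrix. This is a sketch, assuming a GitLab version with matrix support (13.3 or later); the FLAVOR variable name is illustrative, not from the question:

unitTests:
  stage: test
  parallel:
    matrix:
      - FLAVOR: [Bur, Vil, Kom]
  script:
    # each matrix entry becomes its own job, e.g. "unitTests: [Bur]"
    - ./gradlew test${FLAVOR}DevDebugUnitTest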

Related

Run GitLab jobs sequentially

I have two simple stages (build and test), and I want the jobs in the pipeline to run sequentially: the test job should not start until the build job has passed completely.
My GitLab file:
stages:
  - build
  - test

build:
  stage: build
  script:
    - mvn clean package
  only:
    - merge_requests

test:
  stage: test
  services:
  script:
    - mvn verify
    - mvn jacoco:report
  artifacts:
    reports:
      junit:
        - access/target/surefire-reports/TEST-*.xml
    paths:
      - access/target/site/jacoco
    expire_in: 1 week
  only:
    - merge_requests
Can I add

needs:
  - build

in the test stage?
Given the simplicity of your build file, I do not think you actually need needs. Per the documentation, all stages are executed sequentially.
The pitfall you are in right now is the only reference. The build stage will run for any branch and thereby ignore merge requests. If you add an only directive to your build job, you might get the result you are looking for:
build:
  stage: build
  script:
    - mvn clean package
  only:
    - merge_requests
    - master # might be main, develop, etc., whatever your long-living branches are
This way it will not be triggered for each branch, but only for merge requests and the long-living branches; see the only documentation. Now the execution is tied not to the branch but to the merge request, and you will have your expected outcome (at least, that is what I assume).
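That said, if you do want the dependency to be explicit, needs works exactly as you guessed. A sketch using only the jobs from the question (with needs, test also starts as soon as build finishes, instead of waiting for the whole previous stage):

test:
  stage: test
  needs:
    - build
  script:
    - mvn verify
    - mvn jacoco:report
  only:
    - merge_requests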

How do I make a stage pass only if one or more jobs succeed in GitLab CI?

I have a .gitlab-ci.yml that looks like this:
image: "python:3.7"
.python-tag:
tags:
- python
before_script:
- python --version
- pip install -r requirements.txt
- export PYTHONPATH=${PYTHONPATH}:./src
- python -c "import sys;print(sys.path)"
stages:
- Static Analysis
- Local Tests
- Integration Tests
- Deploy
mypy:
stage: Static Analysis
extends:
- .python-tag
script:
- mypy .
pytest-smoke:
stage: Local Tests
extends:
- .python-tag
script:
- pytest -m smoke
int-tests-1:
stage: Integration Tests
when: manual
allow_failure: false
trigger:
project: tests/gitlab-integration-testing-integration-tests
strategy: depend
int-tests-2:
stage: Integration Tests
when: manual
allow_failure: false
trigger:
project: tests/gitlab-integration-testing-integration-tests
strategy: depend
deploy:
stage: Deploy
extends:
- .python-tag
script:
- echo "Deployed!"
The Integration Tests stage has multiple jobs in it that take a decent chunk of time to run, and it is unusual for all of the integration tests to be needed. That is why we put a manual flag on them: only the specific ones needed get run manually.
How do I make it so that the Deploy stage requires that one or more of the jobs in Integration Tests has passed? I can either require all of them, as I have now, or none of them, by removing allow_failure: false from the integration test jobs.
I want to require that at least one has passed.
If each job generates an artifact only when the job is successful:

artifacts:
  paths:
    - success.txt
script:
  # generate the success.txt file

you should be able to test whether the file exists in the next stage. Artifacts from earlier stages are downloaded automatically by default, so the next stage's job can simply check for the file (if that job restricts what it fetches with dependencies or needs, make sure it lists the jobs that produce success.txt).
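A minimal sketch of the whole pattern, adapted to the question's job names (the marker file name and test command are illustrative). One caveat: the question's integration jobs use trigger:, and trigger jobs cannot declare script: or artifacts:, so this assumes ordinary script jobs:

int-tests-1:
  stage: Integration Tests
  when: manual
  script:
    - ./run-integration-suite-1.sh   # hypothetical test command
    - touch success-1.txt            # only created if the tests passed
  artifacts:
    paths:
      - success-1.txt

deploy:
  stage: Deploy
  script:
    # fail the deploy unless at least one integration job left a marker
    - |
      if ls success-*.txt > /dev/null 2>&1; then
        echo "Deployed!"
      else
        echo "No integration job succeeded" && exit 1
      fi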

GitLab 'exists' rules do not take artifacts into consideration

I am having a problem with some GitLab CI jobs where I specify a rule to run only if a file exists.
This is my .gitlab-ci.yml:
stages:
  - build
  - test

# Jobs
build:
  stage: build
  script:
    - dotnet restore --no-cache --force
    - dotnet build --configuration Release --no-restore
  artifacts:
    paths:
      - test/
    expire_in: 1 week

unit_tests:
  stage: test
  script: dotnet vstest test/*UnitTests/bin/Release/**/*UnitTests.dll --Blame
  rules:
    - exists:
        - test/*UnitTests/bin/Release/**/*UnitTests.dll

integration_tests:
  stage: test
  script: dotnet vstest test/*IntegrationTests/bin/Release/**/*IntegrationTests.dll --Blame
  rules:
    - exists:
        - test/*IntegrationTests/bin/Release/**/*IntegrationTests.dll
I want to run unit_tests only when there are *UnitTests.dll files under the bin directories in the test/ folder, and likewise integration_tests only when there are *IntegrationTests.dll files there.
The problem is that both jobs are completely ignored. GitLab seems to evaluate exists to false, as if the rule were evaluated at the beginning of the pipeline rather than at the beginning of the job; at job time these paths do exist, because they are generated in the previous stage and the artifacts are automatically available.
If I remove the rules, unit_tests runs successfully, but integration_tests fails because this particular project has no integration tests.
I've tried replacing exists with changes; same problem.
How can I achieve this conditional job execution?
UPDATE 1: I have an ugly workaround, but the question remains, because exists seems to be evaluated at the beginning of the pipeline, not at the beginning of the job, so anything produced as an artifact is ignored.
This trick works because I can always assume that if there is a csproj, there will be a dll later on as the result of the build stage.
stages:
  - build
  - test

build:
  stage: build
  script:
    - dotnet restore --no-cache --force
    - dotnet build --configuration Release --no-restore
  artifacts:
    paths:
      - test/
    expire_in: 1 week

unit_tests:
  stage: test
  script: dotnet vstest test/*UnitTests/bin/Release/**/*UnitTests.dll --Blame
  rules:
    - exists:
        - test/*UnitTests/*UnitTests.csproj

integration_tests:
  stage: test
  script: dotnet vstest test/*IntegrationTests/bin/Release/**/*IntegrationTests.dll --Blame
  rules:
    - exists:
        - test/*IntegrationTests/*IntegrationTests.csproj
At the time of writing, it appears that GitLab doesn't support the use of artifact files in rules. This issue confirms that it doesn't work.
My own workaround is to remove the conditional rule and instead write a wrapper script that first checks the presence of the file.
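A minimal sketch of that wrapper approach, reusing the paths from the question (skipping quietly when nothing was built is one possible policy; failing instead would be equally valid):

unit_tests:
  stage: test
  script:
    # only run vstest if the build stage actually produced test assemblies
    - |
      if [ -n "$(find test -path '*UnitTests/bin/Release/*' -name '*UnitTests.dll')" ]; then
        dotnet vstest test/*UnitTests/bin/Release/**/*UnitTests.dll --Blame
      else
        echo "No unit test assemblies found, skipping."
      fi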

How to reuse job in .gitlab-ci.yml

I currently have two jobs in my CI file which are nearly identical.
The first is for manually compiling a release build from any git branch.
deploy_internal:
  stage: deploy
  script: ....<deploy code>
  when: manual
The second is to be used by the scheduler to release a daily build from develop branch.
scheduled_deploy_internal:
  stage: deploy
  script: ....<deploy code from deploy_internal copy/pasted>
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
It feels wrong to have all that deploy code repeated in two places, and it gets worse: there are also deploy_external, deploy_release, and their scheduled variants.
My question:
Is there a way that I can combine deploy_internal and scheduled_deploy_internal such that the manual/scheduled behaviour is retained (DRY basically)?
Alternatively: Is there is a better way that I should structure my jobs?
Edit:
Original title: Deploy job. Execute manually except when scheduled
You can use YAML anchors and aliases to reuse the script.
deploy_internal:
  stage: deploy
  script:
    - &deployment_scripts |
      echo "Deployment Started"
      bash command 1
      bash command 2
  when: manual

scheduled_deploy_internal:
  stage: deploy
  script:
    - *deployment_scripts
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
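One caveat: YAML anchors only work within a single file, so they cannot be shared across include:d files. For that case GitLab offers the !reference tag (available since GitLab 13.9); a sketch reusing the hidden .deployment_script job defined in the extends example below:

deploy_internal:
  stage: deploy
  script:
    # pulls the script lines from a hidden job, even across included files
    - !reference [.deployment_script, script]
  when: manual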
Alternatively, you can use the extends keyword.
.deployment_script:
  script:
    - echo "Deployment started"
    - bash command 1
    - bash command 2

deploy_internal:
  extends: .deployment_script
  stage: deploy
  when: manual

scheduled_deploy_internal:
  extends: .deployment_script
  stage: deploy
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
Use GitLab's default section containing a before_script:
default:
  before_script:
    - ....<deploy code>

job1:
  stage: deploy
  script: ....<code that runs after the deploy code>

job2:
  stage: deploy
  script: ....<code that runs after the deploy code>
Note: the default section does not function as such if you try to execute a job locally with the gitlab-runner exec command; use YAML anchors instead in that case.
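For the original goal, one job that is manual except when scheduled, the rules keyword (the newer replacement for only/except) may be the cleanest route. A sketch, assuming the daily build arrives as a scheduled pipeline:

deploy_internal:
  stage: deploy
  script: ....<deploy code>
  rules:
    # run automatically in scheduled pipelines
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: on_success
    # everywhere else, wait for a manual trigger
    - when: manual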

How to delete artifacts directory on gitlab runner after uploading them to gitlab?

I'm trying to create a GitLab job that shows a metric for test code coverage. To do that, I'm creating a .coverage file and placing it in a directory that uploads artifacts. In a subsequent stage, the artifacts are downloaded and consumed by a coverage tool to produce a coverage report. I noticed that the artifacts are not deleted when the GitLab runner finishes the job, and they are bloating my filesystem. How can I remove the artifacts directory after the artifacts have been uploaded?
Here's what we currently have:
stages:
  - test
  - build

before_script:
  - export GITLAB_ARTIFACT_DIR="$(pwd)"/artifacts

[...]

some-test:
  stage: test
  script:
    - [some script that puts something in ${GITLAB_ARTIFACT_DIR}]
  artifacts:
    expire_in: 4 days
    paths:
      - artifacts/

some-other-test:
  stage: test
  script:
    - [some script that puts something in ${GITLAB_ARTIFACT_DIR}]
  artifacts:
    expire_in: 4 days
    paths:
      - artifacts/

[...]

coverage:
  stage: build
  before_script:
  script:
    - [our coverage script]
  coverage: '/TOTAL.*\s+(\d+%)$/'
  artifacts:
    expire_in: 4 days
    paths:
      - artifacts/
    when: always

[...]

after_script:
  - sudo rm -rf "${GITLAB_ARTIFACT_DIR}"
According to https://gitlab.com/gitlab-org/gitlab-runner/issues/4146, after_script does not have access to environment variables exported in before_script or script.
A solution could be to use cache and artifacts simultaneously.
This config will create a new directory named after the job id ($CI_JOB_ID) for each job execution:
stages:
  - test

remote:
  stage: test
  script:
    - mkdir cache-$CI_JOB_ID
    - echo hello > cache-$CI_JOB_ID/foo.txt
  cache:
    key: build-cache
    paths:
      - cache-$CI_JOB_ID/
  artifacts:
    paths:
      - cache-$CI_JOB_ID/foo.txt
    expire_in: 1 week
On the next run, the previous cache-$CI_JOB_ID directory will be removed and replaced by a new one (as $CI_JOB_ID will be different). This keeps only one instance of your cached file around until the next job execution.
Note: you need to prefix the directory name with cache-, otherwise the .gitlab-ci.yml is invalid.
