I have a repo QA/tests, and I want to run all of its jobs whenever there is a push to this repo.
I used a script to generate the jobs dynamically:
job-generator:
  stage: generate
  tags:
    - kuber
  script:
    - scripts/generate-job.sh > generated-job.yml
  artifacts:
    paths:
      - generated-job.yml

main:
  trigger:
    include:
      - artifact: generated-job.yml
        job: job-generator
    strategy: depend
Next, I have another repo products/first, and I want to run a specific job in QA/tests on every push to products/first, so I tried:
stages:
  - test

tests:
  stage: test
  variables:
    TARGET: first
  trigger:
    project: QA/tests
    branch: master
    strategy: depend
Then I tried to define a global TARGET: all variable in my main gitlab-ci.yml and override it with the TARGET: first in the above YAML.
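i.e. roughly like this in the QA/tests gitlab-ci.yml (a sketch of what I tried):

variables:
  TARGET: all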
generate-job.sh:
#!/bin/bash
PRODUCTS=("first" "second" "third")
for P in "${PRODUCTS[@]}"; do
  cat << EOF
$P:
  stage: test
  tags:
    - kuber
  script:
    - echo -e "Hello from $P"
  rules:
    - if: '"$TARGET" == "all"'
      when: always
    - if: '"$TARGET" == $P'
      when: always
EOF
done
But no results: the downstream pipeline doesn't have any jobs at all!
Any idea?
I am not sure if this is helpful, but from the outside this looks like an overcomplicated approach. I have to say I have limited knowledge, and my answer is based on these assumptions:
the QA/tests repository contains certain test cases for all repositories
QA/tests has the sole purpose of containing the tests, not an overview of the projects, etc.
My Suggestion
As QA/tests only contains tests which should be executed against each project, I would create a Docker image out of it which contains all the tests and can actually execute them (let's call it qa-tests:latest).
Within my projects I would add a job which uses this image together with the project's source code and executes the tests:
qa-test:
  image: qa-tests:latest
  script:
    - echo "command to execute scripts"
  # add rules here accordingly
This would solve the issue of running the tests on each push to the repositories. For easier usage, I could create a QA-Tests.gitlab-ci.yml file which can be included by the sub-projects with:
include:
  - project: QA/tests
    file: QA-Tests.gitlab-ci.yml
This way you do not need to update each repository if the CI snippet changes.
Finally, to trigger the execution on each push, you only need to trigger the pipelines of the sub-projects from QA/tests.
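For example, a minimal sketch of such fan-out trigger jobs inside QA/tests (the project paths are assumptions, adjust them to your group structure):

trigger-first:
  trigger:
    project: products/first
    branch: master

trigger-second:
  trigger:
    project: products/second
    branch: master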
Disclaimer
As I said, I have only a limited view, as the goal is described but not the motivation. With this approach you remove some of the cross-project trigger calls, mainly the ones triggering from sub-projects to QA/tests. It also gives a clear structure, but it might not fit your needs.
I solved it with:
gitlab-ci.yml:
variables:
  TARGET: all

job-generator:
  stage: generate
  tags:
    - kuber
  script:
    - scripts/generate-job.sh > generated-job.yml
  artifacts:
    paths:
      - generated-job.yml

main:
  variables:
    CHILD_TARGET: $TARGET
  trigger:
    include:
      - artifact: generated-job.yml
        job: job-generator
    strategy: depend
and use CHILD_TARGET in my generate-job.sh:
#!/bin/bash
PRODUCTS=("first" "second" "third")
for P in "${PRODUCTS[@]}"; do
  cat << EOF
$P:
  stage: test
  tags:
    - kuber
  script:
    - echo -e "Hello from $P"
  rules:
    - if: '\$CHILD_TARGET == "all"'
      when: always
    - if: '\$CHILD_TARGET == "$P"'
      when: always
EOF
done
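For reference, the generated-job.yml produced by this script then looks roughly like this for the first product (the escaped \$ keeps $CHILD_TARGET unexpanded, so the rule is evaluated by GitLab in the child pipeline rather than by bash at generation time):

first:
  stage: test
  tags:
    - kuber
  script:
    - echo -e "Hello from first"
  rules:
    - if: '$CHILD_TARGET == "all"'
      when: always
    - if: '$CHILD_TARGET == "first"'
      when: always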
So I could call it from other projects like this:
stages:
  - test

e2e-tests:
  stage: test
  variables:
    TARGET: first
  trigger:
    project: QA/tests
    branch: master
    strategy: depend
Related
Suppose we have a .gitlab-ci.yml file that reads in a common.gitlab-ci.complete.yml file:
include: common.gitlab-ci.complete.yml
Suppose that common.gitlab-ci.complete.yml has the following stages:
stages:
  - build
  - unit-test
  - mutation-test
In the .gitlab-ci.yml file, how do we ignore the unit-test and mutation-test stages? Would it be something like:
include: common.gitlab-ci.complete.yml

stages:
  - build
  #- unit-test
  #- mutation-test
Would these stages override the one in the common.gitlab-ci.complete.yml file?
(I have not yet checked this code for accuracy. Treat it as pseudocode.)
You're asking to remove all jobs in a stage. If you look at the documentation for the include: keyword, you'll note that the YAML is always merged with your current CI script. So no, you can't just add a statement like "Disable all jobs in stage A".
Here's an idea... recall that
If a stage is defined but no jobs use it, the stage is not visible in the pipeline...
https://docs.gitlab.com/ee/ci/yaml/#stages
So what you could do is add a mechanism for disabling all jobs in that stage. Then the stage will disappear as well. That mechanism might be checking a variable in rules.
So if you wrote the common.gitlab-ci.complete.yml with templates that check a variable to disable the job...
stages:
  - build
  - unit-test
  - package

.unit-test-stage:
  stage: unit-test
  rules:
    - if: $DISABLE_UNIT_TESTS
      when: never
    - when: on_success   # fallback: run normally when the variable is not set

unit-test-job-a:
  extends: .unit-test-stage
  script:
    - echo "do stuff"

unit-test-job-b:
  extends: .unit-test-stage
  script:
    - echo "do more stuff"
Then maybe your top-level .gitlab-ci.yml would simply set that known variable in its global variables section like this:
include: common.gitlab-ci.complete.yml

variables:
  DISABLE_UNIT_TESTS: "1"
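As a variation (a sketch I have not tested here), the rule can compare against an explicit value so that only that exact value disables the jobs:

.unit-test-stage:
  stage: unit-test
  rules:
    - if: '$DISABLE_UNIT_TESTS == "1"'
      when: never
    - when: on_success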
I am trying to use "rules" and "only" keywords to define my pipeline behaviors between merge requests, pushes into dev branch and pushes into master branch.
I noticed several weird behaviors in the Gitlab CI, let's see in my merge_requests pipelines.
With this gitlab-ci.yml file, without any rule, all the jobs are displayed and run.
image: "python:3.8"
stages:
- test_without_only_policy
- test_with_only_policy
test_without_only_policy:
stage: test_without_only_policy
when: manual
script:
- echo "Yay, I am in the pipeline"
test_with_only_policy:
stage: test_with_only_policy
script:
- echo "I am always in the pipeline"
Everything is working as expected, great 👍
With this gitlab-ci.yml file, with an "only" policy in the 2nd job, the 1st job without rules disappears.
image: "python:3.8"
stages:
- test_without_only_policy
- test_with_only_policy
test_without_only_policy:
stage: test_without_only_policy
when: manual
script:
- echo "No, I am not in the pipeline anymore"
test_with_only_policy:
stage: test_with_only_policy
only:
- merge_requests
script:
- echo "I am always in the pipeline"
Why did the 1st job without the rules or "only" policy disappear?
With this gitlab-ci.yml file, with a "rules" keyword in the 2nd job, the 1st job without rules disappears.
image: "python:3.8"
stages:
- test_without_only_policy
- test_with_only_policy
test_without_only_policy:
stage: test_without_only_policy
when: manual
script:
- echo "No, I am not in the pipeline anymore"
test_with_only_policy:
stage: test_with_only_policy
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
when: manual
script:
- echo "I am always in the pipeline"
Why did the 1st job without the rules or "only" policy disappear?
Thank you for your help, I don't understand why my job disappears when I add rules in other jobs.
According to the documentation for Merge Request Pipelines, if you have a pipeline where only some jobs use only/except/rules with merge_requests, only those jobs will be in the pipeline if it is a Merge Request pipeline. The other jobs will be left out.
Here's the example from the docs:
build:
  stage: build
  script: ./build
  only:
    - main

test:
  stage: test
  script: ./test
  only:
    - merge_requests

deploy:
  stage: deploy
  script: ./deploy
  only:
    - main
In this example, the only job that specifies it should run for Merge Request pipelines is the test job. For standard push pipelines, the build and deploy jobs will run. But when a new merge request is created, a change is pushed to a branch that is the source branch of an existing merge request, or you hit the Run Pipeline button on the Pipelines tab of a Merge Request, a Merge Request pipeline runs, and only the test job is included in it.
Here's another example with a different scenario:
A:
  stage: some_stage
  only:
    - branches
    - tags
    - merge_requests
  script:
    - script.sh

B:
  stage: some_other_stage
  only:
    - branches
    - tags
    - merge_requests
  script:
    - script.sh

C:
  stage: third_stage
  only:
    - merge_requests
  script:
    - script.sh
In this example, jobs A and B run for all pipeline types (push, tags, merge_requests, etc.), but job C only runs for merge_request pipelines.
That's why your test_without_only_policy job will never be in a pipeline where test_with_only_policy runs. test_with_only_policy specifically runs for Merge Request events, but test_without_only_policy does not.
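If you want both jobs to appear in the merge request pipeline, one option (a sketch following the docs behavior above) is to give the first job the same only policy:

test_without_only_policy:
  stage: test_without_only_policy
  when: manual
  only:
    - merge_requests
  script:
    - echo "Now I am in the merge request pipeline too"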
I am trying to add needs between jobs in the GitLab CI YAML configuration file.
stages:
  - build
  - test
  - package
  - deploy

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches
  ...

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
  ...

docker-build:
  stage: package
  needs: [ "test" ]
  only:
    - master
  ...

deploy-stage:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  ...

deploy-prod:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  when: manual
  ...
I have used the GitLab CI online lint tool to check my syntax, and it is correct.
But when I push the code, it always complains:
'test' job needs 'maven-build' job
but it was not added to the pipeline
You can also test your .gitlab-ci.yml in CI Lint
The GitLab CI did not run at all.
Update: Finally I made it work. I think the position of needs is sensitive; after moving all the needs directly under the stage keyword, it works. My original scripts had some other configuration between them.
CI jobs that depend on each other need to have the same limitations!
In your case that means sharing the same only targets:
stages:
  - build
  - test

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
    - branches
that should work from my experience^^
Finally I made it. I think the needs position is sensitive, move all needs under the stage, it works
Actually... that might no longer be the case with GitLab 14.2 (August 2021):
Stageless pipelines
Using the needs keyword in your pipeline configuration helps to reduce cycle times by ignoring stage ordering and running jobs without waiting for others to complete.
Previously, needs could only be used between jobs on different stages.
In this release, we’ve removed this limitation so you can define a needs relationship between any job you want.
As a result, you can now create a complete CI/CD pipeline without using stages by including needs in every job to implicitly configure the execution order.
This lets you define a less verbose pipeline that takes less time to create and can run even faster.
See Documentation and Issue.
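For illustration, a minimal stageless pipeline might look like this (a sketch; the job names are made up):

build-job:
  needs: []
  script:
    - echo "build"

test-job:
  needs: ["build-job"]
  script:
    - echo "test"

deploy-job:
  needs: ["test-job"]
  script:
    - echo "deploy"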
The rules in both jobs should be the same; otherwise GitLab cannot create a job dependency between the jobs when the trigger rules are different.
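For example, a minimal sketch with the same (assumed) merge request rule on both jobs:

stages:
  - build
  - test

build:
  stage: build
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "build"

test:
  stage: test
  needs: ["build"]
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "test"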
I don't know why, but if the jobs are in different stages (as in my case), you have to define the jobs that will run later with a "." at the start of their names.
Another interesting thing is that GitLab's own CI/CD Lint online editor does not complain that there is an error, so you have to start the pipeline to see the error.
Below, notice the "." in ".success_notification" and ".failure_notification"
stages:
  - prepare
  - build_and_test
  - deploy
  - notification

#SOME CODE

build-StandaloneWindows64:
  <<: *build
  image: $IMAGE:$UNITY_VERSION-windows-mono-$IMAGE_VERSION
  variables:
    BUILD_TARGET: StandaloneWindows64

.success_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh success $WEBHOOK_URL
  when: on_success

.failure_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh failure $WEBHOOK_URL
  when: on_failure
#SOME CODE
I am trying to set up a CI with minimal code duplication using .gitlab-ci.yml.
To achieve that, I am separating the configuration into separate files and reusing the parts that are common.
I have a separate repository with GitLab CI settings, gitlab-ci, and several projects that use it to form their own CI pipelines.
Contents of gitlab-ci repository
template_jobs.yml:
.sample:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      when: on_success
    - when: never
jobs_architectureA.yml:
include:
  - local: '/template_jobs.yml'

.script_core: &script_core
  - echo "Running stage"

test_archA:
  extends:
    - .sample
  stage: test
  tags:
    - architectureA
  script:
    - *script_core
jobs_architectureB.yml:
include:
  - local: '/template_jobs.yml'

.script_core: &script_core
  - echo "Running stage"

test_archB:
  extends:
    - .sample
  stage: test
  tags:
    - architectureB
  script:
    - *script_core
Project with code contents:
In the actual project (separate repositories per project, and I have a lot of them), I have the following:
.gitlab-ci.yml:
stages:
  - test

include:
  - project: 'gitlab-ci'
    file: '/jobs_architectureA.yml'
  - project: 'gitlab-ci'
    file: '/jobs_architectureB.yml'
This configuration works fine and allows me to include only some architectures for some modules while sharing the rules between the job templates.
However, it's easy to notice one code duplication: both jobs_architectureA.yml and jobs_architectureB.yml contain a common section:
.script_core: &script_core
  - echo "Running stage"
It would be ideal to move it into a separate file, template_scripts.yml, and include it from both jobs_architectureA.yml and jobs_architectureB.yml. However, that results in invalid YAML (at least from GitLab's point of view).
From that I conclude that I can share the rules, as the mechanism for using them is the extends keyword; however, I am not able to do the same with the scripts, as they use the &/* anchoring mechanism at the YAML level.
Ideally, I want something along the lines of:
Contents of the ideal (conceptually) gitlab-ci repository
template_jobs.yml:
.sample:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      when: on_success
    - when: never
template_scripts.yml:
.script_core: &script_core
  - echo "Running stage"
jobs_architectureA.yml:
include:
  - local: '/template_jobs.yml'
  - local: '/template_scripts.yml'

test_archA:
  extends:
    - .sample
  stage: test
  tags:
    - architectureA
  script:
    - *script_core # this becomes invalid, as script_core is in the other file, even though it is included at the top
jobs_architectureB.yml:
include:
  - local: '/template_jobs.yml'
  - local: '/template_scripts.yml'

test_archB:
  extends:
    - .sample
  stage: test
  tags:
    - architectureB
  script:
    - *script_core # this becomes invalid, as script_core is in the other file, even though it is included at the top
Am I doing something wrong?
Am I hitting a limitation of the GitLab mechanism? Is it the implementation of the include directive in this specific YAML flavor that limits me?
Do I have options to achieve something close to the desired behaviour?
Note: while this might not look like a big deal, in reality I have many more pieces to the scripts, and the actual script is much larger. So, currently, the code is duplicated all over the place, which is very prone to mistakes.
My solution is to not include template_jobs.yml and template_scripts.yml directly in jobs_architectureA.yml, but only in the "final" .gitlab-ci.yml.
Taking your example, /template_jobs.yml and /template_scripts.yml do not change.
jobs_architectureA.yml loses the include:
test_archA:
  extends:
    - .sample
  stage: test
  tags:
    - architectureA
  script:
    - *script_core # this becomes invalid, as script_core is in the other file, even though it is included at the top
and .gitlab-ci.yml becomes:
stages:
  - test

include:
  - local: '/template_jobs.yml'
  - local: '/template_scripts.yml'
  - project: 'gitlab-ci'
    file: '/jobs_architectureA.yml'
  - project: 'gitlab-ci'
    file: '/jobs_architectureB.yml'
in reality, I have many more pieces to the scripts, and the actual script is much larger
Adding to Cyril's solution, GitLab 13.12 (May 2021) can help scale those includes:
Support wildcards when including YAML CI/CD configuration files
The includes: keyword for CI/CD pipelines lets you break one long .gitlab-ci.yml file into multiple smaller files to increase readability.
It also makes it easier to reuse configuration in multiple places.
Frequently there are multiple files included into a single pipeline, and they all might be stored in the same place.
In this release, we add support to use the * wildcard with the local includes: keyword. You can now make your includes: sections more dynamic, less verbose, and easier to read, check out how we are dogfooding it in GitLab.
See Documentation and Issue.
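For example, a sketch of a wildcard include (the directory layout is an assumption):

include:
  - local: '/ci/templates/*.yml'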
The extends: keyword can be given multiple jobs to extend from (on recent versions of GitLab). You can simply do the following:
include:
  - local: '/template_jobs.yml'

.script_core:
  script:
    - echo "Running stage"

test_archA:
  extends:
    - .sample
    - .script_core
  stage: test
  tags:
    - architectureA
And after that, move .script_core into an included file.
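That is, the included file (template_scripts.yml from the question) would then carry the hidden job with its script key, so that extends can merge it, roughly:

.script_core:
  script:
    - echo "Running stage"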
I currently have two jobs in my CI file which are nearly identical.
The first is for manually compiling a release build from any git branch.
deploy_internal:
  stage: deploy
  script: ....<deploy code>
  when: manual
The second is to be used by the scheduler to release a daily build from the develop branch.
scheduled_deploy_internal:
  stage: deploy
  script: ....<deploy code from deploy_internal copy/pasted>
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
This feels wrong to have all that deploy code repeated in two places. It gets worse. There are also deploy_external, deploy_release, and scheduled variants.
My question:
Is there a way that I can combine deploy_internal and scheduled_deploy_internal such that the manual/scheduled behaviour is retained (DRY basically)?
Alternatively: Is there is a better way that I should structure my jobs?
Edit:
Original title: Deploy job. Execute manually except when scheduled
You can use YAML anchors and aliases to reuse the script.
deploy_internal:
  stage: deploy
  script:
    - &deployment_scripts |
      echo "Deployment Started"
      bash command 1
      bash command 2
  when: manual

scheduled_deploy_internal:
  stage: deploy
  script:
    - *deployment_scripts
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
Or you can use the extends keyword.
.deployment_script:
  script:
    - echo "Deployment started"
    - bash command 1
    - bash command 2

deploy_internal:
  extends: .deployment_script
  stage: deploy
  when: manual

scheduled_deploy_internal:
  extends: .deployment_script
  stage: deploy
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
Use GitLab's default section containing a before_script:
default:
  before_script:
    - ....<deploy code>

job1:
  stage: deploy
  script: ....<code run after the deploy>

job2:
  stage: deploy
  script: ....<code run after the deploy>
Note: the default section fails to function as such if you try to execute a job locally with the gitlab-runner exec command - use YAML anchors instead.