How to reference variables in job rules in GitLab CI?

I need to reuse variables in GitLab CI job rules:
include:
  - template: "Workflows/Branch-Pipelines.gitlab-ci.yml"

.staging_variables:
  variables:
    CONFIG_NAME: "staging"

.staging_rules:
  rules:
    - if: $CI_COMMIT_BRANCH == $STAGING_BRANCH
      variables: !reference [.staging_variables, variables]

stages:
  - staging

staging:
  stage: staging
  rules:
    - !reference [.staging_rules, rules]
  script:
    - echo $CONFIG_NAME
  tags:
    - staging
However, I am seeing this "Syntax is incorrect" linting error:
jobs:staging:rules:rule:variables config should be a hash of key value pairs
I am using the example explained here:
https://docs.gitlab.com/ee/ci/yaml/yaml_optimization.html#reference-tags
Please note that I can do this and it works:
include:
  - template: "Workflows/Branch-Pipelines.gitlab-ci.yml"

.staging_rules:
  rules:
    - if: $CI_COMMIT_BRANCH == $STAGING_BRANCH
      variables:
        CONFIG_NAME: "staging"

stages:
  - staging

staging:
  stage: staging
  rules:
    - !reference [.staging_rules, rules]
  script:
    - echo $CONFIG_NAME
  tags:
    - staging

Using the !reference keyword inside a section that is itself pulled in with !reference is not possible at the moment.
From the !reference documentation:
You can’t reuse a section that already includes a !reference tag. Only
one level of nesting is supported.
For your needs you could use YAML anchors instead (not tested):
include:
  - template: "Workflows/Branch-Pipelines.gitlab-ci.yml"

.staging_variables:
  variables: &staging_variables
    CONFIG_NAME: "staging"

.staging_rules:
  rules: &staging_rules
    - if: $CI_COMMIT_BRANCH == $STAGING_BRANCH
      variables: *staging_variables

stages:
  - staging

staging:
  stage: staging
  rules: *staging_rules
  script:
    - echo $CONFIG_NAME
  tags:
    - staging

As stated in the comment above, your config produces a syntax error because the GitLab CI linter tries to resolve the variables array from the referenced section.
Change your config to use the !reference tag like below:
staging:
  stage: staging
  rules: !reference [.staging_rules, rules]
  script:
    - echo $CONFIG_NAME
  tags:
    - staging
Note that here the list item - !reference […] has changed to rules: !reference […].
This should fix your error.

Related

How to Automatically run the Deploy (No manual action) with Gitlab CI and Terraform?

My gitlab ci pipeline always blocks the terraform deploy, requiring manual action to start it. Is it possible to make it automatic instead?
From the Terraform GitLab YAML example:
stages:
  - validate
  - test
  - build
  - deploy
  - cleanup

sast:
  stage: test

include:
  - template: Terraform/Base.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Terraform/Base.gitlab-ci.yml

fmt:
  extends: .terraform:fmt
  needs: []

validate:
  extends: .terraform:validate
  needs: []

build:
  extends: .terraform:build

deploy:
  extends: .terraform:deploy
  dependencies:
    - build
  environment:
    name: $TF_STATE_NAME
    action: start
  when: on_success

destroy:
  extends: .terraform:destroy
  environment:
    name: $TF_STATE_NAME
    action: stop
  when: manual
Based on the documentation, when: on_success should automatically run the deploy job when the build stage succeeds. However, it still requires a manual action. Removing the when keyword makes no difference; it always requires a manual action to start the deploy.
Given I'm using GitLab's Terraform template, is this hard-coded to require a manual action to run the deploy?
It's been a little while since I've worked on GitLab, but the template you reference has it as a rule:
.terraform:deploy: &terraform_deploy
  stage: deploy
  script:
    - cd "${TF_ROOT}"
    - gitlab-terraform apply
  resource_group: ${TF_STATE_NAME}
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: manual
That is different from the bare when keyword you're using.
What if you tried overriding it with your own rule?
deploy:
  extends: .terraform:deploy
  dependencies:
    - build
  environment:
    name: $TF_STATE_NAME
    action: start
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: on_success
Or better yet, create and manage your own template in a repo you control. Then you can modify the rules there and delete the when: manual piece.
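A minimal sketch of that approach, assuming a hypothetical project path and file name for your own copy of the template (with the when: manual rule removed from its .terraform:deploy job):

include:
  # your own copy of Terraform/Base.gitlab-ci.yml, hosted in a repo you control
  # (project path and file name below are placeholders)
  - project: 'my-group/ci-templates'
    ref: main
    file: '/terraform-base.gitlab-ci.yml'

deploy:
  # extends the .terraform:deploy hidden job from your modified template,
  # which no longer carries when: manual
  extends: .terraform:deploy
  dependencies:
    - build
  environment:
    name: $TF_STATE_NAME
    action: start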

Is nested use of !reference possible in gitlab-ci.yml?

rules.yml
.rules:
  default:
    - .gitlab-ci.yml
    - Makefile
    - VERSION
  a_pybuild_deps:
    - !reference [.rules, default]
    - foo/a/**
  b_pybuild_deps:
    - !reference [.rules, default]
    - foo/b/**
It works if I am simply referring to default:
gitlab-ci.yml
include:
  - local: "./rules.yml"

a_pybuild:
  ...
  stage: py_build
  script:
    ...
  only:
    changes: !reference [.rules, default]
But I want to do the following:
gitlab-ci.yml
include:
  - local: "./rules.yml"

a_pybuild:
  ...
  stage: py_build
  script:
    ...
  only:
    changes: !reference [.rules, a_pybuild_deps]

b_pybuild:
  ...
  stage: py_build
  script:
    ...
  only:
    changes: !reference [.rules, b_pybuild_deps]
With this I get a lint error: jobs:a_pybuild:only changes should be an array of string
I understand the problem, but is there a proper way to achieve this?
As of GitLab 14.8, you can use nested !references up to 10 levels deep, but only for script:, before_script: and after_script:.
Nesting of !reference is not allowed in other keys, like only:, which is why you get this error.
In versions of GitLab prior to 14.8, nested !references are prohibited entirely.
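For illustration, a minimal sketch of the kind of nesting that is allowed in script: (the job names here are made up):

.setup:
  script:
    - echo "common setup"

.deps:
  script:
    # first level of !reference
    - !reference [.setup, script]
    - echo "install dependencies"

build-job:
  script:
    # nested !reference: resolves .deps, which itself references .setup;
    # allowed only in script:, before_script: and after_script: since GitLab 14.8
    - !reference [.deps, script]
    - echo "build"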
In your situation, you would either have to manually duplicate your default rules into a_pybuild_deps and b_pybuild_deps, or omit the default rules from those keys entirely and !reference both the default key and the respective pybuild key in the job.
Probably the most flexible and DRY way to do it is to use rules: instead of only: and arrange your rules like so:
# rules.yml
.rules:
  default:
    changes:
      - .gitlab-ci.yml
      - Makefile
      - VERSION
  a_pybuild_deps:
    changes:
      - foo/a/**
  b_pybuild_deps:
    changes:
      - foo/b/**

# .gitlab-ci.yml
include:
  - local: "./rules.yml"

a_pybuild:
  script: '...'
  rules:
    - !reference [.rules, default]
    - !reference [.rules, a_pybuild_deps]

b_pybuild:
  script: '...'
  rules:
    - !reference [.rules, default]
    - !reference [.rules, b_pybuild_deps]

rules:changes always evaluates as true in MR pipeline

I have a monorepo where each package should be built as a docker image.
I created a trigger job for each package that triggers a child pipeline.
In the MR, my changes rule is being ignored and all child pipelines are triggered.
.gitlab-ci.yml
---
workflow:
  rules:
    - if: $CI_MERGE_REQUEST_ID || $CI_COMMIT_BRANCH

trigger-package-a:
  stage: build
  trigger:
    include: .gitlab/ci/packages/package-gitlab-ci.yml
    strategy: depend
  rules:
    - changes:
        - "packages/package-a/**/*"
  variables:
    PACKAGE: package-a

trigger-package-b:
  stage: build
  trigger:
    include: .gitlab/ci/packages/package-gitlab-ci.yml
    strategy: depend
  rules:
    - changes:
        - "packages/package-b/**/*"
  variables:
    PACKAGE: package-b

done_job:
  stage: deploy
  script:
    - "echo DONE"
    - "cat config.json"

stages:
  - build
  - deploy
package-gitlab-ci.yml
workflow:
  rules:
    - if: $CI_MERGE_REQUEST_ID
    - changes:
        - "packages/${PACKAGE}/**/*"

stages:
  - bootstrap
  - validate

cache:
  key: "${PACKAGE}_${CI_COMMIT_REF_SLUG}"
  paths:
    - packages/${PACKAGE}/node_modules/
  policy: pull

install-package:
  stage: bootstrap
  script:
    - echo ${PACKAGE}
    - echo '{"package":${PACKAGE}}' > config.json
    - "cd packages/${PACKAGE}/"
    - yarn install --frozen-lockfile
  artifacts:
    paths:
      - config.json
  cache:
    key: "${PACKAGE}_${CI_COMMIT_REF_SLUG}"
    paths:
      - packages/${PACKAGE}/node_modules/
    policy: pull-push

lint-package:
  script:
    - yarn lint
  stage: validate
  needs: [install-package]
  before_script:
    - "cd packages/${PACKAGE}/"

test-package:
  stage: validate
  needs: [lint-package]
  before_script:
    - "echo working on ${PACKAGE}"
    - "cd packages/${PACKAGE}/"
  rules:
    - if: $CI_MERGE_REQUEST_ID
  script:
    - yarn test
It looks like your downstream pipeline defines a workflow with two independent rules: if and changes. This can cause the jobs to be included whenever the first condition (the if) is met, i.e. whenever it is an MR pipeline. Try removing the dash in front of changes, as in the example here, to treat this as a single rule:
workflow:
  rules:
    - if: $CI_MERGE_REQUEST_ID
      changes:
        - "packages/${PACKAGE}/**/*"
EDIT: This recent issue states rules:changes does not work as expected with trigger. So you may actually need to remove the changes from the upstream pipeline and solve this in the downstream pipeline.
Side note, not directly related to your issue: the GitLab Docs provide a workflow template to run branch or MR pipelines without creating duplicates. You can use this in your upstream pipeline if it helps:
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH'

GitLab CI pipeline, run job only with git tag

Need help from GitLab gurus. I have the following pipeline below.
I expect the "sync_s3:prod" job to run only when I push a new git tag, but GitLab triggers both
jobs. Why is it behaving like this? I created the $CI_COMMIT_TAG rule only for one job. Any ideas?
stages:
  - sync:nonprod
  - sync:prod

.sync_s3:
  image:
    name: image
    entrypoint: [""]
  script:
    - aws configure set region eu-west-1
    - aws s3 sync ${FOLDER_ENV} s3://img-${AWS_ENV} --delete

sync_s3:prod:
  stage: sync:prod
  rules:
    - if: $CI_COMMIT_TAG
      changes:
        - prod/*
  extends: .sync_s3
  variables:
    AWS_ENV: prod
    FOLDER_ENV: prod/
  tags:
    - gaming_prod

sync_s3:nonprod:
  stage: sync:nonprod
  rules:
    - changes:
        - pp2/*
  extends: .sync_s3
  variables:
    AWS_ENV: nonprod
    FOLDER_ENV: pp2/
  tags:
    - gaming_nonprod
If I understand the question correctly, you do not want the sync_s3:nonprod job to run when sync_s3:prod runs(?)
To achieve this, on the sync_s3:nonprod job you should be able to copy the same rule from sync_s3:prod together with when: never:
stages:
  - sync:nonprod
  - sync:prod

.sync_s3:
  image:
    name: image
    entrypoint: [""]
  script:
    - aws configure set region eu-west-1
    - aws s3 sync ${FOLDER_ENV} s3://img-${AWS_ENV} --delete

sync_s3:prod:
  stage: sync:prod
  rules:
    - if: $CI_COMMIT_TAG
      changes:
        - prod/*
  extends: .sync_s3
  variables:
    AWS_ENV: prod
    FOLDER_ENV: prod/
  tags:
    - gaming_prod

sync_s3:nonprod:
  stage: sync:nonprod
  rules:
    - if: $CI_COMMIT_TAG
      changes:
        - prod/*
      when: never
    - changes:
        - pp2/*
  extends: .sync_s3
  variables:
    AWS_ENV: nonprod
    FOLDER_ENV: pp2/
  tags:
    - gaming_nonprod
As #slauth already mentions in his answer, the rules need to be adjusted per job of the pipeline. I only post this as an addition to the original answer above.
To prevent a job from running when a git tag is present, you need to explicitly set the rule on the corresponding job.
stages:
  - sync:nonprod
  - sync:prod

.sync_s3:
  image:
    name: image
    entrypoint: [""]
  script:
    - aws configure set region eu-west-1
    - aws s3 sync ${FOLDER_ENV} s3://img-${AWS_ENV} --delete

sync_s3:prod:
  stage: sync:prod
  rules:
    - if: $CI_COMMIT_TAG
      changes:
        - prod/*
  extends: .sync_s3
  variables:
    AWS_ENV: prod
    FOLDER_ENV: prod/
  tags:
    - gaming_prod

sync_s3:nonprod:
  stage: sync:nonprod
  rules:
    - changes:
        - pp2/*
    - if: $CI_COMMIT_TAG
      when: never
  extends: .sync_s3
  variables:
    AWS_ENV: nonprod
    FOLDER_ENV: pp2/
  tags:
    - gaming_nonprod
For further clarification:
The following rule evaluates like a logical AND: it is true only if there is a $CI_COMMIT_TAG AND there are changes in prod/*. Only when both conditions are met is the job added to the pipeline.
rules:
  - if: $CI_COMMIT_TAG
    changes:
      - prod/*
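By contrast, splitting the two conditions into separate rules makes each one sufficient on its own, roughly a logical OR (a sketch for comparison):

rules:
  # two separate rules: the job is added if EITHER rule matches
  - if: $CI_COMMIT_TAG   # matches any tag pipeline, regardless of changed files
  - changes:
      - prod/*           # matches changes under prod/*, regardless of tags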

GitLab: chosen stage does not exist

I am trying to put together a fairly complex pipeline with several jobs that run sequentially in our different environments. This is to run our Terraform changes across our infra. The sequence of jobs should run automatically across our infraci environment, which is only ever rolled out to via CI, then stop and require a button click to start the deployment to our dev environment, which has actual (albeit dev) users. Of course I don't want to write the same code over and over again, so I've tried to be as DRY as possible. Here is my gitlab-ci.yml:
---
# "variables" & "default" are used by all jobs
variables:
  TF_ROOT: '${CI_PROJECT_DIR}/terraform'
  TF_CLI_CONFIG_FILE: .terraformrc
  AWS_STS_REGIONAL_ENDPOINTS: regional
  AWS_DEFAULT_REGION: eu-west-2
  ASG_MODULE_PATH: module.aws_asg.aws_autoscaling_group.main_asg

default:
  image:
    name: hashicorp/terraform:light
    entrypoint: ['']
  cache:
    paths:
      - ${TF_ROOT}/.terraform
  tags:
    - nonlive # This tag matches the group wide GitLab runner.
  before_script:
    - cd ${TF_ROOT}

# List of all stages (jobs within the same stage are executed concurrently)
stages:
  - init
  - infraci_plan
  - infraci_taint
  - infraci_apply
  - dev_plan
  - dev_taint
  - dev_apply

# "Hidden" jobs we use as templates to improve code reuse.
.default:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

.plan:
  extends: .default
  stage: ${CI_ENVIRONMENT_NAME}_plan
  script:
    - terraform workspace select ${CI_ENVIRONMENT_NAME}
    - terraform plan
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $CI_ENVIRONMENT_NAME != "infraci"'
      when: manual
      allow_failure: false

.taint:
  extends: .default
  stage: ${CI_ENVIRONMENT_NAME}_taint
  script: terraform taint ${ASG_MODULE_PATH}
  needs:
    - ${CI_ENVIRONMENT_NAME}_plan

.apply:
  extends: .default
  stage: ${CI_ENVIRONMENT_NAME}_apply
  script: terraform apply -auto-approve

# Create actual jobs
## init - runs once per pipeline
init:
  stage: init
  script: terraform init
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: always
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $CI_PIPELINE_SOURCE == "web"'
      when: manual

## infraci - auto deploy
infraci_plan:
  extends: .plan
  environment:
    name: infraci

infraci_taint:
  extends: .taint
  environment:
    name: infraci

infraci_apply:
  extends: .apply
  environment:
    name: infraci

## dev - manual deployment
dev_plan:
  extends: .plan
  environment:
    name: dev

dev_taint:
  extends: .taint
  environment:
    name: dev

dev_apply:
  extends: .apply
  environment:
    name: dev
Unfortunately this fails validation with the following error:
infraci_plan job: chosen stage does not exist; available stages are .pre, init, infraci_plan, infraci_taint, infraci_apply, dev_plan, dev_taint, dev_apply, .post
My assumption is that it has to do with interpolating CI_ENVIRONMENT_NAME in the hidden jobs, since the value isn't actually set until the concrete jobs are defined.
If that's the case, though, what's a way to get the setup I need without a severe amount of duplication?
You are right, it is not possible to use a variable in stage. The only way I see is to define the stage directly in each job and remove the stage from .plan:
infraci_plan:
  extends: .plan
  stage: infraci_plan
  environment:
    name: infraci
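Extending that idea to the other environments would look roughly like this (a sketch, untested): drop the stage: lines from the hidden templates and set the stage explicitly in each concrete job:

.plan:
  extends: .default
  # stage: removed here; rules from the original .plan omitted for brevity
  script:
    - terraform workspace select ${CI_ENVIRONMENT_NAME}
    - terraform plan

infraci_plan:
  extends: .plan
  stage: infraci_plan
  environment:
    name: infraci

dev_plan:
  extends: .plan
  stage: dev_plan
  environment:
    name: dev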
