GitLab - Separating CI from Deployment

We are currently using Jenkins and planning to migrate to GitLab. We actually have 2 Jenkinsfiles in each repo. The first is set up as a Multibranch Pipeline and runs on all changes; it is the merge check that runs all the various linting, tests, building of the Docker containers, etc. The second Jenkinsfile is only run manually from Jenkins: it takes in all the various input parameters and deploys the code, which mostly means running the linted Ansible/Terraform and selecting a Docker image that was already built on the CI side of things.
I know GitLab doesn't support this model, but this project is already past MVP, so reworking how the devs combined their logic and deployment code is probably not going to happen.
Is it possible, in one gitlab-ci.yml file, to say "run these jobs on merges/pushes, and only run this one on manual deployment"?
e.g.
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH'

stages:
  - test
  - deploy
  - destroy

test-python-job:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "Test Python"
    - black
    - bandit
    - flake8
    - tox

test-terraform-job:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "Test Terraform"
    - terraform validate --yadda

test-ansible-job:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "Test Ansible"
    - ansible-lint --yadda

deploy-job:
  stage: deploy
  variables:
    DEPLOYMENT_ID: "Only deploy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  rules:
    - when: manual
  script:
    - echo "Terraform Deploy"
    - terraform deploy
    - ansible-playbook yaddas

destroy-job:
  stage: destroy
  variables:
    DEPLOYMENT_ID: "Only destroy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  rules:
    - when: manual
  script:
    - terraform destroy
We have not even deployed GitLab yet, so I'm writing that off the top of my head, but I want to know what level of pain I am in for.

There are multiple options to achieve your goal with minimal configuration effort:
Working with hidden jobs and using inheritance or references for easier configuration - doable in one file
Extracting parts into child pipelines for easier usage
Reduced configuration in one file
I assume what you dislike most is having to redefine the rules for your jobs. There are two ways to reduce that duplication.
Inheritance
Inheritance can reduce a lot of duplication, but it can also cause problems with unintended side behaviour (see the sketch after this example).
.test:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

test-python-job:
  extends: .test
  script:
    - echo "Test Python"
    - black
    - bandit
    - flake8
    - tox

test-terraform-job:
  extends: .test
  script:
    - echo "Test Terraform"
    - terraform validate --yadda

test-ansible-job:
  extends: .test
  script:
    - echo "Test Ansible"
    - ansible-lint --yadda
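One such side behaviour, as a minimal sketch of my own (not from the original setup): with extends, a job that declares its own rules replaces the inherited array entirely rather than appending to it, so the push/MR conditions from .test are silently lost:

test-python-job:
  extends: .test
  rules:
    # this array REPLACES the rules from .test instead of merging with them,
    # so the job no longer runs on pushes or merge request events
    - if: '$CI_COMMIT_TAG'
  script:
    - tox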
Composition
By using !reference you can combine certain aspects of jobs; see https://docs.gitlab.com/ee/ci/yaml/#reference-tags
.test:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

test-python-job:
  stage: test
  rules:
    - !reference [.test, rules]
  script:
    - echo "Test Python"
    - black
    - bandit
    - flake8
    - tox

test-terraform-job:
  stage: test
  rules:
    - !reference [.test, rules]
  script:
    - echo "Test Terraform"
    - terraform validate --yadda

test-ansible-job:
  stage: test
  rules:
    - !reference [.test, rules]
  script:
    - echo "Test Ansible"
    - ansible-lint --yadda
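The same mechanism also works for script snippets, not just rules, so shared steps can be spliced into several jobs. A small sketch of mine (the .setup job and its step are hypothetical):

.setup:
  script:
    - echo "install shared tooling"   # hypothetical shared step

test-python-job:
  stage: test
  rules:
    - !reference [.test, rules]
  script:
    - !reference [.setup, script]   # expands to the shared steps above
    - tox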
Parent-child pipelines
Sometimes it can also be suitable to extract functionality into child pipelines. You can more easily control what happens at a given stage when you call the child pipeline, and you gain overview due to fewer lines of code. It adds complexity to your builds, but generally it results in a cleaner and easier-to-maintain CI structure (my opinion).
This approach only adds the child pipeline when needed. Furthermore, you could also centralize this file if it is similar across deployments.
.gitlab-ci.yml
deploy:
  stage: deploy
  trigger:
    include:
      - local: Deploy.gitlab-ci.yml
    strategy: depend
  rules:
    - if: $CI_PIPELINE_SOURCE == 'web' # maybe also useful, as it will only happen on a web interaction
      when: manual
    - if: $CI_PIPELINE_SOURCE == 'schedule' # maybe also useful, for schedules
Deploy.gitlab-ci.yml
deploy-job:
  stage: deploy
  variables:
    DEPLOYMENT_ID: "Only deploy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  script:
    - echo "Terraform Deploy"
    - terraform deploy
    - ansible-playbook yaddas

destroy-job:
  stage: destroy
  variables:
    DEPLOYMENT_ID: "Only destroy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  script:
    - terraform destroy
Sidenotes
This does not answer your question 100%, but it shows you a lot of flexibility, and you will soon realize that mimicking Jenkins is not ideal. E.g. having deployment jobs directly attached to a commit, and visible on it, allows a better overview of what exactly was deployed. If you need to run such things manually, I highly recommend using schedules with preconfigured values, as they only have a play button. Also, you already have the artifacts in place, built by your pipeline, so why not add additional steps utilizing them instead of passing this information in by hand (a minimal sketch follows below).
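A minimal sketch of that last point, assuming a CI job named build-image that pushed a container tagged with the commit SHA (the job name and playbook are hypothetical); the deploy job consumes what the pipeline already built instead of taking the image as an input parameter:

deploy-job:
  stage: deploy
  needs:
    - build-image   # hypothetical job that built and pushed the image
  rules:
    - when: manual
  script:
    # reuse the image this pipeline built, identified by the commit SHA
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
    - ansible-playbook deploy.yml -e "image_tag=$CI_COMMIT_SHORT_SHA"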
I hope my insights will be useful to you, and happy migration ;)

Related

How to Automatically run the Deploy (No manual action) with Gitlab CI and Terraform?

My GitLab CI pipeline always blocks the terraform deploy, requiring manual action to start it. Is it possible to make it automatic instead?
From the Terraform GitLab YAML example:
stages:
  - validate
  - test
  - build
  - deploy
  - cleanup

sast:
  stage: test

include:
  - template: Terraform/Base.gitlab-ci.yml # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Terraform/Base.gitlab-ci.yml

fmt:
  extends: .terraform:fmt
  needs: []

validate:
  extends: .terraform:validate
  needs: []

build:
  extends: .terraform:build

deploy:
  extends: .terraform:deploy
  dependencies:
    - build
  environment:
    name: $TF_STATE_NAME
    action: start
  when: on_success

destroy:
  extends: .terraform:destroy
  environment:
    name: $TF_STATE_NAME
    action: stop
  when: manual
Based on the documentation, when: on_success should automatically run the deploy job when the build stage succeeds. However, it still requires a manual action. Removing the when keyword makes no difference; it always requires a manual action to start the deploy.
Given that I'm using GitLab's Terraform template, is it hard-coded to require a manual action to enable a deploy?
It's been a little while since I've worked on GitLab, but the template you reference has it as a rule:
.terraform:deploy: &terraform_deploy
  stage: deploy
  script:
    - cd "${TF_ROOT}"
    - gitlab-terraform apply
  resource_group: ${TF_STATE_NAME}
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: manual
Which is different from just the when keyword that you're using.
What if you tried overriding it with your own rule?
deploy:
  extends: .terraform:deploy
  dependencies:
    - build
  environment:
    name: $TF_STATE_NAME
    action: start
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: on_success
Or better yet, just create/manage your own template from a repo of your own. Then you can modify the rules in there and delete the when: manual piece (a sketch of this follows below).
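A rough sketch of that idea, assuming the modified copy of the template lives in a repo of yours (the project path, ref, and file name are made up); with when: manual deleted from your copy, the deploy rule can default to running on success:

include:
  - project: 'my-group/ci-templates'   # hypothetical repo holding your copy
    ref: main
    file: 'Terraform/Base.gitlab-ci.yml'

deploy:
  extends: .terraform:deploy
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: on_success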

Gitlab scheduled pipeline also run another job not on schedule

I'm new to these GitLab CI/CD features, and I encountered the following issue.
I have these 2 jobs in my gitlab-ci.yml, the automation test and my deployment job.
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  only:
    - schedules

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  only:
    - staging
I want to run my automation test automatically on a daily basis, so I have created a new pipeline schedule against my staging branch.
However, when the schedule is triggered, it also runs my deployment job, which is not needed, because I only want my automation test to run in the scheduled pipeline.
Does this happen because my deploy_to_staging job has the only: - staging rule? If so, how can I set my scheduled pipeline to only run the automation test without triggering the other job?
If you wanted to do this with only/except, it would probably be sufficient to add
except:
  - schedules
to your deployment job.
Though notably, the rules-based system is preferred at this point.
This also allows for more expressive and detailed decisions when it comes to running jobs.
The simplest way to set the rules for the two jobs would be:
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - if: $CI_COMMIT_REF_SLUG == "staging"
And that might be all you need.
Though when it comes to rules, a particularly convenient way of handling them is defining some common rules for the configuration and reusing them through YAML anchors. The following are some reusable definitions for your case:
.definitions:
  rules:
    - &if-scheduled
      if: $CI_PIPELINE_SOURCE == "schedule"
    - &not-scheduled
      if: $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - &if-staging
      if: $CI_COMMIT_REF_SLUG == "staging"
And after that you could use them in your jobs like this:
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  rules:
    - *if-scheduled

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  rules:
    - *not-scheduled
    - *if-staging
This way of handling the rules makes it a bit easier to keep an overview and to reuse rules, which definitely makes sense in large configurations.
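One caveat: YAML anchors only resolve within a single file, so if the shared definitions live in a file pulled in via include, the *aliases will not work across the file boundary; !reference does. A rough sketch, with the file and hidden-job names assumed:

include:
  - local: 'definitions.gitlab-ci.yml'   # hypothetical file holding .scheduled-rules

automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  rules:
    # !reference works across included files, unlike YAML anchors
    - !reference [.scheduled-rules, rules]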
You should use rules instead of only as the latter is not in active development any more.
With that in mind, you can change to the following rules clauses using the predefined variables CI_COMMIT_REF_SLUG and CI_PIPELINE_SOURCE. The automation_test_scheduled job only runs on the staging branch when triggered by a schedule, and the deploy_to_staging job runs on any change to the staging branch.
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  rules:
    - if: '$CI_COMMIT_REF_SLUG == "staging" && $CI_PIPELINE_SOURCE == "schedule"'

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  rules:
    - if: '$CI_COMMIT_REF_SLUG == "staging"'

rules:changes always evaluates as true in MR pipeline

I have a monorepo where each package should be built as a docker image.
I created a trigger job for each package that triggers a child pipeline.
In the MR, my changes rule is being ignored and all child pipelines are triggered.
.gitlab-ci.yml
---
workflow:
  rules:
    - if: $CI_MERGE_REQUEST_ID || $CI_COMMIT_BRANCH

trigger-package-a:
  stage: build
  trigger:
    include: .gitlab/ci/packages/package-gitlab-ci.yml
    strategy: depend
  rules:
    - changes:
        - "packages/package-a/**/*"
  variables:
    PACKAGE: package-a

trigger-package-b:
  stage: build
  trigger:
    include: .gitlab/ci/packages/package-gitlab-ci.yml
    strategy: depend
  rules:
    - changes:
        - "packages/package-b/**/*"
  variables:
    PACKAGE: package-b

done_job:
  stage: deploy
  script:
    - "echo DONE"
    - "cat config.json"

stages:
  - build
  - deploy
package-gitlab-ci.yml
workflow:
  rules:
    - if: $CI_MERGE_REQUEST_ID
    - changes:
        - "packages/${PACKAGE}/**/*"

stages:
  - bootstrap
  - validate

cache:
  key: "${PACKAGE}_${CI_COMMIT_REF_SLUG}"
  paths:
    - packages/${PACKAGE}/node_modules/
  policy: pull

install-package:
  stage: bootstrap
  script:
    - echo ${PACKAGE}
    - echo '{"package":${PACKAGE}}' > config.json
    - "cd packages/${PACKAGE}/"
    - yarn install --frozen-lockfile
  artifacts:
    paths:
      - config.json
  cache:
    key: "${PACKAGE}_${CI_COMMIT_REF_SLUG}"
    paths:
      - packages/${PACKAGE}/node_modules/
    policy: pull-push

lint-package:
  script:
    - yarn lint
  stage: validate
  needs: [install-package]
  before_script:
    - "cd packages/${PACKAGE}/"

test-package:
  stage: validate
  needs: [lint-package]
  before_script:
    - "echo working on ${PACKAGE}"
    - "cd packages/${PACKAGE}/"
  rules:
    - if: $CI_MERGE_REQUEST_ID
  script:
    - yarn test
It looks like your downstream pipeline is defining a workflow with 2 independent rules: if and changes. This may cause the jobs to be included if the first condition in the if is met, i.e. if it is an MR pipeline. Try removing the dash in front of changes, as in the example here, to treat this as a single rule:
workflow:
  rules:
    - if: $CI_MERGE_REQUEST_ID
      changes:
        - "packages/${PACKAGE}/**/*"
EDIT: This recent issue states that rules:changes does not work as expected with trigger. So you may actually need to remove the changes from the upstream pipeline and solve this in the downstream pipeline; a rough sketch of that variant follows below.
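A rough, untested sketch of the downstream variant (assumptions: CI_PIPELINE_SOURCE is "parent_pipeline" in child pipelines, and variables in rules:changes paths are supported on recent GitLab versions): drop the changes rules from the trigger jobs and filter inside package-gitlab-ci.yml instead:

install-package:
  stage: bootstrap
  rules:
    # only run when files of the package passed in by the trigger job changed
    - if: $CI_PIPELINE_SOURCE == "parent_pipeline"
      changes:
        - "packages/${PACKAGE}/**/*"
  script:
    - "cd packages/${PACKAGE}/"
    - yarn install --frozen-lockfile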
Side note, not directly related to your issue: the GitLab Docs provide a workflow template to run branch or MR pipelines without creating duplicates. You can use this in your upstream pipeline if it helps:
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH'

Gitlab CI runs test twice when pushing to master

I have a Gitlab CI config that kinda looks like this:
stages:
  - test
  - deploy

test:
  stage: test
  only:
    - merge_request
    - master
  script:
    - jest --coverage

deploy:
  stage: deploy
  only:
    - master
  dependencies:
    - test
  script:
    - make deploy
I only want the tests to run when a merge request is opened or when we merge to master, because I'm only on the free plan on gitlab.com and I'd like to conserve my runner minutes.
If the unit tests ran for every commit we made, we'd always run out of minutes by the 3rd or 4th week.
For the most part, it works. The problem comes from pushing to master directly (which can happen every now and then): test runs twice, at the same time.
I couldn't find anything in the GitLab docs on how to properly approach this. Any help would be great.
I actually don't see why a direct push to master would run your tests twice, except when you have an open merge request with master as the source branch.
You can prevent this from happening by using workflow. Furthermore, you should use rules instead of only/except, as the latter are not actively developed any more.
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH'

stages:
  - test
  - deploy

test:
  stage: test
  script:
    - jest --coverage
  rules:
    - if: '$CI_COMMIT_BRANCH == "master" || $CI_PIPELINE_SOURCE == "merge_request_event"'

deploy:
  stage: deploy
  dependencies:
    - test
  script:
    - make deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'

Accept merge request without running manual stages

I have a pipeline with 3 stages: build, deploy-test and deploy-prod. I want the stages to have the following behavior:
always run build
run deploy-test automatically when on master or manually when on other branches
run deploy-prod manually, only available on master branch
My pipeline configuration seems to achieve that, but I have a problem when trying to merge branches into master. I don't want to execute the deploy-test stage on every branch before merging. Right now I am required to do that, as the merge button is disabled with the message "Pipeline blocked. The pipeline for this merge request requires a manual action to proceed." The Pipelines must succeed setting in the project is disabled.
I tried adding additional rule to prevent deploy-test stage from running in merge requests but it didn't change anything:
rules:
  - if: '$CI_MERGE_REQUEST_ID'
    when: never
  - if: '$CI_COMMIT_BRANCH == "master"'
    when: on_success
  - when: manual
Full pipeline configuration:
stages:
  - build
  - deploy-test
  - deploy-prod

build:
  stage: build
  script:
    - echo "build"

deploy-test:
  stage: deploy-test
  script:
    - echo "deploy-test"
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: on_success
    - when: manual

deploy-prod:
  stage: deploy-prod
  script:
    - echo "deploy-prod"
  only:
    - master
The only way I got it to work was to enable ☑️ Skipped pipelines are considered successful in Settings > General > Merge requests > Merge checks, and to mark the manual step with allow_failure:
upload:
  stage: 'upload'
  rules:
    # Only allow uploads for a pipeline source whitelisted here.
    # See: https://docs.gitlab.com/ee/ci/jobs/job_control.html#common-if-clauses-for-rules
    - if: $CI_COMMIT_BRANCH
      when: 'manual'
      allow_failure: true
After this, clicking the Merge when pipeline succeeds button will merge the MR without any manual interaction.
I've opened a merge request from branch "mybranch" into "master" with the following .gitlab-ci.yml:
image: alpine

stages:
  - build
  - deploy-test
  - deploy-prod

build:
  stage: build
  script:
    - echo "build"

# run deploy-test automatically when on master or manually when on other branches
# Don't run on merge requests
deploy-test:
  stage: deploy-test
  script:
    - echo "deploy-test"
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: never
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: on_success
    - when: manual

# run deploy-prod manually, only available on master branch
deploy-prod:
  stage: deploy-prod
  script:
    - echo "deploy-prod"
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: manual
Notes:
only is deprecated, so I replaced it with if
I added the Alpine image to make the jobs run faster (slimmer container); it doesn't affect the logic
When I pushed changes to branch "mybranch", GitLab did the following:
showed a blue "Merge when pipeline succeeds" button on my MR
ran "build" stage
skipped "deploy-prod" stage (only available on "master" branch)
gave me a manual "play" button to run the job on "mybranch"
at this point, the pipeline status is "blocked" and the MR is showing "Pipeline blocked. The pipeline for this merge request requires a manual action to proceed"
now I manually start the "deploy-test" stage by selecting the Play icon in the Pipelines screen
pipeline status indicator changes to "running" and then to "passed"
my merge request shows the pipeline passed and gives me the green "Merge" button
There are a number of variables that are available to the pipeline at runtime - Predefined variables reference
Some are available specifically for pipelines associated with merge requests - Predefined variables for merge request pipelines
You can utilize one or more of these variables to determine if you would want to run the deploy-test job for that merge request.
For example, you could mention the phrase "skip_cicd" in your merge request title, access it with the CI_MERGE_REQUEST_TITLE variable, and create a rule. Your pipeline would look somewhat like this (please do test the rule; I have edited the pipeline off the top of my head and could be wrong) -
stages:
  - build
  - deploy-test
  - deploy-prod

build:
  stage: build
  script:
    - echo "build"

deploy-test:
  stage: deploy-test
  script:
    - echo "deploy-test"
  rules:
    - if: '$CI_MERGE_REQUEST_TITLE =~ /skip_cicd/'
      when: never
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: on_success
    - when: manual

deploy-prod:
  stage: deploy-prod
  script:
    - echo "deploy-prod"
  only:
    - master
