GitLab: best approach for creating a manual production deployment pipeline

I have a pipeline which builds and deploys my application to a staging environment.
I want to create a job which can deploy my application to the production environment, but it should be run manually.
In theory I see 2 options:
1. Create a separate .deploy-to-prod.yml pipeline with a when: manual condition and run it via the "play" button. As far as I understand, this is impossible because I cannot run an arbitrary pipeline file in GitLab; it always runs the default one. Please correct me if I am wrong.
2. Create an additional trigger job in my default .gitlab-ci.yml with conditions: if the run is manual and some variable is set (or environment = production), deploy to prod; otherwise, run the standard jobs.
Hence only the 2nd option seems available to me. An example of it could look like this:
manual-deploy-to-prod:
  stage: deploy
  trigger:
    include:
      - '.deploy-to-prod.yml'
    strategy: depend
  rules:
    - if: $MANUAL_DEPLOY_VERSION != null
      when: manual
...while to the standard pipeline trigger jobs I should add the following lines, to avoid them executing alongside the production deployment:
rules:
  - if: $MANUAL_DEPLOY_VERSION == null
Is this a good approach?
Is it correct that only the 2nd option is available to me?
What is the best practice for creating a manual production deployment pipeline?

"Best" is a very subjective term, so it's difficult to tell you which one is best for your use-case. Insteaad, let me lay out a couple of options for how you could achieve what you're attempting to do:
You could move your deploy process into a deploy.yml file, then use the trigger keyword in your CI file to trigger that pipeline for different environments. You can then use the rules keyword to control when and how the different jobs are triggered. This has the benefit of re-using the deployment process you already use for your staging environment, which is nice and DRY and ensures that your deployment is repeatable across environments. It would look like this:
deploy-to-staging:
  stage: deploy
  trigger:
    include: deploy.yml
    strategy: depend
  when: on_success

deploy-to-production:
  stage: deploy
  trigger:
    include: deploy.yml
    strategy: depend
  when: manual
You could use the rules keyword to include your deploy-to-prod job only when the pipeline is kicked off manually from the UI. The rest of your pipeline would still execute (unless you explicitly tell it not to), but your deploy-to-prod job would only show up if you manually kicked the pipeline off. It would look like this:
deploy-to-prod:
  stage: deploy
  script:
    - echo "I'm deploying!"
  rules:
    - if: $CI_PIPELINE_SOURCE == "web"
      when: on_success
    - when: never
You could use a separate project for your deployment pipeline. That pipeline can retrieve artifacts from your code project, but its CI would only run when you manually start it for that project. This gives you really nice separation of concerns, because you can give the deployment project a separate set of permissions from the code project, and it can help keep your pipeline clean if it's really complicated.
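For the separate-project option, GitLab's multi-project pipelines can do the wiring. A minimal sketch, where the project path and branch name are placeholders:
deploy-trigger:
  stage: deploy
  when: manual
  trigger:
    project: my-group/deployment-project   # placeholder project path
    branch: main                           # placeholder branch
    strategy: depend   # mirror the downstream pipeline's status in this job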
All approaches have pros and cons; simply pick the one that works best for you!

Related

Problem triggering a GitLab pipeline on changes to a file in an external repository

I have GitLab set up to pull a repository from Bitbucket.
I want the pipeline to run automatically when there is a pull request and when there are changes in files.
But it does not seem to work.
This is the rule I have set up in the pipeline:
rules:
  - if: $CI_PIPELINE_SOURCE == "external_pull_request_event"
    changes:
      - Dockerfile
    when: manual
Kindly help me debug this.
I want pipeline to run automatically
Then the first thing to change would be to remove the when: manual, which is used to require that a job does not run unless a user starts it.
The combination you used (rules / if / changes / when: manual) appears in the GitLab documentation under "Complex rules", but it does not fit what you expressed in your question.
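In other words, keeping the rule but dropping the manual gate should give the automatic behaviour you asked for:
rules:
  - if: $CI_PIPELINE_SOURCE == "external_pull_request_event"
    changes:
      - Dockerfile
    # no `when:` here: the default `when: on_success` applies,
    # so the job runs automatically when both conditions match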

Is there a way to trigger the pipeline when manually creating a release?

I am currently creating a GitLab pipeline in a 3-system environment (dev, qas, prd). I decided to use a trunk-based pipeline where my main branch is my trunk and everything else branches off from there. Since I also want the 3 stages to be separate from each other and only allow certain members to deploy something to prd, I thought of a pipeline like the following:
DEV: A developer should be able to push code no matter what and how. The only thing that matters is that the code can be built and is correctly tested.
QAS: Here I want to know which version is currently present; therefore I want to use tags in GitLab.
PRD: I only want certain members to be able to push to prd, and I also want a history, release notes, and so on to be published. Therefore I want to use the release mechanism in GitLab to deploy everything to prd.
Therefore I created a pipeline looking something like the following:
stages:
  - build
  - test
  - deployDEV
  - deployQAS
  - deployPRD

build:
  stage: build
  # build everything

test:
  stage: test
  # test everything

deployDEV:
  stage: deployDEV
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
  script:
    # deploy code to DEV

deployQAS:
  stage: deployQAS
  rules:
    - if: '$CI_COMMIT_TAG && $CI_DEPLOY_FREEZE == null'
      when: manual
  script:
    # deploy QAS

deployPRD:
  stage: deployPRD
  rules:
    - if: % new Release is created manually trigger this step % # currently not working
  script:
    # deploy PRD
As you can see in my script above, I am not able to push the code to production, since the pipeline is not triggered when a release is created manually. Therefore I wanted to know if this is the default behaviour, or if I have missed something in the documentation?
I think it would be really great if this were possible, because then I would be able to test all my versions in the qas environment, and if one is the version to be released, I would simply make a new release with it, which then gets deployed.
What I already tried was going through the documentation here: https://docs.gitlab.com/ee/user/project/releases/. There you can only create releases; nothing says that these releases trigger a pipeline.

GitLab CI job matrix with when: manual requires each combination to be manually triggered

So I've got a problem where a GitLab CI job matrix is required to run manually. But when I use when: manual, or a set of rules to the same effect, each combination in the matrix gets its own button and can be triggered independently.
This isn't ideal, because it allows the user to trigger one job but not all of them, so some redundantly deployed servers might have their software updated because somebody pushed the button, while another server in the matrix is not updated.
So what I want, or need, is for the entire matrix to be manually triggered at once, instead of one by one. Does anybody know how I can do that?
This isn't really an answer, but it's a workaround for anybody else who is having the same problem. If a proper solution shows up in the future, let me know.
I wrote this on a GitLab issue I found here: https://gitlab.com/gitlab-org/gitlab/-/issues/330013
Here is what I found I could do:
stages:
  - approval
  - deploy

approval:
  stage: approval
  script:
    - echo ">>>>>>>>>>>>>>> Start DEPLOYMENT >>>>>>>>>>>>>>>"
  when: manual
  allow_failure: false

deploy:
  stage: deploy
  dependencies:
    - approval
  variables:
    SOMETHING: value
  parallel:
    matrix:
      - ANOTHER:
          - this
          - that
This gives you a single button in a prior stage; because allow_failure: false blocks the pipeline until that manual job runs, one click releases the entire matrix in the deploy stage at once.
I would prefer an option on the parallel:matrix object to configure whether to deploy one by one or the entire group, but I don't have any suggestions for how to do this yet.

Can the build policies on a pull request in Azure DevOps use the YAML files that are updated in said PR?

We have all our pipelines checked into code, but if we open a PR with changes in those pipelines, the PR build policies will run with the YAML files in master, not the ones that we want to check into master. It's basically a deadlock.
Say you want to remove a validation that makes all your PRs fail, so you make a PR, but you can't merge it because the build policies fail :P
PS: I know I can remove the policies, complete the merge, and add the policies back in as a manual job, but that is not really a good solution.
Create a separate YAML pipeline with the pre-merge build steps, and set that pipeline in the PR build policies. It will always run the code from the current branch that the PR is created from.
We do it like this (all in the same repo):
build_steps.yml - YAML template with the build steps
azure-pipelines.yml - main pipeline that references build_steps.yml to build the project
pre_merge.yml - secondary pipeline, run only by PR validation, that also references build_steps.yml, so there are no differences in the build and no second place to update if something changes
The whole YAML definition:
#pre_merge.yml
trigger: none # Pipeline should never trigger on any branches. Only invoked by the policy.

variables:
  - name: system.buildConfiguration
    value: 'Release'
  - name: system.buildPlatform
    value: 'win10-x86'
  - name: system.debug
    value: 'false'

pool:
  vmImage: 'windows-latest'

name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r)

steps:
  - template: build_steps.yml
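For completeness, build_steps.yml only needs to hold the steps themselves. A minimal sketch; the actual build commands are an assumption, since the answer does not show them:
# build_steps.yml
steps:
  - script: dotnet build --configuration $(system.buildConfiguration)
    displayName: 'Build'   # hypothetical build step; substitute your real build commands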
Then, in the PR build policies, you set this pre_merge pipeline as the build validation.
All of this also applies to classic pipelines: you would create a separate pre-merge build pipeline that references a task group containing the steps used in the main build pipeline. In both cases you don't have to use a template or task group and can create the steps manually instead, but if the build changes in the future you'll have two places to update.
Just throwing this out there as an alternative. If desired, you can maintain one YAML pipeline that does both the pre-merge validation and the deployment.
You would want to create a variable that detects whether the pipeline is running against the master/main branch, similar to:
variables:
  IsMainBranch: $[eq(variables['Build.SourceBranch'], 'refs/heads/master')]
From there, the variable needs to become a condition on the subsequent jobs/stages. Personally I do not like this approach, as it limits the portability of the pipelines and interjects another variable to account for; however, it felt fair to provide an alternative option.
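A minimal sketch of how that variable could gate a stage (the stage and job names here are made up for illustration):
stages:
  - stage: Deploy
    condition: and(succeeded(), eq(variables.IsMainBranch, 'true'))   # skipped unless on master/main
    jobs:
      - job: DeployJob
        steps:
          - script: echo "deploying..."   # placeholder for the real deployment steps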

How to use the same ADO pipeline for deploying to dev, staging and production

I'm using ADO to trigger tests and deployments to Google App Engine when I push to GitHub. Note that I use the same YAML file to run 2 stages, the first being Test and the second, Deploy. The reason I am not using release pipelines is that the docs suggest release pipelines are the "classic" way of doing things, and besides, having the deploy code in my YAML means I can version-control it (as opposed to using ADO's UI).
I have already set up the 2 stages and they work perfectly fine. However, the deploy right now only works for my dev environment. I wish to use the same stage to deploy to my other environments (staging and production).
Here's what my .yaml looks like:
trigger:
  # list of triggers

stages:
  - stage: Test
    pool:
      vmImage: 'ubuntu-latest'
    jobs:
      - job: Job1
      - job: Job2
      - job: Job3
  - stage: Deploy
    jobs:
      - job: DeployToDev
        displayName: 'Deploy to dev'
I could potentially deploy to staging and production by copying the Deploy to dev job and creating similar jobs for staging and production, but I really want to avoid that, because the deployment jobs are pretty huge and I'd hate to maintain 3 separate jobs that all do the same thing, albeit slightly differently. For example, one of the things the dev job does is copy app.yaml files from <repo>/deploy/dev/module/app.yaml. For a deploy to staging, I'd need the same job but using the staging directory: <repo>/deploy/staging/module/app.yaml. This is a direct violation of the DRY (don't repeat yourself) principle.
So I guess my questions are:
Am I doing the right thing by choosing not to use release pipelines and instead having the deploy code in my YAML?
Does my azure-pipelines.yaml format look okay?
Should I use variables, set them to dev, staging, or production in a dependsOn job, and then use these variables in the directories I copy the files from? If I do end up using this, though, setting the variables will be a challenge.
Deploying from the YAML is the right approach; you could also leverage the new "Environment" concept to add special gates and approvals before deploying. See the documentation to learn more.
As @mm8 said, using templates here would be a clean way to manage the deployment steps and keep them consistent between environments. You will find more information about them here.
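To make that concrete, the shared deployment could live in a parameterized job template, so each environment differs only by a parameter. A minimal sketch, with assumed file and parameter names (deploy-jobs.yml, environment), based on the directory layout described in the question:
# deploy-jobs.yml (hypothetical template)
parameters:
  - name: environment
    type: string

jobs:
  - job: Deploy_${{ parameters.environment }}
    steps:
      # copy the environment-specific app.yaml, per the layout in the question
      - script: cp deploy/${{ parameters.environment }}/module/app.yaml .
        displayName: 'Copy app.yaml for ${{ parameters.environment }}'
The Deploy stage would then instantiate the template once per environment:
- stage: Deploy
  jobs:
    - template: deploy-jobs.yml
      parameters:
        environment: dev
    - template: deploy-jobs.yml
      parameters:
        environment: staging
    - template: deploy-jobs.yml
      parameters:
        environment: production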
