I have been tasked with presenting developers with a manually triggered pipeline that lets them choose which integration environment to deploy to.
For instance, we have 3 integration environments named 'savanna', 'desert' and 'jungle' available to integrate with. I would like the pipeline to halt after its initial steps have completed and offer the developer the choice of which integration environment to deploy to. That way we can work on different aspects of the stack and test changes independently of other developers' work.
Is it possible to achieve this functionality with Bitbucket Pipelines? From what I can tell, Pipelines only supports deployments to static, ascending environments. Am I missing something?
You can do the "choose your path" pipeline by making the choice steps parallel and manually triggered.
The limitations, however, are that no steps can come after them, since all parallel steps must complete before the pipeline can continue, and that you cannot use Bitbucket deployments on those steps, as parallel deployment steps are not allowed even when they are all manual.
An alternative would be to define a custom pipeline for each option, so developers choose which environment's pipeline to run.
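A minimal sketch of that alternative, assuming the three environments from the question and a placeholder deploy script (the script contents are illustrative, not the asker's actual deployment commands):

pipelines:
  custom:
    deploy-to-savanna:
      - step:
          name: Deploy to Savanna
          deployment: savanna
          script:
            - ./deploy.sh savanna   # placeholder deploy command
    deploy-to-desert:
      - step:
          name: Deploy to Desert
          deployment: desert
          script:
            - ./deploy.sh desert    # placeholder deploy command
    deploy-to-jungle:
      - step:
          name: Deploy to Jungle
          deployment: jungle
          script:
            - ./deploy.sh jungle    # placeholder deploy command

Custom pipelines don't run automatically on push; a developer triggers them from the Branches or Commits view (or via the API), which makes the per-environment choice explicit.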
While there are 3 underlying "static, ascending" environment categories, you can define up to 50 arbitrary environments in those categories. See https://bitbucket.org/blog/additional-deployment-environments-for-bitbucket-pipelines
Then, in your scripts, you can use the BITBUCKET_DEPLOYMENT_ENVIRONMENT variable (a URL-friendly version of the environment name), plus any other per-environment variables you set up in the web UI.
Bitbucket will keep track of what was, and currently is, deployed to each environment, plus the diff that would be deployed if a given commit were deployed.
Just keep in mind that if you offer parallel manual deployments to the staging environments, the pipeline cannot continue until every step in that block has been triggered, so I'd include the production deployment in that final parallel block.
definitions:
  yaml-anchors:
    - &deploy-step
      script:
        - deploy to $BITBUCKET_DEPLOYMENT_ENVIRONMENT

pipelines:
  branches:
    main:
      - step:
          <<: *deploy-step
          name: Deploy Test
          deployment: test
          trigger: automatic
      - parallel:
          - step:
              <<: *deploy-step
              name: Deploy Savanna
              deployment: savanna
              trigger: manual
          - step:
              <<: *deploy-step
              name: Deploy Desert
              deployment: desert
              trigger: manual
          - step:
              <<: *deploy-step
              name: Deploy Jungle
              deployment: jungle
              trigger: manual
          - step:
              <<: *deploy-step
              name: Deploy Production
              deployment: production
              trigger: manual
Note, however, that this exact layout won't work; Bitbucket rejects it with a configuration error:

"Parallel steps can only contain deployments to a single environment type. Separate different types into their own parallel set."

See https://jira.atlassian.com/browse/BCLOUD-18754.
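A layout that respects this restriction, sketched under the assumption that savanna, desert and jungle are all defined under the same environment type (e.g. Staging) while production is a Production-type environment, keeps the three integration deployments in one parallel block and moves the production deployment into its own step:

      - parallel:
          - step:
              <<: *deploy-step
              name: Deploy Savanna
              deployment: savanna
              trigger: manual
          - step:
              <<: *deploy-step
              name: Deploy Desert
              deployment: desert
              trigger: manual
          - step:
              <<: *deploy-step
              name: Deploy Jungle
              deployment: jungle
              trigger: manual
      - step:
          <<: *deploy-step
          name: Deploy Production
          deployment: production
          trigger: manual

The trade-off, as noted above, is that the production step only becomes reachable once every step in the preceding parallel block has been triggered.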
Related
I have a pipeline which builds and deploys my application to the staging environment.
I want to create a job which can deploy my application to the production environment, but it should be run manually.
In theory I see 2 options:
Create a separate .deploy-to-prod.yml pipeline with a when: manual condition and run it via the "play" button. As far as I understand this is impossible, because I cannot run an arbitrary pipeline in GitLab; it always runs the default one. Please correct me if I am wrong.
Hence only the 2nd option is available to me: I need to create an additional trigger job in my default .gitlab-ci.yml and add conditions: if execution is manual and some variable is set, or environment = production, then run the deploy to prod; otherwise the standard job should be executed.
An example of the 2nd approach could look like this:
manual-deploy-to-prod:
  stage: deploy
  trigger:
    include:
      - '.deploy-to-prod.yml'
    strategy: depend
  rules:
    - if: $MANUAL_DEPLOY_VERSION != null
      when: manual
...while in the standard pipeline trigger jobs I should add the following rule to avoid them executing along with the production deployment:

rules:
  - if: $MANUAL_DEPLOY_VERSION == null
Is this a good approach?
Is it correct that only 2nd option is available for me?
What is the best practice for creating a manual production deployment pipeline?
"Best" is a very subjective term, so it's difficult to tell you which one is best for your use-case. Insteaad, let me lay out a couple of options for how you could achieve what you're attempting to do:
You could update your deploy process to use deploy.yml, then use the trigger keyword in your CI file to trigger that job for different environments. You can then use the rules keyword to control when and how different jobs are triggered. This has the benefit of re-using your deployment process that you're using for your staging environment, which is nice and DRY and ensures that your deployment is repeatable across environments. This would look like this:
deploy-to-staging:
  stage: deploy
  trigger:
    include: deploy.yml
    strategy: depend
  when: on_success

deploy-to-production:
  stage: deploy
  trigger:
    include: deploy.yml
    strategy: depend
  when: manual
You could use the rules keyword to include your deploy-to-production job only when the job is kicked off manually from the UI. The rest of your pipeline would still execute (unless you explicitly tell it not to), but your deploy-to-prod job would only show up if you manually kicked the pipeline off. This would look like this:
deploy-to-prod:
  stage: deploy
  script:
    - echo "I'm deploying!"
  rules:
    - if: $CI_PIPELINE_SOURCE == "web"
      when: on_success
    - when: never
You could use a separate project for your deployment pipeline. This pipeline can retrieve artifacts from your other project, but would only run its CI when you manually click on run for that project. This gives you really nice separation of concerns because you can give a separate set of permissions to that project as opposed to the code project, and it can help keep your pipeline clean if it's really complicated.
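A minimal sketch of that separation, assuming a code project at group/app whose build job publishes artifacts (the project path, job name and deploy command are placeholders; cross-project artifact downloads via needs require a paid GitLab tier):

# .gitlab-ci.yml in the separate deployment project
deploy-to-prod:
  stage: deploy
  needs:
    - project: group/app       # placeholder: path of the code project
      job: build               # placeholder: job that produced the artifacts
      ref: main
      artifacts: true
  script:
    - ./deploy.sh production   # placeholder deploy command
  when: manual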
All approaches have pros and cons, simply pick the one that works best for you!
We are using GitLab CI/CD pipelines extensively for DevOps, configuration management, etc. There is a long list of GitLab runners which is currently defined in gitlab-ci.yml in every project. By using "tags" and "only" we then define when to run jobs on different runners. The question is how to put the runner lists in separate files and then include them in the required places. This may now be possible, as GitLab has evolved fast during the last few years. First we started with the extends keyword:
gitlab-ci.yml
.devphase:
  stage: build
  when: manual
  only:
    - dev
  ...

Updating development environment (runner1):
  extends: .devphase
  tags:
    - runner1

Updating development environment (runner2):
  extends: .devphase
  tags:
    - runner2
...
This made the gitlab-ci.yml easier to read, as we could list the runners separately at the end of the configuration file. However, defining all these parameters for every runner is not very efficient or elegant. Somewhat later GitLab introduced the keywords "parallel" and "matrix". Now we can do this:
gitlab-ci.yml
.devphase:
  stage: build
  only:
    - dev
  tags:
    - ${DEV_RUNNER}
  ...

Updating development environment:
  extends: .devphase
  parallel:
    matrix:
      - DEV_RUNNER: runner1
      - DEV_RUNNER: runner2
...
Only one extends section and then a list of runners, which is pretty nice already. The natural follow-up question is how one could put this list into a separate configuration file so that it could be easily copied and maintained without touching the main gitlab-ci.yml. I'm not sure how and where to use the include keyword, or whether it is even possible. For example, this doesn't work (the pipeline doesn't start):
gitlab-ci.yml:
.devphase:
  stage: build
  parallel:
    matrix:
      include: gitlab-ci-dev-runners.yml
  only:
    - dev
  tags:
    - ${DEV_RUNNER}
...
gitlab-ci-dev-runners.yml
- DEV_RUNNER: runner1
- DEV_RUNNER: runner2
...
Is GitLab interpreting includes and variables in some "incompatible" order or something? Are there any fellow coders who have faced this same problem?
I have two sub projects in my repo.
The first one is .NET 5, the second is a SharePoint solution. Both projects are located in one branch.
How do I configure the pipeline for only one project?
I need to run the SAST test only for the .NET 5 project. Currently both projects are being tested.
In my .gitlab-ci.yml:
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test

sast:
  stage: test
  tags:
    - docker
First off - putting multiple projects into a single repo is a bad idea.
Second, the GitLab SAST template is meant to be "simple to use", but it's very complex under the hood and utilizes a number of different SAST tools.
You do have some configuration options available via variables, but those are mostly specific to the language/files you are scanning. For some languages there are variables available that can limit which directories/paths are being scanned.
Since you haven't provided enough information to make a specific recommendation, you can look up the variables you need for your project(s) in the official docs here: https://docs.gitlab.com/ee/user/application_security/sast/#available-cicd-variables
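For example, one of those variables is SAST_EXCLUDED_PATHS, which excludes matching paths from the analyzers' results. A minimal sketch, assuming the SharePoint solution lives in a folder named SharePointSolution (adjust to your actual repository layout):

include:
  - template: Security/SAST.gitlab-ci.yml

variables:
  # Assumed folder name; excludes the SharePoint project from SAST results
  SAST_EXCLUDED_PATHS: "SharePointSolution"

sast:
  stage: test
  tags:
    - docker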
If each project can have specific branch names, you can use the only/except or rules keywords to determine when jobs should run.
sast:
  stage: test
  tags:
    - docker
  only:
    - /^sast-.*$/
This works for specific jobs; I'm not sure about the whole pipeline.
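If whole-pipeline control is what's needed, a different keyword, workflow:rules, decides whether a pipeline is created at all. A sketch, assuming a purely illustrative sast-* branch naming convention:

workflow:
  rules:
    # Only create pipelines for branches matching the assumed sast-* convention
    - if: $CI_COMMIT_BRANCH =~ /^sast-.*$/
    - when: never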
I'm using ADO to trigger tests and deployments to Google App Engine when I push to GitHub. Note that I use the same .yaml file to run 2 stages; the first being Test and the second, Deploy. The reason I am not using release pipelines is that the docs suggest release pipelines are the "classic" way of doing things, and besides, having the deploy code in my yaml means I can version control it (as opposed to using ADO's UI).
I have already set up the 2 stages and they work perfectly fine. However, the deploy right now only works for my dev environment. I wish to use the same stage to deploy to my other environments (staging and production).
Here's what my .yaml looks like:
trigger:
  # list of triggers

stages:
  - stage: Test
    pool: 'ubuntu-latest'
    jobs:
      - job: 1
      - job: 2
      - job: 3
  - stage: Deploy
    jobs:
      - job: 'Deploy to dev'
I could potentially do a deploy to staging and production by copying the Deploy to dev job and creating similar jobs for staging and production, but I really want to avoid that because the deployment jobs are pretty huge and I'd hate to maintain 3 separate jobs that all do the same thing, albeit slightly differently. For example, one of the things that the dev jobs does is copy app.yaml files from <repo>/deploy/dev/module/app.yaml. For a deploy to staging, I'd need to have the same job, but use the staging directory: <repo>/deploy/staging/module/app.yaml. This is a direct violation of the DRY (don't repeat yourself) principle.
So I guess my questions are:
Am I doing the right thing by choosing not to use release pipelines and instead having the deploy code in my yaml?
Does my azure-pipelines.yaml format look okay?
Should I use variables, set them to either dev, staging, and production in a dependsOn job, then use these variables to replace the directories where I copy the files from? If I do end up using this though, setting the variables will be a challenge.
Deploying from the yaml is the right approach; you could also leverage the new "Environments" concept to add gates and approvals before deploying. See the documentation to learn more.
As @mm8 said, using templates here would be a clean way to manage the deployment steps and keep them consistent between environments. You will find more information about it here.
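A minimal sketch of that template approach, assuming a template file named deploy-jobs.yml; the file name, directory layout and deploy commands are illustrative, not taken from the question:

# deploy-jobs.yml
parameters:
  - name: environment
    type: string

jobs:
  - deployment: Deploy_${{ parameters.environment }}
    environment: ${{ parameters.environment }}
    strategy:
      runOnce:
        deploy:
          steps:
            # Copy the environment-specific app.yaml, then deploy
            - script: cp deploy/${{ parameters.environment }}/module/app.yaml .
              displayName: Copy app.yaml
            - script: echo "gcloud app deploy goes here"
              displayName: Deploy to App Engine

# azure-pipelines.yml (Deploy stage)
stages:
  - stage: Deploy
    jobs:
      - template: deploy-jobs.yml
        parameters:
          environment: dev
      - template: deploy-jobs.yml
        parameters:
          environment: staging
      - template: deploy-jobs.yml
        parameters:
          environment: production

Each environment then reuses the same deployment steps, and the Environments feature gives you approvals and checks per environment without duplicating yaml.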
I have a large number of microservices (40+) with identical pipeline requirements (currently very simple: build, test, deploy). Each of them lives in its own repository. Obviously, I would like to avoid having to change my bitbucket-pipelines.yml file in 40 places as we improve our pipeline. I know Travis and GitLab both offer an import/include feature that allows you to include a 'master' yml file. Is there anything similar for Bitbucket Pipelines? If not, what alternatives are viable?
Thanks!
You should be able to do that by using a custom Bitbucket Pipe. For example, you can build a custom pipe to deploy your code and add it to your bitbucket-pipelines.yml. Here is an example:
pipelines:
  default:
    - step:
        name: "Deployment step"
        script:
          - pipe: your-bitbucket-account/deploy-pipe:1.0.0
            variables:
              KEY: 'value'
Here is a link to an article that explains how to write a custom pipe https://bitbucket.org/blog/practice-devops-with-custom-pipes-reusing-ci-cd-logic
This works for me; I had to set up a Bitbucket pipeline for a solution with multiple projects.
You will have to add custom: after the pipelines: attribute, before the name of the pipeline:
pipelines:
  custom:
    deployAdminAPI:
      - step:
          name: Build and Push AdminAPI
          script: enter your script
    deployParentsAPI:
      - step:
          name: Build and Push ParentsAPI
          script: enter your script
Here is the Link to the Solution on Bitbucket