I would like to deploy my application to AWS with a blue-green environment. I can see how AWS CodePipeline integrates with blue-green deployment, but I can't find anything equivalent for Bitbucket.
How can I implement blue-green deployment with Bitbucket Pipelines?
Practically speaking, it's not really possible, and it makes little sense.
Bitbucket Pipelines is a CI tool, not a CD tool. We can still perform deployments from it, but that amounts to little more than shell-script execution.
Also, even as a CI tool it is quite limited in features, because it is still fairly new to the market.
The corner case for us here is performing a rollback based on some condition; that is not possible in Bitbucket Pipelines. We can make the rollback manual or semi-automatic by executing sequential tasks in Bitbucket Pipelines, but again that is essentially shell-script execution, which we could do from the command line without Bitbucket.
Please note that blue-green deployment assumes the coexistence of different versions of your product at the same time. That raises many questions about your product and its deployment, and it does not depend on the CI/CD tool you use.
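If you do want to drive a blue-green deployment from Bitbucket Pipelines anyway, the usual pattern is exactly that kind of script execution: hand the work off to AWS CodeDeploy. A minimal sketch of a bitbucket-pipelines.yml, assuming an application and a deployment group already configured for blue/green in CodeDeploy (the application, deployment group, and bucket names are placeholders):

image: atlassian/pipelines-awscli   # any image with the AWS CLI will do

pipelines:
  branches:
    main:
      - step:
          name: Hand off to CodeDeploy for blue/green
          deployment: production
          script:
            # Credentials come from repository variables:
            # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION
            - aws deploy push --application-name my-app --s3-location s3://my-deploy-bucket/my-app.zip --source .
            - aws deploy create-deployment --application-name my-app --deployment-group-name my-app-bluegreen --s3-location bucket=my-deploy-bucket,key=my-app.zip,bundleType=zip

Any conditional rollback logic would still have to live in those scripts (or in CodeDeploy's own auto-rollback settings), which is the limitation described above.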
I am working on a CI/CD pipeline on AWS. Per the requirements, I have to use GitLab as the repository and Blue/Green Deployment as the deployment method for ECS Fargate. I would like to use CodeDeploy (already present in the CloudFormation template) and trigger it on each commit pushed to GitLab. I cannot use CodePipeline in my region, so CodePipeline is not an option for me.
I have read a lot of docs and web pages about ECS Fargate and B/G deployment, but not much of it seems to help. Does anyone have related experience?
If your goal is Zero Down Time, ECS already gives you that by default, although not with what I'd call a Blue/Green deployment but rather a rolling upgrade. You can control the percentage of healthy instances, ensuring no downtime, with ECS draining connections from the old tasks while it provisions new tasks running the new version.
Your application must be able to handle this 'duality' of versions, e.g. on the data layer, UX, etc.
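To illustrate the rolling-upgrade behaviour, here is a minimal CloudFormation sketch of the relevant settings on an ECS service (the resource names and values are placeholders, and most service properties are omitted). With MinimumHealthyPercent at 100 and MaximumPercent at 200, ECS starts new tasks before draining old ones, so both versions briefly coexist:

Resources:
  MyFargateService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref MyCluster          # placeholder cluster
      TaskDefinition: !Ref MyTaskDef   # placeholder task definition
      DesiredCount: 2
      DeploymentConfiguration:
        MinimumHealthyPercent: 100     # never drop below the desired count during a deploy
        MaximumPercent: 200            # allow up to double the desired count while new tasks start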
If Blue/Green is an essential requirement, you'll have to leverage CodeDeploy and an ALB with ECS. Without going into implementation details, here are the highlights:
- You have two sets of Task Definitions and Target Groups (tied to one ALB).
- CodeDeploy deploys the new task definition, which is tied to the green Target Group, leaving blue as is.
- Test your green deployment by configuring a test listener on the new target group.
- When testing is complete, switch all traffic (or shift it incrementally) from blue to green (ALB rules/weighted target groups).
- Repeat the same process on the next update, except this time you'll be going from green back to blue.
Parts of what I've described are handled by CodeDeploy for you, but hopefully this gives you an idea of the solution architecture and hence how to automate it; the AWS documentation on ECS Blue/Green deployments covers the details.
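As a concrete artifact, the piece you supply to CodeDeploy is an AppSpec file naming the task definition and the container/port that sit behind the two target groups. A minimal sketch (the ARN, container name, and port are placeholders; the target groups, listeners, and traffic-shifting settings live on the CodeDeploy deployment group, not in this file):

version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "arn:aws:ecs:eu-west-1:123456789012:task-definition/my-app:42"   # placeholder task definition ARN
        LoadBalancerInfo:
          ContainerName: "web"   # container that receives traffic from the target groups
          ContainerPort: 80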
We have a need to auto-deploy our projects to various server instances at the time their corresponding branches are merged back to the develop branch.
I am uncertain how to address this use case using GitLab and pipelines, etc. Since it is triggered by the merge event, I don't know for sure 1) how to listen for those events, and 2) where/how to capture the steps (currently manual bash scripts) that perform the deployment activities and the post-deployment activities (e.g. starting containers).
I am basically familiar with AutoDevOps, but I do not need anything too fancy for the time being. We are not using Kubernetes, and my understanding is that you need Kubernetes for AutoDevOps.
I would be grateful for any general or even specific guidance on how to proceed. Thanks!
GitLab handles that for you automatically; you just need to define a file called .gitlab-ci.yml in the root of your project and specify the stages and scripts.
Refer:
https://docs.gitlab.com/ee/ci/merge_request_pipelines/
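For the use case above (deploy when branches are merged back to develop), a minimal sketch of such a file might look like this; the scripts are stand-ins for your existing bash deployment steps:

stages:
  - deploy

deploy_develop:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'   # a merge shows up as a new pipeline on develop
  script:
    - ./scripts/deploy.sh        # placeholder for your existing deployment script
    - ./scripts/post_deploy.sh   # e.g. starting containers

Because merging a branch creates a commit on develop, this job runs on every merge without any extra event listener.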
Is there a parameter or a setting for running pipelines in sequence in Azure DevOps?
I currently have a single dev pipeline in my Azure DevOps project. I use this for infrastructure because I build, test, and deploy using scripts in multiple stages of my pipeline.
My issue is that my stages are sequential, but my pipelines are not. If I run my pipeline multiple times back-to-back, agents will be assigned to every run and my deploy scripts will therefore run in parallel.
This is an issue if our developers commit close together because each commit kicks off a pipeline run.
You can reduce the number of parallel jobs to 1 in your project settings.
I swear there was a setting on the pipeline as well, but I can't find it. You could also make an API call as part of your build/release to pause and start the pipeline: pause it as the first step and start it as the last step. This will ensure the active pipeline is the only one running.
There is a new update to Azure DevOps that will allow sequential pipeline runs. All you need to do is add a lockBehavior parameter to your YAML.
https://learn.microsoft.com/en-us/azure/devops/release-notes/2021/sprint-190-update
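A minimal sketch of what that looks like in the pipeline YAML (the stage, environment, and script names are placeholders; the environment also needs an Exclusive Lock check for the lock to be taken):

lockBehavior: sequential           # queued runs take the lock in order instead of cancelling older runs
stages:
  - stage: Deploy
    jobs:
      - deployment: DeployInfra
        environment: infra-dev     # placeholder environment with an Exclusive Lock check configured
        strategy:
          runOnce:
            deploy:
              steps:
                - script: ./deploy.sh   # placeholder for your deploy script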
Bevan's solution can achieve what you want, but it has a disadvantage: you need to change the parallel-job count back and forth manually if you sometimes need parallel jobs and other times need runs in sequence. This is a little inconvenient.
Until now there is no direct configuration to prevent a pipeline from running while another run is in progress. But there is a workaround: use a demand to limit the agent used. You can set the demand in the pipeline, as shown below.
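A minimal sketch of such a demand (the pool and agent names are assumptions; demands only apply to self-hosted pools):

pool:
  name: Default                          # placeholder self-hosted pool name
  demands:
    - Agent.Name -equals MyBuildAgent01  # pin every run of this pipeline to one specific agent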
After setting it, you don't need to change the parallel-job count back and forth any more. Just define the demand to limit the agent used; when the pipeline runs, it will only pick up the matching agent to execute it.
But this still has a disadvantage: it also limits job parallelism within the pipeline.
I think this feature should be expanded in Azure DevOps so users have a better experience. You can raise the suggestion in the official Suggestion forum and vote for it; the product group and PMs will review it and consider taking it into the next quarter's roadmap.
I have set up a PR Pipeline in Azure. As part of this pipeline I run a number of regression tests. These run against a regression test database - we have to clear out the database at the start of the tests so we are certain what data is in there and what should come out of it.
This is all working fine until the pipeline runs multiple times in parallel - then the regression database is being written to multiple times and the data returned from it is not what is expected.
How can I stop a pipeline running in parallel - I've tried Google but can't find exactly what I'm looking for.
If the pipeline is already running, the next build should wait (not for all pipelines; I want to set this on a single pipeline). Is this possible?
Depending on your exact use case, you may be able to control this with the right trigger configuration.
In my case, I had a pipeline scheduled to kick off every time a Pull Request is merged to the main branch in Azure. The pipeline deployed the code to a server and kicked off a suite of tests. Sometimes, when two merges occurred just minutes apart, the builds would fail because they both used a shared resource that required synchronisation.
I fixed it by Batching CI Runs
I changed my basic config
trigger:
- main
to use the more verbose syntax allowing me to turn batching on
trigger:
  batch: true
  branches:
    include:
    - main
With this in place, a new build will only be triggered for main once the previous one has finished, no matter how many commits are added to the branch in the meantime.
That way, I avoid having too many builds being kicked off and I can still use multiple agents where needed.
One way to solve this is to model your test regression database as an "environment" in your pipeline, then use the "Exclusive Lock" check to prevent concurrent "deployment" to that "environment".
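A minimal sketch of the pipeline side of this, assuming a fake environment named regression-db with the Exclusive Lock check added to it in the UI:

jobs:
  - deployment: run_regression_tests
    environment: regression-db   # fake environment representing the shared regression database
    strategy:
      runOnce:
        deploy:
          steps:
            - script: ./run-regression-tests.sh   # placeholder for the actual test run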
Unfortunately this approach comes with several disadvantages inherent to "environments" in YAML pipelines:
- you must set up the check manually in the UI; it's not controlled in source code.
- it will only prevent that particular deployment job from running concurrently, not the entire pipeline.
- the fake "environment" you create will appear alongside all other environments, cluttering the environment view if you happen to use environments for "real" deployments. This is made worse by that view being a big sack of all environments, with no grouping or hierarchy.
Overall the initial YAML reimplementation of Azure Pipelines mostly ignored the concepts of releases, deployments, environments. A few piecemeal and low-effort aspects have subsequently been patched in, but without any real overarching design or apparent plan to get to parity with the old release pipelines.
You can use the "Trigger Azure DevOps Pipeline" extension by Maik van der Gaag.
It needs to be added to your DevOps organisation and configured at the end of the main pipeline, pointing to your test pipeline.
You can find more details on Maik's blog.
According to your description, you could use your own self-hosted agent.
Simply deploy your own self-hosted agent, and make sure its environment is the same as your local development environment.
In this situation, since your agent pool only has one available build agent, only one build will run at a time when multiple builds are triggered. The others will stay in the queue in order, and the next build will not start until the prior build has finished.
For your other pipelines, just keep using the hosted agent pool.
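A minimal sketch, assuming a self-hosted pool named RegressionPool that contains exactly one agent; only the pipeline that must not run in parallel points at it:

pool:
  name: RegressionPool   # hypothetical single-agent, self-hosted pool; other pipelines keep the hosted pool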
More and more server-side file deployments are handled using git. It's nice, and there are plenty of guides available on how to set up your deployment workflow with git, rsync and others.
However, I'd like to ask what the cleanest way is to set up deployment rollbacks, so that:
- every time you deploy, you record the latest state before the deployment (no need to manually read through logs to find the commit);
- you know which git commands to use to roll back to the prior (recorded) state in case the deployment has unforeseen consequences.
The scope of the question is Linux servers, shell scripting and command line git.
Note that there is no general solution to this problem. I would propose two approaches.
The first one requires using Fabric and some deep thinking about how to handle the whole deployment process. For a Django site I maintain, I wrote a Fabric script that deploys to staging on every git commit. Deploying from staging to production is then a simple Fabric command that copies all the files to a new folder (incrementing the version by 1), for example from production/v55/ to production/v56/ (it also takes backups and runs migrations). If anything goes wrong, the rollback command restores the backups and starts the production environment from the production/v55 folder. Less talk, more code: https://github.com/kiberpipa/Intranet/blob/master/fabfile.py
The second option requires more reading and has a bigger learning curve, but also provides a cleaner solution. As Lenin suggested using a framework with declarative configuration, I would propose going a step further and learning a Linux distribution with declarative configuration - http://nixos.org/. NixOS has built-in capabilities for distributed software deployment (including rollbacks) and also tools to deploy from your own machine: https://github.com/NixOS/nixops. See also the thesis on Distributed Software Deployment, which also covers your questions (as part of a much bigger problem): http://www.st.ewi.tudelft.nl/~sander/index.php/phdthesis
Please have a look at Capistrano and Chef, which require Ruby/RoR support but are great deployment tools. Python's Fabric is also an awesome tool.