Trigger CodeDeploy in GitLab?

I am working on a CI/CD pipeline on AWS. Given the requirements, I have to use GitLab as the repository and Blue/Green deployment as the deployment method for ECS Fargate. I would like to use CodeDeploy (preset in the CloudFormation template) and trigger it on each commit pushed to GitLab. I cannot use CodePipeline in my region, so CodePipeline is not an option for me.
I have read a lot of docs and webpages related to ECS Fargate and B/G deployment, but not much of the information has helped. Does anyone have related experience?

If your goal is zero downtime, ECS already gives you that by default, though not through what I'd call a Blue/Green deployment, but rather a rolling upgrade. You'll be able to control the percentage of healthy instances, ensuring no downtime, with ECS draining connections from the old tasks and provisioning new tasks with the new version (see the sketch below).
Your application must be able to handle this 'duality' in versions, e.g. on the data layer, UX etc.
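For reference, the knobs for that rolling behaviour are the service's deployment configuration. A hedged CLI sketch, with placeholder cluster and service names:

# Keep at least 100% of desired tasks healthy and allow up to 200% during a
# rolling deployment, so new tasks start before old ones are drained
aws ecs update-service \
  --cluster my-cluster --service my-service \
  --deployment-configuration "minimumHealthyPercent=100,maximumPercent=200"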
If Blue/Green is an essential requirement, you'll have to leverage CodeDeploy and an ALB with ECS. Without going into implementation details, here's the gist of it:
You have two sets of task definitions and target groups (tied to one ALB).
CodeDeploy deploys the new task definition, which is tied to the green target group, leaving blue as is.
Test your green deployment by pointing a test listener at the new target group.
When testing is complete, switch all (or incremental, weighted) traffic from blue to green via ALB rules or weighted targets.
Repeat the same process on the next update, except you'll be going from green back to blue.
Parts of what I've described are handled by CodeDeploy for you, but hopefully this gives you an idea of the solution architecture and hence how to automate it; see the ECS blue/green (B/G) deployment docs for the details.
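To connect this back to the original question, triggering CodeDeploy from GitLab on each push essentially means having a GitLab CI job call the AWS CLI. A minimal, hedged sketch of what that job's script could run follows; the application name, deployment group, and the taskdef.json/revision.json files are placeholders, and it assumes the CodeDeploy ECS application and deployment group already exist (e.g. from your CloudFormation template) and that the runner has AWS credentials:

# 1) Register the task definition revision that points at the newly pushed image
aws ecs register-task-definition --cli-input-json file://taskdef.json

# 2) Start the blue/green deployment; revision.json wraps an AppSpec that names
#    the task definition, container and port CodeDeploy should shift traffic to
DEPLOYMENT_ID=$(aws deploy create-deployment \
  --application-name my-ecs-app \
  --deployment-group-name my-ecs-dg \
  --revision file://revision.json \
  --query deploymentId --output text)

# 3) Optionally block the CI job until CodeDeploy finishes the traffic shift
aws deploy wait deployment-successful --deployment-id "$DEPLOYMENT_ID"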

Related

How many agents should I have?

I'm trying to build a branch-based GitOps declarative infrastructure for Kubernetes. I plan to create clusters on a cloud provider with Crossplane, and those clusters will be stored in GitLab. However, as I start building, I seem to be running into gitlab-agent sprawl.
Every application I will be deploying to each of my environments is stored in a separate git repo, and I'm wondering if I need a separate agent for each repo and environment. For example, I have my three clusters prod, stage, and dev, and my three apps, API, kafka, and DB. I've started with three agents per repo (gitlab-agent-api-prod, gitlab-agent-kafka-stage, ...), which seems a bit excessive. Do I really need 9 agents?
Additionally, I now have to install as many agents as I have apps onto each of my clusters, which already eats up significant resources. I'd imagine I can get away with one GitLab agent per cluster; I'm just not seeing how that is done. Any help would be appreciated!
PS: if anyone has a guide on how to automatically add GitLab agents to new clusters created with Crossplane, I'm all ears. Thanks!

Blue Green Deployment with AWS ECS

We are using ECS Fargate containers to deploy all of our services (~10) and want to follow Blue/Green Deployment.
We have deployed all the services under the BLUE flag, with target groups pointing to the services.
In CI/CD, new target groups are created with slightly different forwarding rules to allow testing without issues.
Now my system is running with two kinds of target groups, services and task definitions:
tg_blue, service_blue, task_blue → pointing to old containers and serving live traffic
tg_green, service_green, task_green → pointing to new containers and not yet receiving any traffic
All above steps are done in Terraform.
Now I want to switch the traffic, and here I am stuck: how do I switch the traffic, and what will the next deployment look like?
I would go for an AWS-native solution if there are no important reasons against it. What I have in mind is CodeDeploy, which switches between the target groups automatically.
Without CodeDeploy, you need to implement weighted balancing between the two target groups and adjust the weights yourself later on; that is extra work (a quick sketch follows below).
The whole flow is explained quite well in this YT video.
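If you do stay without CodeDeploy, the weighted balancing mentioned above boils down to rewriting the listener's forward action so that both target groups receive a share of the traffic. A hedged CLI sketch with placeholder ARNs (the same weights can equally be expressed in Terraform's aws_lb_listener forward block):

# Shift 10% of traffic to green while blue keeps 90%; repeat with new weights
# until green serves 100%, then the next release reuses blue as the idle colour
aws elbv2 modify-listener \
  --listener-arn "$LISTENER_ARN" \
  --default-actions '[{
    "Type": "forward",
    "ForwardConfig": {
      "TargetGroups": [
        {"TargetGroupArn": "'"$TG_BLUE_ARN"'",  "Weight": 90},
        {"TargetGroupArn": "'"$TG_GREEN_ARN"'", "Weight": 10}
      ]
    }
  }]'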

Is it possible to convert an aws codepipeline to bitbucket pipeline?

I would like to deploy my application to an AWS blue-green environment.
I can find AWS CodePipeline to integrate with the blue-green environment, but I can't find anything for Bitbucket.
How to implement blue-green deployment with Bitbucket Pipeline?
Practically, it's not possible and doesn't make much sense.
Bitbucket Pipelines is a CI tool, not a CD tool. We can still perform deployments there, but it amounts to shell script execution.
Also, even as a CI tool it's quite limited in features, because it's relatively new to the market.
The corner case for us here is performing a rollback based on some conditions, which is not possible in Bitbucket Pipelines.
We can make this rollback manual or semi-automatic by executing sequential tasks in Bitbucket Pipelines, but it will still be similar to shell script execution; we could do the same from the command line without Bitbucket (see the sketch below).
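For illustration, here is roughly what that manual or semi-automatic rollback amounts to, assuming (hypothetically) that the deployment itself is done through AWS CodeDeploy and the step has the deployment ID at hand; it is a plain CLI call whether it runs in a Bitbucket Pipelines step or in your own shell:

# Check the deployment status, then decide whether to roll traffic back
STATUS=$(aws deploy get-deployment --deployment-id "$DEPLOYMENT_ID" \
  --query 'deploymentInfo.status' --output text)
if [ "$STATUS" != "Succeeded" ]; then
  # stop the in-flight deployment and roll back to the previous (blue) version
  aws deploy stop-deployment --deployment-id "$DEPLOYMENT_ID" --auto-rollback-enabled
fi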
Please note that blue-green deployment assumes the coexistence of different versions of your product at the same time. That raises many questions about your product and its deployment, and it doesn't depend on the CI/CD tool you use.

How to manage patching on multiple AWS accounts with different schedules

I'm looking for the best way to manage patching Linux systems across AWS accounts with the following things to consider:
Separate schedules to roll patches through Dev, QA, Staging and Prod sequentially
Production patches to be released on approval, not automatic
No newer patches can be deployed to Production than what was already deployed to lower environments (as new patches come out periodically throughout the month)
We have started by caching all patches in all environments on the first Sunday of every month. The goal there was to then install patches from cache. This helps prevent un-vetted patches being installed in prod.
Most, but not all, instances are managed by OpsWorks, though across numerous OpsWorks stacks. We have some other instances managed by Chef Server, and still others are not managed at all; they are just simple EC2 instances created from the EC2 console. This means that using recipes requires kicking off approved patches on a stack-by-stack or instance-by-instance basis, which is not optimal.
More recently, we have looked at the new features of SSM using a central AWS account to manage instances. However, this causes problems with some applications because the AssumeRole for SSM adds credentials to the .aws/config file that interferes with other tasks we need to run.
We have considered other tools, such as Ansible, but we would like to explore staying within the toolset we currently have which is largely OpsWorks and Chef Server. I'm looking for ideas that are more on a higher level, an architecture of how one would approach this scenario.
Thanks for any thoughts or ideas.
This sounds like one of the exact scenarios RunCommand was designed for.
You can create multiple groups of servers with different schedules based on tags. More importantly, you don't need to rely on secrets/keys being deployed anywhere (a sketch follows below).
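A hedged sketch of what that looks like with the AWS CLI, assuming the instances are registered with SSM and carry a hypothetical PatchGroup tag per environment; in practice you would hang these commands off per-environment maintenance windows (and a manual approval step for Prod) rather than run them by hand:

# Scan Dev instances against the patch baseline without changing anything
aws ssm send-command \
  --document-name "AWS-RunPatchBaseline" \
  --targets "Key=tag:PatchGroup,Values=Dev" \
  --parameters 'Operation=Scan'

# Install approved patches on Dev; rerun with Values=QA, Staging, Prod as each
# environment's schedule (or approval) comes up
aws ssm send-command \
  --document-name "AWS-RunPatchBaseline" \
  --targets "Key=tag:PatchGroup,Values=Dev" \
  --parameters 'Operation=Install'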

Connection Strings in the Code and Continuous Deployment

I am trying to set up connection strings in my Azure resource deployment project, which will be triggered via Octopus. We would like to have a clean system with every deployment and want to accomplish continuous deployment. The projects should cope with changes, and the system should be ready for blue/green deployments.
I am thinking about:
I. Using a load balancer, with configurations pointing at the load balancer via the connection strings.
II. Having a different database name for every deployment so that every resource is unique each time.
What are the challenges I would face? Do I have to have a tool like ZooKeeper?
Project structure: by TYPE, using an Azure Resource project.
Parameters & variables: aiming to keep them under 20.
I would advise the use of Jenkins in this scenario to achieve continuous deployment. I have implemented this successfully.
There is a feature called poll SCM. This will cause a build when you have a new commit.
After the build, you can use the tool octo.exe to create a release and deploy that release to your specific environment.
I use the following code to create and deploy the release:
Create Release:
octo.exe create-release --project=%Environment% --server %Server% --apiKey %APIKEY% --version %version% --packageversion %version%
My variables are defined at a higher level to provide loose coupling:
%Environment%: Octopus Environment
%Server%: Octopus Server
%APIKEY%: API Key
%Timeout%: Timeout expiry
Deploy Release:
octo.exe deploy-release --project %Environment% --releaseNumber %version% --deployto %Environment% --server %Server% --apiKey %APIKEY% --progress --deploymenttimeout %Timeout%
Jenkins is very flexible and helps a lot in Continuous Deployment. You can have different jobs:
one for green and one for blue
After the green job completes, you can trigger the blue one. As for database changes, you can use PowerShell together with sqlcmd to alter your database and/or execute scripts (a sketch follows below).
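As a hedged sketch of that database step, in the same style as the octo.exe calls above, a Jenkins batch step could invoke sqlcmd against the green database before switching traffic; the server, database and script names are placeholders:

rem Apply schema changes to the green database; -b makes sqlcmd return a
rem non-zero exit code on failure so the Jenkins build stops
sqlcmd -S %DbServer% -d %GreenDatabase% -U %DbUser% -P %DbPassword% -i migrate.sql -b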
In your scenario you can create deployment slots and use the Auto Swap feature. This will reduce downtime and risk. See this for more details:
Configure Auto Swap
Also, for the first question, you can create two databases, one for production and one for staging. You can use sticky (slot) settings to pin a DB connection string to a specific slot (see the sketch below). See this: Configuration for deployment slots
Additionally, you can also warm up your slot so that it is ready to serve requests before it gets swapped.
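If you go the slot route, here is a hedged Azure CLI sketch of the sticky connection string plus the swap; the resource group, app, slot and connection string values are placeholders, and the same can be done in the portal as the linked docs describe:

# Mark the connection string as a slot (sticky) setting so the staging slot
# keeps pointing at its own database even after a swap
az webapp config connection-string set \
  --resource-group my-rg --name my-app --slot staging \
  --connection-string-type SQLAzure \
  --slot-settings MyDb="<staging connection string>"

# Once the staging slot is warmed up and verified, swap it into production
az webapp deployment slot swap \
  --resource-group my-rg --name my-app \
  --slot staging --target-slot production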
HTH!
