I am trying to set up connection strings in my Azure resource deployment project, which will be triggered via Octopus. We would like to have a clean system with every deployment and want to accomplish continuous deployment. The projects should cope with changes, and the system should be ready for blue/green deployments.
I am thinking about:
I. Using a load balancer, with the connection strings in the configuration pointing to the load balancer.
II. Using a different database name for every deployment, so that every resource is unique each time.
What are the challenges I would face? Do I need a tool like ZooKeeper?
The project structure is organized by type, using an Azure Resource Group project.
Parameters and variables: I am trying to keep them to fewer than 20.
I would advise the use of Jenkins in this scenario to achieve continuous deployment. I have implemented this successfully.
There is a feature called Poll SCM, which triggers a build whenever there is a new commit.
After the build, you can use the tool octo.exe to create a release and deploy that release to your specific environment.
I use the following code to create and deploy the release:
Create Release:
octo.exe create-release --project=%Environment% --server %Server% --apiKey %APIKEY% --version %version% --packageversion %version%
My variables are defined at a higher level to provide loose coupling:
%Environment%: Octopus Environment
%Server%: Octopus Server
%APIKEY%: API Key
%Timeout%: Timeout expiry
Deploy Release:
octo.exe deploy-release --project %Environment% --releaseNumber %version% --deployto %Environment% --server %Server% --apiKey %APIKEY% --progress --deploymenttimeout %Timeout%
Jenkins is very flexible and helps a lot in Continuous Deployment. You can have different jobs:
one for green and one for blue
After green completes, you can trigger the blue job. As for database changes, you can use PowerShell together with sqlcmd to alter your database and/or execute scripts.
In your scenario you can create deployment slots and use the Auto Swap feature. This will reduce downtime and risk. See this for more details:
Configure Auto Swap
Also, for the first question, you can create two databases, one for production and one for staging. You can use sticky (slot) settings to pin a database to a specific slot. See this: Configuration for deployment slots
Additionally, you can also warm up your slot so that it is ready to serve requests before it gets swapped.
HTH!
Related
Currently, I am working on a Django-based project which is deployed to Azure App Service. There were two options for deploying to App Service: one via Azure DevOps and another via the VS Code plugin. Both scenarios work fine, but strangely, deploying to App Service via DevOps is slower than the VS Code deployment. Via DevOps it usually takes around 17-18 minutes, whereas via VS Code it takes less than 14 minutes.
Is there any reason behind this?
Assuming you're using Microsoft hosted build agents, the following statements are true:
With Microsoft-hosted agents, maintenance and upgrades are taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded after one use.
and
Parallel jobs represents the number of jobs you can run at the same time in your organization. If your organization has a single parallel job, you can run a single job at a time in your organization, with any additional concurrent jobs being queued until the first job completes. To run two jobs at the same time, you need two parallel jobs.
Microsoft provides a free tier of service by default in every organization that includes at least one parallel job. Depending on the number of concurrent pipelines you need to run, you might need more parallel jobs to use multiple Microsoft-hosted or self-hosted agents at the same time.
This first statement might cause an Azure Pipeline to be slower because it does not have any cached information about your project. If you're only talking about deploying, the pipeline first needs to download (and extract?) an artifact to be able to deploy it. If you're also building, it might need to bring in the entire source code and/or external packages before being able to build.
The second statement might make it slower because there might be less parallelization possible than on the local machine.
Next to these two possible reasons, the agents will most probably not have the specs of your development machine, causing them to run tasks slower than they can on your local machine.
You could look into hosting your own agents to eliminate these possible reasons.
Do self-hosted agents have any performance advantages over Microsoft-hosted agents?
In many cases, yes. Specifically:
If you use a self-hosted agent, you can run incremental builds. For example, if you define a pipeline that does not clean the repo and does not perform a clean build, your builds will typically run faster. When you use a Microsoft-hosted agent, you don't get these benefits because the agent is destroyed after the build or release pipeline is completed.
A Microsoft-hosted agent can take longer to start your build. While it often takes just a few seconds for your job to be assigned to a Microsoft-hosted agent, it can sometimes take several minutes for an agent to be allocated depending on the load on our system.
More information: Azure Pipelines Agents
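If switching to self-hosted agents is not an option, pipeline caching can recover some of the time a fresh hosted VM loses re-downloading dependencies. Here is a minimal sketch for a Python/Django build, assuming a requirements.txt at the repository root; the variable value and cache key are placeholders, not taken from your pipeline:
# Sketch: cache pip downloads between runs on Microsoft-hosted agents
variables:
  PIP_CACHE_DIR: $(Pipeline.Workspace)/.pip        # pip downloads packages into this folder
steps:
- task: Cache@2
  displayName: Cache pip packages
  inputs:
    key: 'python | "$(Agent.OS)" | requirements.txt'   # key changes whenever requirements.txt changes
    restoreKeys: |
      python | "$(Agent.OS)"
    path: $(PIP_CACHE_DIR)
- script: pip install --cache-dir $(PIP_CACHE_DIR) -r requirements.txt
  displayName: Install dependencies
This does not remove the agent-allocation wait, but it shortens the dependency-restore part of the job.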
When you deploy via a DevOps pipeline, you go through a lot more steps. See below:
Process the pipeline --> Request agents (wait for an available agent to be allocated to run the jobs) --> Download all the tasks needed to run the job --> Run each step in the job (download source code, restore, build, publish, deploy, etc.).
If you deploy the project in a release pipeline, the above process needs to be repeated again in the release pipeline.
You can check the document Pipeline run sequence for more information.
However, when you deploy via the VS Code plugin, your project gets restored and built on your local machine and is then deployed to the Azure web app directly from there. So deploying via the VS Code plugin is faster, since far fewer steps are needed.
I am working on a CI/CD pipeline on AWS. Given the requirements, I have to use GitLab as the repository and blue/green deployment as the deployment method for ECS Fargate. I would like to use CodeDeploy (preset in the CloudFormation template) and trigger it on each commit pushed to GitLab. I cannot use CodePipeline in my region, so CodePipeline does not work for me.
I have read a lot of docs and webpages related to ECS Fargate and blue/green deployment, but it seems not much of the information helps. Does anyone have related experience?
If your goal is zero downtime, ECS already provides that by default, although not in what I'd call a blue/green deployment but rather a rolling upgrade. You can control the percentage of healthy instances, ensuring no downtime, with ECS draining connections from the old tasks and provisioning new tasks with the new version.
Your application must be able to handle this 'duality' in versions, e.g. on the data layer, UX etc.
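To illustrate the rolling-upgrade knobs, here is a sketch of the relevant snippet from the Resources section of a CloudFormation template; the logical names and the referenced cluster/task definition are placeholders, not taken from your template:
# Sketch: rolling-update behaviour of a plain ECS service (no CodeDeploy involved)
Resources:
  MyService:                                  # hypothetical logical name
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref MyCluster                 # assumes a cluster defined elsewhere in the template
      TaskDefinition: !Ref MyTaskDefinition   # assumes a task definition defined elsewhere
      DesiredCount: 2
      DeploymentConfiguration:
        MinimumHealthyPercent: 100            # never drop below the desired count during a deploy
        MaximumPercent: 200                   # allow up to double while new tasks start and old ones drain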
If Blue/Green is an essential requirement, you'll have to leverage CodeDeploy and ALB with ECS. Without going into implementation details, here's the highlight of it:
You have two sets of Task Definitions and Target Groups (tied to one ALB).
CodeDeploy deploys the new task definition, which is tied to the green Target Group, leaving blue as is.
Test your green deployment by configuring a test listener to the new target group.
When testing is complete, switch all/incremental traffic from blue to green (ALB rules/weighted targets)
Repeat the same process on the next update, except this time you'll be going from green back to blue.
Parts of what I've described are handled by CodeDeploy, but hopefully this gives you an idea of the solution architecture and hence how to automate it. See: ECS B/G.
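For reference, CodeDeploy drives the ECS swap from an AppSpec file. A minimal sketch, where the container name and port are placeholders for whatever your task definition actually uses:
# appspec.yaml (sketch): tells CodeDeploy which task definition, container and port
# to attach to the target group it shifts traffic to
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "<TASK_DEFINITION>"   # substituted with the new task definition ARN at deploy time
        LoadBalancerInfo:
          ContainerName: "web"                # placeholder container name
          ContainerPort: 8080                 # placeholder container port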
I have set up a PR Pipeline in Azure. As part of this pipeline I run a number of regression tests. These run against a regression test database - we have to clear out the database at the start of the tests so we are certain what data is in there and what should come out of it.
This is all working fine until the pipeline runs multiple times in parallel - then the regression database is being written to multiple times and the data returned from it is not what is expected.
How can I stop a pipeline running in parallel - I've tried Google but can't find exactly what I'm looking for.
If the pipeline is running, the next build should wait (not for all pipelines - I want to set this on a single pipeline); is this possible?
Depending on your exact use case, you may be able to control this with the right trigger configuration.
In my case, I had a pipeline scheduled to kick off every time a pull request was merged to the main branch in Azure. The pipeline deployed the code to a server and kicked off a suite of tests. Sometimes, when two merges occurred just minutes apart, the builds would fail because they both used a shared resource that required synchronisation.
I fixed it by Batching CI Runs
I changed my basic config
trigger:
- main
to use the more verbose syntax allowing me to turn batching on
trigger:
  batch: true
  branches:
    include:
    - main
With this in place, a new build will only be triggered for main once the previous one has finished, no matter how many commits are added to the branch in the meantime.
That way, I avoid having too many builds being kicked off and I can still use multiple agents where needed.
One way to solve this is to model your test regression database as an "environment" in your pipeline, then use the "Exclusive Lock" check to prevent concurrent "deployment" to that "environment".
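A minimal sketch of that approach, assuming an environment named regression-db with the Exclusive Lock check added to it in the UI; the job name and test script are placeholders:
jobs:
- deployment: RegressionTests
  displayName: Run regression tests
  environment: regression-db            # the Exclusive Lock check on this environment serialises these jobs
  strategy:
    runOnce:
      deploy:
        steps:
        - script: ./run-regression-tests.sh   # placeholder for your actual test steps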
Unfortunately this approach comes with several disadvantages inherent to "environments" in YAML pipelines:
you must set up the check manually in the UI; it's not controlled in source code.
it will only prevent that particular deployment job from running concurrently, not an entire pipeline.
the fake "environment" you create will appear alongside all other environments, cluttering the environment view if you happen to use environments for "real" deployments. This is made worse by that view being one big list of all environments, with no grouping or hierarchy.
Overall the initial YAML reimplementation of Azure Pipelines mostly ignored the concepts of releases, deployments, environments. A few piecemeal and low-effort aspects have subsequently been patched in, but without any real overarching design or apparent plan to get to parity with the old release pipelines.
You can use "Trigger Azure DevOps Pipeline" extension by Maik van der Gaag.
It needs to add to you DevOps and configure end of the main pipeline and point to your test pipeline.
Can find more details on Maik's blog.
According to your description, you could use your own self-hosted agent.
Simply deploy your own self-hosted agent.
Just make sure your self-hosted agent environment is the same as your local development environment.
In this situation, since your agent pool only has one available build agent, when multiple builds are triggered only one build will run at a time. The others will stay in the queue, in order, waiting for the agent; the next build will not run until the prior build has finished.
For the other pipelines, just keep using the hosted agent pool.
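A minimal sketch of pointing only this pipeline at such a pool; the pool name is a placeholder, and the other pipelines keep their Microsoft-hosted pool:
# azure-pipelines.yml of the pipeline that must not run in parallel
pool:
  name: RegressionAgents    # hypothetical self-hosted pool containing exactly one agent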
I have created a kubernetes cluster and I successfully deployed my spring boot application + nginx reverse proxy for testing purposes.
Now I'm moving to production, the only difference between test and prod is the connection to the database and the nginx basic auth (of course scaling parameters are also different).
In this case, considering I'm using a cloud provider infrastructure, what are the best practices for kubernetes?
Should I create a new cluster only for prod? Or I could use the same cluster and use labels to identify test and production machines?
For now, having two clusters seems a waste to me: the provider assures me that I have the hardware capacity, and I can set different request/limit/replication parameters according to the environment. Also, for now, I just have two images to deploy per environment (even though for production I will opt for horizontal scaling of 2).
I would absolutely 100% set up a separate test cluster. (...assuming a setup large enough where Kubernetes makes sense; I might consider an easier deployment system for a simple three-tier app like what you're describing.)
At a financial level this shouldn't make much difference to you. You'll need some amount of hardware to run the test copy of your application, and your organization will be paying for it whether it's in the same cluster or a different cluster. The additional cost will only be the cost of the management plane, which shouldn't be excessive.
At an operational level, there are all kinds of things that can go wrong during a deployment, and in particular there are cases where one Kubernetes resource can "step on" another. Deploying to a physically separate cluster helps minimize the risk of accidents in production; you won't accidentally overwrite the prod deployment's ConfigMap holding its database configuration, for example. If you have some sort of crash reporting or alerting set up, "it came from the test cluster" is a very clear check you can use to not wake up the DevOps team. It also gives you a place to try out possibly risky configuration changes: if you run your update script once in the test cluster and it passes then you can re-run it in prod, but if the first time you run it is in prod and it fails, that's an outage.
Depending on what you're using for a CI system, the other thing you can set up is fully automated deploys to the test environment. If a commit passes its own unit tests, you can have the test environment always running current master and run integration tests there. If and only if those integration tests pass, you can promote to the production environment.
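With Azure Pipelines, for example, that promotion flow can be sketched as two stages, where production only runs if the test stage, including its integration tests, succeeded. The stage, environment, and script names below are placeholders:
stages:
- stage: DeployTest
  jobs:
  - deployment: Test
    environment: test-cluster         # hypothetical environment mapped to the test cluster
    strategy:
      runOnce:
        deploy:
          steps:
          - script: ./deploy.sh test && ./run-integration-tests.sh   # placeholder deploy and integration tests
- stage: DeployProd
  dependsOn: DeployTest               # runs only when the test stage, and therefore its tests, succeed
  jobs:
  - deployment: Prod
    environment: prod-cluster         # hypothetical environment mapped to the prod cluster
    strategy:
      runOnce:
        deploy:
          steps:
          - script: ./deploy.sh prod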
It is true that it is definitely better practice to use a different cluster, as in your test cluster you could do something wrong (especially resource-wise) and take down your prod environment. But if you can't afford it, and if you feel confident with k8s, you can put your prod environment in a different namespace.
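A minimal sketch of that namespace separation, with a ResourceQuota to limit the resource-wise damage the test namespace can do; the names and limits are arbitrary placeholders:
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
  namespace: test
spec:
  hard:
    requests.cpu: "2"        # cap what the test namespace can request...
    requests.memory: 4Gi
    limits.cpu: "4"          # ...and what it can burst to, so prod keeps its headroom
    limits.memory: 8Gi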
I don't know about Azure, but on GKE you can scale the number of nodes down to zero. If that is possible on Azure, maybe you can scale the test environment's nodes to zero whenever you are not using it and still keep two clusters.
It's better to use different clusters for production and dev/testing. Please refer here for best practices.
I'm having a hard time with the application's release process. The app is developed in .NET Core and uses 'appsettings.json', which holds the connection string to a database. The app should be deployed to a Kubernetes cluster in Azure. We have build and release processes in Azure DevOps, so the process is automated, but the problem comes from the need to deploy the same app to multiple environments (DEV/QA/UAT), where every environment uses its own database. When we build the Docker image, the 'appsettings.json' that holds the connection string is baked into the image. The next step pushes the image to a container repository, which the Release process then uses to deploy the image to a cluster (the steps are classic).
Replacing the connection parameters with variables in the build step is not a big deal. However, it is the Release process that controls the deployment to multiple environments. I don't see how I can substitute the database connection string in the Release pipeline... or rather, how to deploy to three different environments with the database connection string properly set for each of them.
Please suggest how this can be achieved. The only option I came up with is having a separate build pipeline for every environment, which doesn't look pretty. The entire idea behind Release is that you can manage the approval process before rolling out changes to the next environment.
I decided to proceed with Kubernetes secrets. I found a good article about this issue here: https://strive2code.net/post/2018/12/07/devops-friday-build-a-deployment-pipeline-using-k8s-secrets
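Roughly, the pattern is to keep the connection string out of the image, store it as a Secret per environment/namespace, and surface it to ASP.NET Core as an environment variable (a double underscore in the variable name maps to the colon in a configuration key, so it overrides appsettings.json). A minimal sketch, where the names, image, and connection string are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: app-db                              # hypothetical secret name, created once per environment
type: Opaque
stringData:
  ConnectionStrings__Default: "Server=tcp:dev-sql.example.com;Database=app;User ID=app;Password=changeme;"   # placeholder
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: myregistry.azurecr.io/app:latest   # placeholder image
        env:
        # ASP.NET Core reads this as ConnectionStrings:Default, overriding the value baked into appsettings.json
        - name: ConnectionStrings__Default
          valueFrom:
            secretKeyRef:
              name: app-db
              key: ConnectionStrings__Default
The same manifests, with a different secret value in each environment's namespace, let one image serve DEV/QA/UAT.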