I am working on building a CI/CD pipeline for a .NET application. I am finished with the CI part and everything is working as expected: the pipeline produces an artifact containing the website files.
Instead of deploying this application to a single VM, I would like to deploy it to a virtual machine scale set (VMSS).
I understand this is not a direct explicit question, but I am really trying to understand how people do this. I couldn't find any accurate documentation on it.
From what I've read, there are two different ways to do this:
Build an immutable image and push it to the VMSS using the built-in tasks of the release pipelines (a sketch follows this list)
Use extension scripts that will push the changes to each VM, as described here
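For the first option, the release-pipeline side appears to boil down to something like this. This is a sketch only, using the built-in Azure VM scale set deployment task; the service connection, scale set name and image URL below are placeholders, not real resources:

```yaml
# Sketch: point the scale set at a new immutable image built earlier in the pipeline.
# All names (service connection, VMSS, image URL) are placeholders.
steps:
  - task: AzureVmssDeployment@0
    inputs:
      azureSubscription: 'my-azure-service-connection'   # assumed service connection name
      action: 'Update image'
      vmssName: 'my-vmss'                                # assumed scale set name
      vmssOsType: 'Windows'
      imageUrl: 'https://mystorage.blob.core.windows.net/images/website.vhd'  # image produced earlier, e.g. with Packer
```

The image itself would be produced earlier in the pipeline (for example with Packer), and the scale set then rolls its instances onto it according to its upgrade policy.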
Related
I'm new-ish to Azure DevOps, so I've missed how it got to where it is. By that I mean that I've seen two different approaches for deployment to environments and I'm not sure which superseded which:
Using a Release Pipeline and defined Deployment Groups to deploy across stages (environments) - see here
Using a Deployment Job in a Pipeline, then using a release pipeline to orchestrate pushing it to different environments - see here
It's interesting that the MS docs refer to the first approach as being classic, while the latter is not.
I'm currently using Deployment Groups to define the App Servers I deploy to for each environment - then each stage in my Release pipeline targets a different deployment group (environment). This seems the most fluent and natural of the solutions. However, it niggles me that the Environments I set up in the Environments section still maintain that they have never been deployed to - but the deployment groups have recorded the deployments as I expect. Also, the environments allow me to set useful stuff like "business hours" to wake the environment machines.
I also looked at and tried out some of the approach in the second link I posted - however, this just didn't seem intuitive to me - and I can't find much in the DevOps docs to support this approach. I can see the benefits in that you can store your deployment pipeline as code in your repo, and that you have finer control over the whole process - but I couldn't get variables from the library to be used in any of the replace-variables steps, or really understand where the release pipelines fit in.
So, I guess I'm after an inkling of what "best practice" is in this fairly straightforward scenario. I'm wondering if it's a blend of the two, but to be honest - I'm a bit lost.
Release pipelines and deployment groups have been around for longer than Azure DevOps has been named Azure DevOps. The YAML releases are rather recent. It isn't ever spelled out explicitly, but in my mind it comes down to how you plan on delivering your product.
If you are doing Continuous delivery (choosing when to release, maybe daily, weekly, or quarterly) then I think you must use release pipelines. You might also choose this if you have multiple environments that aren't in the path to production but that you'd still want to deploy to.
If you are doing Continuous deployment (every push that passes tests goes to production without any real human intervention), then I imagine you'd choose to use the YAML stages. This is kind of spelled out in your second link as the approach for deploying with "release flow", which is Microsoft's approach for delivering changes for Azure DevOps.
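For illustration, here is a minimal multi-stage YAML sketch of the second approach, with deployment jobs bound to Environments. The environment names, pool and placeholder deploy steps below are assumptions, not your real setup:

```yaml
# Sketch: build once, then promote the same artifact through Environments with deployment jobs.
trigger:
  - main

stages:
  - stage: Build
    jobs:
      - job: Build
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: echo "build and publish the application here"
          - publish: $(Build.ArtifactStagingDirectory)
            artifact: drop

  - stage: DeployTest
    dependsOn: Build
    jobs:
      - deployment: DeployWeb
        environment: 'test'            # shows up under Environments and records deployment history
        pool:
          vmImage: 'ubuntu-latest'
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: drop
                - script: echo "deploy the drop artifact to the test environment"

  - stage: DeployProd
    dependsOn: DeployTest
    jobs:
      - deployment: DeployWeb
        environment: 'production'      # approvals/checks configured on this environment act as the gate
        pool:
          vmImage: 'ubuntu-latest'
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: drop
                - script: echo "deploy the drop artifact to production"
```

Approvals and checks configured on the production Environment then give you a manual gate if you want something closer to continuous delivery rather than continuous deployment.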
I have a project that consists of an Azure webapp, a PostgreSQL on Azure, and multiple Azure functions for background ETL workflows. I also have a local Python package that I need to access from both the webapp and the Azure functions.
How can I structure configuration and script deployment for those resources from a single git repo?
Any suggestions or pointers to good examples or tutorials would be very appreciated.
All the Azure tutorials that I've seen are only for small and simple projects.
For now, I've hand-written an admin.py script that does e.g. the webapp and function deployments by creating a Python package, creating ZIP files for each resource and doing ZIP deployments. This is getting messy, and now that I want QA and PROD versions and need to pass secrets so that the DB is reachable, it's getting more complex. Is there either a nice way to structure this packaging / deployment, or a tool to help with it? For me, putting everything in Kubernetes is not the solution; for one thing, the DB already exists. Also, Azure DevOps is not an option, we are using GitLab CI, so eventually I want a solution that can run on CI/CD there.
Not sure if this will be a complete answer, but here we go.
Instead of using a hand-written admin.py script, try using a YAML pipeline flow. For GitLab, there is https://docs.gitlab.com/ee/ci/yaml/ that you can use to get started. From what you've indicated, I would recommend having several jobs in your YAML pipeline that build and package your web and function apps. For deployment, you can make use of environments. Have a look at https://docs.gitlab.com/ee/ci/multi_project_pipelines.html as well, which illustrates how you can create downstream pipelines.
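As a starting point, a rough .gitlab-ci.yml sketch of the build/package side could look like this. The directory names shared_package, webapp and functions are assumptions based on your description:

```yaml
# Sketch: one job per deployable, each producing a ZIP artifact.
# Directory names and paths are placeholders.
stages:
  - build
  - deploy

package_webapp:
  stage: build
  image: python:3.11
  script:
    - apt-get update -qq && apt-get install -y -qq zip
    - pip install build
    - python -m build --wheel --outdir webapp/wheels shared_package   # bundle the local package as a wheel
    - cd webapp && zip -r ../webapp.zip .
  artifacts:
    paths:
      - webapp.zip

package_functions:
  stage: build
  image: python:3.11
  script:
    - apt-get update -qq && apt-get install -y -qq zip
    - pip install build
    - python -m build --wheel --outdir functions/wheels shared_package
    - cd functions && zip -r ../functions.zip .
  artifacts:
    paths:
      - functions.zip
```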
From a deployment standpoint, the current integration I've found between Azure and GitLab leaves me with two recommendations:
Leverage the script keyword in the YAML to keep zipping your artifacts, then use the Azure CLI (I would assume you can install the tools during the pipeline) to do a zip deploy (see the sketch after this list).
Keep your code inside the GitLab repo and utilize Azure Pipelines to handle the CI/CD for you.
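A sketch of the first recommendation, deploying the ZIP artifacts with the Azure CLI from a GitLab job. The resource group, app names and AZ_* variables are placeholders; the secrets would live in protected GitLab CI/CD variables:

```yaml
# Sketch: zip deploy to QA with the Azure CLI. All names and variables are placeholders.
deploy_qa:
  stage: deploy
  image: mcr.microsoft.com/azure-cli:latest
  environment: qa
  needs: ["package_webapp", "package_functions"]
  script:
    - az login --service-principal -u "$AZ_SP_APP_ID" -p "$AZ_SP_SECRET" --tenant "$AZ_TENANT_ID"
    - az webapp deployment source config-zip -g my-qa-rg -n my-qa-webapp --src webapp.zip
    - az functionapp deployment source config-zip -g my-qa-rg -n my-qa-functions --src functions.zip
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```

A deploy_prod twin of this job, pointed at the PROD resources and marked when: manual, would give you the QA/PROD split with a manual gate.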
I hope you find this helpful.
I'm trying Azure App Services. I've set up a build pipeline in Azure DevOps which builds and pushes my image to Docker Hub and then publishes docker-compose.yml as an artifact.
My release pipeline takes the docker-compose.yml and feeds it to the "Azure Web App for Container" task, which succeeds. But the bot goes down after the deployment and doesn't come back up unless I access http://<myappname>.azurewebsites.net; then it starts and runs the latest pushed version. So everything seems to work except the "restart", or docker-compose up.
I've been reading that I want to add a WebJob to my app service, but since I am using a Linux host I cannot seem to configure this. I've tried adding a curl task after deployment, but this probably executes too early.
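A retry loop would make that curl warm-up less sensitive to timing; a rough sketch, using the placeholder URL above, that could run as a script step after the deploy task (or as the inline script of a Bash task in a classic release):

```yaml
# Sketch: poll the site after deployment until it responds, so the container warms up.
- bash: |
    for i in $(seq 1 30); do
      if curl -sf --max-time 10 "http://<myappname>.azurewebsites.net" > /dev/null; then
        echo "Site responded on attempt $i"
        exit 0
      fi
      echo "Attempt $i failed, retrying in 20s..."
      sleep 20
    done
    echo "Site never came up"
    exit 1
  displayName: 'Warm up the app after deployment'
```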
Any ideas on how I would get to solve this last piece of the puzzle to have a simple CI/CD environment?
Currently there is zero out of the box support for hosting WebJobs in a Linux hosted app service. I've heard there's a hacky way of doing it (I'll have to find the post) but since it's not supported out of the gate, there's no guarantee it'll work.
I created a simple .NET Core console app. This app's repository is an Azure DevOps one. I have also created an Ubuntu VM which I can successfully connect to, to receive the deployment.
I have managed to deploy my app from my local computer by cloning, building and pushing it (via the scp command).
Now I would like to do this using an Azure DevOps pipeline.
I managed to build the app, but now I can't seem to find help regarding how to execute the scp (or an alternative) command...
Edit1:
Ok, this is turning out to be an order of magnitude harder than I expected. I'm giving up for now. I've been trying to figure this out for almost two work-days. I can't believe that a task that takes 4-6 commands in a script on my local machine should require this much effort in a DevOps environment...
You can configure a deploy agent on your VM and use release management to copy and configure your applications:
Deploy an agent on Linux
Define your multi-stage continuous deployment (CD) pipeline
Have a look at the Copy files over SSH pipeline task which supports SCP.
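For the SCP step specifically, a minimal YAML sketch might look like the following; the SSH service connection name and target path are placeholders you would create and choose yourself:

```yaml
# Sketch: publish the console app, then copy the output to the Ubuntu VM over SSH/SCP.
steps:
  - task: DotNetCoreCLI@2
    inputs:
      command: 'publish'
      publishWebProjects: false          # console app, not a web project
      arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)/app'
      zipAfterPublish: false

  - task: CopyFilesOverSSH@0
    inputs:
      sshEndpoint: 'my-ubuntu-vm'                       # assumed SSH service connection name
      sourceFolder: '$(Build.ArtifactStagingDirectory)/app'
      contents: '**'
      targetFolder: '/home/azureuser/myapp'             # placeholder target path on the VM
      cleanTargetFolder: false
```

The sshEndpoint refers to an SSH service connection that holds the VM's host, user and key or password.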
I'm trying to create a CI/CD pipeline for an example prototype. Thus, I've started simple enough to test my infrastructure - I'm using an almost untouched boilerplate of ASP.NET Framework Web App (targeting 4.6.1). The steps I've completed are:
App is deployed to an Azure App Service.
Its version control is hosted with Azure DevOps.
A build pipeline with the following tasks has been created, set up and tested to confirm it executes (the tasks and their order come from a template):
Azure Deployment Options/Settings are bound to the DevOps repository, thus builds are also displayed in Azure and should be deployed there if successful.
The Build Pipeline is bound to the correct repository inside DevOps
Builds get triggered by pushing to the master branch
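(For clarity, the trigger amounts to the standard continuous-integration trigger on the master branch, which in YAML form would be roughly the following; in the classic designer it is the equivalent setting on the Triggers tab.)

```yaml
# The build trigger: any push to master starts the pipeline.
trigger:
  branches:
    include:
      - master
```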
The next step was to verify that a broken build (because of failed tests or any other reason) is not deployed to production in Azure. I've created a failing test for this purpose.
And this is where I'm left stumped. Builds do fail as expected and the "App Service Deploy" task is skipped, because the build tasks before it have a failure:
And yet those broken builds still get deployed to Azure and to production without even waiting for the pipeline to finish. I'm verifying that a change has actually happened with small visual updates.
The build starts and finishes in Azure as soon as a push occurs, before the pipeline in DevOps has fully run (or even started, if finding an agent takes longer):
(DevOps still not finished):
What am I doing wrong here? Am I understanding the pipeline wrong? Have I missed a set-up step somewhere? I'm lost.
Edit: As asked by Josh, here's my trigger as well:
Edit 2.2 A bit more clarification of my deployment options in my App Service in Azure, related to Daniel's comments:
This turned out to be the issue.
This is the only option I'm allowed to choose when tying my deployment to DevOps. I'm not allowed to choose a pipeline, just a project and a branch. In a tutorial I've compared against, the settings are the same (at least in this menu), but the deployment does not get triggered from the repository; it waits for the pipeline to reach the appropriate step first, which is why I hadn't considered it to be the culprit. Is there some additional set-up I've missed that would indicate it must wait for a pipeline, rather than fire straight away on branch changes?
The deployment you have set up in the Azure portal is tied to source control only, not your build definition. So every time you commit to source control, two things happen that are totally disconnected from each other and start in parallel since they listen to the same repository for changes:
A build fires off in the pipeline.
The Azure website is updated with the version you just pushed to source control, since its deployment options are bound to it.
Remove #2 and your problem will go away. You set the App Service you want updated in the pipeline; you don't need an additional hook in the App Service itself.
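For completeness, here is a rough sketch of what #1 on its own can look like once the portal hook is removed, with the App Service targeted only from the pipeline. It is shown with the AzureWebApp task; the App Service Deploy task you already have behaves the same way, and the service connection and app name below are placeholders:

```yaml
# Sketch: build, test, then deploy from the pipeline only.
steps:
  # (NuGet restore omitted for brevity)
  - task: VSBuild@1
    inputs:
      solution: '**/*.sln'
      configuration: 'Release'
      msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:PackageLocation="$(Build.ArtifactStagingDirectory)"'

  - task: VSTest@2
    inputs:
      testAssemblyVer2: |
        **\*Tests*.dll
        !**\obj\**

  # Runs only if the build and tests succeeded, so a failing test blocks the deployment.
  - task: AzureWebApp@1
    inputs:
      azureSubscription: 'my-service-connection'   # assumed service connection name
      appType: 'webApp'
      appName: 'my-app-service'                    # assumed App Service name
      package: '$(Build.ArtifactStagingDirectory)/**/*.zip'
```

With that in place, the only thing updating the App Service is the pipeline, and a skipped deploy task on a broken build means nothing reaches production.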