I have a single codebase that is deployed to ~30 website instances on 3 virtual machines. Once code is tested and signed off on, I trigger the deployment manually with pre-deployment approvals.
I was wondering if there is a way to have a single Production stage with multiple websites associated with it, rather than having to create a separate stage for each customer, such as prod_client1, prod_client2, etc.
I googled this topic and was unable to find anything on it. I can appreciate the granularity of per-client deployments, but the redundancy of nearly identical stages would be frustrating. Guidance or best practices would be greatly appreciated. Thanks!
if there is a way to have a Stage (Production) with multiple websites associated to them
For this, you could add multiple deploy tasks to the stage's agent job, selecting the specific deploy tasks your scenario needs. You could also add a PowerShell task to the agent job and deploy the codebase to multiple websites from a single script (see the sketch below).
That said, multi-stage deployment is recommended when you want to deploy to multiple websites; the benefits of doing so are mentioned in Dejulia489's comment.
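As a rough illustration of the PowerShell option: a minimal sketch that syncs one build artifact to several IIS websites over Web Deploy. The package path, server names, site names, and credential variables are all hypothetical placeholders.

    # Sketch: push one build artifact to several IIS websites with Web Deploy.
    # Package path, site list, and credential variables are hypothetical.
    $packagePath = "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\drop\MyApp.zip"
    $msdeploy    = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
    $sites = @(
        @{ Server = 'vm1.contoso.local'; SiteName = 'prod_client1' },
        @{ Server = 'vm1.contoso.local'; SiteName = 'prod_client2' },
        @{ Server = 'vm2.contoso.local'; SiteName = 'prod_client3' }
    )

    foreach ($site in $sites) {
        & $msdeploy `
            "-verb:sync" `
            "-source:package=$packagePath" `
            "-dest:auto,computerName=https://$($site.Server):8172/msdeploy.axd?site=$($site.SiteName),userName=$env:DEPLOY_USER,password=$env:DEPLOY_PASS,authType=Basic" `
            "-allowUntrusted"
        if ($LASTEXITCODE -ne 0) { throw "Deployment to $($site.SiteName) failed." }
    }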
Related
Does Azure Pipelines allow custom actions like AWS CodePipeline?
I want to create a job worker that will poll Azure Pipelines for job requests for this custom action, execute the job, and return the status result to Azure Pipelines.
Something similar to - https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-create-custom-action.html
Tasks are the building blocks for defining automation in a build or release pipeline in Azure DevOps. There are many built-in tasks to enable fundamental build and deployment scenarios. If the existing tasks don't satisfy your needs, you can always build a custom task. Check Task types & usage for more details.
In addition, Visual Studio Marketplace offers a number of extensions; each of which, when installed to your subscription or collection, extends the task catalog with one or more tasks. Furthermore, you can write your own custom extensions to add tasks to Azure Pipelines.
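To make "build a custom task" concrete: the PowerShell execution handler of a custom task is just a script plus a task.json manifest that declares its inputs. Here is a minimal sketch of the script side using the VstsTaskSdk module that PowerShell tasks ship with; the input name targetPath is a hypothetical example.

    # Sketch of a custom task's PowerShell entry point. A real task also needs
    # a task.json manifest declaring the inputs and pointing at this script.
    # VstsTaskSdk is bundled in the task's ps_modules folder.
    Import-Module "$PSScriptRoot\ps_modules\VstsTaskSdk"
    Trace-VstsEnteringInvocation $MyInvocation
    try {
        # Reads the 'targetPath' input declared in task.json (hypothetical name).
        $targetPath = Get-VstsInput -Name 'targetPath' -Require
        Write-Host "Running custom action against $targetPath"
    }
    finally {
        Trace-VstsExitingInvocation $MyInvocation
    }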
Azure Pipeline Agents
When your pipeline runs, the system begins one or more jobs. An agent is computing infrastructure with installed agent software that runs one job at a time.
You have two options to choose from: Microsoft-hosted agents or self-hosted agents.
An agent that you set up and manage on your own to run jobs is a self-hosted agent. Self-hosted agents give you more control to install dependent software needed for your builds and deployments. Also, machine-level caches and configuration persist from run to run, which can boost speed.
However, before you install a self-hosted agent you might want to see if a Microsoft-hosted agent pool will work for you. In many cases, this is the simplest way to get going.
With Microsoft-hosted agents, maintenance and upgrades are taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded after one use. Microsoft-hosted agents can run jobs directly on the VM or in a container. Azure Pipelines provides a pre-defined agent pool named Azure Pipelines with Microsoft-hosted agents.
You can try it first and see if it works for your build or deployment. If not, you can use a self-hosted agent. Check this doc for more details.
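If you do go the self-hosted route, registering an agent is a one-time configuration step run from the unpacked agent folder. A sketch using the agent's documented unattended options; the organization URL, pool name, and the PAT environment variable are hypothetical placeholders.

    # Sketch: register a self-hosted agent unattended and run it as a service.
    # Organization URL, pool, and the PAT in $env:AZP_TOKEN are placeholders.
    .\config.cmd --unattended `
        --url https://dev.azure.com/fabrikam `
        --auth pat --token $env:AZP_TOKEN `
        --pool Default `
        --agent $env:COMPUTERNAME `
        --runAsService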
I will pull the agent queue from my custom job worker and process the job. Is that possible in Azure Pipelines?
Based on my understanding of CodePipeline and Azure DevOps, I'm afraid that approach doesn't really apply here.
According to the document Create and add a custom action in CodePipeline:
AWS CodePipeline includes a number of actions that help you configure build, test, and deploy resources for your automated release process. If your release process includes activities that are not included in the default actions, such as an internally developed build process or a test suite, you can create a custom action for that purpose and include it in your pipeline.
But in Azure DevOps you do not need to create a job worker that polls for job requests for a custom action. That is because the whole build/release process can be customized, so there is no need for a job worker to support additional custom actions.
Azure DevOps provides many templates when you create a pipeline, and you can edit the pipeline directly to add, remove, or update tasks.
You can even start from a blank pipeline and customize the entire build/release process.
So there is no need to create a job worker for a custom action; just modify your pipeline directly.
I'm new-ish to Azure DevOps, so I've missed how it got to where it is. By that I mean that I've seen two different approaches for deployment to environments and I'm not sure which superseded which:
Using a Release Pipeline and defined Deployment Groups to deploy across stages (environments). See here
Using a Deployment Job in a Pipeline, then using a release pipeline to orchestrate pushing it to different environments. See here
It's interesting that the MS docs refer to the first approach as being classic, while the latter is not.
I'm currently using Deployment Groups to define the App Servers I deploy to for each environment; each stage in my Release pipeline then targets a different deployment group (environment). This seems the most fluent and natural of the solutions. However, it niggles me that the Environments I set up in the Environments section still maintain that they have never been deployed to, while the deployment groups have recorded the deployments as I expect. Also, the environments allow me to set useful stuff like "business hours" to wake the environment machines.
I looked at and tried out some of the approach in the second link I posted; however, it just didn't seem intuitive to me, and I can't find much in the DevOps docs to support this approach. I can see the benefits in that you can store your deployment pipeline as code in your repo, and that you have finer control over the whole process, but I couldn't get variables from the library to be used in any of the replace-variables steps or really understand where the release pipelines fit in.
So, I guess I'm after an inkling of what "best practice" is in this fairly straightforward scenario. I wonder if it's a blend of the two, but to be honest, I'm a bit lost.
Release pipelines and deployment groups have been around for longer than Azure DevOps has been named Azure DevOps. The YAML releases are rather recent. It isn't ever spelled out explicitly, but in my mind it comes down to how you plan on delivering your product.
If you are doing continuous delivery (choosing when to release: maybe daily, weekly, or quarterly), then I think you must use release pipelines. You might also choose this if you have multiple environments that aren't in the path to production but that you'd still want to deploy to.
If you are doing Continuous deployment (every push that passes tests goes to production without any real human intervention), then I imagine you'd choose to use the YAML stages. This is kind of spelled out in your second link as the approach for deploying with "release flow", which is Microsoft's approach for delivering changes for Azure DevOps.
The subject line is what I am looking to accomplish in a nutshell. The testing is for a Windows client that connects to a locally hosted server. I need the CodedUI test to run on as many VMs as possible.
I am new to Azure and all of the terminology associated with it, but have been doing a bit of research and it looks like Azure Pipelines may help me accomplish what I need. My company's Azure admin is not familiar with Pipelines and has asked if I may need to use the Microsoft administered Azure DevOps for that.
I am hoping that someone who knows what they're talking about could help me with this. Is what I am trying to do feasible? What are all the pieces that I will need? Is there an upper limit on how many VMs I can run a test on simultaneously?
Thanks in advance!
Azure DevOps Pipelines can help you accomplish this. There are some considerations though.
The standard way to UI test a web app would be to create a build that includes your app and tests, then create a new release definition with the built-in "Visual Studio Test" tasks, and run the release on a number of Microsoft-hosted agents (VMs).
First problem, since you are using a windows client, Microsoft-hosted agents probably won't work because they don't have connectivity to your network. You can use self-hosted agents, but that means you have VMs to manage now.
Second problem, pricing is not based on minutes, but on number of concurrent jobs. If you want to be able to run tests on 20 agents at once, you have to pay for 20 concurrent jobs, even if you only run your tests for 5 minutes a month.
Putting on my creative thinking hat... Here's a solution that would work with a single Microsoft-hosted agent. You could create an ARM template that does the following:
Stand up as many VMs as you want
Use VM extensions and PowerShell DSC and/or other scripts to configure the VM (you can install Windows features, connect to your domain, install Chrome, etc.)
Run PowerShell scripts to download your application, configuration, and testing tools from somewhere (e.g. a file server or Azure storage), then run your tests and publish the results
Your release pipeline would deploy the ARM template to a new resource group, wait, and then delete everything when the tests are done.
This solution has the benefit of running on VMs in your network without making you maintain or pay for VMs long term.
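A rough sketch of the release-side script for that idea, assuming the Az PowerShell module and an ARM template in the build artifact; the resource group name, location, template file name, and the vmCount parameter are all hypothetical.

    # Sketch: create a throwaway resource group, deploy the test VMs from an
    # ARM template, then tear everything down when the tests are done.
    $rg = "uitest-$(Get-Date -Format 'yyyyMMddHHmmss')"

    New-AzResourceGroup -Name $rg -Location 'eastus2'
    try {
        New-AzResourceGroupDeployment -ResourceGroupName $rg `
            -TemplateFile '.\azuredeploy.json' `
            -TemplateParameterObject @{ vmCount = 20 }   # hypothetical template parameter

        # The VM extensions defined in the template run the tests and publish
        # the results; here you might poll a storage account for completion.
    }
    finally {
        Remove-AzResourceGroup -Name $rg -Force   # delete everything when done
    }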
My source code is on GitHub.
I have an Azure Devops pipeline set up to build and deploy the application to an Azure subscription.
I also have the full azure environment defined in ARM templates.
I'd like to run the template deployment only when a specific folder changes in my GitHub repo.
Path triggers are only for Azure Devops repos.
I investigated some other possible solutions, but there is no clear documentation on how to achieve this exactly:
Custom condition on build or release task.
Pre-deployment conditions. Maybe artifact filters?
Pre-deployment Gates?
The ARM template deployment is idempotent, I know, but it takes several long minutes to run even if there was no infrastructure change, and I'd like to avoid that wasted time on every build.
Sounds like you have a single pipeline for both the infrastructure and the application code. I have separate pipelines for each: one for infrastructure as code, and other builds/pipelines for applications, NuGet package creation, etc. Perhaps split the pipeline and have the application deployment trigger after, and separately from, the infrastructure deployment pipeline. That way the application build and deployment can run on a more frequent cycle. If splitting isn't practical, the "custom condition" idea you mentioned can also work; a sketch follows.
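Along the lines of the custom-condition option: a hedged sketch of a script step that inspects which paths changed and publishes the result as a pipeline variable, which the ARM deployment step can then condition on with something like eq(variables['InfraChanged'], 'True'). The folder name and variable name are hypothetical, and the diff assumes the checkout fetched enough history for HEAD~1 (a multi-commit push would need a wider diff range).

    # Sketch: detect whether the last commit touched the infrastructure folder
    # and expose the result as a pipeline variable for a custom condition.
    # 'infrastructure/' and 'InfraChanged' are hypothetical names.
    $changed = git diff --name-only HEAD~1 HEAD
    $infraChanged = [bool]($changed | Where-Object { $_ -like 'infrastructure/*' })
    Write-Host "##vso[task.setvariable variable=InfraChanged]$infraChanged"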
What are the different ways to deploy ADF v2 pipelines to different environments? What is the best approach for fast, repeatable, reliable deployments of pipelines?
Thanks in advance.
Currently in v2 there aren't really any best practices to follow, as the development tools are still in private preview. But as somebody with access to the new dev UI, I can offer assurances that the things you seek are coming. Probably later this month, but I'm guessing.
With regards to repeatability and automation of your deployments, you have two options:
Script the deployments using PowerShell. This works like the deployment of Data Factory v1, but you'll have to use the v2 cmdlets, and you'll have to add triggers. You can use PowerShell to parameterize connections to linked services and other environment-specific values (see the sketch after this list).
Deploy using ARM templates and run the deployment from a PowerShell / TFS deployment. You can find an example here. In this case it is possible to use ARM template parameters to parameterize connections to linked services and other environment-specific values.
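A minimal sketch of the first option using the AzureRm Data Factory v2 cmdlets; the resource group, factory name, resource names, and file paths are all hypothetical placeholders.

    # Sketch: script an ADF v2 deployment with the AzureRm v2 cmdlets.
    # All names and paths below are hypothetical.
    $rg      = 'rg-datafactory-test'
    $factory = 'adf-contoso-test'

    # Keep one linked-service definition file per environment so that
    # connection strings and other environment-specific values differ.
    Set-AzureRmDataFactoryV2LinkedService -ResourceGroupName $rg -DataFactoryName $factory `
        -Name 'AzureSqlLinkedService' -DefinitionFile '.\linkedServices\AzureSql.test.json'

    Set-AzureRmDataFactoryV2Dataset -ResourceGroupName $rg -DataFactoryName $factory `
        -Name 'SourceDataset' -DefinitionFile '.\datasets\SourceDataset.json'

    Set-AzureRmDataFactoryV2Pipeline -ResourceGroupName $rg -DataFactoryName $factory `
        -Name 'CopyPipeline' -DefinitionFile '.\pipelines\CopyPipeline.json'

    # Unlike v1, triggers are separate resources: deploy and start them explicitly.
    Set-AzureRmDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $factory `
        -Name 'DailyTrigger' -DefinitionFile '.\triggers\DailyTrigger.json'
    Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $factory -Name 'DailyTrigger'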