What are the different ways to deploy ADF v2 pipelines to different environments? What is the best approach for fast, repeatable, reliable deployments of pipelines?
Thanks in advance.
Currently in v2 there aren't really any best practices to follow, as the development tools are still in private preview. But as somebody with access to the new dev UI, I can offer assurances that the things you seek are coming. Probably later this month, but I'm guessing.
With regards to repeatability and automation of your deployments, you have 2 options:
Script the deployments using PowerShell. This works like the deployment of Data Factory v1, but you'll have to use the v2 cmdlets, and you'll have to add triggers. You can use PowerShell to parameterize connections to linked services and other environment-specific values.
Deploy using ARM templates and run the deployment from PowerShell / TFS. You can find an example here. In this case it is possible to use ARM template parameters to parameterize connections to linked services and other environment-specific values. A sketch of both options follows below.
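For illustration, a minimal PowerShell sketch of both options. All resource names and file paths are placeholders, and it assumes the AzureRM.DataFactoryV2 module from the preview era (the newer Az module uses Set-AzDataFactoryV2* equivalents):

    # Option 1: script the deployment with the ADF v2 cmdlets.
    $rg  = "rg-adf-uat"      # placeholder resource group
    $adf = "adf-demo-uat"    # placeholder factory name

    # Linked services carry environment-specific connection details,
    # so keep one definition file per environment (or patch the JSON first).
    Set-AzureRmDataFactoryV2LinkedService -ResourceGroupName $rg -DataFactoryName $adf `
        -Name "SqlDbLinkedService" -DefinitionFile ".\uat\SqlDbLinkedService.json"

    Set-AzureRmDataFactoryV2Dataset -ResourceGroupName $rg -DataFactoryName $adf `
        -Name "InputDataset" -DefinitionFile ".\datasets\InputDataset.json"

    Set-AzureRmDataFactoryV2Pipeline -ResourceGroupName $rg -DataFactoryName $adf `
        -Name "CopyPipeline" -DefinitionFile ".\pipelines\CopyPipeline.json"

    # Triggers are new in v2 and must be deployed and started explicitly.
    Set-AzureRmDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $adf `
        -Name "DailyTrigger" -DefinitionFile ".\triggers\DailyTrigger.json"
    Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $adf -Name "DailyTrigger"

    # Option 2: deploy an exported ARM template, overriding environment-specific parameters.
    New-AzureRmResourceGroupDeployment -ResourceGroupName $rg `
        -TemplateFile ".\arm\factory.json" -TemplateParameterFile ".\arm\parameters-uat.json"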
We use Azure Pipelines to implement our continuous integration pipeline. The app is deployed to virtual machines that we need to provision and configure. There are tons of libraries, patches, configurations, and applications that we need to deploy on the target VM before we get our code onto it.
The question is: what is the best tool to provision and configure these virtual machines? I was thinking of using Ansible AWX. Basically, Azure Pipelines would make a call to the AWX API, which would then take it from there and finalize things.
There is an Azure Pipelines extension that allows me to execute a playbook: https://github.com/microsoft/azure-pipelines-extensions/blob/master/Extensions/Ansible/Src/readme.md. But I would like to use AWX instead so that my Ansible/deployment code is decoupled from my pipeline. Roughly, I picture the call looking like the sketch below.
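Something like this is what I have in mind for that call: a rough PowerShell sketch, where the AWX host, job template id, and token are placeholders, and I am assuming AWX's /api/v2/job_templates/{id}/launch/ endpoint:

    # Hypothetical values; substitute your AWX host, job template id, and OAuth token.
    $awx     = "https://awx.example.com"
    $jobTpl  = 42
    $headers = @{ Authorization = "Bearer $env:AWX_TOKEN" }

    # Launch the job template, passing environment-specific extra vars.
    $body = @{ extra_vars = @{ target_env = "uat" } } | ConvertTo-Json
    $job  = Invoke-RestMethod -Method Post -Uri "$awx/api/v2/job_templates/$jobTpl/launch/" `
                              -Headers $headers -ContentType "application/json" -Body $body

    # Poll until AWX reports a terminal status, then fail the pipeline step if the job failed.
    do {
        Start-Sleep -Seconds 15
        $status = (Invoke-RestMethod -Uri "$awx/api/v2/jobs/$($job.job)/" -Headers $headers).status
    } while ($status -in @("pending", "waiting", "running"))
    if ($status -ne "successful") { throw "AWX job finished with status '$status'" }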
Any suggestions?
As far as I know, Ansible allows you to automate the deployment and configuration of resources in your environment. It could meet your needs.
As you said, Azure Pipelines supports running a playbook via the Ansible task (Ansible extension).
So I think you can complete both the VM configuration and the code deployment directly in the Azure pipeline.
If you want to separate these two steps, you can split them into two pipelines (VM configuration and code deployment). To avoid confusion between configuration and deployment code, you could also split them into two repos.
On the other hand, if you run the playbook in the Azure pipeline, the pipeline also supports adding tasks to change the parameters in the playbook (e.g. the Replace Tokens task).
Here is an operation guide about using Ansible in Azure Pipelines.
By the way, if the virtual machine is an Azure VM, you could also use an ARM template to update the Azure VM resource.
Personally, I would drop the AWX requirement. It's something else to manage and maintain, and an entirely separate interface too. Instead, just do your whole pipeline in one place: Azure DevOps. Pick one or the other. Tower doesn't have built-in source control, so I recommend ADO over it, but they'll both run Ansible and they'll both do it on your own control nodes. There's no reason to take an extra step with another tool; it adds way too much complexity.
I would like to perform the following steps on schedule (presumably using Azure Automation):
Provision a VM in Azure
Run a PowerShell script on that VM
Deprovision VM
Actually I have more steps, but left only three for simplicity.
I am new to IaC and would appreciate your general guidance and advice.
Is this within the scope of Azure Automation, or do I need something else?
I would like to code everything in text format, put it in Git, and update it automatically via pull requests.
Should I use Runbooks or DSC?
Regarding step 2, I cannot figure out how I can upload my PowerShell script onto the newly created VM and run it locally. The script downloads some files and updates some remote resources.
Thanks,
Ruslan
There are a lot of options and tools to achieve your goal.
If you will be working strictly in the Azure cloud, the following tools are most commonly used for building an environment:
Azure PowerShell
Azure CLI
ARM templates
Each of them is very similar, but all are a little different and have their own benefits; they are all tools for building your virtual infrastructure. For configuring your resources there are other tools. Like you mentioned yourself, DSC is a tool to configure virtual machines.
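To give a taste of DSC, a minimal configuration could look like this (the feature and folder are purely illustrative):

    # A minimal PowerShell DSC configuration: ensures IIS is installed and an app folder exists.
    Configuration WebServerConfig {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node "localhost" {
            WindowsFeature IIS {
                Name   = "Web-Server"
                Ensure = "Present"
            }
            File AppFolder {
                DestinationPath = "C:\inetpub\myapp"   # illustrative path
                Type            = "Directory"
                Ensure          = "Present"
            }
        }
    }

    WebServerConfig -OutputPath .\dsc                  # compile to a .mof document
    Start-DscConfiguration -Path .\dsc -Wait -Verbose  # apply it to the node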
If you are planning to use GitHub to push your code, I would recommend using ARM templates. You can very easily reuse your own or other people's templates by referencing them in your code. However, this might be the 'hardest' solution to learn in terms of understanding the syntax, compared to the CLI and PowerShell, but it is also the most frequently used.
It is possible to build your environment and configure it in the same script using the Azure CLI, Azure PowerShell, or another open-source solution like Terraform, but this is not best practice. A sketch of your three steps in a single script follows below.
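For example, your three steps could be sketched with the Az PowerShell module like this; all names are placeholders, and the script file is assumed to sit next to the deployment script:

    # 1. Provision a throwaway VM (New-AzVM's simplified parameter set fills in NIC, IP, etc.).
    $rg   = "rg-scheduled-job"   # placeholder names throughout
    $vm   = "worker-vm"
    $loc  = "westeurope"
    $cred = Get-Credential       # local admin credentials for the new VM

    New-AzResourceGroup -Name $rg -Location $loc
    New-AzVM -ResourceGroupName $rg -Name $vm -Location $loc `
             -Image "Win2019Datacenter" -Credential $cred

    # 2. Push a local script into the VM and run it there (the Run Command feature);
    #    the Custom Script Extension is an alternative for this step.
    Invoke-AzVMRunCommand -ResourceGroupName $rg -VMName $vm `
        -CommandId "RunPowerShellScript" -ScriptPath ".\do-work.ps1"

    # 3. Deprovision: deleting the resource group also removes the NIC, disk,
    #    and public IP that New-AzVM created alongside the VM.
    Remove-AzResourceGroup -Name $rg -Force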
A lot of starter scripts are publicly available on GitHub and in the Microsoft docs.
If you have any specific questions you can always send me a message; I am currently working on Azure automation myself.
We are working on the following within the Azure portal:
Azure Functions
Data Factory
Logic Apps
Storage account (not files)
Now that we are done with development, we need to deploy these Azure resources in the client's UAT environment.
I looked around (I might be missing something) and found that deployment of Azure resources is not straightforward.
In Azure, the client's environment is just another subscription, correct?
So I found this blog, which uses different PowerShell scripts to copy resources from one subscription to another.
Is this the right approach? And does it cover everything required for the resources to execute flawlessly (I still need to go through the scripts), e.g. permissions, Data Factory datasets, etc.?
Is there any other way to deploy (a kind of export & import)?
Basically what you need is to create a reusable ARM template. Your question lacks some details, but ARM templates are the way to automate deployment in Azure. At a high level:
Start by authoring your ARM template to deploy the vanilla required resources; look here:
https://learn.microsoft.com/en-us/azure/templates/microsoft.web/sites/functions
https://learn.microsoft.com/en-us/azure/templates/microsoft.datafactory/factories
https://learn.microsoft.com/en-us/azure/templates/microsoft.logic/integrationaccounts
https://learn.microsoft.com/en-us/azure/templates/microsoft.datalakeanalytics/accounts/storageaccounts
You can combine all of them in one big template using ARM template dependencies (dependsOn) and other template functions.
Look here:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-functions
After you finish, ARM templates can be used in many ways, including PowerShell (see the sketch below) and direct API calls; you can even create a deployment in Azure and save it to be reused with a click.
Look here; also, if there will be a high volume of users, consider adding it to the Marketplace.
https://learn.microsoft.com/en-us/azure/azure-resource-manager/
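For instance, deploying (and first validating) such a template from PowerShell might look like this; the file names, resource group, and the environmentName parameter are placeholders that your own template would have to define:

    # Placeholders throughout; adjust to your subscription, files, and parameters.
    $rg = "rg-client-uat"
    New-AzureRmResourceGroup -Name $rg -Location "westeurope" -Force

    # Validate the template and parameter file before actually deploying.
    Test-AzureRmResourceGroupDeployment -ResourceGroupName $rg `
        -TemplateFile ".\template.json" -TemplateParameterFile ".\parameters.uat.json"

    # Deploy; individual template parameters can also be overridden inline
    # (this assumes the template defines an 'environmentName' parameter).
    New-AzureRmResourceGroupDeployment -ResourceGroupName $rg `
        -TemplateFile ".\template.json" -TemplateParameterFile ".\parameters.uat.json" `
        -environmentName "uat"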
After finishing your implementation of the vanilla resources, you can then move on to adding any customization you might have.
This is the right and best way to do it, AFAIK.
Also look here to see all of your existing resources in an ARM template view:
https://resources.azure.com/
My understanding of Azure is that almost everything, with a few exceptions, has an ARM template representation.
Hope this helps.
We are trying to implement a continuous deployment pipeline, and we would like to create our resources from Azure Resource Manager templates.
Questions
1. What kind of project should we choose?
2. How should we integrate it with Octopus?
3. How should we trigger the infrastructure setup process?
I. Using this template is progress in the right direction.
II. Octopus calls the Azure APIs, so it is much better to provide a deployment package and, if needed, deploy it using VS.
III. Starting continuous deployment with a push trigger makes the process easier, as the locus of control is better.
I have two Azure VMs running in a cloud service. They contain almost the same things. Some TCP ports are also opened between them.
Is it possible to create a deployment package from this existing setup so that I can redeploy it easily at a later time? I.e., I want to be able to do this:
1. Create a deployment package from the existing setup*
2. Delete the whole existing cloud service, including the VMs
3. Deploy the package from step 1 and have everything created again.
*I can save one of the VMs to my Azure storage and use it as a template for both of them if that is easier.
How can I accomplish this, if it is possible at all?
Yes, you can take what you have as a template and use it to stand up multiple silos. But in IaaS, there isn't a notion of a deployment package. There are a few things you'll need to do...
1) Understand how to take an existing VM and turn it into an image.
2) Use PowerShell or another DevOps-style automation suite (Chef/Puppet/etc.) to define and deploy your silo.
You seem specifically interested in how to create an image, so I'd recommend using the tutorial we have published on this: http://www.windowsazure.com/en-us/documentation/articles/virtual-machines-capture-image-windows-server/ This does of course presume you're running Windows Server. A Linux version can be found at: http://www.windowsazure.com/en-us/documentation/articles/virtual-machines-linux-capture-image/
The automation of a deployment depends on a great many things, so I'd suggest, as a starting point, familiarizing yourself with the management API: http://msdn.microsoft.com/en-us/library/windowsazure/ee460799.aspx A rough sketch of the whole flow follows below.
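To make the flow concrete, here is a rough sketch using the classic (ASM) PowerShell cmdlets; service, VM, and image names are placeholders, and the source VM is assumed to have been sysprepped/generalized first, per the tutorials above:

    # 1. Capture one VM as a reusable image (capturing removes the source VM).
    Stop-AzureVM      -ServiceName "my-svc" -Name "vm1" -Force
    Save-AzureVMImage -ServiceName "my-svc" -Name "vm1" -ImageName "silo-base" -OSState Generalized

    # 2. Delete the whole existing cloud service, including the remaining VM and its disks.
    Remove-AzureService -ServiceName "my-svc" -Force -DeleteAll

    # 3. Recreate the service and both VMs from the captured image.
    New-AzureService -ServiceName "my-svc" -Location "West Europe"
    foreach ($name in "vm1", "vm2") {
        New-AzureVMConfig -Name $name -InstanceSize Small -ImageName "silo-base" |
            Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "P@ssw0rd!" |
            New-AzureVM -ServiceName "my-svc"
        # The TCP endpoints opened between the VMs would be re-added with
        # Add-AzureEndpoint in the VM config pipeline above.
    }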
With the introduction of Resource Manager, you can now easily use JSON templates to deploy and redeploy resources in Azure. There are also starter templates available: https://azure.microsoft.com/en-us/documentation/templates/