I'm wondering what the best way is to create and manage an Azure infrastructure. By infrastructure, I mean a set of resources used by a project, e.g. an App Service plan, a web service, a SQL server, etc.
Currently, I see that there are a couple of ways to do this programmatically in a CD fashion:
By uploading a template with the needed resources
By creating each resource using its own PowerShell Module: E.g. Az.Websites, Az.Sql, Az.IotHub etc.
By using Az CLI, which is approximately the same as 2.
What are the pros and cons of each method?
You can try Azure ARM templates. They support all the resources you mentioned and let you describe the deployment in a simple JSON structure. Once you have prepared the ARM template, you can deploy it using an Azure DevOps release pipeline; for more details, check the Microsoft documentation.
trigger:
- master

pool:
  vmImage: 'windows-latest'

steps:
- task: AzureFileCopy@4
  inputs:
    SourcePath: 'templates'
    azureSubscription: 'copy-connection'
    Destination: 'AzureBlob'
    storage: 'demostorage'
    ContainerName: 'projecttemplates'
  name: AzureFileCopy
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'copy-connection'
    subscriptionId: '00000000-0000-0000-0000-000000000000'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'demogroup'
    location: 'West US'
    templateLocation: 'URL of the file'
    csmFileLink: '$(AzureFileCopy.StorageContainerUri)templates/mainTemplate.json$(AzureFileCopy.StorageContainerSasToken)'
    csmParametersFileLink: '$(AzureFileCopy.StorageContainerUri)templates/mainTemplate.parameters.json$(AzureFileCopy.StorageContainerSasToken)'
    deploymentMode: 'Incremental'
    deploymentName: 'deploy1'
Basically, when you want to build an infrastructure, all of these vehicles will do the same job; the difference is speed and convenience. Operations you would do through the portal interface can be done in a shorter time using the CLI. If you are working with multiple cloud providers (AWS, GCP, Azure), I recommend using Terraform, so you don't need to be knowledgeable about every provider's tooling to build the infrastructure.
We suggest using ARM templates for a couple of reasons. ARM templates use declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. ARM templates are more consistent and are idempotent. If you rerun a PowerShell or CLI command numerous times, you can get different results. More pros can be found here; I am not going to re-write our docs.
The downside of ARM templates is that they can get complex, especially when you start nesting templates or start using Desired State Configuration. We have recently released Bicep (Preview) to reduce some of the complexity.
PowerShell and CLI are pretty similar in their pros and cons, but there are times I find one easier to use than the other (e.g. it's easier to configure Web Apps with the CLI, but Azure AD needs PowerShell). The CLI is of course better if you are running on a non-Windows client, but now you can run PowerShell on Linux, so that is not a hard-and-fast rule.
The downside with PowerShell or CLI is that you must understand the dependencies of your infrastructure and code the script accordingly. ARM templates can take care of this orchestration and deploy everything in the proper order. This can also make PowerShell/CLI slower to deploy resources, since they are not deployed in parallel where possible unless you code your script in an async manner.
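For instance, with the CLI the App Service plan has to exist before the web app that runs on it, and your script has to encode that ordering itself; a minimal sketch as a pipeline step (the service connection and resource names below are hypothetical):

- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'   # hypothetical service connection
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Each resource must be created in dependency order; an ARM template would
      # work this ordering out itself and deploy independent resources in parallel.
      az group create --name demo-rg --location westus
      az appservice plan create --name demo-plan --resource-group demo-rg --sku B1
      az webapp create --name demo-webapp --resource-group demo-rg --plan demo-plan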
I would be remiss if I didn't mention Terraform. Terraform is great if you want consistency in deployments across clouds like Azure, AWS and GCP.
Related
I'm working with a subscription that has a few different deployed environments (dev, test, staging, etc.). Each environment has its own storage account, containing an associated Terraform state file. These environments get deployed via Azure DevOps Pipelines.
It's easy enough to get at the .tfstate files that have been created this way, through the portal, CLI, etc.
But is it possible to access these state files using the 'terraform state' commands, for example using Azure Cloud Shell? If so, how do you point them at the right location?
I've tried using the terraform state commands in a Cloud Shell, but it's not clear how to point them to the right location or if this is indeed possible.
For this requirement, you can use the AzurePowerShell task to achieve it.
1. First, if you can achieve your requirement via PowerShell in the Azure portal, then you can achieve the same thing with the AzurePowerShell task (AzurePowerShell runs on the agent using the service connection/service principal you provide).
- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'testbowman_in_AAD' # This service connection relates to a service principal on the Azure side.
    ScriptType: 'InlineScript'
    Inline: |
      # Put your logic here.
    azurePowerShellVersion: 'LatestVersion'
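For example, a minimal sketch of what that inline logic could look like if you want to pull a state file down with the Az.Storage cmdlets (the resource group, storage account, container and blob names are hypothetical):

- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'testbowman_in_AAD'
    ScriptType: 'InlineScript'
    Inline: |
      # Hypothetical names; use the storage account that holds the environment's state.
      $key = (Get-AzStorageAccountKey -ResourceGroupName 'rg-dev' -Name 'stdevtfstate')[0].Value
      $ctx = New-AzStorageContext -StorageAccountName 'stdevtfstate' -StorageAccountKey $key
      # Download the state blob to the agent so later steps can inspect it.
      Get-AzStorageBlobContent -Container 'tfstate' -Blob 'dev.terraform.tfstate' -Context $ctx -Destination "$(Build.ArtifactStagingDirectory)/dev.tfstate"
    azurePowerShellVersion: 'LatestVersion'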
2. Second, you can use AzCopy to download the file and then operate on it. The Microsoft-hosted DevOps agents have this tool installed.
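A minimal sketch of such a step (the storage account, container and SAS token are placeholders you would substitute):

- script: azcopy copy 'https://stdevtfstate.blob.core.windows.net/tfstate/dev.terraform.tfstate?<sas-token>' '$(Build.ArtifactStagingDirectory)/dev.tfstate'
  displayName: 'Download tfstate with AzCopy'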
You can run this command in the Azure Cloud Shell: terraform state pull > state.tfstate (you can name the file something like dev.tfstate; the .tfstate extension is the important part).
All you need to do is change to the directory containing your Terraform configuration (one that has been initialized with terraform init against that environment's azurerm backend, so the state commands point at the right storage account and key)
and run this command: terraform state pull > dev.tfstate
I have been working on deploying a Terraform package using an Azure DevOps pipeline.
We have our tf state file locally, and no plans to move it to an Azure storage account. Could you please help with how we can define the attribute values in the terraform init step in the pipeline?
- task: TerraformTaskV2@2
  displayName: Terraform init
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: 'some directory'
    backendServiceArm: 'some service conn'
    backendAzureRmContainerName: ??
    backendAzureRmResourceGroupName: ??
    backendAzureRmStorageAccountName: ??
    backendAzureRmKey: ??
What should the values be for the resource group, storage account name and container name? If I don't specify these values, the pipeline fails with the error below:
##[error]Error: Input required: backendAzureRmStorageAccountName
Any help on this is much appreciated. Thanks in advance.
I'm unsure if you can use the TerraformTaskV2 without utilizing a cloud provider's backend. The README for that task doesn't show options for using a local backend, only the following for terraform init:
... AzureRM backend configuration
... Amazon Web Services(AWS) backend configuration
... Google Cloud Platform(GCP) backend configuration
I haven't had experience with this yet, but you could look at the extension Azure Pipelines Terraform Tasks, which does explicitly support a local backend:
The Terraform CLI task supports the following terraform backends
local
...
Just a note on working in teams:
if you're working in a team deploying infrastructure, using a local backend can lead to undefined state and/or undesirable outcomes. A well-chosen remote backend can "...support locking the state while operations are being performed, which helps prevent conflicts and inconsistencies." - docs
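If you do later decide to move the state into an Azure storage account, the inputs the task is asking for simply point at that account; a minimal sketch with hypothetical resource group, storage account and container names:

- task: TerraformTaskV2@2
  displayName: Terraform init
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: 'some directory'
    backendServiceArm: 'some service conn'
    # Hypothetical values: an existing resource group/storage account/container
    # that will hold the state, plus the blob name to store it under.
    backendAzureRmResourceGroupName: 'rg-terraform-state'
    backendAzureRmStorageAccountName: 'sttfstate001'
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: 'dev.terraform.tfstate'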
I'm pretty familiar with Azure DevOps, pipelines and all that stuff, and now I'm trying to dig into GitHub Actions. The question is basically pretty simple: I want to deploy my .NET Core 5 app to Azure. The only problem is that all the examples include this publish profile.
Since I provision the infrastructure with an ARM template, the publish profile is simply not there yet. I could find some examples that deploy the ARM template and a couple of examples that deploy the Web App, but no example combining both. Maybe I'm a little bit polluted by the way Azure DevOps works and the (wonderful) idea of service connections.
So my question is, how do I publish a web app to Azure when I don't have the ability to download a publish profile and store that in my GitHub secrets, using GitHub Actions?
OK, I found this one out myself. Apparently there's an action you can use that will download the publish profile for you. This means that you don't have to have the publish profile up front. The step looks like this:
- name: Get WebApp/FunctionApp publish profile
  id: webapp-dev
  uses: aliencube/publish-profile-actions@v1
  env:
    AZURE_CREDENTIALS: ${{ secrets.AZURE_CREDENTIALS }}
  with:
    resourceGroupName: 'your-resource-group-name'
    appName: 'your-app-name'
This leaves you with an output variable called `profile`, which can be used in the following steps like so:
- name: 'Run Azure webapp deploy action using publish profile credentials'
  uses: azure/webapps-deploy@v1
  with:
    app-name: 'your-app-name'
    publish-profile: ${{ steps.webapp-dev.outputs.profile }}
    package: './'
This means you can now provision resources using ARM templates, get the publish profile (just-in-time) and use that to deploy your system... Everybody happy...
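For completeness, the provisioning itself can sit earlier in the same workflow. A sketch under the assumption that you use the azure/login and azure/arm-deploy actions with an AZURE_CREDENTIALS secret (the subscription ID, resource group name and template paths below are placeholders):

- name: Azure login
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}

- name: Provision infrastructure from ARM template
  uses: azure/arm-deploy@v1
  with:
    subscriptionId: '00000000-0000-0000-0000-000000000000'
    resourceGroupName: 'your-resource-group-name'
    template: './deploy/azuredeploy.json'
    parameters: './deploy/azuredeploy.parameters.json'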
I've been researching Azure DevOps and I've come across what looks like a pretty obvious security hole in Azure pipelines.
So, I'm creating my pipeline as YAML and defining 2 stages: a build stage, and a deployment stage. The deployment stage looks like this:
- stage: deployApiProdStage
  displayName: 'Deploy API to PROD'
  dependsOn: buildTestApiStage
  jobs:
  - deployment: deployApiProdJob
    displayName: 'Deploy API to PROD'
    timeoutInMinutes: 10
    condition: and(succeeded(), eq(variables.isRelease, true))
    environment: PROD
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            displayName: 'Deploy Azure web app'
            inputs:
              azureSubscription: '(service connection to production web app)'
              appType: 'webAppLinux'
              appName: 'my-web-app'
              package: '$(Pipeline.Workspace)/$(artifactName)/**/*.zip'
              runtimeStack: 'DOTNETCORE|3.1'
              startUpCommand: 'dotnet My.Api.dll'
The Microsoft documentation talks about securing this by adding approvals and checks to an environment; in the above case, the PROD environment. This would be fine if the protected resource here that allows publishing to my PROD web app - the service connection in azureSubscription - were pulled from the PROD environment. Unfortunately, as far as I can tell, it's not. It's associated instead with the pipeline itself.
This means that when the pipeline is first run, the Azure DevOps UI prompts me to permit the pipeline access to the service connection, which is needed for any deployment to happen. Once access is permitted, that pipeline has access to that service connection for evermore. This means that from then on, that service connection can be used no matter which environment is specified for the job. Worse still, any environment name specified that is not recognized does not cause an error, but causes a blank environment to be created by default!
So even if I setup a manual approval for the PROD environment, if someone in the organization manages to slip a change through our code review (which is possible, with regular large code reviews) that changes the environment name to 'NewPROD' in the azure-pipelines.yml file, the CI/CD will create that new environment, and go ahead and deploy immediately to PROD because the new environment has no checks or approvals!
Surely it would make sense for the service connection to be associated with the environment instead. It would also make sense to have an option to ban the auto-creation of new environments - I don't really see how that's particularly useful anyway. Right now, as far as I can tell, this is a huge security hole that could allow deployments to critical environments by anyone who has commit access to the repo or manages to slip a change to the azure-pipelines.yml file through the approval process, introducing a major single point of failure/weakness. What happened to the much-acclaimed incremental approach to securing your pipelines? Am I missing something here, or is this security hole as bad as I think it is?
In your example, it seems you created/used an empty environment with no deployment target. Currently, only the Kubernetes and virtual machine resource types are supported in an environment.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops
The resource in your example is a service connection, so you need to go to the service connection and define checks on it.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass
When creating a function in Azure in a Linux environment, it seems CI/CD is completely missing from its capabilities, as I can't see any actual files. My VS Code tells me this:
Error: This plan does not support viewing files.
and when I try to deploy my files to the server through the Azure pipeline, everything works except for the
Azure App Service Deploy
task, which tells me this:
2020-04-21T19:48:37.6676043Z ##[error]Failed to deploy web package to App Service.
2020-04-21T19:48:37.6689536Z ##[error]Error: Error: Failed to deploy web package to App Service. Conflict (CODE: 409)
I did get it working directly through VS Code with a Windows environment and didn't notice any of those issues.
Can you confirm this is not possible on Linux, or perhaps there is a solution for what I am looking for?
Is it possible to do continuous deployment (CI/CD) of an Azure Function in a Linux environment via Azure DevOps?
The answer is Yes.
To deploy an Azure Function, you should use the Azure Function App task instead of the Azure App Service Deploy task. See the example below.
steps:
- task: AzureFunctionApp@1
  inputs:
    azureSubscription: '<Azure service connection>'
    appType: functionAppLinux
    appName: '<Name of function app>'
    # Uncomment the next lines to deploy to a deployment slot
    # Note that deployment slots are not supported for the Linux Dynamic SKU
    #deployToSlotOrASE: true
    #resourceGroupName: '<Resource Group Name>'
    #slotName: '<Slot name>'
Please check out this document Continuous delivery by using Azure DevOps for detailed examples.