AzureResourceManagerTemplateDeployment fail in Azure DevOps pipeline - azure

I have the following task that runs an ARM template, tested and working in another system, for creating a management group in Azure. Everything is checked and it should work, but it keeps giving this error:
##[error]Check out the troubleshooting guide to see if your issue is addressed: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment?view=azure-devops#troubleshooting
##[error]Error: Could not find any file matching the template file pattern
What am I doing wrong?
parameters:
- name: ParentId
  type: string
- name: NameId
  type: string
- name: DisplayName
  type: string

steps:
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: "Management Group"
    azureResourceManagerConnection: ServiceConnectionName
    location: "West Europe"
    csmFile: "$(System.DefaultWorkingDirectory)/**/ManagementGroup.json"
    csmParametersFile: "$(System.DefaultWorkingDirectory)/**/ManagementGroup.parameters.json"
    overrideParameters: "-NameId ${{ parameters.NameId }} -DisplayName ${{ parameters.DisplayName }} -ParentId ${{ parameters.ParentId }}"
    deploymentMode: "Incremental"
    deploymentOutputs: armOutputs

I recently had a similar issue. I noticed you are using capital letters in your ARM template file names; as far as I know, they should be lowercase, since the file pattern match is case-sensitive on Linux agents. To solve this particular issue, you need to correct the following lines:
csmFile: "$(System.DefaultWorkingDirectory)/**/ManagementGroup.json"
csmParametersFile: "$(System.DefaultWorkingDirectory)/**/ManagementGroup.parameters.json"
to
csmFile: "$(System.DefaultWorkingDirectory)/**/managementgroup.json"
csmParametersFile: "$(System.DefaultWorkingDirectory)/**/managementgroup.parameters.json"
And rename the files to lowercase in your repository.
That should solve the problem.
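If you are not sure what casing the checked-out files actually have on the agent, a hypothetical debug step like the one below (assuming a Linux-hosted agent, where `find` is available) lists every match regardless of case, so you can write the pattern to match the repository exactly:

```yaml
# Hypothetical debug step, placed before the deployment task:
# lists the template files no matter how they are cased on disk.
- script: find "$(System.DefaultWorkingDirectory)" -iname 'managementgroup.json' -o -iname 'managementgroup.parameters.json'
  displayName: 'List ARM template files (case-insensitive)'
```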

Related

creating ARM template for storage devops

I try to create a storage account via a devops pipeline.
So I have this yaml file:
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'spn-azure--contributor-002'
    subscriptionId: 'fea4c865-1e54-44b3-ba1d-07315468f083'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'rg-idn-'
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: '**/template.json'
    csmParametersFile: '**/parameters.json'
    deploymentMode: 'Incremental'
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    azureResourceManagerConnection: 'spn-azure--contributor-002'
    subscriptionId: 'fea4c865-1e54-44b3-ba1d-07315468f083'
    resourceGroupName: 'rg-idn-'
    location: 'West Europe'
    csmFile: ARMTemplates/storage/azuredeploy.json
    csmParametersFile: ARMTemplates/storage/azuredeploy.parameters.json
It looks like you have an Azure Policy blocking your deployment here, meaning that your ARM template does not meet the rules of this policy. Someone in your organization most probably implemented it to make sure you comply with specific security / architecture standards.
You can see it from the error message: Error Type: PolicyViolation, Policy Definition Name: ESLZ Storage Account set to minimum TLS and Secure transfer should be enabled, Policy Assignment Name: ALZ_DeployEncrTLS. You should be able to find it under your Subscription / Resource Group, in the Policy blade.
This is an assumption, as we would need to look into that specific policy, but you most likely have to specify a TLS version (probably 1.2, to be checked in the policy) and enable secure transfer.
For this you need to set minimumTlsVersion to the correct version and supportsHttpsTrafficOnly to true within your ARM template. Have a look at the Storage Account ARM specs.
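Under that assumption (the exact values must be checked against the policy definition), the relevant properties on the storage account resource in the ARM template would look something like this; the sku, kind, and parameter name are placeholders, only the two properties are the point:

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2021-04-01",
  "name": "[parameters('storageAccountName')]",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Standard_LRS" },
  "kind": "StorageV2",
  "properties": {
    "minimumTlsVersion": "TLS1_2",
    "supportsHttpsTrafficOnly": true
  }
}
```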

Should I use a Terraform template or a DevOps pipeline to upload docker images to multiple regions in a public cloud?

This may sound naive, but I have to upload Docker images to all subscribed regions in a public cloud. I am planning to do it with a Terraform template: create a null_resource with multiple local-exec provisioners to log in to the Docker repo, then docker tag and docker push.
In the future we can subscribe to more cloud regions, so the number of regions might change.
I am not sure whether Terraform is the better choice or whether I should think about a DevOps pipeline. I have a basic understanding of Terraform and no idea how DevOps pipelines work.
Any suggestions?
You can do both: I use Terraform to create all my resources, and then an Azure DevOps pipeline to make the magic happen.
Here is the pipeline code I use to spin up Terraform:
parameters:
- name: terraformWorkingDirectory
  type: string
  default: $(System.DefaultWorkingDirectory)/Terraform
- name: serviceConnection
  type: string
  default: VALUE
- name: azureSubscription
  type: string
  default: VALUE
- name: appconnectionname
  type: string
  default: VALUE
- name: backendresourcegroupname
  type: string
  default: Terraform
- name: backendstorageaccountname
  type: string
  default: terraform
- name: backendcontainername
  type: string
  default: terraformstatefile
- name: RG
  type: string
  default: rg_example
- name: azureLocation
  type: string
  default: UK South
- name: terraformVersion
  type: string
  default: 1.0.4
- name: artifactName
  type: string
  default: Website

jobs:
- job: Run_Terraform
  displayName: Installing and Running Terraform
  steps:
  - checkout: Terraform
  - task: TerraformInstaller@0
    displayName: install
    inputs:
      terraformVersion: '${{ parameters.terraformVersion }}'
  - task: CmdLine@2
    inputs:
      script: |
        echo '$(System.DefaultWorkingDirectory)'
        dir
  - task: TerraformTaskV2@2
    displayName: init
    inputs:
      provider: azurerm
      command: init
      backendServiceArm: '${{ parameters.serviceConnection }}'
      backendAzureRmResourceGroupName: '${{ parameters.backendresourcegroupname }}'
      backendAzureRmStorageAccountName: '${{ parameters.backendstorageaccountname }}'
      backendAzureRmContainerName: '${{ parameters.backendcontainername }}'
      backendAzureRmKey: terraform.tfstate
      workingDirectory: '${{ parameters.terraformWorkingDirectory }}'
  - task: TerraformTaskV1@0
    displayName: plan
    inputs:
      provider: azurerm
      command: plan
      commandOptions: '-input=false'
      environmentServiceNameAzureRM: '${{ parameters.serviceConnection }}'
      workingDirectory: '${{ parameters.terraformWorkingDirectory }}'
  - task: TerraformTaskV1@0
    displayName: apply
    inputs:
      provider: azurerm
      command: apply
      commandOptions: '-input=false -auto-approve'
      environmentServiceNameAzureRM: '${{ parameters.serviceConnection }}'
      workingDirectory: '${{ parameters.terraformWorkingDirectory }}'
- job: Put_artifacts_into_place
  displayName: Putting_artifacts_into_place
  dependsOn: Run_Terraform
  steps:
  - checkout: Website
  - checkout: AuthenticationServer
  - task: DownloadPipelineArtifact@2
    displayName: Download Build Artifacts
    inputs:
      artifact: '${{ parameters.artifactName }}'
      patterns: /**/*.zip
      path: '$(Pipeline.Workspace)'
  - task: AzureWebApp@1
    displayName: 'Azure Web App Deploy: VALUE'
    inputs:
      package: $(Pipeline.Workspace)/**/*.zip
      azureSubscription: '${{ parameters.azureSubscription }}'
      ConnectedServiceName: '${{ parameters.appconnectionname }}'
      appName: VALUE
      ResourceGroupName: '${{ parameters.RG }}'
  - task: DownloadPipelineArtifact@2
    displayName: Download Build Artifacts
    inputs:
      artifact: '${{ parameters.artifactName }}'
      patterns: /authsrv/**/*.zip
      path: $(Pipeline.Workspace)/authsrv/
  - task: AzureWebApp@1
    displayName: 'Azure Web App Deploy: VALUE'
    inputs:
      package: $(Pipeline.Workspace)/authsrv/**/*.zip
      azureSubscription: '${{ parameters.azureSubscription }}'
      ConnectedServiceName: '${{ parameters.appconnectionname }}'
      appName: VALUE
      ResourceGroupName: '${{ parameters.RG }}'
What you would have to do is make a repo in GitHub or Azure Repos. It's a bit easier in Azure, as you won't have to make a service connection to GitHub, so there is less to go wrong.
Once you have a repo, you then have to set up a remote backend. I did this using these Azure PowerShell commands:
New-AzureRmResourceGroup -Name "myTerraformbackend" -Location "UK South"
New-AzureRmStorageAccount -ResourceGroupName "myTerraformbackend" -AccountName "myterraformstate" -Location "UK South" -SkuName Standard_LRS
New-AzureRmStorageContainer -ResourceGroupName "myTerraformbackend" -AccountName "nsdevopsterraform" -ContainerName "terraformstatefile"
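On the Terraform side, a sketch of the matching backend configuration (assuming the resource group, storage account, and container names created by the commands above, and the terraform.tfstate key used in the pipeline's init task) would be:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "myTerraformbackend"
    storage_account_name = "myterraformstate"
    container_name       = "terraformstatefile"
    key                  = "terraform.tfstate"
  }
}
```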
I also followed a great blog post by Julie Ng, who works for Microsoft, on best practices for remote state files: https://julie.io/writing/terraform-on-azure-pipelines-best-practices/ She also has a YouTube channel, which was a massive help.
In my Terraform code I also use a bunch of local variables. They are handy, but make sure you have created something unique that you can refer to; mine is the resource group id. It's the first thing that is set up, and all my resources then refer to that id.
Here look:
locals {
  # Ids for Resource Group, merged together with unique string
  resource_group_id_full_value = format("${azurerm_resource_group.terraform.id}")
  resource_group_id            = substr(("${azurerm_resource_group.terraform.id}"), 15, 36)
  resource_group_id_for_login  = substr(("${azurerm_resource_group.terraform.id}"), 15, 8)
  sql_admin_login              = format("${random_string.sql_name_random.id}-${local.resource_group_id_for_login}")
  sql_server_name              = format("sqlserver-${local.resource_group_id}")
  sql_server_database_name     = format("managerserver-${local.resource_group_id}")
}
You will understand this more really as you go.
I steer clear of null_resource; I've seen a lot of people on here have problems with them, so I use them sparingly.
If you're going to use Terraform for Docker, also consider HashiCorp's new tool, Waypoint: https://www.hashicorp.com/blog/announcing-waypoint
You also have Terraform workspaces to handle remote states now, but I haven't had time to look into this.
I hope this information is helpful and a good starting reference.

AzureResourceManagerTemplateDeployment fails to find template using pattern when executed in Deployment Job

I have been experimenting with Azure Logic Apps and wanted to figure out a way to codify the deployment process so that I could setup a CI/CD pipeline with secrets and all the good stuff.
So I set out with a yml file with multiple ways to deploy the same Logic App.
1. Hardcoding the values of the input params to the task (Connected Service, Subscription, Resource Group, etc.) in a step inside a regular job.
2. Doing the same thing, but inside a deployment job.
3. Using pipeline variables to extract these values, repeating 1 and 2.
4. 1 and 2 again, but this time with pipeline variables that are marked as secrets.
...and so on and so forth.
However, every time I run the AzureResourceManagerTemplateDeployment@3 task inside a deployment job, it fails to find the ARM template file.
Why is the deployment job unable to find the ARM template using the pattern that works when it is not run as a deployment job?
Do deployment jobs not have access to the build directory?
How do I help the deployment job find the file? Should I be giving it a link to the template file instead of a pattern?
Every time I search for the AzureResourceManagerTemplateDeployment task docs, I get the docs page for the AzureResourceGroupDeployment task, which is very similar but not the same:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment?view=azure-devops#troubleshooting
As I was about to post this question, I did more searching online and came across the original docs for AzureResourceManagerTemplateDeployment, which state that if the file is part of a repository, one must specify the path to the ARM template with the help of system variables.
csmFile: "$(Build.Repository.LocalPath)/**/LogicApp.json"
csmParametersFile: "$(Build.Repository.LocalPath)/**/LogicApp.parameters.json"
I can confirm that this did not work either.
What could I be missing?
stages:
- stage: 'HardcodedJobStage'
  displayName: 'HardcodedJobStage'
  jobs:
  - job: 'HardcodedJob'
    displayName: HardcodedJob
    pool:
      vmImage: ubuntu-latest
    workspace:
      clean: all
    steps:
    - task: AzureResourceManagerTemplateDeployment@3
      inputs:
        deploymentScope: 'Resource Group'
        ConnectedServiceName: 'Subscription (e6d1dg8c-bcd6-4713-b2f1-c9a0375d687d)'
        subscriptionName: 'e6d1dg8c-bcd6-4713-b2f1-c9a0375d687d'
        action: 'Create Or Update Resource Group'
        resourceGroupName: 'AzureLogicApp'
        location: 'UK South'
        templateLocation: 'Linked artifact'
        csmFile: '**/LogicApp.json'
        csmParametersFile: '**/LogicApp.parameters.json'
        deploymentMode: 'Incremental'
- stage: 'HardCodedDeployJobStage'
  displayName: 'HardCodedDeployJobStage'
  jobs:
  - deployment: 'HardCodedDeployJob'
    displayName: HardCodedDeployJob
    pool:
      vmImage: ubuntu-latest
    workspace:
      clean: all
    environment: development
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureResourceManagerTemplateDeployment@3
            inputs:
              deploymentScope: 'Resource Group'
              ConnectedServiceName: 'Subscription (e6d1dg8c-bcd6-4713-b2f1-c9a0375d687d)'
              subscriptionName: 'e6d1dg8c-bcd6-4713-b2f1-c9a0375d687d'
              action: 'Create Or Update Resource Group'
              resourceGroupName: 'AzureLogicApp'
              location: 'UK South'
              templateLocation: 'Linked artifact'
              csmFile: '**/LogicApp.json'
              csmParametersFile: '**/LogicApp.parameters.json'
              deploymentMode: 'Incremental'
The problem here was that I had to publish the templates as a pipeline artifact and share it between the stages; a deployment job does not check out the repository by default, so the source files are not on the agent.
So I copied the ARM template JSON files to a folder using the CopyFiles task, then used the PublishPipelineArtifact task to publish the contents as a pipeline artifact. The following stage can then reference it using the DownloadPipelineArtifact task.
Now my YAML looks something like:
stages:
- stage: 'HardcodedJobStage'
  displayName: 'HardcodedJobStage'
  jobs:
  - job: 'HardcodedJob'
    displayName: HardcodedJob
    pool:
      vmImage: ubuntu-latest
    workspace:
      clean: all
    steps:
    - task: AzureResourceManagerTemplateDeployment@3
      inputs:
        deploymentScope: 'Resource Group'
        ConnectedServiceName: 'Subscription (e6d1dg8c-bcd6-4713-b2f1-c9a0375d687d)'
        subscriptionName: 'e6d1dg8c-bcd6-4713-b2f1-c9a0375d687d'
        action: 'Create Or Update Resource Group'
        resourceGroupName: 'AzureLogicApp'
        location: 'UK South'
        templateLocation: 'Linked artifact'
        csmFile: '**/LogicApp.json'
        csmParametersFile: '**/LogicApp.parameters.json'
        deploymentMode: 'Incremental'
    - task: CopyFiles@2
      inputs:
        Contents: $(Build.SourcesDirectory)/AzureLogicApp/**/*.json
        targetFolder: $(Build.ArtifactStagingDirectory)
    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: $(Build.ArtifactStagingDirectory)
        artifactName: armtemplate
- stage: 'HardCodedDeployJobStage'
  displayName: 'HardCodedDeployJobStage'
  jobs:
  - deployment: 'HardCodedDeployJob'
    displayName: HardCodedDeployJob
    pool:
      vmImage: ubuntu-latest
    workspace:
      clean: all
    environment: development
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadPipelineArtifact@2
            inputs:
              artifact: armtemplate
          - task: AzureResourceManagerTemplateDeployment@3
            inputs:
              deploymentScope: 'Resource Group'
              ConnectedServiceName: 'Subscription (e6d1dg8c-bcd6-4713-b2f1-c9a0375d687d)'
              subscriptionName: 'e6d1dg8c-bcd6-4713-b2f1-c9a0375d687d'
              action: 'Create Or Update Resource Group'
              resourceGroupName: 'AzureLogicApp'
              location: 'UK South'
              templateLocation: 'Linked artifact'
              csmFile: $(Pipeline.Workspace)/armtemplate/**/LogicApp.json
              csmParametersFile: $(Pipeline.Workspace)/armtemplate/**/LogicApp.parameters.json
              deploymentMode: 'Incremental'
Assuming your YAML pipeline is defined in the same git repository as the LogicApp JSON files, you could give csmFile a relative path, with the 'root' being the git repo root folder. For example, if your LogicApp files are in /app/logicapp/LogicApp.json and your YAML pipeline is in the same git repo, maybe at /pipelines/pipeline.yml, you could set csmFile's value to app/logicapp/LogicApp.json (and likewise for csmParametersFile).
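With the layout from that example (and assuming the parameters file sits next to the template), the task inputs would simply be:

```yaml
csmFile: app/logicapp/LogicApp.json
csmParametersFile: app/logicapp/LogicApp.parameters.json
```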

InvalidDeploymentParameterKey when using overrideParameters on an ARM deployment from Azure Pipelines

Here is an extract from a YAML pipeline in Azure DevOps:
- task: AzureCLI@2
  name: GetAppInsightsConnString
  displayName: 'Get AppInsights ConnectionString'
  inputs:
    azureSubscription: ${{ parameters.TelemetryAzureSubscription }}
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az extension add -n application-insights
      az feature register --name AIWorkspacePreview --namespace microsoft.insights
      $resourceInfo = az monitor app-insights component show --app ${{ parameters.AppInsightsResourceName }} --resource-group ${{ parameters.AppInsightsResourceGroupName }}
      $instrumentationKey = ($resourceInfo | ConvertFrom-Json).InstrumentationKey
      echo "##vso[task.setvariable variable=ApplicationInsightsInstrumentationKey]$instrumentationKey"
- task: FileTransform@2
  displayName: "Replace Parameters From Variables"
  inputs:
    folderPath: '$(Pipeline.Workspace)'
    xmlTransformationRules: ''
    jsonTargetFiles: '**/${{ parameters.ArmTemplateParameters }}'
- powershell: 'Get-Content $(Pipeline.Workspace)/${{ parameters.ArtifactName }}-provisioning/${{ parameters.ArmTemplateParameters }}'
  displayName: 'Preview Arm Template Parameters File'
- task: PowerShell@2
  displayName: "TEMP: Test new variable values"
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "ApplicationInsightsInstrumentationKey: $(ApplicationInsightsInstrumentationKey)"
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    ConnectedServiceName: ${{ parameters.AzureSubscription }}
    action: 'Create Or Update Resource Group'
    resourceGroupName: ${{ parameters.ResourceGroupName }}
    location: $(locationLong)
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/${{ parameters.ArtifactName }}-provisioning/${{ parameters.ArmTemplate }}'
    csmParametersFile: '$(Pipeline.Workspace)/${{ parameters.ArtifactName }}-provisioning/${{ parameters.ArmTemplateParameters }}'
    overrideParameters: '–applicationInsightsInstrumentationKey "$(ApplicationInsightsInstrumentationKey)"'
    deploymentMode: 'Incremental'
This connects to an App Insights instance, gets the instrumentation key, then does a variable replacement on an ARM parameters file before previewing it and deploying it.
The instrumentation key is written to an ApplicationInsightsInstrumentationKey pipeline variable, and a later task previews this in the pipeline logs, so I can confirm the variable is being set as expected.
On the final task I'm using the overrideParameters option to feed this key into the deployment as the value of the applicationInsightsInstrumentationKey parameter. This is where the pipeline fails, with the error:
##[error]One of the deployment parameters has an empty key. Please see https://aka.ms/resource-manager-parameter-files for details.
My web searching tells me this can occur when the value has spaces and isn't enclosed in double quotes, but neither of those is the case here. In fact, I can even replace that line with a hard-coded value and still get the same issue.
If I remove the overrideParameters line the deployment succeeds, but obviously the parameter I want isn't included.
Anyone know how to solve this?
As shown by the help dialog on the ARM template deployment ADO task, values only need to be wrapped in double quotes when they contain multiple words. Since applicationInsightsInstrumentationKey will not have multiple words, try changing the line as below; note that it must also start with a plain ASCII hyphen, as an en dash pasted in from a browser produces exactly this "empty key" error:
overrideParameters: '-applicationInsightsInstrumentationKey $(ApplicationInsightsInstrumentationKey)'
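As a sketch of the quoting rule (displayName here is a hypothetical second parameter, added only to show the multi-word form), single-token values can drop the quotes while values containing spaces keep them:

```yaml
overrideParameters: '-applicationInsightsInstrumentationKey $(ApplicationInsightsInstrumentationKey) -displayName "My App Insights"'
```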

Is there way to reference overrideParameters from a file in ADF CI/CD?

I need to make use of overrideParameters in an ADF CI/CD scenario like the one described in the official Azure documentation. I can't seem to find a way, even in the Azure Resource Group Deployment task. Is there a way to reference overrideParameters from a file, instead of being forced to create one long, ugly, space-separated string?
You can reference the values of each parameter from a variable template. That still requires you to list the override parameters, but the values are sourced dynamically and can therefore change across environments. Alternatively, you could put the whole deployment task into a template, but then you would still have to think about dynamic environment variables in the template. A simple example below.
variable-template.yml
variables:
  ...
  # subscription and resource group
  azureSubscription: 'my_subscription'
  subscriptionId: 'my_subscription_id'
  resourceGroupName: 'my_rg'
  # key vault
  keyVaultName: 'my_kv'
  keyVaultSecretsList: 'storageAccountConnectionString,storageAccountKey'
  # data factory
  dataFactoryRepoPath: 'my_adf_repo_path'
  dataFactoryName: 'my_adf'
  # storage accounts
  storageAccountName: 'my_sa'
  ...
azure-pipelines.yml
...
- task: AzureKeyVault@1
  inputs:
    azureSubscription: ${{ variables.azureSubscription }}
    KeyVaultName: ${{ variables.keyVaultName }}
    SecretsFilter: ${{ variables.keyVaultSecretsList }}
    RunAsPreJob: true
- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'ARM Template deployment'
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: ${{ variables.azureSubscription }}
    subscriptionId: ${{ variables.subscriptionId }}
    action: 'Create Or Update Resource Group'
    resourceGroupName: ${{ variables.resourceGroupName }}
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: '$(System.DefaultWorkingDirectory)/adf_branch/${{ variables.dataFactoryRepoPath }}/ARMTemplateForFactory.json'
    csmParametersFile: '$(System.DefaultWorkingDirectory)/adf_branch/${{ variables.dataFactoryRepoPath }}/ARMTemplateParametersForFactory.json'
    overrideParameters: |
      -factoryName ${{ variables.dataFactoryName }}
      -storageAccountSink_properties_typeProperties_url ${{ variables.storageAccountName }}
      -storageAccountSink_accountKey $(storageAccountKey)
      -storageAccountSink_connectionString $(storageAccountConnectionString)
    deploymentMode: 'Incremental'
...
