Here is an extract from a YAML pipeline in Azure DevOps:
- task: AzureCLI@2
  name: GetAppInsightsConnString
  displayName: 'Get AppInsights ConnectionString'
  inputs:
    azureSubscription: ${{ parameters.TelemetryAzureSubscription }}
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az extension add -n application-insights
      az feature register --name AIWorkspacePreview --namespace microsoft.insights
      $resourceInfo = az monitor app-insights component show --app ${{ parameters.AppInsightsResourceName }} --resource-group ${{ parameters.AppInsightsResourceGroupName }}
      $instrumentationKey = ($resourceInfo | ConvertFrom-Json).InstrumentationKey
      echo "##vso[task.setvariable variable=ApplicationInsightsInstrumentationKey]$instrumentationKey"
- task: FileTransform@2
  displayName: "Replace Parameters From Variables"
  inputs:
    folderPath: '$(Pipeline.Workspace)'
    xmlTransformationRules: ''
    jsonTargetFiles: '**/${{ parameters.ArmTemplateParameters }}'
- powershell: 'Get-Content $(Pipeline.Workspace)/${{ parameters.ArtifactName }}-provisioning/${{ parameters.ArmTemplateParameters }}'
  displayName: 'Preview Arm Template Parameters File'
- task: PowerShell@2
  displayName: "TEMP: Test new variable values"
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "ApplicationInsightsInstrumentationKey: $(ApplicationInsightsInstrumentationKey)"
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    ConnectedServiceName: ${{ parameters.AzureSubscription }}
    action: 'Create Or Update Resource Group'
    resourceGroupName: ${{ parameters.ResourceGroupName }}
    location: $(locationLong)
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/${{ parameters.ArtifactName }}-provisioning/${{ parameters.ArmTemplate }}'
    csmParametersFile: '$(Pipeline.Workspace)/${{ parameters.ArtifactName }}-provisioning/${{ parameters.ArmTemplateParameters }}'
    overrideParameters: '–applicationInsightsInstrumentationKey "$(ApplicationInsightsInstrumentationKey)"'
    deploymentMode: 'Incremental'
This is connecting to an App Insights instance, getting the instrumentation key, then doing a variable replacement on an ARM parameters file before previewing it and deploying it.
The instrumentation key is written to an ApplicationInsightsInstrumentationKey pipeline variable, and a later task prints it to the pipeline logs so I can confirm the variable is being set as expected.
In the final task I'm using the overrideParameters option to feed this key into the deployment as the value of the applicationInsightsInstrumentationKey parameter. This is where the pipeline fails, with the error:
##[error]One of the deployment parameters has an empty key. Please see https://aka.ms/resource-manager-parameter-files for details.
My web searching tells me this can occur when the value has spaces and isn't enclosed in double quotes, but neither of those is the case here. In fact, I can even replace that line with a hard-coded value and still get the same issue.
If I remove that overrideParameters line the deployment succeeds, but obviously the parameter I want isn't included.
Anyone know how to solve this?
As shown by the help dialog on the ARM template deployment ADO task:
Since applicationInsightsInstrumentationKey will not have multiple words, try changing the line as below:
overrideParameters: '–applicationInsightsInstrumentationKey $(ApplicationInsightsInstrumentationKey)'
I've been working on integrating PSRule into an Azure DevOps pipeline. The pipeline succeeds, but the following error is displayed for every file in my repository at the PSRule stage: 'Target object <FileName> has not been processed because no matching rules were found.'
Below, I've included the code for my pipeline. Can anyone help me understand where I've gone wrong with implementing PSRule? Custom rules can be defined, but from my understanding, even without defining any, PSRule should run the 270 or so default rules associated with the module.
trigger:
  batch: true
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Lint
    jobs:
      - job: LintCode
        displayName: Lint code
        steps:
          - script: |
              az bicep build --file $(MainBicepFile)
            name: LintBicepCode
            displayName: Run Bicep linter

  - stage: PSRule
    jobs:
      - job: PSRuleRun
        displayName: PSRule Run
        steps:
          - task: ps-rule-assert@1
            displayName: Analyze Azure template files
            inputs:
              modules: 'PSRule.Rules.Azure'

  - stage: Validate
    jobs:
      - job: ValidateBicepCode
        displayName: Validate Bicep code
        steps:
          - task: AzureCLI@2
            name: RunPreflightValidation
            displayName: Run preflight validation
            inputs:
              azureSubscription: $(ServiceConnectionName)
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                az deployment group validate \
                  --resource-group $(ResourceGroupName) \
                  --template-file $(MainBicepFile) \

  - stage: Preview
    jobs:
      - job: PreviewAzureChanges
        displayName: Preview Azure changes
        steps:
          - task: AzureCLI@2
            name: RunWhatIf
            displayName: Run what-if
            inputs:
              azureSubscription: $(ServiceConnectionName)
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                az deployment group what-if \
                  --resource-group $(ResourceGroupName) \
                  --template-file $(MainBicepFile) \

  - stage: Deploy
    jobs:
      - deployment: DeployBicep
        displayName: Deploy Bicep
        environment: 'Test'
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                - task: AzureCLI@2
                  name: DeployBicepFile
                  displayName: Deploy Bicep file
                  inputs:
                    azureSubscription: $(ServiceConnectionName)
                    scriptType: 'bash'
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      set -e
                      deploymentOutput=$(az deployment group create \
                        --name $(Build.BuildNumber) \
                        --resource-group $(ResourceGroupName) \
                        --template-file $(MainBicepFile))
The warning is on by default, but it is not critical to the process once Bicep code has been expanded.
You can disable the warning by setting the execution.notProcessedWarning option to false. You can do this in a number of ways, but the easiest is to configure it in the ps-rule.yaml options file. See the docs for additional ways to set this.
Your pipeline configuration is fine; however, you need to enable expansion for Bicep or parameter files to be processed by the PSRule.Rules.Azure module.
Configuring expansion for Bicep code is done by setting the configuration.AZURE_BICEP_FILE_EXPANSION option to true. This is covered in Using Bicep source.
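Putting both options together, a minimal ps-rule.yaml at the repository root might look like this (a sketch based on the two options named above):

# ps-rule.yaml
configuration:
  # Expand Azure resources from Bicep source files so the rules can process them.
  AZURE_BICEP_FILE_EXPANSION: true

execution:
  # Optionally suppress the "has not been processed" warning for unprocessed objects.
  notProcessedWarning: false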
Hope that helps.
This may sound naive: I have to upload Docker images to all subscribed regions in a public cloud. I am planning to do it with a Terraform template, creating a null_resource with multiple local-exec provisioners to log in to the Docker repo, then docker tag and docker push.
We may subscribe to more cloud regions later, so the number of regions might change in the future.
I am not sure whether Terraform is the better choice or whether I should think about a DevOps pipeline. I have a basic understanding of Terraform but no idea how a DevOps pipeline works.
Any suggestions?
You can do both: I have Terraform to make all my resources, and then I use an Azure DevOps pipeline to make the magic happen.
Here is the pipeline code that I use to spin up Terraform:
parameters:
  - name: terraformWorkingDirectory
    type: string
    default: $(System.DefaultWorkingDirectory)/Terraform
  - name: serviceConnection
    type: string
    default: VALUE
  - name: azureSubscription
    type: string
    default: VALUE
  - name: appconnectionname
    type: string
    default: VALUE
  - name: backendresourcegroupname
    type: string
    default: Terraform
  - name: backendstorageaccountname
    type: string
    default: terraform
  - name: backendcontainername
    type: string
    default: terraformstatefile
  - name: RG
    type: string
    default: rg_example
  - name: azureLocation
    type: string
    default: UK South
  - name: terraformVersion
    type: string
    default: 1.0.4
  - name: artifactName
    type: string
    default: Website

jobs:
  - job: Run_Terraform
    displayName: Installing and Running Terraform
    steps:
      - checkout: Terraform
      - task: TerraformInstaller@0
        displayName: install
        inputs:
          terraformVersion: '${{ parameters.terraformVersion }}'
      - task: CmdLine@2
        inputs:
          script: |
            echo '$(System.DefaultWorkingDirectory)'
            dir
      - task: TerraformTaskV2@2
        displayName: init
        inputs:
          provider: azurerm
          command: init
          backendServiceArm: '${{ parameters.serviceConnection }}'
          backendAzureRmResourceGroupName: '${{ parameters.backendresourcegroupname }}'
          backendAzureRmStorageAccountName: '${{ parameters.backendstorageaccountname }}'
          backendAzureRmContainerName: '${{ parameters.backendcontainername }}'
          backendAzureRmKey: terraform.tfstate
          workingDirectory: '${{ parameters.terraformWorkingDirectory }}'
      - task: TerraformTaskV1@0
        displayName: plan
        inputs:
          provider: azurerm
          command: plan
          commandOptions: '-input=false'
          environmentServiceNameAzureRM: '${{ parameters.serviceConnection }}'
          workingDirectory: '${{ parameters.terraformWorkingDirectory }}'
      - task: TerraformTaskV1@0
        displayName: apply
        inputs:
          provider: azurerm
          command: apply
          commandOptions: '-input=false -auto-approve'
          environmentServiceNameAzureRM: '${{ parameters.serviceConnection }}'
          workingDirectory: '${{ parameters.terraformWorkingDirectory }}'

  - job: Put_artifacts_into_place
    displayName: Putting_artifacts_into_place
    dependsOn: Run_Terraform
    steps:
      - checkout: Website
      - checkout: AuthenticationServer
      - task: DownloadPipelineArtifact@2
        displayName: Download Build Artifacts
        inputs:
          artifact: '${{ parameters.artifactName }}'
          patterns: /**/*.zip
          path: '$(Pipeline.Workspace)'
      - task: AzureWebApp@1
        displayName: 'Azure Web App Deploy: VALUE'
        inputs:
          package: $(Pipeline.Workspace)/**/*.zip
          azureSubscription: '${{ parameters.azureSubscription }}'
          ConnectedServiceName: '${{ parameters.appconnectionname }}'
          appName: VALUE
          ResourceGroupName: '${{ parameters.RG }}'
      - task: DownloadPipelineArtifact@2
        displayName: Download Build Artifacts
        inputs:
          artifact: '${{ parameters.artifactName }}'
          patterns: /authsrv/**/*.zip
          path: $(Pipeline.Workspace)/authsrv/
      - task: AzureWebApp@1
        displayName: 'Azure Web App Deploy: VALUE'
        inputs:
          package: $(Pipeline.Workspace)/authsrv/**/*.zip
          azureSubscription: '${{ parameters.azureSubscription }}'
          ConnectedServiceName: '${{ parameters.appconnectionname }}'
          appName: VALUE
          ResourceGroupName: '${{ parameters.RG }}'
What you would have to do is make a repo in Git or Azure Repos; it's a bit easier in Azure as you won't have to make a service connection to Git, so there is less to go wrong...
Once you have a repo, you then have to set up a remote backend. I did this using these Azure PowerShell commands:
New-AzureRmResourceGroup -Name "myTerraformbackend" -Location "UK South"
New-AzureRmStorageAccount -ResourceGroupName "myTerraformbackend" -AccountName "myterraformstate" -Location "UK South" -SkuName Standard_LRS
New-AzureRmStorageContainer -ResourceGroupName "myTerraformbackend" -AccountName "myterraformstate" -ContainerName "terraformstatefile"
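Those resources then line up with a Terraform backend block along these lines (a sketch using the names from the commands above; the init task in the pipeline can also inject these settings instead):

terraform {
  backend "azurerm" {
    resource_group_name  = "myTerraformbackend"   # from New-AzureRmResourceGroup above
    storage_account_name = "myterraformstate"     # from New-AzureRmStorageAccount above
    container_name       = "terraformstatefile"   # from New-AzureRmStorageContainer above
    key                  = "terraform.tfstate"    # matches backendAzureRmKey in the init task
  }
}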
I also followed this great blog post by Julie Ng, who works for Microsoft and writes about best practices for remote state files: https://julie.io/writing/terraform-on-azure-pipelines-best-practices/. She also has a YouTube channel, which is a massive help.
In my Terraform code I also use a bunch of local variables. They are handy, but make sure you create something original that you can refer to; mine is the resource group id. It's the first thing that's set up, and then all my resources refer to that id.
Here look:
locals {
  # Ids for Resource Group, merged together with unique string
  resource_group_id_full_value = format("${azurerm_resource_group.terraform.id}")
  resource_group_id            = substr(("${azurerm_resource_group.terraform.id}"), 15, 36)
  resource_group_id_for_login  = substr(("${azurerm_resource_group.terraform.id}"), 15, 8)
  sql_admin_login              = format("${random_string.sql_name_random.id}-${local.resource_group_id_for_login}")
  sql_server_name              = format("sqlserver-${local.resource_group_id}")
  sql_server_database_name     = format("managerserver-${local.resource_group_id}")
}
You will understand this more as you go.
I keep clear of null_resource; I've seen a lot of people on here have problems with it, so I use it sparingly.
If you're going to use Terraform for Docker, consider HashiCorp's new tool, Waypoint: https://www.hashicorp.com/blog/announcing-waypoint
You also have Terraform workspaces to handle remote states now, but I haven't had time to look into them. The basic commands look like this (for reference; each workspace gets its own state file in the same backend):
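terraform workspace new staging      # create a workspace named "staging"
terraform workspace select staging   # plan/apply now read and write that workspace's state
terraform workspace list             # show all workspaces, with the current one starred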
I hope this information is helpful and a good starting reference.
parameters:
  - name: AzureSubscription
    default: 'abc'
  - name: BlobName
    type: string
    default: ""

stages:
  - stage: MyStage
    displayName: 'My Stage'
    variables:
      - name: sas
    jobs:
      - job: ABC
        displayName: ABC
        steps:
          - task: AzureCLI@2
            displayName: 'XYZ'
            inputs:
              azureSubscription: ${{ parameters.AzureSubscription }}
              scriptType: pscore
              arguments:
              scriptLocation: inlineScript
              inlineScript: |
                $sas=az storage account generate-sas --account-key "mykey" --account-name "abc" --expiry (Get-Date).AddHours(100).ToString("yyyy-MM-dTH:mZ") --https-only --permissions rw --resource-types sco --services b
                Write-Host "My Token: " $sas
          - task: PowerShell@2
            inputs:
              targetType: 'filepath'
              filePath: $(System.DefaultWorkingDirectory)/psscript.ps1
              arguments: >
                -Token "????"
                -BlobName "${{parameters.BlobName}}"
            displayName: 'some work'
In this Azure DevOps YAML I have created two tasks: AzureCLI@2 and PowerShell@2.
In AzureCLI@2 I get a value into the $sas variable. Write-Host confirms that, but $sas does not get passed as a parameter to the PowerShell@2 script file.
"${{parameters.BlobName}}" is working fine; in the PowerShell script I am able to read that value.
How do I pass the sas variable's value?
Tried:
-Token $sas # did not work
-Token "${{sas}}" # did not work
Different tasks in an Azure Pipeline don't share a common runspace that would allow them to preserve or pass on variables.
For this reason Azure Pipelines offers special logging commands that take string output from a task and update an Azure Pipelines variable that can be used in subsequent tasks: Set variables in scripts (Microsoft Docs).
In your case you would use a logging command like this to make your sas token available to the next task:
Write-Host "##vso[task.setvariable variable=sas]$sas"
In the arguments of your subsequent task (within the same job), use the $(...) variable syntax of Azure Pipelines:
-Token '$(sas)'
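Applied to the pipeline from the question, the two tasks would look roughly like this (a sketch; the generate-sas call is the same one from the question):

- task: AzureCLI@2
  inputs:
    azureSubscription: ${{ parameters.AzureSubscription }}
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |
      # Same generate-sas call as in the question
      $sas = az storage account generate-sas --account-name "abc" --account-key "mykey" --https-only --permissions rw --resource-types sco --services b --expiry ((Get-Date).AddHours(100).ToString("yyyy-MM-ddTHH:mmZ"))
      # Logging command: publishes $sas as the pipeline variable 'sas' for later tasks
      Write-Host "##vso[task.setvariable variable=sas]$sas"
- task: PowerShell@2
  inputs:
    targetType: 'filepath'
    filePath: $(System.DefaultWorkingDirectory)/psscript.ps1
    arguments: >
      -Token '$(sas)'
      -BlobName "${{ parameters.BlobName }}"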
So, I'm new to Azure DevOps (but not Azure or PowerShell). I've got two scripts as tasks in my pipeline; the first runs perfectly (note: no Az module commands). The second one fails (it has one Az call). The error I get in the pipeline is:
ParserError: /home/vsts/work/_temp/3f7b3ce1-6afb-46f3-b00d-1efe35fbac71.ps1:5
Line |
5 | } else {
| ~
| Unexpected token '}' in expression or statement.
Here is the thing... I don't have '} else {' in my script, or at least I removed it and got the same error.
So whatever is causing this is more fundamental than my script. I assumed that '/home/vsts/work/_temp/3f7b3ce1-6afb-46f3-b00d-1efe35fbac71.ps1' was my script copied to the remote system, but that doesn't seem to be the case.
Is there any way I can find out what that file is?
The task is PowerShell@2 with pwsh: true set.
Thanks!
YAML:
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureResourceManagerTemplateDeployment@3
    inputs:
      deploymentScope: 'Resource Group'
      azureResourceManagerConnection: 'Pay-As-You-Go(<guid>)'
      subscriptionId: '<guid>'
      action: 'Create Or Update Resource Group'
      resourceGroupName: 'project-Test'
      location: 'East US 2'
      templateLocation: 'Linked artifact'
      csmFile: 'deploy.template.json'
      csmParametersFile: 'deploy.parameters.json'
      deploymentMode: 'Incremental'
      deploymentOutputs: 'DeploymentOutput'
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: $(System.DefaultWorkingDirectory)/Get-ArmDeploymentOutput.ps1 -ArmOutputString '$(DeploymentOutput)' -MakeOutput -ErrorAction Stop
      pwsh: true
    displayName: Get-ArmDeploymentOutput
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: $(System.DefaultWorkingDirectory)/Set-AzWebAppIPRestriction.ps1 -Priority 100 -Action 'Allow' -WebAppId '$(WebAppId)' -PipId '$(gwpipId)'' -ErrorAction Stop
      pwsh: true
    displayName: Set-AzWebAppIPRestriction
I found there is an extra ' in -PipId '$(gwpipId)'' in the script of your second PowerShell task. That may have caused the error.
script: $(System.DefaultWorkingDirectory)/Set-AzWebAppIPRestriction.ps1 -Priority 100 -Action 'Allow' -WebAppId '$(WebAppId)' -PipId '$(gwpipId)'' -ErrorAction Stop
Since you are running your scripts from .ps1 files, you should change the targetType parameter of the PowerShell task to filePath and set the .ps1 file in the filePath parameter. See the example below:
steps:
  - task: PowerShell@2
    displayName: 'PowerShell Script'
    inputs:
      targetType: filePath
      filePath: '$(System.DefaultWorkingDirectory)/Set-AzWebAppIPRestriction.ps1'
      arguments: '-Priority 100 -Action "Allow" -WebAppId "$(WebAppId)" -PipId "$(gwpipId)" -ErrorAction Stop'
      pwsh: true
If you are using the Azure PowerShell module in your scripts, you can use the Azure PowerShell task instead of the PowerShell task.
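For example, a minimal sketch with the Azure PowerShell task (the service connection name is a placeholder; the script path and arguments reuse the example above):

- task: AzurePowerShell@5
  displayName: 'Set-AzWebAppIPRestriction'
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder: your ARM service connection
    ScriptType: 'FilePath'
    ScriptPath: '$(System.DefaultWorkingDirectory)/Set-AzWebAppIPRestriction.ps1'
    ScriptArguments: '-Priority 100 -Action "Allow" -WebAppId "$(WebAppId)" -PipId "$(gwpipId)"'
    azurePowerShellVersion: 'LatestVersion'
    pwsh: true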
I need to make use of overrideParameters in an ADF CI/CD scenario like the one described in the official Azure documentation. I can't seem to find a way, even in the Azure Resource Group Deployment task. Is there a way to reference overrideParameters from a file instead of being forced to create one long, ugly, space-separated string?
You can reference the values of each parameter from a variable template. That still requires you to list the override parameters, but the values are sourced dynamically and so can change across environments. Alternatively, you could put the whole deployment task into a template, but then you would still have to think about dynamic environment variables in the template. A simple example is below.
variable-template.yml
variables:
  ...
  # subscription and resource group
  azureSubscription: 'my_subscription'
  subscriptionId: 'my_subscription_id'
  resourceGroupName: 'my_rg'
  # key vault
  keyVaultName: 'my_kv'
  keyVaultSecretsList: 'storageAccountConnectionString,storageAccountKey'
  # data factory
  dataFactoryRepoPath: 'my_adf_repo_path'
  dataFactoryName: 'my_adf'
  # storage accounts
  storageAccountName: 'my_sa'
  ...
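Note that for the ${{ variables.* }} expressions in azure-pipelines.yml below to resolve, the pipeline has to reference the variable template; that wiring sits in the elided "..." sections and would typically look like this (a sketch):

variables:
  - template: variable-template.yml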
azure-pipelines.yml
...
- task: AzureKeyVault@1
  inputs:
    azureSubscription: ${{ variables.azureSubscription }}
    KeyVaultName: ${{ variables.keyVaultName }}
    SecretsFilter: ${{ variables.keyVaultSecretsList }}
    RunAsPreJob: true
- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'ARM Template deployment'
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: ${{ variables.azureSubscription }}
    subscriptionId: ${{ variables.subscriptionId }}
    action: 'Create Or Update Resource Group'
    resourceGroupName: ${{ variables.resourceGroupName }}
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: '$(System.DefaultWorkingDirectory)/adf_branch/${{ variables.dataFactoryRepoPath }}/ARMTemplateForFactory.json'
    csmParametersFile: '$(System.DefaultWorkingDirectory)/adf_branch/${{ variables.dataFactoryRepoPath }}/ARMTemplateParametersForFactory.json'
    overrideParameters: |
      -factoryName ${{ variables.dataFactoryName }}
      -storageAccountSink_properties_typeProperties_url ${{ variables.storageAccountName }}
      -storageAccountSink_accountKey $(storageAccountKey)
      -storageAccountSink_connectionString $(storageAccountConnectionString)
    deploymentMode: 'Incremental'
...