How can I pass a map variable to an Azure DevOps pipeline job?

I'm learning Azure DevOps pipelines; my first project is to create a simple VNet with a subnet using Terraform. I've figured out how to pass simple key-value variables, but the problem is how to pass, for example, a list of strings or, more importantly, a map variable to Terraform.
I'm using the map to create subnets with a for_each loop over its keys and values, roughly as sketched below.
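The loop itself looks roughly like this (main.tf isn't included here, so the exact resource wiring in this sketch is an assumption):
# for_each over the map variable: one subnet per key/value pair
resource "azurerm_subnet" "this" {
  for_each             = var.VirtualNetworkSubnets
  name                 = each.key
  address_prefixes     = [each.value]
  resource_group_name  = var.RG_Name
  virtual_network_name = var.VirtualNetworkName
}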
Here are the files I'm using; I'm getting a syntax error in pipeline.yaml for the VirtualNetworkAddressSpace and VirtualNetworkSubnets values.
Can you please help me with this one?
variables.tf
variable RG_Name {
  type = string
  #default = "TESTMS"
}
variable RG_Location {
  type = string
  #default = "West Europe"
}
variable VirtualNetworkName {
  type = string
  #default = "TESTSS"
}
variable VirtualNetworkAddressSpace {
  type = list(string)
  #default = ["10.0.0.0/16"]
}
variable VirtualNetworkSubnets {
  type = map
  #default = {
  #  "GatewaySubnet" = "10.0.255.0/27"
  #}
}
dev.tfvars
RG_Name = __rgNAME__
RG_Location = __rgLOCATION__
VirtualNetworkName = __VirtualNetworkName__
VirtualNetworkAddressSpace = __VirtualNetworkAddressSpace__
VirtualNetworkSubnets = __VirtualNetworkSubnets__
pipeline.yaml
resources:
  repositories:
  - repository: self
trigger:
- feature/learning
stages:
- stage: DEV
  jobs:
  - deployment: TERRAFORM
    displayName: 'Terraform deployment'
    pool:
      vmImage: 'ubuntu-latest'
    workspace:
      clean: all
    variables:
    - name: 'rgNAME'
      value: 'skwiera-rg'
    - name: 'rgLOCATION'
      value: 'West Europe'
    - name: 'VirtualNetworkName'
      value: 'SkwieraVNET'
    - name: 'VirtualNetworkAddressSpace'
      value: ['10.0.0.0/16']
    - name: 'VirtualNetworkSubnets'
      value: {'GatewaySubnet' : '10.0.255.0/27'}
    environment: 'DEV'
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: self
          - task: qetza.replacetokens.replacetokens-task.replacetokens@3
            displayName: 'Replace Terraform variables'
            inputs:
              targetFiles: '**/*.tfvars'
              tokenPrefix: '__'
              tokenSuffix: '__'
          - task: TerraformInstaller@0
            displayName: "Install Terraform"
            inputs:
              terraformVersion: '1.0.8'
          - task: TerraformTaskV2@2
            displayName: 'Terraform Init'
            inputs:
              provider: 'azurerm'
              command: 'init'
              backendServiceArm: 'skwieralearning'
              backendAzureRmResourceGroupName: 'skwiera-learning-rg'
              backendAzureRmStorageAccountName: 'skwieralearningtfstate'
              backendAzureRmContainerName: 'tfstate'
              backendAzureRmKey: 'dev.tfstate'
          - task: TerraformTaskV2@2
            displayName: 'Terraform Validate'
            inputs:
              provider: 'azurerm'
              command: 'validate'
          - task: TerraformTaskV2@2
            displayName: "Terraform Plan"
            inputs:
              provider: 'azurerm'
              command: 'plan'
              environmentServiceNameAzureRM: 'skwieralearning'
          - task: TerraformTaskV2@2
            displayName: 'Terraform Apply'
            inputs:
              provider: 'azurerm'
              command: 'apply'
              environmentServiceNameAzureRM: 'skwieralearning'

The Azure DevOps pipeline.yaml file expects a job variable's value to be a string, but if you use:
- name: 'VirtualNetworkSubnets'
  value: {'GatewaySubnet' : '10.0.255.0/27'}
then the YAML parser sees that as a nested mapping under the value key, since YAML supports both block (key: value) and flow ({key: value}) syntax for mappings.
You can avoid it being read as a mapping by wrapping it in quotes so that it's read as a string literal:
- name: 'VirtualNetworkSubnets'
  value: "{'GatewaySubnet' : '10.0.255.0/27'}"
Separately, you can drop the qetza.replacetokens.replacetokens-task.replacetokens@3 step and the tokenised values in dev.tfvars altogether by prefixing the pipeline variables with TF_VAR_. Terraform picks these up as environment variables; the part after TF_VAR_ must match the variable name declared in variables.tf (e.g. TF_VAR_RG_Name), and values for list and map variables are parsed as HCL, so they need double-quoted strings:
stages:
- stage: DEV
  jobs:
  - deployment: TERRAFORM
    displayName: 'Terraform deployment'
    pool:
      vmImage: 'ubuntu-latest'
    workspace:
      clean: all
    variables:
    - name: 'TF_VAR_RG_Name'
      value: 'skwiera-rg'
    - name: 'TF_VAR_RG_Location'
      value: 'West Europe'
    - name: 'TF_VAR_VirtualNetworkName'
      value: 'SkwieraVNET'
    - name: 'TF_VAR_VirtualNetworkAddressSpace'
      value: '["10.0.0.0/16"]'
    - name: 'TF_VAR_VirtualNetworkSubnets'
      value: '{ GatewaySubnet = "10.0.255.0/27" }'

Related

Terraform Init throwing an error when used with Azure DevOps

I am trying to run open-source Terraform using Azure DevOps.
I have the state file stored in Azure Blob Storage.
Below is my pipeline file:
variables:
- group: infra-variables
trigger:
branches:
include:
- master
paths:
include:
- Terraform-Test
exclude:
- README.md
stages:
- stage: Validate
displayName: Validate
jobs:
- job: validate
pool:
vmImage: ubuntu-latest
steps:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
displayName: Install Terraform
inputs:
terraformVersion: 'latest'
# Init
- task: TerraformCLI@0
displayName: Initialize Terraform
env:
ARM_SAS_TOKEN: $(ARM_ACCESS_KEY)
inputs:
command: 'init'
workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform-Test'
commandOptions: '-backend-config=storage_account_name=$(TF_STATE_BLOB_ACCOUNT_NAME) -backend-config=container_name=$(TF_STATE_BLOB_CONTAINER_NAME) -backend-config=key=$(ARM_ACCESS_KEY)'
backendType: 'selfConfigured'
# Validate
- task: TerraformCLI@0
displayName: Validate Config
inputs:
command: 'validate'
workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform-Test'
- stage: Plan
displayName: Plan
jobs:
- job: plan
pool:
vmImage: ubuntu-latest
steps:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
displayName: Install Terraform
inputs:
terraformVersion: 'latest'
# Init
- task: TerraformCLI@0
displayName: Initialize Terraform
env:
ARM_SAS_TOKEN: $(ARM_ACCESS_KEY)
inputs:
command: 'init'
workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform-Test'
commandOptions: '-backend-config=storage_account_name=$(TF_STATE_BLOB_ACCOUNT_NAME) -backend-config=container_name=$(TF_STATE_BLOB_CONTAINER_NAME) -backend-config=key=$(ARM_ACCESS_KEY)'
backendType: 'selfConfigured'
# Plan
- task: TerraformCLI@0
displayName: Plan Terraform Deployment
env:
ARM_SAS_TOKEN: $(ARM_ACCESS_KEY)
ARM_CLIENT_ID: $(AZURE_CLIENT_ID)
ARM_CLIENT_SECRET: $(AZURE_CLIENT_SECRET)
ARM_SUBSCRIPTION_ID: $(AZURE_SUBSCRIPTION_ID)
ARM_TENANT_ID: $(AZURE_TENANT_ID)
inputs:
command: 'plan'
workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform-Test'
# Approve
- stage: Approve
displayName: Approve
jobs:
- job: approve
displayName: Wait for approval
pool: server
steps:
- task: ManualValidation@0
timeoutInMinutes: 60
inputs:
notifyUsers: 'pallabcd@hotmail.com'
instructions: 'Review the plan in the next hour'
- stage: Apply
displayName: Apply
jobs:
- job: apply
pool:
vmImage: ubuntu-latest
steps:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
displayName: Install Terraform
inputs:
terraformVersion: 'latest'
# Init
- task: TerraformCLI@0
displayName: TF Init
env:
ARM_SAS_TOKEN: $(ARM_ACCESS_KEY)
inputs:
command: 'init'
workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform-Test'
commandOptions: '-backend-config=storage_account_name=$(TF_STATE_BLOB_ACCOUNT_NAME) -backend-config=container_name=$(TF_STATE_BLOB_CONTAINER_NAME) -backend-config=key=$(ARM_ACCESS_KEY)'
backendType: 'selfConfigured'
# Apply
- task: TerraformCLI@0
displayName: TF Apply
env:
ARM_SAS_TOKEN: $(ARM_ACCESS_KEY)
ARM_CLIENT_ID: $(AZURE_CLIENT_ID)
ARM_CLIENT_SECRET: $(AZURE_CLIENT_SECRET)
ARM_SUBSCRIPTION_ID: $(AZURE_SUBSCRIPTION_ID)
ARM_TENANT_ID: $(AZURE_TENANT_ID)
inputs:
command: 'apply'
workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform-Test'
commandOptions: '-auto-approve'
My main.tf file is given below
terraform {
required_version = "~> 1.0"
backend "azurerm" {
storage_account_name = var.storage_account_name
container_name = var.container_name
key = "terraform.tfstate"
access_key = "#{ARM_ACCESS_KEY}#"
features {}
}
required_providers {
azuread = "~> 1.0"
azurerm = "~> 2.0"
}
}
provider "azurerm" {
tenant_id = var.tenant_id
client_id = var.client_id
client_secret = var.client_secret
subscription_id = var.subscription_id
features {}
}
data "azurerm_resource_group" "az-rg-wu" {
name = "Great-Learning"
}
data "azurerm_client_config" "current" {}
When I put the actual storage access key in main.tf, the init is successful, but if I put the ADO variable in the form "#{ARM_ACCESS_KEY}#", the pipeline fails.
This variable is also in my tfvars file, and its value is set in a variable group in Azure DevOps.
So what am I doing wrong here?
The service connection may be causing problems. Terraform requires a proper service connection and an Azure DevOps Terraform extension as prerequisites, and the Terraform CLI must be installed on the pipeline agent.
Here is sample YAML to connect from Azure DevOps.
Step 1: sample YAML file code as below:
variables:
- name: TerraformBackend.ResourceGroup
value: rg-realworld-staging-001
- name: TerraformBackend.StorageAccount
value: strwstagingterraform01
- name: TerraformBackend.ContainerName
value: staging
- group: 'staging'
steps:
- task: AzureCLI@2
inputs:
azureSubscription: '[service connection]'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
az group create --location eastus --name $(TerraformBackend.ResourceGroup)
az storage account create --name $(TerraformBackend.StorageAccount) --resource-group $(TerraformBackend.ResourceGroup) --location eastus --sku Standard_LRS
az storage container create --name staging --account-name $(TerraformBackend.StorageAccount)
STORAGE_ACCOUNT_KEY=$(az storage account keys list -g $(TerraformBackend.ResourceGroup) -n $(TerraformBackend.StorageAccount) | jq ".[0].value" -r)
echo "setting storage account key variable"
echo "##vso[task.setvariable variable=ARM_ACCESS_KEY;issecret=true]$STORAGE_ACCOUNT_KEY"
- task: TerraformInstaller@0
inputs:
terraformVersion: '1.3.6'
- task: TerraformTaskV1@0
displayName: "Terraform Init"
inputs:
provider: 'azurerm'
command: 'init'
backendServiceArm: '[service connection]'
backendAzureRmResourceGroupName: $(TerraformBackend.ResourceGroup)
backendAzureRmStorageAccountName: $(TerraformBackend.StorageAccount)
backendAzureRmContainerName: '$(TerraformBackend.ContainerName)'
backendAzureRmKey: 'infrastructure/terraform.tfstate'
workingDirectory: '$(System.DefaultWorkingDirectory)/stag/'
- task: TerraformTaskV1@0
displayName: "Terraform Apply"
inputs:
provider: 'azurerm'
command: 'apply'
workingDirectory: '$(System.DefaultWorkingDirectory)/stag/'
environmentServiceNameAzureRM: '[service connection]'
commandOptions: |
-var "some_key=$(value)"
Upon init execution (screenshot omitted).
Updated main.tf file as below:
terraform {
required_version = "~> 1.0"
backend "azurerm" {
storage_account_name = "tstate6075"
container_name = "tstate"
key = "terraform.tfstate"
access_key = "********************************=="
// features {}
}
required_providers {
azuread = "~> 1.0"
azurerm = "~> 2.0"
}
}
provider "azurerm" {
tenant_id = "*******************"
//client_id = var.client_id
//client_secret = var.client_secret
subscription_id = "**********************"
features {}
}
data "azurerm_resource_group" "az-rg-wu" {
name = "rg-*****"
}
data "azurerm_client_config" "current" {}

Multiple GitHub repos in an Azure Pipeline

I am trying to deploy simple infrastructure in Azure, described with Terraform scripts, using an Azure DevOps pipeline. The Terraform files are kept in GitHub.
I use 3 different repositories:
Terraform1 - for the main Terraform module
TerPipline - for the pipeline YAML file
TerModules - for child Terraform modules, each in its own subfolder
However, when I run the pipeline, at the terraform plan stage I receive the error: Error: Module not installed.
If I list the downloaded folders, I see all the child modules are downloaded. I believe I am doing something wrong in terms of folder structure...
My pipeline:
resources:
repositories:
- repository: TerModules
name: GAW99/TerModules
type: github
ref: main
endpoint: GitHub
- repository: Terraform1
name: GAW99/Terraform1
type: github
ref: main
endpoint: GitHub
trigger: none
pr: none
pool:
vmImage: windows-latest
name: 'Azure Pipelines'
stages:
- stage: terraform_validate
displayName: 'Validating Terraform'
jobs:
- job: Validate
continueOnError: false
steps:
- checkout: self
persistCredentials: true
- script: 'dir $(System.DefaultWorkingDirectory)'
- checkout: Terraform1
path: 's\\Terraform1'
- script: 'dir $(System.DefaultWorkingDirectory)'
- script: 'dir $(System.DefaultWorkingDirectory)\Terraform1'
- checkout: TerModules
path: 's\\TerModules'
submodules: recursive
- script: 'dir $(System.DefaultWorkingDirectory)'
- script: 'dir $(System.DefaultWorkingDirectory)\TerModules'
- task: TerraformInstaller@0
displayName: 'Installing Terraform'
inputs:
terraformVersion: 'latest'
- task: TerraformCLI@0
displayName: 'Initialising Terraform'
env:
ARM_CLIENT_ID: $(ARM_CLIENT_ID)
ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
ARM_TENANT_ID: $(ARM_TENANT_ID)
GITHUB_TOKEN: $(GITHUB_TOKEN)
inputs:
command: 'init'
#workingDirectory: '$(System.DefaultWorkingDirectory)\Terraform1'
commandOptions: '-upgrade'
allowTelemetryCollection: true
- task: TerraformCLI@0
displayName: 'Validating Terraform config'
inputs:
command: 'validate'
allowTelemetryCollection: true
- stage: terraform_deploy
displayName: 'Deploing Terraform'
jobs:
- deployment: Deploy
continueOnError: false
environment: 'dev'
strategy:
runOnce:
deploy:
steps:
- checkout: self
persistCredentials: true
- script: 'dir $(System.DefaultWorkingDirectory)'
- checkout: Terraform1
path: 's\\Terraform1'
- script: 'dir $(System.DefaultWorkingDirectory)'
- script: 'dir $(System.DefaultWorkingDirectory)\Terraform1'
- checkout: TerModules
path: 's\\TerModules'
submodules: recursive
- script: 'dir $(System.DefaultWorkingDirectory)'
- script: 'dir $(System.DefaultWorkingDirectory)\TerModules'
- task: TerraformInstaller@0
displayName: 'Installing Terraform'
inputs:
terraformVersion: 'latest'
- task: TerraformCLI@0
displayName: 'Initialising Terraform'
env:
ARM_CLIENT_ID: $(ARM_CLIENT_ID)
ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
ARM_TENANT_ID: $(ARM_TENANT_ID)
GITHUB_TOKEN: $(GITHUB_TOKEN)
inputs:
command: 'init'
#workingDirectory: '$(System.DefaultWorkingDirectory)\Terraform1'
commandOptions: '-upgrade'
allowTelemetryCollection: true
- task: TerraformCLI@0
displayName: 'Terraform Plan'
env:
ARM_CLIENT_ID: $(ARM_CLIENT_ID)
ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
ARM_TENANT_ID: $(ARM_TENANT_ID)
GITHUB_TOKEN: $(GITHUB_TOKEN)
inputs:
command: 'plan'
environmentServiceName: 'AzRM(Auto)'
workingDirectory: '$(System.DefaultWorkingDirectory)\Terraform1'
providerAzureRmSubscriptionId: '05c55d9c-2fdd-49ca-9011-4dc4a28d50a5'
commandOptions: '-var-file="./development.tfvars"'
allowTelemetryCollection: true
- task: TerraformCLI@0
displayName: 'Terraform Deploy'
env:
ARM_CLIENT_ID: $(ARM_CLIENT_ID)
ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
ARM_TENANT_ID: $(ARM_TENANT_ID)
GITHUB_TOKEN: $(GITHUB_TOKEN)
inputs:
command: 'apply'
environmentServiceName: 'AzRM(Auto)'
workingDirectory: '$(System.DefaultWorkingDirectory)\Terraform1'
providerAzureRmSubscriptionId: '05c55d9c-2fdd-49ca-9011-4dc4a28d50a5'
commandOptions: '-auto-approve -var-file="./development.tfvars"'
allowTelemetryCollection: true
My main Terraform file:
terraform {
required_providers {
azrm = {
source = "hashicorp/azurerm"
version = ">=3.0"
}
azad = {
source = "hashicorp/azuread"
version = ">=2.0"
}
github = {
source = "integrations/github"
version = ">=4.0"
}
}
}
# Configure the Microsoft Azure Provider
provider "azrm" {
features {}
}
# Configure the Microsoft Azure Active Directory Provider
provider "azad" {
features {}
tenant_id = "4d18e547-6b51-4c00-bf8a-a94237a983fb"
}
provider "github" {
}
module "resourcegroup" {
#source = "./TerModules/RG"
source = "github.com/GAW99/TerModules.git/RG"
location = "North Europe"
env_prefix = var.prefix
}
module "Vnet" {
#source = "./TerModules/VNet"
source = "github.com/GAW99/TerModules.git/VNet"
location = "North Europe"
env_prefix = var.prefix
ParentRG = module.resourcegroup.RG_ID
}
output "ResourceGroupName" {
value = module.resourcegroup.RG_ID
}
output "IPs" {
value = module.Vnet.VnetIPs
}
Any help would be appreciated.
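One thing worth double-checking (an assumption, not a confirmed fix for the error above): Terraform addresses a subdirectory inside a Git or GitHub module source with a double slash, and terraform init has to be re-run from the root module whenever a module source changes. A sketch of the resourcegroup module written with that syntax:
# The double slash (//) marks the RG subdirectory inside the TerModules repo
module "resourcegroup" {
  source     = "github.com/GAW99/TerModules//RG"
  location   = "North Europe"
  env_prefix = var.prefix
}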

Can you pass Variable Group values to a sub-template in YAML?

I'm trying to create a multi-stage pipeline that has a Variable Group defined for each stage of the pipeline. The goal is to pass values from the group as parameters to a sub-template. It seems the value from the group defined at the stage is not getting passed into the sub-template; it overrides the "DEFAULTVALUE" with an empty string.
pipeline.yml
trigger:
- none
pool:
name: 'Azure Pipelines'
vmImage: windows-latest
stages:
- stage: DEV
variables:
- group: my-group-dev
jobs:
- template: sub-template.yml
parameters:
env: 'dev'
subscriptionName: '$(SubscriptionName)' # This reference from the variable group doesn't get passed in
subscriptionId: '$(SubscriptionId)'
- stage: TEST
variables:
- group: my-group-test
jobs:
- template: sub-template.yml
parameters:
env: 'test'
subscriptionName: '$(SubscriptionName)' # This reference from the variable group doesn't get passed in
subscriptionId: '$(SubscriptionId)'
sub-template.yml
parameters:
env: 'DEFAULTVALUE'
subscriptionName: 'DEFAULTVALUE'
subscriptionId: 'DEFAULTVALUE'
jobs:
- deployment: ResourceDeployment
displayName: Deploy Resources ${{ parameters.env }}
environment: ${{ parameters.env }}
strategy:
runOnce:
deploy:
steps:
- task: AzureFileCopy@4
displayName: 'Upload ARM Templates'
inputs:
sourcePath: '$(Pipeline.Workspace)/templates'
azureSubscription: '${{ parameters.subscriptionName }}'
destination: 'azureBlob'
storage: 'my-storage-account'
containerName: 'arm'
name: AzureFileCopy
- task: AzureResourceManagerTemplateDeployment@3
inputs:
deploymentScope: 'Resource Group'
azureResourceManagerConnection: '${{ parameters.subscriptionName }}'
subscriptionId: '${{ parameters.subscriptionId }}'
action: 'Create Or Update Resource Group'
resourceGroupName: 'my-resource-group'
location: 'eastus2'
templateLocation: 'URL of the file'
csmFileLink: '$(AzureFileCopy.StorageContainerUri)${{ parameters.env }}/templates/main.json$(AzureFileCopy.StorageContainerSasToken)'
I have also tried adding the variable group within the sub-template but that also doesn't parse correctly...
Does anyone know if this is possible?
It is a known issue that a service connection endpoint cannot be referenced from a variable group defined at stage level.
You can work around this issue in the following ways:
1. Define the variable groups at the global level instead of the stage level. See below:
trigger:
- none
pool:
name: 'Azure Pipelines'
vmImage: windows-latest
# define the variable group under global level.
variables:
- group: my-group-dev
- group: my-group-test
stages:
- stage: DEV
jobs:
- template: sub-template.yml
parameters:
env: 'dev'
subscriptionName: '$(SubscriptionName)' # This reference from the variable group doesn't get passed in
subscriptionId: '$(SubscriptionId)'
- stage: TEST
jobs:
- template: sub-template.yml
parameters:
env: 'test'
subscriptionName: '$(SubscriptionName)' # This reference from the variable group doesn't get passed in
subscriptionId: '$(SubscriptionId)'
2. Link the variable groups on the UI page.
On the YAML pipeline edit page, click the three dots --> Triggers --> Variables tab --> Link variable group.
Please see the threads below for more information.
Using a variable for the service connection
Azure subscription endpoint ID cannot be provided through a variable in build definition YAML file
You need to pass the variable group name to the template and, at the template level, apply it in the variables scope like this:
parameters:
env: 'DEFAULTVALUE'
subscriptionName: 'DEFAULTVALUE'
subscriptionId: 'DEFAULTVALUE'
variableGroupName: 'DEFAULTVALUE'
jobs:
- deployment: ResourceDeployment
displayName: Deploy Resources ${{ parameters.env }}
environment: ${{ parameters.env }}
variables:
- group: ${{ parameters.variableGroupName }}
strategy:
runOnce:
deploy:
steps:
- task: AzureFileCopy@4
displayName: 'Upload ARM Templates'
inputs:
sourcePath: '$(Pipeline.Workspace)/templates'
azureSubscription: '${{ parameters.subscriptionName }}'
destination: 'azureBlob'
storage: 'my-storage-account'
containerName: 'arm'
name: AzureFileCopy
- task: AzureResourceManagerTemplateDeployment@3
inputs:
deploymentScope: 'Resource Group'
azureResourceManagerConnection: '${{ parameters.subscriptionName }}'
subscriptionId: '${{ parameters.subscriptionId }}'
action: 'Create Or Update Resource Group'
resourceGroupName: 'my-resource-group'
location: 'eastus2'
templateLocation: 'URL of the file'
csmFileLink: '$(AzureFileCopy.StorageContainerUri)${{ parameters.env }}/templates/main.json$(AzureFileCopy.StorageContainerSasToken)'
This is similar to this topic:
For security reasons, we only allow you to pass information into templated code via explicit parameters.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops
This means the author of the pipeline using your template needs to commit changes where they explicitly pass the needed info into your template code.
There are some exceptions to this, where the variable is statically defined in the same file or at pipeline compile time, but generally speaking it's probably better to use parameters for everything that does not involve system-defined read-only dynamic variables and custom-defined dynamic output variables.
I have two variable groups:
QA
PROD
Both have an isProd variable, and my template is:
parameters:
variableGroupName: 'QA'
jobs:
- job: ResourceDeployment
variables:
- group: ${{ parameters.variableGroupName }}
steps:
- script: echo '$(isProd)'
and main file is
trigger: none
pr: none
stages:
- stage: QA
jobs:
- template: sub-template.yml
parameters:
variableGroupName: 'QA'
- stage: PROD
jobs:
- template: sub-template.yml
parameters:
variableGroupName: 'PROD'
and I got the isProd value from the corresponding group echoed in each stage (result screenshots omitted).

Conditional Stage Execution in Azure DevOps Pipelines

I want a stage in an Azure DevOps pipeline to be executed depending on the content of a variable set in a previous stage.
Here is my pipeline:
stages:
- stage: plan_dev
jobs:
- job: terraform_plan_dev
steps:
- bash: echo '##vso[task.setvariable variable=terraform_plan_exitcode;isOutput=true]2'
name: terraform_plan
- stage: apply_dev
dependsOn: plan_dev
condition: eq(stageDependencies.plan_dev.terraform_plan_dev.outputs['terraform_plan.terraform_plan_exitcode'], '2')
jobs:
- deployment: "apply_dev"
...
The idea is to skip the apply_dev stage if the plan_dev stage shows no changes. The background is that we have manual approval for the deployment in the plan_dev stage, which we want to skip if there are no changes to approve.
Unfortunately this doesn't seem to work. No matter whether the variable terraform_plan_exitcode is set with the expected value (2) or not, the apply_dev stage is skipped.
For the syntax, I followed the documentation here that says:
stageDependencies.StageName.JobName.outputs['StepName.VariableName']
I have seen this same issue. You need to use the dependencies variable instead of the stageDependencies:
stages:
- stage: plan_dev
jobs:
- job: terraform_plan_dev
steps:
- bash: echo '##vso[task.setvariable variable=terraform_plan_exitcode;isOutput=true]2'
name: terraform_plan
- stage: apply_dev
dependsOn: plan_dev
condition: eq(dependencies.plan_dev.outputs['terraform_plan_dev.terraform_plan.terraform_plan_exitcode'], '2')
jobs:
- deployment: "apply_dev"
The following is a more complete example of something I have working with Terraform Plan + conditional Apply:
stages:
- stage: Build_zip_plan
displayName: Build portal, zip files and terraform plan
jobs:
- job: Build_portal_zip_files_terraform_plan
pool:
vmImage: 'ubuntu-latest'
steps:
- task: Cache@2
displayName: 'Register TF cache'
inputs:
key: terraform | $(Agent.OS) | $(Build.BuildNumber) | $(Build.BuildId) | $(Build.SourceVersion) | $(prefix)
path: ${{ parameters.tfExecutionDir }}
- task: TerraformInstaller@0
displayName: 'Install Terraform'
inputs:
terraformVersion: ${{ parameters.tfVersion }}
- task: TerraformTaskV1@0
displayName: 'Terraform Init'
inputs:
provider: 'azurerm'
command: 'init'
workingDirectory: ${{ parameters.tfExecutionDir }}
backendServiceArm: ${{ parameters.tfStateServiceConnection }}
backendAzureRmResourceGroupName: ${{ parameters.tfStateResourceGroup }}
backendAzureRmStorageAccountName: ${{ parameters.tfStateStorageAccount }}
backendAzureRmContainerName: ${{ parameters.tfStateStorageContainer }}
backendAzureRmKey: '$(prefix)-$(environment).tfstate'
- task: TerraformTaskV1@0
displayName: 'Terraform Plan'
inputs:
provider: 'azurerm'
command: 'plan'
commandOptions: '-input=false -out=deployment.tfplan -var="environment=$(environment)" -var="prefix=$(prefix)" -var="tenant=$(tenant)" -var="servicenow={username=\"$(servicenowusername)\",instance=\"$(servicenowinstance)\",password=\"$(servicenowpassword)\",assignmentgroup=\"$(servicenowassignmentgroup)\",company=\"$(servicenowcompany)\"}" -var="clientid=$(clientid)" -var="username=$(username)" -var="password=$(password)" -var="clientsecret=$(clientsecret)" -var="mcasapitoken=$(mcasapitoken)" -var="portaltenantid=$(portaltenantid)" -var="portalclientid=$(portalclientid)" -var="customerdisplayname=$(customerdisplayname)" -var="reportonlymode=$(reportonlymode)"'
workingDirectory: ${{ parameters.tfExecutionDir }}
environmentServiceNameAzureRM: ${{ parameters.tfServiceConnection }}
- task: PowerShell@2
displayName: 'Check Terraform plan'
name: "Check_Terraform_Plan"
inputs:
filePath: '$(Build.SourcesDirectory)/Pipelines/Invoke-CheckTerraformPlan.ps1'
arguments: '-TfPlan ''${{ parameters.tfExecutionDir }}/deployment.tfplan'''
pwsh: true
- stage:
dependsOn: Build_zip_plan
displayName: Terraform apply
condition: eq(dependencies.Build_zip_plan.outputs['Build_portal_zip_files_terraform_plan.Check_Terraform_Plan.TFChangesPending'], 'yes')
jobs:
- deployment: DeployHub
displayName: Apply
pool:
vmImage: 'ubuntu-latest'
environment: '$(prefix)'
strategy:
runOnce:
deploy:
steps:
- checkout: self
- task: Cache@2
displayName: 'Get Cache for TF Artifact'
inputs:
key: terraform | $(Agent.OS) | $(Build.BuildNumber) | $(Build.BuildId) | $(Build.SourceVersion) | $(prefix)
path: ${{ parameters.tfExecutionDir }}
- task: TerraformInstaller@0
displayName: 'Install Terraform'
inputs:
terraformVersion: ${{ parameters.tfVersion }}
- task: TerraformTaskV1@0
displayName: 'Terraform Apply'
inputs:
provider: 'azurerm'
command: 'apply'
commandOptions: 'deployment.tfplan'
workingDirectory: ${{ parameters.tfExecutionDir }}
environmentServiceNameAzureRM: ${{ parameters.tfServiceConnection }}
@Marius is correct. So this works:
stages:
- stage: plan_dev
jobs:
- job: terraform_plan_dev
steps:
- bash: echo '##vso[task.setvariable variable=terraform_plan_exitcode;isOutput=true]2'
name: terraform_plan
- stage: apply_dev
dependsOn: plan_dev
variables:
varFromA: $[ stageDependencies.plan_dev.terraform_plan_dev.outputs['terraform_plan.terraform_plan_exitcode'] ]
condition: eq(dependencies.plan_dev.outputs['terraform_plan_dev.terraform_plan.terraform_plan_exitcode'], 2)
jobs:
- job: apply_dev
steps:
- bash: echo 'apply $(varFromA)'
name: terraform_apply
When you refer to stage-to-stage dependencies, the syntax is:
"dependencies": {
"<STAGE_NAME>" : {
"result": "Succeeded|SucceededWithIssues|Skipped|Failed|Canceled",
"outputs": {
"jobName.stepName.variableName": "value"
}
},
"...": {
// another stage
}
}
And when you refer job-to-job across stages, the syntax is different:
"stageDependencies": {
"<STAGE_NAME>" : {
"<JOB_NAME>": {
"result": "Succeeded|SucceededWithIssues|Skipped|Failed|Canceled",
"outputs": {
"stepName.variableName": "value"
}
},
"...": {
// another job
}
},
"...": {
// another stage
}
}
Funnily enough, for job-to-job references within one stage we use the dependencies syntax again:
"dependencies": {
"<JOB_NAME>": {
"result": "Succeeded|SucceededWithIssues|Skipped|Failed|Canceled",
"outputs": {
"stepName.variableName": "value1"
}
},
"...": {
// another job
}
}
This is a bit confusing; think of it this way:
when you are at some level (stage or job) and refer to the same level, job to job or stage to stage, you use the dependencies syntax;
when you refer from a deeper level, such as from a job up to a stage, you use stageDependencies.
Funnily enough, in the example above I used this at the stage level:
variables:
varFromA: $[ stageDependencies.plan_dev.terraform_plan_dev.outputs['terraform_plan.terraform_plan_exitcode'] ]
but this is evaluated at runtime, from within the job, so it resolves correctly.
I hope this adds value to the previous answer.
TerraformTaskV2 now has a changesPresent output variable, which can be used to skip the apply stage.
Add a name: to the plan task:
stages:
- stage: terraform_plan_STAGE
jobs:
- job: plan_JOB
...
steps:
...
- task: TerraformTaskV2@2
name: 'plan_TASK' # <===========
displayName: 'plan'
inputs:
provider: 'azurerm'
command: 'plan'
...
Add a condition: to the apply stage and check whether changesPresent is true:
- stage: terraform_apply
dependsOn: [terraform_plan_STAGE]
condition: eq(dependencies.terraform_plan_STAGE.outputs['plan_JOB.plan_TASK.changesPresent'], 'true')
reference:
https://github.com/microsoft/azure-pipelines-terraform/tree/main/Tasks/TerraformTask/TerraformTaskV2#output-variables
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#use-outputs-in-a-different-stage

Unexpected Behavior With Azure Pipelines Variables Using Variable Groups and Templates

I have an Azure DevOps YAML pipeline that executes a Terraform deployment using the Terraform by MS DevLabs extension and an Azure Resource Manager service connection.
The last working state used a pipeline template YAML file; however, I had to configure a parameter within the template and pass the variable in using template expression syntax.
...
...
stages:
- stage: Plan
displayName: Terrafom Plan
jobs:
- job: DEV PLAN
displayName: Plan (DEV)
pool:
vmImage: "ubuntu-latest"
variables:
az_service_connection: "MyServiceConnection"
tf_environment: "DEV"
tf_state_rg: "DEV"
tz_state_location: "canadacentral"
tf_state_stgacct_name: "mystorageaccuontname1231231"
tf_state_container_name: "tfstate"
steps:
- template: templates/terraform-plan.yml
parameters:
az_service_connection: ${{ variables.az_service_connection }}
...
...
steps:
- task: terraformInstaller@0
displayName: "Install Terraform $(tf_version)"
inputs:
terraformVersion: $(tf_version)
- task: TerraformTaskV1@0
displayName: "Run > terraform init"
inputs:
command: "init"
commandOptions: "-input=false"
backendServiceArm: ${{ parameters.az_service_connection }}
...
...
I believe the reason this works is that the template expression syntax ${{ variables.varname }} evaluates at compile time rather than runtime. If I didn't do it this way, I'd either get the literal $(az_service_connection) passed into the backendServiceArm input or an empty value.
With the introduction of variable groups, I'm now facing similar behavior. I expect the variable group is evaluated after the template expression variable, which causes ${{ variables.az_service_connection }} to have an empty value. I am unsure how to get this working.
How can I use variable groups with a pipeline template that uses a service connection?
I used the $() syntax to pass the ARM connection to the template:
Template file:
parameters:
- name: 'instances'
type: object
default: {}
- name: 'server'
type: string
default: ''
- name: 'armConnection'
type: string
default: ''
steps:
- task: TerraformTaskV1@0
inputs:
provider: 'azurerm'
command: 'init'
backendServiceArm: '${{ parameters.armConnection }}'
backendAzureRmResourceGroupName: 'TheCodeManual'
backendAzureRmStorageAccountName: 'thecodemanual'
backendAzureRmContainerName: 'infra'
backendAzureRmKey: 'some-terrform'
- ${{ each instance in parameters.instances }}:
- script: echo ${{ parameters.server }}:${{ instance }}
Main file:
trigger:
branches:
include:
- master
paths:
include:
- stackoverflow/09-array-parameter-for-template/*
# no PR triggers
pr: none
pool:
vmImage: 'ubuntu-latest'
variables:
- group: my-variable-group
- name: my-passed-variable
value: $[variables.myhello] # uses runtime expression
steps:
- template: template.yaml
parameters:
instances:
- test1
- test2
server: $(myhello)
armConnection: $(armConnection)
Note: the group my-variable-group contains the armConnection variable.
