Azure Function App infra redeployment makes existing functions fail because of missing requirements and also deletes previous invocation data

I'm facing quite a big problem. I have a function app that I deploy with Azure Bicep in the following fashion:
param environmentType string
param location string
param storageAccountSku string
param vnetIntegrationSubnetId string
param kvName string

/*
This module contains the IaC for deploying the Premium function app
*/

/// Just a single minimum instance to start with and max scaling of 3 for dev, 5 for prd ///
var minimumElasticSize = 1
var maximumElasticSize = ((environmentType == 'prd') ? 5 : 3)
var name = 'nlp'
var functionAppName = 'function-app-${name}-${environmentType}'

/// Storage account for service ///
resource functionAppStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'st4functionapp${name}${environmentType}'
  location: location
  kind: 'StorageV2'
  sku: {
    name: storageAccountSku
  }
  properties: {
    allowBlobPublicAccess: false
    accessTier: 'Hot'
    supportsHttpsTrafficOnly: true
    minimumTlsVersion: 'TLS1_2'
  }
}

/// Premium app plan for the service ///
resource servicePlanfunctionApp 'Microsoft.Web/serverfarms@2021-03-01' = {
  name: 'plan-${name}-function-app-${environmentType}'
  location: location
  kind: 'linux'
  sku: {
    name: 'EP1'
    tier: 'ElasticPremium'
    family: 'EP'
  }
  properties: {
    reserved: true
    targetWorkerCount: minimumElasticSize
    maximumElasticWorkerCount: maximumElasticSize
    elasticScaleEnabled: true
    isSpot: false
    zoneRedundant: ((environmentType == 'prd') ? true : false)
  }
}

// Create log analytics workspace
resource logAnalyticsWorkspacefunctionApp 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: '${name}-functionapp-loganalytics-workspace-${environmentType}'
  location: location
  properties: {
    sku: {
      name: 'PerGB2018' // Standard
    }
  }
}

/// Log analytics workspace insights ///
resource applicationInsightsfunctionApp 'Microsoft.Insights/components@2020-02-02' = {
  name: 'application-insights-${name}-function-${environmentType}'
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    Flow_Type: 'Bluefield'
    publicNetworkAccessForIngestion: 'Enabled'
    publicNetworkAccessForQuery: 'Enabled'
    Request_Source: 'rest'
    RetentionInDays: 30
    WorkspaceResourceId: logAnalyticsWorkspacefunctionApp.id
  }
}

/// App service containing the workflow runtime ///
resource sitefunctionApp 'Microsoft.Web/sites@2021-03-01' = {
  name: functionAppName
  location: location
  kind: 'functionapp,linux'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    clientAffinityEnabled: false
    httpsOnly: true
    serverFarmId: servicePlanfunctionApp.id
    siteConfig: {
      linuxFxVersion: 'python|3.9'
      minTlsVersion: '1.2'
      pythonVersion: '3.9'
      use32BitWorkerProcess: true
      appSettings: [
        {
          name: 'FUNCTIONS_EXTENSION_VERSION'
          value: '~4'
        }
        {
          name: 'FUNCTIONS_WORKER_RUNTIME'
          value: 'python'
        }
        {
          name: 'AzureWebJobsStorage'
          value: 'DefaultEndpointsProtocol=https;AccountName=${functionAppStorage.name};AccountKey=${listKeys(functionAppStorage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
        }
        {
          name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
          value: 'DefaultEndpointsProtocol=https;AccountName=${functionAppStorage.name};AccountKey=${listKeys(functionAppStorage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
        }
        {
          name: 'WEBSITE_CONTENTSHARE'
          value: 'app-${toLower(name)}-functionservice-${toLower(environmentType)}a6e9'
        }
        {
          name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
          value: applicationInsightsfunctionApp.properties.InstrumentationKey
        }
        {
          name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
          value: '~2'
        }
        {
          name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
          value: applicationInsightsfunctionApp.properties.ConnectionString
        }
        {
          name: 'ENV'
          value: toUpper(environmentType)
        }
      ]
    }
  }

  /// VNET integration so flows can access storage and queue accounts ///
  resource vnetIntegration 'networkConfig@2022-03-01' = {
    name: 'virtualNetwork'
    properties: {
      subnetResourceId: vnetIntegrationSubnetId
      swiftSupported: true
    }
  }
}

/// Outputs for creating access policies ///
output functionAppName string = sitefunctionApp.name
output functionAppManagedIdentityId string = sitefunctionApp.identity.principalId
The outputs are used for granting permissions to blob/queue storage and some Key Vault access. This code is a single module called from a main.bicep module and deployed via an Azure DevOps pipeline.
I have a second repository in which I have some functions, which I also deploy via Azure Pipelines. This one contains three .yaml files for deploying: two templates (CI and CD) and one main pipeline, azure-pipelines.yml, pulling it all together:
functions-ci.yml:
parameters:
  - name: environment
    type: string

jobs:
  - job:
    displayName: 'Publish the function as .zip'
    steps:
      - task: UsePythonVersion@0
        inputs:
          versionSpec: '$(pythonVersion)'
        displayName: 'Use Python $(pythonVersion)'
      - task: CopyFiles@2
        displayName: 'Create project folder'
        inputs:
          SourceFolder: '$(System.DefaultWorkingDirectory)'
          Contents: |
            **
          TargetFolder: '$(Build.ArtifactStagingDirectory)'
      - task: Bash@3
        displayName: 'Install requirements for running function'
        inputs:
          targetType: 'inline'
          script: |
            python3 -m pip install --upgrade pip
            pip install setup
            pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt
          workingDirectory: '$(Build.ArtifactStagingDirectory)'
      - task: ArchiveFiles@2
        displayName: 'Create project zip'
        inputs:
          rootFolderOrFile: '$(Build.ArtifactStagingDirectory)'
          includeRootFolder: false
          archiveType: 'zip'
          archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
          replaceExistingArchive: true
      - task: PublishPipelineArtifact@1
        displayName: 'Publish project zip artifact'
        inputs:
          targetPath: '$(Build.ArtifactStagingDirectory)'
          artifactName: 'functions$(environment)'
          publishLocation: 'pipeline'
functions-cd.yml:
parameters:
  - name: environment
    type: string
  - name: azureServiceConnection
    type: string

jobs:
  - job: workflowsDeploy
    displayName: 'Deploy the functions'
    steps:
      # Download created artifacts, containing the zipped function codes
      - task: DownloadPipelineArtifact@2
        inputs:
          buildType: 'current'
          artifactName: 'functions$(environment)'
          targetPath: '$(Build.ArtifactStagingDirectory)'
      # Zip deploy the functions code
      - task: AzureFunctionApp@1
        inputs:
          azureSubscription: $(azureServiceConnection)
          appType: functionAppLinux
          appName: function-app-nlp-$(environment)
          package: $(Build.ArtifactStagingDirectory)/**/*.zip
          deploymentMethod: 'zipDeploy'
They are pulled together in azure-pipelines.yml:
trigger:
  branches:
    include:
      - develop
      - main

pool:
  name: "Hosted Ubuntu 1804"

variables:
  ${{ if notIn(variables['Build.SourceBranchName'], 'main') }}:
    environment: dev
    azureServiceConnection: SC-NLPDT
  ${{ if eq(variables['Build.SourceBranchName'], 'main') }}:
    environment: prd
    azureServiceConnection: SC-NLPPRD
  pythonVersion: '3.9'

stages:
  # Builds the functions as .zip
  - stage: functions_ci
    displayName: 'Functions CI'
    jobs:
      - template: ./templates/functions-ci.yml
        parameters:
          environment: $(environment)
  # Deploys .zip workflows
  - stage: functions_cd
    displayName: 'Functions CD'
    jobs:
      - template: ./templates/functions-cd.yml
        parameters:
          environment: $(environment)
          azureServiceConnection: $(azureServiceConnection)
So this successfully deploys my function app the first time around, when I have also deployed the infra code. The imports are done well, the right function app is deployed, and the code runs when I trigger it.
But when I redeploy the infra (Bicep) code, all of a sudden the newest version of the functions is gone and is replaced by a previous version.
Also, running this previous version doesn't work anymore, since all the requirements that were installed in the pipeline (CI part) via pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt suddenly cannot be found anymore, giving import errors (i.e. Result: Failure Exception: ModuleNotFoundError: No module named 'azure.identity'). Mind you, this version did work previously just fine.
This is a big problem for me, since I need to be able to update some infra stuff (like adding an APP_SETTING) without this breaking the current deployment of functions.
I had thought about just redeploying the functions automatically after an infra update, but then I would still lose the previous invocations, which I need to be able to see.
Am I missing something in the above code? I cannot figure out what is going wrong here that causes my functions to change on infra deployment.

Looking at the documentation:
To enable your function app to run from a package, add a WEBSITE_RUN_FROM_PACKAGE setting to your function app settings.
1 Indicates that the function app runs from a local package file deployed in the d:\home\data\SitePackages (Windows) or /home/data/SitePackages (Linux) folder of your function app.
In your case, when you deploy your function app code using AzureFunctionApp@1 and zipDeploy, this automatically adds this app setting to your function app. When redeploying your infrastructure, this setting is removed and the function app host no longer knows where to find the code.
If you add this app setting in your bicep file this should work:
{
  name: 'WEBSITE_RUN_FROM_PACKAGE'
  value: '1'
}

Related

Azure Logic Apps (Standard) workflows getting deleted when I redeploy Logic App IaC part, how to avoid this?

I set up a Logic App in IaC in the following way:
param environmentType string
param location string
param storageAccountSku string
param vnetIntegrationSubnetId string
param storageAccountTempEndpoint string
param ResourceGroupName string
/// Just a single minimum instance to start with and max scaling of 3 ///
var minimumElasticSize = 1
var maximumElasticSize = 3
var name = 'somename'
var logicAppName = 'logic-app-${name}-${environmentType}'
/// Storage account for service ///
resource logicAppStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
name: 'st4logicapp${name}${environmentType}'
location: location
kind: 'StorageV2'
sku: {
name: storageAccountSku
}
properties: {
allowBlobPublicAccess: false
accessTier: 'Hot'
supportsHttpsTrafficOnly: true
minimumTlsVersion: 'TLS1_2'
}
}
/// Existing temp storage for extracting variables ///
resource storageAccountTemp 'Microsoft.Storage/storageAccounts@2021-08-01' existing = {
scope: resourceGroup(ResourceGroupName)
name: 'tmpst${environmentType}'
}
/// Dedicated app plan for the service ///
resource servicePlanLogicApp 'Microsoft.Web/serverfarms@2021-02-01' = {
name: 'plan-${name}-logic-app-${environmentType}'
location: location
sku: {
tier: 'WorkflowStandard'
name: 'WS1'
}
properties: {
targetWorkerCount: minimumElasticSize
maximumElasticWorkerCount: maximumElasticSize
elasticScaleEnabled: true
isSpot: false
zoneRedundant: ((environmentType == 'prd') ? true : false)
}
}
// Create log analytics workspace
resource logAnalyticsWorkspacelogicApp 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
name: '${name}-logicapp-loganalytics-workspace-${environmentType}'
location: location
properties: {
sku: {
name: 'PerGB2018' // Standard
}
}
}
/// Log analytics workspace insights ///
resource applicationInsightsLogicApp 'Microsoft.Insights/components@2020-02-02' = {
name: 'application-insights-${name}-logic-${environmentType}'
location: location
kind: 'web'
properties: {
Application_Type: 'web'
Flow_Type: 'Bluefield'
publicNetworkAccessForIngestion: 'Enabled'
publicNetworkAccessForQuery: 'Enabled'
Request_Source: 'rest'
RetentionInDays: 30
WorkspaceResourceId: logAnalyticsWorkspacelogicApp.id
}
}
// App service containing the workflow runtime ///
resource siteLogicApp 'Microsoft.Web/sites@2021-02-01' = {
name: logicAppName
location: location
kind: 'functionapp,workflowapp'
properties: {
httpsOnly: true
siteConfig: {
appSettings: [
{
name: 'FUNCTIONS_EXTENSION_VERSION'
value: '~3'
}
{
name: 'FUNCTIONS_WORKER_RUNTIME'
value: 'node'
}
{
name: 'WEBSITE_NODE_DEFAULT_VERSION'
value: '~12'
}
{
name: 'AzureWebJobsStorage'
value: 'DefaultEndpointsProtocol=https;AccountName=${logicAppStorage.name};AccountKey=${listKeys(logicAppStorage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
}
{
name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
value: 'DefaultEndpointsProtocol=https;AccountName=${logicAppStorage.name};AccountKey=${listKeys(logicAppStorage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
}
{
name: 'WEBSITE_CONTENTSHARE'
value: 'app-${toLower(name)}-logicservice-${toLower(environmentType)}a6e9'
}
{
name: 'AzureFunctionsJobHost__extensionBundle__id'
value: 'Microsoft.Azure.Functions.ExtensionBundle.Workflows'
}
{
name: 'AzureFunctionsJobHost__extensionBundle__version'
value: '[1.*, 2.0.0)'
}
{
name: 'APP_KIND'
value: 'workflowApp'
}
{
name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
value: applicationInsightsLogicApp.properties.InstrumentationKey
}
{
name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
value: '~2'
}
{
name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
value: applicationInsightsLogicApp.properties.ConnectionString
}
{
name: 'AzureBlob_connectionString'
value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountTemp.name};EndpointSuffix=${environment().suffixes.storage};AccountKey=${listKeys(storageAccountTemp.id, storageAccountTemp.apiVersion).keys[0].value}'
}
{
name: 'azurequeues_connectionString'
value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountTemp.name};EndpointSuffix=${environment().suffixes.storage};AccountKey=${listKeys(storageAccountTemp.id, storageAccountTemp.apiVersion).keys[0].value}'
}
]
use32BitWorkerProcess: true
}
serverFarmId: servicePlanLogicApp.id
clientAffinityEnabled: false
}
/// VNET integration so flows can access storage and queue accounts ///
resource vnetIntegration 'networkConfig' = {
name: 'virtualNetwork'
properties: {
subnetResourceId: vnetIntegrationSubnetId
swiftSupported: true
}
}
}
This all goes well and the Standard Logic App gets deployed.
Next, I define some workflows via azure pipelines (via zipdeploy) with code:
trigger:
branches:
include:
- '*'
pool:
name: "Ubuntu hosted"
stages:
- stage: logicAppBuild
displayName: 'Logic App Build'
jobs:
- job: logic_app_build
displayName: 'Build and publish logic app'
steps:
- task: CopyFiles@2
displayName: 'Create project folder'
inputs:
SourceFolder: '$(System.DefaultWorkingDirectory)/logicapp'
Contents: |
**
TargetFolder: 'project_output'
- task: ArchiveFiles@2
displayName: 'Create project zip'
inputs:
rootFolderOrFile: '$(System.DefaultWorkingDirectory)/project_output'
includeRootFolder: false
archiveType: 'zip'
archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
replaceExistingArchive: true
- task: PublishPipelineArtifact@1
displayName: 'Publish project zip artifact'
inputs:
targetPath: '$(Build.ArtifactStagingDirectory)'
artifactName: 'artifectdev'
publishLocation: 'pipeline'
- stage: logicAppDeploy
displayName: 'Logic app deployment'
jobs:
- job: logicAppDeploy
displayName: 'Deploy the Logic apps'
steps:
- task: DownloadPipelineArtifact@2
inputs:
buildType: 'current'
artifactName: 'artifectdev'
targetPath: '$(Build.ArtifactStagingDirectory)'
- task: AzureFunctionApp@1 # Add this at the end of your file
inputs:
azureSubscription: SC-DEV
appType: functionApp # default is functionApp
appName: logic-app-name-dev
package: $(Build.ArtifactStagingDirectory)/**/*.zip
Running the IaC code first in a pipeline (called in a main.bicep with some other infra code) results in a successful deployment of the Logic App. After running the pipeline with the zip deploy, the flows defined in the logicapp directory get deployed well, connections and all.
However, when the IaC pipeline is run again, all my defined workflows that were deployed with the zip-deploy in the second pipeline, are now gone. Even if I don't change anything in the IaC code.
Is there any way to circumvent this? It is totally unworkable for me to have this happen every time I deploy IaC code (for instance when adding some app setting).
Sharing the resolution as discussed here, in case someone is looking into a similar issue.
For the zip deploy you need to use the AzureFunctionApp task with the workflowapp appType:
- task: AzureFunctionApp@1
  displayName: Deploy Logic App Workflows
  inputs:
    azureSubscription: ${{ variables.azureSubscription }}
    appName: $(pv_logicAppName)
    appType: 'workflowapp'
    package: '$(Pipeline.Workspace)/LogicApps/$(Build.BuildNumber).zip'
    deploymentMethod: 'zipDeploy'

Making an Azure Terraform pipeline work with multiple subscriptions

I am trying to get my main Terraform pipeline to deploy to multiple subscriptions using the same service principal, yet I keep getting errors stating the resource group was not found when it tries to deploy to the subscription. Both subscriptions are in the same tenant.
Here is my YAML Code:
parameters:
- name: terraformWorkingDirectory
type: string
default: '$(System.DefaultWorkingDirectory)/Terraform/'
- name: serviceConnection
type: string
default: 'JasonTestEnvManagmentGroup'
- name: azureSubscription
type: string
default: 'JasonTestEnvManagmentGroup'
- name: appconnectionname
type: string
default: 'JasonTestEnvManagmentGroup'
- name: RG
type: string
default: 'Jason_Testing_Terraform'
- name: azureLocation
type: string
default: 'UK South'
- name: terraformVersion
type: string
default: '1.0.4'
- name: artifactName
type: string
default: 'Website'
#- name: authartifactName
# type: string
#default: 'AuthServer'
# Only run against develop
#trigger:
# branches:
# include:
# - main
#pool:
#vmImage: "ubuntu-latest"
# Don't run against PRs
#pr: none
#stages:
#- stage: terraformStage
# displayName: Detect Drift
# jobs:
#- job: terraform_plan_and_apply
steps:
- checkout: self
- task: TerraformInstaller@0
displayName: "install"
inputs:
terraformVersion: ${{ parameters.terraformVersion }}
- task: TerraformTaskV2@2
displayName: "init"
inputs:
provider: "azurerm"
command: "init"
backendServiceArm: ${{ parameters.serviceConnection }}
backendAzureRmResourceGroupName: "TerraformBackendForCICTesting"
backendAzureRmStorageAccountName: "nsterraformstatestorage"
backendAzureRmContainerName: "devopsterraformstatefile"
backendAzureRmKey: "terraform.tfstate"
workingDirectory: ${{ parameters.terraformWorkingDirectory }}
- task: TerraformTaskV1@0
displayName: "plan"
inputs:
provider: "azurerm"
command: "plan"
commandOptions: "-input=false"
environmentServiceNameAzureRM: ${{ parameters.serviceConnection }}
workingDirectory: ${{ parameters.terraformWorkingDirectory }}
- task: TerraformTaskV1@0
displayName: "apply"
inputs:
provider: "azurerm"
command: "apply"
commandOptions: "-input=false -auto-approve"
environmentServiceNameAzureRM: ${{ parameters.serviceConnection }}
workingDirectory: ${{ parameters.terraformWorkingDirectory }}
#- stage: put_pipelines_files_in_place
# displayName: Putting Pipeline Files In Place
#- jobs:
#- job: apply_artifiact_to_web_app
# displayName: Putting Files In Place
# dependsOn: terraform_plan_and_apply
# Download Artifact File
#- download: none
- task: DownloadPipelineArtifact@2 # Website Artifact
displayName: 'Download Build Artifacts'
inputs:
artifact: ${{ parameters.artifactName }}
patterns: '/website/**/*.zip'
path: '$(Build.ArtifactStagingDirectory)/website/'
# deploy to Azure Web App
- task: AzureWebApp@1
displayName: 'Azure Web App Deploy: nsclassroom-dgyn27h2dfoyojc' #Website Deploy Artifact
inputs:
package: $(Build.ArtifactStagingDirectory)/website/**/*.zip
azureSubscription: ${{ parameters.azureSubscription }}
ConnectedServiceName: ${{ parameters.appconnectionname}}
appName: 'nsclassroom-dgyn27h2dfoyojc'
ResourceGroupName: ${{ parameters.RG}}
- task: DownloadPipelineArtifact@2 # Authentication Server Artifact
displayName: 'Download Build Artifacts'
inputs:
artifact: ${{ parameters.artifactName}}
patterns: '/authsrv/**/*.zip'
path: '$(Build.ArtifactStagingDirectory)/authsrv/'
# deploy to Azure Web App
- task: AzureWebApp@1
displayName: 'Azure Web App Deploy: nsclassroomauthentication-dgyn27h2dfoyojc' #Authentication Server Deploy Artifact
inputs:
package: $(Build.ArtifactStagingDirectory)/authsrv/**/*.zip
azureSubscription: ${{ parameters.azureSubscription }}
ConnectedServiceName: ${{ parameters.appconnectionname}}
appName: 'nsclassroomauthentication-dgyn27h2dfoyojc'
ResourceGroupName: ${{ parameters.RG}}
The agent used is the default Ubuntu agent.
I have tried two service connections: one mapped to both of the subscriptions, which doesn't work, and another mapped and scoped to the management group.
The original service principal did work until I brought in the second subscription.
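One common pattern for targeting multiple subscriptions from a single configuration (not shown in the question's code) is Terraform provider aliasing; the subscription IDs and alias name below are placeholders, while the resource group name and location come from the question's parameters. The service principal also needs a role assignment (e.g. Contributor) on each subscription it deploys to.

```hcl
provider "azurerm" {
  features {}
  # Default subscription used when no provider alias is given
  subscription_id = "00000000-0000-0000-0000-000000000000"
}

provider "azurerm" {
  features {}
  alias           = "secondary"
  # Placeholder ID for the second subscription in the same tenant
  subscription_id = "11111111-1111-1111-1111-111111111111"
}

# Resources pick the target subscription via the provider meta-argument
resource "azurerm_resource_group" "secondary_rg" {
  provider = azurerm.secondary
  name     = "Jason_Testing_Terraform"
  location = "UK South"
}
```

Without an explicit provider block per subscription, Terraform resolves everything against the single default subscription, which would explain "resource group not found" errors for resources living in the other one.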

How can I pass map variable to Azure Devops pipeline job?

I'm learning Azure DevOps pipelines; my first project is to create a simple vnet with a subnet using Terraform. I figured out how to pass simple key-value variables, but the problem is how to pass, for example, a list of strings or, more importantly, a map variable to Terraform.
I'm using the map to create subnets with an each-key/each-value loop.
Below are the files I'm using; I'm getting an error about syntax in pipeline.yaml for the VirtualNetworkAddressSpace and VirtualNetworkSubnets values.
Can you please help me with this one?
variables.tf
variable RG_Name {
type = string
#default = "TESTMS"
}
variable RG_Location {
type = string
#default = "West Europe"
}
variable VirtualNetworkName {
type = string
#default = "TESTSS"
}
variable VirtualNetworkAddressSpace {
type = list(string)
#default = ["10.0.0.0/16"]
}
variable VirtualNetworkSubnets {
type = map
#default = {
#"GatewaySubnet" = "10.0.255.0/27"
#}
}
dev.tfvars
RG_Name = __rgNAME__
RG_Location = __rgLOCATION__
VirtualNetworkName = __VirtualNetworkName__
VirtualNetworkAddressSpace = __VirtualNetworkAddressSpace__
VirtualNetworkSubnets = __VirtualNetworkSubnets__
pipeline.yaml
resources:
repositories:
- repository: self
trigger:
- feature/learning
stages:
- stage: DEV
jobs:
- deployment: TERRAFORM
displayName: 'Terraform deployment'
pool:
vmImage: 'ubuntu-latest'
workspace:
clean: all
variables:
- name: 'rgNAME'
value: 'skwiera-rg'
- name: 'rgLOCATION'
value: 'West Europe'
- name: 'VirtualNetworkName'
value: 'SkwieraVNET'
- name: 'VirtualNetworkAddressSpace'
value: ['10.0.0.0/16']
- name: 'VirtualNetworkSubnets'
value: {'GatewaySubnet' : '10.0.255.0/27'}
environment: 'DEV'
strategy:
runOnce:
deploy:
steps:
- checkout: self
- task: qetza.replacetokens.replacetokens-task.replacetokens@3
displayName: 'Replace Terraform variables'
inputs:
targetFiles: '**/*.tfvars'
tokenPrefix: '__'
tokenSuffix: '__'
- task: TerraformInstaller@0
displayName: "Install Terraform"
inputs:
terraformVersion: '1.0.8'
- task: TerraformTaskV2@2
displayName: 'Terraform Init'
inputs:
provider: 'azurerm'
command: 'init'
backendServiceArm: 'skwieralearning'
backendAzureRmResourceGroupName: 'skwiera-learning-rg'
backendAzureRmStorageAccountName: 'skwieralearningtfstate'
backendAzureRmContainerName: 'tfstate'
backendAzureRmKey: 'dev.tfstate'
- task: TerraformTaskV2@2
displayName: 'Terraform Validate'
inputs:
provider: 'azurerm'
command: 'validate'
- task: TerraformTaskV2@2
displayName: "Terraform Plan"
inputs:
provider: 'azurerm'
command: 'plan'
environmentServiceNameAzureRM: 'skwieralearning'
- task: TerraformTaskV2@2
displayName: 'Terraform Apply'
inputs:
provider: 'azurerm'
command: 'apply'
environmentServiceNameAzureRM: 'skwieralearning'
The Azure DevOps pipeline.yaml file expects the job variable's value to be a string, but if you use:
- name: 'VirtualNetworkSubnets'
value: {'GatewaySubnet' : '10.0.255.0/27'}
Then the YAML parser sees that as a nested mapping under the value key as YAML supports both key1: value and {key: value} syntax for mappings.
You can avoid it being read as a mapping by wrapping it in quotes so that it's read as a string literal:
- name: 'VirtualNetworkSubnets'
value: "{'GatewaySubnet' : '10.0.255.0/27'}"
Separately, you can avoid the qetza.replacetokens.replacetokens-task.replacetokens@3 step and the tokenised values in dev.tfvars by prefixing the environment variables with TF_VAR_:
stages:
- stage: DEV
jobs:
- deployment: TERRAFORM
displayName: 'Terraform deployment'
pool:
vmImage: 'ubuntu-latest'
workspace:
clean: all
variables:
- name: 'TF_VAR_rgNAME'
value: 'skwiera-rg'
- name: 'TF_VAR_rgLOCATION'
value: 'West Europe'
- name: 'TF_VAR_VirtualNetworkName'
value: 'SkwieraVNET'
- name: 'TF_VAR_VirtualNetworkAddressSpace'
value: "['10.0.0.0/16']"
- name: 'TF_VAR_VirtualNetworkSubnets'
value: "{'GatewaySubnet' : '10.0.255.0/27'}"
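One caveat with the TF_VAR_ approach, in case it trips someone up: Terraform only reads TF_VAR_ environment variables whose suffix matches the declared variable name exactly (case-sensitive), and complex values must be valid HCL, which uses double-quoted strings. Against the question's variables.tf, the names would need to look like this (a sketch, not the answer author's exact setup):

```yaml
variables:
  # Suffixes match the variable blocks declared in the question's variables.tf
  - name: 'TF_VAR_RG_Name'
    value: 'skwiera-rg'
  - name: 'TF_VAR_VirtualNetworkAddressSpace'
    value: '["10.0.0.0/16"]'
  - name: 'TF_VAR_VirtualNetworkSubnets'
    value: '{ GatewaySubnet = "10.0.255.0/27" }'
```

A mismatched suffix is silently ignored rather than reported, so a misnamed TF_VAR_ variable shows up only as a "variable not set" prompt or error at plan time.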

Azure devops pipeline access Json file inputs and perform for(each) loop

I am using a Linux agent, and I need to iterate over JSON input objects for each project.
I have the JSON file below, and it may have more than 200 charts. I need to perform build, lint, template, and push-to-repository steps for each. I could do this using shell/Python, but I thought I'd use Azure Pipelines YAML code.
{
"helm": {
"charts": [
{
"project1": {
"secretName" : "mysecret1",
"setValues": ["a", "b", "c"]
}
},
{
"project2": {
"secretName" : "mysecret2",
"setValues": ["x", "y", "z"]
}
}
]
}
}
azure-pipelines.yaml:
trigger:
- '*'
variables:
buildConfiguration: 'Release'
releaseBranchName: 'dev'
stages:
- stage: 'Build'
pool:
name: 'linux'
displayName: 'Build helm Projects'
jobs:
- job: 'buildProjects'
displayName: 'Building all the helm projects'
steps:
- task: HelmInstaller@0
displayName: install helm
inputs:
helmVersion: 'latest'
installKubectl: false
- script: chmod -R 755 $(Build.SourcesDirectory)/
displayName: 'Set Directory permissions'
- task: PythonScript@0
inputs:
scriptSource: inline
script: |
import argparse, json, sys
parser = argparse.ArgumentParser()
parser.add_argument("--filePath", help="Provide the json file path")
args = parser.parse_args()
with open(args.filePath, 'r') as f:
data = json.load(f)
data = json.dumps(data)
print('##vso[task.setvariable variable=helmConfigs;]%s' % (data))
arguments: --filePath $(Build.SourcesDirectory)/build/helm/helmConfig.json
failOnStderr: true
displayName: setting up helm configs
- template: helmBuild.yml
parameters:
helmCharts: $(HELMCONFIGS)
The JSON input is saved to the HELMCONFIGS variable in Azure Pipelines. As per the Azure documentation, it is a string type, and we cannot convert it to any other type, like an array.
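The ##vso line in the inline script above is just a logging command written to stdout; the agent scans output for it and sets the variable, always as a string. A minimal sketch of the mechanics in pure Python, using a sample that mirrors the question's helmConfig.json (the file path in the pipeline is the question's, not reproduced here):

```python
import json

# Sample mirroring the structure of the question's helmConfig.json
data = {
    "helm": {
        "charts": [
            {"project1": {"secretName": "mysecret1", "setValues": ["a", "b", "c"]}},
            {"project2": {"secretName": "mysecret2", "setValues": ["x", "y", "z"]}},
        ]
    }
}

# The agent parses this exact stdout pattern and stores the payload
# after the ']' as the (string) value of the helmConfigs variable.
command = '##vso[task.setvariable variable=helmConfigs;]%s' % json.dumps(data)
print(command)

# Because $(HELMCONFIGS) only exists at runtime, a compile-time template
# loop cannot see it; a later script step could re-parse the string instead:
parsed = json.loads(command.split(']', 1)[1])
chart_names = [list(chart.keys())[0] for chart in parsed["helm"]["charts"]]
print(chart_names)
```

This is why passing $(HELMCONFIGS) into a template's object parameter fails: template `${{ each }}` loops run at compile time, before the variable-setting step has ever executed.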
helmBuild.yml file:
parameters:
- name: helmCharts
type: object
default: {}
steps :
- ${{ each helm in parameters.helmCharts }}:
- ${{ each charts in helm}}:
- ${{ each chart in charts }}:
- task: AzureKeyVault@1
inputs:
azureSubscription: 'A-B-C'
KeyVaultName: chart.secretName
SecretsFilter: '*'
RunAsPreJob: true
I am not able to access chart.secretName. How can I access the secretName inputs?

Unexpected Behavior With Azure Pipelines Variables Using Variable Groups and Templates

I have an Azure DevOps YAML pipeline to execute a Terraform deployment using the Terraform by MS DevLabs extension and an Azure Resource Manager service connection.
The last working state was using a pipeline template YAML file; however, I had to configure a parameter within the template and call the variable using the template expression syntax.
...
...
stages:
- stage: Plan
displayName: Terrafom Plan
jobs:
- job: DEV_PLAN
displayName: Plan (DEV)
pool:
vmImage: "ubuntu-latest"
variables:
az_service_connection: "MyServiceConnection"
tf_environment: "DEV"
tf_state_rg: "DEV"
tz_state_location: "canadacentral"
tf_state_stgacct_name: "mystorageaccuontname1231231"
tf_state_container_name: "tfstate"
steps:
- template: templates/terraform-plan.yml
parameters:
az_service_connection: ${{ variables.az_service_connection }}
...
...
steps:
- task: terraformInstaller@0
displayName: "Install Terraform $(tf_version)"
inputs:
terraformVersion: $(tf_version)
- task: TerraformTaskV1@0
displayName: "Run > terraform init"
inputs:
command: "init"
commandOptions: "-input=false"
backendServiceArm: ${{ parameters.az_service_connection }}
...
...
I believe the reason why this works is that the template expression syntax ${{ variables.varname }} evaluates at compile time vs. runtime. If I didn't do it this way, I'd either get $(az_service_connection) passed into the backendServiceArm input or an empty value.
With the introduction of variable groups, I'm now facing similar behavior. I expect that the variable group evaluates after the template expression variable, which causes ${{ variables.az_service_connection }} to have an empty value. I am unsure how to get this working.
I used the $() syntax to pass the ARM connection to the template:
Template file:
parameters:
- name: 'instances'
type: object
default: {}
- name: 'server'
type: string
default: ''
- name: 'armConnection'
type: string
default: ''
steps:
- task: TerraformTaskV1@0
inputs:
provider: 'azurerm'
command: 'init'
backendServiceArm: '${{ parameters.armConnection }}'
backendAzureRmResourceGroupName: 'TheCodeManual'
backendAzureRmStorageAccountName: 'thecodemanual'
backendAzureRmContainerName: 'infra'
backendAzureRmKey: 'some-terrform'
- ${{ each instance in parameters.instances }}:
- script: echo ${{ parameters.server }}:${{ instance }}
Main file:
trigger:
branches:
include:
- master
paths:
include:
- stackoverflow/09-array-parameter-for-template/*
# no PR triggers
pr: none
pool:
vmImage: 'ubuntu-latest'
variables:
- group: my-variable-group
- name: my-passed-variable
value: $[variables.myhello] # uses runtime expression
steps:
- template: template.yaml
parameters:
instances:
- test1
- test2
server: $(myhello)
armConnection: $(armConnection)
Note: the group my-variable-group contains the armConnection variable.
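For intuition, a minimal sketch of the evaluation-order difference (the group name matches the example above; the echo steps are illustrative only, and the compile-time behavior shown is the one described in the question):

```yaml
variables:
  - group: my-variable-group   # defines armConnection

steps:
  # ${{ }} is expanded when the pipeline is compiled; per the question,
  # the group value is not available yet, so this comes out empty.
  - script: echo "compile-time value: ${{ variables.armConnection }}"
  # $( ) is macro-substituted at runtime, after the group is loaded,
  # so the task sees the real value.
  - script: echo "runtime value: $(armConnection)"
```

Passing $(armConnection) into the template parameter works because the macro survives compilation as literal text and is only substituted once the task actually runs.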
