Azure Bicep - Conditionally adding elements to an array - azure

I am trying to create a bicep template to deploy a VM with either 1 or 2 NICs depending on a conditional.
Does anyone know if there is a way to deploy a VM NIC using conditional statements inside a property definition? It seems an if function is not permitted inside a resource definition, and a ternary errors out due to an invalid ID.
I'm just trying to avoid having two duplicate VM resource definitions using resource = if (bool) {}
networkProfile: {
  networkInterfaces: [
    {
      id: nic_wan.id
      properties: {
        primary: true
      }
    }
    {
      id: bool ? nic_lan.id : '' // Trying to deploy this as a conditional if bool = true.
      properties: {
        primary: false
      }
    }
  ]
}
The above code errors out because as soon as you define a NIC, it needs a valid ID.
'properties.networkProfile.networkInterfaces[1].id' is invalid. Expect fully qualified resource Id that start with '/subscriptions/{subscriptionId}' or
'/providers/{resourceProviderNamespace}/'. (Code:LinkedInvalidPropertyId)

You can create some variables to handle that:
// Define the default NIC
var defaultNic = [
  {
    id: nic_wan.id
    properties: {
      primary: true
    }
  }
]

// Add the second NIC if required
var nics = concat(defaultNic, bool ? [
  {
    id: nic_lan.id
    properties: {
      primary: false
    }
  }
] : [])

// Deploy the VM
resource vm 'Microsoft.Compute/virtualMachines@2020-12-01' = {
  ...
  properties: {
    ...
    networkProfile: {
      networkInterfaces: nics
    }
  }
}
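For context, here is a minimal sketch of the declarations the snippets above assume; the parameter and NIC names are illustrative, not from the original post ('deploySecondNic' is a hypothetical stand-in for the 'bool' flag used above):
// Illustrative declarations only (names are assumptions, not from the original post).
// 'deploySecondNic' stands in for the 'bool' flag used in the snippets above.
param deploySecondNic bool = false

// The NICs referenced by the snippets, here assumed to already exist;
// in the real template they could also be declared and deployed in the same file.
resource nic_wan 'Microsoft.Network/networkInterfaces@2020-11-01' existing = {
  name: 'nic-wan'
}

resource nic_lan 'Microsoft.Network/networkInterfaces@2020-11-01' existing = {
  name: 'nic-lan'
}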

Related

Deployment of queue, blob and ADLS2 private endpoints via Bicep goes wrong

I am trying to deploy three Azure storage resources under two storage accounts, and I want to implement three private endpoints so that these resources can only be reached from VMs in the same VNET. The resources that need to be connected to are (per storage account):
storageAccountTemp: Azure blob queue, Azure blob storage
storageAccountDatalake: ADLS 2 containers (datalake)
I have the following Azure Bicep code for deploying the permanent and temporary stores:
param location string
param environmentType string
param storageAccountSku string
param privateEndpointsSubnetId string
var privateEndpointNameTmpstBlob = 'pe-tmpst-blob-${environmentType}-001'
var privateEndpointNameTmpstQueue = 'pe-tmpst-queue-${environmentType}-001'
var privateEndpointNamePst = 'pe-pst-${environmentType}-001'
/// Temp storage ///
resource storageAccountTemp 'Microsoft.Storage/storageAccounts@2021-08-01' = {
name: 'tmpst${environmentType}'
location: location
sku: {
name: storageAccountSku
}
kind: 'StorageV2'
properties: {
allowBlobPublicAccess: false
accessTier: 'Hot'
minimumTlsVersion: 'TLS1_0'
publicNetworkAccess: 'Disabled'
}
}
resource blobContainerForQueue 'Microsoft.Storage/storageAccounts/blobServices/containers@2021-08-01' = {
name: '${storageAccountTemp.name}/default/claimcheck-storage-${environmentType}'
properties: {
publicAccess: 'None'
}
}
resource storageQueueMain 'Microsoft.Storage/storageAccounts/queueServices/queues@2019-06-01' = {
name: '${storageAccountTemp.name}/default/queue-main-${environmentType}'
}
/// Persistent storage datalake ///
resource storageAccountDatalake 'Microsoft.Storage/storageAccounts@2021-08-01' = {
name: 'pstdatalake${environmentType}'
location: location
sku: {
name: storageAccountSku
}
kind: 'StorageV2'
properties: {
allowBlobPublicAccess: false
accessTier: 'Hot'
minimumTlsVersion: 'TLS1_0'
isHnsEnabled: true
publicNetworkAccess: 'Disabled'
}
}
/// Data ///
resource ContainerForData 'Microsoft.Storage/storageAccounts/blobServices/containers@2021-08-01' = {
name: '${storageAccountDatalake.name}/default/data-${environmentType}'
properties: {
publicAccess: 'None'
}
}
/// Private endpoints configuration for tempblob, queue and datalake ///
resource privateEndpointTmpstBlob 'Microsoft.Network/privateEndpoints@2021-05-01' = if (environmentType == 'dev' || environmentType == 'prd') {
name: privateEndpointNameTmpstBlob
location: location
properties: {
subnet: {
id: privateEndpointsSubnetId
}
privateLinkServiceConnections: [
{
name: privateEndpointNameTmpstBlob
properties: {
privateLinkServiceId: storageAccountTemp.id
groupIds: ['blob']
}
}
]
}
}
resource privateEndpointTmpstQueue 'Microsoft.Network/privateEndpoints@2021-05-01' = if (environmentType == 'dev' || environmentType == 'prd') {
name: privateEndpointNameTmpstQueue
location: location
properties: {
subnet: {
id: privateEndpointsSubnetId
}
privateLinkServiceConnections: [
{
name: privateEndpointNameTmpstQueue
properties: {
privateLinkServiceId: storageAccountTemp.id
groupIds: ['queue']
}
}
]
}
}
resource privateEndpointPst 'Microsoft.Network/privateEndpoints@2021-05-01' = if (environmentType == 'dev' || environmentType == 'prd') {
name: privateEndpointNamePst
location: location
properties: {
subnet: {
id: privateEndpointsSubnetId
}
privateLinkServiceConnections: [
{
name: privateEndpointNamePst
properties: {
privateLinkServiceId: storageAccountDatalake.id
groupIds: ['blob']
}
}
]
}
}
As you can see, isHnsEnabled is set to true for the datalake storage account, to enable the hierarchical namespace and thus ADLS2 functionality. The problem is that if I include the privateEndpointPst resource in the Bicep deployment and then try to view a datalake container in the portal from a VM in the same VNET as the private endpoint (the VM is in the subnet referenced by privateEndpointsSubnetId), I get an error when trying to look at files in one of the datalake containers.
I don't believe the issues suggested in that error are the real problem: when I deploy all three endpoints together, the blob, queue and datalake in both storageAccountTemp and storageAccountDatalake all show this same problem.
However, when I deploy only the two endpoints for the storageAccountTemp resources and not the one for the datalake, I can see the data in the portal from the VM in the VNET, and code running on that VM can also reach the queue and blob. So the deployment of privateEndpointPst not only seems to break datalake reachability, it also somehow breaks the reachability of the queue and blob in storageAccountTemp when all three are deployed together. My mind is boggled as to why this is happening and why I cannot seem to deploy the datalake endpoint correctly. Also, sometimes deploying the endpoints together WILL make the datalake endpoint work and break the other two, which is even more mind-boggling. Running the suggested checks to detect common connectivity issues does not make me much wiser about the cause (I'm pretty sure it's not firewalls, since sometimes I can access the data and sometimes not).
Does anyone see what could be wrong with my Bicep code for deploying the endpoint that might be causing this issue? I'm at quite a loss here. I also tried replacing groupIds: ['blob'] with groupIds: ['dfs'], but that does not seem to solve my problem.
I seem to have found the issue. To connect to a datalake resource, one needs both a private endpoint with groupIds: ['blob'] and one with groupIds: ['dfs'], since the blob API is still used for getting some meta-info about the containers (as far as I can understand).
So adding:
resource privateEndpointPstDfs 'Microsoft.Network/privateEndpoints@2021-05-01' = if (environmentType == 'dev' || environmentType == 'prd') {
name: privateEndpointNamePstDfs
location: location
properties: {
subnet: {
id: privateEndpointsSubnetId
}
privateLinkServiceConnections: [
{
name: privateEndpointNamePstDfs
properties: {
privateLinkServiceId: storageAccountDatalake.id
groupIds: ['dfs']
}
}
]
}
}
made the deployment work successfully.
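Note that privateEndpointNamePstDfs is used above but never declared in the question's variable block; a minimal assumed declaration, following the existing naming pattern, would be:
// Assumed companion variable (not shown in the original post), following the same naming pattern:
var privateEndpointNamePstDfs = 'pe-pst-dfs-${environmentType}-001'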

How do I define subnets in Bicep such that the parent VNet has a reference to them and I can dependOn the subnet deployment?

I have a situation where I need to define my subnets in the properties.subnets field of the parent virtual network, otherwise I get the 'InUseSubnetCannotBeDeleted' error.
Option 1 - Defined inline
However, if I define my subnets directly in the properties.subnets array (see below), they are not created as child resources and I cannot seem to reference them as a resource when I want to create a dependsOn reference from another resource.
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2021-08-01' = {
// ... other fields
properties: {
subnets: [
// How can I get a reference to these that I can 'dependOn'?
{
name: 'subnet-1'
// ... other fields
}
{
name: 'subnet-2'
// ... other fields
}
]
}
}
Option 2 - Defined separately
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2021-08-01' = {
// ... other fields
properties: {
subnets: [
subnet1 // Gives a circular reference error
]
}
}
resource subnet1 'Microsoft.Network/virtualNetworks/subnets@2021-08-01' = {
parent: virtualNetwork
name: 'subnet-1'
// ... other fields
}
I have tried defining the subnets as separate resources and then referencing those resources in the properties.subnets array but, since subnets need a reference to the parent virtual network property, Bicep complains about a circular reference.
It seems that ARM templates can use textual references using the name of the subnet in properties.subnets, which could get around the circular reference, however Bicep does not allow this.
So how do I define my subnets so that I can simultaneously satisfy the virtual network's requirement to have a reference to the subnets in properties.subnets and also have a resource reference that I can use in dependsOn clauses?
Maybe this will work. Bicep builds it without errors, but I have not tried to deploy it.
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2021-08-01' = {
name: 'myvnet'
location: 'swedencentral'
properties: {
addressSpace: {
addressPrefixes: [
'10.0.0.0/20'
]
}
subnets: [
{
name: 'subnet-1'
properties: {
addressPrefix: '10.0.0.0/24'
}
}
]
}
}
resource subnet 'Microsoft.Network/virtualNetworks/subnets@2022-01-01' existing = {
name: 'myvnet/subnet-1'
scope: resourceGroup()
}
resource storage 'Microsoft.Storage/storageAccounts@2021-09-01' = {
name: 'mystorage'
location: 'swedencentral'
dependsOn: [
subnet
]
sku: {
name: 'Standard_LRS'
}
kind: 'StorageV2'
}
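As a variation on the existing reference above (a sketch assuming the same virtualNetwork declaration, not separately deployed), the subnet can also be referenced through its parent resource instead of the combined 'myvnet/subnet-1' name:
// Sketch: reference the inline-defined subnet via its parent instead of a two-part name.
resource subnetViaParent 'Microsoft.Network/virtualNetworks/subnets@2022-01-01' existing = {
  parent: virtualNetwork
  name: 'subnet-1'
}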

Is there a way to deploy Azure resource modules not in the main file?

Normally when creating Azure resources through Bicep modules I would have two files: one to hold the parameterized resource and another, the main file, which consumes that module.
As an example for creating an action group my resource file looks like:
action-group.bicep
param actionGroupName string
param groupShortName string
param emailReceivers array = []
// Alerting Action Group
resource action_group 'microsoft.insights/actionGroups@2019-06-01' = {
name: actionGroupName
location: 'global'
tags: {}
properties: {
groupShortName: groupShortName
enabled: true
emailReceivers: emailReceivers
}
}
This resource is then consumed as a module in the main file, main.bicep:
// Alerting Action Group
module actionGroup '../modules/alerts/alert-group.bicep' = {
name: 'action-group-dply'
params: {
actionGroupName: actionGroupName
groupShortName: actionGroupShortName
emailReceivers: [
{
name: '<Name of email receivers>'
emailAddress: alertEmailList
}
]
}
}
My pipeline references main.bicep and will deploy the listed resources in the file. My question is, is there a way to add a third file in the mix? One file to still hold the parameterized resource, one file to hold the associated resource modules, and the main.bicep file. The idea is to create various alerts throughout my existing resources but I don't want to add a ton of modules to main.bicep as it will quickly increase the complexity and amount of code in this file.
Is there a way that I can have this file of modules and reference the entire file in main.bicep so that everything still deploys from the original pipeline?
As an example:
alerts.bicep
param metricAlertsName string
param description string
param severity int
param enabled bool = true
param scopes array = []
param evaluationFrequency string
param windowSize string
param targetResourceRegion string = resourceGroup().location
param allOf array = []
param actionGroupName string
var actionGroupId = resourceId(resourceGroup().name, 'microsoft.insights/actionGroups', actionGroupName)
resource dealsMetricAlerts 'Microsoft.Insights/metricAlerts@2018-03-01' = {
name: metricAlertsName
location: 'global'
tags: {}
properties: {
description: description
severity: severity
enabled: enabled
scopes: scopes
evaluationFrequency: evaluationFrequency
windowSize: windowSize
targetResourceRegion: targetResourceRegion
criteria: {
'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
allOf: allOf
}
actions: [
{
actionGroupId: actionGroupId
}
]
}
}
alert-modules.bicep
// Function/Web Apps 403 Error
module appServicePlan403Errors '../modules/alerts/alerts.bicep' = {
// Alert Logic
}
// Function/Web Apps 500 Error
module appServicePlan500Errors '../modules/alerts/alerts.bicep' = {
// Alert Logic
}
main.bicep
// Some reference to alert-modules.bicep so when the pipeline runs and looks for main.bicep, it will still deploy all the resources
You can call the module multiple times (in main.bicep or a module) using looping:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/loops
e.g.
param alertsCollection array = [
{
actionGroupName: 'group1'
groupShortName: 'g1'
emailReceivers: []
}
{
actionGroupName: 'group2'
groupShortName: 'g2'
emailReceivers: [
'foo@bar.com'
'bar@baz.com'
]
}
]
module alerts '../modules/alerts/alert-group.bicep' = [for alert in alertsCollection: {
name: '${alert.actionGroupName}'
params: {
actionGroupName: alert.actionGroupName
groupShortName: alert.groupShortName
emailReceivers: alert.emailReceivers
}
}]
You could simplify the params passing via:
module alerts '../modules/alerts/alert-group.bicep' = [for alert in alertsCollection: {
name: '${alert.actionGroupName}'
params: alert
}]
But you need to be really strict on the schema of the alertsCollection parameter...
That help?
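For the original layout question itself: Bicep modules can contain module declarations, so as a sketch (assuming alert-modules.bicep declares no required parameters), main.bicep could consume that one file as a single module and let it fan out to the individual alerts:
// main.bicep - sketch only: consume alert-modules.bicep as one module,
// which in turn declares the appServicePlan403Errors/appServicePlan500Errors modules.
module alertModules './alert-modules.bicep' = {
  name: 'alert-modules-dply'
}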

Azure Bicep script produces error "Changing property 'agentPoolProfile.vnetSubnetID' is not allowed." on second execution

I'm using Azure Bicep to create a virtual network with a single subnet and then use that as the input for creating an AKS cluster with: vnetSubnetID: virtualNetwork.properties.subnets[0].id
The first time I run the command, it creates the virtual network and cluster just fine, but the second time I run the command it gives this error :
{"error":{"code":"InvalidTemplateDeployment","message":"The template
deployment 'cluster' is not valid according to the validation
procedure. The tracking id is '[REDACTED_JUST_IN_CASE]'. See inner errors for
details.","details":[{"code":"PropertyChangeNotAllowed","message":"Provisioning
of resource(s) for container service playground-cluster0 in resource
group showcase-kevinplayground2 failed. Message: {\n "code":
"PropertyChangeNotAllowed",\n "message": "Changing property
'agentPoolProfile.vnetSubnetID' is not allowed.",\n "target":
"agentPoolProfile.vnetSubnetID"\n }. Details: "}]}}
I double checked and there is just the one subnet inside the virtualNetwork created by the deployment (no other magically appeared or anything).
I repeated the experiment on a second resource group and the same thing happened, so it's reproducible.
Here is the full bicep file (just call az deployment group create --resource-group showcase-kevinplayground2 -f cluster.bicep in the resource group of your choice)
targetScope = 'resourceGroup'
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2021-02-01' = {
name: 'aksVirtualNetwork'
location: resourceGroup().location
properties:{
addressSpace:{
addressPrefixes:[
'10.10.0.0/16'
]
}
subnets:[
{
name: 'aks'
properties:{
addressPrefix: '10.10.5.0/24'
}
}
]
}
}
resource aksManagedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
name: 'playgroundIdentity'
location: resourceGroup().location
}
resource aks 'Microsoft.ContainerService/managedClusters@2021-02-01' = {
name: 'playground-cluster0'
location: resourceGroup().location
identity: {
type:'UserAssigned'
userAssignedIdentities: {
'${aksManagedIdentity.id}': {}
}
}
sku: {
name: 'Basic'
tier: 'Free'
}
properties: {
kubernetesVersion: '1.21.2'
dnsPrefix: 'playground'
enableRBAC: true
networkProfile: {
networkPlugin: 'azure'
networkPolicy: 'calico'
}
aadProfile: {
managed: true
enableAzureRBAC: true
}
autoUpgradeProfile: {}
apiServerAccessProfile: {
enablePrivateCluster: false
}
agentPoolProfiles: [
{
name: 'aksnodes'
count: 1
vmSize: 'Standard_B2s'
osDiskSizeGB: 30
osDiskType: 'Managed'
vnetSubnetID: virtualNetwork.properties.subnets[0].id
osType: 'Linux'
maxCount: 1
minCount: 1
enableAutoScaling: true
type: 'VirtualMachineScaleSets'
mode: 'System'
orchestratorVersion: null
}
]
}
}
Looking at this reported github issue, you need to use the resourceId function.
In your case, something like that should work:
vnetSubnetID: resourceId('Microsoft.Network/virtualNetworks/subnets', 'aksVirtualNetwork', 'aks')
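Alternatively, as a sketch borrowing the 'existing' pattern from the subnet question above (this is not from the linked GitHub issue), the subnet can be referenced as an existing child resource, so the id resolves to the same resourceId-style value without indexing into virtualNetwork.properties.subnets:
// Sketch: reference the 'aks' subnet of the vnet declared in the question as an existing child resource.
resource aksSubnet 'Microsoft.Network/virtualNetworks/subnets@2021-02-01' existing = {
  parent: virtualNetwork
  name: 'aks'
}

// ...then in agentPoolProfiles:
//   vnetSubnetID: aksSubnet.id
// Note: because this no longer reads a property of the deployed vnet, an explicit
// dependsOn: [ virtualNetwork ] on the cluster may still be needed for the first deployment.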

Is there a way to connect an Azure firewall to a Front Door Premium Policy with Bicep?

I am trying to implement Azure Front Door Premium with a Web Application Firewall connection. I am able to create the Front Door both manually and through Bicep. However, when I try to connect to a WAF through Bicep, I'm not sure if it completely works.
The Bicep resource for my WAF looks like:
resource profiles_gbt_nprod_sandbox_FrontDoorTest_name_AzureFDTest_ac196269 'Microsoft.Cdn/profiles/securitypolicies@2020-09-01' = {
parent: profiles_gbt_nprod_sandbox_FrontDoorTest_name_resource
name: 'AzureFDTest-ac196269'
properties: {
parameters: {
wafPolicy: {
id: frontdoorwebapplicationfirewallpolicies_AzureFDTest_externalid
}
associations: [
{
domains: [
{
id: profiles_gbt_nprod_sandbox_FrontDoorTest_name_TestFDEndpoint.id
}
]
patternsToMatch: [
'/*'
]
}
]
type: 'WebApplicationFirewall'
}
}
}
To get the name AzureFDTest-ac196269, I created the Front Door through Bicep, then manually connected the AzureFDTest policy, and it generated this name.
When this is run, it looks like it connects to my Front Door in the Endpoint Manager.
But when I click on the AzureFDTest WAF policy, AzureFDTest is not listed in the drop-down menu. If I were to manually connect the WAF, that drop-down menu would say AzureFDTest. Is this still working as expected, or is there an issue with the way I have the resource written?
You can connect an Azure Front Door Premium to a WAF in Bicep via a security policy as follows:
var frontdoorName = 'frontDoor'
var frontDoorSkuName = 'Premium_AzureFrontDoor'
var endpointName = 'endpoint'
var wafPolicyName = 'wafPolicy'
var securityPolicyName = 'securityPolicy'
param tags object
// Front Door CDN profile
resource profile 'Microsoft.Cdn/profiles@2020-09-01' = {
name: frontdoorName
location: 'global'
sku: {
name: frontDoorSkuName
}
tags: tags
}
// Azure Front Door endpoint
resource endpoint 'Microsoft.Cdn/profiles/afdEndpoints@2020-09-01' = {
parent: profile
name: endpointName
location: 'Global'
tags: tags
properties: {
originResponseTimeoutSeconds: 60
enabledState: 'Enabled'
}
}
// WAF policy using Azure managed rule sets
resource wafPolicy 'Microsoft.Network/FrontDoorWebApplicationFirewallPolicies@2020-11-01' = {
name: wafPolicyName
location: 'global'
tags: tags
sku: {
name: frontDoorSkuName
}
properties: {
policySettings: {
enabledState: 'Enabled'
mode: 'Prevention'
}
managedRules: {
managedRuleSets: [
{
ruleSetType: 'Microsoft_DefaultRuleSet'
ruleSetVersion: '1.1'
}
{
ruleSetType: 'Microsoft_BotManagerRuleSet'
ruleSetVersion: '1.0'
}
]
}
}
}
// Security policy for Front Door which defines the WAF policy linking
resource securityPolicy 'Microsoft.Cdn/profiles/securityPolicies@2020-09-01' = {
parent: profile
name: securityPolicyName
properties: {
parameters: {
type: 'WebApplicationFirewall'
wafPolicy: {
id: wafPolicy.id
}
associations: [
{
domains: [
{
id: endpoint.id
}
]
patternsToMatch: [
'/*'
]
}
]
}
}
}
There is also an azure-quickstart-template available for this scenario:
Front Door Premium with Web Application Firewall and Microsoft-managed rule sets
