How to create Microsoft.DBforPostgreSQL with Bicep?

I would like to create a PostgreSQL server for a location-based service; I will install the GIS extensions next.
To determine the correct configuration, I first created an Azure Database for PostgreSQL flexible server manually:
"sku": {"name": "Standard_B1ms","tier": "Burstable"}
I wanted to create a Single Server instead, but it was not available in Europe for some reason. I figured that losing the Burstable tier was acceptable for an initial POC, and General Purpose works as well.
Now I am trying to create the PostgreSQL server with Bicep, but I am having difficulty defining a valid server. First the Burstable tier was not available; now I cannot set a valid SKU name.
az deployment group create:
{"status":"Failed","error":{"code":"DeploymentFailed","message":"At least one resource
deployment operation failed. Please list deployment operations for details. Please see
https://aka.ms/DeployOperations for usage details.","details":
[{"code":"BadRequest","message":"{\r\n \"error\": {\r\n \"code\":
\"InvalidEditionSloCombination\",\r\n \"message\": \"The edition
GeneralPurpose does not support the service objective Standard_D2s_v3\"\r\n }\r\n}"}]}}
main.bicep:
resource symbolicname 'Microsoft.DBforPostgreSQL/servers@2017-12-01' = {
  name: 'my-postgresql-dev'
  location: 'West Europe'
  tags: {
    tagName1: 'tagValue1'
    tagName2: 'tagValue2'
  }
  sku: {
    name: 'Standard_D2s_v3'
    tier: 'GeneralPurpose'
  }
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    administratorLogin: 'sqladmin'
    administratorLoginPassword: 'asfar43efw!sdf'
    storageProfile: {
      backupRetentionDays: 7
      geoRedundantBackup: 'Disabled'
      storageMB: 32000
    }
    version: '11'
    createMode: 'Default'
    // For remaining properties, see ServerPropertiesForCreate objects
  }
}

The error you received relates to the sku name:
The edition GeneralPurpose does not support the service objective Standard_D2s_v3
Looking at the documentation, the sku name follows a specific naming convention:
The name of the sku, typically, tier + family + cores, e.g. B_Gen4_1, GP_Gen5_8.
In your case, for General Purpose it will be GP_Gen5_{number of cores}.
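Applied to the template in the question, a 2-vCore General Purpose sku block would look roughly like this (the family and capacity values are assumptions; adjust them to the hardware generation available in your region):

```bicep
sku: {
  name: 'GP_Gen5_2'      // tier prefix + hardware family + vCore count
  tier: 'GeneralPurpose'
  family: 'Gen5'
  capacity: 2
}
```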

Related

Azure bicep - App Service Plans names and tiers

When working with, for example, an Azure Function and Bicep, the function needs an App Service Plan, and this plan needs a sku:
resource MyHostingPlan 'Microsoft.Web/serverfarms@2021-02-01' = {
  name: MyAppPlanName
  location: location
  sku: {
    name: ?
    tier: ?
  }
}
resource MyFunction1 'Microsoft.Web/sites@2021-02-01' = {
  name: MyAppPlanName
  location: location
  kind: 'functionapp'
  properties: {
    httpsOnly: true
    serverFarmId: MyHostingPlan.id
    ...
There doesn't seem to be a 1-to-1 mapping between what's in https://azure.microsoft.com/en-au/pricing/details/app-service/linux/#pricing and the names of the tiers and SKUs for Bicep scripts. For example:
The consumption plan is "Y1"/"Dynamic"
P1v2 seems to be "sku": { "name": "P1v2", "tier": "PremiumV2" }
Neither of these is explicitly stated in the pricing table, and while one could perhaps guess P1v2, the consumption plan is not obvious at all.
Q: What is the correct process for finding out the strings for the `name` and `tier` of an App Service Plan to use in a Bicep script?
Consumption is Y1.
Pxv2 are the App Service Plans (https://azure.microsoft.com/en-us/pricing/details/app-service/windows/#pricing).
EPx are the Functions Premium Plans (https://azure.microsoft.com/en-us/pricing/details/functions/#pricing).
The x defines the scale of a single instance.
The best place to look this up is the Azure Portal, in the Scaling section of the Function/App Service Plan.
Note: All those plans are different. While the consumption plan provides event-based auto-scale at pay-what-you-use rates, the Functions Premium plan offers the same but at fixed per-instance prices with additional features (e.g. network integration, no warm-up time, etc.). The App Service Plan is different from those two: no event-based autoscaling, but it is great if you have functions and unused capacity on an existing plan.
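Translating those names back into the asker's template, a consumption plan for a function app could be sketched like this (the plan name here is a placeholder):

```bicep
resource MyHostingPlan 'Microsoft.Web/serverfarms@2021-02-01' = {
  name: 'my-consumption-plan' // placeholder name
  location: location
  sku: {
    name: 'Y1'      // consumption plan
    tier: 'Dynamic'
  }
}
```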

Why does my Bicep template fail to create authorization rules consistently?

I've created a Bicep template for deploying Azure Service Bus, which includes the creation of multiple topics, subscriptions, filters, and authorisation rules.
I'm attempting to deploy 24 authorisation rules in a serial for-loop after the rest of the Service Bus has been created. The first deployment always fails, with one or two authorisation rules returning the error MessagingGatewayTooManyRequests or AuthorizationRuleNotFound. A subsequent deployment always works as expected.
I have tried a template that only deploys authorisation rules and run into the same problem. The first 18 rules were created almost instantly; after that they start to show as duplicated in the Azure portal and fail.
I have found that I can get closer to my goal by splitting the policies into multiple dependent deployments, which slows down the request rate due to the overhead of creating each new deployment. I'd rather have a clean solution that is low effort, easy to maintain, and doesn't abuse the limitations of ARM deployments in order to succeed.
Please see the cut down version of my module below;
@description('The namespace of the servicebus resource')
param namespace string = 'myservicebus'

@description('An array of shared access policy configurations for service bus topics')
param sharedAccessPolicies array = [
  {
    topicName: 'mytopic'
    policyName: 'listen-policy'
    policyRights: ['Listen']
    secretName: 'sb-mytopic-listen'
  }
  {
    topicName: 'mytopic'
    policyName: 'send-policy'
    policyRights: ['Send']
    secretName: 'sb-mytopic-send'
  }
]

@batchSize(1)
resource topic_auth_rule 'Microsoft.ServiceBus/namespaces/topics/authorizationRules@2021-11-01' = [for policy in sharedAccessPolicies: {
  name: '${namespace}/${policy.topicName}/${policy.policyName}'
  properties: {
    rights: policy.policyRights
  }
}]
I've found a similar post about this issue, which is what led to my current solution, although I don't understand why this single API endpoint is so aggressively rate-limited.
Any advice on this would be much appreciated.
The code in my question now works as expected. After spending the past month talking to multiple levels of Microsoft support, I managed to get in touch with the ARM team, who looked into it and resolved the problem.
The alternative solution which I was suggested was to individually register each resource and create a huge dependency chain, see example below.
resource topic_auth_rule_listen 'Microsoft.ServiceBus/namespaces/topics/authorizationRules@2021-11-01' = {
  name: '${namespace}/mytopic/listen-policy'
  properties: {
    rights: [ 'Listen' ]
  }
}
resource topic_auth_rule_send 'Microsoft.ServiceBus/namespaces/topics/authorizationRules@2021-11-01' = {
  name: '${namespace}/mytopic/send-policy'
  properties: {
    rights: [ 'Send' ]
  }
  dependsOn: [ topic_auth_rule_listen ]
}
...

roleAssignment & diagnosticLog deployments fail if role/log already exist

I'm using Bicep to assign roles to my resources. The first run works perfectly, but any subsequent run fails because the role assignment already exists. The same goes for diagnostic settings: if they already exist, the pipeline fails.
Is there any way to check whether the resource exists and skip its deployment if so? Or, at the very least, to reduce the severity to a warning so the pipeline doesn't fail?
It took me a while to figure out the problem, because the Azure Pipelines log does not even return an error description; it just fails...
##[error]At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.
##[error]Details:
##[error]DeploymentFailed: At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.
##[error]Check out the troubleshooting guide to see if your issue is addressed: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment?view=azure-devops#troubleshooting
##[error]Task failed while creating or updating the template deployment.
Here is the log from the resource-group deployments - diagnosticLogs:
{"code":"DeploymentFailed","message":"At least one resource deployment operation failed.
Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{
"code":"Conflict",
"message":"Data sink '/subscriptions/X-X-X-X-X/resourceGroups/<NAME>/providers/Microsoft.Storage/storageAccounts/<NAME>'
is already used in diagnostic setting '<NAME>' for category 'allLogs'.
Data sinks can't be reused in different settings on the same category for the same resource."
}]}
The error from the roleAssignment:
{"code":"DeploymentFailed",
"message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{
"code":"RoleAssignmentExists",
"message":"The role assignment already exists."
}]}
Here is the bicep code for the deployment:
// roleAssignment
resource role_developer_adls_blob_contributors 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(resourceGroup().id, aad_admin_developer_group_object_id)
  scope: resourceGroup()
  properties: {
    description: 'Developer Group - BlobStorageContributor.'
    principalId: aad_admin_developer_group_object_id
    principalType: 'Group'
    roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', storageBlobDataContributorRoleID)
  }
}
// diagnosticLogs
resource keyvault_diagnostic_settings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: '${keyVaultName}-log-adls'
  scope: key_vault
  properties: {
    storageAccountId: adls_storage_base.id
    logs: [
      {
        categoryGroup: 'allLogs'
        enabled: true
      }
    ]
  }
}
A roleAssignment name needs to be unique for a given principal, role, and scope. The seed you pass to the guid() function is not unique enough; it should include the role as well:
name: guid(resourceGroup().id, aad_admin_developer_group_object_id, storageBlobDataContributorRoleID)
Note that since you already have a roleAssignment for that principal with those permissions at that scope, you'll have to remove the "old" roleAssignment before you can use the new naming scheme.
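Putting that seed into the resource from the question, the corrected assignment would look like this:

```bicep
resource role_developer_adls_blob_contributors 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  // including the role id in the seed makes the name unique per principal + role + scope,
  // so re-running the deployment is idempotent instead of colliding
  name: guid(resourceGroup().id, aad_admin_developer_group_object_id, storageBlobDataContributorRoleID)
  scope: resourceGroup()
  properties: {
    description: 'Developer Group - BlobStorageContributor.'
    principalId: aad_admin_developer_group_object_id
    principalType: 'Group'
    roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', storageBlobDataContributorRoleID)
  }
}
```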

How to change OS disk SKU for Virtual Machine Scale Set (VMSS)?

I create an Azure Kubernetes Service cluster with Bicep like below.
param clusterName string = 'kubernetes'
param location string = resourceGroup().location
resource aksCluster 'Microsoft.ContainerService/managedClusters@2022-02-01' = {
  name: clusterName
  location: location
  sku: {
    name: 'Basic'
    tier: 'Free'
  }
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    kubernetesVersion: '1.23.5'
    enableRBAC: true
    agentPoolProfiles: [
      {
        name: 'agentpool'
        mode: 'System'
        type: 'VirtualMachineScaleSets'
        orchestratorVersion: '1.23.5'
        enableAutoScaling: false
        enableFIPS: false
        maxPods: 110
        count: 1
        vmSize: 'Standard_B2s'
        osType: 'Linux'
        osSKU: 'Ubuntu'
        osDiskType: 'Managed'
        osDiskSizeGB: 0
        enableUltraSSD: false
        enableNodePublicIP: false
      }
    ]
    dnsPrefix: 'kubernetes-cluster-dns'
    networkProfile: {
      loadBalancerSku: 'basic'
      networkPlugin: 'kubenet'
    }
  }
}
Azure Kubernetes Service (AKS) then creates a Virtual Machine Scale Set as shown below.
I do not want to use a Premium SSD LRS OS disk; it is too expensive for me while learning Kubernetes. I want to change it to Standard SSD LRS.
What should I do?
2022.05.30 Update
I created an issue at Azure/bicep.
• Once an Azure resource is deployed, you cannot change its inherent hardware, i.e. disk storage, compute, and memory. Since you have already deployed an AKS cluster with 'Standard_B2s' Linux VMs and did not specify any configuration for the type of disk storage, it goes with the default/top option available according to the Azure portal and the underlying service fabric infrastructure.
Even when creating a VM or VMSS in the portal, the first option for the OS disk storage type is always 'Premium SSD (locally redundant storage)', as shown below. You would therefore have needed to specify the disk SKU (e.g. 'StandardSSD_LRS') in your Bicep template to deploy the AKS cluster VMs with Standard SSDs.
If you want to change the disk type after deploying the resources, that is not possible; you would need to redeploy the AKS cluster VMs for the disk type to change.
For more information, kindly refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/templates/microsoft.compute/disks?tabs=bicep#disksku
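If the goal is simply a cheaper OS disk while learning, one documented alternative worth considering is an ephemeral OS disk in a redeployed node pool, which avoids billing for a managed Premium SSD entirely. A minimal sketch of the relevant agent pool settings; the VM size and disk size here are assumptions (the chosen size's cache/temp storage must be large enough to hold the OS disk):

```bicep
agentPoolProfiles: [
  {
    name: 'agentpool'
    mode: 'System'
    count: 1
    vmSize: 'Standard_DS2_v2' // assumption: a size whose cache can hold the OS disk
    osDiskType: 'Ephemeral'   // OS disk lives on VM cache/temp storage, no managed disk billed
    osDiskSizeGB: 30
  }
]
```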

Setting the `customDomainVerificationId` property on an AppService in a Bicep deployment has no effect

I'm creating an App Service via Bicep, and trying to set the "Custom Domain Verification ID" property, so that I can setup the correct TXT records for verification (before the Bicep deployments run).
resource app_service 'Microsoft.Web/sites@2021-01-15' = {
  name: name
  location: location
  properties: {
    serverFarmId: plan_id
    siteConfig: {
      netFrameworkVersion: 'v4.6'
    }
    customDomainVerificationId: custom_domain_verification_id
  }
}
But the value I set isn't respected; something else shows up under "Custom Domains" on the App Service.
Is this property meant to be read-only?
Update
It seems this is indeed a read-only property. The value is the same across the entire subscription, so it makes sense that it can't be set on an individual App Service. I reported an issue here.
Yes, this property is read-only. It is populated when the App Service is created.
If you don't mind, please report this inaccuracy at https://aka.ms/bicep-type-issues
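Since the value is subscription-wide and read-only, a practical approach is to read it back out of the deployed site instead of trying to set it, e.g. via a deployment output (the output name here is arbitrary):

```bicep
output domainVerificationId string = app_service.properties.customDomainVerificationId
```

The TXT record for domain verification can then be created from this output in a later deployment step.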
