Event Grid System Topic with Service Bus Geo-disaster recovery - ARM template

I have an Event Grid System Topic that sends blob-created events to a Service Bus topic. The messages are processed by a .NET application running in AKS.
A requirement for high availability came up, so we decided to turn on Service Bus Geo-disaster recovery.
We set up the primary and secondary namespaces and want to use Service Bus through the Geo-DR alias.
Unfortunately, I cannot find a way to connect the Event Grid System Topic to this Service Bus Geo-DR alias; it only allows me to select a single Service Bus namespace.
We are using ARM templates, and I have tried different approaches, but none of them let me target the alias as the destination resource.
"resources": [{
"type": "Microsoft.EventGrid/systemTopics/eventSubscriptions",
"apiVersion": "2020-10-15-preview",
"name": "[concat(parameters('systemTopicName'), '/', parameters('eventSubscriptionName'))]",
"properties": {
"deliveryWithResourceIdentity": {
"identity": {
"type": "SystemAssigned"
},
"destination": {
"properties": {
"resourceId": "[concat(resourceId(resourceGroup().name, 'Microsoft.ServiceBus/namespaces', parameters('serviceBusNameSpace')), '/topics/blobcreated-event')]"
},
"endpointType": "ServiceBusTopic"
}
},
"filter": {
"subjectBeginsWith": "/blobServices/default/containers/stage",
"includedEventTypes": [
"Microsoft.Storage.BlobCreated"
],
"enableAdvancedFilteringOnArrays": true,
"advancedFilters": [{
"operatorType": "StringIn",
"key": "data.api",
"values": [
"PutBlob",
"PutBlockList",
"FlushWithClose"
]
}]
},
"labels": [],
"eventDeliverySchema": "EventGridSchema",
"retryPolicy": {
"maxDeliveryAttempts": 30,
"eventTimeToLiveInMinutes": 1440
}
}
}]
I have tried to add the alias to the path:
"resourceId": "[concat(resourceId(resourceGroup().name, 'Microsoft.ServiceBus/namespaces', parameters('serviceBusNameSpace')), '/disasterRecoveryConfigs/{aliassname}/topics/blobcreated-event')]"
But that gives me an 'Invalid ARM Id' error.
Another idea was to use a webhook, but Event Grid requires a validation handshake to prove ownership of the webhook endpoint, and I am not sure Service Bus is capable of that.
https://learn.microsoft.com/en-us/azure/event-grid/webhook-event-delivery
Has anybody got this working or found a workaround?

We had a discussion with Microsoft and got confirmation that SB Aliases are not supported with Event Grid
(they are looking into options to enable this).
The options therefore are either changing the event subscription as part of a DR scenario or having multiple subscribers in different regions.
I am planning to write a script that switches the event subscription and then executes the failover.
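A minimal Azure CLI sketch of that idea, assuming the event subscription template above is saved as eventgrid-subscription.json and using hypothetical resource names: redeploy the subscription against the secondary namespace, then trigger the failover (which is executed against the secondary namespace).
# Point the event subscription at the secondary namespace by redeploying the template above.
az deployment group create \
  --resource-group my-rg \
  --template-file eventgrid-subscription.json \
  --parameters systemTopicName=<systemTopic> eventSubscriptionName=<subscription> serviceBusNameSpace=<secondaryNamespace>

# Then execute the Geo-DR failover so the secondary namespace becomes the active one.
az servicebus georecovery-alias fail-over \
  --resource-group <secondaryResourceGroup> \
  --namespace-name <secondaryNamespace> \
  --alias <aliasName>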

Related

Azure Function's log streams from Azure Event Hub can't be consumed by Filebeat

Our team wants to import the logs of an Azure Function into Elasticsearch via Azure Event Hub and Filebeat. We followed this reference to set up an Event Hub for the Azure Function, but we ran into an issue with the format of the log stream.
First, let me show the format we expect. Take Azure PostgreSQL's log from Event Hub as an example:
[
{
"records": [
{
"time": "2023-01-04T03:45:31.1040000Z",
"properties": {
"timestamp": "2023-01-04 03:45:31.104 UTC",
"processId": 8909,
"errorLevel": "LOG",
"sqlerrcode": "00000",
"message": "2023-01-04 03:45:31 UTC-63b4f65b.22cd-LOG: connection received: host=<host> port=<port>"
},
"resourceId": "/SUBSCRIPTIONS/<my subscription id>/RESOURCEGROUPS/<my resource group>/PROVIDERS/MICROSOFT.DBFORPOSTGRESQL/FLEXIBLESERVERS/<postgres server>",
"category": "PostgreSQLLogs",
"operationName": "LogEvent"
}
]
}
]
Notice that properties is a proper JSON object, so it can be consumed by Filebeat. This is the kind of properties we want. But the Azure Function's log looks like the following, where properties is a string rather than a JSON object:
[
{
"records": [
{
"level": "Informational",
"resourceId": "/SUBSCRIPTIONS/<my subscription id>/RESOURCEGROUPS/<my resource group>/PROVIDERS/MICROSOFT.WEB/SITES/<my azure function>"
"operationName": "Microsoft.Web/sites/functions/log",
"category": "FunctionAppLogs",
"time": "01/04/2023 01:55:00",
"properties": "{'appName':'<my azure function>','roleInstance':'<id>','message':'Host Status: {\\n \\'id\\': \\'<function app id>\\',\\n \\'state\\': \\'Running\\',\\n \\'version\\': \\'4.13.0.0\\',\\n \\'versionDetails\\': \\'4.13.0+da9a765ed67be48c79440526f78fa1b5c6efdeea\\',\\n \\'platformVersion\\': \\'99.0.10.764\\',\\n \\'instanceId\\': \\'<instance id>\\',\\n \\'computerName\\': \\'<computer name>\\',\\n \\'processUptime\\': 69254486,\\n \\'functionAppContentEditingState\\': \\'Unknown\\'\\n}','category':'Host.Controllers.Host','hostVersion':'4.13.0.0','hostInstanceId':'<host id>','level':'Information','levelId':2,'processId':1}",
"EventStampType": "Stamp",
"EventPrimaryStampName": "waws-prod-ty1-081",
"EventStampName": "waws-prod-ty1-081",
"Host": "<host name>",
"EventIpAddress": "<ip address>"
},
The string value of properties can't be processed by Filebeat's decode_json_fields either, because it is not valid JSON (the format is 'key': value rather than "key": value). Is there any way to correct the format of properties before it is consumed by Filebeat? By the way, our Azure Function is deployed as a container.

ARM Deployment error: Unable to freeze secondary namespace before creating pairing, this is probably because secondary namespace is not empty

I have two premium Service Bus instances deployed manually through the Azure portal. They don't have a geo-recovery alias configured, and the instances have been operational for about a year.
Now I'm trying to automate the deployment of these Service Bus instances and also add a geo-recovery alias resource to them, as follows:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"serviceBusNamespaceName": {
"type": "string",
"metadata": {
"description": "Name of the Service Bus namespace"
}
},
"serviceBusQueueName": {
"type": "string",
"metadata": {
"description": "Name of the Queue"
}
},
"serviceBusLocation": {
"type": "string"
},
"sku": {
"type": "object",
"defaultValue": "Standard"
},
"serviceBusTopicName": {
"type": "string"
},
"serviceBusSubscriptionName": {
"type": "string"
},
"isAliasEnabled": {
"type": "bool"
},
"isQueueCreationEnabled": {
"type": "bool"
},
"aliasName": {
"type": "string"
},
"partnerNamespace": {
"type": "string"
}
},
"variables": {
"defaultSASKeyName": "RootManageSharedAccessKey",
"authRuleResourceId": "[resourceId('Microsoft.ServiceBus/namespaces/authorizationRules', parameters('serviceBusNamespaceName'), variables('defaultSASKeyName'))]",
"sbVersion": "2017-04-01"
},
"resources": [
{
"apiVersion": "2018-01-01-preview",
"name": "[parameters('serviceBusNamespaceName')]",
"type": "Microsoft.ServiceBus/Namespaces",
"location": "[parameters('serviceBusLocation')]",
"sku": {
"name": "[parameters('sku').name]",
"tier": "[parameters('sku').tier]",
"capacity": "[parameters('sku').capacity]"
},
"properties": {
"zoneRedundant": false
},
"resources": [
{
"apiVersion": "2017-04-01",
"name": "[parameters('aliasName')]",
"type": "disasterRecoveryConfigs",
"condition": "[parameters('isAliasEnabled')]",
"dependsOn": [
"[concat('Microsoft.ServiceBus/namespaces/', parameters('serviceBusNamespaceName'))]"
],
"properties": {
"partnerNamespace": "[parameters('partnerNamespace')]"
}
}
]
}
]
}
I'm using the same template to deploy the primary and secondary instances separately. Note that the disasterRecoveryConfigs resource will only be deployed when it's the primary instance.
This template successfully deploys the secondary namespace, but the primary namespace deployment fails with the following error:
Unable to freeze secondary namespace before creating pairing, this is
probably because secondary namespace is not empty.
Which is accurate, i.e. the secondary namespace already has a couple of topics/subscriptions and queues. I don't want to delete them; I just want to pair the primary and secondary namespaces.
How can this be done?
I had a similar issue with the Service Bus Geo-Recovery ARM template. Reading the exception closely, it states that the secondary namespace is not empty, which means you have to delete the topics and queues from the secondary namespace and then run the template again. It will then work and recreate the topics and queues based on the primary namespace.
But if you run the template a second time, you will get a different exception: the secondary namespace cannot be updated (since it is in geo-pairing). It seems strange, but by design you cannot update the secondary namespace while it is in geo-pairing, and even after you break the pairing, the secondary namespace must be empty, without any entities such as topics or queues.
How to overcome this?
Let's say you now want to add a topic or queue to an existing deployment using the ARM template: you will run into this issue whenever the template sits in a pipeline (or anywhere else) and needs to run multiple times to update the existing primary namespace.
1. Quick fix (one time only; the next time you have to repeat the following steps manually)
Log in to the Azure portal.
Go to your primary Service Bus namespace.
Click on the Geo-Recovery option under the Settings section.
On the right-hand side, at the top, find the Break pairing option and click on it.
This breaks the pairing; if you skip this step you will get the exception 'Secondary namespace cannot be updated'.
Next, delete the secondary namespace's entities (or the whole namespace) and run the pipeline. It will work.
If you skip that step, you will get the 'unable to freeze secondary namespace' error.
This is a one-time fix; if you run the template again, you have to repeat the above process manually.
2. Automation using a CI/CD DevOps pipeline, the Azure CLI, or PowerShell
Most of the time ARM templates run in a pipeline, and you can break the pairing using the Azure CLI or PowerShell. Consider adding two tasks to your YAML file.
First task: break the pairing.
Azure CLI
az servicebus georecovery-alias break-pair --resource-group myresourcegroup --namespace-name primarynamespace --alias myaliasname
PowerShell
Set-AzureRmServiceBusGeoDRConfigurationBreakPair -ResourceGroupName $resourcegroup -Name $aliasname -Namespace $primarynamespace
Second task: delete the secondary namespace entities (topics, queues) or delete the entire namespace.
PowerShell
Remove-AzServiceBusNamespace -ResourceGroup Default-ServiceBus-WestUS -NamespaceName SB-Example1
To remove a topic or queue instead of the whole namespace, refer to the following documentation:
Azure Service Bus Management Common PowerShell commands
Also, if you are running the template locally, you can add a small script or CLI command before running your template.
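For example, a minimal Azure CLI sketch that empties the secondary namespace before the template runs (namespace and entity names are hypothetical):
# Delete entities from the secondary namespace so it is empty before pairing.
az servicebus topic delete --resource-group my-rg --namespace-name <secondaryNamespace> --name <topicName>
az servicebus queue delete --resource-group my-rg --namespace-name <secondaryNamespace> --name <queueName>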
Does deleting the secondary namespace or its entities affect the connection string or data?
That is a valid question: what happens to the connection string or the data, given that some clients are already using them? The answer is that the connection string won't change when we delete the secondary namespace, because in a geo-recovery scenario we are supposed to use the alias connection string, so there is no impact on existing clients.
Regarding the data, the secondary namespace doesn't store any data, only metadata; it starts serving traffic only in the case of a failover.
So deleting the secondary namespace or its entities during deployment won't impact anything.
Is there any better option?
You might be wondering why you should follow such a long process, but the problem above stems from the geo-recovery design (Service Bus, Event Hubs, Event Grid, etc.), and there is no other option.
I hope Microsoft will come up with some better approach in the future.
If you try to create a pairing between a primary namespace with a private endpoint and a secondary namespace without a private endpoint, the pairing will fail.
You could refer to this template, which allows you to configure a Service Bus Geo-disaster recovery alias:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"serviceBusNamespaceNamePrimary": {
"type": "string",
"metadata": {
"description": "Name of Service Bus namespace"
}
},
"serviceBusNamespaceNameSecondary": {
"type": "string",
"metadata": {
"description": "Name of Service Bus namespace"
}
},
"aliasName": {
"type": "string",
"metadata": {
"description": "Name of Geo-Recovery Configuration Alias "
}
},
"locationSecondaryNamepsace": {
"type": "string",
"defaultValue": "South Central US",
"metadata": {
"description": "Location of Secondary namespace"
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location of Primary namespace"
}
}
},
"variables": {
"defaultSASKeyName": "RootManageSharedAccessKey",
"defaultAuthRuleResourceId": "[resourceId('Microsoft.ServiceBus/namespaces/authorizationRules', parameters('serviceBusNamespaceNamePrimary'), variables('defaultSASKeyName'))]"
},
"resources": [
{
"apiVersion": "2017-04-01",
"name": "[parameters('serviceBusNamespaceNameSecondary')]",
"type": "Microsoft.ServiceBus/Namespaces",
"location": "[parameters('locationSecondaryNamepsace')]",
"sku": {
"name": "Premium",
"tier": "Premium",
"capacity": 4
},
"tags": {
"tag1": "value1",
"tag2": "value2"
}
},
{
"apiVersion": "2017-04-01",
"type": "Microsoft.ServiceBus/Namespaces",
"dependsOn": [ "[concat('Microsoft.ServiceBus/namespaces/', parameters('serviceBusNamespaceNameSecondary'))]" ],
"name": "[parameters('serviceBusNamespaceNamePrimary')]",
"location": "[parameters('location')]",
"sku": {
"name": "Premium",
"tier": "Premium",
"capacity": 4
},
"tags": {
"tag1": "value1",
"tag2": "value2"
},
"resources": [
{
"apiVersion": "2017-04-01",
"name": "[parameters('aliasName')]",
"type": "disasterRecoveryConfigs",
"dependsOn": [ "[concat('Microsoft.ServiceBus/namespaces/', parameters('serviceBusNamespaceNamePrimary'))]" ],
"properties": {
"partnerNamespace": "[resourceId('Microsoft.ServiceBus/Namespaces', parameters('serviceBusNamespaceNameSecondary'))]"
}
}
]
}
],
"outputs": {
"NamespaceDefaultConnectionString": {
"type": "string",
"value": "[listkeys(variables('defaultAuthRuleResourceId'), '2017-04-01').primaryConnectionString]"
},
"DefaultSharedAccessPolicyPrimaryKey": {
"type": "string",
"value": "[listkeys(variables('defaultAuthRuleResourceId'), '2017-04-01').primaryKey]"
}
}
}

Enable HTTPS on Azure Front Door custom domain with ARM template deployment

I am deploying an Azure Front Door via an ARM template, and attempting to enable HTTPS on a custom domain.
According to the Azure documentation for Front Door, there is a quick start template to "Add a custom domain to your Front Door and enable HTTPS traffic for it with a Front Door managed certificate generated via DigiCert." However, while this adds a custom domain, it does not enable HTTPS.
Looking at the ARM template reference for Front Door, I can't see any obvious way to enable HTTPS, but perhaps I'm missing something?
Notwithstanding the additional information below, I'd like to be able to enable HTTPS on a Front Door custom domain via an ARM template deployment. Is this possible at this time?
Additional information
Note that there is a REST operation to enable HTTPS, but this does not seem to work with a Front Door managed certificate -
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/frontDoors/{frontDoorName}/frontendEndpoints/{frontendEndpointName}/enableHttps?api-version=2019-05-01
{
"certificateSource": "FrontDoor",
"protocolType": "ServerNameIndication",
"minimumTLSVersion": "1.2"
}
There is also an Az PowerShell cmdlet to enable HTTPS, which does work.
Enable-AzFrontDoorCustomDomainHttps -ResourceGroupName "lmk-bvt-accounts-front-door" -FrontDoorName "my-front-door" -FrontendEndpointName "my-front-door-rg"
UPDATE: This implementation currently seems to be unstable and is working only intermittently, which indicates it may not be production ready yet.
This now actually seems to be possible with ARM templates, after tracking down the latest Front Door API (2020-01-01) specs (which don't appear to be fully published in the MS reference websites yet):
https://github.com/Azure/azure-rest-api-specs/tree/master/specification/frontdoor/resource-manager/Microsoft.Network/stable/2020-01-01
There's a new customHttpsConfiguration property in the frontendEndpoint properties object:
"customHttpsConfiguration": {
"certificateSource": "AzureKeyVault" // or "FrontDoor",
"minimumTlsVersion":"1.2",
"protocolType": "ServerNameIndication",
// Depending on "certificateSource" you supply either:
"keyVaultCertificateSourceParameters": {
"secretName": "<secret name>",
"secretVersion": "<secret version>",
"vault": {
"id": "<keyVault ResourceID>"
}
}
// Or:
"frontDoorCertificateSourceParameters": {
"certificateType": "Dedicated"
}
}
KeyVault Managed SSL Certificate Example
Note: I have tested this and it appears to work.
{
"type": "Microsoft.Network/frontdoors",
"apiVersion": "2020-01-01",
"properties": {
"frontendEndpoints": [
{
"name": "[variables('frontendEndpointName')]",
"properties": {
"hostName": "[variables('customDomain')]",
"sessionAffinityEnabledState": "Enabled",
"sessionAffinityTtlSeconds": 0,
"webApplicationFirewallPolicyLink": {
"id": "[variables('wafPolicyResourceId')]"
},
"resourceState": "Enabled",
"customHttpsConfiguration": {
"certificateSource": "AzureKeyVault",
"minimumTlsVersion":"1.2",
"protocolType": "ServerNameIndication",
"keyVaultCertificateSourceParameters": {
"secretName": "[parameters('certKeyVaultSecret')]",
"secretVersion": "[parameters('certKeyVaultSecretVersion')]",
"vault": {
"id": "[resourceId(parameters('certKeyVaultResourceGroupName'),'Microsoft.KeyVault/vaults',parameters('certKeyVaultName'))]"
}
}
}
}
}
],
...
}
}
Front Door Managed SSL Certificate Example
It looks like for a Front Door managed certificate you would need to set the following:
Note: I have not tested this
{
"type": "Microsoft.Network/frontdoors",
"apiVersion": "2020-01-01",
"properties": {
"frontendEndpoints": [
{
"name": "[variables('frontendEndpointName')]",
"properties": {
"hostName": "[variables('customDomain')]",
"sessionAffinityEnabledState": "Enabled",
"sessionAffinityTtlSeconds": 0,
"webApplicationFirewallPolicyLink": {
"id": "[variables('wafPolicyResourceId')]"
},
"resourceState": "Enabled",
"customHttpsConfiguration": {
"certificateSource": "FrontDoor",
"minimumTlsVersion":"1.2",
"protocolType": "ServerNameIndication",
"frontDoorCertificateSourceParameters": {
"certificateType": "Dedicated"
}
}
}
}
],
...
}
}
I was able to successfully make an enableHttps REST call using the Azure Management API.
I got a successful response and can see the resulting resource in the portal.azure.com and resources.azure.com sites.
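For reference, a call like that can also be issued from the Azure CLI with az rest; the sketch below simply mirrors the REST operation and request body shown in the question (the values in curly braces are placeholders for your own subscription, resource group, Front Door, and frontend endpoint names):
az rest --method post \
  --uri "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/frontDoors/{frontDoorName}/frontendEndpoints/{frontendEndpointName}/enableHttps?api-version=2019-05-01" \
  --body '{"certificateSource": "FrontDoor", "protocolType": "ServerNameIndication", "minimumTLSVersion": "1.2"}'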
However, I am pretty sure the Management API and PowerShell methods are the only ways supported right now. Since there is likely some validation required around the certificate handling, it hasn't been included in ARM templates yet. Given that validation can be quite important, it is best to confirm your configuration works in the UI first before automating it (IMHO).
According to this discussion this seems only possible via the REST API (see e.g. this answer) and not (yet) via ARM.
I managed to get this working with an ARM template. The below link shows you how to do this using Azure Front Door as a certificate source:
https://github.com/Azure/azure-quickstart-templates/blob/master/101-front-door-custom-domain/azuredeploy.json
I drew inspiration from this for deploying a certificate from Azure Key Vault for a custom domain. Here are the relevant elements from the ARM template that I am using:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"hubName": {
"type": "string",
"metadata": {
"description": "Name to assign to the hub. This name will prefix all resources contained in the hub."
}
},
"frontdoorName": {
"type": "string",
"metadata": {
"description": "Name to assign to the Frontdoor instance"
}
},
"frontdoorCustomDomain": {
"type": "string",
"metadata": {
"description": "The custom domain name to be applied to the provisioned Azure Frontdoor instance"
}
},
"keyVaultCertificateName": {
"type": "string",
"metadata": {
"description": "Name of the TLS certificate in the Azure KeyVault to be deployed to Azure Frontdoor for supporting TLS over a custom domain",
"assumptions": [
"Azure KeyVault containing the TLS certificate is deployed to the same resource group as the resource group where Azure Frontdoor will be deployed to",
"Azure KeyVault name is the hub name followed by '-keyvault' (refer to variable 'keyVaultName' in this template)"
]
}
},
...
},
"variables": {
"frontdoorName": "[concat(parameters('hubName'), '-', parameters('frontdoorName'))]",
"frontdoorEndpointName": "[concat(variables('frontdoorName'), '-azurefd-net')]",
"customDomainFrontdoorEndpointName": "[concat(variables('frontdoorName'), '-', replace(parameters('frontdoorCustomDomain'), '.', '-'))]",
"keyVaultName": "[concat(parameters('hubName'), '-keyvault')]",
"frontdoorHostName": "[concat(variables('frontdoorName'), '.azurefd.net')]",
...
},
"resources": [
{
"type": "Microsoft.Network/frontdoors",
"apiVersion": "2020-05-01",
"name": "[variables('frontdoorName')]",
"location": "Global",
"properties": {
"resourceState": "Enabled",
"backendPools": [...],
"healthProbeSettings": [...],
"frontendEndpoints": [
{
"id": "[concat(resourceId('Microsoft.Network/frontdoors', variables('frontdoorName')), concat('/FrontendEndpoints/', variables('frontdoorEndpointName')))]",
"name": "[variables('frontdoorEndpointName')]",
"properties": {
"hostName": "[variables('frontdoorHostName')]",
"sessionAffinityEnabledState": "Enabled",
"sessionAffinityTtlSeconds": 0,
"resourceState": "Enabled"
}
},
{
"id": "[concat(resourceId('Microsoft.Network/frontdoors', variables('frontdoorName')), concat('/FrontendEndpoints/', variables('customDomainFrontdoorEndpointName')))]",
"name": "[variables('customDomainFrontdoorEndpointName')]",
"properties": {
"hostName": "[parameters('frontdoorCustomDomain')]",
"sessionAffinityEnabledState": "Enabled",
"sessionAffinityTtlSeconds": 0,
"resourceState": "Enabled"
}
}
],
"loadBalancingSettings": [...],
"routingRules": [...],
"backendPoolsSettings": {
"enforceCertificateNameCheck": "Enabled",
"sendRecvTimeoutSeconds": 30
},
"enabledState": "Enabled",
"friendlyName": "[variables('frontdoorName')]"
}
},
{
"type": "Microsoft.Network/frontdoors/frontendEndpoints/customHttpsConfiguration",
"apiVersion": "2020-07-01",
"name": "[concat(variables('frontdoorName'), '/', variables('customDomainFrontdoorEndpointName'), '/default')]",
"dependsOn": [
"[resourceId('Microsoft.Network/frontdoors', variables('frontdoorName'))]"
],
"properties": {
"protocolType": "ServerNameIndication",
"certificateSource": "AzureKeyVault",
"minimumTlsVersion": "1.2",
"keyVaultCertificateSourceParameters": {
"secretName": "[parameters('keyVaultCertificateName')]",
"vault": {
"id": "[resourceId(resourceGroup().name, 'Microsoft.KeyVault/vaults', variables('keyVaultName'))]"
}
}
}
}
]
}
Azure Front Door classic now seems to support both managed certificates and custom certificates for custom domains. At least there are quickstart templates in the official repo from Microsoft exactly for these cases:
managed certificate
custom certificate
They both use Microsoft.Network/frontdoors/frontendEndpoints/customHttpsConfiguration subresource of the Front Door, currently with API version 2020-07-01. Only the parent subresource is documented in the templates reference, though.
The name of the customHttpsConfiguration resource is "default", so when the resource is specified as a top-level resource in the template, its complete name is something like "myfrontdoorafd/www-example-com/default".
Using Bicep (which transpiles to JSON ARM templates and which I highly recommend), the important part of the template looks like this:
param frontDoorName string
param customDomainName string
var frontEndEndpointCustomName = replace(customDomainName, '.', '-')
resource frontDoor 'Microsoft.Network/frontDoors@2020-01-01' = {
name: frontDoorName
properties: {
frontendEndpoints: [
{
name: frontEndEndpointCustomName
properties: {
hostName: customDomainName
...
}
}
...
]
...
}
...
resource frontendEndpoint 'frontendEndpoints' existing = {
name: frontEndEndpointCustomName
}
}
// This resource enables a Front Door-managed TLS certificate on the frontend.
resource customHttpsConfiguration 'Microsoft.Network/frontdoors/frontendEndpoints/customHttpsConfiguration@2020-07-01' = {
parent: frontDoor::frontendEndpoint
name: 'default'
properties: {
protocolType: 'ServerNameIndication'
certificateSource: 'FrontDoor'
frontDoorCertificateSourceParameters: {
certificateType: 'Dedicated'
}
minimumTlsVersion: '1.2'
}
}
Note that the deployment stays in progress until the certificate is actually issued and deployed to all Azure points of presence (PoPs). This may take a very long time and may even fail with a RequestTimeout. If you want to just start the operation and let it complete asynchronously, use e.g. the enable-https subcommand in the Azure CLI. Even after such a failure, customHttpsProvisioningState remains Pending and the certificate provisioning process may still complete successfully.
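For example, a sketch with hypothetical names (the command lives in the front-door CLI extension and, per the note above, lets the certificate rollout continue in the background):
az network front-door frontend-endpoint enable-https \
  --resource-group my-rg \
  --front-door-name my-front-door \
  --name www-example-com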
Also note that when you have many frontend endpoints and changes happen frequently but most frontend endpoints stay unchanged, the pattern from this template cannot be generalized just by specifying multiple customHttpsConfiguration instances for multiple frontend endpoints. Such a generalization is not efficient and likely hits the rate limit of the underlying API (429 TooManyRequests) because the API is called even when the endpoint already has the HTTPS configuration.
In such a case, I was able to use nested templates and conditional deployment to deploy the customHttpsConfiguration subresource only when the frontend endpoint's property customHttpsProvisioningState has the value of Disabled. This works OK even with tens of frontend endpoints when a new frontend endpoint is added (and it should get a managed certificate). Even in deployment mode Complete, the once-applied configuration persists.

Azure Front Door - How to add geo filtering policy?

I want to apply a geo filter to Azure Front Door for countries that are outside of the US.
I've applied a WAF policy (following the Microsoft docs), but I'm not getting the desired result. All traffic appears to be denied. If I try a different country code, all traffic seems to be allowed.
Here's an example of a deny policy I'm trying to get working. If I apply this rule and test via locabrowser, the traffic is allowed.
I'm testing this theory by using locabrowser to simulate traffic from different locations.
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"frontdoorwebapplicationfirewallpolicies_DenyChinaWafPolicy_name": {
"defaultValue": "DenyChinaWafPolicy",
"type": "String"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.Network/frontdoorwebapplicationfirewallpolicies",
"apiVersion": "2018-08-01",
"name": "[parameters('frontdoorwebapplicationfirewallpolicies_DenyChinaWafPolicy_name')]",
"location": "Global",
"properties": {
"policySettings": {
"enabledState": "Enabled",
"mode": "Prevention"
},
"customRules": {
"rules": [
{
"name": "geoFilterRule",
"priority": 1,
"ruleType": "MatchRule",
"rateLimitDurationInMinutes": 1,
"rateLimitThreshold": 0,
"matchConditions": [
{
"matchVariable": "RemoteAddr",
"operator": "GeoMatch",
"negateCondition": false,
"matchValue": [
"CH"
]
}
],
"action": "Block"
}
]
},
"managedRules": {
"ruleSets": []
}
}
}
]
}
Geo-filtering in AFD is currently broken: it matches all countries instead of the specified location. A fix has been made and will be released soon. I will update here once the fix is rolled out.
This also does not work for me. Whether I set the action to allow or block with "matchVariable": "RemoteAddr" and "operator": "GeoMatch", the policy seems to ignore the "matchValue" and just act based on the action. It seems that geo filtering with WAF is still not available.
Please note that the Azure web application firewall (WAF) for Azure Front Door is currently in public preview.
You could voice your opinion or vote on feedback1 and feedback2 about geo-filtering.

How to set the connection string for a Service Bus Logic App action in an ARM template?

I'm attempting to deploy an Azure Logic App that includes an action to Send a message on a Service Bus using an ARM template.
In addition to deploying the Logic App, the ARM template deploys a Service Bus Namespace, a Queue and two AuthorizationRule (one for sending and one for listening).
I want to dynamically set the connection information for the Send Service Bus Message action to use the Connection string generated for the AuthorizationRule that supports sending.
When I create this in the portal editor (specifying the connection string for sending), I noticed the following is generated in code view...
"Send_message.": {
"conditions": [
{
"dependsOn": "<previous action>"
}
],
"inputs": {
"body": {
"ContentData": "#{encodeBase64(triggerBody())}"
},
"host": {
"api": {
"runtimeUrl": "https://logic-apis-westus.azure-apim.net/apim/servicebus"
},
"connection": {
"name": "#parameters('$connections')['servicebus']['connectionId']"
}
},
"method": "post",
"path": "/#{encodeURIComponent(string('<queuename>'))}/messages"
},
"type": "apiconnection"
}
},
I assume that the connection information is somehow buried in #parameters('$connections')['servicebus']['connectionId'].
I then used resources.azure.com to navigate to the Logic App to see if I could get more details on how #parameters('$connections')['servicebus']['connectionId'] is defined.
I found this:
"parameters": {
"$connections": {
"value": {
"servicebus": {
"connectionId": "/subscriptions/<subguid>/resourceGroups/<rgname>/providers/Microsoft.Web/connections/servicebus",
"connectionName": "servicebus",
"id": "/subscriptions/<subguid>/providers/Microsoft.Web/locations/westus/managedApis/servicebus"
}
}
}
}
But I still don't see where the connection string is set.
Where can I set the connection string for the service bus action in an ARM template using something like the following?
[listkeys(variables('sendAuthRuleResourceId'), variables('sbVersion')).primaryConnectionString]
EDIT: Also, I've referred to what seems to be a promising Azure quickstart on GitHub (based on the title), but I can't make any sense of it. It appears to use an older schema, 2014-12-01-preview, and the "queueconnector" references an API Gateway. If there is a newer example out there for this scenario, I'd love to see it.
I've recently worked on an ARM template for deploying Logic Apps and a Service Bus connection. Here is a sample template for configuring the Service Bus connection string within the "Microsoft.Web/connections" resource type. Hope it helps.
{
"type": "Microsoft.Web/connections",
"apiVersion": "2016-06-01",
"name": "[parameters('connections_servicebus_name')]",
"location": "centralus",
"dependsOn": [
"[resourceId('Microsoft.ServiceBus/namespaces/AuthorizationRules', parameters('ServiceBusNamespace'), 'RootManageSharedAccessKey')]"
],
"properties": {
"displayName": "ServiceBusConnection",
"customParameterValues": {},
"api": {
"id": "[concat(subscription().id, '/providers/Microsoft.Web/locations/centralus/managedApis/servicebus')]"
},
"parameterValues": {
"connectionString": "[listKeys(resourceId('Microsoft.ServiceBus/namespaces/authorizationRules', parameters('ServiceBusNamespace'), 'RootManageSharedAccessKey'), '2017-04-01').primaryConnectionString]"
}
}
}
As you know, the connection is a resource, so it needs to be created first. Did you refer to this: https://blogs.msdn.microsoft.com/logicapps/2016/02/23/deploying-in-the-logic-apps-preview-refresh/? The quickstart link you are referring to is for an older schema.
