I am currently reading this: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-auto-failover-group, and I have a hard time understanding the automatic failover policy:
By default, a failover group is configured with an automatic failover
policy. The SQL Database service triggers failover after the failure
is detected and the grace period has expired. The system must verify
that the outage cannot be mitigated by the built-in high availability
infrastructure of the SQL Database service due to the scale of the
impact. If you want to control the failover workflow from the
application, you can turn off automatic failover.
When defining the failover group in an ARM template:
{
"condition": "[equals(parameters('redundancyId'), 'pri')]",
"type": "Microsoft.Sql/servers",
"kind": "v12.0",
"name": "[variables('sqlServerPrimaryName')]",
"apiVersion": "2014-04-01-preview",
"location": "[parameters('location')]",
"properties": {
"administratorLogin": "[parameters('sqlServerPrimaryAdminUsername')]",
"administratorLoginPassword": "[parameters('sqlServerPrimaryAdminPassword')]",
"version": "12.0"
},
"resources": [
{
"condition": "[equals(parameters('redundancyId'), 'pri')]",
"apiVersion": "2015-05-01-preview",
"type": "failoverGroups",
"name": "[variables('sqlFailoverGroupName')]",
"properties": {
"serverName": "[variables('sqlServerPrimaryName')]",
"partnerServers": [
{
"id": "[resourceId('Microsoft.Sql/servers/', variables('sqlServerSecondaryName'))]"
}
],
"readWriteEndpoint": {
"failoverPolicy": "Automatic",
"failoverWithDataLossGracePeriodMinutes": 60
},
"readOnlyEndpoint": {
"failoverPolicy": "Disabled"
},
"databases": [
"[resourceId('Microsoft.Sql/servers/databases', variables('sqlServerPrimaryName'), variables('sqlDatabaseName'))]"
]
},
"dependsOn": [
"[variables('sqlServerPrimaryName')]",
"[resourceId('Microsoft.Sql/servers/databases', variables('sqlServerPrimaryName'), variables('sqlDatabaseName'))]",
"[resourceId('Microsoft.Sql/servers', variables('sqlServerSecondaryName'))]"
]
},
{
"condition": "[equals(parameters('redundancyId'), 'pri')]",
"name": "[variables('sqlDatabaseName')]",
"type": "databases",
"apiVersion": "2014-04-01-preview",
"location": "[parameters('location')]",
"dependsOn": [
"[variables('sqlServerPrimaryName')]"
],
"properties": {
"edition": "[variables('sqlDatabaseEdition')]",
"requestedServiceObjectiveName": "[variables('sqlDatabaseServiceObjective')]"
}
}
]
},
{
"condition": "[equals(parameters('redundancyId'), 'pri')]",
"type": "Microsoft.Sql/servers",
"kind": "v12.0",
"name": "[variables('sqlServerSecondaryName')]",
"apiVersion": "2014-04-01-preview",
"location": "[variables('sqlServerSecondaryRegion')]",
"properties": {
"administratorLogin": "[parameters('sqlServerSecondaryAdminUsername')]",
"administratorLoginPassword": "[parameters('sqlServerSecondaryAdminPassword')]",
"version": "12.0"
}
}
I specify the readWriteEndpoint like this:
"readWriteEndpoint": {
"failoverPolicy": "Automatic",
"failoverWithDataLossGracePeriodMinutes": 60
}
That is, failoverWithDataLossGracePeriodMinutes is set to 60 minutes.
What does this mean? I cannot find a clear answer anywhere. Does it mean that:
When an outage occurs in my primary region, where my primary database resides, the read/write endpoint keeps pointing to the primary, and only after 60 minutes does it fail over to my secondary, which becomes the new primary. During those 60 minutes, the only way to read my data is to use the readOnlyEndpoint directly? OR
My read/write endpoint is switched over immediately, if the service can somehow detect that there is no data left to sync?
I think it boils down to: if I detect an outage, don't care about data loss, and want to be able to write to my database, do I have to trigger the failover manually?
Bonus question: is the grace period there because there can be unsynced data on the primary that would be overwritten, or thrown away, if the secondary becomes the new primary (if I switch manually)?
Sorry, I can't keep it to only one question. I have read a lot and I really need to know this.
What does this mean?
It means that:
"when a outage is happening in my primary region where my primary database resides, the read/write endpoint points to the primary and only after 60 minutes it fails over to my secondary, which becomes the new primary. "
It won't fail over automatically, even when the data is fully synced, because the built-in high-availability infrastructure in the primary region is trying to recover the database there, and almost all of the time your primary database will come back quickly within the primary region. An automatic cross-region failover triggered earlier would interfere with that.
And
"the reason why the grace period is present, is that because the there can be unsynced data on the primary, that will be overwritten, or tossed away, if the secondary becomes the new primary"
And to allow time for the database to failover within the primary region.
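In practice, if you detect an outage and are willing to accept data loss rather than wait out the grace period, you can force the failover yourself. A minimal sketch with the Azure CLI, assuming placeholder names (myResourceGroup, mySecondaryServer, myFailoverGroup):
# Promote the secondary server without waiting for the grace period.
# --allow-data-loss skips full synchronization, so unsynced transactions may be lost.
az sql failover-group set-primary --resource-group myResourceGroup --server mySecondaryServer --name myFailoverGroup --allow-data-loss
Without --allow-data-loss the command performs a friendly failover, which waits for full synchronization first.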
Related
We successfully deployed the primary and secondary Azure SQL servers, with the expected failover groups, from an ARM template. Deploying the same ARM template on subsequent runs returns the following error message:
"error": {
"code": "FailoverGroupCreateOrUpdateRequestReadOnlyPropertyModified",
"message": "The create or update failover group request body should not modify the read-only property 'location'."
}
}
We haven't made any changes to the primary or secondary server's location property, despite what the error message indicates.
Code snippet from the ARM template:
{
"comments": "Azure SQL Server Failover Group",
"condition": "[parameters('isProduction')]",
"type": "Microsoft.Sql/servers/failoverGroups",
"apiVersion": "2015-05-01-preview",
"name": "[concat(variables('sqlServerPrimaryName'), '/', variables('sqlServerFailoverName'))]",
"location": "[parameters('sqlServerPrimaryLocation')]",
"dependsOn": [
"[resourceId('Microsoft.Sql/servers', variables('sqlServerPrimaryName'))]",
"[resourceId('Microsoft.Sql/servers', variables('sqlServerSecondaryName'))]",
"[resourceId('Microsoft.Sql/servers/databases', variables('sqlServerPrimaryName'), variables('adminDbName'))]",
"[resourceId('Microsoft.Sql/servers/databases', variables('sqlServerPrimaryName'), variables('trxnDbName'))]",
"[resourceId('Microsoft.Sql/servers/databases', variables('sqlServerPrimaryName'), variables('dbaDbName'))]"
],
"properties": {
"readWriteEndpoint": {
"failoverPolicy": "Automatic",
"failoverWithDataLossGracePeriodMinutes": 60
},
"readOnlyEndpoint": {
"failoverPolicy": "Disabled"
},
"partnerServers": [
{
"id": "[resourceId('Microsoft.Sql/servers', variables('sqlServerSecondaryName'))]"
}
],
"databases": [
"[resourceId('Microsoft.Sql/servers/databases', variables('sqlServerPrimaryName'), variables('adminDbName'))]",
"[resourceId('Microsoft.Sql/servers/databases', variables('sqlServerPrimaryName'), variables('trxnDbName'))]",
"[resourceId('Microsoft.Sql/servers/databases', variables('sqlServerPrimaryName'), variables('dbaDbName'))]"
]
}
}
If possible, remove the location property from the failover group in the ARM template. Since you already include sqlServerPrimaryName in the failover group's name, it takes the location of sqlServerPrimaryName.
As @Leon Yue's comment said:
Once the ARM template is deployed, the failover group is created and
exists. As the error says, location is read-only. When we deploy it
a second time, even if you didn't set the location value, it will still try to update
it, which causes the error.
You can't update the location property on the second deployment, so you need to remove this property.
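A sketch of the fix applied to the snippet above: the same failover group resource with the location line simply dropped (the placeholders ...unchanged... stand for the dependsOn and properties blocks exactly as you already have them):
{
"comments": "Azure SQL Server Failover Group",
"condition": "[parameters('isProduction')]",
"type": "Microsoft.Sql/servers/failoverGroups",
"apiVersion": "2015-05-01-preview",
"name": "[concat(variables('sqlServerPrimaryName'), '/', variables('sqlServerFailoverName'))]",
"dependsOn": [ ...unchanged... ],
"properties": { ...unchanged... }
}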
I exported an ARM template from Data Factory V2. When importing the template, it asks me to manually enter the SQL database connection string. To minimize human interaction, I made the following changes.
{
"name": "[concat(parameters('factoryName'), '/myFactory')]",
"type": "Microsoft.DataFactory/factories/linkedServices",
"apiVersion": "2018-06-01",
"properties": {
"type": "AzureSqlDatabase",
"typeProperties": {
"connectionString": "[concat('Server=tcp:',parameters('sqlServerName'),'.database.windows.net,1433;Initial Catalog=', parameters('sqlDatabaseName'), ';Persist Security Info=False;User ID=',parameters('sqlServerUserName'),';Password=(password)',';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30')]",
"password": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "AzureKeyVault1",
"type": "LinkedServiceReference"
},
"secretName": "sql-password"
}
}
},
"dependsOn": [
"[concat(variables('factoryId'), '/linkedServices/AzureKeyVault1')]"
]
},
So currently, when I deploy to Data Factory V2 and test the connection to this SQL server, I get:
Cannot connect to SQL Database: 'tcp:mysqlserver.database.windows.net,1433',
Database: 'mydatabase', User: 'admin'. Check the linked service configuration
is correct, and make sure the SQL Database firewall allows the integration runtime to access.
Login failed for user 'admin'., SqlErrorNumber=18456,
If I manually input all the connection details in the portal UI, I can easily connect to the database and the test succeeds, so it is not a firewall issue.
So I think there could be two issues:
1. How the password from Key Vault is consumed in the connectionString. I didn't find much information about this online.
2. When I open the created SQL linked service, I notice the fully qualified domain name is missing. If I manually add it in, the connection works.
(Screenshot: the SQL connection UI.)
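On issue 1: as far as I can tell, when the password comes from Key Vault the Password= segment should not be embedded in the connection string at all; the service combines the connectionString with the separate password reference itself. A sketch of the typeProperties under that assumption (same parameters and Key Vault names as in the snippet above):
"typeProperties": {
"connectionString": "[concat('Server=tcp:', parameters('sqlServerName'), '.database.windows.net,1433;Initial Catalog=', parameters('sqlDatabaseName'), ';Persist Security Info=False;User ID=', parameters('sqlServerUserName'), ';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30')]",
"password": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "AzureKeyVault1",
"type": "LinkedServiceReference"
},
"secretName": "sql-password"
}
}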
Throwing this as an alternative answer/approach.
Store the connection string in its entirety in Key Vault. If you do this, the reference would look like:
{
"name": "[concat(parameters('factoryName'), '/',parameters('connectionNameAdventureWorks'))]",
"type": "Microsoft.DataFactory/factories/linkedServices",
"apiVersion": "2018-06-01",
"properties": {
"annotations": [],
"type": "AzureSqlDatabase",
"typeProperties": {
"connectionString": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "[variables('azkDataAnalyticsReferenceName')]",
"type": "LinkedServiceReference"
},
"secretName": "[variables('azkAdventureWorksSecretName')]"
}
}
},
"dependsOn": [
"[concat(variables('factoryId'), '/linkedServices/',variables('azkDataAnalyticsReferenceName'))]"
]
}
An even more secure approach would be to give the Data Factory a managed identity and then run a SQL script to add that identity as a user on the database. If you do this, there is no need for any credentials to be passed at all.
One downside is that if the Data Factory is deleted and recreated, the managed identity permissions would need to be granted again on the SQL database.
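A rough sketch of what that managed identity variant could look like, reusing the factoryName, sqlServerName and sqlDatabaseName parameters from above (the linked service name here is just an example). When no credential is supplied in the connection string, Data Factory authenticates with its managed identity, which first has to be added as a user on the database, e.g. CREATE USER [your-factory-name] FROM EXTERNAL PROVIDER:
{
"name": "[concat(parameters('factoryName'), '/AzureSqlViaManagedIdentity')]",
"type": "Microsoft.DataFactory/factories/linkedServices",
"apiVersion": "2018-06-01",
"properties": {
"type": "AzureSqlDatabase",
"typeProperties": {
"connectionString": "[concat('Server=tcp:', parameters('sqlServerName'), '.database.windows.net,1433;Initial Catalog=', parameters('sqlDatabaseName'), ';Encrypt=True;Connection Timeout=30')]"
}
}
}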
I have two premium Service Bus namespaces deployed manually through the Azure portal. They don't have a geo-recovery alias configured, and they have been operational for about a year.
Now I'm trying to automate the deployment of these Service Bus namespaces and also add a geo-recovery alias resource, as follows:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"serviceBusNamespaceName": {
"type": "string",
"metadata": {
"description": "Name of the Service Bus namespace"
}
},
"serviceBusQueueName": {
"type": "string",
"metadata": {
"description": "Name of the Queue"
}
},
"serviceBusLocation": {
"type": "string"
},
"sku": {
"type": "object",
"defaultValue": "Standard"
},
"serviceBusTopicName": {
"type": "string"
},
"serviceBusSubscriptionName": {
"type": "string"
},
"isAliasEnabled": {
"type": "bool"
},
"isQueueCreationEnabled": {
"type": "bool"
},
"aliasName": {
"type": "string"
},
"partnerNamespace": {
"type": "string"
}
},
"variables": {
"defaultSASKeyName": "RootManageSharedAccessKey",
"authRuleResourceId": "[resourceId('Microsoft.ServiceBus/namespaces/authorizationRules', parameters('serviceBusNamespaceName'), variables('defaultSASKeyName'))]",
"sbVersion": "2017-04-01"
},
"resources": [
{
"apiVersion": "2018-01-01-preview",
"name": "[parameters('serviceBusNamespaceName')]",
"type": "Microsoft.ServiceBus/Namespaces",
"location": "[parameters('serviceBusLocation')]",
"sku": {
"name": "[parameters('sku').name]",
"tier": "[parameters('sku').tier]",
"capacity": "[parameters('sku').capacity]"
},
"properties": {
"zoneRedundant": false
},
"resources": [
{
"apiVersion": "2017-04-01",
"name": "[parameters('aliasName')]",
"type": "disasterRecoveryConfigs",
"condition": "[parameters('isAliasEnabled')]",
"dependsOn": [
"[concat('Microsoft.ServiceBus/namespaces/', parameters('serviceBusNamespaceName'))]"
],
"properties": {
"partnerNamespace": "[parameters('partnerNamespace')]"
}
}
]
}
]
}
I'm using the same template to deploy the primary and secondary instances separately. Note that the disasterRecoveryConfigs resource will only be deployed when it's the primary instance.
This template successfully deploys the secondary namespace, but the primary namespace deployment fails with the following error:
Unable to freeze secondary namespace before creating pairing, this is
probably because secondary namespace is not empty.
Which is correct, i.e. the secondary namespace already has a couple of topics/subscriptions and queues. I don't want to delete them; I just want to pair the primary and secondary namespaces.
How can this be done?
I had a similar issue with the Service Bus geo-recovery ARM template. Reading the exception closely, it states that the secondary namespace is not empty, which means you have to delete the topics and queues from the secondary namespace and then run the template again. It will then work and create the topics and queues again based on the primary namespace.
But if you run the template a second time, you will get a different exception, which is that the secondary namespace cannot be updated (since it’s in geo-pairing). It’s strange, but by design, you cannot update the secondary namespace while it’s in Geo Pairing, and even if you remove Geo Pairing, your secondary namespace should be empty without any instances such as Topic, Queue, etc.
How to overcome this?
Let's say you now want to add a topic or queue to an existing deployment using the ARM template. You will run into this issue whenever your template runs in a pipeline (or anywhere else that runs it multiple times) and updates the existing primary namespace.
1. Quick fix (one time only; on the next run you have to repeat these steps manually)
Log in to the Azure portal.
Go to your primary Service Bus namespace.
Click the Geo-Recovery option under the Settings section.
At the top right, find the Break pairing option and click it.
This breaks the pairing; if you skip this step, you will get the exception "Secondary namespaces cannot be updated".
Next, delete the entities in the secondary namespace (or the namespace itself) and run the pipeline. It will work.
If you don't follow the step above, you will get the "unable to freeze secondary namespace" error.
This is a one-time fix; if you run the template again, you have to repeat the process manually.
2. Automation using a CI/CD DevOps pipeline, the CLI, or PowerShell
Most of the time ARM templates run in a pipeline, and the pairing can be broken there using the Azure CLI or PowerShell. Consider adding two tasks to the YAML file.
First task: break the pairing.
Azure CLI
az servicebus georecovery-alias break-pair --resource-group myresourcegroup --namespace-name primarynamespace --alias myaliasname
PowerShell
Set-AzureRmServiceBusGeoDRConfigurationBreakPair -ResourceGroupName $resourcegroup -Name $aliasname -Namespace $primarynamespace
Second task: delete the entities (topics, queues) in the secondary namespace, or delete the entire namespace.
PowerShell
Remove-AzServiceBusNamespace -ResourceGroup Default-ServiceBus-WestUS -NamespaceName SB-Example1
To remove a topic or queue instead of the whole namespace, refer to the following documentation:
Azure Service Bus Management Common PowerShell commands
Also, if you are running the template locally, you can run a small script or CLI command before deploying the template.
Does deleting the secondary namespace or its entities affect the connection string or data?
It's a valid question: what happens to the connection string or the data, since some clients are already using them? The answer is that the connection string won't change if we delete the secondary namespace, because in a geo-recovery scenario clients are supposed to use the alias connection string, so there is no impact on existing clients.
Regarding the data: the secondary namespace doesn't store any data, only metadata, which means that in the case of a failover the secondary namespace simply starts serving traffic.
So deleting the secondary namespace, or the entities in it, during deployment won't impact anything.
Is there any better option?
You might be wondering why you should have to follow such a long process, but the problem above comes from the geo-recovery design (Service Bus, Event Hubs, Event Grid, etc.), and there is no other option.
I hope Microsoft will come up with some better approach in the future.
If you try to create a pairing between a primary namespace with a private endpoint and a secondary namespace without a private endpoint, the pairing will fail.
You could refer to this template, which allows you to configure a Service Bus geo-disaster recovery alias.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"serviceBusNamespaceNamePrimary": {
"type": "string",
"metadata": {
"description": "Name of Service Bus namespace"
}
},
"serviceBusNamespaceNameSecondary": {
"type": "string",
"metadata": {
"description": "Name of Service Bus namespace"
}
},
"aliasName": {
"type": "string",
"metadata": {
"description": "Name of Geo-Recovery Configuration Alias "
}
},
"locationSecondaryNamepsace": {
"type": "string",
"defaultValue": "South Central US",
"metadata": {
"description": "Location of Secondary namespace"
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location of Primary namespace"
}
}
},
"variables": {
"defaultSASKeyName": "RootManageSharedAccessKey",
"defaultAuthRuleResourceId": "[resourceId('Microsoft.ServiceBus/namespaces/authorizationRules', parameters('serviceBusNamespaceNamePrimary'), variables('defaultSASKeyName'))]"
},
"resources": [
{
"apiVersion": "2017-04-01",
"name": "[parameters('serviceBusNamespaceNameSecondary')]",
"type": "Microsoft.ServiceBus/Namespaces",
"location": "[parameters('locationSecondaryNamepsace')]",
"sku": {
"name": "Premium",
"tier": "Premium",
"capacity": 4
},
"tags": {
"tag1": "value1",
"tag2": "value2"
}
},
{
"apiVersion": "2017-04-01",
"type": "Microsoft.ServiceBus/Namespaces",
"dependsOn": [ "[concat('Microsoft.ServiceBus/namespaces/', parameters('serviceBusNamespaceNameSecondary'))]" ],
"name": "[parameters('serviceBusNamespaceNamePrimary')]",
"location": "[parameters('location')]",
"sku": {
"name": "Premium",
"tier": "Premium",
"capacity": 4
},
"tags": {
"tag1": "value1",
"tag2": "value2"
},
"resources": [
{
"apiVersion": "2017-04-01",
"name": "[parameters('aliasName')]",
"type": "disasterRecoveryConfigs",
"dependsOn": [ "[concat('Microsoft.ServiceBus/namespaces/', parameters('serviceBusNamespaceNamePrimary'))]" ],
"properties": {
"partnerNamespace": "[resourceId('Microsoft.ServiceBus/Namespaces', parameters('serviceBusNamespaceNameSecondary'))]"
}
}
]
}
],
"outputs": {
"NamespaceDefaultConnectionString": {
"type": "string",
"value": "[listkeys(variables('defaultAuthRuleResourceId'), '2017-04-01').primaryConnectionString]"
},
"DefaultSharedAccessPolicyPrimaryKey": {
"type": "string",
"value": "[listkeys(variables('defaultAuthRuleResourceId'), '2017-04-01').primaryKey]"
}
}
}
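After the deployment you can confirm the pairing with the Azure CLI (a sketch; the resource group, namespace and alias names are placeholders):
az servicebus georecovery-alias show --resource-group myResourceGroup --namespace-name primarynamespace --alias myaliasname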
I am currently deploying a VM Scale Set (VMSS) using an ARM template that includes a resource inside the VMSS to install the Azure extension for the Azure DevOps (ADO) deployment agent. Everything deploys successfully and a node is registered in ADO with all the details from the ARM template. However, the problem is that the agent is installed only on the first node and (as far as I can see) the rest of the nodes are ignored. I've tested this both with multiple nodes at scale set creation and with auto-scale; both scenarios result in only the first agent being registered.
This is the code layout I'm using (I've removed the VMSS bits to reduce the template length here, there are of course OS, storage and network settings inside):
{
"type": "Microsoft.Compute/virtualMachineScaleSets",
"name": "[parameters('VMSSName')]",
"apiVersion": "2018-10-01",
"location": "[resourceGroup().location]",
"sku": {
"name": "[parameters('VMSSSize')]",
"capacity": "[parameters('VMSSCount')]",
"tier": "Standard"
},
"dependsOn": [],
"properties": {
"overprovision": "[variables('overProvision')]",
"upgradePolicy": {
"mode": "Automatic"
},
"virtualMachineProfile": {},
"storageProfile": {},
"networkProfile": {},
"extensionProfile": {
"extensions": [
{
"type": "Microsoft.Compute/virtualMachineScaleSets/extensions",
"name": "VMSS-NetworkWatcher",
"location": "[resourceGroup().location]",
"properties": {
"publisher": "Microsoft.Azure.NetworkWatcher",
"type": "[if(equals(parameters('Platform'), 'Windows'), 'NetworkWatcherAgentWindows', 'NetworkWatcherAgentLinux')]",
"typeHandlerVersion": "1.4",
"autoUpgradeMinorVersion": true
}
},
{
"type": "Microsoft.Compute/virtualMachineScaleSets/extensions",
"name": "VMSS-TeamServicesAgent",
"location": "[resourceGroup().location]",
"properties": {
"publisher": "Microsoft.VisualStudio.Services",
"type": "[if(equals(parameters('Platform'), 'Windows'), 'TeamServicesAgent', 'TeamServicesAgentLinux')]",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"VSTSAccountName": "[parameters('VSTSAccountName')]",
"TeamProject": "[parameters('VSTSTeamProjectName')]",
"DeploymentGroup": "[parameters('VSTSDeploymentGroupName')]",
"AgentName": "[concat(parameters('VMSSName'),'-DG')]",
"Tags": "[parameters('VSTSDeploymentAgentTags')]"
},
"protectedSettings": {
"PATToken": "[parameters('VSTSPATToken')]"
}
}
}
]
}
}
}
}
Now the desired state, of course, is that all nodes have the agent installed so that I can use the deployment group in a release pipeline.
Your problem is that all agents get the same AgentName, so each registration effectively overwrites the previous one and only the latest agent "survives". I don't think there is much you can do, other than dropping the explicit AgentName so that it is auto-assigned based on the computer name.
You could also convert this to a custom script/DSC extension; that way you can calculate everything on the fly (see the sketch below).
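For illustration only, a rough sketch of what such an extension could look like inside the same extensionProfile (Windows flavor shown). The agentInstallScriptUri parameter and the install-agent.ps1 script are hypothetical; the idea is that the script runs on every instance and registers the agent using $env:COMPUTERNAME as the agent name, so each node gets a unique registration:
{
"type": "Microsoft.Compute/virtualMachineScaleSets/extensions",
"name": "VMSS-InstallDeploymentAgent",
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.10",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [
"[parameters('agentInstallScriptUri')]"
]
},
"protectedSettings": {
"commandToExecute": "[concat('powershell.exe -ExecutionPolicy Unrestricted -File install-agent.ps1 -PatToken ', parameters('VSTSPATToken'))]"
}
}
}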
I have an Azure Application Gateway set up with Path-Based routing to route between two different Backend Pools. I also have Application Insights configured on one of the Pools, which I will come back to in a moment. My path rule is configured like this:
/home/* -> Backend Pool 1
/* -> Backend Pool 2
I have never been able to connect to Backend Pool 1. I have been able to successfully connect to Backend Pool 2 at /*, but when that worked, requests to /home/* were still sent to Backend Pool 2, where that route doesn't exist. I tried using the Override Backend Path setting on the HTTP settings, but then neither route would work and I would receive a 502 error. So naturally I tried to reverse that setting, but nothing changed.
However, I did notice in the Application Insights for Backend Pool 2 that, after removing the Override Backend Path setting, the server pool was receiving the literal /* as part of the request and was therefore returning a 400 error, because that route doesn't exist and the character is not allowed in the URL (it's worth noting that my web.config file doesn't have request URL character restrictions right now).
I know that this type of routing is possible, given the number of documents from Azure, but I've been dealing with this problem for two weeks, have pored over every scrap of documentation, and don't seem to be getting anywhere.
So to clarify, my specific question is:
Given the things I've already tried, am I missing something in my configuration, or is something wrong with it?
I'd be more than happy to clarify any points that you feel I've left out.
EDIT: Adding the configuration of the one rule and its path map for context.
[
{
"backendAddressPool": null,
"backendHttpSettings": null,
"etag": "<####>",
"httpListener": {
"id": "<####>",
"resourceGroup": "<####>"
},
"id": "<####>",
"name": "HttpsPaths",
"provisioningState": "Succeeded",
"redirectConfiguration": null,
"resourceGroup": "<####>",
"ruleType": "PathBasedRouting",
"type": null,
"urlPathMap": {
"defaultBackendAddressPool": {
"id": "<####>/backendPool1",
"resourceGroup": "<####>"
},
"defaultBackendHttpSettings": {},
"defaultRedirectConfiguration": null,
"etag": "<####>",
"id": "<####>",
"name": "HttpsPaths",
"pathRules": [
{
"backendAddressPool": {
"id": "<####>/backendPool1"
},
"backendHttpSettings": {
"id": "<####>/OverrideBackendPathSettings (redirects to '/' on the backend)",
"resourceGroup": "<####>"
},
"etag": "<####>",
"id": "<#####>",
"name": "home",
"paths": [
"/home/*"
],
"provisioningState": "Succeeded",
"redirectConfiguration": null,
"resourceGroup": "<####>",
"type": null
},
{
"backendAddressPool": {
"id": "<####>/BackendPool2",
"resourceGroup": "<####>"
},
"backendHttpSettings": {
"id": "<####>/appGatewayBackendHttpSettings (sends request as is)",
"resourceGroup": "<####>"
},
"etag": "<####>",
"id": "<####>/gryphon",
"name": "gryphon",
"paths": [
"/*"
],
"provisioningState": "Succeeded",
"redirectConfiguration": null,
"resourceGroup": "<####>",
"type": null
}
],
"provisioningState": "Succeeded",
"resourceGroup": "<####>",
"type": null
},
"provisioningState": "Succeeded",
"resourceGroup": "<####>",
"type": null
}
]
Rules are evaluated in the order they are specified. It could be that you have a basic rule preceding the path-based rule, which would cause the basic rule to intercept all traffic and route it to the backend pool specified in that rule. If that is not the case, pasting the rules configuration would probably help.
--
Edit
I looked at your configuration details in our monitoring system. This is caused by an incorrect probe configuration: you have /* in the probes, which is not valid. The probe should point to an existing page that returns a 200 HTTP response code. Also, you do not need the path override; it can be removed. Once you have the probes configured correctly, please ensure that the backend health report shows all backend servers as healthy. Then your path-based rules will work as expected.
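For example, a custom probe pointed at a real page can be created with the Azure CLI roughly like this (resource group, gateway name, host and path are placeholders; the path must return an HTTP 200):
# Create a probe against a page that actually exists on the backend
az network application-gateway probe create --resource-group myResourceGroup --gateway-name myAppGateway --name defaultProbe --protocol Https --host 127.0.0.1 --path /
# Attach the probe to the HTTP settings used by the path-based rule
az network application-gateway http-settings update --resource-group myResourceGroup --gateway-name myAppGateway --name appGatewayBackendHttpSettings --probe defaultProbe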