I was in the process of configuring DevOps to deploy my Dev ADF to the UAT ADF instance.
I had come across the standard issue of the deploy not deleting outdated pipelines, and attempted to use "Complete" deployment mode to resolve that.
Whereupon DevOps entirely deleted the UAT ADF instance!
Looking further at the docs, it appears that this is the expected behaviour if the factories are not in the ARM Templates.
And looking at my ARM Template (generated entirely by ADF, and with [AFAIK] entirely standard settings), it confirms that the factory itself is NOT amongst the documented resources to be created.
This seems ... odd.
Am I missing something?
How do I get the factory to be included in the ARM Template?
Or alternatively, how can I use the "Complete" deployment mode without it deleting the target ADF instance?
Note that the reason I don't want to use the "define a separate script to solve this" approach is that it seems excessively complex when the "Complete" mode sounds like it should do exactly what I want :) (if it weren't for this one oddity of deleting the factory).
You are correct. I've run into this issue before. To work around it, I recommend creating a core ARM template that contains the Data Factory and any necessary linked services used solely by Data Factory. This ensures the "infrastructure/connections" are deployed when creating a new instance.
If you are following Azure Data Factory CI/CD, this would be an additional Azure Resource Group Deployment task that runs before the pipelines are deployed and references the ARM template, which should live in a separate repository.
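As a sketch, that extra task in a YAML pipeline might look something like the following; the service connection, subscription variable, resource group, and template path are placeholders, not values from the question:

```yaml
# Hypothetical extra step that deploys the core "infrastructure" template
# before the ADF pipelines are deployed. All names and paths are placeholders.
- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'Deploy core Data Factory ARM template'
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-service-connection'       # placeholder
    subscriptionId: '$(subscriptionId)'                           # placeholder
    resourceGroupName: 'my-adf-rg'                                # placeholder
    location: 'East US'
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/core-infra/azuredeploy.json'  # placeholder
    deploymentMode: 'Incremental'
```

Keeping this step in Incremental mode means it only creates or updates the factory itself, while the later ADF deployment handles the pipelines.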
Here's a template for Data Factory with Log Analytics to get you started. I included Log Analytics because most people don't think about log retention until they need it, and it's a best practice. Just update the system name; this will create a naming standard of adf-systemName-environment-regionAbbreviation. The region abbreviation is dynamic: it is looked up from an object based on the resource group's location.
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environment": {
      "type": "string",
      "metadata": {
        "description": "Name of the environment being deployed to"
      }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "Location for all resources."
      }
    }
  },
  "variables": {
    "systemName": "DataFactoryBaseName",
    "regionReference": {
      "centralus": "cus",
      "eastus": "eus",
      "westus": "wus"
    },
    "dataFactoryName": "[toLower(concat('adf-', variables('systemName'), '-', parameters('environment'), '-', variables('regionDeployment')))]",
    "logAnalyticsName": "[toLower(concat('law-', variables('systemName'), '-', parameters('environment'), '-', variables('regionDeployment')))]",
    "regionDeployment": "[toLower(variables('regionReference')[parameters('location')])]"
  },
  "resources": [
    {
      "name": "[variables('dataFactoryName')]",
      "type": "Microsoft.DataFactory/factories",
      "apiVersion": "2018-06-01",
      "location": "[parameters('location')]",
      "tags": {
        "displayName": "Data Factory",
        "ProjectName": "[variables('systemName')]",
        "Environment": "[parameters('environment')]"
      },
      "identity": {
        "type": "SystemAssigned"
      }
    },
    {
      "type": "Microsoft.OperationalInsights/workspaces",
      "name": "[variables('logAnalyticsName')]",
      "tags": {
        "displayName": "Log Analytics",
        "ProjectName": "[variables('systemName')]",
        "Environment": "[parameters('environment')]"
      },
      "apiVersion": "2020-03-01-preview",
      "location": "[parameters('location')]"
    },
    {
      "type": "microsoft.datafactory/factories/providers/diagnosticsettings",
      "name": "[concat(variables('dataFactoryName'), '/Microsoft.Insights/diagnostics')]",
      "location": "[parameters('location')]",
      "apiVersion": "2017-05-01-preview",
      "dependsOn": [
        "[resourceId('Microsoft.OperationalInsights/workspaces', variables('logAnalyticsName'))]",
        "[resourceId('Microsoft.DataFactory/factories', variables('dataFactoryName'))]"
      ],
      "properties": {
        "name": "diagnostics",
        "workspaceId": "[resourceId('Microsoft.OperationalInsights/workspaces', variables('logAnalyticsName'))]",
        "logAnalyticsDestinationType": "Dedicated",
        "logs": [
          {
            "category": "PipelineRuns",
            "enabled": true,
            "retentionPolicy": {
              "enabled": false,
              "days": 0
            }
          },
          {
            "category": "TriggerRuns",
            "enabled": true,
            "retentionPolicy": {
              "enabled": false,
              "days": 0
            }
          },
          {
            "category": "ActivityRuns",
            "enabled": true,
            "retentionPolicy": {
              "enabled": false,
              "days": 0
            }
          }
        ],
        "metrics": [
          {
            "category": "AllMetrics",
            "timeGrain": "PT1M",
            "enabled": true,
            "retentionPolicy": {
              "enabled": false,
              "days": 0
            }
          }
        ]
      }
    }
  ]
}
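If it helps, a parameter file for (say) a UAT deployment of this template could be as small as the following; the value is just an example:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environment": {
      "value": "uat"
    }
  }
}
```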
I have prepared an ARM template for deploying an Azure Event Hub instance and wonder how to access both connection keys so I can return them as outputs.
I would like to return a string in the form:
Endpoint=sb://my-eventhub.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=ojZMQcJD7uYifxJyGeXG6tNDdZyaC1/h5tmX6ODVfmY=
Here is my current template:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterName": {
      "type": "string",
      "defaultValue": "eventhub",
      "metadata": {
        "description": "Name for the Event Hub cluster."
      }
    },
    "namespaceName": {
      "type": "string",
      "defaultValue": "namespace",
      "metadata": {
        "description": "Name for the Namespace to be created in cluster."
      }
    }
  },
  "variables": {
    "clusterName": "[concat(resourceGroup().name, '-', parameters('clusterName'))]",
    "namespaceName": "[concat(resourceGroup().name, '-', parameters('namespaceName'))]"
  },
  "outputs": {
    "MyClusterName": {
      "type": "string",
      "value": "[variables('clusterName')]"
    },
    "PrimaryConnectionString": {
      "type": "string",
      "value": "WHAT TO USE HERE PLEASE?"
    },
    "SecondaryConnectionString": {
      "type": "string",
      "value": "WHAT TO USE HERE PLEASE?"
    }
  },
  "resources": [
    {
      "type": "Microsoft.EventHub/clusters",
      "apiVersion": "2018-01-01-preview",
      "name": "[variables('clusterName')]",
      "location": "[resourceGroup().location]",
      "sku": {
        "name": "Dedicated",
        "capacity": 1
      }
    },
    {
      "type": "Microsoft.EventHub/namespaces",
      "apiVersion": "2018-01-01-preview",
      "name": "[variables('namespaceName')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.EventHub/clusters', variables('clusterName'))]"
      ],
      "sku": {
        "name": "Standard",
        "tier": "Standard",
        "capacity": 1
      },
      "properties": {
        "isAutoInflateEnabled": false,
        "maximumThroughputUnits": 0,
        "clusterArmId": "[resourceId('Microsoft.EventHub/clusters', variables('clusterName'))]"
      }
    }
  ]
}
I have tried the following:
"value": "[listKeys(resourceId(concat('Microsoft.ServiceBus/namespaces/AuthorizationRules'), variables('namespaceName'), 'RootManageSharedAccessKey'),'2018-01-01-preview').primaryConnectionString]"
but I get this deployment error:
[error]ParentResourceNotFound: Can not perform requested operation on nested resource. Parent resource 'my-rg-namespace' not found.
UPDATE:
The following has worked for me as suggested by Jesse (thank you!):
"variables": {
  "clusterName": "[concat(resourceGroup().name, '-', parameters('clusterName'))]",
  "namespaceName": "[concat(resourceGroup().name, '-', parameters('namespaceName'))]",
  "defaultSASKeyName": "RootManageSharedAccessKey",
  "authRuleResourceId": "[resourceId('Microsoft.EventHub/namespaces/authorizationRules', variables('namespaceName'), variables('defaultSASKeyName'))]"
},
"outputs": {
  "MyClusterName": {
    "type": "string",
    "value": "[variables('clusterName')]"
  },
  "PrimaryConnectionString": {
    "type": "string",
    "value": "[listkeys(variables('authRuleResourceId'), '2015-08-01').primaryConnectionString]"
  },
  "SecondaryConnectionString": {
    "type": "string",
    "value": "[listkeys(variables('authRuleResourceId'), '2015-08-01').secondaryConnectionString]"
  }
},
UPDATE 2:
Also, Jesse has noticed that my ARM template is wrong in two ways: it does not create an Event Hub but a cluster, and the cluster sits outside my namespace. He provided this valuable comment:
The Event Hubs cluster is basically a way of reserving dedicated compute. It's not something that most scenarios need and it is... not cheap. Think of something on the scale of Xbox Live, where you're seeing nearly 5 million events per second with higher performance needs. If you're not looking at that kind of scale or that sensitivity around timing, you probably want to rethink the need for a dedicated cluster.
Normally, you'd just provision an Event Hubs namespace which will use shared infrastructure with certain guarantees to minimize noisy neighbors and similar. This is adequate for the majority of scenarios, even those with high throughput needs. If you're not sure, this is probably the place that you want to start and then upgrade to a dedicated cluster if your needs justify the cost.
An Event Hubs namespace is the container for a set of Event Hub instances grouped together by a unique endpoint. Each Event Hub is made of a set of partitions. When you're publishing or consuming events, the partitions of an Event Hub are where the actual data is. When you're working with one of the SDKs, you'll start by telling it about the endpoint of your namespace and the Event Hub that you're interested in. You'll need a general awareness of partitions, but most of the "Getting Started" scenarios handle that detail for you, as do a fair portion of the real-world ones.... but, the concept is an important one.
It looks like you may be using an incorrect resource id, pulling from Microsoft.ServiceBus rather than Microsoft.EventHub where the failure is because there is no Service Bus namespace with the correct name.
You may want to try using a form similar to the following to identify your resource:
"variables": {
  "location": "[resourceGroup().location]",
  "apiVersion": "2015-08-01",
  "defaultSASKeyName": "RootManageSharedAccessKey",
  "authRuleResourceId": "[resourceId('Microsoft.EventHub/namespaces/authorizationRules', parameters('namespaceName'), variables('defaultSASKeyName'))]"
},
Which should allow it to be returned using listkeys as you detailed above:
"outputs": {
  "NamespaceConnectionString": {
    "type": "string",
    "value": "[listkeys(variables('authRuleResourceId'), variables('apiVersion')).primaryConnectionString]"
  }
}
A full example for a simple deployment can be found in the Event Hubs sample template.
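The value that listkeys returns follows the Endpoint=...;SharedAccessKeyName=...;SharedAccessKey=... shape shown in the question. As a quick illustration (in Python, with made-up values), splitting such a string into its parts looks like this:

```python
# Sketch: split an Event Hubs connection string (the shape returned by
# listKeys) into its key/value parts. The sample value below is made up.
sample = (
    "Endpoint=sb://my-eventhub.servicebus.windows.net/;"
    "SharedAccessKeyName=RootManageSharedAccessKey;"
    "SharedAccessKey=abc123="
)

def parse_connection_string(conn: str) -> dict:
    """Split 'Key=Value;Key=Value' pairs; values may themselves contain '='."""
    parts = {}
    for segment in conn.split(";"):
        if segment:
            key, _, value = segment.partition("=")
            parts[key] = value
    return parts

print(parse_connection_string(sample)["SharedAccessKeyName"])
```

Note the use of partition rather than split, since the SharedAccessKey value can itself contain '=' padding characters.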
I am having an issue where I cannot create KBs in QnA Maker for services I have deployed via ARM template/DevOps. There are a number of issues here and on GitHub, but the main suggestions (create all the resources in the same region, don't put anything else on the app service plan, delete and redeploy) have not worked for me. As noted, the resources HAVE been created and deleted multiple times with the same names, so I don't know if that's part of the issue. The resources create just fine (cognitive service, app service, app service plan, Azure Search, and App Insights), all in WestUS, but then I am unable to create a knowledge base either through the API or directly at qnamaker.ai. In both cases I get the error message:
No Endpoint keys found.
I can get the keys through Azure CLI, plus they are showing in the portal, so that's not the issue. It may perhaps be an issue with the Authorization EndpointKey which is generated/shown after publishing a new KB, but as I cannot create or publish one, I cannot find this key. Not sure if that is the key the error message is referring to.
Here is the ARM template I am using to set up the resources.
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "sites_etn_qnamaker_name": {
      "defaultValue": "etn-qnamaker",
      "type": "string"
    },
    "serverfarms_etn_qnamaker_name": {
      "defaultValue": "etn-qnamaker",
      "type": "string"
    },
    "components_etn_qnamaker_ai_name": {
      "defaultValue": "etn-qnamaker-ai",
      "type": "string"
    },
    "accounts_etn_qnamaker_name": {
      "defaultValue": "etn-qnamaker",
      "type": "string"
    },
    "searchServices_etnqnamaker_azsearch_name": {
      "defaultValue": "etnqnamaker-azsearch",
      "type": "string"
    },
    "smartdetectoralertrules_failure_anomalies___etn_qnamaker_ai_name": {
      "defaultValue": "failure anomalies - etn-qnamaker-ai",
      "type": "string"
    },
    "actiongroups_application_20insights_20smart_20detection_externalid": {
      "defaultValue": "/subscriptions/REDACTED/resourceGroups/avcnc-chatbot-rg/providers/microsoft.insights/actiongroups/application%20insights%20smart%20detection",
      "type": "string"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.CognitiveServices/accounts",
      "apiVersion": "2017-04-18",
      "name": "[parameters('accounts_etn_qnamaker_name')]",
      "location": "westus",
      "sku": {
        "name": "S0"
      },
      "kind": "QnAMaker",
      "properties": {
        "apiProperties": {
          "qnaRuntimeEndpoint": "[concat('https://', parameters('accounts_etn_qnamaker_name'), '.azurewebsites.net')]"
        },
        "customSubDomainName": "[parameters('accounts_etn_qnamaker_name')]"
      }
    },
    {
      "type": "Microsoft.Insights/components",
      "apiVersion": "2015-05-01",
      "name": "[parameters('components_etn_qnamaker_ai_name')]",
      "location": "westus",
      "tags": {
        "hidden-link:/subscriptions/REDACTED/resourceGroups/ENTP-Chatbot-rg/providers/Microsoft.Web/sites/etn-qnamaker": "Resource"
      },
      "kind": "web",
      "properties": {
        "Application_Type": "web"
      }
    },
    {
      "type": "Microsoft.Search/searchServices",
      "apiVersion": "2015-08-19",
      "name": "[parameters('searchServices_etnqnamaker_azsearch_name')]",
      "location": "West US",
      "sku": {
        "name": "basic"
      },
      "properties": {
        "replicaCount": 1,
        "partitionCount": 1,
        "hostingMode": "default"
      }
    },
    {
      "type": "Microsoft.Web/serverfarms",
      "apiVersion": "2018-02-01",
      "name": "[parameters('serverfarms_etn_qnamaker_name')]",
      "location": "West US",
      "sku": {
        "name": "S1",
        "tier": "Standard",
        "size": "S1",
        "family": "S",
        "capacity": 1
      },
      "kind": "app",
      "properties": {
        "perSiteScaling": false,
        "maximumElasticWorkerCount": 1,
        "isSpot": false,
        "reserved": false,
        "isXenon": false,
        "hyperV": false,
        "targetWorkerCount": 0,
        "targetWorkerSizeId": 0
      }
    },
    {
      "type": "microsoft.alertsmanagement/smartdetectoralertrules",
      "apiVersion": "2019-06-01",
      "name": "[parameters('smartdetectoralertrules_failure_anomalies___etn_qnamaker_ai_name')]",
      "location": "global",
      "dependsOn": [
        "[resourceId('microsoft.insights/components', parameters('components_etn_qnamaker_ai_name'))]"
      ],
      "properties": {
        "description": "Failure Anomalies notifies you of an unusual rise in the rate of failed HTTP requests or dependency calls.",
        "state": "Enabled",
        "severity": "Sev3",
        "frequency": "PT1M",
        "detector": {
          "id": "FailureAnomaliesDetector",
          "name": "Failure Anomalies",
          "description": "Detects if your application experiences an abnormal rise in the rate of HTTP requests or dependency calls that are reported as failed. The anomaly detection uses machine learning algorithms and occurs in near real time, therefore there's no need to define a frequency for this signal.<br/></br/>To help you triage and diagnose the problem, an analysis of the characteristics of the failures and related telemetry is provided with the detection. This feature works for any app, hosted in the cloud or on your own servers, that generates request or dependency telemetry - for example, if you have a worker role that calls <a class=\"ext-smartDetecor-link\" href=\\\"https://learn.microsoft.com/en-us/azure/application-insights/app-insights-api-custom-events-metrics#trackrequest\\\" target=\\\"_blank\\\">TrackRequest()</a> or <a class=\"ext-smartDetecor-link\" href=\\\"https://learn.microsoft.com/en-us/azure/application-insights/app-insights-api-custom-events-metrics#trackdependency\\\" target=\\\"_blank\\\">TrackDependency()</a>.",
          "supportedResourceTypes": [
            "ApplicationInsights"
          ],
          "imagePaths": [
            "https://globalsmartdetectors.blob.core.windows.net/detectors/FailureAnomaliesDetector/v0.18/FailureAnomaly.png"
          ]
        },
        "scope": [
          "[resourceId('microsoft.insights/components', parameters('components_etn_qnamaker_ai_name'))]"
        ],
        "actionGroups": {
          "groupIds": [
            "[parameters('actiongroups_application_20insights_20smart_20detection_externalid')]"
          ]
        }
      }
    },
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2018-11-01",
      "name": "[parameters('sites_etn_qnamaker_name')]",
      "location": "West US",
      "dependsOn": [
        "[resourceId('Microsoft.Web/serverfarms', parameters('serverfarms_etn_qnamaker_name'))]"
      ],
      "tags": {
        "hidden-related:/subscriptions/REDACTED/resourcegroups/ENTP-Chatbot-rg/providers/Microsoft.Web/serverfarms/etn-qnamaker": "empty"
      },
      "kind": "app",
      "properties": {
        "enabled": true,
        "hostNameSslStates": [
          {
            "name": "[concat(parameters('sites_etn_qnamaker_name'), '.azurewebsites.net')]",
            "sslState": "Disabled",
            "hostType": "Standard"
          },
          {
            "name": "[concat(parameters('sites_etn_qnamaker_name'), '.scm.azurewebsites.net')]",
            "sslState": "Disabled",
            "hostType": "Repository"
          }
        ],
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('serverfarms_etn_qnamaker_name'))]",
        "reserved": false,
        "isXenon": false,
        "hyperV": false,
        "scmSiteAlsoStopped": false,
        "clientAffinityEnabled": true,
        "clientCertEnabled": false,
        "hostNamesDisabled": false,
        "containerSize": 0,
        "dailyMemoryTimeQuota": 0,
        "httpsOnly": false,
        "redundancyMode": "None"
      }
    },
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2018-11-01",
      "name": "[concat(parameters('sites_etn_qnamaker_name'), '/web')]",
      "location": "West US",
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites', parameters('sites_etn_qnamaker_name'))]"
      ],
      "tags": {
        "hidden-related:/subscriptions/REDACTED/resourcegroups/ENTP-Chatbot-rg/providers/Microsoft.Web/serverfarms/etn-qnamaker": "empty"
      },
      "properties": {
        "numberOfWorkers": 1,
        "defaultDocuments": [
          "Default.htm",
          "Default.html",
          "Default.asp",
          "index.htm",
          "index.html",
          "iisstart.htm",
          "default.aspx",
          "index.php",
          "hostingstart.html"
        ],
        "netFrameworkVersion": "v4.0",
        "phpVersion": "5.6",
        "requestTracingEnabled": false,
        "remoteDebuggingEnabled": false,
        "httpLoggingEnabled": false,
        "logsDirectorySizeLimit": 35,
        "detailedErrorLoggingEnabled": false,
        "publishingUsername": "[concat('$', parameters('sites_etn_qnamaker_name'))]",
        "scmType": "None",
        "use32BitWorkerProcess": true,
        "webSocketsEnabled": false,
        "alwaysOn": false,
        "managedPipelineMode": "Integrated",
        "virtualApplications": [
          {
            "virtualPath": "/",
            "physicalPath": "site\\wwwroot",
            "preloadEnabled": false
          }
        ],
        "loadBalancing": "LeastRequests",
        "experiments": {
          "rampUpRules": []
        },
        "autoHealEnabled": false,
        "cors": {
          "allowedOrigins": [
            "*"
          ],
          "supportCredentials": false
        },
        "localMySqlEnabled": false,
        "ipSecurityRestrictions": [
          {
            "ipAddress": "Any",
            "action": "Allow",
            "priority": 1,
            "name": "Allow all",
            "description": "Allow all access"
          }
        ],
        "scmIpSecurityRestrictions": [
          {
            "ipAddress": "Any",
            "action": "Allow",
            "priority": 1,
            "name": "Allow all",
            "description": "Allow all access"
          }
        ],
        "scmIpSecurityRestrictionsUseMain": false,
        "http20Enabled": false,
        "minTlsVersion": "1.2",
        "ftpsState": "AllAllowed",
        "reservedInstanceCount": 0
      }
    },
    {
      "type": "Microsoft.Web/sites/hostNameBindings",
      "apiVersion": "2018-11-01",
      "name": "[concat(parameters('sites_etn_qnamaker_name'), '/', parameters('sites_etn_qnamaker_name'), '.azurewebsites.net')]",
      "location": "West US",
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites', parameters('sites_etn_qnamaker_name'))]"
      ],
      "properties": {
        "siteName": "[parameters('sites_etn_qnamaker_name')]",
        "hostNameType": "Verified"
      }
    }
  ]
}
Here are just a few of the sites I checked
https://github.com/MicrosoftDocs/azure-docs/issues/44719
https://github.com/MicrosoftDocs/azure-docs/issues/40089
Unable to create knowledgebase for azure cognitive service (Error: "No Endpoint keys found.")
EDIT: KB creation fails both through qnamaker.ai and via the API. On qnamaker.ai, I get the same "No Endpoint keys found" message when trying to create a KB.
And here is the PowerShell script I was using to try and create it programmatically:
$body = Get-Content '$(System.DefaultWorkingDirectory)/_AveryCreek_OEM_CSC_Bot/models/qnamaker/Avery_Creek_Commercial_QnA.json' | Out-String
$header = @{
    "Content-Type" = "application/json"
    "Ocp-Apim-Subscription-Key" = "$(QNA_KEY)"
}
Invoke-RestMethod -Uri "https://westus.api.cognitive.microsoft.com/qnamaker/v4.0/knowledgebases/create" -Method 'Post' -Body $body -Headers $header
Searching for issues with endpoint keys and qnamaker turns up a fair few results.
I've just closed a case with Azure support for the same issue. Here are some of the steps we checked on the way to fixing it; hopefully one of these will be useful to anyone hitting this in the future, as the error message doesn't give you much to go on:
First, check the troubleshooting FAQ https://learn.microsoft.com/en-us/azure/cognitive-services/qnamaker/troubleshooting. There's nothing on the endpoint keys issue, but when you hit something else it's a good starting point.
All services - check your naming. For example, for me my search service was named differently than the rest of my config was expecting, and also my cognitive services runtime endpoint in the api-properties was incorrect. Still deployed though - you won't always get an error on the service itself if you provide incorrect names to later created services, you'll just fail at the point of creating your KBs.
All services - check your SKUs. While there's no problem that I could find being on free/basic, you can only have 1 qna cognitive service on a free subscription, so you'll need to tear down and recreate or update as you go.
QnA cognitive service - config settings (keys and values) are case-sensitive.
Qna web app and web app plan - check your quotas haven't been hit, particularly memory and CPU.
QnA Web App - You should be able to go to https://{endpoint}/qnamaker/corehealthstatus and see a positive JSON response like this (or if there's an initException, you've at least got another error to go on):
{"processId":4920,"runtimeVersion":"5.46.0","initException":"","startupTime":"10/28/2020 2:44:39 PM"}
Qna Web App - You should also be able to go to https://{endpoint}/qnamaker/proxyhealthstatus and see a positive JSON response like this:
{
  "coreVersion": "5.46.0",
  "coreProcessId": 4920,
  "coreUrl": "http://localhost:50061"
}
Qna Web App - Don't try to create a KB, whether through the qnamaker portal or dynamically, if your app doesn't show similar successes on those two check endpoints; build in a wait if need be. You'll almost certainly see the endpoint errors via the API if you hit it immediately.
For the check endpoints above, the endpoint is visible in the overview section of your web app in the portal, and usually is the name of your app e.g. https://example-app-qnamaker-webapp.azurewebsites.net/qnamaker/corehealthstatus if the app was called example-app-qnamaker-webapp. In my own creation scripts, I checked against coreProcessId > 0 and startupTime is a valid date to indicate service readiness before creating a KB.
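As a sketch of such a readiness gate (in Python, against the corehealthstatus payload shape shown above; the actual HTTP polling is left out, and the field names are taken from the sample response):

```python
from datetime import datetime

def is_runtime_ready(health: dict) -> bool:
    """Heuristic readiness check for a /qnamaker/corehealthstatus payload:
    no init exception, a positive process id, and a parseable startupTime."""
    if health.get("initException"):
        return False
    if health.get("processId", 0) <= 0:
        return False
    try:
        # startupTime in the sample looks like "10/28/2020 2:44:39 PM"
        datetime.strptime(health.get("startupTime", ""), "%m/%d/%Y %I:%M:%S %p")
    except ValueError:
        return False
    return True

sample = {"processId": 4920, "runtimeVersion": "5.46.0",
          "initException": "", "startupTime": "10/28/2020 2:44:39 PM"}
print(is_runtime_ready(sample))
```

In a creation script you would fetch the health endpoint in a loop and only call the knowledgebases/create API once this check passes.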
EDIT: I'd also add that if it takes a long time to deploy, part of your config is probably wrong. Every time I've had things work correctly, it's been a rapid deployment (and that goes for the services, knowledge bases, and calls to both az cli and the qnamaker REST api).
I suspect you may have been downvoted because this looks an awful lot more like a bug report than a Stack Overflow question. From the first issue you linked:
We will go ahead and close this issue as this is a service level issue and the best way to report it if it occurs again is through the QnA portal from "General Enquiry through uservoice" option from the top right corner.
I'll try to answer you anyway. You say you've tried creating all the resources in the same region, but remember that resource groups have locations too. You should make sure the resource group is also in the same region according to the answer to the Stack Overflow question you linked to: Unable to create knowledgebase for azure cognitive service (Error: "No Endpoint keys found.")
It seems that there is sometimes the problem that the endpoint keys can only be found, if the Resource Group holding all resources for the QnA Maker Service (like App Service, Application Insights, Search Service and the Application Service Plan) is hosted in the same region as the QnA Maker Service itself.
I also see that you've tried not putting anything else on the app service plan, and you've tried deleting and redeploying. But you might also try just waiting a while, or retrying more persistently. From another GitHub issue:
These failures are intermittent, If I persistently retry a failure, the knowledgebase will often eventually get created.
And from this issue:
According to the QnA Maker team, this error is shown when the QnA Maker service has not finished provisioning. There appear to be service issues QnA Maker right now that are causing the provisioning process to take even longer than the time we wait in the script.
If you would like to raise an issue through UserVoice, I highly recommend posting it on the forum so that other people can see the problem and upvote it.
Is there any way to receive an email (some kind of alert) when someone creates a new pipeline in a specific Data Factory?
Something like "user XYZ created a new pipeline".
Thanks for your inputs,
Marcelo
To specify an alert definition, you create a JSON file describing the operations that you want to be alerted on.
The following example creates an alert on run completion.
The JSON below should help you create a similar alert for update operations.
{
  "contentVersion": "1.0.0.0",
  "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
  "parameters": {},
  "resources": [
    {
      "name": "ADFAlertsSlice",
      "type": "microsoft.insights/alertrules",
      "apiVersion": "2014-04-01",
      "location": "East US",
      "properties": {
        "name": "ADFAlertsSlice",
        "description": "One or more of the data slices for the Azure Data Factory has failed processing.",
        "isEnabled": true,
        "condition": {
          "odata.type": "Microsoft.Azure.Management.Insights.Models.ManagementEventRuleCondition",
          "dataSource": {
            "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleManagementEventDataSource",
            "operationName": "RunFinished",
            "status": "Failed",
            "subStatus": "FailedExecution"
          }
        },
        "action": {
          "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleEmailAction",
          "customEmails": [
            "#contoso.com"
          ]
        }
      }
    }
  ]
}
Since last Thursday (AEST), previously working ARM deployments have started failing.
When we run an ARM deployment DocumentDb fails with the message:
Resource Microsoft.DocumentDB/databaseAccounts 'xxx' failed with message 'Document service name 'xxx' already exists.
{
  "apiVersion": "2015-04-08",
  "type": "Microsoft.DocumentDB/databaseAccounts",
  "name": "[parameters('databaseAccountName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "name": "[parameters('databaseAccountName')]",
    "databaseAccountOfferType": "Standard"
  }
}
In the snippet, [parameters('databaseAccountName')] is 'xxx'.
We are guessing that something underlying has changed to cause this. Can you please let us know what new properties we need to include in the ARM template for the DocumentDb instance to be found again?
Update: We have updated our documentation to cover ARM deployment for multi-region enabled accounts. https://azure.microsoft.com/documentation/articles/documentdb-automation-resource-manager-cli/#create-multi-documentdb-account
We are in the process of enabling multi-region accounts for all accounts. As a part of this effort, there is a change in the ARM template, and a few accounts are seeing errors when using the currently published template in certain scenarios.
We will be updating our documentation very soon. In the meantime, the below template should get you going. Your old template will also start working in a couple of days.
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "databaseAccountName": {
      "type": "string"
    },
    "locationName1": {
      "type": "string"
    }
  },
  "variables": {},
  "resources": [
    {
      "apiVersion": "2015-04-08",
      "kind": "GlobalDocumentDB",
      "type": "Microsoft.DocumentDb/databaseAccounts",
      "name": "[parameters('databaseAccountName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "databaseAccountOfferType": "Standard",
        "locations": [
          {
            "id": "[concat(parameters('databaseAccountName'), '-', resourceGroup().location)]",
            "failoverPriority": 0,
            "locationName": "[parameters('locationName1')]"
          }
        ]
      }
    }
  ]
}
Edit:
locationName1 should be in the format of the "Azure Regions" column on this page: https://azure.microsoft.com/en-us/regions/
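For example, a parameter file passing a region in that format might look like this (the account name is made up):

```json
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "databaseAccountName": {
      "value": "mydocdbaccount"
    },
    "locationName1": {
      "value": "East US"
    }
  }
}
```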