DnsNameLabelNotSupported - VSTS ARM Template for Azure Container Instances - azure-pipelines-build-task

After adding the "dnsNameLabel" value to my ARM template for Azure Container Instances, I got this message:
2018-07-03T14:31:14.8518944Z ##[error]At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
2018-07-03T14:31:14.8571875Z ##[error]Details:
2018-07-03T14:31:14.8616789Z ##[error]BadRequest: {
    "error": {
        "code": "DnsNameLabelNotSupported",
        "message": "DNS name label for container group is not supported before version '2018-02-01-preview'."
    }
}
Excerpt from arm-template.json:
...
"osType": "[variables('osType')]",
"ipAddress": {
    "type": "Public",
    "dnsNameLabel": "rabbitmq",
    "ports": [
        {
            "protocol": "tcp",
            "port": "15672"
        }
    ]
},
...
P.S. I'm deploying using VSTS's Azure Resource Group Deployment task.

The problem was caused by the "apiVersion" key in the ARM template file. It had to be updated to a newer version of the API. Navigating to the GitHub ARM templates repo, you can see which version is the latest.
Updating it to the latest solved the problem.
Another suggestion is to use a JSON schema validator to make sure the contents of the .json file match the schema.
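As an illustrative sketch, the fix is to raise the resource's apiVersion to at least the version named in the error message (any later version also works). The resource name variable and surrounding fields below are assumptions, and the required containers array is omitted for brevity:

```json
{
    "type": "Microsoft.ContainerInstance/containerGroups",
    "apiVersion": "2018-02-01-preview",
    "name": "[variables('containerGroupName')]",
    "location": "[resourceGroup().location]",
    "properties": {
        "osType": "[variables('osType')]",
        "ipAddress": {
            "type": "Public",
            "dnsNameLabel": "rabbitmq",
            "ports": [
                {
                    "protocol": "tcp",
                    "port": "15672"
                }
            ]
        }
    }
}
```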

Related

How can I create a Data Explorer Stream Analytics output using ARM templates?

I successfully manually configured a Stream Analytics Job that outputs data into Data Explorer. However, I am unable to set up my infrastructure pipeline using ARM templates. All the documentation only describes the other types of output (e.g. ServiceBus).
I also tried exporting the template in the resource group, but this does not work for my Stream Analytics Job.
How can I configure this output using ARM templates?
For many such issues, you can use the following process:
Set up the desired configuration manually
Check https://resources.azure.com and find your resource
OR export the resource using the export template feature
In this case, resources.azure.com is sufficient.
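The page you browse on resources.azure.com corresponds to a plain ARM REST GET. As a sketch, the URL for a Stream Analytics job output looks like this (the provider path matches the resource type shown in this answer; the api-version default is an assumption):

```python
# Build the ARM REST URL that resources.azure.com browses for a
# Stream Analytics job output (default api-version is an assumption).
def arm_output_url(subscription, resource_group, job, output,
                   api_version="2017-04-01-preview"):
    return ("https://management.azure.com"
            f"/subscriptions/{subscription}"
            f"/resourceGroups/{resource_group}"
            "/providers/Microsoft.StreamAnalytics"
            f"/streamingjobs/{job}/outputs/{output}"
            f"?api-version={api_version}")
```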
If you look at the Data Explorer output resource, you will see the required ARM representation that you can use in your ARM template:
"name": "xxxxxxx",
"type": "Microsoft.StreamAnalytics/streamingjobs/outputs",
"properties": {
    "datasource": {
        "type": "Microsoft.Kusto/clusters/databases",
        "properties": {
            "cluster": "https://yyyyy.westeurope.kusto.windows.net",
            "database": "mydatabase",
            "table": "mytable",
            "authenticationMode": "Msi"
        }
    },
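Assembled into a deployable child resource, that excerpt would look roughly like this. The output name, the jobName parameter, and the api-version are assumptions; the datasource block is the one captured from resources.azure.com:

```json
{
    "name": "[concat(parameters('jobName'), '/adx-output')]",
    "type": "Microsoft.StreamAnalytics/streamingjobs/outputs",
    "apiVersion": "2017-04-01-preview",
    "properties": {
        "datasource": {
            "type": "Microsoft.Kusto/clusters/databases",
            "properties": {
                "cluster": "https://yyyyy.westeurope.kusto.windows.net",
                "database": "mydatabase",
                "table": "mytable",
                "authenticationMode": "Msi"
            }
        }
    }
}
```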

What is the difference between Pipelines - Create and Definitions - Create in Azure DevOps REST API?

I was trying to create a YAML pipeline with the Pipelines - Create Azure DevOps REST API and it was throwing an exception 'No pool was specified' even though I had specified a pool in the YAML file. More details of this issue are available here.
Please find below the request body used for creating the pipeline.
{
    "folder": "",
    "name": "pipeline-by-api",
    "configuration": {
        "type": "yaml",
        "path": "/azure-pipelines.yml",
        "repository": {
            "id": "00000000-0000-0000-0000-000000000000",
            "name": "repo-by-api",
            "type": "azureReposGit"
        }
    }
}
Then I identified a second REST API, Definitions - Create, and using this I was able to successfully create a pipeline. Please find below the request body used for creating the build definition.
{
    "process": {
        "yamlFilename": "azure-pipelines.yml"
    },
    "queue": {
        "pool": {
            "name": "Azure Pipelines"
        }
    },
    "repository": {
        "id": "00000000-0000-0000-0000-000000000000",
        "type": "TfsGit",
        "name": "repo-by-api",
        "defaultBranch": "refs/heads/master"
    },
    "name": "pipeline-by-api",
    "path": "\\API-Test",
    "type": "build",
    "queueStatus": "enabled"
}
I would like to understand the difference between the two. I tried Definitions - Create because Pipelines - Create was not working for me. But is this a correct way of creating a pipeline?
Definitions - Create is an older endpoint; it was available before YAML pipelines became first-class citizens. Pipelines - Create is a newer endpoint suited for YAML pipelines. Both can be used to create a pipeline, and if you change the API version to 4.1 you will see that Pipelines is not available.
If I had to guess, they found a reason to create a new endpoint for handling YAML pipelines, probably to avoid some breaking changes, but this is only a guess.
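For reference, a minimal sketch of how the two request bodies from the question map onto their endpoints. The URL shapes follow the public Azure DevOps REST docs; organization, project, repository ID, and api-version values are placeholders:

```python
# Sketch: build URL + body for the newer Pipelines - Create endpoint
# and the older Definitions - Create endpoint (placeholders throughout).
import json

def pipelines_create(org, project, repo_id, api_version="6.0-preview.1"):
    """Newer endpoint: POST .../_apis/pipelines"""
    url = (f"https://dev.azure.com/{org}/{project}"
           f"/_apis/pipelines?api-version={api_version}")
    body = {
        "folder": "",
        "name": "pipeline-by-api",
        "configuration": {
            "type": "yaml",
            "path": "/azure-pipelines.yml",
            "repository": {"id": repo_id, "name": "repo-by-api",
                           "type": "azureReposGit"},
        },
    }
    return url, json.dumps(body)

def definitions_create(org, project, repo_id, api_version="6.0"):
    """Older endpoint: POST .../_apis/build/definitions"""
    url = (f"https://dev.azure.com/{org}/{project}"
           f"/_apis/build/definitions?api-version={api_version}")
    body = {
        "process": {"yamlFilename": "azure-pipelines.yml"},
        "queue": {"pool": {"name": "Azure Pipelines"}},
        "repository": {"id": repo_id, "type": "TfsGit",
                       "name": "repo-by-api",
                       "defaultBranch": "refs/heads/master"},
        "name": "pipeline-by-api",
        "path": "\\API-Test",
        "type": "build",
        "queueStatus": "enabled",
    }
    return url, json.dumps(body)
```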

Azure Data Factory publishing error "404 - File or directory not found"

I have three Data Factories in Azure. I've made several changes to pipelines in the Data Factories (different in each) and now I am no longer able to publish from the Data Factory UI. Previously, publishing worked just fine. I believe the issue started after making changes in the UI and running a DevOps pipeline. The pipeline, however, does not deploy anything to the data factories. It simply makes an artifact of the ADF content.
In two out of three data factories I've made changes to:
Pipelines: changing the target of the copy activity from blob storage to ADLS
Adding a linked service for an on-premises SQL Server.
In the third data factory I made no changes, but the error shows there as well.
It displays the following error (I've removed sensitive details) in all ADFs:
Publishing error
{"_body":"<HTML error page> Server Error. 404 - File or directory not found. The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable.","status":404,"ok":false,"statusText":"OK","headers":{},"url":"https://management.azure.com/subscriptions/<subscription id>/resourcegroups/<resource group>/providers/Microsoft.DataFactory/factories/<adf name>/mds/databricks%20notebooks.md?api-version=2018-06-01"}
Clicking on 'Details' gives the following information on the error:
Error code: OK
Inner error code: undefined
Message: undefined
The data factories are almost exact replicas, apart from some additional pipelines and linked services. One of the data factories has a databricks instance in the same resource group and is connected to that. Pipelines have always run successfully. The other data factories have the same linked service for databricks, but have no databricks workspace. It's only there as a template.
The JSON of the databricks linked service looks like this, after removing secret names:
{
    "properties": {
        "type": "AzureDatabricks",
        "annotations": [],
        "typeProperties": {
            "domain": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "LS_keyvault",
                    "type": "LinkedServiceReference"
                },
                "secretName": ""
            },
            "authentication": "MSI",
            "workspaceResourceId": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "LS_keyvault",
                    "type": "LinkedServiceReference"
                },
                "secretName": ""
            },
            "existingClusterId": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "LS_keyvault",
                    "type": "LinkedServiceReference"
                },
                "secretName": ""
            }
        }
    }
}
Solutions I've tried:
Added Databricks as a resource provider in the subscription, but the same error still shows.
In the data factory that is actually connected to Databricks, updated the Databricks notebooks path to reference the true location of the notebooks.
The error suggests to me that the issue is related to databricks, but I can't pinpoint the problem. Has anyone solved this issue before?
Thanks!
I've seen similar issues when working directly against the main branch. The Publish branch can get stale/out of sync with main, specifically when items get moved or renamed. Here is another post on a related issue; the solution there may help with your situation.

Azure IoT Hub Deployment: Default eventHub endpoint 'operationsMonitoringEvents' is missing

Recently I had issues deploying an IoT Hub. I used an Azure Resource Manager (ARM) template that had worked so far but then resulted in the error Default eventHub endpoint 'operationsMonitoringEvents' is missing. Below is what you have to add to achieve a successful deployment.
You have to add the following section to the IoT Hub ARM template:
"operationsMonitoringEvents": {
    "retentionTimeInDays": "[parameters('retentionDays')]",
    "partitionCount": "[parameters('partitionCount')]"
}
This section is not required when creating a new IoT Hub: if it is missing from the ARM template, the deployment adds it automatically at deployment time. But when you then do an incremental deployment with the same ARM template (which still does not contain the section), it is compared against what is already deployed in the portal, and that mismatch causes the above error.
We also faced this error in the past and resolved it by adding the above code to the ARM template.
You need to add the eventHub endpoint 'operationsMonitoringEvents':
"operationsMonitoringEvents": {
    "retentionTimeInDays": "[parameters('opMonRetentionTimeInDays')]",
    "partitionCount": "[parameters('opMonPartitionCount')]",
    "path": "[concat(parameters('iotHubName'),'-operationmonitoring')]",
    "endpoint": "[parameters('opMonEndpoint')]"
}
The endpoint can be found e.g. via the portal here
Additionally, you can configure operations monitoring, e.g. via:
"operationsMonitoringProperties": {
    "events": {
        "None": "None",
        "Connections": "None",
        "DeviceTelemetry": "None",
        "C2DCommands": "None",
        "DeviceIdentityOperations": "None",
        "FileUploadOperations": "None",
        "Routes": "None"
    }
}
Edit: as mentioned by Dipti Mamidala, it is also enough to add only:
"operationsMonitoringEvents": {
    "retentionTimeInDays": "[parameters('opMonRetentionTimeInDays')]",
    "partitionCount": "[parameters('opMonPartitionCount')]"
}
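For context, a minimal sketch of where such endpoint sections sit inside the IoT Hub resource. The sku, api-version, and parameter names are assumptions; in the older api-versions that model operations monitoring, the operationsMonitoringEvents endpoint lives alongside the default events endpoint under eventHubEndpoints:

```json
{
    "type": "Microsoft.Devices/IotHubs",
    "apiVersion": "2017-07-01",
    "name": "[parameters('iotHubName')]",
    "location": "[resourceGroup().location]",
    "sku": {
        "name": "S1",
        "capacity": 1
    },
    "properties": {
        "eventHubEndpoints": {
            "events": {
                "retentionTimeInDays": "[parameters('retentionDays')]",
                "partitionCount": "[parameters('partitionCount')]"
            },
            "operationsMonitoringEvents": {
                "retentionTimeInDays": "[parameters('retentionDays')]",
                "partitionCount": "[parameters('partitionCount')]"
            }
        }
    }
}
```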

Can an Azure ARM nested template be deployed with a mode of Complete?

All the examples have the mode of nested templates set to 'Incremental'.
When I set it to 'Complete', I get the following error:
error: InvalidNestedDeploymentMode : Specified deployment mode 'Complete' is not supported for nested deployment 'shared'. Please see https://aka.ms/arm-deploy for usage details.
error: Deployment validate failed.
error: Error information has been recorded to /Users/.../.azure/azure.err
verbose: Error: Deployment validate failed.
I've tried running the deployment creation with both Incremental and Complete mode, getting the same error.
I wasn't sure if this was even possible; I can't find any docs related to the error 'InvalidNestedDeploymentMode'.
Portion of the ARM template:
{
    "name": "[concat('node', copyIndex())]",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "dependsOn": [
        "[resourceId('Microsoft.Resources/deployments', 'shared')]"
    ],
    "copy": {
        "name": "nodecopy",
        "count": "[parameters('vmCount')]"
    },
    "properties": {
        "mode": "Complete",
        "templateLink": {
            "uri": "...",
            "contentVersion": "1.0.0.0"
        }
    }
}
Can an Azure ARM nested template be deployed with a mode of Complete?
Firstly, you can read about the Incremental and Complete modes used to deploy resources in this documentation.
Besides, as Andrew W said, only the root-level template allows Complete as the deployment mode. If you use Azure PowerShell with Resource Manager templates to deploy your resources to Azure and pass the -Debug parameter, you can see the detailed error message.
See the note under "Link Templates for Deployment".
TL;DR: if your nested template targets the same resource group as the top-level template and you deploy the top-level template in "Complete" mode, the nested template will be treated as being deployed in "Complete" mode; otherwise it will be deployed in "Incremental" mode.
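So the nested deployment resource itself has to declare Incremental, and you control Complete behavior at the root deployment. A sketch of the accepted form, changing only the mode from the question's excerpt (the templateLink uri stays whatever your template uses):

```json
{
    "name": "[concat('node', copyIndex())]",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "copy": {
        "name": "nodecopy",
        "count": "[parameters('vmCount')]"
    },
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "...",
            "contentVersion": "1.0.0.0"
        }
    }
}
```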
