I created a Cosmos DB account, database, and containers using this ARM template. Deployment via an Azure DevOps Release pipeline is working.
I used this ARM template to adjust the database throughput. It is also in a Release pipeline and is working.
Currently the throughput is provisioned at the database level and shared across all containers. How do I provision throughput at the container level? I tried running this ARM template to update the throughput at the container level. It appears that once shared throughput is provisioned at the database level, there's no way to provision throughput at the container level.
I found this reference document, but throughput is not listed. Am I missing something super obvious, or is the desired functionality not implemented yet?
UPDATE:
When attempting to update the container with the above template I get the following:
2019-05-29T20:25:10.5166366Z There were errors in your deployment. Error code: DeploymentFailed.
2019-05-29T20:25:10.5236514Z ##[error]At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
2019-05-29T20:25:10.5246027Z ##[error]Details:
2019-05-29T20:25:10.5246412Z ##[error]NotFound: {
"code": "NotFound",
"message": "Entity with the specified id does not exist in the system.\r\nActivityId: 7ba84...b52b2, Microsoft.Azure.Documents.Common/2.4.0.0"
} undefined
2019-05-29T20:25:10.5246730Z ##[error]Task failed while creating or updating the template deployment.
I was also experiencing the same error:
"code": "NotFound",
"message": "Entity with the specified id does not exist in the system.
I was deploying an ARM template via DevOps pipeline to change the configuration of an existing resource in Azure.
The existing resource had dedicated throughput defined at the container/collection level, and my ARM template was trying to define the throughput at the database level...
Once adjusted, my deployment pipeline worked.
Here is some info on my throughput provisioning fix: https://github.com/MicrosoftDocs/azure-docs/issues/30853
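For reference, the fix amounted to making the template match where the throughput actually lives. Below is a minimal sketch (variable names are illustrative, not copied from my real template) of a database resource that defines no shared throughput, so that throughput can instead be set only on the individual container:
{
  "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases",
  "name": "[concat(variables('accountName'), '/sql/', variables('databaseName'))]",
  "apiVersion": "2016-03-31",
  "dependsOn": [ "[resourceId('Microsoft.DocumentDB/databaseAccounts', variables('accountName'))]" ],
  "properties": {
    "resource": {
      "id": "[variables('databaseName')]"
    },
    "options": {}
  }
}
With the empty options block the database no longer claims shared throughput, and the throughput option on the container resource (as in the template shown in the next answer) is what takes effect.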
I believe you have to create the container with dedicated throughput first. I have not seen any documentation for changing a container from shared to dedicated throughput. In the Microsoft documentation, the example creates containers with both shared and dedicated throughput.
Set throughput on a database and a container
You can combine the two models. Provisioning throughput on both the database and the container is allowed. The following example shows how to provision throughput on an Azure Cosmos database and a container:
You can create an Azure Cosmos database named Z with provisioned throughput of "K" RUs.
Next, create five containers named A, B, C, D, and E within the database. When creating container B, make sure to enable Provision dedicated throughput for this container option and explicitly configure "P" RUs of provisioned throughput on this container. Note that you can configure shared and dedicated throughput only when creating the database and container.
The "K" RUs throughput is shared across the four containers A, C, D, and E. The exact amount of throughput available to A, C, D, or E varies. There are no SLAs for each individual container’s throughput.
The container named B is guaranteed to get the "P" RUs throughput all the time. It's backed by SLAs.
There is a prereq ARM template in a subfolder for the 101-cosmosdb-sql-container-ru-update. In the prereq version, the container has the throughput property set when the container is created. After the container is created with dedicated throughput, the update template works without error. I have tried it out and verified that it works.
{
  "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases",
  "name": "[concat(variables('accountName'), '/sql/', variables('databaseName'))]",
  "apiVersion": "2016-03-31",
  "dependsOn": [ "[resourceId('Microsoft.DocumentDB/databaseAccounts/', variables('accountName'))]" ],
  "properties": {
    "resource": {
      "id": "[variables('databaseName')]"
    },
    "options": { "throughput": "[variables('databaseThroughput')]" }
  }
},
{
  "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases/containers",
  "name": "[concat(variables('accountName'), '/sql/', variables('databaseName'), '/', variables('containerName'))]",
  "apiVersion": "2016-03-31",
  "dependsOn": [ "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases', variables('accountName'), 'sql', variables('databaseName'))]" ],
  "properties": {
    "resource": {
      "id": "[variables('containerName')]",
      "partitionKey": {
        "paths": [
          "/MyPartitionKey1"
        ],
        "kind": "Hash"
      }
    },
    "options": { "throughput": "[variables('containerThroughput')]" }
  }
}
Related
I am attempting to deploy an Azure Storage account along with an indeterminate number of tables via an ARM template.
Since MS are yet to provide a tables resource type for ARM, I'm instead using Azure Container Instances to spin up a container running azure-cli and then create the tables that way.
As you can see in my example below, I'm using property iteration to create multiple containers - one for each table. This seemed to be working until the number of tables to create changed, and then I started getting errors.
The updates on container group 'your-aci-instance' are invalid. If you are going to update the os type, restart policy, network profile, CPU, memory or GPU resources for a container group, you must delete it first and then create a new one.
I understand what it's saying, but it does seem strange to me that you can create a container group yet not alter the group of containers within.
As ARM doesn't allow you to delete resources, I'd have to add a manual step to my deployment process to ensure that the ACI doesn't exist, which isn't really desirable.
Equally undesirable would be to use resource iteration to create multiple ACIs - there would be the possibility of many ACIs being strewn about the Resource Group that will never be used again.
Is there some ARM magic that I don't yet know about which can help me achieve the creation of tables that meets the following criteria?
Only a single ACI is created.
The number of tables to be created can change.
Notes
I have tried to use variable iteration to create a single 'command' array for a single container, but it seems that ACI treats the whole array as a single command, so this caused an error.
Further reading suggests that it is only possible to run one command on container startup (see the sketch after these notes for a possible workaround).
How do I run multiple commands when deploying a container group?
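One workaround I've seen suggested (not verified here) is to collapse everything into a single shell invocation, since the command array is handed to one process. Assuming the azure-cli image provides /bin/sh, the container's command could look something like this, with placeholder table names baked into one string:
"command": [
  "/bin/sh",
  "-c",
  "az storage table create --name MyTable1 && az storage table create --name MyTable2"
]
MyTable1 and MyTable2 are placeholders; in practice the string would have to be built from the tables variable (e.g. with concat) before being passed in, so a single container could create a variable number of tables.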
Current attempt
Here is a snippet from my ARM template showing how I used property iteration to try and achieve my goal.
{
  "condition": "[not(empty(variables('tables')))]",
  "type": "Microsoft.ContainerInstance/containerGroups",
  "name": "[parameters('containerInstanceName')]",
  "apiVersion": "2018-10-01",
  "location": "[resourceGroup().location]",
  "properties": {
    "copy": [
      {
        "name": "containers",
        "count": "[max(length(variables('tables')), 1)]",
        "input": {
          "name": "[toLower(variables('tables')[copyIndex('containers')])]",
          "properties": {
            "image": "microsoft/azure-cli",
            "command": [
              "az",
              "storage",
              "table",
              "create",
              "--name",
              "[variables('tables')[copyIndex('containers')]]"
            ],
            "environmentVariables": [
              {
                "name": "AZURE_STORAGE_KEY",
                "value": "[listkeys(parameters('storageAccount_Application_Name'), '2019-04-01').keys[0].value]"
              },
              {
                "name": "AZURE_STORAGE_ACCOUNT",
                "value": "[parameters('storageAccount_Application_Name')]"
              },
              {
                "name": "DATETIME",
                "value": "[parameters('dateTime')]"
              }
            ],
            "resources": {
              "requests": {
                "cpu": "1",
                "memoryInGb": "1.5"
              }
            }
          }
        }
      }
    ],
    "restartPolicy": "OnFailure",
    "osType": "Linux"
  },
  "dependsOn": [
    "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount_Application_Name'))]"
  ],
  "tags": {
    "displayName": "Application Storage Account - Tables",
    "Application": "[parameters('tagApplication')]",
    "environment": "[parameters('tagEnvironment')]",
    "version": "[parameters('tagVersion')]"
  }
}
If it says the field is immutable, then it is; there's nothing you can really do about it. You can always create a unique name for that container instance, use complete deployment mode, and deploy only the ACI to this particular resource group. That way the group will always contain only this ACI instance, older instances will get deleted, and you work around the immutability.
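As a rough sketch of the unique-name idea (reusing the dateTime parameter already present in your template, purely as an example), the container group name could be salted per deployment so each run creates a fresh instance instead of updating an existing one:
"name": "[concat(parameters('containerInstanceName'), '-', uniqueString(parameters('dateTime')))]"
Combined with complete deployment mode scoped to a resource group that holds only this ACI, the previous instance is cleaned up on each run.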
Alternatively, you can call an Azure Function (HTTP trigger) from inside the template, pass in the names of the storage tables to create, and have it create them, for example.
But either way it's a hack.
I am looking for a way to create, but not update, the SKU of a PaaS SQL server when it is deployed by an ARM template, while all other changes in the template are still deployed.
I have an ARM template representing my current infrastructure stack, which is deployed as part of our CI.
One of the things specified in the file is the size scale of our PaaS database, eg:
"sku": {
"name": "BC_Gen4",
"tier": "BusinessCritical",
"family": "Gen4",
"capacity": 2
}
Because of a temporary high workload, I have scaled the number of CPUs up to 4 (or even 8). Is there any way I can deploy the template without it forcibly scaling my database back down to the specified SKU?
resources.azure.com shows that there are other attributes that relate to scaling.
Ideally this would be set to something like 'if this resource doesn't exist then set it to X, otherwise use the existing currentServiceObjectiveName/currentSku'
"kind": "v12.0,user,vcore",
"properties": {
"currentServiceObjectiveName": "BC_Gen4_2",
"requestedServiceObjectiveName": "BC_Gen4_2",
"currentSku": {
"name": "BC_Gen4",
"tier": "BusinessCritical",
"family": "Gen4",
"capacity": 2
}
}
At the moment our infrastructure is deployed via VSTS Azure Resource Group Deployment V2.* in 'create or update resource group, complete' mode.
This is not possible in ARM templates; you have to use an external source to make that decision, not the ARM template. You cannot really pull data into the ARM template, so you probably need to look up the SKU externally and pass it to the template.
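A sketch of what that might look like on the template side (the parameter names here are made up for illustration): leave the sku block parameterised, with defaults matching your baseline, and have the pipeline look up the current values and override them at deploy time:
"sku": {
  "name": "[parameters('databaseSkuName')]",
  "tier": "[parameters('databaseSkuTier')]",
  "family": "[parameters('databaseSkuFamily')]",
  "capacity": "[parameters('databaseSkuCapacity')]"
}
The pipeline step that queries the existing database (if it exists) and feeds these parameter values is the part that has to live outside the template.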
I receive the error on my output dataset in Azure data factory.
"HDInsight region is not supported. Region code: ln."
It's a little odd, as I'm not using HDInsight; it's a pipeline with a custom activity in C# running on Azure Batch and two storage accounts, for experimentation purposes.
The data factory is in North Europe and the rest is in UK South.
Does HDInsight perhaps power the data movement?
Reading the FAQ the location of the compute and storage resource can be in separate regions?
Edit:
Here is the activity JSON from inside the pipeline:
"activities": [
{
"type": "DotNetActivity",
"typeProperties": {
"assemblyName": "AzureBatchDemoActivity.dll",
"entryPoint": "AzureBatchDemoActivity.DemoActivity",
"packageLinkedService": "AzureStorageLinkedService",
"packageFile": "/demoactivitycontainer/AzureBatchDemoActivity.zip",
"extendedProperties": {
"SliceStart": "$$Text.Format('{0:yyyyMMddHH-mm}', Time.AddMinutes(SliceStart, 0))"
}
},
"inputs": [
{
"name": "InputDataset"
}
],
"outputs": [
{
"name": "OutputDataset"
}
],
"policy": {
"timeout": "00:30:00",
"concurrency": 2,
"retry": 3
},
"scheduler": {
"frequency": "Hour",
"interval": 1
},
"name": "DemoActivity",
"linkedServiceName": "AzureBatchLinkedService"
}
],
I've been in contact with Azure support in tandem, a very prompt response from them!
It appears to be an incorrect error message when using custom activities along with storage accounts in regions which don't support data movement.
I see, re-reading the documentation, that there is a subtlety:
the service powering the data movement in Data Factory is available globally in several regions.
-- (supported regions)
I read “globally” incorrectly as meaning everywhere, but I should have read it as meaning in particular regions around the globe.
I assume that even though I'm using a custom activity, because source and destination storage accounts are involved it's implicitly considered a "data movement" operation.
I had a similar issue (identical error message) running HDInsightOnDemand. There were no problems with the regions of the storage account.
The problem was that the cluster details were not specified in the LinkedService. I guess ADF was confused about which cluster to create: Linux or Windows, Hadoop or Spark.
Anyway, the solution was to add the following properties to the HDInsightLinkedService:
"properties": {
"type": "HDInsightOnDemand",
"typeProperties": {
"clusterType": "Hadoop",
"osType": "linux",
"version": "3.5",
...
I had this exact issue and found out it is an Azure bug. 'du' is an internal code for a data center in the North Europe region.
HDInsight or storage of Azure Batch region is not supported. Region code: du.
Two resource groups deployed via the same script to the same region produced one working and one broken Data Factory resource. An Azure support engineer told me it was because a data center in that region was new and had not been whitelisted yet.
The recommended workaround was to redeploy the environment and hope the Storage Account would be deployed to a different, whitelisted data center in that region.
I'm trying to learn Azure Resource Templates and am trying to understand the workflow behind when to use them and when to use the REST API.
My sense is that creating a Virtual Network and Subnets in Azure is a fairly uncommon occurrence; once you get that set up as you want, you don't modify it too frequently, you deploy things into that structure.
So with regard to an ARM Template let's say I have a template with resources for VNET and Subnet. To take the example from https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-template-walkthrough#virtual-network-and-subnet I might have:
{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Network/virtualNetworks",
  "name": "[parameters('vnetName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [
        "10.0.0.0/16"
      ]
    },
    "subnets": [
      {
        "name": "[variables('subnetName')]",
        "properties": {
          "addressPrefix": "10.0.0.0/24"
        }
      }
    ]
  }
}
which I deploy to a Resource Group. Let's say I then add a Load Balancer and redeploy the template. In this case the user gets asked to supply the value for the vnetName parameter again and, of course, cannot supply the same value, so we would end up with another VNET, which is not what we want.
So is the workflow that you define your ARM Template (VNET, LBs, Subnets, NICs etc.) in one go and then deploy? Then when you want to deploy VMs, Scale Sets etc. you use the REST API to deploy them to the Resource Group / VNET Subnet? Or is there a way to incrementally build up an ARM Template and deploy it numerous times in a way that, if a VNET already exists (for example), the user is not prompted to supply details for another one?
I've read around and seen incremental mode (default unless complete is specified) but not sure if this is relevant and if it is how to use it.
Many thanks for any help!
Update
OK so I can now use azure group deployment create -f azuredeploy.json -g ARM-Template-Tests -m Incremental and have modified the VNET resource in my template from
{
  "apiVersion": "2016-09-01",
  "type": "Microsoft.Network/virtualNetworks",
  "name": "[variables('virtualNetworkName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [
        "[variables('addressPrefix')]"
      ]
    },
    "subnets": [
      {
        "name": "[variables('subnetName')]",
        "properties": {
          "addressPrefix": "[variables('subnetPrefix')]"
        }
      }
    ]
  }
},
to
{
  "apiVersion": "2015-05-01-preview",
  "type": "Microsoft.Network/virtualNetworks",
  "name": "[parameters('virtualNetworkName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [
        "[parameters('addressPrefix')]"
      ]
    },
    "subnets": [
      {
        "name": "[parameters('subnet1Name')]",
        "properties": {
          "addressPrefix": "[parameters('subnet1Prefix')]"
        }
      },
      {
        "name": "[parameters('gatewaySubnet')]",
        "properties": {
          "addressPrefix": "[parameters('gatewaySubnetPrefix')]"
        }
      }
    ]
  }
},
but the subnets don't change. Should they, using azure group deployment create -f azuredeploy.json -g ARM-Template-Tests -m Incremental?
I am going to piggyback on this Azure documentation, referencing the appropriate section below:
Incremental and complete deployments
When deploying your resources, you specify that the deployment is either an incremental update or a complete update. By default, Resource Manager handles deployments as incremental updates to the resource group.
With incremental deployment, Resource Manager:
leaves unchanged resources that exist in the resource group but are not specified in the template
adds resources that are specified in the template but do not exist in the resource group
does not reprovision resources that exist in the resource group in the same condition defined in the template
reprovisions existing resources that have updated settings in the template
With complete deployment, Resource Manager:
deletes resources that exist in the resource group but are not specified in the template
adds resources that are specified in the template but do not exist in the resource group
does not reprovision resources that exist in the resource group in the same condition defined in the template
reprovisions existing resources that have updated settings in the template
Whether to choose incremental or complete update depends on whether you have resources that are in use. If the DevOps requirement is to always have resources in sync with what is defined in the JSON template, then complete update mode should be used. The biggest benefit of using templates and source code for deploying resources is preventing configuration drift, and complete update mode is the better fit for that.
As for specifying the parameters: if you specify them in a parameters file, you don't have to specify them again at every deployment.
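For example, a parameters file for the VNET template above might look something like this (a sketch reusing the vnetName parameter from the question; the value is a placeholder):
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vnetName": {
      "value": "my-vnet"
    }
  }
}
Passing this file on every deployment (e.g. with -TemplateParameterFile) means the user is never prompted for vnetName again, and incremental mode leaves the existing VNET in place.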
A new template can be deployed in incremental mode, which will add new resources to the existing resource group. Define only the new resources in the template; existing resources will not be altered.
From PowerShell, use the following cmdlet:
New-AzureRmResourceGroupDeployment -ResourceGroupName "YourResourceGroupName" -TemplateFile "path\to\template.json" -Mode Incremental -Force
My rule of thumb is: for things that I want to tear down, or for things I want to replicate across Subscriptions, I use ARM templates.
For example, for things we want in test environments, I just ARM it up and build on the scripts as developers request things ("Hey I need a cache", "Oh by the way I need to start using a Service Bus"). Using incremental mode we can just push it out to Dev, and then as we migrate up through environments we just deploy to a different Subscription in Azure, and it should have everything ready to go.
Also, we've started provisioning our own Cloud Load Test agents in a VMSS: a simple ARM template that's called by a build to scale up to x machines, and when we're done we just trash the Resource Group. It's repeatable and reliable; sure, you could script it up, but TFS also has a task to deploy these things (with schedules too)...
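For the load test agents, the only part of that template the build really cares about is the instance count; something along these lines (the agentCount parameter name and VM size are invented for the sketch) is what gets overridden on each run:
"sku": {
  "name": "Standard_D2_v2",
  "tier": "Standard",
  "capacity": "[parameters('agentCount')]"
}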
One of the beautiful things I've come across is Key Vault. When you ARM it up and poke in all the values from your service buses, storage accounts, or whatever, you can simply take the connection strings/keys and put them straight into the Key Vault, so you never need to worry about them. And if you want to regenerate anything (say a dev wants to change the name of a cache, or accidentally posted a key to GitHub), you just redeploy (often I'll just trash the whole Resource Group) and it updates the vault for you.
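As a sketch of that Key Vault trick (resource names and parameters here are placeholders, and it assumes the vault and storage account are in the same template or resource group), a secret resource can read the storage key with listKeys and drop a ready-made connection string into the vault:
{
  "type": "Microsoft.KeyVault/vaults/secrets",
  "name": "[concat(parameters('keyVaultName'), '/StorageConnectionString')]",
  "apiVersion": "2018-02-14",
  "dependsOn": [
    "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
  ],
  "properties": {
    "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', parameters('storageAccountName'), ';AccountKey=', listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2019-04-01').keys[0].value)]"
  }
}
Because the value is evaluated at deployment time, redeploying after regenerating a key refreshes the secret automatically.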
I'm publishing an Azure Resource Manager template using the AzureRm Powershell commandlets, and everything seems to be working pretty well. Except when it's provisioning a resource of type "Microsoft.Storage/storageAccounts" it is taking over 10 minutes to complete. Is this in line with expectations? I don't recall it taking this long previously.
I'm deploying to East US, with Standard_LRS storage type.
{
  "name": "[parameters('storageName')]",
  "type": "Microsoft.Storage/storageAccounts",
  "location": "[parameters('deployLocation')]",
  "apiVersion": "2015-06-15",
  "dependsOn": [ ],
  "tags": {
    "displayName": "[parameters('storageName')]"
  },
  "properties": {
    "accountType": "[parameters('storageType')]"
  }
}
It is an interesting question, and I will probably spend a little time getting metrics later!
However, I suspect you are correct, and there are reasons why that would be the case.
When you create a Storage Account in PowerShell, whether via Service Management or Resource Manager, you submit a job to create an account and that is all that needs to occur.
When you deploy a template, there are a number of steps that need to be completed, such as:
Template validation
Job ordering / resolving dependencies (even if there is only a single step, this would be part of the pipeline)
Comparison with existing infrastructure (because templates are idempotent, there needs to be a check of what currently exists)
Creating a deployment job
The actual deployment
Each step in the pipeline is (very likely) performed through a queue, so even a few seconds for each step to dequeue will add up.
If you enable verbose/debug logging, you will see a lot of this going on (especially with larger templates).