I am looking for a way to create, but not update, the SKU of a PaaS SQL server when deployed by an ARM template, while all other changes in the template are still deployed.
I have an ARM template representing my current infrastructure stack, which is deployed as part of our CI.
One of the things specified in the file is the size/scale of our PaaS database, e.g.:
"sku": {
  "name": "BC_Gen4",
  "tier": "BusinessCritical",
  "family": "Gen4",
  "capacity": 2
}
Because of a temporary high workload, I have scaled the number of CPUs up to 4 (or even 8). Is there any way I can deploy the template so that it doesn't forcibly down-scale my database back to the specified SKU?
resources.azure.com shows that there are other attributes related to scaling.
Ideally this would behave like 'if this resource doesn't exist, set it to X; otherwise keep the existing currentServiceObjectiveName/currentSku':
"kind": "v12.0,user,vcore",
"properties": {
  "currentServiceObjectiveName": "BC_Gen4_2",
  "requestedServiceObjectiveName": "BC_Gen4_2",
  "currentSku": {
    "name": "BC_Gen4",
    "tier": "BusinessCritical",
    "family": "Gen4",
    "capacity": 2
  }
}
At the moment our infrastructure is deployed via the VSTS Azure Resource Group Deployment (V2.*) task, using the 'Create or update resource group' action with 'Complete' deployment mode.
This is not possible in ARM templates; you have to use an external source to make that decision. You cannot really pull data into an ARM template, so you probably need to look up the SKU externally and pass it to the template.
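One way to structure this (a sketch; the parameter name and default value are illustrative, not from the original template) is to make the capacity a template parameter with a default, so an external pipeline step can look up the database's current SKU, when it already exists, and override the default:

```json
{
  "parameters": {
    "skuCapacity": {
      "type": "int",
      "defaultValue": 2
    }
  },
  "resources": [
    {
      "sku": {
        "name": "BC_Gen4",
        "tier": "BusinessCritical",
        "family": "Gen4",
        "capacity": "[parameters('skuCapacity')]"
      }
    }
  ]
}
```

On first deployment the default applies; on subsequent deployments the pipeline queries the existing database and passes the current capacity in, so the template never down-scales it.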
Related
I have performed discovery operations for listing protectable items in Azure Backup ('SQL in Azure VM').
I am able to perform discovery using the following template:
"resources": [
  {
    "type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers",
    "apiVersion": "2016-12-01",
    "name": "[concat(parameters('vaultName'), '/', parameters('fabricName'), '/', parameters('protectionContainers')[copyIndex()])]",
    "properties": {
      "backupManagementType": "[parameters('backupManagementType')]",
      "workloadType": "[parameters('workloadType')]",
      "containerType": "[parameters('protectionContainerTypes')[copyIndex()]]",
      "sourceResourceId": "[parameters('sourceResourceIds')[copyIndex()]]",
      "operationType": "Register"
    },
    "copy": {
      "name": "protectionContainersCopy",
      "count": "[length(parameters('protectionContainers'))]"
    }
  }
]
I similarly tried the following operation types:
"Reregister": works as expected.
"Invalid": did not perform any operation.
Could someone guide me on unregistering containers using an ARM template?
(I already have the API to do it, but I need it with an ARM template.)
Similarly, is there any way to rediscover DBs within a registered container using an ARM template?
Any help is much appreciated.
Looking at the Register API for Protection Containers, the supported values for OperationType are: Invalid, Register and Reregister. The Unregister API fires an HTTP DELETE request, which is not straightforward to simulate with an ARM template. ARM templates are primarily meant for creating and managing your Azure resources as an IaC solution.
That said, if you have ARM templates as your only option, you could try deploying it in Complete mode. In complete mode, Resource Manager deletes resources that exist in the resource group but aren't specified in the template.
To deploy a template in Complete mode, you'd have to set it explicitly using the Mode parameter since the default mode is incremental. Be sure to use the what-if operation before deploying a template in complete mode to avoid unintentionally deleting resources.
Azure Functions Consumption Plan running on Linux is now GA.
How can I provision such application with an ARM Template?
Basically, I want this template but on Linux.
If you want to create a new Linux Consumption plan, set the reserved property to true on Microsoft.Web/serverfarms (see the FAQ):
{
  "type": "Microsoft.Web/serverfarms",
  "apiVersion": "2016-09-01",
  "name": "[parameters('serverfarms_NorthEuropeLinuxDynamicPlan_name')]",
  "location": "North Europe",
  "sku": {
    "name": "Y1",
    "tier": "Dynamic",
    "size": "Y1",
    "family": "Y",
    "capacity": 0
  },
  "kind": "functionapp",
  "properties": {
    "name": "[parameters('serverfarms_NorthEuropeLinuxDynamicPlan_name')]",
    "reserved": true
  }
},
If you'd rather deploy a Function App to a built-in Linux Consumption plan, set the kind property on Microsoft.Web/sites:
"kind": "functionapp,linux"
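Combined with the plan above, a minimal Microsoft.Web/sites resource might look like the following sketch (the functionAppName parameter, apiVersion, and siteConfig are assumptions, not taken from a verified deployment):

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2016-08-01",
  "name": "[parameters('functionAppName')]",
  "location": "North Europe",
  "kind": "functionapp,linux",
  "dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms', parameters('serverfarms_NorthEuropeLinuxDynamicPlan_name'))]"
  ],
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('serverfarms_NorthEuropeLinuxDynamicPlan_name'))]",
    "reserved": true
  }
}
```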
See this link:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-infrastructure-as-code#create-a-consumption-plan
Primarily this line:
The Consumption plan cannot be explicitly defined for Linux. It will be created automatically.
One simple way I found through trial and error is to download the ARM template that Azure generates for this purpose, modify it with the desired naming convention for the consumption plan, and then deploy it to create the function app.
Steps are as below:
Downloading ARM template from Azure:
Try to create a new function app with a Linux consumption plan. At this point the Azure portal GUI does not allow us to select a name for the consumption plan (screenshot below).
Complete the other steps (monitoring, tags, etc.) and go to the "Review and Create" step. Let the validation pass here. Once this step is done, do not click the "Create" button. At the bottom right you can now see a "Download a template for automation" link. Click this link and download the template. Modify the parameters as required, and change the hosting plan name to the desired name.
Modify the parameters file and deploy to create function app:
In Azure, go to the "Custom deployment" blade, upload both the template and parameters files, and deploy the ARM template. It will create the function app with a Linux consumption plan (dynamic) using the desired naming convention.
I created a Cosmos Db account, database, and containers using this ARM template. Deployment via Azure DevOps Release pipeline is working.
I used this ARM template to adjust the database throughput. It is also in a Release pipeline and is working.
Currently the throughput is provisioned at the database level and shared across all containers. How do I provision throughput at the container level? I tried running this ARM template to update throughput at the container level. It appears that once shared throughput is provisioned at the database level there's no way to provision throughput at the container level.
I found this reference document but throughput is not listed. Am I missing something super obvious or is the desired functionality not implemented yet?
UPDATE:
When attempting to update the container with the above template I get the following:
2019-05-29T20:25:10.5166366Z There were errors in your deployment. Error code: DeploymentFailed.
2019-05-29T20:25:10.5236514Z ##[error]At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
2019-05-29T20:25:10.5246027Z ##[error]Details:
2019-05-29T20:25:10.5246412Z ##[error]NotFound: {
"code": "NotFound",
"message": "Entity with the specified id does not exist in the system.\r\nActivityId: 7ba84...b52b2, Microsoft.Azure.Documents.Common/2.4.0.0"
}
2019-05-29T20:25:10.5246730Z ##[error]Task failed while creating or updating the template deployment.
I was also experiencing the same error:
"code": "NotFound",
"message": "Entity with the specified id does not exist in the system.
I was deploying an ARM template via DevOps pipeline to change the configuration of an existing resource in Azure.
The existing resource had dedicated throughput defined at the container/collection level, and my ARM template was trying to define the throughput at the database level.
Once adjusted, my deployment pipeline worked.
Here is some info on my throughput provisioning fix: https://github.com/MicrosoftDocs/azure-docs/issues/30853
I believe you have to create the container with a dedicated throughput, first. I have not seen any documentation for changing a container from shared to dedicated throughput. In the Microsoft documentation, the example is creating containers with both shared and dedicated throughput.
Set throughput on a database and a container
You can combine the two models. Provisioning throughput on both the database and the container is allowed. The following example shows how to provision throughput on an Azure Cosmos database and a container:
You can create an Azure Cosmos database named Z with provisioned throughput of "K" RUs.
Next, create five containers named A, B, C, D, and E within the database. When creating container B, make sure to enable Provision dedicated throughput for this container option and explicitly configure "P" RUs of provisioned throughput on this container. Note that you can configure shared and dedicated throughput only when creating the database and container.
The "K" RUs throughput is shared across the four containers A, C, D, and E. The exact amount of throughput available to A, C, D, or E varies. There are no SLAs for each individual container’s throughput.
The container named B is guaranteed to get the "P" RUs throughput all the time. It's backed by SLAs.
There is a prerequisite ARM template in a subfolder of 101-cosmosdb-sql-container-ru-update. In the prereq version, the container has the throughput property set when it is created. After the container is created with dedicated throughput, the update template works without error; I have tried it out and verified that it works.
{
  "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases",
  "name": "[concat(variables('accountName'), '/sql/', variables('databaseName'))]",
  "apiVersion": "2016-03-31",
  "dependsOn": [ "[resourceId('Microsoft.DocumentDB/databaseAccounts/', variables('accountName'))]" ],
  "properties": {
    "resource": {
      "id": "[variables('databaseName')]"
    },
    "options": { "throughput": "[variables('databaseThroughput')]" }
  }
},
{
  "type": "Microsoft.DocumentDb/databaseAccounts/apis/databases/containers",
  "name": "[concat(variables('accountName'), '/sql/', variables('databaseName'), '/', variables('containerName'))]",
  "apiVersion": "2016-03-31",
  "dependsOn": [ "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases', variables('accountName'), 'sql', variables('databaseName'))]" ],
  "properties": {
    "resource": {
      "id": "[variables('containerName')]",
      "partitionKey": {
        "paths": [
          "/MyPartitionKey1"
        ],
        "kind": "Hash"
      }
    },
    "options": { "throughput": "[variables('containerThroughput')]" }
  }
}
I'm trying to learn Azure Resource Templates and am trying to understand the workflow behind when to use them and when to use the REST API.
My sense is that creating a virtual network and subnets in Azure is a fairly uncommon occurrence; once you have it set up the way you want, you don't modify it too frequently, you deploy things into that structure.
So with regard to an ARM Template let's say I have a template with resources for VNET and Subnet. To take the example from https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-template-walkthrough#virtual-network-and-subnet I might have:
{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Network/virtualNetworks",
  "name": "[parameters('vnetName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [
        "10.0.0.0/16"
      ]
    },
    "subnets": [
      {
        "name": "[variables('subnetName')]",
        "properties": {
          "addressPrefix": "10.0.0.0/24"
        }
      }
    ]
  }
}
which I deploy to a resource group. Let's say I then add a load balancer and redeploy the template. In this case the user is asked to supply the value for the vnetName parameter again and of course cannot supply the same value, so we would end up with another VNET, which is not what we want.
So is the workflow that you define your ARM Template (VNET, LBs, Subnets, NICs etc) in one go and then deploy? Then when you want to deploy VMs, Scale Sets etc you use the REST API to deploy then to the Resource Group / VNET Subnet? Or is there a way to incrementally build up an ARM Template and deploy it numerous times in a way that if a VNET already exists (for example) the user is not prompted to supply details for another one?
I've read around and seen incremental mode (the default unless complete is specified), but I'm not sure whether it is relevant and, if so, how to use it.
Many thanks for any help!
Update
OK so I can now use azure group deployment create -f azuredeploy.json -g ARM-Template-Tests -m Incremental and have modified the VNET resource in my template from
{
  "apiVersion": "2016-09-01",
  "type": "Microsoft.Network/virtualNetworks",
  "name": "[variables('virtualNetworkName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [
        "[variables('addressPrefix')]"
      ]
    },
    "subnets": [
      {
        "name": "[variables('subnetName')]",
        "properties": {
          "addressPrefix": "[variables('subnetPrefix')]"
        }
      }
    ]
  }
},
to
{
  "apiVersion": "2015-05-01-preview",
  "type": "Microsoft.Network/virtualNetworks",
  "name": "[parameters('virtualNetworkName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [
        "[parameters('addressPrefix')]"
      ]
    },
    "subnets": [
      {
        "name": "[parameters('subnet1Name')]",
        "properties": {
          "addressPrefix": "[parameters('subnet1Prefix')]"
        }
      },
      {
        "name": "[parameters('gatewaySubnet')]",
        "properties": {
          "addressPrefix": "[parameters('gatewaySubnetPrefix')]"
        }
      }
    ]
  }
},
but the subnets don't change. Should they, when using azure group deployment create -f azuredeploy.json -g ARM-Template-Tests -m Incremental?
I am going to piggyback on this Azure documentation. The appropriate section is referenced below:
Incremental and complete deployments
When deploying your resources,
you specify that the deployment is either an incremental update or a
complete update. By default, Resource Manager handles deployments as
incremental updates to the resource group.
With incremental deployment, Resource Manager:
- leaves unchanged resources that exist in the resource group but are not specified in the template
- adds resources that are specified in the template but do not exist in the resource group
- does not reprovision resources that exist in the resource group in the same condition defined in the template
- reprovisions existing resources that have updated settings in the template

With complete deployment, Resource Manager:
- deletes resources that exist in the resource group but are not specified in the template
- adds resources that are specified in the template but do not exist in the resource group
- does not reprovision resources that exist in the resource group in the same condition defined in the template
- reprovisions existing resources that have updated settings in the template
Whether to choose incremental or complete mode depends on whether you have resources in use. If the DevOps requirement is to always keep resources in sync with what is defined in the JSON template, then complete mode should be used. The biggest benefit of using templates and source code for deploying resources is preventing configuration drift, so it is beneficial to use complete mode.
As for the parameters: if you specify them in a parameters file, you don't have to specify them again.
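For example, a parameters file (file name and values here are illustrative) captures the answers once, so repeated incremental deployments never re-prompt for the VNET name:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "virtualNetworkName": { "value": "my-vnet" },
    "addressPrefix": { "value": "10.0.0.0/16" }
  }
}
```

Pass the file via the deployment command's template-parameters option, and every subsequent deployment reuses the same values.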
A new template can be deployed in incremental mode which would add new resources to the existing resource group. Define only the new resources in the template, existing resources would not be altered.
From PowerShell, use the following cmdlet:
New-AzureRmResourceGroupDeployment -ResourceGroupName "YourResourceGroupName" -TemplateFile "path\to\template.json" -Mode Incremental -Force
My rule of thumb is: for things that I want to tear down, or things I want to replicate across subscriptions, I use ARM templates.
For example, when we want things in test environments, I just ARM it up and build on the scripts as developers request things ("Hey, I need a cache", "Oh, by the way, I need to start using a Service Bus"). Using incremental mode we can just push it out to Dev; then, as we migrate up through environments, you just deploy to a different subscription in Azure and it should have everything ready to go.
Also, we've started provisioning our own Cloud Load Test agents in a VMSS: a simple ARM template that's called by a build to scale up to x machines, and when done we just trash the resource group. It's repeatable and reliable. Sure, you could script it up, but TFS has a task to deploy these things (also with schedules)...
One of the beautiful things I've come across is Key Vault. When you ARM everything up, you can take the connection strings/keys from your service buses, storage accounts and whatever else and put them straight into Key Vault, so you never need to worry about them. And if you want to regenerate anything (say a dev wants to change the name of a cache, or accidentally posted a key to GitHub), just redeploy (often I'll just trash the whole resource group) and it updates the vault for you.
I'm looking for any examples of configuring Azure Batch using an Azure Resource Manager template. Googling yielded nothing, and the Azure QuickStart Templates do not yet have any Batch examples, however this SO question implies that it has been done.
What I would like to achieve is, via an ARM template, to create a Batch account and configure a pool (with a minimum number of compute nodes, auto expanding to a maximum number of nodes), and then set the resulting pool ID into my API server's appsettings resource.
I'm about to start reverse engineering it using the Azure Resource Explorer, but any pre-existing examples would be very much appreciated.
Update
So far I've managed to create the resource:
{
  "name": "[variables('batchAccountName')]",
  "type": "Microsoft.Batch/batchAccounts",
  "location": "[resourceGroup().location]",
  "apiVersion": "2015-07-01",
  "dependsOn": [ ],
  "tags": {
    "displayName": "BatchInstance"
  }
}
And to configure the account settings in the appsettings of my API server:
"BATCH_ACCOUNT_URL": "[concat('https://', reference(concat('Microsoft.Batch/batchAccounts/', variables('batchAccountName'))).accountEndpoint)]",
"BATCH_ACCOUNT_KEY": "[listKeys(resourceId('Microsoft.Batch/batchAccounts', variables('batchAccountName')), providers('Microsoft.Batch', 'batchAccounts').apiVersions[0]).primary]",
"BATCH_ACCOUNT_NAME": "[variables('batchAccountName')]"
I still haven't managed to create a pool and fetch the pool ID via ARM, mainly because the pool I created using Batch Explorer never showed up in either the Azure Portal or the Azure Resource Explorer. I'll update this if I find the solution.
Unfortunately we don't have a way today to create a pool using ARM templates. The Azure Portal should show the pools created under your account (even if you didn't create them using ARM).
This is now supported; please see the reference docs here: https://learn.microsoft.com/azure/templates/microsoft.batch/2019-04-01/batchaccounts/pools
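Based on the 2019-04-01 schema linked above, a pool resource could be sketched as follows (the pool name variable, VM size, image, node-agent SKU and node count are illustrative placeholders, not a verified production configuration):

```json
{
  "type": "Microsoft.Batch/batchAccounts/pools",
  "apiVersion": "2019-04-01",
  "name": "[concat(variables('batchAccountName'), '/', variables('poolName'))]",
  "dependsOn": [
    "[resourceId('Microsoft.Batch/batchAccounts', variables('batchAccountName'))]"
  ],
  "properties": {
    "vmSize": "STANDARD_D2_V2",
    "deploymentConfiguration": {
      "virtualMachineConfiguration": {
        "imageReference": {
          "publisher": "Canonical",
          "offer": "UbuntuServer",
          "sku": "18.04-LTS",
          "version": "latest"
        },
        "nodeAgentSkuId": "batch.node.ubuntu 18.04"
      }
    },
    "scaleSettings": {
      "fixedScale": {
        "targetDedicatedNodes": 2
      }
    }
  }
}
```

The schema also supports an autoScale block with an evaluation formula in place of fixedScale, which covers the original requirement of expanding from a minimum to a maximum number of nodes.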