Has provisioning storage accounts through resource manager degraded in performance? - azure

I'm publishing an Azure Resource Manager template using the AzureRm PowerShell cmdlets, and everything seems to be working pretty well, except that provisioning a resource of type "Microsoft.Storage/storageAccounts" is taking over 10 minutes to complete. Is this in line with expectations? I don't recall it taking this long previously.
I'm deploying to East US, with Standard_LRS storage type.
{
  "name": "[parameters('storageName')]",
  "type": "Microsoft.Storage/storageAccounts",
  "location": "[parameters('deployLocation')]",
  "apiVersion": "2015-06-15",
  "dependsOn": [ ],
  "tags": {
    "displayName": "[parameters('storageName')]"
  },
  "properties": {
    "accountType": "[parameters('storageType')]"
  }
}

It is an interesting question, and I will probably spend a little time gathering metrics later!
However, I suspect you are correct, and there are reasons why that would be the case.
When you create a Storage Account in PowerShell, whether through Service or Resource Management, you submit a job to create an account and that is all that needs to occur.
When you deploy a template, there are a number of steps that need to be completed, such as:
Template validation
Job ordering / resolving dependencies (even if there is only a single step, this is still part of the pipeline)
Comparison with existing infrastructure (because templates are idempotent, there needs to be a check of what currently exists)
Creating a deployment job
The actual deployment
Each step in the pipeline is (very likely) performed through a queue, so even a few seconds for each step to dequeue will add up.
If you enable verbose / debug logging, you will see a lot of this going on (especially with larger templates).
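As a rough way to compare the two code paths and surface those pipeline steps, something like the following can be run from PowerShell. The resource names are placeholders, and depending on your AzureRM.Storage version the storage size parameter is -SkuName or -Type:

# Time a plain storage-account creation: a single job, no template pipeline.
Measure-Command {
    New-AzureRmStorageAccount -ResourceGroupName "my-rg" -Name "mystorageacct01" `
        -Location "East US" -SkuName "Standard_LRS"
}

# Time the same account provisioned via a template, with deployment debug logging
# enabled so the validation / dependency / provisioning steps show up in the output.
Measure-Command {
    New-AzureRmResourceGroupDeployment -ResourceGroupName "my-rg" `
        -TemplateFile ".\azuredeploy.json" `
        -Verbose -DeploymentDebugLogLevel All
}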

Related

How do I create an Azure Functions Consumption Plan on Linux with an ARM Template?

Azure Functions Consumption Plan running on Linux is now GA.
How can I provision such application with an ARM Template?
Basically, I want this template but on Linux.
If you want to create a new Linux Consumption Plan, set the reserved property to true for Microsoft.Web/serverfarms (see FAQ):
{
  "type": "Microsoft.Web/serverfarms",
  "apiVersion": "2016-09-01",
  "name": "[parameters('serverfarms_NorthEuropeLinuxDynamicPlan_name')]",
  "location": "North Europe",
  "sku": {
    "name": "Y1",
    "tier": "Dynamic",
    "size": "Y1",
    "family": "Y",
    "capacity": 0
  },
  "kind": "functionapp",
  "properties": {
    "name": "[parameters('serverfarms_NorthEuropeLinuxDynamicPlan_name')]",
    "reserved": true
  }
},
If you instead want to deploy a Function App to a built-in Linux Consumption Plan, set the kind property for Microsoft.Web/sites:
"kind": "functionapp,linux"
See this link:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-infrastructure-as-code#create-a-consumption-plan
In particular, this line:
The Consumption plan cannot be explicitly defined for Linux. It will be created automatically.
One simple way I found through trial and error is to download the ARM template that Azure generates for this purpose, modify it with the desired naming convention for the consumption plan, and then deploy that ARM template to create the function app.
Steps are as below:
Downloading ARM template from Azure:
Try to create a new function app with a Linux consumption plan. At this point in time the portal GUI does not allow you to choose the name for the consumption plan.
Complete the other steps (monitoring, tags, etc.) and then go to the "Review and Create" step. Let the validation pass here. Once this step is done, do not click the "Create" button. At the bottom right you can now see the "Download a template for automation" link. Click this link and download the template. Modify the parameters with the required values and change the hosting plan name to the name you want.
Modify the parameters file and deploy to create the function app:
In Azure, go to the "Custom Deployment" blade, upload both the template and the parameters file, and deploy the ARM template. It will create the function app with a Linux consumption plan, using the desired naming convention for the (dynamic) consumption plan.
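If you would rather deploy the downloaded template from PowerShell than through the Custom Deployment blade, a minimal sketch (the file names and resource group are placeholders) looks like this:

# Deploy the exported template plus its parameters file into an existing resource group.
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "my-function-rg" `
    -TemplateFile ".\template.json" `
    -TemplateParameterFile ".\parameters.json" `
    -Mode Incremental -Verbose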

How to deploy an Azure ARM template without changing an existing SKU

I am looking for a way to create, but not update, the SKU of a PaaS SQL server when it is deployed by an ARM template, while still deploying all other changes in the template.
I have an ARM template representing my current infrastructure stack, which is deployed as part of our CI.
One of the things specified in the file is the size scale of our PaaS database, eg:
"sku": {
"name": "BC_Gen4",
"tier": "BusinessCritical",
"family": "Gen4",
"capacity": 2
}
Because of a temporarily high workload, I have scaled the number of CPUs up to 4 (or even 8). Is there any way I can deploy the template without it forcibly down-scaling my database back to the specified SKU?
resources.azure.com shows that there are other attributes that relate to scaling.
Ideally this would be set to something like 'if this resource doesn't exist then set it to X, otherwise use the existing currentServiceObjectiveName/currentSku'
"kind": "v12.0,user,vcore",
"properties": {
"currentServiceObjectiveName": "BC_Gen4_2",
"requestedServiceObjectiveName": "BC_Gen4_2",
"currentSku": {
"name": "BC_Gen4",
"tier": "BusinessCritical",
"family": "Gen4",
"capacity": 2
}
}
At the moment our infrastructure is deployed via VSTS Azure Resource Group Deployment V2.* in 'create or update resource group, complete' mode.
This is not possible in ARM templates; you have to use an external source to make that decision. You cannot really pull existing state into an ARM template, so you probably need to read the current SKU externally and pass it to the template as a parameter, along the lines of the sketch below.
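A minimal sketch of that approach, assuming the template exposes a parameter that feeds the SKU (databaseServiceObjective here is a hypothetical name) and using placeholder resource names:

# Look up the database; if it exists, keep the service objective it is currently running at.
$db = Get-AzureRmSqlDatabase -ResourceGroupName "my-rg" -ServerName "my-sqlserver" `
    -DatabaseName "my-db" -ErrorAction SilentlyContinue

# Fall back to the baseline from source control when the database does not exist yet.
$objective = if ($db) { $db.CurrentServiceObjectiveName } else { "BC_Gen4_2" }

New-AzureRmResourceGroupDeployment -ResourceGroupName "my-rg" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterObject @{ databaseServiceObjective = $objective } `
    -Mode Complete

The deployment itself stays the same; only the parameter value is decided outside the template.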

Azure ARM Templates and REST API

I'm trying to learn Azure Resource Templates and am trying to understand the workflow behind when to use them and when to use the REST API.
My sense is that creating a Virtual Network and subnets in Azure is a fairly uncommon occurrence; once you get that set up the way you want, you don't modify it too frequently; you deploy things into that structure.
So with regard to an ARM Template let's say I have a template with resources for VNET and Subnet. To take the example from https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-template-walkthrough#virtual-network-and-subnet I might have:
{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Network/virtualNetworks",
  "name": "[parameters('vnetName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [
        "10.0.0.0/16"
      ]
    },
    "subnets": [
      {
        "name": "[variables('subnetName')]",
        "properties": {
          "addressPrefix": "10.0.0.0/24"
        }
      }
    ]
  }
}
which I deploy to a Resource Group. Let's say I then add a Load Balancer and redeploy the template. In this case the user gets asked to supply the value for the vnetName parameter again and of course cannot supply the same value, so we would end up with another VNET, which is not what we want.
So is the workflow that you define your ARM Template (VNET, LBs, subnets, NICs etc.) in one go and then deploy it? Then when you want to deploy VMs, scale sets etc. you use the REST API to deploy them to the Resource Group / VNET subnet? Or is there a way to incrementally build up an ARM Template and deploy it numerous times, such that if a VNET already exists (for example) the user is not prompted to supply details for another one?
I've read around and seen incremental mode (the default unless complete is specified), but I'm not sure if this is relevant and, if it is, how to use it.
Many thanks for any help!
Update
OK so I can now use azure group deployment create -f azuredeploy.json -g ARM-Template-Tests -m Incremental and have modified the VNET resource in my template from
{
  "apiVersion": "2016-09-01",
  "type": "Microsoft.Network/virtualNetworks",
  "name": "[variables('virtualNetworkName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [
        "[variables('addressPrefix')]"
      ]
    },
    "subnets": [
      {
        "name": "[variables('subnetName')]",
        "properties": {
          "addressPrefix": "[variables('subnetPrefix')]"
        }
      }
    ]
  }
},
to
{
  "apiVersion": "2015-05-01-preview",
  "type": "Microsoft.Network/virtualNetworks",
  "name": "[parameters('virtualNetworkName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [
        "[parameters('addressPrefix')]"
      ]
    },
    "subnets": [
      {
        "name": "[parameters('subnet1Name')]",
        "properties": {
          "addressPrefix": "[parameters('subnet1Prefix')]"
        }
      },
      {
        "name": "[parameters('gatewaySubnet')]",
        "properties": {
          "addressPrefix": "[parameters('gatewaySubnetPrefix')]"
        }
      }
    ]
  }
},
but the subnets don't change. Should they, using azure group deployment create -f azuredeploy.json -g ARM-Template-Tests -m Incremental?
I am going to piggyback on this Azure documentation, referencing the appropriate section below:
Incremental and complete deployments
When deploying your resources,
you specify that the deployment is either an incremental update or a
complete update. By default, Resource Manager handles deployments as
incremental updates to the resource group.
With incremental deployment, Resource Manager:
leaves unchanged resources that exist in the resource group but are not specified in the template
adds resources that are specified in the template but do not exist in the resource group
does not reprovision resources that exist in the resource group in the same condition defined in the template
reprovisions existing resources that have updated settings in the template
With complete deployment, Resource Manager:
deletes resources that exist in the resource group but are not specified in the template
adds resources that are specified in the template but do not exist in the resource group
does not reprovision resources that exist in the resource group in the same condition defined in the template
reprovisions existing resources that have updated settings in the template
Whether to choose incremental or complete update mode depends on whether the resource group contains resources in use that are not in the template. If the DevOps requirement is to always have resources in sync with what is defined in the JSON template, then complete update mode should be used. The biggest benefit of using templates and source code for deploying resources is preventing configuration drift, and complete update mode supports that.
As for specifying the parameters, if you provide them in a parameters file you don't have to specify them again.
A new template can be deployed in incremental mode, which would add new resources to the existing resource group. Define only the new resources in the template; existing resources will not be altered.
From powershell use the following cmdlet
New-AzureRmResourceGroupDeployment -ResourceGroupName "YourResourceGroupName" -TemplateFile "path\to\template.json" -Mode Incremental -Force
My rule of thumb is that for things I want to tear down, or for things I want to replicate across subscriptions, I use ARM templates.
For example, when we want things in test environments, I just ARM it up and build on the scripts as developers request things ("Hey, I need a cache", "Oh, by the way, I need to start using a Service Bus"). Using incremental mode we can just push it out to Dev, and as we migrate up through environments you just deploy to a different subscription in Azure and it should have everything ready to go.
Also, we've started provisioning our own Cloud Load Test agents in a VMSS: a simple ARM template that's called by a build to scale up to x number of machines, and when we're done we just trash the Resource Group. It's repeatable and reliable; sure, you can script it up, but TFS has a task to deploy these things (also with schedules).
One of the beautiful things I've come across is Key Vault. When you ARM it up and pull the values from your service buses, storage accounts, or whatever, you can take the connection strings/keys and put them straight into Key Vault, so you never need to worry about them. And if you want to regenerate anything (say a dev wants to change the name of a cache, or accidentally posted a key to GitHub), just redeploy (often I'll just trash the whole Resource Group) and it updates the vault for you.
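As a rough illustration of that pattern from the command line (all names are placeholders; inside a template you would typically wire the same value up with a listKeys() expression on a Key Vault secret resource):

# Grab a storage key and store it as a Key Vault secret.
# Newer AzureRM versions return a list of keys with a .Value property; older ones expose .Key1 instead.
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName "my-rg" -Name "mystorageacct01")[0].Value

Set-AzureKeyVaultSecret -VaultName "my-keyvault" -Name "StorageAccountKey" `
    -SecretValue (ConvertTo-SecureString $key -AsPlainText -Force)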

Why can't I configure Azure diagnostics to use Azure Table Storage via the new Azure portal?

I am developing a web api which will be hosted in Azure. I would like to use Azure diagnostics to log errors to Azure table storage.
In the Classic portal, I can configure the logs to go to Azure table storage.
Classic Portal Diagnostic Settings
However in the new Azure portal, the only storage option I have is to use Blob storage:
New Azure Portal Settings
It seems that if I were to make use of a web role, I could configure the data store for diagnostics, but as I am developing a Web API, I don't want to create a separate web role for every API just so that I can log to an Azure table.
Is there a way to programmatically configure Azure diagnostics to propagate log messages to a specific data store without using a web role? Is there any reason why the new Azure portal only has diagnostic settings for blob storage and not table storage?
I can currently work around the problem by using the classic portal, but I am worried that table storage for diagnostics will eventually become deprecated since it hasn't been included in the diagnostic settings for the new portal.
(I'll do some necromancy on this question, as it was the most relevant Stack Overflow question I found while searching for a solution, and it is no longer possible to do this through the classic portal.)
Disclaimer: Microsoft has seemingly removed support for logging to Table in the Azure Portal, so I don't know if this is deprecated or will soon be deprecated, but I have a solution that will work now (31.03.2017):
There are specific settings determining logging, I first found out information on this from an issue in the Azure Powershell github: https://github.com/Azure/azure-powershell/issues/317
The specific settings we need are (from github):
AzureTableTraceEnabled = True, & AppSettings has:
DIAGNOSTICS_AZURETABLESASURL
Using the excellent resource explorer (https://resources.azure.com) under (GUI navigation):
/subscriptions/{subscriptionName}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{siteName}/config/logs
I was able to find the Setting AzureTableTraceEnabled in the Properties.
The property AzureTableTraceEnabled has Level and sasURL. In my experience updating these two values (Level="Verbose",sasUrl="someSASurl") will work, as updating the sasURL sets DIAGNOSTICS_AZURETABLESASURL in appsettings.
How do we change this? I did it in PowerShell. I first tried the cmdlet Get-AzureRmWebApp, but could not find what I wanted. The old Get-AzureWebSite does display AzureTableTraceEnabled, but I could not get it to update (perhaps someone with more PowerShell/Azure experience can offer input on how to use the ASM cmdlets to do this).
The solution that worked for me was setting the property through the Set-AzureRmResource command, with the following settings:
Set-AzureRmResource -PropertyObject $PropertiesObject -ResourceGroupName "$ResourceGroupName" -ResourceType Microsoft.Web/sites/config -ResourceName "$ResourceName/logs" -ApiVersion 2015-08-01 -Force
Where the $PropertiesObject looks like this:
$PropertiesObject = @{applicationLogs=@{azureTableStorage=@{level="$Level";sasUrl="$SASUrl"}}}
The Level corresponds to "Error", "Warning", "Information", "Verbose" and "Off".
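To confirm the change took effect, the same resource can be read back; a small sketch reusing the variables from the command above:

# Read back the logs config and inspect the application logs section.
$logs = Get-AzureRmResource -ResourceGroupName "$ResourceGroupName" `
    -ResourceType Microsoft.Web/sites/config -ResourceName "$ResourceName/logs" `
    -ApiVersion 2015-08-01
$logs.Properties.applicationLogs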
It is also possible to do this in the ARM template (the important bits are in the properties of the logs resource nested inside the site):
{
  "apiVersion": "2015-08-01",
  "name": "[variables('webSiteName')]",
  "type": "Microsoft.Web/sites",
  "location": "[resourceGroup().location]",
  "tags": {
    "displayName": "WebApp"
  },
  "dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms/', variables('hostingPlanName'))]"
  ],
  "properties": {
    "name": "[variables('webSiteName')]",
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
  },
  "resources": [
    {
      "name": "logs",
      "type": "config",
      "apiVersion": "2015-08-01",
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites/', variables('webSiteName'))]"
      ],
      "tags": {
        "displayName": "LogSettings"
      },
      "properties": {
        "azureTableStorage": {
          "level": "Verbose",
          "sasUrl": "SASURL"
        }
      }
    }
  ]
}
The issue with doing it in ARM is that I've yet to find a way to generate the correct SAS. It is possible to fetch out the Azure Storage Account keys (from: ARM - How can I get the access key from a storage account to use in AppSettings later in the template?):
"properties": {
"type": "AzureStorage",
"typeProperties": {
"connectionString": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value)]"
}
}
There are also some clever ways of generating them using linked templates (from: http://wp.sjkp.dk/service-bus-arm-templates/).
The current solution I went for (time constraints) was a custom PowerShell script that looks something like this:
...
$SASUrl = New-AzureStorageTableSASToken -Name $LogTable -Permission $Permissions -Context $StorageContext -StartTime $StartTime -ExpiryTime $ExpiryTime -FullUri
$PropertiesObject = @{applicationLogs=@{azureTableStorage=@{level="$Level";sasUrl="$SASUrl"}}}
Set-AzureRmResource -PropertyObject $PropertiesObject -ResourceGroupName "$ResourceGroupName" -ResourceType Microsoft.Web/sites/config -ResourceName "$ResourceName/logs" -ApiVersion 2015-08-01 -Force
...
This is quite an ugly solution, as it is something extra you need to maintain in addition to the ARM template - but it is easy, fast and it works while we wait for updates to the ARM Templates (or for someone cleverer than I to come and enlighten us).
We don't typically recommend using Tables for log data: it can result in the append-only pattern, which at scale doesn't work effectively for Table Storage. See the log data anti-pattern in the Table Design Guide. Often we see that even though people think of log data as structured, the way they typically query it makes Blobs more efficient.
Excerpt from the Design Guide:
Log data anti-pattern
Typically, you should use the Blob service instead of the Table service to store log data.
Context and problem
A common use case for log data is to retrieve a selection of log entries for a specific date/time range: for example, you want to find all the error and critical messages that your application logged between 15:04 and 15:06 on a specific date. You do not want to use the date and time of the log message to determine the partition you save log entities to: that results in a hot partition because at any given time, all the log entities will share the same PartitionKey value (see the section Prepend/append anti-pattern).
...
Solution
The previous section highlighted the problem of trying to use the Table service to store log entries and suggested two, unsatisfactory, designs. One solution led to a hot partition with the risk of poor performance writing log messages; the other solution resulted in poor query performance because of the requirement to scan every partition in the table to retrieve log messages for a specific time span. Blob storage offers a better solution for this type of scenario and this is how Azure Storage Analytics stores the log data it collects.
This section outlines how Storage Analytics stores log data in blob storage as an illustration of this approach to storing data that you typically query by range.
Storage Analytics stores log messages in a delimited format in multiple blobs. The delimited format makes it easy for a client application to parse the data in the log message.
Storage Analytics uses a naming convention for blobs that enables you to locate the blob (or blobs) that contain the log messages for which you are searching. For example, a blob named "queue/2014/07/31/1800/000001.log" contains log messages that relate to the queue service for the hour starting at 18:00 on 31 July 2014. The "000001" indicates that this is the first log file for this period. Storage Analytics also records the timestamps of the first and last log messages stored in the file as part of the blob’s metadata. The API for blob storage enables you to locate blobs in a container based on a name prefix: to locate all the blobs that contain queue log data for the hour starting at 18:00, you can use the prefix "queue/2014/07/31/1800."
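As a small, hedged sketch of that prefix-based lookup in PowerShell (Storage Analytics writes its logs to the $logs container; the account name and key are placeholders):

# List the queue-service log blobs for the hour starting at 18:00 on 31 July 2014.
$ctx = New-AzureStorageContext -StorageAccountName "mystorageacct01" -StorageAccountKey "<key>"
Get-AzureStorageBlob -Container '$logs' -Prefix "queue/2014/07/31/1800" -Context $ctx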
Storage Analytics buffers log messages internally and then periodically updates the appropriate blob or creates a new one with the latest batch of log entries. This reduces the number of writes it must perform to the blob service.
If you are implementing a similar solution in your own application, you must consider how to manage the trade-off between reliability (writing every log entry to blob storage as it happens) and cost and scalability (buffering updates in your application and writing them to blob storage in batches).
Issues and considerations
Consider the following points when deciding how to store log data:
If you create a table design that avoids potential hot partitions, you may find that you cannot access your log data efficiently.
To process log data, a client often needs to load many records.
Although log data is often structured, blob storage may be a better solution.

Configuring Azure Batch using an Azure Resource Manager template

I'm looking for any examples of configuring Azure Batch using an Azure Resource Manager template. Googling yielded nothing, and the Azure QuickStart Templates do not yet have any Batch examples; however, this SO question implies that it has been done.
What I would like to achieve is, via an ARM template, to create a Batch account and configure a pool (with a minimum number of compute nodes, auto expanding to a maximum number of nodes), and then set the resulting pool ID into my API server's appsettings resource.
I'm about to start reverse engineering it using the Azure Resource Explorer, but any pre-existing examples would be very much appreciated.
Update
So far I've managed to create the resource:
{
  "name": "[variables('batchAccountName')]",
  "type": "Microsoft.Batch/batchAccounts",
  "location": "[resourceGroup().location]",
  "apiVersion": "2015-07-01",
  "dependsOn": [ ],
  "tags": {
    "displayName": "BatchInstance"
  }
}
And to configure the account settings in the appsettings of my API server:
"BATCH_ACCOUNT_URL": "[concat('https://', reference(concat('Microsoft.Batch/batchAccounts/', variables('batchAccountName'))).accountEndpoint)]",
"BATCH_ACCOUNT_KEY": "[listKeys(resourceId('Microsoft.Batch/batchAccounts', variables('batchAccountName')), providers('Microsoft.Batch', 'batchAccounts').apiVersions[0]).primary]",
"BATCH_ACCOUNT_NAME": "[variables('batchAccountName')]"
I still haven't managed to create a pool and fetch the pool ID via ARM, mainly because the pool I created using Batch Explorer never showed up in either the Azure Portal or the Azure Resource Explorer. I'll update this if I find the solution.
Unfortunately we don't have a way today to create a pool using ARM templates. The Azure Portal should show the pools created under your account (even if you didn't create them using ARM).
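Until template support reaches you (see the answer below for the current state), a hedged workaround is to create the pool imperatively with the Azure Batch PowerShell cmdlets. The account name, pool name, VM size and exact parameter spellings below are illustrative and vary a little between module versions:

# Get an authenticated context for the Batch account.
$context = Get-AzureRmBatchAccountKeys -AccountName "mybatchaccount" -ResourceGroupName "my-rg"

# Cloud-service configuration: OS family 4 (Windows Server 2012 R2), latest OS version.
$osConfig = New-Object Microsoft.Azure.Commands.Batch.Models.PSCloudServiceConfiguration -ArgumentList @("4", "*")

# Placeholder autoscale formula: keep the pool between 1 and 10 dedicated nodes.
$formula = '$TargetDedicated = min(10, max(1, $CurrentDedicated));'

New-AzureBatchPool -Id "my-pool" -VirtualMachineSize "Small" `
    -CloudServiceConfiguration $osConfig `
    -AutoScaleFormula $formula `
    -BatchContext $context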
This is now supported; please see the reference docs here: https://learn.microsoft.com/azure/templates/microsoft.batch/2019-04-01/batchaccounts/pools
