Throttling/errors when attaching data disks to VM in Azure?

I am trying to performance test each of the different size tiers (A, D, DS, F, etc.) of virtual machines in Azure DevTest Labs. In doing so, I need to attach the maximum number of data disks that each size will accept; however, I keep getting two errors when trying to attach the disks.
Failed to add data disk to virtual machine, the request is being throttled.
Number of write requests for subscription '(subscription number)' exceeded the limit of '1200' for time interval '01:00:00'. Please try again after 'X' minutes. (The time has been as low as 3 minutes and as high as 30 minutes.)
Currently I will attach a disk, wait 10 minutes, then try to attach another disk with about a 50% success rate.
Is there any way to avoid these errors, like a setting change to the subscription, or am I just trying to attach the disks too quickly?

Is there any way to avoid these errors, like a setting change to the subscription, or am I just trying to attach the disks too quickly?
In brief, there is no way to avoid this error.
There are several limits and restrictions on Azure, and your issue is caused by one of them.
The default limit for Resource Manager API writes is 1200 per hour.
Normally, if you want to raise a limit above the Default Limit, you can open an online customer support request at no charge. However, limits cannot be raised above the Maximum Limit value.
Unfortunately, the Maximum Limit for Resource Manager API writes is the same as the Default Limit, which is 1200 per hour. So, to my knowledge, there is no way to raise this limit.
For detailed information about the limits on Azure, please refer to the link below:
Azure subscription and service limits, quotas, and constraints
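That said, you can make the write requests you do issue go further. Below is a minimal sketch, assuming the Az PowerShell module and that you can manage the underlying VM directly rather than through the DevTest Labs blade; the resource group, VM name, disk count, and disk size are placeholders. It queues every data disk on the local VM object and commits them with a single Update-AzVM call, which costs one write operation instead of one per disk, and it backs off and retries when a request fails or is throttled.

```powershell
# Minimal sketch, assuming the Az PowerShell module and direct access to the
# underlying VM (resource group, VM name, disk count/size are placeholders).

$rg     = "my-devtest-rg"     # hypothetical resource group
$vmName = "perf-test-vm"      # hypothetical VM name

$vm = Get-AzVM -ResourceGroupName $rg -Name $vmName

# Queue several empty managed data disks on the local VM object first;
# nothing is sent to Azure yet.
for ($lun = 0; $lun -lt 8; $lun++) {
    $vm = Add-AzVMDataDisk -VM $vm -Name "$vmName-data-$lun" -Lun $lun `
            -CreateOption Empty -DiskSizeInGB 128 -Caching ReadWrite
}

# Commit every disk in a single write request, backing off if throttled.
$maxAttempts = 5
for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    try {
        Update-AzVM -ResourceGroupName $rg -VM $vm -ErrorAction Stop
        break
    }
    catch {
        if ($attempt -eq $maxAttempts) { throw }
        Write-Warning "Request failed or throttled; retrying in $(60 * $attempt) seconds..."
        Start-Sleep -Seconds (60 * $attempt)
    }
}
```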

Related

How to log/query historic Azure Quotas?

We are using Azure Databricks to spin up multiple job clusters during our production runs, and we need to run more scripts at the same time; however, the organization has defined quota limits for each family of compute, and we have thresholds we do not want to cross.
We can check the current status of the quota through the Azure Portal > Quotas > Compute. However, it only shows the current quota status, and there is no option to check the logs of historic quota utilization.
After a given run, we want to check the quota usage during the run and map quota usage peaks to script runtimes. Right now, we have to keep checking the quota page manually and logging the values somewhere like an Excel sheet to get a rough idea.
Is there a better way to log and query the Quota Usage history?
You might be looking for the VM usage quota, which you can get, for example, from
Get-AzVMUsage
It provides a list of the current usage for the different compute families in a region.
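To turn that into a history you can query, one option is to snapshot the output on a schedule. A minimal sketch, assuming the Az PowerShell module (the region and log path are placeholders):

```powershell
# Minimal sketch, assuming the Az PowerShell module; appends a timestamped
# snapshot of the compute quota usage for one region to a CSV file.

$location = "eastus"                 # hypothetical region
$logPath  = "C:\logs\vm-quota.csv"   # hypothetical log file

Get-AzVMUsage -Location $location |
    Select-Object @{n = 'TimestampUtc'; e = { (Get-Date).ToUniversalTime() } },
                  @{n = 'Quota';        e = { $_.Name.LocalizedValue } },
                  CurrentValue, Limit |
    Export-Csv -Path $logPath -Append -NoTypeInformation
```

Run it on a timer (a scheduled task, an Azure Automation runbook, or a timer-triggered function) for the duration of the production run, and you can then join the logged peaks against your script runtimes instead of watching the portal.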

HDInsight Cores Quota increase

I have two different HDInsight deployments that I need to deploy. One of the HDInsight deployments uses the D12_v2 VM type and the second HDInsight deployment uses the DS3_v2 VM type.
Although both the VM types use the same number of cores, would the deployments work if I just request a quota increase of the Dv2-series type? Do note that, at a time, only a single deployment will exist.
Although both the VM types use the same number of cores, would the deployments work if I just request a quota increase of the Dv2-series type?
No, it won't work that way, as they are different VM series, i.e. Dv2 and DSv2. So even if they use the same number of cores, the deployment will fail in that region if you don't have sufficient quota in your subscription for both VM series, since quota is tracked per series as well as against the total vCPUs available for that region.
You can refer to this Microsoft document for the VM series specifications.
So, as per your requirement, you have to create a quota request for both series in the particular region.
Reference for Quota limits of VM:
Request an increase in vCPU quota limits per Azure VM series - Azure supportability | Microsoft Docs
Reference for Quota limits of HDInsight:
CPU Core quota increase request - Azure HDInsight | Microsoft Docs
You should include both VMs in your request.
Please refer to the following document, which provides information about requesting a quota increase for HDInsight. Be sure to ask for HDInsight quota, not regular compute VM quota. In the text box entry, state which VMs you need and they will process the request accordingly.
Requesting quota increases for Azure HDInsight
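Before filing the request, it can also help to check how much headroom each family currently has in the target region. A minimal sketch, assuming the Az PowerShell module; the family quota names shown are the usual ones for Dv2 and DSv2, but dump the full Get-AzVMUsage output if yours differ.

```powershell
# Minimal sketch, assuming the Az PowerShell module; shows the remaining vCPU
# headroom for the Dv2 and DSv2 families in the target region.  The quota
# names below are the usual ones; list the full output if yours differ.

$location = "eastus"    # hypothetical region

Get-AzVMUsage -Location $location |
    Where-Object { $_.Name.Value -in @('standardDv2Family', 'standardDSv2Family') } |
    Select-Object @{n = 'Family';    e = { $_.Name.LocalizedValue } },
                  CurrentValue, Limit,
                  @{n = 'Remaining'; e = { $_.Limit - $_.CurrentValue } }
```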

Error when creating DevOps project on Azure: Cores quota have ran out in this region, kindly choose a different region or VM

When I try to create a DevOps project as described at https://learn.microsoft.com/en-us/azure/devops-project/azure-devops-project-aks, I get the error message below, which apparently hasn't been grammar-checked! I tried different regions and lowered the number of nodes to 1, but I still got the error.
Cores quota have ran out in this region, kindly choose a different region or VM which requires lesser cores.
I think you are trying to deploy your project to a VM. The error should be caused by your cores quota limit. You first need to go to Subscription --> Usage + quotas in the Azure Portal to check the limit for each of your regions.
In fact, vCPU quotas for virtual machines and virtual machine scale sets are enforced at two tiers for each subscription, in each region. The first tier is the Total Regional vCPUs limit (across all VM series), and the second tier is the per-VM-series vCPUs limit. If you exceed either of these, you will not be allowed to deploy the VM.
On the Usage + quotas page, you can search for the current quota of your chosen region using the quota and usage filters. The current usage and quota limit values are displayed at the end of each row.
If you need to request an increase to finish your deployment, just click the button in the upper right. There are detailed steps you can refer to here: Request Increase
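If you want to verify that a planned deployment clears both tiers before retrying, you can compare the required cores against both rows of the usage output. A minimal sketch, assuming the Az PowerShell module; the region, series quota name, and core count are placeholders, and 'cores' is the usual quota name behind the Total Regional vCPUs row.

```powershell
# Minimal sketch, assuming the Az PowerShell module; checks both quota tiers a
# deployment has to pass: the regional total and the per-series limit.
# The region, series quota name, and core count are placeholders.

$location      = "westeurope"
$seriesQuota   = "standardDv2Family"   # quota name of the VM series being deployed
$coresRequired = 4                     # vCPUs the DevOps project would create

$usage = Get-AzVMUsage -Location $location

foreach ($name in @('cores', $seriesQuota)) {       # 'cores' = Total Regional vCPUs
    $row  = $usage | Where-Object { $_.Name.Value -eq $name }
    $fits = ($row.Limit - $row.CurrentValue) -ge $coresRequired
    "{0}: {1} of {2} vCPUs used; deployment fits: {3}" -f $row.Name.LocalizedValue, $row.CurrentValue, $row.Limit, $fits
}
```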

What Azure Table IOPS limitations exist with the different Web App (or function) sizes?

I'm looking at high-scale Azure Table operations, and I am looking for documentation that describes the max IOPS to expect from Azure instance sizes for Azure Web Apps, Functions, etc.
The Web Roles and their corresponding limitations are well documented. For example, see this comment in the linked question:
so, we ran our tests on different instance sizes and yes that makes a huge difference. at medium we get around 1200 writes per second, on extra large we get around 7200. We are looking at building a distributed read/write controller possibly using the dcache as the middle man. – JTtheGeek Aug 9 '13 at 22:39
Question
What is the corresponding limitation for Web Apps (Logic, Mobile, etc.) and Azure Table IOPS?
According to the official documentation, the total request rate (assuming a 1 KB object size) per storage account is up to 20,000 IOPS, entities per second, or messages per second. We can also get the max IOPS limits for each VM size from the Azure VM sizes documentation. Web Apps are based on an App Service plan, and in the service plan we can choose different pricing tiers that correspond to different VM sizes, so that may be usable as a reference. For more Azure limits, please refer to Azure subscription and service limits, quotas, and constraints.

Azure instance limit and size limit

I have just set up an Azure cloud trial account for my application, which solves a complex problem.
The solution is working, but it is too slow.
The limit is 20 instances - why? How do I get more?
Also, the instances are "small"; how can I make them "big"?
If Azure is not scalable, what cloud is?
During "Free" trial period, you get limited resources so that you can evaluate whether Azure is right for your business needs. Also the limit of 20 instances is by default so that you don't accidently overrun the cost (there are a lot of users including myself who have been affected by this where we ran the stuff without fully understanding the cost implications).
You could contact support to get the limit increased but I doubt that they will do it for a trial account. My guess is that you would need to purchase a subscription first to get the quota increased.
