Initially we deployed a database elastic pool in Azure through an ARM template. The pool is in the Standard edition and has 50 eDTUs in total. The deployment is done from VSTS through release management.
At some point the databases grew in size, so we had to increase the eDTUs of the pool to get some additional space. We did this directly from the portal rather than deploying through the ARM template, increasing the eDTUs to 100.
The problem happens now when we want to redeploy the app through VSTS using the ARM template. We updated the value in the ARM template to reflect the one we configured in the portal (100), but we are getting the following error:
The DTUs or storage limit for the elastic pool 'pool-name' cannot be decreased since that would not provide sufficient storage space for its databases.
Our ARM template for the pool is like the following:
{
"comments": "The elastic pool that hosts all the databases",
"apiVersion": "2014-04-01-preview",
"type": "elasticPools",
"location": "[resourceGroup().location]",
"dependsOn": ["[concat('Microsoft.Sql/Servers/', variables('sqlServerName'))]"],
"name": "[variables('elasticPoolName')]",
"properties": {
"edition": "Standard",
"dtu": "100",
"databaseDtuMin": "0",
"databaseDtuMax": "10",
}
}
The message is descriptive, but we don't get why it tries to decrease the size even though we have provided an appropriate size through the eDTUs value.
We partially identified why the problem is happening.
As mentioned here, and especially in the documentation of the optional StorageMB argument, it is better not to provide this value and to let Azure calculate the size:
Specifies the storage limit, in megabytes, for the elastic pool. You cannot specify a value for this parameter for the Premium edition.
If you do not specify this parameter, this cmdlet calculates a value that depends on the value of the Dtu parameter. We recommend that you do not specify the StorageMB parameter.
As noted in the initial post, we didn't specify the StorageMB option in the ARM template, so it was set by Azure. What is not mentioned, and was not clear to us, is that this calculation happens only on the first deployment.
So when we deployed for the first time with 50 eDTUs, the size of the pool was set to 50 GB. When we deployed again and set the eDTUs to 100, the size remained at 50 GB, which is confusing. So the solution, and probably the safer way, is to always specify the StorageMB option for the pool, to have better visibility and control of what is happening.
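A sketch of what that looks like in our template, with the storage limit made explicit. The 102400 MB value assumes the included 100 GB for a Standard 100 eDTU pool is sufficient; it is illustrative and should match your actual storage needs:

```json
{
    "comments": "The elastic pool that hosts all the databases",
    "apiVersion": "2014-04-01-preview",
    "type": "elasticPools",
    "location": "[resourceGroup().location]",
    "dependsOn": ["[concat('Microsoft.Sql/Servers/', variables('sqlServerName'))]"],
    "name": "[variables('elasticPoolName')]",
    "properties": {
        "edition": "Standard",
        "dtu": "100",
        "databaseDtuMin": "0",
        "databaseDtuMax": "10",
        "storageMB": "102400"
    }
}
```

With storageMB pinned in the template, a later eDTU change no longer silently inherits the storage limit calculated on the first deployment.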
My guess is that the current size of the databases in the pool may be greater than the included data storage that comes with a Standard 100 eDTU pool, which is 100 GB. The amount of storage is a meter that can be adjusted separately, so you can have pools with fewer eDTUs but higher amounts of storage. The current maximum storage on a Standard 100 eDTU pool is 750 GB.
I wonder if someone went into the portal and also adjusted the max data storage size for the pool. If so, and the databases within the pool now exceed the 100 GB mark, then the error you are seeing makes sense: since the template doesn't specify the larger data storage amount, my guess is the system is defaulting it to the included 100 GB and attempting to apply that, which may be too small now.
I'd suggest checking the portal for the total storage currently being used by the databases in the pool. If it exceeds 100 GB, you'll want to update the template to also include the setting for the max size you are using.
If it doesn't exceed 100 GB, I'm not sure what it's complaining about.
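To sanity-check this before deploying, a small calculation helps. This is a sketch; the included-storage figures below (included GB equal to the eDTU count for Standard pools) are assumptions that should be verified against the current Azure documentation:

```python
# Included data storage for Standard elastic pools, keyed by eDTU.
# These figures are assumptions (included GB == eDTU for Standard tiers);
# verify them against current Azure documentation.
INCLUDED_STORAGE_GB = {50: 50, 100: 100, 200: 200, 300: 300}

def pool_fits(edtu, db_sizes_gb, max_size_gb=None):
    """Return True if the databases fit within the pool's storage limit.

    If max_size_gb is None, the included storage for the eDTU tier is used,
    mirroring what the deployment does when StorageMB is omitted.
    """
    limit = max_size_gb if max_size_gb is not None else INCLUDED_STORAGE_GB[edtu]
    return sum(db_sizes_gb) <= limit

# 120 GB of databases do not fit in the included 100 GB of a 100 eDTU pool,
# which is the situation that produces the "cannot be decreased" error.
print(pool_fits(100, [60, 60]))        # False
# With an explicit, larger max size the same databases fit.
print(pool_fits(100, [60, 60], 750))   # True
```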
I ran into the issue below when trying to run a simple PySpark script in Azure:
%%pyspark
df = spark.read.load('abfss://products@xyzabcstorageaccount.dfs.core.windows.net/userdata1.parquet', format='parquet')
display(df.limit(10))
InvalidHttpRequestToLivy: Your Spark job requested 24 vcores. However, the workspace has a 12 core limit. Try reducing the numbers of vcores requested or increasing your vcore quota. HTTP status code: 400. Trace ID: 3308513f-be78-408b-981b-cd6c81eea8b0.
I am new to Azure and using the free trial now. Do you know how to reduce the number of vcores requested?
Thanks a lot
I tried to reproduce the same in my environment and got the results below:
If I request 4 vCores, it works fine for me.
As per the above error, if you want 24 vCores, you need to raise a quota increase request via the Azure portal by creating a new support ticket. Please follow the steps below:
Step 1:
Go to Azure Synapse -> Create support ticket -> set the Issue type to "Service and subscription limits (quotas)" and the Quota type to "Azure Synapse".
Step 2:
Go to Additional details -> select Enter details and provide the Azure Synapse quota type, the resource, and the requested quota.
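If, instead of raising the quota, you want to reduce the vcores the session requests (useful on a free trial), a session-level Livy configuration at the top of the notebook may help. This is a sketch; the counts are illustrative and must add up to fit the 12-core workspace limit (here 4 driver cores + 1 executor x 4 cores = 8):

```
%%configure -f
{
    "driverCores": 4,
    "executorCores": 4,
    "numExecutors": 1
}
```

Run this cell before any Spark code so the session starts with the reduced request.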
I am trying to create an Azure Batch pool using the below specs.
Region: East US 2
VM Series: Basic A Series
When I create the Batch account, I am getting the below error.
Code: AccountVMSeriesCoreQuotaReached
The specified account has reached VM series core quota for basicAFamily
I created a support request and increased the quota to 100 VMs as shown below. Currently, it supports 100 VMs.
However, I am still getting the above error for the specs below.
And the Batch account is in East US 2 as well.
Am I doing anything wrong here? How can I get rid of AccountVMSeriesCoreQuotaReached?
Thanks in advance.
In a subscription, Azure Batch has its own set of quotas, which is separate from the subscription-wide quotas.
Go to: Batch accounts => <name of batch account> => Quotas
and increase it there.
While answering Retrieve quota for Microsoft Azure App Service Storage, I stumbled upon the FileSystemUsage metric for the Microsoft.Web/sites resource type. As per the documentation, this metric should return "Percentage of filesystem quota consumed by the app".
However, when I execute the Metrics - List REST API operation (and also look in the Metrics blade in the Azure portal) for my web app, the value is always returned as zero. I checked it against a number of web apps in my Azure subscriptions, and for all of them the result was zero. I am curious to know the reason for that.
In contrast, if I execute the App Service Plans - List Usages REST API operation, it returns the correct value. For example, if my App Service Plan is S2, I get the following response back:
{
"unit": "Bytes",
"nextResetTime": "9999-12-31T23:59:59.9999999Z",
"currentValue": 815899648,
"limit": 536870912000,//500 GB (50 GB/instance x max 10 instances)
"name": {
"value": "FileSystemStorage",
"localizedValue": "File System Storage"
}
},
Did I misunderstand FileSystemUsage for web apps? I would appreciate it if someone could explain the purpose of this metric. If it is indeed what is documented, then why is the API returning a zero value?
This should be the default behavior; please check this doc, Understand metrics:
Note
File System Usage is a new metric being rolled out globally; no data is expected unless your app is hosted in an App Service Environment.
So currently the File System Usage metric only works in an App Service Environment (ASE).
I have this setup on Azure.
1 Azure Service Bus
1 Sql Azure Database
1 Dynamic App Service Plan
1 Azure function
I'm writing messages to the service bus; my function is triggered when a message is received and writes to the database.
I have a huge number of messages to process, and I get this exception:
The request limit for the database is 90 and has been reached.
I dug here on SO and in the docs, and I found this answer from Paul Battum: https://stackoverflow.com/a/50769314/1026105
You can use the configuration settings in host.json to control the level of concurrency your functions execute at per instance and the max scaleout setting to control how many instances you scale out to. This will let you control the total amount of load put on your database.
What is the strategy to limit the function, since it can be limited at two levels:
the level of concurrency your functions execute at per instance
the number of instances
Thanks guys!
Look into using the Durable Functions extension for Azure Functions, where you can control the number of concurrent orchestrator and activity functions. You will need to change your design a little, but you will then get far better control over concurrency.
{
"version": "2.0",
"durableTask": {
"HubName": "MyFunctionHub",
"maxConcurrentActivityFunctions": 10,
"maxConcurrentOrchestratorFunctions": 10
},
"functionTimeout": "00:10:00"
}
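Alternatively, if you keep the plain Service Bus trigger, per-instance concurrency can be capped in host.json via the Service Bus extension, and the instance count via the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT app setting. A sketch with illustrative values: with maxConcurrentCalls of 8 and scale-out capped at 10 instances, at most about 80 concurrent calls hit the database, staying under the 90-request limit.

```json
{
    "version": "2.0",
    "extensions": {
        "serviceBus": {
            "messageHandlerOptions": {
                "maxConcurrentCalls": 8
            }
        }
    }
}
```

The product of the two settings is the rough upper bound on simultaneous database requests, so size them together.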
I am looking for a REST API that gives me the maximum number of NICs that can be attached to a VM, based on the VM size.
I have searched the Azure REST API references, but I couldn't find any such API. I am able to use the API below to get the max data disks that can be attached to a VM, but I also need the max NICs. Any help on how I can get this information?
https://management.azure.com/subscriptions/xxxxxx-xxxx-xxxx-xxxx-xxxxxxx/providers/Microsoft.Compute/locations/westus/vmSizes?api-version=2016-03-30
Sample output:
{
"name": "Standard_DS1_v2",
"numberOfCores": 1,
"osDiskSizeInMB": 1047552,
"resourceDiskSizeInMB": 7168,
"memoryInMB": 3584,
"maxDataDiskCount": 4
},
Well, it's dependent on the size of the VM. Check this article; it has got everything you need in it.
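Programmatically, the Resource SKUs API (GET https://management.azure.com/subscriptions/{id}/providers/Microsoft.Compute/skus?api-version=2019-04-01) returns a capabilities list per SKU that includes MaxNetworkInterfaces, which the older vmSizes endpoint does not expose. A sketch of pulling that value out of the response; the sample payload is a trimmed, hand-written assumption of the response shape:

```python
# Extract MaxNetworkInterfaces from a Resource SKUs API response.
# The sample payload is a trimmed, hand-written example of the shape the
# API returns; fetch the real one with an authenticated GET to
# /providers/Microsoft.Compute/skus.
sample_skus = [
    {
        "resourceType": "virtualMachines",
        "name": "Standard_DS1_v2",
        "capabilities": [
            {"name": "vCPUs", "value": "1"},
            {"name": "MaxDataDiskCount", "value": "4"},
            {"name": "MaxNetworkInterfaces", "value": "2"},
        ],
    }
]

def max_nics(skus, vm_size):
    """Return the MaxNetworkInterfaces capability for a VM size, or None."""
    for sku in skus:
        if sku.get("resourceType") == "virtualMachines" and sku.get("name") == vm_size:
            for cap in sku.get("capabilities", []):
                if cap["name"] == "MaxNetworkInterfaces":
                    return int(cap["value"])
    return None

print(max_nics(sample_skus, "Standard_DS1_v2"))  # 2
```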