Azure API Throttling

I am trying to get a list of all Storage Accounts present in my Azure subscription, but I am getting a throttling error:
com.microsoft.azure.CloudException: Status code 429, {"error":{"code":"ResourceCollectionRequestsThrottled","message":"Operation 'Microsoft.Storage/storageAccounts/read' failed as server encountered too many requests. Please try after '17' seconds. Tracking Id is 'e982a894-0f3e-4291-a9b3-e147c18f8f60'."}}
The request immediately before this one reported 13,869 remaining subscription reads, yet the call still fails:
x-ms-ratelimit-remaining-subscription-reads: 13869
There are only around 60 Storage Accounts in my subscription, which by any measure is a small number. Any idea what's causing this, and why it happens only while listing Storage Accounts and nowhere else?

According to this article:
For each subscription and tenant, Resource Manager limits read requests to 15,000 per hour and write requests to 1,200 per hour. These limits apply to each Azure Resource Manager instance; there are multiple instances in every Azure region, and Azure Resource Manager is deployed to all Azure regions. So, in practice, limits are effectively much higher than those listed above, as user requests are generally serviced by many different instances.
If your application or script reaches these limits, you need to throttle your requests.
So if you reach the request limit, Resource Manager returns the 429 HTTP status code and a Retry-After value in the header. The Retry-After value specifies the number of seconds your application should wait (or sleep) before sending the next request. If you send a request before the retry value has elapsed, your request is not processed and a new retry value is returned.
I suggest you use the x-ms-ratelimit-remaining-subscription-reads header shown above to track how many reads you have left. If you are getting close to the limit, add logic to your application to throttle its own requests, and when a 429 does arrive, wait out the Retry-After interval before retrying.
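As a minimal sketch in Java, assuming the older com.microsoft.azure SDK (the one throwing the CloudException above), where the exception exposes the HTTP response; listOnce is a placeholder for your own wrapper around the storage-accounts call:

import com.microsoft.azure.CloudException;
import java.util.concurrent.Callable;

public class ThrottledCaller {
    // Not the SDK's built-in retry: a hand-rolled loop that honors Retry-After on 429.
    public static <T> T callWithRetry(Callable<T> listOnce) throws Exception {
        int maxAttempts = 5;
        for (int attempt = 1; ; attempt++) {
            try {
                return listOnce.call(); // e.g. () -> azure.storageAccounts().list()
            } catch (CloudException e) {
                boolean throttled = e.response() != null && e.response().code() == 429;
                if (!throttled || attempt >= maxAttempts) {
                    throw e; // not a throttling error, or retries exhausted
                }
                String retryAfter = e.response().headers().get("Retry-After");
                long waitSeconds = (retryAfter != null) ? Long.parseLong(retryAfter) : 30;
                Thread.sleep(waitSeconds * 1000L); // wait as instructed before retrying
            }
        }
    }
}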

Related

Logic App returns a Gateway timeout error even after 30 secs

I have a logic app that gets called from APIM > Function > Logic app > D365 (a synchronous call based on an HTTP request trigger). When I call it the first time (after a day, or after a few hours), it takes longer than usual (around 25-30 seconds) and results in a Gateway timeout error.
When I call it the second time, it usually completes the operation within 8-10 secs with no timeouts.
The error message is typical:
The execution of template action 'Response_-_to_be_displayed' is failed: the client application timed out waiting for a response from service. This means that workflow took longer to respond than the alloted timeout value. The connection maintained between the client application and service will be closed and client application will get an HTTP status code 504 Gateway Timeout.
Keeping the pattern synchronous, I don't see anything here that should cause a timeout. I have already checked this link and this one too, but neither solves my problem.
I want to keep the call synchronous (it's only a 25-30 sec call). Is there an APIM policy or a Logic Apps setting that can increase this timeout?
You can increase the timeout in both the Logic App and APIM. In APIM we implement a policy that dictates the timeout; in the Logic App it is done through app settings.
Note that if the APIM timeout is set beyond 240 seconds, APIM won't behave reliably, and it is advised to rework the implementation of the function instead.
To increase the timeout in APIM, set up the forward-request policy; this policy has a timeout attribute which you can set to your desired limit.
Go to policy fragments in APIM and add the following to a policy:
<forward-request timeout="60"/>
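For context, the forward-request element sits in the backend section of a policy, so a minimal sketch would look like this (the 60 is seconds and just an example value):

<policies>
    <backend>
        <forward-request timeout="60" />
    </backend>
</policies>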
Now, regarding the Logic App, you can set a custom timeout through application settings. You need two settings: one for the retention threshold and one for the timeout. Add them under Configuration in the portal for the Logic App.
The settings are called
Runtime.FlowRetentionThreshold
and
Runtime.Backend.FlowRunTimeout
Their values use the format days.hours:minutes:seconds; for example, 00.00:05:00 is five minutes.
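As a sketch, the two entries would appear as plain name/value pairs under the Logic App's Configuration blade (these values are purely illustrative, not recommendations):

Runtime.Backend.FlowRunTimeout = 00.00:05:00
Runtime.FlowRetentionThreshold = 07.00:00:00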
Also check that your function itself is working properly: the default workflow timeout in Logic Apps and the default APIM timeout are both longer than the time you say the task takes to execute.
Refer to this MS doc on Logic App timeouts.
Refer to this MS doc on APIM timeouts.

Azure BLOB storage phantom requests

I see strange requests when uploading blobs to storage. The only methods I use are PutBlob and SetBlobTier, but the metrics show a large number of GetBlobProperties requests at roughly one-hour intervals. It seems as if Azure makes some extra requests for statistics purposes. It happens only while the uploading process is running. In the attached diagram you can see 4 peaks of GetBlobProperties requests.
Does anybody know what this is? And will I be billed for these requests?
Any API call is considered to be a transaction:
Transactions – Each individual Blob, Table and Queue REST request to the storage service is considered as a potential transaction for billing. Applications can then control their transaction costs by controlling how often and how many requests they send to the storage service. We analyze each request received and then classify it as billable or not billable based upon our ability to process the request and the request’s outcome.
For that specific transaction, my guess is you're using Blob Archive (correct me if I'm wrong). In that case it is billed under the "All other Operations (per 10,000)" pricing.
What could be causing it?
Due to a bug in our system, some internal transactions are miscategorized as customer requests when blobs transition between cold and archive storage tiers. These unexpected transactions will also be visible in both Azure Monitor as well as Storage Analytics logging. We are working on a fix for this and will deploy it as soon as possible. We apologize for any inconvenience and confusion this may have caused.

Throttling issue while listing Azure Storage Accounts

I am using the Azure Java SDK and am trying to list the Storage Accounts for the subscription, but I am intermittently getting this exception response:
com.microsoft.azure.CloudException: Status code 429, {"error":{"code":"ResourceCollectionRequestsThrottled","message":"Operation 'Microsoft.Storage/storageAccounts/read' failed as server encountered too many requests. Please try after '17' seconds. Tracking Id is 'f13c3318-8fb3-4ae1-85a5-975f4c17a512'."}}
Is there a limit on the number of requests one can make to the Azure resource API?
Is there a limit on the number of requests one can make to the Azure resource API?
Yes. The limits are documented here: https://learn.microsoft.com/en-us/azure/azure-subscription-service-limits (please see the "Subscription limits - Azure Resource Manager" section), and you can see the 429 error code here.
Based on the documentation, you're currently allowed to make 15,000 read requests per hour against the Azure Resource Manager API.
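For reference, a minimal listing sketch with the fluent Java SDK (the com.microsoft.azure.management packages; credential construction omitted, see the SDK's authentication docs) looks roughly like this:

import com.microsoft.azure.management.Azure;
import com.microsoft.azure.management.storage.StorageAccount;

// credentials setup omitted; each page of results costs one ARM read
Azure azure = Azure.authenticate(credentials).withDefaultSubscription();
for (StorageAccount account : azure.storageAccounts().list()) {
    System.out.println(account.name());
}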

Sending emails through Amazon SES with Azure Functions

The Problem:
So we are building a newsletter system for our app that must have a capacity to send 20k-40k emails up to several times a day.
Tools of Preference:
Amazon SES - for pricing and scalability
Azure Functions - for serverless compute to send emails
Limitations of Amazon SES:
Amazon SES throttling with a max send rate - Amazon SES throttles sending through its service by imposing a max send rate. Right now, being out of the SES sandbox environment, our capacity is 14 emails/sec with a 50K daily email cap, but this limit can be increased via a support ticket.
Limitations of Azure Functions:
On a Consumption Plan, there's no way to limit how many instances of your Azure Function execute. Scaling is currently handled internally by Azure, so the function can run on anywhere from a few instances to hundreds.
From reading other posts on Azure Functions, there also seems to be a "warm-up" period, meaning the function may not execute as soon as it is triggered via one of the documented triggers.
Limitations of Azure Functions with SES:
The obvious problem is that Amazon SES will throttle the emails sent from Azure Functions, because the scaled-out execution of the function that sends each email will far exceed the allowed SES send rate.
Due to the "warm-up" period of Azure Functions, messages may also pile up in a queue before the function starts processing them at scale and sending out the emails, so there's a very high probability of hitting that send-rate limit.
Question:
How can we send emails via Azure Functions while staying under the X emails/second limit of SES? Is there a way to limit how many times an Azure Function can execute per time frame? Say, for example, we don't want more than 30 instances of the function running per second?
Other thoughts:
Amazon SES might not appreciate continuous throttling if a customer's implementation is constantly hitting that throttling limit. Amazon SES folks, can you please comment?
Azure Functions - as per documentation, the scaling of Azure Functions on a Consumption Plan is handled internally. But isn't there a way to put a manual "cap" on scaling? This seems like such a common requirement from a customer's point of view. The problem is not that Azure Functions can't handle the load, the problem is that other components of the system that interface with Azure Functions can't handle the load at the massive scale at which Azure Functions can handle it.
Thank you for your help.
If I understand your problem correctly, the easiest method is a custom queue-throttling solution, sketched below.
Basically, your Azure Function receives all the mailing requests and queues them into a queue system (say Service Bus / Event Hubs / IoT Hub). Then a second Azure Function runs on an x-minute interval, pulls at most y messages, and pushes them to SES. That clock function becomes your control point, and since the queue system tracks each message's delivery status (whether it has been sent to SES yet) and you pop the queue only once a message is done, the job is guaranteed to be processed eventually.
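A rough sketch of that clock function in Java, assuming the azure-messaging-servicebus client and a hypothetical sendViaSes helper wrapping the AWS SES SDK; the queue name, schedule, and rate numbers are illustrative only:

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.TimerTrigger;
import java.time.Duration;

public class MailDrainFunction {
    private static final int MAX_PER_RUN = 600; // "y" messages per run, illustrative
    private static final long PAUSE_MS = 100;   // ~10 sends/sec, under the 14/sec SES cap

    @FunctionName("drainMailQueue")
    public void run(
            @TimerTrigger(name = "timer", schedule = "0 */1 * * * *") String timerInfo,
            ExecutionContext ctx) throws InterruptedException {
        ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
                .connectionString(System.getenv("SERVICEBUS_CONNECTION")) // assumed app setting
                .receiver()
                .queueName("outbound-mail") // hypothetical queue name
                .buildClient();
        for (ServiceBusReceivedMessage msg
                : receiver.receiveMessages(MAX_PER_RUN, Duration.ofSeconds(10))) {
            sendViaSes(msg.getBody().toString()); // hypothetical SES wrapper
            receiver.complete(msg);               // pop the queue only after a successful send
            Thread.sleep(PAUSE_MS);               // crude pacing under the SES send rate
        }
        receiver.close();
    }

    private void sendViaSes(String payload) {
        // placeholder: call the AWS SES SDK (e.g., SesV2Client.sendEmail) here
    }
}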
You should be able to set maxConcurrentCalls to 1 in the function's host.json file; this ensures that only one function execution occurs at any given time and should throttle your processing rate to something more agreeable from AWS's perspective in terms of sends per second:
host.json
{
  // The unique ID for this job host. Can be a lower case GUID
  // with dashes removed. When running in Azure Functions, the id can be
  // omitted, and one gets generated automatically.
  "id": "9f4ea53c5136457d883d685e57164f08",

  // Configuration settings for 'serviceBus' triggers. (Optional)
  "serviceBus": {
    // The maximum number of concurrent calls to the callback the message
    // pump should initiate. The default is 16.
    "maxConcurrentCalls": 1
    // ...
  }
}

Availability of Azure storage account

This page says that the availability of an Azure storage account is computed as (billable requests) / (total requests). Billable requests means all requests excluding those that experienced anonymous failures (except network errors), throttled requests, server-timeout errors, and unknown errors.
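(Illustrative numbers: if the account served 10,000 requests in the hour and 9,998 of them were billable, availability = 9,998 / 10,000 = 99.98%.)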
Now, what I see on the Azure portal for my storage account is a straight line sitting continuously at 100%, meaning the account is available at 100% availability at all times. The line has no breaks, which implies the availability is being calculated continuously.
I know for sure that I don't send requests to the storage account continuously. So how is this metric calculated during periods when there are no requests?
Additionally, even a slight drop in storage availability means that some requests failed due to server-side issues. How can we ensure that those failed requests are retried and succeed?
When there aren't any incoming requests, the availability is reported as 100%. If your request encounters server-side failures, you should retry the request explicitly in your code (in the .NET client library you can easily leverage RetryPolicy; see more details about RetryPolicy here).
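The answer mentions the .NET RetryPolicy; as a sketch, the rough equivalent in the legacy Java storage SDK (com.microsoft.azure.storage) sets a retry policy on the blob client's default request options:

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.RetryExponentialRetry;
import com.microsoft.azure.storage.blob.CloudBlobClient;

// connectionString is your storage account connection string;
// parse() throws checked exceptions, so call this inside a method that handles them
CloudStorageAccount account = CloudStorageAccount.parse(connectionString);
CloudBlobClient client = account.createCloudBlobClient();
// exponential backoff: 1s initial delta, up to 3 attempts (illustrative values)
client.getDefaultRequestOptions().setRetryPolicyFactory(new RetryExponentialRetry(1000, 3));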
