Availability of Azure storage account

This page says that the availability of an Azure storage account is computed as (billable requests)/(total requests). Billable requests means all requests except those that experienced anonymous failures (other than network errors), throttled requests, server timeout errors, and unknown errors.
What I see on the Azure portal for my storage account is a straight line held at 100%, meaning the account is reported as continuously available. The line has no breaks, which implies the availability is being calculated continuously.
I know for sure that I don't send requests to the storage account continuously, so how is this metric calculated during periods when there are no requests?
Additionally, even a slight drop in storage availability means that some requests failed due to server-side issues. How can we ensure that these failed requests are retried and succeed?

When there isn't any incoming request, the availability is reported as 100%. If your request encounters server-side failures, you should retry the request explicitly in your code (in the .NET client library, you can easily leverage RetryPolicy. See more details about RetryPolicy here).
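As a rough illustration of that advice, here is a minimal sketch of configuring a retry policy with the classic WindowsAzure.Storage .NET client; the connection string and the container/blob names are placeholders, so treat this as an outline rather than the exact code from the library documentation.

    // Minimal sketch: configure an exponential retry policy on the classic
    // WindowsAzure.Storage client so transient server-side failures are retried.
    // The connection string and container/blob names are placeholders.
    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;
    using Microsoft.WindowsAzure.Storage.RetryPolicies;

    class RetryExample
    {
        static void Main()
        {
            CloudStorageAccount account =
                CloudStorageAccount.Parse("<your-connection-string>");
            CloudBlobClient client = account.CreateCloudBlobClient();

            // Retry up to 5 times, backing off exponentially from a 2-second delta.
            client.DefaultRequestOptions.RetryPolicy =
                new ExponentialRetry(TimeSpan.FromSeconds(2), 5);

            CloudBlobContainer container = client.GetContainerReference("mycontainer");
            container.CreateIfNotExists();

            // Every call made through this client now retries on transient failures.
            CloudBlockBlob blob = container.GetBlockBlobReference("example.txt");
            blob.UploadText("hello");
        }
    }

With the policy set on DefaultRequestOptions, every operation issued through that client retries transient failures automatically; a policy can also be passed per call via BlobRequestOptions if only some operations should retry.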

Related

Azure Webapp Request returning 502 after 4 min

I know this has been asked before, but I tried all known solutions and still no luck. I have a request that returns roughly 26 MB of JSON, and it is returning a 502 on my Azure web app. I have set maxRequestLength and maxAllowedContentLength to their maximum allowed values as detailed here.
How to set the maxAllowedContentLength to 500MB while running on IIS7?
I have also set the applicationHost.xdt in the site folder of my web app and verified it is applied as detailed here.
ApplicationHost.xdt in Azure Web Apps
Nonetheless, my request times out at exactly 4 minutes every time. I can run the same request against my localhost running on IIS Express pointed to the Azure SQL database and it returns the data, so I know this is something Azure Web App specific.
I have enabled all types of logging in the "App Service Logs" section of my web app. I see other failed request traces for 401s when the session expires, but this request doesn't log a failed request trace or an application error. In the live log stream, the request shows as a 200 response in the web server logs.
Any other ideas?
Thanks for a detailed question and for sharing the solutions that you have already tried. I'm unsure whether the "Always On" feature is turned on for your Web App. Such time-out errors may occur because of this, so kindly enable it and let us know for further investigation.
As additional information, Azure Load Balancer has a default idle timeout setting of approximately four minutes (230 seconds); this is a general idle request timeout that will cause clients to get disconnected after 230 seconds. However, the command will still continue running server-side after that. For a typical scenario, this is generally a reasonable response-time limit for a web request. In such scenarios, you could look at async methods to run additional reports (a sketch of one such hand-off is shown after the steps below); WebJobs or Azure Functions are another option.
If the 'Always On' setting is not turned on, please do turn it on. Always On helps keep the app loaded even when there is no traffic by periodically sending a request to the root of your application. Whatever is served when a request is made to / is what gets warmed up, and the feature comes with the App Service plan and is not charged separately.
1) From the Azure Portal, go to your WebApp.
2) Select Settings> Configuration > General settings.
3) For Always On, select On.
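If the work itself cannot finish inside the ~230-second front-end window, one common pattern is to return 202 Accepted immediately and let the work continue in the background. The sketch below uses ASP.NET's HostingEnvironment.QueueBackgroundWorkItem; ReportsController, StartReport, and GenerateReportAsync are hypothetical names, and the body of the worker is a stand-in for the real report generation.

    // Sketch of the "respond early, keep working" pattern for long-running requests.
    // The controller returns 202 Accepted before the load balancer's idle timeout hits.
    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Web.Hosting;
    using System.Web.Http;

    public class ReportsController : ApiController
    {
        [HttpPost]
        public IHttpActionResult StartReport(int reportId)
        {
            // Hand the long-running work to the ASP.NET background queue.
            HostingEnvironment.QueueBackgroundWorkItem(ct => GenerateReportAsync(reportId, ct));

            // The client then polls a status endpoint (or a blob URL) for the result.
            return StatusCode(System.Net.HttpStatusCode.Accepted);
        }

        private static async Task GenerateReportAsync(int reportId, CancellationToken ct)
        {
            // Placeholder for the real work: query the database, build the large JSON,
            // and write it somewhere durable (e.g. blob storage) for later download.
            await Task.Delay(TimeSpan.FromSeconds(1), ct);
        }
    }

Always On matters for this pattern because the background work only survives as long as the app stays loaded; for anything that must survive restarts, a WebJob or Azure Function with a queue is the more robust choice.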

Reducing maximum concurrent requests on Azure Cloud Service

I have an Azure Cloud Service (classic) that performs a lengthy operation for each request. If the traffic suddenly increases dramatically, I would rather the cloud service not be slowed to a crawl by trying to serve all the requests at once. So the natural solution to me seems to be to simply reject new requests if the number of concurrent requests is too high (for my application, it is better if some of the requests are rejected than if all of them are slowed down massively).
For ASP.NET there seems to be a capability to do this through <system.web>:
<system.web>
  <applicationPool
    maxConcurrentRequestsPerCPU="30"
    maxConcurrentThreadsPerCPU="0"
    requestQueueLimit="30" />
</system.web>
However, this doesn't seem to work for me: all requests then fail without a clear error message. Besides, Visual Studio tells me the tag is not expected, so I would guess this isn't available for Azure Cloud Services.
What can I do here to achieve this for an Azure Cloud Service? I would also be interested in something like limiting the maximum request time. The only thing I can think of is to actually count the requests in the C# code, but that definitely seems suboptimal.
I am using .NET 4.6.1, and when RDPing into the cloud service VM, the IIS version appears to be 10.0 (from looking at the manager).
I have seen the answer to this question: Limit concurrent requests in Azure App Service. However, that is not what I want, as I do not want to block IP addresses at any stage.
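For what it's worth, one way to do the in-code counting mentioned in the question is a small IHttpModule that tracks in-flight requests with Interlocked and returns 503 once a threshold is exceeded. This is only a sketch: the module name, the limit of 30, and the registration details are assumptions, not a drop-in solution.

    // Sketch: reject requests with 503 once too many are in flight.
    // ConcurrencyLimitModule and the limit of 30 are illustrative choices.
    using System.Threading;
    using System.Web;

    public class ConcurrencyLimitModule : IHttpModule
    {
        private const int MaxConcurrentRequests = 30;
        private static int _inFlight;

        public void Init(HttpApplication context)
        {
            context.BeginRequest += (sender, e) =>
            {
                if (Interlocked.Increment(ref _inFlight) > MaxConcurrentRequests)
                {
                    var app = (HttpApplication)sender;
                    app.Context.Response.StatusCode = 503;
                    app.CompleteRequest();  // skip the rest of the pipeline
                }
            };

            // EndRequest fires even for short-circuited requests, so the counter stays balanced.
            context.EndRequest += (sender, e) => Interlocked.Decrement(ref _inFlight);
        }

        public void Dispose() { }
    }

The module would still need to be registered under <system.webServer><modules> in web.config for IIS integrated mode, and the 503 response gives well-behaved clients a clear signal to back off and retry later.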

Azure BLOB storage phantom requests

I see strange requests when uploading blobs to storage. The only methods I use are PutBlob and SetBlobTier, but the metrics show a large number of GetBlobProperties requests at intervals of about 1 hour. It seems like Azure makes some extra requests for statistical purposes. It happens only while the uploading process is running. In the attached diagram you can see 4 peaks of GetBlobProperties requests.
Does anybody know what it is? Another question: will I be billed for these requests?
Any API call is considered to be a transaction:
Transactions – Each individual Blob, Table and Queue REST request to the storage service is considered as a potential transaction for billing. Applications can then control their transaction costs by controlling how often and how many requests they send to the storage service. We analyze each request received and then classify it as billable or not billable based upon our ability to process the request and the request’s outcome.
For that specific transaction, my guess is you're using Blob Archive (correct me if I'm wrong). It will be billed under the "All other operations (per 10,000)" pricing category.
What could be causing it?
Due to a bug in our system, some internal transactions are miscategorized as customer requests when blobs transition between cold and archive storage tiers. These unexpected transactions will also be visible in both Azure Monitor as well as Storage Analytics logging. We are working on a fix for this and will deploy it as soon as possible. We apologize for any inconvenience and confusion this may have caused.

Azure API Throttling

I am trying to get a list of all Storage Accounts present in my Azure subscription but I am getting a throttling error.
com.microsoft.azure.CloudException: Status code 429, {"error":{"code":"ResourceCollectionRequestsThrottled","message":"Operation 'Microsoft.Storage/storageAccounts/read' failed as server encountered too many requests. Please try after '17' seconds. Tracking Id is 'e982a894-0f3e-4291-a9b3-e147c18f8f60'."}}
The request immediately prior to this one reports that there are 13869 remaining subscription reads, yet the call still fails.
x-ms-ratelimit-remaining-subscription-reads: 13869
There are around 60 Storage Accounts in my subscription, which is a comparatively small number.
Any idea what's causing this, and why it happens only while listing Storage Accounts and nowhere else?
According to this article:
For each subscription and tenant, Resource Manager limits read requests to 15,000 per hour and write requests to 1,200 per hour. These limits apply to each Azure Resource Manager instance; there are multiple instances in every Azure region, and Azure Resource Manager is deployed to all Azure regions. So, in practice, limits are effectively much higher than those listed above, as user requests are generally serviced by many different instances.
If your application or script reaches these limits, you need to throttle your requests.
So if you reach the request limit, Resource Manager returns the 429 HTTP status code and a Retry-After value in the header. The Retry-After value specifies the number of seconds your application should wait (or sleep) before sending the next request. If you send a request before the retry value has elapsed, your request is not processed and a new retry value is returned.
I suggest you track the remaining read count from the x-ms-ratelimit-remaining-subscription-reads header. If you are approaching the limit, you could write code that throttles how often the application sends requests and honors the Retry-After value whenever a 429 is returned.
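As an illustration of honoring that header, the loop below backs off for the interval the service asks for before retrying. It is only a sketch: the ARM list URL, the api-version, and the bearer token are placeholders, and the surrounding class names are made up for the example.

    // Sketch: call an ARM list endpoint and back off when a 429 is returned.
    // The URL, api-version, and bearer token below are placeholders.
    using System;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ThrottledListExample
    {
        static async Task<string> ListStorageAccountsAsync(HttpClient http, string url)
        {
            while (true)
            {
                HttpResponseMessage response = await http.GetAsync(url);

                if (response.StatusCode == (HttpStatusCode)429)
                {
                    // Wait for the interval the service asked for (fall back to 15 s if absent).
                    TimeSpan delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(15);
                    await Task.Delay(delay);
                    continue;
                }

                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
        }

        static async Task Main()
        {
            using (var http = new HttpClient())
            {
                http.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", "<access-token>");

                string url = "https://management.azure.com/subscriptions/<subscription-id>" +
                             "/providers/Microsoft.Storage/storageAccounts?api-version=2019-06-01";
                string json = await ListStorageAccountsAsync(http, url);
                Console.WriteLine(json);
            }
        }
    }

Sending the retry before the Retry-After interval has elapsed just resets the clock, so sleeping for the full value is the cheapest way to stay under the limit.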

Microsoft Azure Storage - Success Percentage

I added a Microsoft Azure storage account to my hosting strategy and moved about 15 GB of videos to Azure. I set up the CDN endpoint and it seems fine; I have no complaints. But I added monitoring and can't understand the "Success percentage" indicator and why it would not be 100%.
Because some requests can go wrong:
Throttling
Authentication failed (SAS key expired for example)
Client / Server Timeout
Network errors
The complete list of possible errors can be found here: Storage Analytics Logged Operations and Status Messages
Sandrino was mostly correct; however, a page that may have a postback returns an HTTP 304 for assets that are cached. Currently, Azure reports that in its failed-delivery percentage. I believe this is going to be changed in the next console update.
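To pin down exactly which operations are being counted against the success percentage, you can turn on Storage Analytics logging and inspect the $logs container afterwards. Below is a minimal sketch using the classic WindowsAzure.Storage client; the connection string is a placeholder and the retention period is an arbitrary choice.

    // Sketch: enable Storage Analytics logging for the blob service so failed
    // requests (throttling, timeouts, SAS errors, ...) show up in the $logs container.
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;
    using Microsoft.WindowsAzure.Storage.Shared.Protocol;

    class EnableLoggingExample
    {
        static void Main()
        {
            CloudStorageAccount account =
                CloudStorageAccount.Parse("<your-connection-string>");
            CloudBlobClient client = account.CreateCloudBlobClient();

            ServiceProperties properties = client.GetServiceProperties();
            properties.Logging.LoggingOperations = LoggingOperations.All;  // read, write, delete
            properties.Logging.RetentionDays = 7;                          // keep a week of logs
            properties.Logging.Version = "1.0";

            client.SetServiceProperties(properties);
        }
    }

Once logging is on, each request's operation type and status message appear in the log entries, which makes it straightforward to match the dips in the success percentage to the categories listed above.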
