Microsoft Azure Storage - Success Percentage

I added Microsoft Azure Storage to my hosting strategy and moved about 15 GB of videos to Azure. I set up the CDN endpoint and it seems fine; I have no complaints. But I added monitoring and can't understand the "Success percentage" indicator - why would it not be 100%?

Because some requests can fail:
Throttling
Authentication failure (for example, an expired SAS token)
Client/server timeouts
Network errors
The complete list of possible errors can be found here: Storage Analytics Logged Operations and Status Messages

Sandrino was mostly correct; however, a page with a postback returns an HTTP 304 for assets that are cached, and Azure currently reports those in its failure percentage. I believe this is going to be changed in the next console update.


Azure Blob Storage - Static Site analytics

I've got a static web site hosted in Azure Blob Storage with Cloudflare as my CDN. It's a small site (not even 1 MB and only one blog post), but I've been getting 1.1-1.2 GB of requests each month for the past 6 months or so with no explanation. Is there a way to find out what is being requested? In Azure, I can only find information about performance, up-time, etc., but nothing about URLs, and I believe I'd have to pay to get this info from Cloudflare. Has anyone else experienced such strange requests?
I suggest you open Diagnostic settings and download Azure Storage Explorer to view the logs.
Once the settings take effect, you can check the logs with that tool and see the request URLs and HTTP status information.
Historical data won't be visible, but you can monitor and analyze future requests.
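If you'd rather enable logging programmatically than through the portal's Diagnostic settings, the classic .NET storage SDK (Microsoft.WindowsAzure.Storage) exposes the same switch; a minimal sketch, with the connection string as a placeholder:

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Shared.Protocol;

    class EnableBlobLogging
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("<your-connection-string>");
            var client = account.CreateCloudBlobClient();

            // Turn on read/write/delete logging for the Blob service.
            // Logs land in the hidden $logs container, which Azure Storage
            // Explorer can browse.
            var props = client.GetServiceProperties();
            props.Logging.LoggingOperations = LoggingOperations.All;
            props.Logging.RetentionDays = 7;
            props.Logging.Version = "1.0";
            client.SetServiceProperties(props);
        }
    }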
When I did a lookup on those two IPs, they were both registered to Cloudflare. One makes sense given I'm using their service, but having what appears to be their bot hit my site with this frequency doesn't... Wonder if there's a setting

Azure Webapp Request returning 502 after 4 min

I know this has been asked before, but I tried all known solutions and still no luck. I have a request that returns roughly 26 MB of JSON, and it is returning a 502 on my Azure web app. I have set maxRequestLength and maxAllowedContentLength to their max allowed values as detailed here.
How to set the maxAllowedContentLength to 500MB while running on IIS7?
I have also set the applicationHost.xdt on the site folder of my webapp and verified it is applied as detailed here.
ApplicationHost.xdt in Azure Web Apps
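For reference, the relevant web.config entries look roughly like this (the values are the documented maximums; note that maxRequestLength is in kilobytes while maxAllowedContentLength is in bytes):

    <configuration>
      <system.web>
        <httpRuntime maxRequestLength="2147483647" />
      </system.web>
      <system.webServer>
        <security>
          <requestFiltering>
            <requestLimits maxAllowedContentLength="4294967295" />
          </requestFiltering>
        </security>
      </system.webServer>
    </configuration>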
Nonetheless, my request times out at exactly 4 minutes every time. I can run the same request against my localhost running on IIS Express pointed at the Azure SQL database and it returns the data, so I know this is something Azure Web App specific.
I have enabled all types of logging in the "App Service Logs" section of my web app. I see other failed request traces for 401s when the session expires, but this request doesn't log a failed request trace or an application error. In the live log stream, the web server logs show the request as a 200 response.
Any other ideas?
Thanks for a detailed question and for sharing the solutions that you have already tried. I'm unsure whether the "Always On" feature is turned on for your Web App. Such timeout errors may occur when it's off, so kindly enable it and let us know for further investigation.
Additional information: Azure Load Balancer has a default idle timeout of approximately four minutes (230 seconds). This is a general idle request timeout that causes clients to be disconnected after 230 seconds without a response; the operation will still continue running server-side after that. For a typical web request, this is generally a reasonable response time limit. For longer-running work, you could look at async methods to run additional reports, as sketched below; WebJobs or Azure Functions are another option.
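One common async pattern is to return 202 Accepted immediately and let the client poll for the finished result, so no single request has to stay open past the 230-second limit. A minimal ASP.NET MVC sketch; ReportsController, BuildLargeJson, and the in-memory store are all hypothetical stand-ins:

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class ReportsController : Controller
    {
        // Hypothetical in-memory store; a real app would use a durable
        // queue or table, or hand the work to a WebJob / Azure Function,
        // since Task.Run work is lost if the app pool recycles.
        static readonly ConcurrentDictionary<string, string> Results =
            new ConcurrentDictionary<string, string>();

        [HttpPost]
        public ActionResult Start()
        {
            var id = Guid.NewGuid().ToString("N");
            Task.Run(() => Results[id] = BuildLargeJson()); // long-running work off-request
            Response.StatusCode = 202; // Accepted: tell the client where to poll
            return Json(new { statusUrl = Url.Action("Status", new { id }) });
        }

        public ActionResult Status(string id)
        {
            string json;
            if (!Results.TryGetValue(id, out json))
            {
                Response.StatusCode = 202; // still running, poll again later
                return new EmptyResult();
            }
            return Content(json, "application/json");
        }

        static string BuildLargeJson()
        {
            return "{}"; // stand-in for the real ~26 MB report query
        }
    }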
If the 'Always On' setting is not turned on, please do turn it on. Always On keeps the app loaded even when there's no traffic by sending a request to the root of your application; whatever file is served for / is the one that gets warmed up. The feature comes with the App Service plan and is not charged separately.
1) From the Azure Portal, go to your WebApp.
2) Select Settings> Configuration > General settings.
3) For Always On, select On.
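Alternatively, assuming you use the Azure CLI, az webapp config set --resource-group <group> --name <app-name> --always-on true should apply the same setting.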

Throttling issue while listing Azure Storage Accounts

I am using the Azure Java SDK and am trying to list the storage accounts for the subscription, but I am intermittently getting this exception response.
com.microsoft.azure.CloudException: Status code 429,
{"error":{"code":"ResourceCollectionRequestsThrottled","message":"Operation 'Microsoft.Storage/storageAccounts/read' failed as server encountered too many requests. Please try after '17' seconds. Tracking Id is 'f13c3318-8fb3-4ae1-85a5-975f4c17a512'."}}
Is there a limit on the number of requests one can make to the Azure resource API ?
Yes. The limits are documented here: https://learn.microsoft.com/en-us/azure/azure-subscription-service-limits (please see the Subscription limits - Azure Resource Manager section), and you can see the 429 error code there as well.
Based on the documentation, you're currently allowed to make 15,000 read requests per hour against the Azure Resource Manager API.
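The error message itself tells you how long to back off ("Please try after '17' seconds"), so the usual mitigation is to catch the throttling error and retry after that interval. A minimal sketch of the pattern, shown in C# for brevity (it translates directly to the Java SDK, where the throttling surfaces as a CloudException with status code 429); the call delegate is a stand-in for your listStorageAccounts call:

    using System;
    using System.Threading.Tasks;

    static class ArmRetry
    {
        public static async Task<T> WithRetryAsync<T>(Func<Task<T>> call, int maxAttempts = 5)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    return await call(); // e.g. a hypothetical ListStorageAccountsAsync()
                }
                catch (Exception ex) when (IsThrottled(ex) && attempt < maxAttempts)
                {
                    // Prefer the server's Retry-After hint when your SDK exposes it;
                    // otherwise fall back to exponential backoff (2s, 4s, 8s, ...).
                    await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
                }
            }
        }

        static bool IsThrottled(Exception ex)
        {
            // Match your SDK's throttling exception here (status code 429 /
            // error code ResourceCollectionRequestsThrottled).
            return ex.Message.Contains("429");
        }
    }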

Availability of Azure storage account

This page says that the availability of an Azure storage account is computed as (billable requests)/(total requests). Billable requests means all requests excluding those that experienced anonymous failures (except network errors), throttled requests, server timeout errors, and unknown errors.
Now, what I see on the Azure portal for my storage account is a straight line continuously at 100%, meaning the account is continuously available at 100% availability. The line is without any break, which means the availability is being calculated continuously.
I know for sure that I don't throw requests to the storage account continuously. Then, how is this metric calculated for times when there are no requests?
Additionally, even a slight drop in storage availability means that some requests failed due to some server-side issue. How can we ensure that these failed requests are retried and succeed?
When there aren't any incoming requests, the availability is reported as 100%. If your request encounters server-side failures, you should retry the request explicitly in your code (in the .NET client library, you can easily leverage RetryPolicy; see more details about RetryPolicy here).
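For example, with the classic .NET storage library (Microsoft.WindowsAzure.Storage) you can attach an exponential retry policy to the client's default request options; a minimal sketch, with the connection string and container name as placeholders:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.RetryPolicies;

    class RetryPolicyExample
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("<your-connection-string>");
            var client = account.CreateCloudBlobClient();

            // Retry transient server-side failures up to 3 times,
            // waiting roughly 2s, 4s, 8s between attempts.
            client.DefaultRequestOptions.RetryPolicy =
                new ExponentialRetry(TimeSpan.FromSeconds(2), 3);

            var container = client.GetContainerReference("videos");
            container.CreateIfNotExists();
        }
    }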

Is it considered a good idea to use Azure storage blob URIs as part of a web site's assets (i.e., linkable)?

From my Azure Web App service (ASP.NET MVC), I am serving up images via an anonymous controller method from Azure classic blob storage in two ways:
one, as a redirect straight to the storage blob URI, and
two, served up via an HttpClient fetch and then a FileContentResult built from the retrieved bytes (with the advantage of hiding the storage URI).
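In code, the two variants look roughly like this stripped-down sketch; GetBlobUri is a hypothetical helper standing in for the database lookups mentioned below, and the content type is illustrative:

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class ImagesController : Controller
    {
        static readonly HttpClient http = new HttpClient();

        // Approach one: redirect the client straight to the blob URI.
        public ActionResult Direct(string id)
        {
            return Redirect(GetBlobUri(id));
        }

        // Approach two: proxy the bytes through the web app, hiding the
        // storage URI at the cost of doubling the traffic through the app.
        public async Task<ActionResult> Proxied(string id)
        {
            byte[] bytes = await http.GetByteArrayAsync(GetBlobUri(id));
            return new FileContentResult(bytes, "image/jpeg");
        }

        string GetBlobUri(string id)
        {
            // Hypothetical lookup; in the real app this involves several
            // database select calls.
            return "https://<account>.blob.core.windows.net/images/" + id;
        }
    }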
Both controller methods involve several database select calls, but I'm using a P2-tier Azure database.
I'm still testing the performance differences between the two - obviously the second requires double the traffic overall: bytes from storage into web app, and then bytes from web app to client.
In both cases however, I'm seeing pretty unacceptable response times and errors from blob storage under load, using the Azure performance testing tool (250 concurrent simulated users over 5 mins).
When using the second approach (fetching from storage on the web app and streaming back) I'm getting lots of HTTP request errors when requesting from storage, so I have an instant retry mechanism (max 3 tries) to mitigate this. The end result is an average response time of between 12 and 25 seconds for an image, which isn't much good for displaying in an email these days.
Using the first approach (a clean redirect to the storage URI) this drops to 3-6 seconds on average to serve the redirect - but I have no control over whether the subsequent client request to storage fails (which it clearly does: between an 80% and 95% success rate according to the diagnostic logs). So I'm looking at a fourfold latency increase to 'guarantee' the image is actually served to the client - which is effectively just as bad.
Is this an all-out stupid approach? I'm probably being quite the noob about all this. Surely there are architectures built on Azure storage that are far larger than mine and with fast response rates?
This is just anecdotal, but we've seen good results with Azure CDN using blob storage as the origin. So instead of redirecting to the blob storage URL, you use the Azure CDN URL.
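The swap itself is trivial once the endpoint exists; a sketch, assuming a hypothetical CDN endpoint named myendpoint fronting the storage account:

    using System;

    static class CdnUrls
    {
        // Blob URL: https://myaccount.blob.core.windows.net/images/photo.jpg
        // CDN URL:  https://myendpoint.azureedge.net/images/photo.jpg
        public static string ToCdnUrl(Uri blobUri) =>
            new UriBuilder(blobUri) { Host = "myendpoint.azureedge.net" }.Uri.ToString();
    }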
