How to capture IIS Request Wait Time

Is there any way by which I can get the value of how long a request waited in the IIS queues before it was given to my Azure Web Role?

What you are asking for really doesn't have anything to do with Azure since the IIS request queue is completely independent of anything in Azure - you just happen to be running IIS in an Azure VM.
For information about how to monitor request processing time, see http://msdn.microsoft.com/en-us/library/ms972959.aspx. There is a lot of information in that article, but you are probably most interested in the ASP.NET\Request Wait Time counter.
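That counter reports how long a request sat in the queue before ASP.NET began processing it. If you also want a rough application-side approximation of the same idea (for example, to emit into your own telemetry), a minimal sketch is: timestamp at enqueue, subtract at dequeue. This is illustrative only, not how IIS computes its counter:

```python
import queue
import time

def handle(item):
    enqueued_at, name = item
    wait_ms = (time.monotonic() - enqueued_at) * 1000  # time spent waiting in the queue
    return name, wait_ms

q = queue.Queue()
q.put((time.monotonic(), "req-1"))   # request arrives while the worker is busy
time.sleep(0.05)                     # simulate 50 ms of backlog
name, wait_ms = handle(q.get())
print(f"{name} waited {wait_ms:.0f} ms in the queue")
```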

Related

Reducing maximum concurrent requests on Azure Cloud Service

I have an Azure Cloud Service (classic) that performs a lengthy operation for a request. If the traffic suddenly increases dramatically, I would rather the cloud service not be slowed to the speed of a snail by trying to serve all the requests at once. So the natural solution to me seems to be to simply reject new requests if the number of concurrent requests is too high (for my application it is better if some of the requests are rejected than all of them slowed down massively).
For ASP.NET there seems to be a capability to do this through <system.web>:
<system.web>
  <applicationPool
    maxConcurrentRequestsPerCPU="30"
    maxConcurrentThreadsPerCPU="0"
    requestQueueLimit="30"
  />
</system.web>
However, this doesn't seem to work for me; all requests then fail without a clear error message. Besides, Visual Studio tells me the tag is not expected. So I would guess this isn't available for Azure Cloud Services.
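For what it's worth, one likely reason Visual Studio flags the tag: in IIS 7+ integrated mode the applicationPool element is read from the framework-level Aspnet.config file, not from the site's web.config. A sketch of where it would go (the path varies by framework version and bitness, so treat this as an assumption to verify on your instances):

```xml
<!-- %WINDIR%\Microsoft.NET\Framework64\v4.0.30319\Aspnet.config (illustrative path) -->
<configuration>
  <system.web>
    <applicationPool maxConcurrentRequestsPerCPU="30"
                     maxConcurrentThreadsPerCPU="0"
                     requestQueueLimit="30" />
  </system.web>
</configuration>
```

On a Cloud Service you would need a startup task to patch that file, since instances are re-imaged on redeploy.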
What can I do here to achieve this for an Azure Cloud Service? I would also be interested in something like limiting the maximum request time. The only thing I can think of is to actually count the requests in the C# code, but that definitely seems suboptimal.
I am using .NET 4.6.1 and when RDPing into the cloud service VM the IIS version seems to be 10.0 (from looking at the manager).
I have seen the answer to this question: Limit concurrent requests in Azure App Service. However, that is not what I want, as I do not want to block IP addresses at any stage.
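The fallback mentioned above, counting requests in code, is less awkward than it sounds. A minimal sketch of a gate that rejects instead of queuing when the service is saturated (the threshold, status codes, and function names here are placeholders, not an Azure API):

```python
import threading

MAX_CONCURRENT = 30  # placeholder threshold, like requestQueueLimit above
gate = threading.BoundedSemaphore(MAX_CONCURRENT)

def handle_request(do_work):
    # Try to enter without blocking; reject immediately when saturated
    if not gate.acquire(blocking=False):
        return 503, "server busy"
    try:
        return 200, do_work()
    finally:
        gate.release()

status, body = handle_request(lambda: "done")
print(status, body)  # 200 done
```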

Can I see Http Queue Length inside Azure Application Insights?

My Azure Web App is having problems with many requests in the HTTP queue.
To analyse the reason and find out what the problem is, I would like to see the Http Queue Length inside Application Insights.
Is it possible to see the Http Queue Length inside Azure Application Insights? If so, how do I find this value?
I have tried to find Http Queue Length in Application Insights in the Portal GUI.
I have also tried to find it in analytics.applicationinsights.io.
I have also tried to get it from the Azure REST API, but I did not succeed in getting the value.
If I had, I could have added it as a custom event into Application Insights.
Unfortunately, for Azure Web Apps the answer is no, as only a subset of performance counters is available to the web app process and, by extension, to Application Insights.
You’ve already been looking into this, but be sure you’re trying to get the queue length for the App Service Plan and not the Web App instance via the REST APIs or the PowerShell cmdlet. The link below explains that it is only available for some tiers and only at the App Service Plan level.
https://learn.microsoft.com/en-us/azure/app-service/web-sites-monitor#understanding-quotas-and-metrics
Http Queue Length is not in Application Insights, but it can be seen in the metrics on the App Service Plan, since it is a VM-level statistic. See this answer.
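Since the metric lives on the App Service plan, it can also be pulled from the Azure Monitor REST API and then pushed into Application Insights as a custom metric. A sketch of building the request URL (the `HttpQueueLength` metric name and `Microsoft.Web/serverfarms` resource type are taken from the linked quotas-and-metrics doc; authentication is omitted, and the identifiers should be verified against your subscription):

```python
from urllib.parse import urlencode

def metrics_url(subscription, rg, plan, metric="HttpQueueLength"):
    # Metrics endpoint for the App Service *plan* (serverfarm), not the web app itself
    resource = (f"/subscriptions/{subscription}/resourceGroups/{rg}"
                f"/providers/Microsoft.Web/serverfarms/{plan}")
    qs = urlencode({"api-version": "2018-01-01", "metricnames": metric})
    return f"https://management.azure.com{resource}/providers/microsoft.insights/metrics?{qs}"

url = metrics_url("sub-id", "my-rg", "my-plan")
print(url)
```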

Azure - Triggered by Q-message

In our app (Azure hosted) we produce invoices; these have to be injected into on-premises accounting software. It is not possible to host an API that would be reachable from Azure to post the invoices to.
Is it possible to create an exe that runs on-premises and gets triggered by Azure queue messages, like WebJobs can? Once triggered, it would retrieve the invoice from a blob storage object.
Other suggestions are also welcome.
One important thing I want to mention is that even WebJobs poll the queue at a predetermined interval (I believe the default is 30 seconds). Azure Queues don't support a push-based triggering mechanism like you think.
What you want to do is entirely possible though. You could write a Windows Service that essentially wakes up at a predetermined interval and checks for messages in the queue. If it finds messages, it processes them; otherwise it goes back to sleep.
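The wake/check/sleep loop described above can be sketched as follows (the queue client is stubbed with an in-memory list; a real service would call the Azure Storage queue API's get-message and delete-message operations instead):

```python
import time

def poll_loop(get_message, process, interval_s=30, max_idle_polls=None):
    """Wake up, drain the queue, then sleep -- the pattern a Windows Service would run."""
    idle = 0
    while max_idle_polls is None or idle < max_idle_polls:
        msg = get_message()
        if msg is None:
            idle += 1
            time.sleep(interval_s)  # nothing to do; back to sleep
        else:
            idle = 0
            process(msg)            # e.g. fetch the invoice blob and inject it

# Demo with an in-memory stand-in for the Azure queue
backlog = ["invoice-001", "invoice-002"]
handled = []
poll_loop(lambda: backlog.pop(0) if backlog else None,
          handled.append, interval_s=0, max_idle_polls=1)
print(handled)
```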

Azure WebJobs for Aggregation

I'm trying to figure out a solution for recurring data aggregation of several thousand remote XML and JSON data files, by using Azure queues and WebJobs to fetch the data.
Basically, an input endpoint URL of some sort would be called (with a data URL as a parameter) on an Azure website/app. It should trigger a WebJobs background job (or can one run continuously and check the queue periodically for new work?), fetch the data URL, and then call back an external endpoint URL on completion.
Now the main concern is the volume and its performance/scaling/pricing overhead. There will be around 10,000 URLs to be fetched every 10-60 minutes (most URLs will be fetched once every 60 minutes). With regards to this scenario of recurring high-volume background jobs, I have a couple of questions:
Is Azure WebJobs (or Worker Roles?) the right option for background processing at this volume, and can it scale accordingly?
For this sort of volume, which Azure website tier would be most suitable (comparison at http://azure.microsoft.com/en-us/pricing/details/app-service/)? Or would only a Cloud Service or VM(s) work at this scale?
Any suggestions or tips are appreciated.
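As a rough sizing check before choosing a tier: 10,000 URLs refreshed every 60 minutes works out to under 3 requests per second sustained, which is modest if the fetches are I/O-bound and parallelized (the per-fetch latency and worker count below are hypothetical):

```python
urls = 10_000
period_s = 60 * 60            # worst case: every URL refreshed hourly
rps = urls / period_s
print(f"{rps:.1f} requests/second sustained")

concurrency = 50              # hypothetical number of parallel fetchers
fetch_s = 2                   # hypothetical seconds per fetch
drain_min = urls * fetch_s / concurrency / 60
print(f"~{drain_min:.0f} min to drain the whole batch")
```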
Yes, Azure WebJobs is an ideal solution to this. Azure WebJobs will scale with your Web App (formerly Websites). So, if you increase your web app instances, you will also increase your web job instances. There are ways to prevent this but that's the default behavior. You could also setup autoscale to automatically scale your web app based on CPU or other performance rules you specify.
It is also possible to scale your web job independently of your web front end (WFE) by deploying the web job to a web app separate from the web app where your WFE is deployed. This has the benefit of not taking up machine resources (CPU, RAM) that your WFE is using while giving you flexibility to scale your web job instances to the appropriate level. Not saying this is what you should do. You will have to do some load testing to determine if this strategy is right (or necessary) for your situation.
You should consider at least the Basic tier for your web app. That would allow you to scale out to 3 instances if you needed to and also removes the CPU and Network I/O limits that the Free and Shared plans have.
As for the queue, I would definitely suggest using the WebJobs SDK and let the JobHost (from the SDK) invoke your web job function for you instead of polling the queue. This is a really slick solution and frees you from having to write the infrastructure code to retrieve messages from the queue, manage message visibility, delete the message, etc. For a working example of this and a quick start on building your web job like this, take a look at the sample code the Azure WebJobs SDK Queues template punches out for you.

Escaping from Affinity while Polling in the Azure Cloud

I am trying to build an application in Windows Azure that requires notifications, so I am using HTTP polling, but I have a problem. I need to reach the very same instance of my web role so I can maintain the polling. I found a solution with web farm request rewriting (adding an extra proxy role), but I don't like that solution.
So my question:
Is there a way to do stateless polling and if there is someone that implemented this - can you give me a hint or link?
You just need the poll request to thunk through to some sort of shared 'stateful' layer. Probably the best candidate would be the Windows Azure caching service: http://www.windowsazure.com/en-us/home/tour/caching/. You could also just use storage; choosing between the two will largely be determined by how busy your app is, i.e. whether it is cheaper to pay a fixed cost for some cache space with no per-request charge, or to pay almost nothing for your storage space but pay a per-request charge.
By taking this approach it does not matter if subsequent polling requests end up routed to a different instance. To me this is best practice regardless; I've seen some ugly bugs pop up when people assume session affinity when making AJAX calls back into an Azure web role.
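The shared-layer idea can be sketched as follows (a dict stands in for the cache or table storage, and the client/instance names are hypothetical; the point is that the state keyed by client lives outside any one instance):

```python
# In-memory stand-in for the shared layer (Azure caching service or table storage)
shared_state = {}

def handle_poll(client_id, instance):
    """Any instance can serve the poll because state lives in the shared layer, not in-process."""
    poll_count = shared_state.get(client_id, 0) + 1
    shared_state[client_id] = poll_count
    return f"instance {instance} served poll #{poll_count} for {client_id}"

print(handle_poll("client-a", "web-role-0"))
print(handle_poll("client-a", "web-role-1"))  # a different instance sees the same state
```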
