I have an Azure Cloud Service (classic) that performs a lengthy operation per request. If traffic suddenly increases dramatically, I would rather the cloud service not slow to a crawl trying to serve all the requests at once. So the natural solution to me seems to be to simply reject new requests when the number of concurrent requests is too high (for my application it is better if some requests are rejected than if all of them are slowed down massively).
For ASP.NET there seems to be a capability to do this through <system.web>:
<system.web>
  <applicationPool
      maxConcurrentRequestsPerCPU="30"
      maxConcurrentThreadsPerCPU="0"
      requestQueueLimit="30" />
</system.web>
However, this doesn't seem to work for me: all requests then fail without a clear error message. Besides, Visual Studio warns that the element is not expected. So I would guess this isn't available for Azure Cloud Services.
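For reference, ASP.NET reads those attributes from the CLR-level Aspnet.config file (e.g. %windir%\Microsoft.NET\Framework64\v4.0.30319\Aspnet.config), not from web.config, which would explain both the failures and the Visual Studio warning. A sketch of the relevant section of that file (the limits shown are the values from the question, not recommendations):

```xml
<configuration>
  <system.web>
    <applicationPool
        maxConcurrentRequestsPerCPU="30"
        maxConcurrentThreadsPerCPU="0"
        requestQueueLimit="30" />
  </system.web>
</configuration>
```

On a Cloud Service you would presumably have to apply this from a startup task, since machine-level files are reset whenever the instance is re-imaged.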
What can I do here to achieve this for an Azure Cloud Service? I would also be interested in something like limiting the maximum request time. The only thing I can think of is to actually count the requests in the C# code, but that definitely seems suboptimal.
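If you do end up counting requests yourself, one hedged sketch is a small gate based on `Interlocked` that an HTTP module or global filter could consult, returning 503 when the limit is hit rather than queueing. The class name and limit below are illustrative, not an established API:

```csharp
using System.Threading;

// Rejects work beyond a fixed concurrency limit instead of queueing it.
public class ConcurrencyGate
{
    private readonly int _limit;
    private int _inFlight;

    public ConcurrencyGate(int limit) { _limit = limit; }

    // Returns false (caller should answer 503) when the limit is reached.
    public bool TryEnter()
    {
        if (Interlocked.Increment(ref _inFlight) > _limit)
        {
            Interlocked.Decrement(ref _inFlight);
            return false;
        }
        return true;
    }

    public void Exit()
    {
        Interlocked.Decrement(ref _inFlight);
    }
}
```

A handler would call `TryEnter` in `Application_BeginRequest` (or an action filter) and `Exit` in `EndRequest`, so requests over the limit get an immediate 503 instead of piling up behind slow ones.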
I am using .NET 4.6.1, and when RDPing into the cloud service VM, IIS Manager reports version 10.0.
I have seen the answer to this question: Limit concurrent requests in Azure App Service. However, that is not what I want, as I do not want to block IP addresses at any stage.
Related
We are attempting to get Azure SignalR serverless to work with a .NET Core API application. With "default" SignalR, we ran into scaling issues in Azure, as server instances behind an API App would continue to receive socket connections even as their CPU usage increased. There is currently no way to change the load-balancing behavior or to take an instance out of traffic. As such, we've looked at using "serverless"; however, all documentation points to using Azure Functions. Given that serverless uses webhooks and the like, we should be able to use anything that can take an HTTP request. We already have our APIs set up, so getting this to work against our APIs is preferred.
Update 1
Effectively, we're looking for the serverless support that Functions get, but for APIs. Functions have triggers, serverless hubs to inherit from, and so on. These handle negotiate calls, deserialization of negotiation data, and all the other things SignalR has to do. We're looking for something similar for, I guess, API controllers.
Thanks!
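The piece Functions provide out of the box is essentially the negotiate endpoint, and the Azure SignalR Management SDK (Microsoft.Azure.SignalR.Management) exposes the same building blocks to any process, so an ordinary API controller should be able to serve negotiate itself. A sketch under that assumption (the hub name, route, and user id are illustrative, and the connection string is a placeholder):

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.SignalR.Management;

[ApiController]
public class NegotiateController : ControllerBase
{
    private readonly IServiceManager _serviceManager;

    public NegotiateController()
    {
        // In practice the connection string would come from configuration,
        // and the manager would be registered once in DI, not built per request.
        _serviceManager = new ServiceManagerBuilder()
            .WithOptions(o => o.ConnectionString = "<Azure SignalR connection string>")
            .Build();
    }

    // Clients call POST /chat/negotiate before opening the connection.
    [HttpPost("chat/negotiate")]
    public ActionResult Negotiate()
    {
        const string hubName = "chat"; // illustrative hub name
        return new JsonResult(new
        {
            url = _serviceManager.GetClientEndpoint(hubName),
            accessToken = _serviceManager.GenerateClientAccessToken(hubName, userId: "some-user")
        });
    }
}
```

Sending messages to clients then goes through the same SDK's service hub context rather than through a hub class hosted in the API.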
On the Web App Down page in Diagnose and Solve Problems for my Azure App Service, I am seeing a series of 502 errors that have been occurring consistently for the past few hours. My site is unreachable when I browse to it. I have tried restarting the app, and this has not helped. There have been no recent code deployments or configuration changes that could have led to this error.
Looking at the Microsoft Documentation I see:
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-troubleshooting-502#cause-1
This seems to be an issue with connectivity to the back-end address pool behind a gateway, which should be managed by Azure.
As you said, 502s generally indicate being unable to connect to back-end instances.
One solution is to scale your app service plan up or down, staying within the same tier (i.e. Standard vs. Premium) so as not to change your inbound virtual IP, wait ~5 minutes, and then scale back.
Examples: S1 -> S2 or P2v2 -> P1v2
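Assuming the Azure CLI is available, the round trip might look like this (the plan and resource group names are placeholders, and this needs a signed-in session against your subscription):

```shell
# Scale up within the same tier, wait ~5 minutes, then scale back.
az appservice plan update --name my-plan --resource-group my-rg --sku S2
sleep 300
az appservice plan update --name my-plan --resource-group my-rg --sku S1
```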
This operation, also referred to as the "scaling trick", allocates new instances for the app service plan hosting your web apps, as well as a new internal load balancer.
If a process hang-up caused by another resource running on the same hardware as your instance(s) is affecting your site, this is the most efficient way to move your site to a new instance. Essentially, it functions as a hard reset beyond the capabilities of the traditional restart.
Lastly, because Azure bills by the hour and this temporary scale lasts only about 5 minutes, even if you need to scale up to remain within the same tier (i.e. Standard vs. Premium), you will face either negligible cost or no cost at all.
For future reference, to prevent this issue from recurring: if you have multiple instances running for your app, please consider enabling the Health check feature: https://learn.microsoft.com/en-us/azure/azure-monitor/platform/autoscale-get-started#route-traffic-to-healthy-instances-app-service
You can find other best practices here: https://azure.github.io/AppService/2020/05/15/Robust-Apps-for-the-cloud.html
My Azure Web App is having problems with many requests in the HTTP queue.
To analyse the problem, I would like to see the Http Queue Length inside Application Insights.
Is it possible to see the Http Queue Length inside Azure Application Insights?
If it is, I would like to know how to see this value.
I have tried to find Http Queue Length in Azure Application Insights in the portal GUI.
I have also tried to find it in analytics.applicationinsights.io.
I have also tried to get Http Queue Length from the Azure REST API, but I did not succeed in getting the value.
If I had, I could have added it as a custom event in Application Insights.
Thanks,
Henrik
Unfortunately, for Azure Web Apps the answer is no: only a subset of performance counters is available to the web app process and, by extension, to Application Insights.
You've already been looking into this, but be sure you're requesting the queue length for the App Service plan, not the Web App instance, via the REST APIs or the PowerShell cmdlet. The link below explains that it is only available for some tiers and only at the App Service plan level.
https://learn.microsoft.com/en-us/azure/app-service/web-sites-monitor#understanding-quotas-and-metrics
Http Queue Length is not in Application Insights, but it can be seen in the metrics on the App Service plan, since it is a VM-level statistic. See this answer.
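If your plan is on a tier that exposes the metric, one way to pull it programmatically is through Azure Monitor against the plan resource rather than through the Application Insights API. A sketch with the Azure CLI (the subscription, resource group, and plan name are placeholders, and this needs a signed-in session):

```shell
# HttpQueueLength is a metric on the App Service plan (Microsoft.Web/serverfarms).
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Web/serverfarms/my-plan" \
  --metric HttpQueueLength \
  --interval PT1M
```

From there the values could be forwarded into Application Insights as custom telemetry if you want everything in one place.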
I'm trying to figure out a solution for recurring data aggregation of several thousand remote XML and JSON data files, by using Azure queues and WebJobs to fetch the data.
Basically, an input endpoint URL of some sort would be called (with a data URL as a parameter) on an Azure website/app. It should trigger a WebJobs background job (or the job could run continuously and check the queue periodically for new work), fetch the data URL, and then call back an external endpoint URL on completion.
Now the main concern is the volume and its performance/scaling/pricing overhead. There will be around 10,000 URLs to fetch every 10-60 minutes (most URLs will be fetched once every 60 minutes). Regarding this scenario of recurring high-volume background jobs, I have a couple of questions:
Is Azure WebJobs (or Worker Roles?) the right option for background processing at this volume, and can it scale accordingly?
For this sort of volume, which Azure website tier will be most suitable (comparison at http://azure.microsoft.com/en-us/pricing/details/app-service/)? Or would only a Cloud or VM(s) work at this scale?
Any suggestions or tips are appreciated.
Yes, Azure WebJobs is an ideal solution for this. Azure WebJobs scale with your Web App (formerly Websites): if you increase your web app instances, you also increase your web job instances. There are ways to prevent this, but that's the default behavior. You could also set up autoscale to automatically scale your web app based on CPU or other performance rules you specify.
It is also possible to scale your web job independently of your web front end (WFE) by deploying the web job to a web app separate from the web app where your WFE is deployed. This has the benefit of not taking up machine resources (CPU, RAM) that your WFE is using while giving you flexibility to scale your web job instances to the appropriate level. Not saying this is what you should do. You will have to do some load testing to determine if this strategy is right (or necessary) for your situation.
You should consider at least the Basic tier for your web app. That would allow you to scale out to 3 instances if you needed to and also removes the CPU and Network I/O limits that the Free and Shared plans have.
As for the queue, I would definitely suggest using the WebJobs SDK and letting the JobHost (from the SDK) invoke your web job function for you instead of polling the queue yourself. This is a really slick solution and frees you from writing the infrastructure code to retrieve messages from the queue, manage message visibility, delete messages, and so on. For a working example and a quick start on building your web job this way, take a look at the sample code that the Azure WebJobs SDK Queues template generates for you.
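A minimal sketch of that pattern, assuming the WebJobs SDK 1.x conventions (the queue name, function, and logging are illustrative; the storage connection comes from the AzureWebJobsStorage app setting):

```csharp
using System.IO;
using System.Net;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // The JobHost invokes this automatically whenever a message
    // (here, a data URL) lands on the "fetch-urls" queue.
    public static void FetchDataUrl([QueueTrigger("fetch-urls")] string dataUrl, TextWriter log)
    {
        using (var client = new WebClient())
        {
            string payload = client.DownloadString(dataUrl);
            log.WriteLine("Fetched {0} chars from {1}", payload.Length, dataUrl);
            // TODO: aggregate the payload, then call back the external endpoint.
        }
    }
}

public class Program
{
    public static void Main()
    {
        // Reads AzureWebJobsStorage/AzureWebJobsDashboard from app settings,
        // then polls the queue and dispatches to the function above.
        var host = new JobHost();
        host.RunAndBlock();
    }
}
```

Enqueueing 10,000 URLs on a schedule then becomes a separate, cheap step (e.g. a scheduled job that drops one message per URL), and you scale out by adding web app instances.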
I am trying to build an application in Windows Azure that requires notifications, so I am using HTTP polling, but I have a problem: I need to reach the very same instance of my web role so I can maintain the polling. I found a solution with web farm request rewriting (adding an extra proxy role), but I don't like that solution.
So my question:
Is there a way to do stateless polling and if there is someone that implemented this - can you give me a hint or link?
You just need the poll request to go through to some sort of shared 'stateful' layer. Probably the best candidate would be the Windows Azure Caching service: http://www.windowsazure.com/en-us/home/tour/caching/ . You could also just use storage; choosing between the two will largely be determined by how busy your app is, i.e. whether it is cheaper to pay a fixed cost for some cache space with no per-request charge, or to pay almost nothing for your storage space but incur a per-request charge.
By taking this approach it does not matter if subsequent polling requests end up routed to a different instance. To me this is best practice regardless; I've seen some ugly bugs pop up when people assume session affinity when making AJAX calls back into an Azure web role.
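As a sketch of the storage option, the poll state can live in a table keyed by client id, so any instance can answer any poll. This assumes the classic Microsoft.WindowsAzure.Storage SDK; the entity shape and partition key are illustrative:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Shared per-client poll state: any web role instance can read or write it.
public class PollState : TableEntity
{
    public PollState() { }
    public PollState(string clientId) : base("polls", clientId) { }
    public string LastEventId { get; set; }
}

public static class PollStore
{
    public static PollState Load(CloudTable table, string clientId)
    {
        var op = TableOperation.Retrieve<PollState>("polls", clientId);
        return (PollState)table.Execute(op).Result; // null if never seen
    }

    public static void Save(CloudTable table, PollState state)
    {
        table.Execute(TableOperation.InsertOrReplace(state));
    }
}
```

A poll handler loads the state, compares `LastEventId` against whatever is newest, and saves the updated state, with no dependence on which instance served the previous poll.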