100% Memory usage on Azure App Service Plan with two Apps - working set used 10 GB+

I've got an app service plan with 14 GB of memory - it should be plenty for my application's needs. There are two App Services running on it, each identical - the private memory consumption of these hovers around 1 GB but can spike to 4 GB during periods of high usage. One app has a heavier usage pattern than the other.
Lately, during periods of high usage, I've noticed that the heavily used service can become unresponsive, and memory usage stays at 100% in the App Service Plan.
The high-traffic service is using 4 GB of private memory and starting to slow down massively. When I head over to the /scm.../ProcessExplorer/ page, I can see that the low-traffic service has 1 GB of private memory used and 10 GB of 'Working Set'.
As I understand it, on a single machine at least, the working set should be freed up when that memory is needed by another process. Does this happen naturally when two App Services share a single Plan?
It looks to me like the working set on the low-traffic instance is not being freed up to supply the needs of the high-traffic App Service.
If this is indeed the case, the simple fix is to move them to separate App Service Plans, each with 7 GB of memory. However, this seems like it might just be shifting the problem around - has anyone else noticed similar issues with multiple apps on a single App Service Plan? As far as I understand it, these shouldn't interfere with one another to the extent that they all need to be separated. Or have I got the wrong diagnosis?

In some high memory-consumption scenarios, your app might genuinely require more computing resources; in that case, consider scaling to a higher service tier so the application gets all the resources it needs. Other times, a bug in the code might cause a memory leak, or a particular coding practice might increase memory consumption. Getting insight into what's triggering high memory consumption is a two-part process: first create a process dump, then analyze it. Crash Diagnoser from the Azure Site Extension Gallery can perform both of these steps efficiently. For more information, refer to Capture and analyze a dump file for intermittent high memory for Web Apps.
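If you prefer to script the first step yourself, a dump can also be requested over the Kudu process API. This is a hedged sketch only: the endpoint paths, the dumpType parameter, and all names below are assumptions to verify against your own Kudu (scm) site before relying on it.

```powershell
# Hedged sketch: pull a full memory dump of the site's w3wp process through the
# Kudu REST API at https://<app>.scm.azurewebsites.net, authenticating with the
# publishing-profile credentials (basic auth). Paths/parameters are assumptions.

$app  = "my-heavy-app"                    # hypothetical app name
$user = '$my-heavy-app'                   # publishing profile user (usually starts with $)
$pass = "publishing-profile-password"     # publishing profile password
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${user}:${pass}"))
$headers = @{ Authorization = "Basic $auth" }

# List processes and pick a w3wp worker (if two appear, one may be the scm site itself).
$procs = Invoke-RestMethod -Uri "https://$app.scm.azurewebsites.net/api/processes" -Headers $headers
$w3wp  = $procs | Where-Object { $_.name -eq "w3wp" } | Select-Object -First 1

# Download a full dump (dumpType=2 assumed) for offline analysis in WinDbg/Visual Studio.
Invoke-WebRequest -Uri "https://$app.scm.azurewebsites.net/api/processes/$($w3wp.id)/dump?dumpType=2" `
                  -Headers $headers -OutFile "C:\dumps\$app-w3wp.dmp"
```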

In the end we solved this one via mitigation, rather than getting to the root cause.
We found a mitigation strategy for our previous memory issues several months ago, which was simply to restart the server each night using a PowerShell script. This seems to prevent the memory from building up over time, and only costs us a few seconds of downtime. Our system doesn't have much overnight traffic, as our users are all based in the same geographic location.
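A minimal sketch of that kind of nightly restart (not our exact script), assuming the Az PowerShell module; the resource group and app names are placeholders, and an unattended identity avoids the expired-credential failure described in the next paragraph:

```powershell
# Minimal sketch of a nightly restart of both apps on the plan.
# Connect-AzAccount -Identity   # e.g. when run from an Azure Automation account

$resourceGroup = "my-resource-group"   # placeholder
foreach ($app in @("heavy-traffic-app", "low-traffic-app")) {
    Restart-AzWebApp -ResourceGroupName $resourceGroup -Name $app
    Write-Output "$(Get-Date -Format o) restarted $app"
}
```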
However, we recently found that the overnight restart was reporting 'success' but actually failing each night due to expired credentials, which meant that the memory issues described in the question I posted were being exacerbated by server uptimes of several weeks. Restoring the overnight restart resolved the memory issues we were seeing, and we certainly don't see our system ever using 10 GB+ again.
We'll investigate the memory issues if they rear their heads again. KetanChawda-MSFT's suggestion of using memory dumps to analyse the memory usage will be employed for this investigation when it's needed.

Related

App Services on Azure seem to be very slow

I am trying to track down when our frontend started to become this slow. I recently created new App Services within the same service plan,
so now I have six apps (2 frontend, 4 backend) running under the same App Service plan on the Basic pricing tier. We also use Kudu for deployments.
Could that be the reason? Or how should I look for the reason?
This is an overview of that service plan.
I'd appreciate any ideas and suggestions.
@user122222 This is a high-CPU issue and not a slow-request issue, as others have pointed out.
An immediate action you can take is to scale up. If you are using a B1 instance in the Basic tier, try to scale up to a B3, which will provide you with more CPU cores and RAM. See if that gives you relief. If so, then you likely need to remain at this instance level. At this point it would also be worthwhile to analyze your number of requests. You should scale up when you are running many sites or resource-intensive sites, and you should scale out when you are receiving a high number of requests.
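For reference, a hedged sketch of both operations with the Az.Websites module; the plan and resource group names are placeholders, and the parameter names should be checked against your module version:

```powershell
$rg   = "my-resource-group"       # placeholder
$plan = "my-app-service-plan"     # placeholder

# Scale up: stay in the Basic tier but move to the large (B3) worker size.
Set-AzAppServicePlan -ResourceGroupName $rg -Name $plan -Tier "Basic" -WorkerSize "Large"

# Scale out: keep the instance size but add workers to absorb a high request count.
Set-AzAppServicePlan -ResourceGroupName $rg -Name $plan -NumberofWorkers 3
```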
My money is on the fact that you likely have an issue with your code that is causing a deadlock or similar. Your CPU usage graph is stuck at 100% usage over many hours. Even an overloaded ASP will see a few dips over the course of a few hours.
To troubleshoot high CPU usage, start with the Diagnose and solve problems blade in your App Service plan. This is the same troubleshooting tool that a support engineer would use in a paid technical support case. Use it to troubleshoot high CPU rather than slow requests, since based on your screenshot the CPU appears to be the culprit behind the slow requests.
This can tell you what app in the ASP is causing the issue and sometimes even tell you the process in that app that is causing the issue. Beyond this, I'd suggest creating and analyzing a memory dump of the problematic web app. More steps on how to do that here.
Please try to restart the worker instance.
https://learn.microsoft.com/en-us/rest/api/appservice/app-service-plans/reboot-worker#code-try-0
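A hedged sketch of calling that Reboot Worker REST operation from PowerShell with Invoke-AzRestMethod; the subscription, plan, worker instance name, and api-version below are placeholders to substitute (the worker names can be seen in the plan's instances or in Kudu):

```powershell
$sub    = "00000000-0000-0000-0000-000000000000"   # placeholder subscription id
$rg     = "my-resource-group"                      # placeholder
$plan   = "my-app-service-plan"                    # placeholder
$worker = "RD0000FFAABBCC"                         # hypothetical worker instance name

# POST to the App Service Plans "Reboot Worker" ARM operation.
$path = "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.Web/serverfarms/$plan" +
        "/workers/$worker/reboot?api-version=2023-12-01"

Invoke-AzRestMethod -Method POST -Path $path
```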

What would cause high KUDU usage (and eventual 502 errors) on an Azure App Service Plan?

We have a number of API apps and web apps on an Azure App Service P2v2 instance. We've been experiencing a fair amount of platform instability: the App Service becomes unhealthy and we get a rash of 502 errors across various apps (different ones each time), attributable to very high CPU and memory usage on the App Service. We've tried scaling all the way up to P3v2, but whatever the issue is seems eventually to consume all resources available.
Whenever we've been able to trace a culprit among the apps, it has turned out not to be the app itself but the Kudu service related to it.
A sample error message is: "High physical memory usage detected on multiple occasions. The kudu process for the app [sitename]'pe-services-color' is the most common cause of high memory usage. The most common cause of high memory usage for the kudu process is web jobs." The actual app whose Kudu service is named changes quite frequently.
What could be causing the Kudu services to consume so much CPU/Memory, and what can we do to stabilise this app service?
Is it simply that we have too many apps running on one plan? This seems unlikely since all these apps ran previously on a single classic cloud service instance, but if so, what are the limits for apps and slots on a single plan?
(I have seen this question but the answer doesn't help)
Update
From Azure support, these are apparently the limits on Small - Medium - Large non-shared app services:
Worker Size    Max sites
Small          5
Medium         10
Large          20
with 'sites' comprising app services/api apps and their slots.
They seem ridiculously low, and make the larger App Service units highly uneconomic. Can anyone confirm these numbers?
(Incidentally, we found that turning off Always On across the board fixed the issue - it was only causing a problem on empty sites though - we haven't had a chance yet to see if performance is good with all the sites filled.)
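(Not the poster's actual method, but for reference, a minimal sketch of turning Always On off for one site from PowerShell via the ARM config/web resource; the resource path, api-version, and names are assumptions to verify first.)

```powershell
$sub  = "00000000-0000-0000-0000-000000000000"   # placeholder subscription id
$rg   = "my-resource-group"                      # placeholder
$site = "my-api-app"                             # placeholder

$path = "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.Web/sites/$site" +
        "/config/web?api-version=2023-12-01"

# PATCH only the alwaysOn flag; repeat per app (and slot) on the plan.
Invoke-AzRestMethod -Method PATCH -Path $path -Payload (@{
    properties = @{ alwaysOn = $false }
} | ConvertTo-Json)
```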
High CPU and memory utilization is most often caused by your program/code itself. Lots of CPU-intensive tasks, or heavy use of parallel programming that spawns many new threads, can contribute to high CPU and memory utilization, so review your code for such instances. As the number of parallel threads increases, CPU utilization goes up and the plan starts scaling up frequently, which adds to your cost and can sometimes lead to lost threads and unexpected results. As Azure resource costs are high, you need to plan your performance accordingly.
You can monitor this using the Metrics option in the App Service plan blade.
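The same plan-level metrics can be pulled from Azure Monitor with Get-AzMetric (Az.Monitor). A hedged sketch: the metric names "CpuPercentage" and "MemoryPercentage" are the ones I'd expect for Microsoft.Web/serverfarms and should be confirmed in the Metrics blade; resource names are placeholders.

```powershell
# Look up the plan's resource id, then pull 6 hours of CPU/memory averages at 5-minute grain.
$planId = (Get-AzAppServicePlan -ResourceGroupName "my-resource-group" -Name "my-app-service-plan").Id

$metrics = Get-AzMetric -ResourceId $planId -MetricName "CpuPercentage", "MemoryPercentage" `
                        -StartTime (Get-Date).AddHours(-6) -EndTime (Get-Date) `
                        -TimeGrain 00:05:00 -AggregationType Average

# Flatten the result into one table of metric name, timestamp, and average value.
$metrics | ForEach-Object {
    $name = $_.Name.LocalizedValue
    $_.Data | Select-Object @{ n = "Metric"; e = { $name } }, TimeStamp, Average
}
```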

What can cause an Azure Cloud Service's Disk Write/Read to spike unexpectedly?

We have a simple worker that picks up messages from a queue and runs a few queries. We don't ever write to the disk ourselves, but we do have diagnostics turned on in the role settings.
Once in a while the disk write/read spikes and the worker becomes unresponsive. What is the role trying to write to the disk? On the surface it doesn't appear to be a crash dump, because those tables and blobs are still empty. Are our diagnostics configured improperly?
Here's an example of a spike we saw recently. It was writing for over an hour!
Try enabling remote desktop support in the role configuration in the Azure portal.
Once the problem resurfaces, log in via RDP and start Resource Monitor. The Disk tab should be able to pinpoint disk IO usage by process and by file.
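You can get the same per-process view from a plain PowerShell session over RDP with the built-in Get-Counter cmdlet; a small sketch (the "\Process(*)\IO Data Bytes/sec" counter covers file, network, and device I/O per process):

```powershell
# Top 10 processes by I/O throughput right now; the _Total instance will appear
# near the top and can be ignored.
(Get-Counter -Counter "\Process(*)\IO Data Bytes/sec").CounterSamples |
    Sort-Object CookedValue -Descending |
    Select-Object -First 10 InstanceName,
                  @{ n = "IO MB/s"; e = { [math]::Round($_.CookedValue / 1MB, 2) } }
```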
Enabling storage logs should tell you exactly what those reads and writes on the disk are.
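If "storage logs" here means classic Storage Analytics logging on the diagnostics storage account, a hedged sketch of enabling it with the Az.Storage module follows; the account name and key are placeholders, and the cmdlet/parameter names should be verified against your module version.

```powershell
# Record every read/write/delete against the diagnostics storage account for 7 days.
$ctx = New-AzStorageContext -StorageAccountName "mydiagstorage" -StorageAccountKey "<key>"

foreach ($svc in "Blob", "Table", "Queue") {
    Set-AzStorageServiceLoggingProperty -ServiceType $svc -LoggingOperations Read, Write, Delete `
                                        -RetentionDays 7 -Context $ctx
}
```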
So, this is a very open-ended question and the cause is very hard to predict. Your Cloud Services are ultimately Windows machines, and what's happening on Windows can (usually) only be monitored by something inside Windows.
It is very possible that a Windows Update related task was running. Those may cause spikes in disk R/W.
We typically advise users who use CloudMonix and want to know what causes CPU/memory/other issues to install the CloudMonix agent on their machines, as it captures running processes along with their memory and CPU utilization and can show the process that caused a spike. Usually, spikes in disk R/W correlate with spikes in CPU usage.
Note: if the spike was caused by your own code, you'll need to use a profiler such as Redgate's ANTS Performance Profiler, JetBrains dotTrace, or similar to determine the ultimate root cause.
HTH

What is normal Azure WaIISHost.exe Memory Usage?

I have recently installed New Relic server monitoring on our Azure web role. The role is a small instance. We are on OSv4 (Win 2012 R2) using the 2.2 Service Runtime.
Looking at memory usage, I notice that WaIISHost.exe (which I understand to be Azure-related) is reported as consuming 219 MB (down from a peak of 250 MB) via New Relic. Is that a lot of memory for it? Can I reduce it? It just seemed like a lot to be taking up.
Its CPU usage seems to spike aperiodically at about 4%. However, CPU isn't really an issue, as my instance rarely goes above 50%.
First off, why do you care how much memory a process is taking up? All of that memory will be paged out to disk, and assuming it isn't being paged back in regularly, all it does is take up page file space, which is usually irrelevant.
The WaIISHost process runs your role entry point code (OnStart, Run, StatusCheck, Changing, etc) and is typically implemented in WebRole.cs. If you want to reduce the memory size of this process then you can reduce the amount of memory being loaded by your role entry point code.
See http://blogs.msdn.com/b/kwill/archive/2011/05/05/windows-azure-role-architecture.aspx for more information about the WaIISHost.exe process and what it does.

IIS CPU is at 95% usage with very few users - on production

I have a website and I am using IIS as my web server. I noticed that on the production server, the CPU reaches 95% usage pretty fast with very few users. I don't see this behaviour on my development server. I am using Visual Studio to develop, with IIS as my local web server as well.
How much more traffic does production receive compared to the development server? How do their parameters compare? Before starting a deep analysis of the application itself, I would identify all the infrastructure and environmental differences. Sometimes such problems happen because of some other software, like antivirus software running in the background...
Nevertheless, because it sounds more like an application problem, I would first check Event Viewer for errors. Then I would start by monitoring a few performance counters to correlate the % Processor Time counter with Current Connections, Available Memory, # of Exceps Thrown / sec, % Time in GC and so on (see the counter-collection sketch after the list below). This kind of behavior usually has a reason from this list:
excessive looping due to some logic error, like calling the same service again and again, or trying to load or parse a malformed file, etc. This can be analyzed with dump analysis (see below).
high CPU usage due to the garbage collector - when memory usage is extensive (or there is even a memory leak), the GC may start to consume more and more CPU fighting the memory shortage. You will see this in the memory-related performance counters.
a considerable number of exceptions thrown (for example, due to environmental problems like network unavailability or differences in production data) can also consume a lot of CPU. Event Viewer and exception-related performance counters should be an indicator here (exceptions can be handled silently by your application).
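A small sketch of watching the counters mentioned above from PowerShell while the CPU is pegged; the counter paths are the standard Windows/IIS/.NET ones, with the _Global_ instance used where the worker-process instance name isn't known:

```powershell
# Counters to correlate: CPU, available memory, IIS connections, .NET exceptions, GC time.
$counters = "\Processor(_Total)\% Processor Time",
            "\Memory\Available MBytes",
            "\Web Service(_Total)\Current Connections",
            "\.NET CLR Exceptions(_Global_)\# of Exceps Thrown / sec",
            "\.NET CLR Memory(_Global_)\% Time in GC"

# Sample every 5 seconds for one minute and print each reading.
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }
```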
To further analyze your application, I suggest taking a full memory dump during high CPU usage. You can do that with the Debug Diag tool. Please refer to this IIS troubleshooting guide for details.
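As an alternative to Debug Diag, Sysinternals ProcDump can capture the dump automatically when CPU stays high; a hedged example (flags per the ProcDump documentation, output folder is a placeholder):

```powershell
# Write up to two full dumps (-ma) of w3wp once its CPU exceeds 80% (-c) for
# 10 consecutive seconds (-s). If several w3wp instances are running, pass the
# PID of the problematic worker process instead of the process name.
.\procdump.exe -accepteula -ma -c 80 -s 10 -n 2 w3wp.exe C:\dumps
```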
