Best practices for memory limits in an IIS app pool

I wanted to know what people use as a best practice for limiting memory on IIS [5/6/7]. I'm running on 32-bit web servers with 4 GB of physical memory and no /3GB switch. I'm currently limiting my app pools to 1 GB of memory. Is this too low? Any thoughts?

All the limits in the application pool are there for badly behaving apps. More specifically:
To prevent a bad app from disturbing the good apps.
To keep the bad app running as much as possible.
In that light, the answer is of course: it depends.
If your application is leaking, then without a limit it will crash at around 1.2-1.6 GB (if memory serves). So 1 GB is sensible. If during normal operation your application consumes no more than 100 MB and you have many app pools on the server, then you should set the limit lower to prevent one app from damaging the others.
To conclude: 1 GB is sensible. Hitting the limits should be treated as an application crash and should be debugged and fixed.
David Wang's blog is a good resource on these issues.

There's a great writeup from a MS Field Engineer about this subject.
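For reference, on IIS 7 and later a limit like this can be applied from PowerShell with the WebAdministration module; a minimal sketch, assuming a pool named 'MyAppPool' (the value is in kilobytes, so 1048576 KB is 1 GB):
Import-Module WebAdministration
# Recycle the pool once its private bytes exceed 1 GB (privateMemory is expressed in KB)
Set-ItemProperty 'IIS:\AppPools\MyAppPool' -Name recycling.periodicRestart.privateMemory -Value 1048576
By default IIS also logs an event when a pool is recycled for exceeding this limit, which makes it easier to treat the hit as a crash to be debugged rather than as normal behaviour.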

Related

How to allow pm2 to use all of the available system memory

I have multiple micro-services written in Node and running on pm2. Whenever there is high traffic on any of these micro-services, the memory doesn't exceed 800 MB even though the system has more than 10 GB of memory free. Instead, the system becomes slow. I have used only the command below, with no additional settings, to start the services.
pm2 start app.js --name='app_name'
I have gone through the docs for pm2, but they only mention limiting memory usage via max-memory-restart. Is there a way I can make sure my micro-services use all the available system memory?
Whenever there is a high traffic on any of these micro-services, the memory doesn't exceed 800 MB even though the system has more than 10GB of memory free. Instead the system becomes slow.
You need to look at CPU metrics too, not just memory. More likely than not, those services aren't starved for memory (if they were, they would begin to swap out to disk); they are just working your server's CPUs.
Profiling your services wouldn't hurt either, to find any possible bottlenecks or stalls that occur during high load.
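As an aside, pm2 ships a simple live monitor that shows per-process CPU and memory, which is a quick first check before any deeper profiling:
pm2 monit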
Is there a way I can make sure my micro-services use all the available system memory.
Yes, there is: use more memory in those services. There's no intrinsic limit unless you've configured one.
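If an individual service does at some point need a larger V8 heap than the default, pm2 can pass Node flags through to the process; a sketch (the service name is the one from the question, and the 4096 MB value is purely illustrative):
pm2 start app.js --name='app_name' --node-args='--max-old-space-size=4096'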

100% Memory usage on Azure App Service Plan with two Apps - working set used 10gb+

I've got an app service plan with 14gb of memory - it should be plenty for my application's needs. There are two application services running on it, each identical - the private memory consumption of these hovers around 1gb but can spike to 4gb during periods of high usage. One app has a heavier usage pattern than the other.
Lately, during periods of high usage, I've noticed that the heavily used service can become unresponsive, and memory usage stays at 100% in the App Service Plan.
The high traffic service is using 4gb of private memory and starting to massively slow down. When I head over to the /scm.../ProcessExplorer/ page, I can see that the low traffic service has 1gb private memory used and 10gb of 'Working Set'.
As I understand it, on a single machine at least, the working set should be freed up when that memory is needed on another process. Does this happen naturally when two App Services share a single Plan?
It looks to me like the working set on the low-traffic instance is not being freed up to supply the needs of the high-traffic App Service.
If this is indeed the case, the simple fix is to move them to separate App Service Plans, each with 7gb of memory. However this seems like it might potentially be just shifting the problem around - has anyone else noticed similar issues with multiple Apps on a single App Service Plan? As far as I understand it, these shouldn't interfere with one another to the extent that they all need to be separated. Or have I got the wrong diagnosis?
In some high memory-consumption scenarios, your app might truly require more computing resources. In that case, consider scaling to a higher service tier so the application gets all the resources it needs. Other times, a bug in the code might cause a memory leak, or a particular coding practice might increase memory consumption. Getting insight into what's triggering high memory consumption is a two-part process: first, create a process dump, and then analyze it. Crash Diagnoser from the Azure Site Extension Gallery can perform both of these steps efficiently. For more information, refer to Capture and analyze a dump file for intermittent high memory for Web Apps.
In the end we solved this one via mitigation, rather than getting to the root cause.
We found a mitigation strategy for our previous memory issues several months ago, which was simply to restart the server each night using a PowerShell script. This seems to prevent the memory from building up over time, and only costs us a few seconds of downtime. Our system doesn't have much overnight traffic, as our users are all based in the same geographic location.
However, we recently found that the overnight restart was reporting 'success' but actually failing each night due to expired credentials, which meant that the memory issues described in the question I posted were actually exacerbated by server uptimes of several weeks. Restoring the overnight restart resolved the memory issues we were seeing, and we certainly don't see our system ever using 10 GB+ again.
We'll investigate the memory issues properly if they rear their heads again. KetanChawda-MSFT's suggestion of using memory dumps to analyse the memory usage will be employed for that investigation when it's needed.
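A minimal sketch of what such a nightly restart can look like with the Az PowerShell module (the resource group and app names below are placeholders); authenticating with a managed identity rather than stored credentials avoids the expired-credential failure described above:
# Sign in using the automation account's managed identity, so there are no stored credentials to expire
Connect-AzAccount -Identity
# Restart the heavily used web app; repeat for the second app if required
Restart-AzWebApp -ResourceGroupName 'my-resource-group' -Name 'my-heavy-traffic-app'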

What is normal Azure WaIISHost.exe Memory Usage?

I have recently installed NewRelic server monitoring to our Azure web role. The role is a small instance. We are on OSv4 (Win 2012 R2) using 2.2 Service Runtime.
Looking at memory usage, I notice that WaIISHost.exe (which I understand to be Azure-related) is reported as consuming 219 MB (down from a peak of 250 MB) via NewRelic. Is that a lot of memory for it? Can I reduce it? It just seemed like a lot to be taking up.
CPU usage for it seems to spike aperiodically at about 4%. However, CPU isn't really an issue, as my instance rarely goes above 50%.
First off, why do you care how much memory a process is taking up? All of that memory will be paged out to disk and, assuming it isn't being paged back in regularly, all it does is take up page file space, which is usually irrelevant.
The WaIISHost process runs your role entry point code (OnStart, Run, StatusCheck, Changing, etc) and is typically implemented in WebRole.cs. If you want to reduce the memory size of this process then you can reduce the amount of memory being loaded by your role entry point code.
See http://blogs.msdn.com/b/kwill/archive/2011/05/05/windows-azure-role-architecture.aspx for more information about the WaIISHost.exe process and what it does.

Application pool private memory limit is 0

I'm new to IIS. I have several questions about recycling application pool:
The Private memory limit and Virtual memory limit are both 0 by default. I read the official documentation for IIS 7.0 (we are using IIS 8.0 and Windows Server 2012, but I think they should be the same in this respect). So is there really no limit on memory usage? Does the pool just wait 1740 minutes (the default) before recycling? Will it refuse to recycle until those 1740 minutes are up, even if total memory usage is very high? I searched very hard for the answers but couldn't find any...
I read an article which said an application pool should never be recycled. So what is the mechanism of memory management in IIS? When is memory that is no longer being used released? Is it similar to Java? I don't think anyone says that a full GC in Java is bad practice...
Thanks.
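For anyone wanting to check these values on their own pool, a quick sketch using the WebAdministration module (the pool name is a placeholder; memory and privateMemory are in kilobytes, and 0 means the limit is disabled):
Import-Module WebAdministration
# Show the periodic-restart settings: time (default 1740 minutes), requests, memory and privateMemory
Get-ItemProperty 'IIS:\AppPools\DefaultAppPool' -Name recycling.periodicRestart | Format-List *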

Running multiple virtual directories on IIS - any performance issues?

I need to run 8-10 instances of my application on IIS 6.0 that are all identical but point to different backends (handled via config files, which would be different for each virtual directory). I want to create multiple virtual directories that point to different versions of the app and I want to know if there is any significant performance penalty for this. The server (Windows Server 2003) is a quad-core with 4 GB of ram and the single install of the app barely touches the CPU or memory, so it doesn't seem to be a concern. This doesn't seem to justify another server, especially since some of the instances will be very lightly used. Obviously, performance depends on the server and the application, but are there any concerns with this situation?
IIS on Windows Server 2003 is built to handle lots of sites, so the number of sites itself is not a concern. The resource needs of your application are much more of a factor, i.e., how much I/O, CPU, thread, and database resource does it consume?
We have a quad-core Windows Server 2003 server here handling several hundred sites no problem. But one resource-intensive app can eat a whole server no problem.
If you find your application is cpu bound, you can put each instance in its own application pool and then limit the amount of cpu each pool can use, so that no one instance can bottleneck any of the others.
I suggest you add a few at a time and see how it goes.
No concerns. If you run into any performance issues, it won't be with IIS for 10 apps that size.
You should consider using multiple application pools. If you do that, and the CPU, memory, I/O, and network resources of the server are in order, then there should be no performance issue.
It is possible to run them all in the same application pool, but then you also add thread pool usage to the list of concerns, because all the applications will share one thread pool, and on a 32-bit server there is a limit (around 1.5 GB) on the w3wp process.
We constantly run 15-20 per server on a 10-server load-balanced farm and don't come across any issues.
The short answer is no, there should be no concerns.
In effect, you are asking if IIS can host 8-10 websites... of course it can. You might want to configure them as individual websites rather than virtual directories, perhaps with individual application pools, so that each instance is entirely independent.
You mention that these aren't very demanding applications; assuming they aren't all linking into the same Access database, I can't see any problems.
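To illustrate the per-instance application pool approach suggested in several answers, here is a rough sketch using the IIS 7+ WebAdministration module (site, pool, and path names are placeholders; IIS 6.0 itself would need adsutil.vbs or IIS Manager instead):
Import-Module WebAdministration
# One application pool per instance, so a busy instance cannot starve the others
New-WebAppPool -Name 'InstanceA-Pool'
# Point a virtual application at the instance's own folder and pool
New-WebApplication -Site 'Default Web Site' -Name 'instanceA' -PhysicalPath 'C:\inetpub\instanceA' -ApplicationPool 'InstanceA-Pool'
# Optionally cap CPU for the pool; cpu.limit is in 1/1000ths of a percent, so 20000 = 20%
Set-ItemProperty 'IIS:\AppPools\InstanceA-Pool' -Name cpu.limit -Value 20000
# Throttle requires IIS 8 or later; earlier versions only support KillW3wp
Set-ItemProperty 'IIS:\AppPools\InstanceA-Pool' -Name cpu.action -Value Throttle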
