IIS Application Initialization module and memory management

I am researching the IIS Application Initialization module, and from what I can see, when the application pool's Start Mode is set to AlwaysRunning, IIS starts a worker process that keeps running even when there are no requests. The process is started automatically as soon as the setting is applied.
My concern is memory management and CPU usage, specifically how these are handled given that the process always runs.
How does this compare to setting the Start Mode to OnDemand and increasing the Idle Time-out to a couple of days? That way, I assume, the process would stay alive through x days of inactivity before being terminated, then be reinitialized on the next request and keep running for another couple of days. If I set the time-out to, say, 1.5 days, someone is bound to use the application at least once a day, so the process would keep running and effectively never be terminated.
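For concreteness, the two configurations being compared can be expressed with the Microsoft.Web.Administration API; a minimal sketch (the pool name and time-out value are placeholders):

```csharp
using System;
using Microsoft.Web.Administration;

class StartModeSketch
{
    static void Main()
    {
        using (var server = new ServerManager())
        {
            ApplicationPool pool = server.ApplicationPools["MyAppPool"]; // placeholder name

            // Option A: a worker process is started immediately and kept alive.
            pool.StartMode = StartMode.AlwaysRunning;

            // Option B: start on first request, idle out only after ~1.5 days.
            // pool.StartMode = StartMode.OnDemand;
            // pool.ProcessModel.IdleTimeout = TimeSpan.FromMinutes(2160);

            server.CommitChanges();
        }
    }
}
```

Note that in both cases an idle worker process retains whatever memory it has allocated while consuming essentially no CPU; the difference is only whether IIS ever tears it down.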
Can anyone share experience regarding this topic?
Thanks

I have a multisite application that runs a few sites under separate app pools. All are set to OnDemand Start Mode with an Idle Time-out of 1740 minutes, and I use Page Output Cache in the application, with different durations for different page types. NHibernate sits behind the scenes and the database is MySQL.
The most active site has more than 100k visits per day and is almost never idle. After a recycle it needs 30 seconds to 2 minutes to become fully operable, depending on the requests arriving at that moment, and CPU usage climbs to 40-70%. Once the site is up, CPU usage is very low (0-4%) as long as there are no new entries in the DB, and memory usage is around 3 GB when everything is cached. CPU sometimes rises to 20% when requests arrive for uncached content while a new entry is being saved.
Page Output Cache also works on a first-come, first-served basis, so it can cause a small problem while the caching is being done: the first user must wait, and a little more CPU is spent building the cache entry.
The biggest problem in my case is the combination of NHibernate and MySQL, but Page Output Cache resolved it for me once I decided to cache the page modules and content. I realized it is better for an application to be hungry for memory than for CPU.
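As an illustration of the kind of output caching described above, a minimal ASP.NET MVC sketch (the controller, duration, and VaryByParam values are assumptions, not the actual code):

```csharp
using System.Web.Mvc;
using System.Web.UI;

public class ArticleController : Controller
{
    // Cache the rendered page on the server for one hour per article id.
    // The first visitor pays the rendering cost (the "first come, first
    // served" wait mentioned above); later requests are served from cache.
    [OutputCache(Duration = 3600, VaryByParam = "id", Location = OutputCacheLocation.Server)]
    public ActionResult Show(int id)
    {
        var article = LoadArticle(id); // placeholder for the NHibernate/MySQL query
        return View(article);
    }

    private object LoadArticle(int id)
    {
        return new { Id = id }; // stub
    }
}
```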
With 3.5k visitors at one moment and everything cached, memory usage stayed the same (3 GB) and overall server CPU was around 40%.
The other sites use around 1-1.5 GB of memory, and their CPU never exceeds 20% at startup.
Another application with the same app pool settings, but using MSSQL with EF, I can hardly even notice running on the server. It is used by 10-60 users per minute, there is not much content apart from embed codes, and it uses 1-5% CPU and never more than 8 MB of memory. After a recycle it is up in less than 10 seconds.
From this experience I can tell you that it all depends on what the application serves, how it works, and how much content you have :)
If you use OnDemand with a long Idle Time-out, the effect is the same as AlwaysRunning while the process sits unused. If you use OnDemand with a short Idle Time-out, you will more often need CPU to restart the process.

Related

IIS - Worker threads not increasing beyond certain number even though the CPU usage is less than 40 percent

We are running a web API hosted in IIS 10 on an 8-core machine with 16 GB of memory running Windows 10, and throwing a load of 100 to 200 requests per second at the server through JMeter.
Individual transactions take less than 500 milliseconds. When we first apply the load, IIS threads grow to around the 150-160 mark (monitored through Resource Monitor and Performance Monitor) and throughput rises to 22-24 transactions per second, but both throughput and thread count stop growing beyond that point, even though CPU usage is under 40 per cent and plenty of physical memory is still available at the peak. Resource Monitor shows no choking at the network or IO level.
The web API is making calls to the Oracle database (3-4 select calls and 2-3 inserts/updates).
We fail to understand what is stopping IIS from growing its thread pool further to process more requests in parallel, while all the resources, including processing power, memory, and network, are available.
We have set up many performance counters as well; there is no queue build-up (probably because JMeter works in synchronous mode).
We have also tried setting the min and max thread counts through machine.config as well as the ThreadPool.SetMinThreads and SetMaxThreads APIs, but no difference was observed; it seems those settings are not taking effect.
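For reference, the ThreadPool API mentioned above is used like this; a minimal sketch (the worker-thread value is a placeholder, not a recommendation; also note that with processModel autoConfig="true" in machine.config, ASP.NET manages these limits itself, which may be why manual settings appear to have no effect):

```csharp
using System.Threading;

public static class ThreadPoolTuning
{
    // Call once at startup, e.g. from Application_Start in Global.asax.
    // Below the "min" mark the CLR creates pool threads immediately;
    // above it, new threads are injected slowly (roughly one every
    // 500 ms), which caps how fast the pool grows under sudden load.
    public static void Apply()
    {
        int workerMin, ioMin;
        ThreadPool.GetMinThreads(out workerMin, out ioMin);
        ThreadPool.SetMinThreads(200, ioMin); // placeholder value
    }
}
```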
It is important to mention that we are using synchronous calls/operations (no async and await). Someone has advised converting all our blocking IO calls, e.g. database calls, to asynchronous mode to achieve more throughput, but my understanding is that if threads can't grow beyond this level, making the calls async might not help, or may even hurt throughput. Since our code base is huge, that would be a very costly exercise in time and effort, and we don't want to invest in it until we are sure it would really help. If someone has anything to share on these two problems, please do.
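For context, the conversion being advised looks roughly like this; a hedged sketch using the generic ADO.NET async API (the controller and query are hypothetical, and whether the await actually frees the thread depends on the Oracle provider implementing true async IO rather than wrapping the blocking call):

```csharp
using System.Data.Common;
using System.Threading.Tasks;
using System.Web.Http;

public class OrdersController : ApiController
{
    private readonly DbProviderFactory _factory; // e.g. the Oracle provider's factory
    private readonly string _connectionString;   // assumed to come from config

    public OrdersController(DbProviderFactory factory, string connectionString)
    {
        _factory = factory;
        _connectionString = connectionString;
    }

    // Async version: the request thread returns to the pool while the
    // database IO is in flight, instead of blocking for its duration.
    public async Task<IHttpActionResult> Get(int id)
    {
        using (DbConnection conn = _factory.CreateConnection())
        {
            conn.ConnectionString = _connectionString;
            await conn.OpenAsync();

            using (DbCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = "SELECT name FROM orders WHERE id = :id"; // hypothetical query
                DbParameter p = cmd.CreateParameter();
                p.ParameterName = "id";
                p.Value = id;
                cmd.Parameters.Add(p);

                object name = await cmd.ExecuteScalarAsync();
                return Ok(name);
            }
        }
    }
}
```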
Below is a screenshot of the Performance Monitor.

What is consuming memory in my Node JS application?

Background
I have a relatively simple Node.js application (essentially just Express + Mongoose). It is currently running in production on an Ubuntu server and serves about 20,000 page views per day.
Initially the application ran on a machine with 512 MB of memory. After noticing that the server would essentially crash every so often, I suspected the application might be running out of memory, which turned out to be the case.
I have since moved the application to a server with 1 GB of memory. I have been monitoring it, and within a few minutes the application tends to reach about 200-250 MB of memory usage. Over longer periods (say 10+ hours), the amount seems to keep growing very slowly (I'm still investigating that).
I have since been trying to figure out what is consuming the memory. I have gone through my code and have not found any obvious memory leaks (for example, unclosed DB connections and such).
Tests
I have implemented a handy heapdump function using node-heapdump, and I have enabled --expose-gc so I can trigger garbage collection manually. From time to time I trigger a manual GC to see what happens with the memory usage, but it seems to have no effect whatsoever.
I have also tried analysing heapdumps from time to time, but I'm not sure whether what I'm seeing is normal. I do find it slightly suspicious that there is one entry with 93% of the retained size, but it just points to "builtins" (I'm not really sure what that signifies).
Upon inspecting the 2nd highest retained size (Buffer), I can see that it links back to the same "builtins" via a setTimeout function in some native code. I suspect it is cache- or HTTPS-related (_cache, slabBuffer, tls).
Questions
Does this look normal for a Node JS application?
Is anyone able to draw any sort of conclusion from this?
What exactly is "builtins" (does it refer to built-in JS types)?

How does one configure "Autoscale" to deal with Web instances which have long wait times due to external processes?

I am using MVC 3, ASP.NET 4.5, C#, Razor, EF 6.1, and SQL Azure.
I have been doing some load testing using JMeter, and I have found some surprising results.
I have a test of 30 concurrent users, ramping up over 10 secs. The test plan is fairly simple:
Login
Navigate to page
Do query
Navigate back
Logout
I am using "small" "standard" instances.
I have noticed that web instances may be waiting on external processes, such as database queries, so the web CPU can be low while the web tier is still the bottleneck: the CPU could be idling at 40% while waiting for a result set from the DB. This may be why the extra instance is never triggered, and it is a real issue. How do you trigger an extra instance based on longer wait times? At the moment the only ways around this are to keep 2 instances up there permanently, or to proactively scale against a schedule.
Use async calls and you won't have to worry about scaling up. Requests that are waiting on the database won't hold a thread, freeing up resources to handle other users.
If you still see lengthened response times after that, it's probably the external process that's choking and in need of being scaled up.
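A minimal sketch of that pattern in MVC (the context and entity are hypothetical; note that async Task<ActionResult> actions require MVC 4 or later, so the MVC 3 app would need an upgrade first):

```csharp
using System.Data.Entity;   // EF6 async extensions (ToListAsync)
using System.Linq;
using System.Threading.Tasks;
using System.Web.Mvc;

public class Order // hypothetical entity
{
    public int Id { get; set; }
    public string Customer { get; set; }
}

public class AppDbContext : DbContext // hypothetical EF6 context
{
    public DbSet<Order> Orders { get; set; }
}

public class ReportController : Controller
{
    // While the query is awaited, the request thread goes back to the
    // pool instead of blocking, so DB wait time no longer ties up
    // web-instance threads.
    public async Task<ActionResult> Query(string term)
    {
        using (var db = new AppDbContext())
        {
            var rows = await db.Orders
                               .Where(o => o.Customer == term)
                               .ToListAsync();
            return View(rows);
        }
    }
}
```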

What is normal Azure WaIISHost.exe Memory Usage?

I have recently installed NewRelic server monitoring to our Azure web role. The role is a small instance. We are on OSv4 (Win 2012 R2) using 2.2 Service Runtime.
Looking at memory usage, I notice that WaIISHost.exe (which I understand to be Azure-related) is reported as consuming 219 MB (down from a peak of 250 MB) via NewRelic. Is that a lot of memory for it? Can I reduce it? It just seemed like a lot to be taking up.
CPU usage seems to spike aperiodically at about 4% for it. However, CPU isn't really an issue, as my instance rarely goes above 50%.
First off, why do you care how much memory a process is taking up? All of that memory will be paged out to disk, and assuming it isn't being paged back in regularly, all it does is take up page file space, which is usually irrelevant.
The WaIISHost process runs your role entry point code (OnStart, Run, StatusCheck, Changing, etc) and is typically implemented in WebRole.cs. If you want to reduce the memory size of this process then you can reduce the amount of memory being loaded by your role entry point code.
See http://blogs.msdn.com/b/kwill/archive/2011/05/05/windows-azure-role-architecture.aspx for more information about the WaIISHost.exe process and what it does.
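For orientation, the role entry point that WaIISHost hosts typically looks like the minimal sketch below; anything your own WebRole.cs allocates on top of this counts toward the process's footprint:

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

// WaIISHost.exe loads this class and runs it in its own process,
// separate from the w3wp.exe worker that serves the web traffic.
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Caches, diagnostics buffers, etc. allocated here live in
        // WaIISHost's memory, not in the IIS worker process.
        return base.OnStart();
    }
}
```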

Why Even Recycle an Application Pool?

Maybe someone can shed some light on this simple question:
I have a .NET web application that has been thoroughly vetted. It loads a cache per app domain (process) whenever one starts and cannot fully reply to requests until that cache loading completes.
I have been examining the settings on my application pools and have started wondering why I was even recycling so often (once every 1,000,000 calls or every 2 hours).
What would prevent me from setting auto-recycle to once every 24 hours, or even longer? Why not remove the option entirely and just recycle if memory spins out of control for the app domain?
If your application runs reliably for longer than the threshold set for app pool recycling, then by all means increase the threshold. There is no downside if your app is stable.
For us, we have recycling turned off altogether, and instead have a task that loads a test page every minute and runs an iisreset if it fails to load five times in a row.
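A sketch of that kind of watchdog (the URL, thresholds, and polling interval here are placeholders, not the actual task):

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class SiteWatchdog
{
    static async Task Main()
    {
        var client = new HttpClient { Timeout = TimeSpan.FromSeconds(30) };
        int consecutiveFailures = 0;

        while (true)
        {
            bool ok = false;
            try
            {
                var response = await client.GetAsync("http://localhost/healthcheck"); // placeholder test page
                ok = response.IsSuccessStatusCode;
            }
            catch (HttpRequestException) { }  // connection failure counts as a miss
            catch (TaskCanceledException) { } // timeout counts as a miss

            consecutiveFailures = ok ? 0 : consecutiveFailures + 1;

            if (consecutiveFailures >= 5)
            {
                // Last-resort restart, as described above.
                Process.Start("iisreset").WaitForExit();
                consecutiveFailures = 0;
            }

            await Task.Delay(TimeSpan.FromMinutes(1));
        }
    }
}
```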
You should probably look at recycling from the point of view of reliability. Based on historical data, you should have an idea of how much memory, CPU, and so on your app uses, what the historical patterns are, and when trouble starts to occur. Knowing that, you can configure recycling to counter those issues. For example, if you know your app has an increasing memory usage pattern* that leads to it running out of memory after several days, you could configure it to recycle before that would have happened.
* Obviously, you would also want to resolve this bug if possible, but recycling can be used to increase reliability for the customer.
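As a sketch of configuring such a memory-based recycle threshold programmatically (the pool name and limit are placeholders; the same settings are exposed in IIS Manager under the pool's Recycling options):

```csharp
using System;
using Microsoft.Web.Administration;

class RecyclingConfig
{
    static void Main()
    {
        using (var server = new ServerManager())
        {
            ApplicationPool pool = server.ApplicationPools["MyAppPool"]; // placeholder name

            // Recycle when private memory exceeds ~1.5 GB (the value is in KB),
            // rather than on a fixed request count or short time interval.
            pool.Recycling.PeriodicRestart.PrivateMemory = 1500 * 1024;

            // Optionally disable the default time-based restart entirely.
            pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero;

            server.CommitChanges();
        }
    }
}
```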
The reason they do it is that an application can be "not working" even though its CPU and memory look fine (think deadlock). App pool recycling is a final failsafe measure that can keep flawed code from taking the site down.
Also, any code that has failed to dispose its IDisposable resources will have finalizers run on the recycle, which may release held resources.
