I am currently investigating the feasibility of an architecture where we would have potentially thousands of AppPools, and therefore worker processes, one for each of our micro-services, running in IIS (10+). (It is one of a few options.)
I understand the overhead of each worker process. My current estimate is that each worker will consume about 20-30 MB. Server resourcing should not be too much of an issue, as we will likely provision servers with 32-64 GB of RAM. On top of this, not all workers would be active at all times, so we should gain headroom when AppPools are idle. (At ~25 MB per worker, 1,000 simultaneously active workers would need roughly 25 GB, so that idle headroom matters.)
My question: Can IIS handle this many AppPools/Worker processes?
I don't see a reason it shouldn't, given sufficient resources; however, I have not been able to find any documentation on it after some brief searching.
So I'll add some answers to my own question here as I did a little bit of testing.
Server
Intel Xeon X5550
32 GB RAM
Windows Server 2012 R2
Application
Created a barebones, WebAPI-only ASP.NET application with a single controller and action.
When installed in IIS this is the observed memory footprint.
Memory (idle) ≈ 5,172 KB
Memory (running) ≈ 26,000 KB
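For reference, a minimal way to observe that footprint (my own sketch, separate from the test scripts): list each w3wp worker with its working set.

    # List every IIS worker process and its current working set in KB.
    Get-Process w3wp -ErrorAction SilentlyContinue |
        Select-Object Id, @{ Name = 'WorkingSetKB'; Expression = { [int]($_.WorkingSet64 / 1KB) } }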
Prep
I created some PowerShell scripts (sorry, I can't share them as they leverage our closed-source deployment scripts, but a minimal sketch of the idea follows this list) to:
Create - Unique folder for each application to prevent possible resource sharing
Launch - Makes a web request
Cleanup - Deletes all applications, pools and folders
Recycle - Unloads the application, sets it back to Idle state
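A minimal sketch of what the Create and Launch steps do, assuming the WebAdministration module and a pre-built copy of the barebones app (all paths and names here are placeholders, not from our deployment scripts):

    Import-Module WebAdministration

    $root  = 'C:\inetpub\stress'       # hypothetical root for the unique folders
    $build = 'C:\builds\BareboneApi'   # hypothetical pre-built WebAPI application

    1..1000 | ForEach-Object {
        $name = "StressApp$_"
        $path = Join-Path $root $name

        # Create - unique folder per application to prevent resource sharing
        New-Item -ItemType Directory -Path $path -Force | Out-Null
        Copy-Item "$build\*" -Destination $path -Recurse

        # One AppPool per application, then the application itself
        New-WebAppPool -Name $name | Out-Null
        New-WebApplication -Site 'Default Web Site' -Name $name -PhysicalPath $path -ApplicationPool $name | Out-Null
    }

    # Launch - a single web request spins up one worker process
    Invoke-WebRequest 'http://localhost/StressApp1/api/values' -UseBasicParsing | Out-Null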
Test
Below are my results observed from PerfMon
As you will note, I could not get all 1,000 running at once. I ran into a few things:
Firing a call to all 1,000 applications so that they are all running simultaneously is not as easy as it sounds (a batched-launch sketch follows this list).
The ASP.NET Temporary Internet Files folder is on C:\, which ran out of space.
Things began running slowly once memory started being paged.
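For the first problem, the approach I would sketch is batching: PowerShell background jobs are too heavy to start 1,000 at once, so fire the launch requests in groups (URLs assume the hypothetical naming from the sketch above):

    # Fire launch requests in batches of 50 concurrent background jobs.
    1..1000 | ForEach-Object -Begin { $batch = @() } -Process {
        $batch += Start-Job -ArgumentList $_ -ScriptBlock {
            param($i)
            Invoke-WebRequest "http://localhost/StressApp$i/api/values" -UseBasicParsing | Out-Null
        }
        if ($batch.Count -ge 50) {
            $batch | Wait-Job | Remove-Job   # drain this batch before starting the next
            $batch = @()
        }
    } -End {
        if ($batch) { $batch | Wait-Job | Remove-Job }
    }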
Conclusion
It seems that IIS itself really has no limit on the number of AppPools or worker processes; the core constraint is the machine's resources.
What is interesting is that it is unlikely all applications would be running simultaneously, so one can take advantage of the fact that IIS only keeps memory committed for worker processes that are actually running, reclaiming it as idle pools time out.
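For example, a hedged sketch (the pool name is a placeholder): shortening the idle timeout from its 20-minute default makes IIS reclaim an idle pool's memory sooner.

    Import-Module WebAdministration

    # Terminate the worker after 5 idle minutes so its memory is released.
    Set-ItemProperty 'IIS:\AppPools\StressApp1' -Name processModel.idleTimeout -Value ([TimeSpan]::FromMinutes(5))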
Related
I have written an application (Qt/C++) that creates a lot of concurrent worker threads to accomplish its task, utilizing QThreadPool (from the Qt framework). It has worked flawlessly running on a dedicated server/hardware.
A copy of this application is now running in a virtual machine (RHEL 7), and performance has suffered significantly: the thread pool's queue is being exercised heavily, so work is backing up. This is despite the VM version having more cores available to the application than the dedicated, non-virtualized server.
Today, I did some troubleshooting with the top -H -p <pid> command, and found that there were 16 total llvmpipe-# threads running all at once, apparently for software rendering of my application's very simple graphical display. It looks to me like the presence of so many of these rendering threads has left limited resources available for my actual application's threads to be concurrently running. Meaning, my worker threads are yielding/taking a back seat to these.
As this is a small, simple GUI running on a server, I don't care to dedicate so many threads to software rendering of its display. I read some Mesa3D documentation about the LP_NUM_THREADS environment variable, which limits the number of threads llvmpipe uses. I set LP_NUM_THREADS=4, and as a result I seem to have effectively freed up 12 cores for my application's worker threads.
Does this sound reasonable, or will I pay some sort of other consequence for doing this?
So I'm trying to migrate a legacy website from an AWS VM to an Azure VM, and we're trying to get the same level of performance. The problem is I'm pretty new to setting up sites on IIS.
The authors of the application are long gone, and we struggle with the application for many reasons. One of its problems is that while "warming up" it pulls back a ton of data to store in memory for the entire day. This involves executing long-running stored procedures and in-memory processing, which means the first load of certain pages takes up to 7 minutes. It then uses a combination of in-memory data and output caching to deliver the pages.
Sessions do seem to be in use. The site can recover session data from the database, but only via some relatively long-running database operations, so sticking with in-process sessions where possible is preferable; that is why I'm avoiding a web garden.
That's a little bit of background; my question is really about upping the performance on IIS. When I went through the settings on the AWS box, they had something called NUMA enabled with what appears to be the default settings, and the maximum worker processes set to 0, which seems to enable NUMA. I don't know why they enabled NUMA or whether it was necessary, but I am trying to get as close to a like-for-like transition as possible, and if it gives extra performance in this application we'll probably need it!
On the Azure box I can see the option to set maximum worker processes to 0, but no NUMA options. My question is whether NUMA is enabled with those default options, or whether there is something further I need to do to enable it.
Both are production-sized VMs, but the one on Azure I'm working with is a Standard D16s_v3 with 16 vCores and 64 GB RAM. We are load balancing across a few of them.
If you don't see the option on the Azure VM, it's because the server is using symmetric multiprocessing and isn't NUMA-aware.
Now, to optimize your loading a bit (a PowerShell sketch of these settings follows the steps):
HUGE CAVEAT: if you have memory-leak-type issues, don't do this! To make sure you don't, set a private bytes limit of roughly 70% of the server's memory. If you see that limit get hit and IIS issue a recycle (that event is logged by default), then you may want to skip the remaining steps. Either that, or dig in with PerfMon (or, more easily, iteratively check peak private bytes in Task Manager, where you'll have to add that column in the Details pane).
Change your app pool startup mode to: AlwaysRunning
Change your web app to preloadEnabled="true"
Set an initialization page in your web.config (so that preloading knows what to load).
*Edit: forgot some steps. Make sure your idle timeout is cleared, i.e. set to 00:00:00 (midnight), which disables it.
Make sure you don't have the default recycle interval (1740 minutes) enabled; clear that out too.
If you want to get fancy, you can add a loading page and set an HTTP refresh, or do further customizations as described here:
https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization
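For what it's worth, here is a hedged PowerShell sketch of the steps above; the pool and site names and the /warmup page are placeholders, and the private-bytes figure assumes the 64 GB box from the question:

    Import-Module WebAdministration
    $pool = 'IIS:\AppPools\LegacyPool'   # placeholder

    # Caveat guard: recycle if private bytes exceed ~70% of 64 GB (value is in KB).
    Set-ItemProperty $pool -Name recycling.periodicRestart.privateMemory -Value 47185920

    Set-ItemProperty $pool -Name startMode -Value 'AlwaysRunning'                         # start with IIS
    Set-ItemProperty $pool -Name processModel.idleTimeout -Value ([TimeSpan]::Zero)       # no idle timeout
    Set-ItemProperty $pool -Name recycling.periodicRestart.time -Value ([TimeSpan]::Zero) # no 1740-minute recycle

    # Preload the site so initialization runs at pool start rather than on first request.
    Set-ItemProperty 'IIS:\Sites\LegacySite' -Name applicationDefaults.preloadEnabled -Value $true

    # Register the initialization page (equivalent to <applicationInitialization> in
    # web.config; requires the Application Initialization module).
    Add-WebConfiguration 'system.webServer/applicationInitialization' -PSPath 'IIS:\Sites\LegacySite' -Value @{ initializationPage = '/warmup' }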
Goal
Determine the cause of the sporadic lock ups of our web application running on IIS.
Problem
An application we are running on IIS sporadically locks up throughout the day. When it locks up, it locks up across all workers and all load-balanced instances.
Environment and Application
The application runs on 4 Windows Server 2016 machines, load balanced by HAProxy using a round-robin scheme. The IIS application pools hosting this website are configured with 4 worker processes each, and the application is 32-bit. The IIS instances do not use a shared configuration file, but the application pools for this application are all configured the same.
This application is the only application in the IIS application pool. The application is an ASP.NET web API and is using .NET 4.6.1. The application is not creating threads of its own.
Theory
My theory is that we have incoming requests that take ~5-30 minutes to complete, and every machine gets tied up servicing them, so they all look "locked up". The company rolled its own logging mechanism, and from that I can tell we do have requests taking ~5-30 minutes to complete. The team responsible for the application has cleaned up many of these, but I am still seeing ~5-minute requests in the log.
I do not have access to the machines personally, so our systems team has captured memory dumps of the application when this happens. In the dumps I generally see ~50 threads running, all in our code; they are spread all over the application and do not seem to be stopped on any common piece of code. When the application is running correctly, the dumps show 3-4 running threads. I have also looked at performance counters such as ASP.NET\Requests Queued, but it never seems to show any queued requests. During these times the CPU, memory, disk, and network usage look normal. In WinDbg, none of the threads seem to have high CPU time other than the finalizer thread, which as far as I know should live for the entire life of the process.
Conclusion
I am looking for a means to prove or disprove my theory as to why we are locking up as well as any metrics or tools I should look at.
So this issue came down to our application using a query that stitched a table with 2,000,000 records in it to another table. Memory would become so fragmented that the garbage collector was spending more time trying to find places to put objects, and moving them around, than it was running our code. This is why the application appeared to still be working and why there were no exceptions. Oddly, IIS would time out the requests but would continue processing the threads.
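For anyone hitting something similar, a counter that would have pointed at this directly (my suggestion, not part of the original investigation) is the CLR's % Time in GC; sustained high values across samples indicate the GC thrashing described above:

    # Sample '% Time in GC' for all w3wp instances every 5 seconds for a minute.
    Get-Counter '\.NET CLR Memory(w3wp*)\% Time in GC' -SampleInterval 5 -MaxSamples 12 |
        ForEach-Object { $_.CounterSamples | Select-Object InstanceName, CookedValue }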
We use Kentico CMS and I've exchanged emails with them about a web garden deployment.
We have a single site running on a server with 8 CPU cores. In line with Kentico's advice, we have not altered the application pool web garden setting from the default, i.e. it is set to a maximum of 1 worker process.
Our experience is that the site only uses one of the cpu cores - the others are idling. When I emailed them about this, their response was that the OS/IIS would handle this and use other cores as necessary even though the application pool only has a single worker process.
Now, I've a lot of respect for the guys at Kentico, but this doesn't seem right to me?
Surely, if we want to use all cores, we need to permit eight worker processes (and implement session state storage in SQL server)?
Many thanks
Tony
I would suggest running PerfMon for 24 hours and seeing if you can determine what resources are being used; indeed, the site might already be running on all cores. Also, if the web app is a heavily threaded system, it will take full advantage of multiple cores (at least ours does). Threads, not worker processes, are what actually count for processor utilization.
Not sure if you got an answer on ServerFault; at any rate, ASP.NET is multi-threaded, and within a single worker process there are many threads, each serving a single request.
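A quick way to see this for yourself (a sketch, assuming at least one pool has an active worker):

    # One worker process, many threads; the OS schedules those threads across all cores.
    Get-Process w3wp | Select-Object Id, @{ Name = 'ThreadCount'; Expression = { $_.Threads.Count } }, CPU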
I need to run 8-10 instances of my application on IIS 6.0 that are all identical but point to different backends (handled via config files, which would be different for each virtual directory). I want to create multiple virtual directories that point to different versions of the app, and I want to know if there is any significant performance penalty for this. The server (Windows Server 2003) is a quad-core with 4 GB of RAM, and the single install of the app barely touches the CPU or memory, so that doesn't seem to be a concern. This doesn't seem to justify another server, especially since some of the instances will be very lightly used. Obviously, performance depends on the server and the application, but are there any concerns with this situation?
IIS on Windows Server 2003 is built to handle lots of sites, so the number of sites itself is not a concern. The resource needs of your application are much more of a factor: how much I/O, CPU, thread, and database resource does it consume?
We have a quad-core Windows Server 2003 server here handling several hundred sites with no problem. But one resource-intensive app can eat a whole server just as easily.
If you find your application is CPU-bound, you can put each instance in its own application pool and then limit the amount of CPU each pool can use, so that no single instance can bottleneck the others.
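For illustration, on modern IIS (8+) that per-pool limit looks like the sketch below; IIS 6 on Server 2003 exposed the same idea through the app pool properties UI, so treat this as the concept rather than a 2003 recipe. The pool name is a placeholder:

    Import-Module WebAdministration

    # Cap the pool at 50% CPU (the limit attribute is in 1/1000ths of a percent)
    # and throttle, rather than kill, the worker when it hits the cap.
    Set-ItemProperty 'IIS:\AppPools\InstanceA' -Name cpu.limit  -Value 50000
    Set-ItemProperty 'IIS:\AppPools\InstanceA' -Name cpu.action -Value 'Throttle'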
I suggest you add a few at a time and see how it goes.
No concerns. If you run into any performance issues, it won't be with IIS for 10 apps that size.
You should consider using multiple application pools. If you do that, and the CPU, memory, I/O, and network resources of the server are in order, then there is no performance issue.
It is possible to run them all in the same application pool, but then add thread pool contention to the list, because all the applications will share one thread pool; and if it is a 32-bit server, there is a memory limit (around 1.5 GB) on the w3wp process.
We constantly run 15-20 per server on a 10-server load-balanced farm and don't come across any issues.
The short answer is no, there should be no concerns.
In effect, you are asking if IIS can host 8-10 websites... of course it can. You might want to configure them as individual websites rather than virtual directories, perhaps with individual application pools, so that each instance is entirely independent.
You mention that these aren't very demanding applications; assuming they aren't all hitting the same Access database, I can't see any problems.
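On modern IIS, the individual-websites-with-individual-pools setup would look something like this sketch (name, port, and path are placeholders; IIS 6 would do the equivalent through the management console):

    Import-Module WebAdministration

    # One pool and one site per instance keeps each backend fully isolated.
    New-WebAppPool -Name 'InstanceA'
    New-Website -Name 'InstanceA' -Port 8081 -PhysicalPath 'C:\sites\InstanceA' -ApplicationPool 'InstanceA'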