We have multiple application pools in IIS for multiple web applications. Is garbage collection shared between all of these app pools, or does each app pool have its own garbage collector?
Garbage collection is process specific. Each application pool runs in its own worker process (w3wp.exe), irrespective of the number of applications running within it, so each pool gets its own garbage collector.
So if you have multiple applications within a single application pool and one of them triggers a GC, the managed threads in that process may be stalled until the GC completes, which impacts the other applications as well.
On the other hand, if each application has its own application pool, a GC triggered in one application will not impact the others.
Hope this answers your question! Feel free to ask any follow-up questions.
I have a Spring Boot application in which, every time an API call is made, I create an ExecutorService with a fixed thread pool of 5 threads and submit around 500 tasks via CompletableFuture to run asynchronously. I am using this to migrate hundreds of thousands of records.
As the migration started, the API initially worked fine and each call (code logic + thread pool creation + job assignment to threads) took only around 200 ms. But as API calls increased and new thread pools kept being created, I saw a gradual increase in the time taken to create the pool and assign the jobs, and as a result the API response time climbed to around 4 seconds.
Note: after the jobs are done, I shut down the executor service in a finally block.
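For reference, a stripped-down sketch of the pattern described above (the class, method, and parameter names here are illustrative, not my actual code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MigrationService {

    // Called once per API call; names and types are illustrative only.
    public void migrateBatch(List<String> recordIds) {
        ExecutorService pool = Executors.newFixedThreadPool(5); // new pool per call
        try {
            List<CompletableFuture<Void>> futures = new ArrayList<>();
            for (String id : recordIds) {                        // ~500 tasks per call
                futures.add(CompletableFuture.runAsync(() -> migrate(id), pool));
            }
            // Wait for all tasks of this call to finish.
            CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        } finally {
            pool.shutdown();                                     // shut down in finally
        }
    }

    private void migrate(String id) {
        // actual migration logic
    }
}
```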
Questions:
Can creating multiple pools add overhead to the application, and do those pools keep piling up?
Won't these be garbage collected automatically?
Is there a limit to how many pools can be created?
And what could be causing this time delay?
I can add further clarification based on specific queries.
Can creating multiple pools add overhead to the application, and do those pools keep piling up?
Yes, absolutely. Unless you shut down the thread pools, they won't be destroyed automatically and will keep consuming resources. See the next question for more details.
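As a point of comparison, here is a minimal sketch of the usual alternative: create one pool once and reuse it across API calls, so nothing piles up. The bean name and pool size are just placeholders; adjust them to your setup.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MigrationExecutorConfig {

    // One shared pool for the whole application; Spring calls shutdown()
    // automatically when the application context closes.
    @Bean(destroyMethod = "shutdown")
    public ExecutorService migrationExecutor() {
        return Executors.newFixedThreadPool(5);
    }
}
```

You would then inject this ExecutorService into the migration service instead of calling Executors.newFixedThreadPool on every request.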
Won't these be garbage collected automatically?
You need to make sure the thread pools are destroyed once they are no longer needed. The javadoc of ThreadPoolExecutor provides some hints:
A pool that is no longer referenced in a program AND has no remaining threads will be shutdown automatically. If you would like to ensure that unreferenced pools are reclaimed even if users forget to call shutdown(), then you must arrange that unused threads eventually die, by setting appropriate keep-alive times, using a lower bound of zero core threads and/or setting allowCoreThreadTimeOut(boolean).
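A sketch of that hint, assuming a hand-built ThreadPoolExecutor (the sizes and timeout are illustrative): with a keep-alive time and allowCoreThreadTimeOut(true), idle threads eventually die, so an unreferenced pool can be reclaimed even if shutdown() is never called.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SelfCleaningPoolFactory {

    public static ThreadPoolExecutor newPool() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 5,                        // core and maximum pool size
                30, TimeUnit.SECONDS,        // idle threads die after 30 s
                new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true);   // core threads may also time out
        return pool;
    }
}
```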
Is there a limit to how many pools can be created?
There is no hard limit on how many threads Java supports; however, there may be restrictions depending on your operating system and available resources such as memory. This is quite a complex topic; more details can be found in the answers to this question: How many threads can a Java VM support?
And what could be causing this time delay?
I assume that you don't have a proper cleanup/shutdown mechanism in place for the thread pools. Each thread reserves memory for its stack (typically about 1 MB by default), so the more threads you create, the more memory your application consumes. Depending on the system/JVM configuration, the application may start using swap, which dramatically slows down performance.
There may be other causes of the performance drop; this is just what came to mind first.
A profiler will help you identify performance issues or resource leaks. This article by Baeldung describes a few profilers you could use.
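As a lighter-weight first check (an illustrative suggestion, not something from the original question), you could log the JVM's live and peak thread counts between API calls to confirm whether threads are actually piling up:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountProbe {

    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("Live threads: " + threads.getThreadCount());
        System.out.println("Peak threads: " + threads.getPeakThreadCount());
        // With a default stack size of roughly 1 MB per thread, the live
        // count gives a rough lower bound on the stack memory in use.
    }
}
```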
Goal
Determine the cause of the sporadic lock-ups of our web application running on IIS.
Problem
An application we are running on IIS sporadically locks up throughout the day. When it locks up, it does so on all workers and on all load-balanced instances.
Environment and Application
The application runs on 4 Windows Server 2016 machines. The machines are load balanced by HAProxy using a round-robin scheme. The IIS application pools hosting this website are configured with 4 worker processes each, and the application they host is a 32-bit application. The IIS instances do not use a shared configuration file, but the application pools for this application are all configured the same.
This application is the only application in its IIS application pool. It is an ASP.NET Web API application on .NET 4.6.1 and does not create threads of its own.
Theory
My theory is that we have incoming requests that take ~5-30 minutes to complete, and every machine gets tied up servicing them, so the machines look "locked up". The company rolled its own logging mechanism, and from that I can tell we do have requests taking that long. The team responsible for the application has cleaned up many of these, but I am still seeing ~5 minute requests in the log.
I do not have access to the machines personally, so our systems team has taken memory dumps of the application when this happens. In the dumps I generally see ~50 threads running, all of them in our code. The threads are spread all over the application and do not seem to be stopped on any common piece of code. When the application is running correctly, the dumps show 3-4 threads running. I have also looked at performance counters such as ASP.NET\Requests Queued, but there never seem to be any requests queued. During these times the CPU, memory, disk, and network usage look normal. In WinDbg, none of the threads show high CPU time other than the finalizer thread, which as far as I know should live for the entire lifetime of the process.
Conclusion
I am looking for a means to prove or disprove my theory as to why we are locking up, as well as any metrics or tools I should look at.
This issue came down to our application using a query that stitched a table with 2,000,000 records to another table. Memory would become so fragmented that the garbage collector was spending more time finding places to put objects, and moving them around, than it was running our code. This is why it appeared that our application was still working and why there were no exceptions. Oddly, IIS would time out the requests but would continue processing the threads.
Can someone please summarize the advantages of creating an Azure WorkerRole vs. simply starting a new thread?
By starting a new worker role instance, you get all of the memory and CPU available to that instance size, whereas when creating threads you are sharing the resources of a single role instance.
I would say it also depends on what you're processing. I also think that threading or any parallel processing only makes sense when you're using a Medium instance or larger, where you have 2 or more cores.
The primary advantages, IMHO, are that you get a separation of concerns as well as the ability to independently scale the capacity of the background process and the front end.
I assume you mean starting a new thread from an IIS-hosted service/app in a WebRole. My main concern would be recycling of IIS app pools and memory consumption.
Depending on the type of application, the load on your application, and the IIS settings, you don't have a lot of control over the lifecycle and resources of the process your thread will be living in.
I have an application that is currently running on IIS 6.0 with one worker process (the default). I am trying to determine if creating a web garden will improve performance. I have read a bunch of articles that say a web garden is not the right approach for everyone (since it duplicates resources, the cache is not shared, etc.). I could not find an article that had a clear rationale for using a web garden (Microsoft's site provides three bullet points, but no specific examples can be found). My situation is as follows:
We can have up to 40 concurrent users at a given time.
Our application performs a series of calculations (on the order of thousands of calculations) that can take up to 10 minutes to complete.
We have multiple database calls, some of which can take upwards of 30 seconds to complete.
Will creating a web garden improve performance, or should I simply increase the number of threads in the current worker process? What would be an example of when you should use a web garden? If a thread in the current worker process is performing calculations (running .NET code) and/or calling the database, can other threads run at the same time? (I assume yes.)
Thanks.
Ryan
Personally, I have gone the route of using a web garden when my web application required frequent worker process recycles.
In this specific case we needed to recycle the worker process often because we were using CodeDOM to emit assemblies dynamically, which effectively leaks memory as more assemblies are loaded, since loaded assemblies cannot be unloaded.
Having a web garden helped avoid a delay in server response every time the worker process was recycled.
We use Kentico CMS and I've exchanged emails with them about a web garden deployment.
We have a single site running on a server with 8 CPU cores. In line with Kentico's advice, we have not altered the application pool's web garden setting from the default, i.e. the maximum number of worker processes is set to 1.
Our experience is that the site only uses one of the CPU cores; the others are idle. When I emailed them about this, their response was that the OS/IIS would handle this and use the other cores as necessary, even though the application pool has only a single worker process.
Now, I have a lot of respect for the guys at Kentico, but this doesn't seem right to me.
Surely, if we want to use all the cores, we need to permit eight worker processes (and implement session state storage in SQL Server)?
Many thanks
Tony
I would suggest running perfmon for 24 hours and seeing if you can determine what resources are being used. Indeed, the site might already be running on all cores. Also, if their web app is a heavily threaded system, it will take full advantage of multiple cores (at least ours does). Threads, not worker processes, are what actually count for processor utilization.
Not sure if you got an answer on Server Fault; at any rate, ASP.NET is multi-threaded, and within a single worker process there are several threads, each serving a single request.