I just started working on a legacy Spring Boot application, and I noticed it was not shutting down cleanly; it required a hard kill to close.
I took a thread dump and saw that Spring is launching lots of threads that just don't want to die.
Ok, says I, it must be the @Asyncs and @EnableAsync; we have a number of those to handle initialization. I removed all of them, but no change.
Then I thought it might be Micrometer, since we do a lot of instrumentation using @Timed, but I removed those too and the thread dump didn't change.
I searched for any instances of any kind of executor, but nothing turned up.
What could be starting all of these threads and the QuartzScheduler_Worker threads?
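For reference, thread names like QuartzScheduler_Worker-N come from a Quartz scheduler thread pool, which is often auto-configured by spring-boot-starter-quartz or pulled in transitively by another dependency. A minimal sketch of one way to stop those threads from keeping the JVM alive, assuming Quartz is configured through Spring's SchedulerFactoryBean (the bean and property values here are illustrative, not taken from the application in question):

    import java.util.Properties;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.scheduling.quartz.SchedulerFactoryBean;

    @Configuration
    public class QuartzShutdownConfig {

        @Bean
        public SchedulerFactoryBean schedulerFactoryBean() {
            SchedulerFactoryBean factory = new SchedulerFactoryBean();
            // Let the context close without waiting indefinitely for running jobs
            factory.setWaitForJobsToCompleteOnShutdown(false);

            Properties props = new Properties();
            // Standard Quartz properties: mark the scheduler and worker
            // threads as daemons so they cannot block JVM shutdown
            props.setProperty("org.quartz.scheduler.makeSchedulerThreadDaemon", "true");
            props.setProperty("org.quartz.threadPool.makeThreadsDaemons", "true");
            factory.setQuartzProperties(props);
            return factory;
        }
    }

With spring-boot-starter-quartz, the same Quartz properties can instead be set under spring.quartz.properties in the application configuration.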
Related
I have a Spring Boot application in which, every time an API call is made, I create an ExecutorService with a fixed thread pool of 5 threads and submit around 500 tasks to it via CompletableFuture to run asynchronously. I am using this for a migration of hundreds of thousands (lakhs) of records.
As I started the migration, the API initially worked fine, and each call (business logic + thread pool creation + job assignment to threads) took only around 200 ms. But as the API calls added up and new thread pools kept being created, I saw a gradual increase in the time taken to create the pool and assign the jobs; as a result, the API response time climbed to about 4 seconds.
Note: after the jobs are done, I shut down the executor service in a finally block.
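For reference, a minimal sketch of the pattern described above (class and method names are hypothetical, not the actual code from the question):

    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.stream.Collectors;

    public class MigrationService {

        public void migrate(List<String> records) {
            // A brand-new pool is created on every API call
            ExecutorService pool = Executors.newFixedThreadPool(5);
            try {
                // ~500 tasks are handed to CompletableFuture to run asynchronously
                List<CompletableFuture<Void>> futures = records.stream()
                        .map(r -> CompletableFuture.runAsync(() -> process(r), pool))
                        .collect(Collectors.toList());
                CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
            } finally {
                pool.shutdown(); // the shutdown mentioned in the note above
            }
        }

        private void process(String record) {
            // placeholder for the actual migration logic
        }
    }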
Questions:
Can creating multiple pools add overhead to the application, and do those pools keep piling up?
Won't they be garbage-collected automatically?
Is there a limit to how many pools can be created?
And what could be causing this time delay?
I can add further clarification based on specific queries.
Can creating multiple pools add overhead to the application, and do those pools keep piling up?
Yes, absolutely. Unless you shut down the thread pools, they won't be destroyed automatically and will keep consuming resources. See the next question for more details.
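A minimal sketch of a defensive shutdown (the 30-second timeout is an arbitrary example):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class PoolShutdownExample {
        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(5);
            try {
                pool.submit(() -> System.out.println("working"));
            } finally {
                pool.shutdown();                        // stop accepting new tasks
                if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
                    pool.shutdownNow();                 // interrupt tasks still running
                }
            }
        }
    }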
Won't they be garbage-collected automatically?
You need to make sure that the thread pools are shut down once they are no longer needed. The javadoc of ThreadPoolExecutor provides some hints:
A pool that is no longer referenced in a program AND has no remaining threads will be shutdown automatically. If you would like to ensure that unreferenced pools are reclaimed even if users forget to call shutdown(), then you must arrange that unused threads eventually die, by setting appropriate keep-alive times, using a lower bound of zero core threads and/or setting allowCoreThreadTimeOut(boolean).
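A minimal sketch of what that javadoc hint looks like in practice (the pool size and keep-alive below are arbitrary examples):

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class SelfReclaimingPool {

        public static ThreadPoolExecutor create() {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    5, 5,                         // fixed pool of 5 workers
                    10, TimeUnit.SECONDS,         // idle threads die after 10 s...
                    new LinkedBlockingQueue<>());
            pool.allowCoreThreadTimeOut(true);    // ...including the core threads
            return pool;
        }
    }

With this configuration, a pool that is forgotten without a shutdown() call eventually loses all its threads and becomes eligible for reclamation, as the javadoc describes.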
Is there a limit to how many pools can be created?
Java itself imposes no hard limit on the number of threads, but there may be restrictions depending on your operating system and available resources such as memory. This is quite a complex question; more details can be found in the answers to How many threads can a Java VM support?
And what could be causing this time delay?
I assume that you don't have a proper cleanup / shutdown mechanism in place for the thread pools. Every thread reserves memory for its stack (typically around 1 MB by default), so the more threads you create, the more memory your application consumes. Depending on the system / JVM configuration, the application may start swapping, which dramatically slows down performance.
There may be other things that cause a drop in performance, so this is just what came to mind right now.
Profilers will help you identify performance issues and resource leaks. This article by Baeldung shows a few profilers you could use.
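Beyond cleanup, the pool-per-request design itself is the likely root of the growing delay; the usual fix is a single shared pool reused by every API call. A minimal sketch, assuming Spring (the bean name and pool size are illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ExecutorConfig {

        // One application-wide pool, created once and shut down with the
        // context, instead of a new pool (and 5 new threads) per API call
        @Bean(destroyMethod = "shutdown")
        public ExecutorService migrationPool() {
            return Executors.newFixedThreadPool(5);
        }
    }

Injecting this bean wherever the migration runs removes the per-request pool creation cost entirely.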
My .NET application process will not stop running. I can use ANTS to profile it, but all the examples talk about growing memory and new instances. How do I find out what is preventing the application from exiting?
I have taken a snapshot while the application was running properly and a second one after it was "closed" but still running as a process. What should I be looking for?
I've got a Node.js application that spawns a number of web workers.
I'm seeing what looks like a slow memory leak, but I don't think it's my code. Even if I comment out the code entirely, leaving a web worker that accepts messages and returns nothing, the memory leak still occurs!
The problem seems to be that I'm sending large messages, often 1 MB of JSON or more. Eventually the workers balloon from 6 MB up to 25 MB, and I'm not sure they will stop there.
Is this a known problem with Node.js web workers? Is there a workaround?
The workers are managed with a pool abstraction. Should I just kill them off and spawn new ones from time to time?
EDIT: I'm thinking maybe it's the particular pool library I used, backgrounder. There are no obvious culprits in its code, though.
I have client and server threads in my application. When I run the client and server as standalone apps, the threads communicate properly.
But when I run the client as a JUnit test and the server standalone, the client thread dies within a few seconds.
I can't figure out why the behavior is so different.
When the JUnit runner terminates, all spawned threads are killed too (the test is most likely run in a separate JVM instance).
Here is a (rather old) article describing the problem you experienced (the GroboUtils library it recommends seems to have been abandoned a long time ago, though). And here is another, more recent one, with a more modern solution based on the java.util.concurrent framework.
The gist of the latter solution is that it runs the threads via an executor, which publishes the results of the runs via Futures. Future.get blocks until the thread finishes its task, automatically keeping the JUnit test alive. You may be able to adapt this trick to your case.
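A minimal sketch of that pattern with JUnit 4 (the submitted task is a placeholder for the real client logic):

    import static org.junit.Assert.assertEquals;

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    import org.junit.Test;

    public class ClientThreadTest {

        @Test
        public void clientTaskCompletesBeforeRunnerExits() throws Exception {
            ExecutorService executor = Executors.newSingleThreadExecutor();
            try {
                // Placeholder for the real client logic
                Future<String> result = executor.submit(() -> "done");
                // get() blocks, so the JUnit runner cannot exit
                // (and kill the thread) before the task finishes
                assertEquals("done", result.get());
            } finally {
                executor.shutdown();
            }
        }
    }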
I ran into an issue with an IIS web app shutting down an idle worker process! The next request then has to re-initialize the application, leading to delays.
I disabled the IIS shutdown of idle worker processes on the application pool to resolve this. Are there any issues associated with turning this off? If the process is leaking memory, I imagine it is nice to recycle it every now and then.
Are there any other benefits to having the process shut down when idle?
I'm assuming that you're referring to IIS 6.
Instead of disabling shutdown altogether, maybe you can just increase the amount of time it waits before killing the process. The server is essentially conserving resources: if your server can stand the resource allocation for a process that mostly sits around doing nothing, then there isn't any harm in letting it be.
As you mentioned, setting the process to auto-recycle on a memory limit would be a good idea if a memory leak is a possibility.