Behavior of multiple instances of ParallelWebAppClassLoader in a Tomcat JVM retaining objects - memory-leaks

I am working on identifying repeated OutOfMemoryError issues in a Tomcat 8.5.38 server (CentOS 7.6, OpenJDK 1.8, 4 CPUs) running a Spring MVC application.
This issue is new for this app. (Edit: the issue started after Tomcat was upgraded from 8.5.35 to 8.5.38.) I saved a heap dump by adding the -XX:+HeapDumpOnOutOfMemoryError JVM setting. In the heap dump I see that there are 2 instances of ParallelWebAppClassLoader. This app uses a large HashMap (about 200 MB) of lookup values as a cache. Each class loader has a separate reference to this HashMap. I am trying to find out why there are 2 ParallelWebAppClassLoader instances in this JVM. The server.xml does not specify the use of ParallelWebAppClassLoader.
Also, is it correct to expect the ParallelWebAppClassLoader to maintain a copy of the HashMap?
If there are 2 copies of the same object, how can this duplicate space be optimized, given that ParallelWebAppClassLoader is used?

The issue was caused by the upgrade of Tomcat from 8.5.35 to 8.5.38. After this upgrade the JVM has two class loaders, and thus occupies twice the memory. A quick fix is to increase the RAM or roll back Tomcat to 8.5.35.
If there is a setting to control the number of class loaders, please post an answer. I will upvote it.
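Not a fix for the class loader count itself, but if the duplicated 200 MB HashMap is the main cost, one workaround is to move the lookup cache into a small jar placed in $CATALINA_BASE/lib so that it is loaded by Tomcat's common class loader and every web application class loader sees the same single copy. A minimal sketch, assuming the lookup data can live outside the webapp (the package and class names here are made up for illustration):

package com.example.shared;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Packaged in its own jar and dropped into $CATALINA_BASE/lib, this class is
// loaded once by the common class loader instead of once per
// ParallelWebAppClassLoader, so only one copy of the map exists in the JVM.
public final class SharedLookupCache {

    private static final Map<String, String> LOOKUPS = new ConcurrentHashMap<String, String>();

    private SharedLookupCache() {
    }

    public static void put(String key, String value) {
        LOOKUPS.put(key, value);
    }

    public static String get(String key) {
        return LOOKUPS.get(key);
    }
}

Note that anything held by the common class loader survives webapp redeploys, so such a cache must not store references to classes loaded by the webapp itself, otherwise you just trade one leak for another.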

Related

Nashorn memory leak: much memory is consumed by jdk.nashorn.internal.scripts.JO4P0

We are using Nashorn to run JavaScript from Java (JDK 1.8u66). After some profiling we are seeing that a large amount of memory is occupied by jdk.nashorn.internal.scripts.JO4P0
objects. Would like to know if anyone has any idea?
jdk.nashorn.internal.scripts.JO4P0 and other similar instances are used to represent script objects from your scripts. But you need to provide more info for further investigation. I suggest you write to the nashorn-dev OpenJDK mailing list with the profiler output and more information about your application (a link to the project, if it is open source, would be useful).
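For context, a minimal sketch (mine, not from the thread) showing how such script objects come into being; every object literal your JavaScript creates is backed by one of the generated jdk.nashorn.internal.scripts.JO* classes, so a script that builds many objects will show them in the profiler:

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class NashornScriptObjects {
    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        // Each object literal pushed into the array becomes a script object,
        // internally an instance of a generated JO* class.
        Object result = engine.eval(
                "var cache = []; for (var i = 0; i < 1000; i++) { cache.push({ id: i }); } cache;");
        System.out.println(result); // a ScriptObjectMirror wrapping the internal array object
    }
}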

OutOfMemoryError: define PermGen space properly in Tomcat 7 and Linux

We have a Tomcat 7 instance which deploys 2 web apps. These webapps have a lot of dependencies, and the current memory space seems not to be enough.
We have configured the needed environment variables in a startup file. In particular, we set JAVA_OPTS=-Xmx8192
I talked to my mates about the lack of more config parameters because in other configs I saw -Xms, -XX:MaxPermSize, etc.
Which parameters are missing in order to avoid the PermGen exception, and what is their role?
Thanks in advance
-Xms256m -Xmx1024m -XX:+DisableExplicitGC -Dcom.sun.management.jmxremote
-XX:PermSize=256m -XX:MaxPermSize=512m
Add the above lines to your VM arguments; I am sure it will work for you.
If you use the Eclipse IDE you can change the VM arguments through it:
double-click on the server > Open Launch Configuration > Arguments > VM Arguments
and add the above two lines.
The amount of memory given to a Java process is specified at startup. To make things more complex, the memory is divided into separate areas, heap and permgen being the most familiar sub-areas.
While you specify the maximum size of the heap allowed for this particular process via -Xmx, the corresponding parameter for permgen is -XX:MaxPermSize. 90% of Java apps seem to require between 64 and 512 MB of permgen to work properly. In order to find your limits, experiment a bit.
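To help with that experimenting, here is a small sketch (my own, not part of the original answer) that prints the current size of each JVM memory pool via the standard management API; on a Java 7 HotSpot VM one of the pools is the permanent generation:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MemoryPoolReport {
    public static void main(String[] args) {
        // One of the pools is the permanent generation on Java 7; its exact
        // name (e.g. "PS Perm Gen") depends on the garbage collector in use.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            long max = usage.getMax(); // -1 means "undefined"
            System.out.printf("%-25s used=%d KB max=%s%n",
                    pool.getName(),
                    usage.getUsed() / 1024,
                    max < 0 ? "undefined" : (max / 1024) + " KB");
        }
    }
}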

Rails 3.2.x + Glassfish + How to multithread?

I have a JRuby 1.6.7/Rails 3.2.11 web application deployed on Glassfish (with no web server in front of it). I would like to make my application multi-threaded.
A best practices article suggests that I need to set the max and min runtimes to 1, and then go to config/environment.rb and put in the line
config.threadsafe!
However, a note from Oracle says (along with this note at GitHub) that I only have to set the minimum and maximum number of runtimes in the web.xml configuration file or the command line, and it says nothing about config.threadsafe!. (My feeling with this method is that it will take up a lot of memory because each runtime loads up a full instance of Rails.)
Which method is right? Are they both right? Which is the better way to go multi-threaded?
One must do the following:
set the min and max runtimes to 1
go into config/environments/production.rb and uncomment the
#config.threadsafe! line; you must also do this for any other environments you want threadsafe mode to work in.
By doing these things Rails will run using one runtime and multiple threads, saving you lots of memory. Additional information regarding threadsafe JRuby on Rails apps can be found here: http://nowhereman.github.com/how-to/rails_thread_safe/
If you are using Warbler, you can skip step one; if you only follow step two, the min and max runtimes will be set by default. Look at the web.xml within the war file and you will see that they have been set (see the excerpt below). Likewise, if threadsafe has not been set, you will see the absence of the min and max settings.
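For reference, the runtime bounds in a Warbler-generated web.xml look roughly like the excerpt below; jruby.min.runtimes and jruby.max.runtimes are the standard JRuby-Rack parameter names, and the rest of the file is omitted:

<!-- excerpt from WEB-INF/web.xml inside the war -->
<context-param>
  <param-name>jruby.min.runtimes</param-name>
  <param-value>1</param-value>
</context-param>
<context-param>
  <param-name>jruby.max.runtimes</param-name>
  <param-value>1</param-value>
</context-param>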
That being said, Rails 4 will have threadsafe enabled by default. Here's the pull request: https://github.com/rails/rails/pull/6685
Also, here's a post about the hows and whys: http://tenderlovemaking.com/2012/06/18/removing-config-threadsafe.html

Jetty 7: OutOfMemoryError: PermGen space on application redeploy

The first time, the app starts correctly. Then I delete the webapp/*.war file and paste a new version of the *.war. Jetty starts deploying the new war, but the error java.lang.OutOfMemoryError: PermGen space occurs. How can I configure Jetty to fix the error / redeploy correctly?
This solution doesn't help me.
Jetty version: jetty-7.4.3.v20110701
There is probably no way to configure the problem away. Each JVM has one PermGen memory area that is used for class loading and static data. Whenever your application gets undeployed, its classloader should be discarded, and all classes loaded by it as well. When this fails because other references to the classloader still exist, garbage collecting the classloader and your application's classes will also fail.
A blog entry and its follow-up explain a possible source of the problem. Whenever the application container's code uses a class that holds a reference to one of your classes, garbage collection of your classes is prevented. The example from the mentioned blog entry is the java.util.logging.Level constructor:
protected Level(String name, int value) {
    this.name = name;
    this.value = value;
    synchronized (Level.class) {
        known.add(this);
    }
}
Note that known is a static member of java.util.logging.Level. The constructor stores a reference to every created instance. So as soon as the Level class is loaded or instantiated from outside your application's code, garbage collection can't remove your classes.
To solve the problem you could avoid all classes that are in use outside your own code, or ensure no references are held to your classes from outside your code. Both problems could occur within any class delivered with Java and are thus not feasible to fix within your application. You cannot prevent the problem by altering only your own code!
Your options are basically:
Increase the memory limits and have the error strike less often
Analyze your code as detailed in the linked blog posts and avoid using the classes that store references to your objects (see the sketch below)
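A common concrete case of that second option is cleaning up JVM-global registrations when the webapp stops. A hedged sketch (not from the linked blog posts) using a ServletContextListener to deregister JDBC drivers that were registered by the webapp's class loader; the listener still has to be declared in web.xml:

import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class DriverCleanupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do at startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // DriverManager lives outside the webapp and keeps a static list of
        // drivers; deregister the ones our class loader registered so the
        // webapp class loader can be garbage collected on redeploy.
        ClassLoader webappLoader = Thread.currentThread().getContextClassLoader();
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() == webappLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (SQLException e) {
                    sce.getServletContext().log("Failed to deregister driver " + driver, e);
                }
            }
        }
    }
}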
If a PermGen out-of-memory error occurs, you need to restart the JVM, in your case restart Jetty. You may increase the PermGen space with the JVM options in your linked solution so this happens later (by later I mean: after more redeploys). But it will happen every once in a while, and you can do next to nothing to avoid that. The answer you linked explained well what PermGen space is and why it overflows.
Use:
-XX:PermSize=64M -XX:MaxPermSize=128M
or, if that was not enough yet
-XX:PermSize=256M -XX:MaxPermSize=512M
Also, be sure to increase the amount of memory available to the VM in general if you use these options.
Use:
-Xms128M -Xmx256M
For Jetty 7.6.6 or later this may help: http://www.eclipse.org/jetty/documentation/current/preventing-memory-leaks.html
We used the AppContextLeakPreventer and it helped with the OOM errors due to PermGen space.
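For anyone running Jetty embedded rather than standalone, the same preventer can be added programmatically; a minimal sketch, assuming a Jetty version that ships the org.eclipse.jetty.util.preventers package:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.preventers.AppContextLeakPreventer;

public class EmbeddedJettyWithLeakPreventer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        // Causes AppContext to be initialized with the server's class loader at
        // startup, so it never captures a webapp class loader later on.
        server.addBean(new AppContextLeakPreventer());
        // ... add webapp handlers here ...
        server.start();
        server.join();
    }
}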
I had this same problem with HotSpot, but with JRockit, which doesn't have a permanent generation, the problem went away. It's free now, so you might want to try it: https://blogs.oracle.com/henrik/entry/jrockit_is_now_free_and
Looks very much like a permanent generation leak. Whenever your application leaves some classes hanging around after it is undeployed, you get this problem. You can try the latest version of Plumbr; maybe it will find the left-over classes.
For readers of the future (relative to when this question was asked):
In JDK 8 the PermGen space is gone (it no longer exists). Instead there is now Metaspace, which is taken from the native memory of the machine.
If you had problems with PermGen overflow, you might want to have a look at this explanation and these comments on the removal process.
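Metaspace is unbounded by default, so a class loader leak now eats native memory instead of PermGen; if you want an explicit cap you can set, for example:
-XX:MaxMetaspaceSize=512m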

What causes a memory leak in Java

I have a web application deployed in Oracle iPlanet Web Server 7. The website is actively used on the Internet.
After deploying, the heap size grows, and after 2 or 3 weeks an OutOfMemory error is thrown.
So I began to use a profiling tool. I am not familiar with heap dumps. All I noticed is that char[], HashMap and String objects occupy too much of the heap. How can I tell what causes the memory leak from the heap dump? My assumptions about my memory leak:
I do a lot of logging in code using log4j, writing to a log.txt file. Is there a problem with that?
maybe an error removing inactive sessions?
some static values like cities and gender types stored in a static HashMap?
I have a login mechanism but no logout mechanism. When the site is opened again, a new login is needed. (Silly, but not implemented yet.)
All of the above?
Do you have an idea about these, or can you add other possible causes of the memory leak?
Since Java has garbage collection, a "memory leak" would usually be the result of you keeping references to some objects when they shouldn't be kept alive.
You might be able to see just from the age of the objects which ones are potentially old and being kept around when they shouldn't.
log4j shouldn't cause any problems.
The hashmap should be okay, since you actually want to keep these values around.
Inactive sessions might be the problem if they're stored in memory and if something keeps references to them.
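To make the "keeping references" point concrete, here is a small made-up example of the most common pattern: a static map used as a cache that only ever grows, so every value stays reachable forever unless something explicitly removes it:

import java.util.HashMap;
import java.util.Map;

public class SessionDataCache {

    // Classic leak pattern: a static map that only ever grows. Every value put
    // here stays strongly reachable until the JVM exits (or the class is unloaded).
    private static final Map<String, Object> CACHE = new HashMap<String, Object>();

    public static void remember(String sessionId, Object data) {
        CACHE.put(sessionId, data);
    }

    // A leak-free variant removes entries when the session ends, e.g. from
    // HttpSessionListener.sessionDestroyed(), or uses a bounded cache with eviction.
    public static void forget(String sessionId) {
        CACHE.remove(sessionId);
    }
}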
There is one more thing you can try: a new project, Plumbr, which aims to find memory leaks in Java applications. It is in beta stage, but should be stable enough to give it a try.
As a side note, Strings and char[] are almost always at the top of profilers' data. This rarely means any real problem.
