Class Loaded By Groovy Cannot Get Unloaded - groovy

I am running JVM 1.8.0_65 and using Groovy 2.4.7 to load classes dynamically. I turned on class unloading by adding "-XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+TraceClassUnloading -Dgroovy.use.classvalue=true", but the classes loaded by Groovy are never unloaded on GC. I also loaded plain Java classes in the same application, and those classes were unloaded as expected when garbage collected.
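For reference, the loading pattern looks roughly like the sketch below (a minimal, hypothetical reproduction, not the actual application code; the script source and class name are made up):

import groovy.lang.GroovyClassLoader;

public class UnloadTest {
    public static void main(String[] args) throws Exception {
        // Each script gets its own GroovyClassLoader, so dropping the loader and the
        // generated class should, in theory, make them eligible for unloading.
        GroovyClassLoader loader = new GroovyClassLoader();
        Class<?> scriptClass = loader.parseClass("println 'from groovy'");
        Object instance = scriptClass.newInstance();

        // Drop every strong reference and request a GC; with -XX:+TraceClassUnloading
        // the generated class should show up in the unload log once it is collected.
        instance = null;
        scriptClass = null;
        loader.clearCache();
        loader = null;
        System.gc();
    }
}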
I analyzed the heap dump with MAT and could not find any path to a GC root that is not a weak reference. Here are some screenshots:
The Class's path to GC Roots excluding weak/soft references
The classloader's path to GC Roots excluding weak/soft references
Things referencing the classloader.
So I really don't know what's stopping the JVM from unloading these classes. Any help is greatly appreciated!

Related

Behavior of multiple instance of ParallelWebAppClassLoaders in Tomcat JVM retaining objects

I am working on identifying repeated OutOfMemory issues in a Tomcat 8.5.38 server (CentOS 7.6, OpenJDK 1.8, 4 CPUs) running a Spring MVC application.
This issue is new for this app. (Edit: the issue started after Tomcat was upgraded from 8.5.35 to 8.5.38.) I saved a heap dump by adding the "-XX:+HeapDumpOnOutOfMemoryError" JVM setting. In the heap dump I see that there are 2 instances of ParallelWebAppClassLoader. This app uses a large HashMap (about 200 MB) of lookup values as a cache, and each class loader has a separate reference to this HashMap. I am trying to find out why there are 2 ParallelWebAppClassLoader instances in this JVM. The server.xml does not specify the use of ParallelWebAppClassLoader.
Also, is it correct to expect each ParallelWebAppClassLoader to maintain its own copy of the HashMap?
If these are 2 copies of the same object, how can this duplicated space be avoided when ParallelWebAppClassLoader is used?
The issue was caused by the upgrade of Tomcat from 8.5.35 to 8.5.38. After this upgrade the JVM has two class loaders, and thus occupies twice the memory. A quick fix is to increase the RAM or to roll back the Tomcat version to 8.5.35.
If there is a setting to control the number of class loaders, please post an answer. I will upvote that.
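For what it is worth, the reason two live class loaders mean two copies of the cache is that static fields belong to a class as loaded by a particular class loader. A minimal illustration (hypothetical class name, not the actual application code):

import java.util.HashMap;
import java.util.Map;

// Hypothetical cache class inside the webapp. Every web application class loader
// that loads this class gets its own Class object and therefore its own static map.
public class LookupCache {
    public static final Map<String, String> CACHE = new HashMap<>();
}

If two ParallelWebAppClassLoader instances stay alive (for example because the old one leaked during the upgrade or a redeploy), LookupCache is loaded twice, CACHE exists twice, and the heap dump shows the ~200 MB map retained once per class loader.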

Nashorn memory leak: much memory is consumed by jdk.nashorn.internal.scripts.JO4P0

We are using Nashorn to run JavaScript from Java (JDK 1.8u66). After some profiling we are seeing that a large amount of memory is occupied by jdk.nashorn.internal.scripts.JO4P0 objects. Does anyone have any idea why?
jdk.nashorn.internal.scripts.JO4P0 and other similar classes are used to represent script objects from your scripts. But you need to provide more info for further investigation. I suggest you write to the nashorn-dev OpenJDK mailing list with the profiler output plus more information about your application (a link to the project, if it is open source, would be useful).

Can't unload Groovy classes - PermGen errors

I have a legacy system that makes extensive use of Groovy 1.0, and currently I cannot update Groovy to a newer version. From time to time I get a PermGen error because all the Groovy classes/scripts are kept in memory even if no one is using them any more.
I'm trying to figure out what is the correct way to unload those classes after I've finished using them.
I'm using the following code to create the Script object:
GroovyShell shell = new GroovyShell();
Script script = shell.parse("ANY_TEXT");
.....
And I'm trying the following code to unload the generated class:
MetaClassRegistry metaClassRegistry = MetaClassRegistry.getIntance(0);
metaClassRegistry.removeMetaClass(script.getMetaClass().getClass());
metaClassRegistry.removeMetaClass(script.getClass());
metaClassRegistry.removeMetaClass(script.getBinding().getClass());
script.setBinding(null);
script.setMetaClass(null);
script = null;
Unfortunately, this doesn't seem to work because I keep getting a PermGen error. How can I unload Groovy classes and keep the PermGen size reasonable?
You are experiencing this issue with Groovy because of the way scripting languages like Groovy work: they create a lot of extra Class objects behind the scenes, which increases your PermGen space requirements.
You cannot force a Class loaded into the PermGen space to be removed. However, you can increase the PermGen space:
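For example (illustrative values only, tune them to your application):
-XX:PermSize=128m -XX:MaxPermSize=256m
If the collector supports it, you can also allow classes to be unloaded from PermGen during GC:
-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled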
Or, for additional options, such as JVM settings to increase the PermGen space and settings that allow unloading of PermGen classes during GC, see this post:
Increase permgen space

Do (statically linked) DLLs use a different heap than the main program?

I'm new to Windows programming and I've just "lost" two hours hunting a bug which everyone seems aware of: you cannot create an object on the heap in a DLL and destroy it in another DLL (or in the main program).
I'm almost sure that on Linux/Unix this is NOT the case (if it is, please say it, but I'm pretty sure I did that thousands of times without problems...).
At this point I have a couple of questions:
1) Do statically linked DLLs use a different heap than the main program?
2) Is the statically linked DLL mapped in the same process space of the main program? (I'm quite sure the answer here is a big YES otherwise it wouldn't make sense passing pointers from a function in the main program to a function in a DLL).
I'm talking about plain/regular DLL, not COM/ATL services
EDIT: By "statically linked" I mean that I don't use LoadLibrary to load the DLL but I link with the stub library
DLLs / exes will need to link to an implementation of C run time libraries.
In case of C Windows Runtime libraries, you have the option to specify, if you wish to link to the following:
Single-threaded C run-time library (support for single-threaded libraries has since been discontinued)
Multi-threaded DLL / Multi-threaded Debug DLL
Static Run time libraries.
Few More (You can check the link)
Each of them refers to a different heap, so you are not allowed to pass an address obtained from the heap of one run-time library to another.
Now, it depends on which C run-time library the DLL you are talking about has been linked to. Suppose the DLL you are using has been linked against the static C run-time library, and your application code (containing the main function) has been linked against the multi-threaded C run-time DLL: if you pass a pointer to memory allocated in the DLL to your main program and try to free it there, or vice versa, it can lead to undefined behaviour. So the basic root cause is the C run-time libraries. Please choose them carefully.
Please find more info on the C run time libraries supported here & here
A quote from MSDN:
Caution Do not mix static and dynamic versions of the run-time libraries. Having more than one copy of the run-time libraries in a process can cause problems, because static data in one copy is not shared with the other copy. The linker prevents you from linking with both static and dynamic versions within one .exe file, but you can still end up with two (or more) copies of the run-time libraries. For example, a dynamic-link library linked with the static (non-DLL) versions of the run-time libraries can cause problems when used with an .exe file that was linked with the dynamic (DLL) version of the run-time libraries. (You should also avoid mixing the debug and non-debug versions of the libraries in one process.)
Let’s first understand heap and stack allocation on Windows with respect to our applications/DLLs. Traditionally, the operating system and run-time libraries come with an implementation of the heap.
At the beginning of a process, the OS creates a default heap called Process heap. The Process heap is used for allocating blocks if no other heap is used.
Language run times also can create separate heaps within a process. (For example, C run time creates a heap of its own.)
Besides these dedicated heaps, the application program or one of the many loaded dynamic-link libraries (DLLs) may create and use separate heaps, called private heaps
These heaps sit on top of the operating system's Virtual Memory Manager in all virtual memory systems.
Let’s discuss more about CRT and associated heaps:
C/C++ Run-time (CRT) allocator: Provides malloc() and free() as well as new and delete operators.
The CRT creates such an extra heap for all its allocations (the handle of this CRT heap is stored internally in the CRT library in a global variable called _crtheap) as part of its initialization.
CRT creates its own private heap, which resides on top of the Windows heap.
The Windows heap is a thin layer surrounding the Windows run-time allocator(NTDLL).
Windows run-time allocator interacts with Virtual Memory Allocator, which reserves and commits pages used by the OS.
Your DLL and exe link to multithreaded static CRT libraries. Each DLL and exe you create then has its own heap, i.e. its own _crtheap. Allocations and de-allocations have to happen from the respective heap: memory dynamically allocated in the DLL cannot be de-allocated from the executable, and vice versa.
What can you do? Compile both the DLL and the exe with /MD or /MDd to use the multithread-specific and DLL-specific version of the run-time library. Then both the DLL and the exe are linked against the same C run-time library and therefore share one _crtheap, and allocations are always paired with de-allocations within a single module.
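For example, with the Microsoft compiler the build could look like this (illustrative file names; /MD selects the DLL version of the run-time, /LD builds a DLL):
cl /MD /LD mylib.cpp
cl /MD main.cpp mylib.lib
Both modules then link against the same CRT DLL and therefore share one _crtheap.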
If I have an application that compiles as an .exe and I want to use a library, I can either statically link that library from a .lib file or dynamically link that library from a .dll file.
Each linked module (ie. each .exe or .dll) will be linked to an implementation of the C or C++ run times. The run times themselves are a library that can be statically or dynamically linked to and come in different threading configurations.
By saying "statically linked DLLs", are you describing a setup where an application .exe dynamically links to a library .dll, and that library internally statically links to the runtime? I will assume that this is what you mean.
Also worth noting is that every module (.exe or .dll) has its own scope for statics i.e. a global static in an .exe will not be the same instance as a global static with the same name in a .dll.
In the general case therefore it cannot be assumed that lines of code running inside different modules are using the same implementation of the runtime, furthermore they will not be using the same instance of any static state.
Therefore certain rules need to be obeyed when dealing with objects or pointers that cross module boundaries. Allocations and deallocations must occur in the same module for any given address; otherwise the heaps will not match and the behaviour is undefined.
COM solves this using reference counting: objects delete themselves when the reference count reaches zero. This is a common pattern used to solve the matched allocation/deallocation problem.
Other problems can exist. For instance, Windows defines certain behaviours, e.g. how allocation failures are handled, on a per-thread basis rather than a per-module basis. This means that code running in module A on a thread set up by module B can also run into unexpected behaviour.

Jetty 7: OutOfMemoryError: PermGen space on application redeploy

The first time, the app starts correctly. Then I delete the webapp/*.war file and paste a new version of the *.war. Jetty starts deploying the new war, but the error java.lang.OutOfMemoryError: PermGen space occurs. How can I configure Jetty to fix this error / redeploy correctly?
This solution doesn't help me.
Jetty version: jetty-7.4.3.v20110701
There is probably no way to configure the problem away. Each JVM has one PermGen memory area that is used for class loading and static data. Whenever your application gets undeployed, its classloader should be discarded, along with all classes loaded by it. When this fails because other references to the classloader still exist, garbage collecting the classloader and your application's classes will also fail.
A blog entry and its follow up explain a possible source of the problem. Whenever the application container's code uses a class that holds a reference to one of your classes, garbage collection of your classes is prevented. The example from the mentioned blog entry is the java.util.logging.Level constructor:
protected Level(String name, int value) {
    this.name = name;
    this.value = value;
    synchronized (Level.class) {
        known.add(this);
    }
}
Note that known is a static member of java.util.logging.Level. The constructor stores a reference to every created instance. So as soon as a Level is loaded or instantiated from within your application's code (typically via a custom subclass), a reference to it is held from outwith your code, and garbage collection can't remove your classes.
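For illustration, a hypothetical class defined inside your web application like the one below is enough to trigger the leak, because every instance ends up in Level's static list and is therefore referenced from outwith your code:

import java.util.logging.Level;

// Hypothetical custom log level defined in the web application. Creating the AUDIT
// instance registers it in Level's static 'known' list, which lives in a class loaded
// by the bootstrap class loader, so AuditLevel (and with it the webapp's class loader)
// can never be garbage collected.
public class AuditLevel extends Level {
    public static final AuditLevel AUDIT = new AuditLevel("AUDIT", 950);

    protected AuditLevel(String name, int value) {
        super(name, value);
    }
}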
To solve the problem you would have to avoid every class that is also in use outwith your own code, or ensure that no references to your classes are held from outwith your code. Both problems can be caused by any class delivered with Java and are thus not feasible to fix within your application. You cannot prevent the problem by altering only your own code!
Your options are basically:
Increase the memory limits so that the error strikes less often
Analyze your code as detailed in the linked blog posts and avoid using the classes that store references to your objects
If a PermGen out-of-memory error occurs, you need to restart the JVM, in your case restart Jetty. You may increase the PermGen space with the JVM options in your linked solution so this happens later (by later I mean: after more redeploys). But it will happen every once in a while and there is next to nothing you can do to avoid that. The answer you linked explains well what the PermGen space is and why it overflows.
Use:
-XX:PermSize=64M -XX:MaxPermSize=128M
or, if that was not enough yet
-XX:PermSize=256M -XX:MaxPermSize=512M
Also, be sure to increase the amount of memory available to the VM in general if you use these options. Use:
-Xms128M -Xmx256M
For Jetty 7.6.6 or later this may help http://www.eclipse.org/jetty/documentation/current/preventing-memory-leaks.html.
We used the AppContextLeakPreventer and it helped with the OOM errors due to PermGen space.
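If I remember correctly, the preventers are added as beans to the server in jetty.xml, roughly like this (check the linked documentation for the exact syntax for your Jetty version):
<Call name="addBean">
  <Arg>
    <New class="org.eclipse.jetty.util.preventers.AppContextLeakPreventer"/>
  </Arg>
</Call>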
I had this same problem with HotSpot, but with JRockit, which doesn't have a Permanent Generation, the problem went away. It's free now, so you might want to try it: https://blogs.oracle.com/henrik/entry/jrockit_is_now_free_and
This looks very much like a Permanent Generation leak. Whenever your application leaves some classes hanging around after it is undeployed, you get this problem. You can try the latest version of Plumbr; maybe it will find the left-over classes.
For Readers of the Future (relative to when this question has been asked):
In JDK 8 the PermGen space is gone. Instead there is now Metaspace, which is allocated from the native memory of the machine.
If you had problems with PermGen overflow, you might want to have a look at this explanation and these comments on the removal process.
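For completeness: Metaspace grows as needed by default, but it can still be capped and tuned, for example:
-XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m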
