I have a legacy system that makes extensive use of Groovy 1.0, and currently I cannot update Groovy to a newer version. From time to time I get a PermGen error, because all the Groovy classes/scripts are kept in memory even when no one is using them any more.
I'm trying to figure out the correct way to unload those classes after I've finished using them.
I'm using the following code to create the Script object:
GroovyShell shell = new GroovyShell();
Script script = shell.parse("ANY_TEXT");
.....
And I'm trying the following code to unload the generated class:
MetaClassRegistry metaClassRegistry = MetaClassRegistry.getInstance(0);
metaClassRegistry.removeMetaClass(script.getMetaClass().getClass());
metaClassRegistry.removeMetaClass(script.getClass());
metaClassRegistry.removeMetaClass(script.getBinding().getClass());
script.setBinding(null);
script.setMetaClass(null);
script = null;
Unfortunately, this doesn't seem to work, because I keep getting a PermGen error. How can I unload Groovy classes and keep the PermGen size reasonable?
The reason you are experiencing this issue with Groovy lies in how scripting languages like Groovy work: they create a lot of extra Class objects behind the scenes, which drives up your PermGen space requirements.
You cannot force a class that has been loaded into the PermGen space to be removed. However, you can increase the PermGen space.
Or, for additional options, like JVM settings to increase PermGen space and settings that allow unloading of PermGen classes during GC, see this post:
Increase permgen space
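For reference, the kinds of settings that post discusses look like this on pre-Java-8 HotSpot (the sizes below are examples only; tune them for your application):

```
-XX:PermSize=128m -XX:MaxPermSize=256m
-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
```

The last two flags enable the concurrent collector and allow it to unload classes from PermGen during GC, which helps when classes are only weakly reachable.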
Related
I'm running Matlab R2016b on Ubuntu GNOME 16.04.3. Every time I create a new plot (i.e., a figure), I can see Matlab's virtual memory allocation grow. If I'm using Matlab for a long period of time, the virtual memory allocation, combined with the resident memory allocation, eventually reach my RAM limits and begin to eat into the swap space and the system slows considerably. When I close the figures, the memory is not freed. To be clear, none of the following commands reduce the virtual or resident memory allocated to Matlab:
clear all; % clear all variables
close all; % close all the figures
pack; % Tell matlab to consolidate its memory
java.lang.Runtime.getRuntime.gc; % Run java garbage collection
Does anyone have a solution to prevent Matlab from eventually consuming all available memory? I've never noticed Matlab doing this on my Apple computer. Why doesn't Linux/Ubuntu clean up the memory once the figure is closed?
I'm not running into any errors like java.lang.OutOfMemoryError, but the system gets really slow once the RAM is all allocated and Swap begins to be used.
In the past, this problem occurred sporadically in my Matlab applications too. The most common symptom was the following exception showing up whenever a plot or an interface was repainted/refreshed: java.lang.OutOfMemoryError: Java heap space.
After a few days of research, I came up with a solution in the form of a small Java utility. Below is the source code:
package mutilities;

import java.awt.Dimension;
import javax.swing.RepaintManager;

public final class Environment
{
    public static void CleanMemoryHeap()
    {
        try
        {
            // Temporarily shrink Swing's double buffer to zero so it is
            // released, then restore the original maximum size.
            final RepaintManager rm = RepaintManager.currentManager(null);
            final Dimension localDimension = rm.getDoubleBufferMaximumSize();
            rm.setDoubleBufferMaximumSize(new Dimension(0, 0));
            rm.setDoubleBufferMaximumSize(localDimension);
            // Suggest a garbage collection pass now that the buffer is free.
            System.gc();
        }
        catch (Exception e) { /* best-effort cleanup; ignore failures */ }
    }
    // other utility methods...
}
Once compiled into a small jar package, you can call it from Matlab whenever you need to clean up some memory as follows:
% you need to javaaddpath before
import('mutilities.*');
Environment.CleanMemoryHeap();
I usually call it within the constructor of my GUIDE applications:
function Construct(this)
    warning('off','all');
    % this is how I called my jar package
    javaaddpath(fullfile(pwd(),'MatlabUtilities.jar'));
    import('mutilities.*');
    Environment.CleanMemoryHeap();
    % ...
end
Of course, you can also run the same code within the Matlab environment without compiling it into a jar package, since there is full interoperability.
On a side note, as far as the garbage collector is concerned, note that although Runtime.getRuntime().gc() and System.gc() are equivalent (the latter internally calls the former), System.gc() is a class method, so it's more convenient to use.
The problem seems to have been linked to the JOGL implementation and its interactions with Matlab. After updating/upgrading the JOGL packages (described in detail on SuperUser), the memory allocation error has disappeared; memory appears to be freed appropriately.
I am running JVM 1.8.0_65 and using Groovy 2.4.7 to load classes dynamically. I turned on class unloading by adding "-XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+TraceClassUnloading -Dgroovy.use.classvalue=true". But the classes loaded by Groovy won't unload on GC. I also loaded native Java classes in the same application, and those classes were unloaded as expected when garbage collected.
I analyzed the heap dump with MAT and could not find any path to a GC root that is not a weak reference; here are some screenshots:
The Class's path to GC Roots excluding weak/soft references
The classloader's path to GC Roots excluding weak/soft references
Things referencing the classloader.
So I really don't know what's stopping the JVM from unloading these classes. Any help is greatly appreciated!
We are using Nashorn to run JavaScript from Java (JDK 1.8u66). After some profiling, we are seeing that a large amount of data is occupied by jdk.nashorn.internal.scripts.JO4P0 objects. Does anyone have any idea why?
jdk.nashorn.internal.scripts.JO4P0 and other similar instances are used to represent script objects from your scripts. But you need to provide more info for further investigation. I suggest you write to the nashorn-dev OpenJDK alias with the profiler output plus more info about your application (a link to the project, if it is open source, would be useful).
The app starts correctly the first time. Then I delete the webapp/*.war file and copy in a new version of the *.war. Jetty starts deploying the new war, but the error java.lang.OutOfMemoryError: PermGen space occurs. How can I configure Jetty to fix this error / make redeploys work correctly?
This solution doesn't help me.
Jetty version: jetty-7.4.3.v20110701
There is probably no way to configure the problem away. Each JVM has one PermGen memory area that is used for class loading and static data. Whenever your application gets undeployed, its classloader should be discarded, and all classes loaded by it as well. When this fails because other references to the classloader still exist, garbage collecting the classloader and your application's classes will also fail.
A blog entry and its follow up explain a possible source of the problem. Whenever the application container's code uses a class that holds a reference to one of your classes, garbage collection of your classes is prevented. The example from the mentioned blog entry is the java.util.logging.Level constructor:
protected Level(String name, int value) {
    this.name = name;
    this.value = value;
    synchronized (Level.class) {
        known.add(this);
    }
}
Note that known is a static member of java.util.logging.Level, and the constructor stores a reference to every created instance. So as soon as the Level class is loaded or instantiated from outside your application's code, garbage collection can't remove your classes.
To solve the problem you could avoid all classes that are used from outside your own code, or ensure no references are held to your classes from outside your code. Both problems could occur within any class shipped with Java, so fixing them within your application is not feasible. You cannot prevent the problem by altering only your own code!
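A minimal sketch of this pinning pattern in plain Java (LeakyRegistry is a hypothetical class invented here to mimic the static `known` list in java.util.logging.Level; it is not part of any real API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical class mimicking java.util.logging.Level: a static collection
// records every instance ever created, so each instance stays strongly
// reachable for as long as this class's own classloader lives. If a webapp's
// object lands in such a registry owned by the container, the webapp's
// classloader can never be garbage collected.
class LeakyRegistry {
    private static final List<LeakyRegistry> KNOWN = new ArrayList<>();
    private final String name;

    LeakyRegistry(String name) {
        this.name = name;
        synchronized (LeakyRegistry.class) {
            KNOWN.add(this); // pins this instance (and its class) indefinitely
        }
    }

    static int knownCount() {
        synchronized (LeakyRegistry.class) {
            return KNOWN.size();
        }
    }
}

public class PinningDemo {
    public static void main(String[] args) {
        new LeakyRegistry("app-instance"); // caller drops its reference immediately
        // ...but the static registry still reaches it:
        System.out.println(LeakyRegistry.knownCount()); // prints 1
    }
}
```

Even though the caller never keeps a reference, the instance remains reachable through the static KNOWN list, which is exactly the situation the blog posts describe with Level.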
Your options are basically:
Increase the memory limits and have the error strike less often
Analyze your code as detailed in the linked blog posts and avoid using the classes that store references to your objects
If a PermGen out-of-memory error occurs, you need to restart the JVM, in your case restart Jetty. You may increase the PermGen space with the JVM options in your linked solution so this happens later (by later I mean: after more redeploys). But it will happen every once in a while, and you can do next to nothing to avoid that. The answer you linked explains well what PermGen space is and why it overflows.
Use:
-XX:PermSize=64M -XX:MaxPermSize=128M
or, if that was not enough yet
-XX:PermSize=256M -XX:MaxPermSize=512M
Also, be sure to increase the amount of space available to the VM in general if you use these options.
Use:
-Xms128M -Xmx256M
For Jetty 7.6.6 or later this may help http://www.eclipse.org/jetty/documentation/current/preventing-memory-leaks.html.
We used the AppContextLeakPreventer, and it helped with the OOM errors due to PermGen space.
I have this same problem with HotSpot, but with JRockit, which doesn't have a Permanent Generation, the problem has gone away. It's free now, so you might want to try it: https://blogs.oracle.com/henrik/entry/jrockit_is_now_free_and
This looks very much like a Permanent Generation leak. Whenever your application leaves some classes hanging around after it is undeployed, you get this problem. You can try the latest version of Plumbr; maybe it will find the left-over classes.
For readers of the future (relative to when this question was asked):
In JDK 8 the PermGen space is gone. Instead there is now Metaspace, which is allocated from the native memory of the machine.
If you had problems with PermGen overflow, you might want to have a look at this explanation and these comments on the removal process.
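Metaspace grows dynamically by default, but it can still be capped if needed (the sizes below are illustrative; tune them for your application):

```
-XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=512m
```

A classloader leak will still exhaust Metaspace eventually, so capping it mainly converts a slow native-memory creep into an explicit OutOfMemoryError: Metaspace.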
I have a web application deployed in Oracle iPlanet Web Server 7. The website is actively used on the Internet.
After deployment, the heap size grows, and after 2 or 3 weeks an OutOfMemory error is thrown.
So I began to use a profiling tool. I am not familiar with heap dumps. All I noticed is that char[], HashMap and String objects occupy too much of the heap. How can I tell what causes the memory leak from a heap dump? My assumptions about my memory leak:
I do a lot of logging in code using log4j, writing to a log.txt file. Is there a problem with that?
maybe an error removing inactive sessions?
some static values like cities and gender types stored in a static HashMap?
I have a login mechanism but no logout mechanism. When the site is opened again, a new login is needed. (Silly, but not implemented yet.)
All of the above?
Do you have an idea about these, or can you suggest other possible causes of the memory leak?
Since Java has garbage collection, a "memory leak" is usually the result of keeping references to objects when they shouldn't be kept alive any longer.
You might be able to see just from the age of the objects which ones are potentially old and being kept around when they shouldn't.
log4j shouldn't cause any problems.
The hashmap should be okay, since you actually want to keep these values around.
Inactive sessions might be the problem if they're stored in memory and if something keeps references to them.
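A small sketch of that retention difference (the names here are hypothetical, not tied to any framework): a strongly referenced session map keeps its entries until they are removed explicitly, while weak keys let the GC reclaim them.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.WeakHashMap;

// Hypothetical sketch: sessions stored in a plain HashMap stay strongly
// reachable until removed by hand -- the classic "nothing removes inactive
// sessions" leak. A WeakHashMap entry, by contrast, becomes collectable as
// soon as nothing else references its key.
public class SessionRetentionSketch {

    // Simulates forgetting to remove an inactive session from a strong map.
    static int strongRetainedCount() {
        Map<String, byte[]> sessions = new HashMap<>();
        sessions.put("session-1", new byte[1024]);
        // ...the session goes inactive, but no code ever removes it...
        return sessions.size(); // still 1: the GC cannot reclaim the entry
    }

    public static void main(String[] args) {
        System.out.println(strongRetainedCount()); // prints 1

        // With weak keys, dropping the key makes the entry collectable
        // (exactly when collection happens is up to the GC):
        Map<Object, byte[]> weakSessions = new WeakHashMap<>();
        Object sessionKey = new Object();
        weakSessions.put(sessionKey, new byte[1024]);
        sessionKey = null;
        System.gc();
    }
}
```

Real session stores usually use explicit timeouts rather than weak references, but the sketch shows why "nothing ever removes it" guarantees growth.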
There is one more thing you can try: a new project, Plumbr, which aims to find memory leaks in Java applications. It is in beta, but should be stable enough to give it a try.
As a side note, Strings and char[] are almost always at the top of profiler output. This rarely indicates a real problem.