Why is Java 11 keeping java.util.zip.ZipFile$Source on heap? - memory-leaks

Can somebody help me understand whether what I see is deliberate, correct behaviour or some kind of leak in Java 11? Let's take a stupid-simple hello world app:
package com.example;

public class HelloWorld {
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 50; i++) {
            Thread.sleep(1000);
            System.out.println("hello " + i);
        }
    }
}
The only interesting part is a jar dependency. It can be any jar, but to make the problem more spectacular let's use a big one - the old gwt-user jar, which weighs 30MB:
plugins {
    id 'java'
}

group 'com.example'
version '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

dependencies {
    // https://mvnrepository.com/artifact/com.google.gwt/gwt-user
    compile group: 'com.google.gwt', name: 'gwt-user', version: '2.7.0'
}
Run the app, open jvisualvm, take a heap dump and look at the retained set of java.util.zip.ZipFile$Source:
That jar from the classpath (never actually used) occupies 1.5MB of heap. It doesn't go away during GC, it doesn't go away when memory is short, and I have even seen these entries in OutOfMemory heap dumps.
The entry is kept alive by the map java.util.zip.ZipFile$Source.files. From the source I can tell that this should theoretically be cleaned up by the Common-Cleaner thread from the InnocuousThreadGroup, but I don't see that happening.
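For illustration (the jar path below is just a placeholder and the class is not part of my app), opening and closing a standalone ZipFile shows when a cached Source can actually be released:

import java.io.File;
import java.io.IOException;
import java.util.zip.ZipFile;

public class ZipSourceDemo {
    public static void main(String[] args) throws IOException {
        // placeholder path; any jar or zip on disk will do
        File jar = new File("gwt-user-2.7.0.jar");
        try (ZipFile zf = new ZipFile(jar)) {
            System.out.println("entries: " + zf.size());
        }
        // Once the last ZipFile referring to this file is closed (or cleaned by
        // Common-Cleaner), its ZipFile$Source entry can be dropped from the cache.
        // Jars on the application classpath are held open by the class loader for
        // the lifetime of the JVM, so their Source entries stay on the heap.
    }
}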
I encountered this problem when migrating a small, lightweight Java app from JDK8 to JDK11.
With low Xmx settings those jars use up a significant portion of my heap compared to JDK8.
So is it a bug or a feature?

This is deliberate.
The observation that memory use seems to increase is caused by the move from a native to a pure Java-based implementation [1] of zip/jar file handling in JDK 9, mainly for stability purposes.
I'll note that the native implementation allocates similar and similarly sized data structures, but they were hidden out of sight from tools inspecting the Java heap.
But even though the total memory footprint should be more or less neutral, there might be a need to increase the Java heap size to accommodate the increased Java heap usage.
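For a small app like the one above that could be as simple as raising -Xmx slightly when launching it (the value below is only an example, tune it to your workload):

java -Xmx64m -cp <classpath> com.example.HelloWorld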
[1] https://bugs.openjdk.java.net/browse/JDK-8145260

Related

Matlab Figure Memory Not Freed

I'm running Matlab R2016b on Ubuntu GNOME 16.04.3. Every time I create a new plot (i.e., a figure), I can see Matlab's virtual memory allocation grow. If I use Matlab for a long period of time, the virtual memory allocation, combined with the resident memory allocation, eventually reaches my RAM limit and begins to eat into swap space, and the system slows considerably. When I close the figures, the memory is not freed. To be clear, none of the following commands reduces the virtual or resident memory allocated to Matlab:
clear all; % clear all variables
close all; % close all the figures
pack; % Tell matlab to consolidate its memory
java.lang.Runtime.getRuntime.gc; % Run java garbage collection
Does anyone have a solution to prevent Matlab from eventually consuming all available memory? I've never noticed Matlab doing this on my apple computer. Why doesn't Linux/Ubuntu clean up the memory once the figure is closed?
I'm not running into any errors like java.lang.OutOfMemoryError, but the system gets really slow once the RAM is all allocated and Swap begins to be used.
In the past, this problem occurred randomly in my Matlab applications too. The most common symptom was the following exception showing up whenever a plot or an interface was being repainted/refreshed: java.lang.OutOfMemoryError: Java heap space.
After a few days of research, I came up with a solution in the form of a small Java utility. Here is the source code:
package mutilities;

import java.awt.Dimension;
import javax.swing.RepaintManager;

public final class Environment
{
    public static void CleanMemoryHeap()
    {
        try
        {
            final RepaintManager rm = RepaintManager.currentManager(null);
            final Dimension localDimension = rm.getDoubleBufferMaximumSize();
            rm.setDoubleBufferMaximumSize(new Dimension(0, 0));
            rm.setDoubleBufferMaximumSize(localDimension);
            System.gc();
        }
        catch (Exception e) { }
    }

    // other utility methods...
}
Once compiled into a small jar package, you can call it from Matlab whenever you need to clean up some memory as follows:
% you need to javaaddpath before
import('mutilities.*');
Environment.CleanMemoryHeap();
I usually call it within the constructor of my GUIDE applications:
function Construct(this)
    warning('off','all');
    % this is how I called my jar package
    javaaddpath(fullfile(pwd(),'MatlabUtilities.jar'));
    import('mutilities.*');
    Environment.CleanMemoryHeap();
    % ...
end
Of course, you can also run the same code within Matlab environment without compiling it into a jar package since there is full interoperability.
On a side note, regarding the garbage collector: Runtime.getRuntime().gc() and System.gc() are equivalent, since the latter internally calls the former; System.gc() is a class method, so it is simply more convenient to use.
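In other words, both of the following request (but do not force) a collection cycle:

Runtime.getRuntime().gc(); // explicit call on the runtime instance
System.gc();               // convenience wrapper that delegates to the call above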
The problem seems to have been linked to the jogl implementation and its interactions with Matlab. After updating/upgrading the jogl packages (described in detail on SuperUser), the memory allocation error has disappeared; memory appears to be freed appropriately.

How can I speed up Gradle dependency resolution or generally improve performance?

I'm converting a very large build over from Maven. There were a number of BOMs which I've converted to dependency lists. I'm also using the Spring Dependency management plug-in.
The problem is that dependency management is taking forever. Note that it seems to take way too long even when I use --offline. I've also just read that using allprojects {} and subprojects {} causes parallelism to fail. Clearly I need something that provides similar functionality, though. The objective of this migration in the first place was to improve performance, but I don't think it's any better so far. I need to know:
How can I set up my dependency lists in configuration phase, do it only once and have it scoped so that the information is available to all projects? Is there an example of a plug-in that does this? Of course, it would have to work with parallelism.
Is there anything I need to do with the Spring dependency management plugin that will improve performance?
Right now, build time is roughly 25 minutes (running offline) and I'm on a half-way decent 8 core box. That's with the daemon running and no unit or integration testing. :-/
It's hard to say without knowing more about your environment or your setup, but some general rules:
Are you sure it is dependency resolution that is the problem? Use --profile to get more information (see the docs).
Make sure you only have one repository to resolve from, preferably close to you and fast. We normally set up a proxy in our Nexus; that way Nexus caches for the whole department. For each additional repository, Gradle has to look for all versions there as well.
Make sure your Gradle cache is fast (think local SSD vs. an NFS-mounted old disk). Otherwise move your $GRADLE_USER_HOME to another local place.
Adding a DependencyResolutionListener may give you more information about where the bottleneck is.
Try adding the following to the start of your build.gradle:
gradle.addListener(new DependencyResolutionListener() {
    ThreadLocal<Long> start = new ThreadLocal<>()

    @Override
    void beforeResolve(ResolvableDependencies dependencies) {
        start.set(System.nanoTime())
    }

    @Override
    void afterResolve(ResolvableDependencies dependencies) {
        long stop = System.nanoTime() - start.get()
        println "resolving $dependencies.resolutionResult.root.moduleVersion of configuration $dependencies.name (${stop/1000000} ms)"
    }
})

Can't unload Groovy classes - PermGen Errors

I have a legacy system that is extensively using Groovy version 1.0 and currently I can not update Groovy to an updated version. From time to time I'm getting a PermGen error due to the fact that all the Groovy classes/scripts are kept in memory even if no one is using them any more.
I'm trying to figure out what is the correct way to unload those classes after I've finished using them.
I'm using the following code to create the Script object:
GroovyShell shell = new GroovyShell();
Script script = shell.parse("ANY_TEXT");
.....
And I'm trying the following code to unload the generated class:
MetaClassRegistry metaClassRegistry = MetaClassRegistry.getInstance(0);
metaClassRegistry.removeMetaClass(script.getMetaClass().getClass());
metaClassRegistry.removeMetaClass(script.getClass());
metaClassRegistry.removeMetaClass(script.getBinding().getClass());
script.setBinding(null);
script.setMetaClass(null);
script = null;
Unfortunately, this doesn't seem to work, because I keep on getting PermGen errors. How can I unload Groovy classes and keep the PermGen size reasonable?
The reason you are experiencing this issue is the nature of how scripting languages like Groovy work: they create a lot of extra Class objects behind the scenes, causing your PermGen space requirements to increase.
You can not force a Class loaded into the PermGen space to be removed. However, you can increase the PermGen space:
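For example (the values below are only a starting point and should be tuned for your application):

-XX:PermSize=128M -XX:MaxPermSize=256M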
Or for additional options, like JVM settings to increase PermGen space and add settings to allow unloading of PermGen classes during GC, see this post:
Increase permgen space

Jetty 7: OutOfMemoryError: PermGen space on application redeploy

The first time, the app starts correctly. Then I delete the webapp/*.war file and paste a new version of the *.war. Jetty starts deploying the new war, but the error java.lang.OutOfMemoryError: PermGen space occurs. How can I configure Jetty to fix the error / make redeploy work correctly?
This solution doesn't help me.
Jetty version: jetty-7.4.3.v20110701
There is probably no way to configure the problem away. Each JVM has one PermGen memory area that is used for class loading and static data. Whenever your application gets undeployed, its classloader should be discarded, along with all classes loaded by it. When this fails because other references to the classloader still exist, garbage collecting the classloader and your application's classes will also fail.
A blog entry and its follow up explain a possible source of the problem. Whenever the application container's code uses a class that holds a reference to one of your classes, garbage collection of your classes is prevented. The example from the mentioned blog entry is the java.util.logging.Level constructor:
protected Level(String name, int value) {
    this.name = name;
    this.value = value;
    synchronized (Level.class) {
        known.add(this);
    }
}
Note that known is a static member of java.util.logging.Level, so the constructor stores a reference to every created instance. As soon as a Level instance is created by your application's code (for example through a custom subclass), garbage collection can no longer remove your classes.
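To make the pattern concrete, here is a minimal sketch (the class names are made up for illustration) of how a static collection in container-loaded code pins webapp-loaded classes:

import java.util.ArrayList;
import java.util.List;

// Loaded by the container's classloader and never unloaded.
class ContainerRegistry {
    static final List<Object> KNOWN = new ArrayList<Object>();
    static void register(Object o) { KNOWN.add(o); }
}

// Loaded by the webapp's classloader; registers itself on construction.
class WebappThing {
    WebappThing() { ContainerRegistry.register(this); }
}
// After undeploy, ContainerRegistry.KNOWN still references a WebappThing
// instance, which references its Class, which references the webapp
// classloader, so nothing from the webapp can be collected.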
To solve the problem you would have to avoid all classes that are in use outside your own code, or ensure that no references to your classes are held from outside your code. Both problems can occur with any class delivered with Java and are therefore not feasible to fix within your application. You cannot prevent the problem by altering only your own code!
Your options are basically:
Increasing the memory limits so the error strikes less often
Analyze your code as detailed in the linked blog posts and avoid using the classes that store references to your objects
If a PermGen out-of-memory error occurs, you need to restart the JVM, in your case restart Jetty. You may increase the PermGen space with the JVM options in your linked solution so that this happens later (with later I mean: after more redeploys). But it will happen every once in a while and you can do next to nothing to avoid that. The answer you linked explains well what PermGen space is and why it overflows.
Use:
-XX:PermSize=64M -XX:MaxPermSize=128M
or, if that was not enough yet
-XX:PermSize=256M -XX:MaxPermSize=512M
Also, be sure to increase the amount of space available to the VM in general if you use these options.
Use
-Xms128M -Xmx256M
For Jetty 7.6.6 or later this may help: http://www.eclipse.org/jetty/documentation/current/preventing-memory-leaks.html
We used the AppContextLeakPreventer and it helped with the OOM errors due to PermGen space.
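For reference, with embedded Jetty the preventer is just a bean added to the server before any webapps deploy; a minimal sketch (package names as in recent Jetty versions, the port is arbitrary) could look like this:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.preventers.AppContextLeakPreventer;

public class StartJetty {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        // Initializes sun.awt.AppContext early, from the container's classloader,
        // so a later webapp classloader does not get pinned by it.
        server.addBean(new AppContextLeakPreventer());
        server.start();
        server.join();
    }
}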
I had this same problem with HotSpot, but with JRockit, which doesn't have a Permanent Generation, the problem went away. It's free now, so you might want to try it: https://blogs.oracle.com/henrik/entry/jrockit_is_now_free_and
Looks very much like a Permanent Generation leak. Whenever your application leaves some classes hanging around after it is undeployed, you get this problem. You can try the latest version of Plumbr; maybe it will find the left-over classes.
For readers of the future (relative to when this question was asked):
In JDK 8 the PermGen space is gone (it no longer exists). Instead there is now Metaspace, which is allocated from the native memory of the machine.
If you had problems with PermGen overflow, you might want to have a look at this explanation and these comments on the removal process.

What causes a memory leak in Java?

I have a web application deployed in Oracle iPlanet Web Server 7. The website is actively used on the Internet.
After deployment, the heap size grows, and after 2 or 3 weeks an OutOfMemory error is thrown.
So I began to use a profiling tool. I am not familiar with heap dumps. All I noticed is that char[], HashMap and String objects occupy too much of the heap. How can I tell what causes the memory leak from a heap dump? My assumptions about my memory leak:
I do a lot of logging in the code using log4j, writing to a log.txt file. Is there a problem with that?
Maybe an error removing inactive sessions?
Some static values like cities and gender types stored in a static HashMap?
I have a login mechanism but no logout mechanism; when the site is opened again, a new login is needed (silly, but not implemented yet)?
All of the above?
Do you have an idea about these, or can you add other possible causes of the memory leak?
Since Java has garbage collection, a "memory leak" would usually be the result of you keeping references to some objects when they shouldn't be kept alive any longer.
You might be able to see just from the age of the objects which ones are potentially old and being kept around when they shouldn't be.
log4j shouldn't cause any problems.
The HashMap should be okay, since you actually want to keep these values around.
Inactive sessions might be the problem if they're stored in memory and something keeps references to them.
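As an illustration of that last point (the names below are made up), a registry that only ever adds sessions is enough to exhaust the heap over a few weeks:

import java.util.HashMap;
import java.util.Map;

class SessionRegistry {
    private static final Map<String, Object> ACTIVE = new HashMap<String, Object>();

    static void onLogin(String sessionId, Object session) {
        ACTIVE.put(sessionId, session); // added on every login...
    }
    // ...but nothing ever calls ACTIVE.remove(sessionId) on timeout or logout,
    // so every session (and everything it references) stays reachable forever.
}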
There is one more thing you can try: a new project, Plumbr, which aims to find memory leaks in Java applications. It is in beta, but should be stable enough to give it a try.
As a side note, Strings and char[] are almost always at the top of a profiler's data. This rarely indicates a real problem.