We recently migrated our application from Java 7 to Java 8. From the day of cutover, we started seeing OutOfMemoryError: Metaspace issues. We tried increasing the metaspace size, but it didn't help. JVisualVM (and JConsole) shows that 60-70 K classes are being loaded into memory every day and nothing is getting unloaded. We tried all kinds of GC algorithms and nothing helped. What else could be going wrong in the newer Java version?
After some research, we found the solution to our problem. Adding the JVM argument below fixed the issue.
-Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true
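If editing the launch command isn't an option, the same flag can also be set from code. A minimal sketch, assuming it runs before the first JAXBContext is built (the JAXB RI reads the property once, when its ClassTailor class is first loaded):

```java
public class JaxbNoOptimize {
    public static void main(String[] args) {
        // Must run before any JAXBContext is created, since
        // com.sun.xml.bind.v2.bytecode.ClassTailor reads this flag at class load time.
        System.setProperty("com.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize", "true");
        System.out.println(System.getProperty("com.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize"));
    }
}
```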
The article below has good info on the issue.
https://issues.apache.org/jira/browse/CXF-2939
Hope this helps.
I'm investigating a memory leak in my Node.js script. Checking process.memoryUsage().heapUsed, the usage is around 3000 MB.
chrome://inspect also shows memory usage of around 3000 MB. However, every time I take a heap snapshot, the saved snapshot shrinks to around 73 MB, and process.memoryUsage().heapUsed also drops to that figure.
Does anyone have a theory on how this is happening?
It sounds like the garbage collector is running after you check the usage. Every once in a while it checks whether anything is no longer referenced and removes it, freeing up space. Taking a heap snapshot also forces a full garbage collection first, which is why the reported usage drops right after the snapshot. See this article for more details:
https://blog.sessionstack.com/how-javascript-works-memory-management-how-to-handle-4-common-memory-leaks-3f28b94cfbec
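The same principle is easy to reproduce on the JVM (a hedged analogy, not Node-specific): once the last strong reference to an object is dropped, the collector is free to reclaim it whenever it runs, so reported usage can fall without any code explicitly freeing memory.

```java
import java.lang.ref.WeakReference;

public class GcDemo {
    public static void main(String[] args) throws InterruptedException {
        Object big = new byte[32 * 1024 * 1024];       // strongly referenced: survives GC
        WeakReference<Object> ref = new WeakReference<>(big);
        big = null;                                    // drop the last strong reference
        // The collector may now reclaim the array; nudge it and wait briefly.
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }
        System.out.println(ref.get() == null ? "collected" : "still reachable");
    }
}
```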
Some Context: We have upgraded the environment of a web application from running on Java 7 to running on Java 8 and Tomcat 8 (64-bit arch, Heap size about 2 GB, PermGen size=256 MB, no constraints on metaspace size). After a while, we started getting the following error:
java.lang.OutOfMemoryError: Compressed class space
which means that the space needed for UseCompressedClassPointers exceeded CompressedClassSpaceSize. At that moment VisualVM showed a 2 GB metaspace size.
Now, with the VisualVM tool, we can see that the metaspace size is constantly increasing, by about 3 MB per request, while the heap does not seem to grow. Heap usage has a sawtooth shape, returning to the same low point after every GC.
I can tell that the application leaks metadata only when using a JAXB operation, but I couldn't prove it with VisualVM.
The application depends on webservices-rt-1.4 as a JAXB implementation provider. The application uses marshalling, unmarshalling. The class generation from XSD is done with maven-jaxb2-plugin-0.13.1.
Update:
After tracing class loading and unloading, I found that the same JAXB classes are loaded into memory by WebAppClassLoader multiple times but never cleaned up. Moreover, there are no instances of them in the heap. While debugging, I saw that the JDK calls the method
javax.xml.bind.JAXBContext com.sun.xml.bind.v2.ContextFactory.createContext via reflection, and that's when the classes are created.
I thought classes were cleaned up by the GC. Is it the responsibility of the class loader to clean them up?
Questions: Is there a way to analyze the metaspace objects? Why do I have a leak in metaspace but not in the heap? Aren't they related? Is that even possible?
Why would the app work fine with PermGen but not Metaspace?
I am facing a similar issue.
In my case, the memory leak was caused by a JAXBContext.newInstance(...) call.
Solutions:
wrap this new instance as a singleton (https://github.com/javaee/jaxb-v2/issues/581), or
use the -Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true VM parameter, as in the answer to Old JaxB and JDK8 Metaspace OutOfMemory Issue
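The singleton fix boils down to creating each context at most once and reusing it across requests. A sketch of that shape, with a stand-in factory instead of the real JAXBContext.newInstance(...) so the example stays self-contained:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ContextCache {
    // Counts how many times the expensive factory actually runs.
    static final AtomicInteger creations = new AtomicInteger();

    // Stand-in for the expensive, metaspace-allocating JAXBContext.newInstance(type).
    static Object expensiveCreate(Class<?> type) {
        creations.incrementAndGet();
        return new Object();
    }

    // One cached context per bound class, created at most once (the "singleton" fix).
    static final Map<Class<?>, Object> CACHE = new ConcurrentHashMap<>();

    static Object context(Class<?> type) {
        return CACHE.computeIfAbsent(type, ContextCache::expensiveCreate);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            context(String.class);           // simulate 1000 requests for the same type
        }
        System.out.println(creations.get()); // the factory ran only once
    }
}
```

JAXBContext itself is documented as thread-safe, so sharing one instance per bound class is safe; Marshaller/Unmarshaller instances are not thread-safe and should still be created per use.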
I had a similar issue; adding -Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true to setenv.sh as a JVM OPT argument resolved the OOM metaspace issue.
Twice recently I have seen RHEL 6 boxes showing a "swap used" value, as reported by 'free', somewhere in the 10^15-byte range. This is, of course, far in excess of what is allocated. Further, the '-/+ buffers/cache' line shows around 3 GB free. Both machines subsequently became unstable and had to be rebooted.
Does anyone have any ideas as to what might cause this? Someone told me that this could be indicative of a memory leak, but I cannot find any supporting information online.
Are the systems running kernel-2.6.32-573.1.1.el6.x86_64?
Could be this bug:
access.redhat.com/solutions/1571043
Linux moves unused memory pages to swap (see linux swappiness), so if you still have free memory (considering buffers/cache) you are good to go.
You probably have a process that is not used much and it was swapped out.
My project has moved from Java 7 to Java 8.
After switching to Java 8, we are seeing memory consumption grow over time.
Here are the investigations that we have done :
The issue comes only after migrating from Java 7 to Java 8.
Since metaspace is the main memory-related change from Java 7 to Java 8, we monitored metaspace, and it does not grow by more than 20 MB.
The heap also remains consistent.
Now the only path left is to analyze how memory is distributed to the process in Java 7 versus Java 8, specifically private-byte memory. Any thoughts or links here would be appreciated.
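For reference, metaspace can be watched from inside the process using the standard java.lang.management API; a minimal sketch (the pool name "Metaspace" is what HotSpot reports on Java 8+):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceProbe {
    public static void main(String[] args) {
        // Iterate all memory pools and report the one backing class metadata.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if ("Metaspace".equals(pool.getName())) {
                long usedBytes = pool.getUsage().getUsed();
                System.out.println("Metaspace used: " + (usedBytes / (1024 * 1024)) + " MB");
            }
        }
    }
}
```

Logging this periodically gives the same trend VisualVM shows, without attaching a tool.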
NOTE: this javaw application is a Swing-based application.
UPDATE 1: We analyzed native memory with the NMT tool and generated a diff of memory occupied compared to a baseline. We found that the heap remained the same but the threads are leaking all this memory. Since there is no change in the heap, I assume this leak comes from native code.
So the challenge remains open. Any thoughts on how to analyze the memory occupied by all the threads would be helpful here.
Below are the snapshots taken from native memory tracking.
In the first snapshot, you can see that Thread memory increased by 88 MB, and the arena and resource handle counts increased a lot.
In the second snapshot, you can see that malloc'd memory increased by 73 MB, but no method name is shown.
Please share any pointers for understanding these two screenshots.
You may try another GC implementation, like G1, which was introduced in Java 7 and became the default GC in Java 9. To do so, just launch your Java apps with:
-XX:+UseG1GC
There's also an interesting G1 feature, added in Java 8u20, that looks for duplicated Strings in the heap and "deduplicates" them (it only works if you activate G1, not with Java 8's default GC):
-XX:+UseStringDeduplication
Be sure to test your system thoroughly before going to production with such a change!
Here you can find a nice description of the different GCs you can use.
I encountered the exact same issue.
Heap usage was constant and only metaspace increased; NMT diffs showed a slow but steady leak in the memory used by threads, specifically in arena allocations. I tried to fix it by setting the MALLOC_ARENA_MAX=1 env var, but that was not fruitful. Profiling native memory allocation with jemalloc/jeprof showed no leak that could be attributed to client code, pointing instead to a JDK issue: the only smoking gun was a leak from malloc calls which, in theory, should come from JVM code.
Like you, I found that upgrading the JDK fixed the problem. The reason I am posting an answer here is because I know the reason it fixes the issue - it's a JDK bug that was fixed in JDK8 u152: https://bugs.openjdk.java.net/browse/JDK-8164293
The bug report mentions Class/malloc increase, not Thread/arena, but a bit further down one of the comments clarifies that the bug reproduction clearly shows increase in Thread/arena.
Consider optimising the JVM options:
Parallel collector (throughput collector)
-XX:+UseParallelGC
Concurrent collector (low-latency collector)
-XX:+UseConcMarkSweepGC
String deduplication (works only with G1)
-XX:+UseStringDeduplication
Compact ratio tuning
-XXcompactRatio:
and refer
link1
link2
In this answer of mine you can find information and references on how to profile native memory of the JVM to find memory leaks. In short, see this.
UPDATE
Did you use the -XX:NativeMemoryTracking=detail option? The results are straightforward: they show that most of the memory is allocated by malloc, which is a bit obvious on its own. Your next step is to profile your application. To analyze native methods and Java code I use (and we use in production) flame graphs with perf_events. Look at this blog post for a good start.
Note that your memory increased for threads; likely the number of threads in your application is growing. Before perf, I recommend comparing thread dumps before/after to check whether the Java thread count grows and why. You can get thread dumps with jstack/jvisualvm/jmc, etc.
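A minimal in-process complement to full thread dumps, using the standard ThreadMXBean, to check whether the thread count is growing and which pools the threads belong to (a sketch, not a replacement for jstack):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpLite {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Live vs. peak vs. total-started tells you at a glance if threads leak.
        System.out.println("live=" + threads.getThreadCount()
                + " peak=" + threads.getPeakThreadCount()
                + " started=" + threads.getTotalStartedThreadCount());
        // One line per thread: the name usually identifies the owning pool.
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            System.out.println(info.getThreadName() + " -> " + info.getThreadState());
        }
    }
}
```

Running this periodically and diffing the name lists quickly shows which pool, if any, is growing.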
This issue does not occur with Java 8 update 152. The exact root cause of why it occurred with earlier versions has still not been clearly identified.
So the basics are: I have GlassFish 2.1 and Java 1.6.0_15, and it works for a few days, but it eats up all the memory it can, seemingly no matter how high the max memory is set. It's a 32-bit JVM with the max memory now at 4 GB; it uses it all up quickly, then thrashes with the garbage collector, bringing throughput to a crawl. After a few tries I got a 3 GB heap dump and opened it with YourKit.
The usage on this server is a Swing client doing a few RMI calls and some REST HTTPS calls, plus a PHP web site calling a lot of REST HTTPS services.
It shows:
Name                                          Objects   Shallow Size   Retained Size
java.lang.Class                                22,422      1,435,872   1,680,800,240
java.lang.ref.Finalizer                     3,086,366    197,527,424   1,628,846,552
com.sun.net.ssl.internal.ssl.SSLSessionImpl 3,082,887    443,935,728   1,430,892,816
byte[]                                      7,901,167    666,548,672     666,548,672
...and so on. Gee, where did the memory go? Oh, 3 million SSLSessionImpl instances, that's all.
It seems like all the HTTPS calls are causing these SSLSessionImpl objects to accumulate, and they are never GC'ed. Looking at them in YourKit, the finalizer is the GC root. Poking around the web, this looks very much like http://forums.sun.com/thread.jspa?threadID=5266266 and http://bugs.sun.com/bugdatabase/view_bug.do;jsessionid=80df6098575e8599df9ba5c9edc1?bug_id=6386530
Where do I go next? How do I get to the bottom of this?
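Independent of the JVM bug itself, the JSSE session cache can be bounded so cached SSLSessionImpl instances expire instead of piling up. A sketch against the standard SSLSessionContext API (the 1000-entry and 60-second values are arbitrary examples, not recommendations):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSessionContext;

public class BoundSessionCache {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLSessionContext sessions = ctx.getClientSessionContext();
        sessions.setSessionCacheSize(1000); // cap cached sessions (0 means unlimited)
        sessions.setSessionTimeout(60);     // seconds before a cached session expires
        System.out.println(sessions.getSessionCacheSize() + " " + sessions.getSessionTimeout());
    }
}
```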
This seems to be fixed now by upgrading to the latest JVM. 1.6.0_18 fixes bug 4918870, which is related to this. Prior to upgrading the JVM, I had several heap dumps with 100,000-4,000,000 SSLSessionImpl instances; now there are usually fewer than 5,000.