Private bytes increase for a javaw process in Java 8 - multithreading

My project has recently moved from Java 7 to Java 8.
After switching to Java 8, we are seeing the memory consumed by the process grow steadily over time.
Here are the investigations we have done so far:
The issue appears only after migrating from Java 7 to Java 8.
Since Metaspace is the main memory-related change between Java 7 and Java 8, we monitored it; it does not grow beyond 20 MB.
The heap also remains consistent.
Now the only path left is to analyze how memory is distributed to the process in Java 7 versus Java 8, specifically private bytes. Any thoughts or links here would be appreciated.
NOTE: this javaw application is a Swing-based application.
UPDATE 1: We analyzed the native memory with the NMT (Native Memory Tracking) tool and generated a diff of the memory occupied compared to a baseline. We found that the heap remained the same, but the Thread category is leaking all this memory. Since there is no change in the heap, I assume this leak comes from native allocations.
So the challenge is still open. Any thoughts on how to analyze the memory occupied by all the threads would be helpful here.
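For reference, here is roughly how the NMT baseline and diff were collected (a sketch; the pid is a placeholder, and NMT must be enabled at startup):
    javaw -XX:NativeMemoryTracking=detail ...
    jcmd <pid> VM.native_memory baseline
    (let the application run until the memory has grown)
    jcmd <pid> VM.native_memory detail.diff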
Below are the snapshots taken from native memory tracking.
In this snapshot you can see that the Thread category grew by 88 MB, and that the arena and resource handle counts increased a lot.
In this snapshot you can see that malloc grew by 73 MB, but no method name is shown.
Any help interpreting these two screenshots would be appreciated.

You may try another GC implementation, such as G1, introduced in Java 7 and the default collector as of Java 9. To do so, just launch your Java apps with:
-XX:+UseG1GC
There's also an interesting feature of G1 in Java 8u20 that can look for duplicated Strings in the heap and "deduplicate" them (this only works if you activate G1, not with Java 8's default collector):
-XX:+UseStringDeduplication
Be sure to test your system thoroughly before going to production with such a change!
Here you can find a nice description of the different GCs you can use; a combined launch line is sketched below.
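Put together, a launch command for a Swing app might look like this (a sketch; the jar name is a placeholder):
    javaw -XX:+UseG1GC -XX:+UseStringDeduplication -jar YourSwingApp.jar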

I encountered the exact same issue.
Heap usage was constant and only Metaspace increased slightly, but NMT diffs showed a slow and steady leak in the memory used by threads, specifically in the arena allocations. I tried to work around it by setting the MALLOC_ARENA_MAX=1 environment variable, but that was not fruitful. Profiling native memory allocation with jemalloc/jeprof showed no leak that could be attributed to client code, pointing instead to a JDK issue: the only smoking gun there was a leak from malloc calls which, in theory, should come from JVM code.
Like you, I found that upgrading the JDK fixed the problem. The reason I am posting an answer here is that I know why it fixes the issue: it is a JDK bug that was fixed in JDK 8u152: https://bugs.openjdk.java.net/browse/JDK-8164293
The bug report mentions a Class/malloc increase rather than Thread/arena, but a comment further down clarifies that the bug reproduction clearly shows an increase in Thread/arena.
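For reference, the arena workaround is typically applied like this before launching the JVM (a sketch; the jar name is a placeholder), though as noted above it was not fruitful in my case:
    export MALLOC_ARENA_MAX=1
    java -jar your-app.jar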

Consider tuning the JVM options:
Parallel collector (throughput collector):
-XX:+UseParallelGC
Concurrent collector (low-latency collector):
-XX:+UseConcMarkSweepGC
String deduplication (requires G1):
-XX:+UseStringDeduplication
Compaction ratio (a JRockit option):
-XXcompactRatio:
A quick way to verify which collector actually took effect is sketched below.
Also refer to
link1
link2
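To double-check which collector the JVM actually selected after changing these options, here is a minimal sketch (not from the original answer) that prints the registered GC MXBeans:
    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcCheck {
        public static void main(String[] args) {
            // Prints the collectors the running JVM actually selected,
            // e.g. "PS Scavenge"/"PS MarkSweep" for the parallel collector
            // or "G1 Young Generation"/"G1 Old Generation" for G1.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName());
            }
        }
    }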

In my answer here you can find information and references on how to profile the native memory of the JVM to find memory leaks. In short, see this.
UPDATE
Did you use the -XX:NativeMemoryTracking=detail option? The results are fairly straightforward: they show that most of the memory was allocated via malloc, which is a bit obvious on its own. Your next step is to profile your application. To analyze both native and Java methods, I use (and we use in production) flame graphs with perf_events. Look at this blog post for a good start.
Note that your memory increased in the Thread category, so the number of threads in your application is likely growing. Before reaching for perf, I recommend comparing thread dumps taken before and after to check whether the number of Java threads grows and why. You can get thread dumps with jstack/jvisualvm/jmc, etc.
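For example, a rough way to compare the Java thread count at two points in time (the pid is a placeholder; the grep simply counts one state line per Java thread, so treat it as an approximation):
    jstack <pid> > threads-before.txt
    (wait while the memory grows)
    jstack <pid> > threads-after.txt
    grep -c "java.lang.Thread.State" threads-before.txt threads-after.txt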

This issue does not occur with Java 8 update 152. The exact root cause of why it occurred with earlier versions has still not been clearly identified.

Related

Trying to see the GC behavior: how can I get the heap size and used memory on a Mac Pro?

I am adding the JVM args -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio to see how the GC behavior changes. I got some explanations here (what is the purpose of -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio).
The question is how I can get a memory usage picture like the one in the answer. I want to make sure the modifications I made really change the GC behavior.
It is a tool called VisualVM.
There are other alternatives such as JConsole or Java Mission Control.
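If you also want to log the numbers programmatically alongside such a tool, a minimal sketch (my own example, printing heap usage every few seconds) could look like this:
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapWatcher {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                // Used vs. committed vs. max heap, as reported by the JVM itself
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("heap used=%d MB committed=%d MB max=%d MB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
                Thread.sleep(5_000);
            }
        }
    }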

JVM memory leak diagnosis (jemalloc)

A piece of software from a vendor eats an unholy amount of memory over time.
This is a Java application, and I've inspected the heap and Metaspace inside the JRE; everything is OK there (it uses < 1 GB of RAM). The issue instead seems to be some native memory allocation within the code (which I don't have). So I used jemalloc to profile it, and the following is the result.
Is it safe to say that I should ask the vendor to fix their code, or what else could be the issue here? How should the output of jemalloc be interpreted?
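For reference, the profile was collected roughly like this (a sketch; the library path, sampling settings, and jar name are placeholders, and jemalloc must be built with profiling enabled):
    export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so
    export MALLOC_CONF=prof:true,lg_prof_interval:30,lg_prof_sample:17
    java -jar vendor-app.jar
    jeprof --show_bytes --svg $(which java) jeprof.*.heap > profile.svg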

GC graph shows there is a memory leak, but I am unable to track it down in the dump

We have a Java microservice in our application which is connected to Postgres as well as Phoenix. We are using Spring Boot 2.x.
The problem is that while running an endurance test of about 8 hours, we observed that the used heap keeps increasing even though we applied the recommended VM arguments; it looks like a memory leak. We analysed the heap dump, but the root cause is not clear to us. Can some experts help based on the results?
The VM arguments that we are actually using are:
-XX:ConcGCThreads=8 -XX:+DisableExplicitGC -XX:InitialHeapSize=536870912 -XX:InitiatingHeapOccupancyPercent=45 -XX:MaxGCPauseMillis=1000 -XX:MaxHeapFreeRatio=70 -XX:MaxHeapSize=536870912 -XX:MinHeapFreeRatio=40 -XX:ParallelGCThreads=16 -XX:+PrintAdaptiveSizePolicy -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:StringDeduplicationAgeThreshold=1 -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseG1GC -XX:+UseStringDeduplication
We expect the used heap to stay flat in the GC log; however, the memory is not released and keeps increasing.
Heap Dump:
GC graph:
I'm not sure which tool you are using above, but I would look at the dominator hierarchy in the heap. Eclipse MAT is a good tool for analysing heap dumps; it can point you in the direction of what is actually holding the memory, and you can decide whether or not to categorise it as a leak. Regardless of the label you attach, if the application is going to crash after a while because it runs out of memory, then it is a problem.
This blog also discusses diagnosing this type of problem.
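If it helps, a dump that Eclipse MAT can open can be captured roughly like this (a sketch; pid and path are placeholders), or written automatically on OOM with the usual flags:
    jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>
    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp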

Used heap reduces dramatically after generating a heap dump or performing a manual GC

This is my first post on the Stack Overflow forum. We have recently been experiencing some Java OOME issues and, using the jvisualvm, YourKit, and Eclipse MAT tools, have been able to identify and fix some of them...
One behavior observed during the analysis is that when we create a heap dump manually using JConsole or jvisualvm, the used heap size in the JVM reduces dramatically (from 1.3 GB to 200 MB) after the dump is generated.
Can someone please explain this behavior? It is a blessing in disguise, since whenever I see the used heap size above 1.5 GB, I perform a manual GC and the system is back to lower used heap numbers, with no JVM restarts needed.
Let me know if any additional details are required.
Thanks,
Guru
When you use JConsole to create the dump file, there are 2 parameters: the first one is the file name to generate (complete path), and the second one (true by default) indicates whether you want to perform a GC before taking the dump. Set it to false if you don't want a full GC before dumping.
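Programmatically, the same operation is exposed through the HotSpotDiagnosticMXBean; a minimal sketch (the file path is a placeholder) that passes false for the second parameter so that unreachable objects stay in the dump:
    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class DumpWithoutGc {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diag =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            // Second argument ("live"): true dumps only reachable objects (which is why the
            // used heap drops), false keeps unreachable (garbage) objects in the dump as well.
            diag.dumpHeap("/tmp/heap-with-garbage.hprof", false);
        }
    }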
This is an old question but I found it while asking a new question of my own, so I figured I'd answer it.
When you generate a heap dump, the JVM performs a System.gc() before it writes the dump, which collects unreferenced objects and effectively reduces your heap utilization. I am actually looking for a way to disable that System GC so I can inspect the garbage objects that are churning in my JVM.
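One way to get a dump that still contains the garbage objects (a sketch; pid and path are placeholders) is to use jmap without the live suboption, which skips the live-objects-only filtering:
    jmap -dump:format=b,file=/tmp/heap-all.hprof <pid>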

How do I fix leaking SSLSessionImpl in Glassfish?

The basics: I have GlassFish 2.1 and Java 1.6.0_15, and it will work for a few days, but it eats up all the memory it can, seemingly no matter how high the max memory is set. It's a 32-bit JVM with the max memory now at 4 GB; it uses it all up quickly, then thrashes in the garbage collector, bringing throughput to a crawl. So after a few tries I got a 3 GB heap dump and opened it with YourKit.
The usage on this server is a Swing client making a few RMI calls and some REST HTTPS calls, plus a PHP web site calling a lot of REST HTTPS services.
It shows:
Name                                          Objects     Shallow Size (bytes)   Retained Size (bytes)
java.lang.Class                               22,422      1,435,872              1,680,800,240
java.lang.ref.Finalizer                       3,086,366   197,527,424            1,628,846,552
com.sun.net.ssl.internal.ssl.SSLSessionImpl   3,082,887   443,935,728            1,430,892,816
byte[]                                        7,901,167   666,548,672            666,548,672
...and so on. Gee, where did the memory go? Oh, 3 million SSLSessionImpl instances, that's all.
It seems that all the HTTPS calls are causing these SSLSessionImpl objects to accumulate, but they are never GC'ed. Looking at them in YourKit, the Finalizer is the GC root. Poking around the web, this looks very much like http://forums.sun.com/thread.jspa?threadID=5266266 and http://bugs.sun.com/bugdatabase/view_bug.do;jsessionid=80df6098575e8599df9ba5c9edc1?bug_id=6386530
Where do I go next? How do I get to the bottom of this?
This seems to be fixed now by an upgrade to the latest JVM: 1.6.0_18 fixes bug 4918870, which is related to this. Prior to upgrading the JVM, I had several heap dumps with 100,000 to 4,000,000 SSLSessionImpl instances; now there are usually fewer than 5,000.
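For anyone who cannot upgrade right away, a possible mitigation sketch (my own assumption, not something confirmed in this thread, and it may not help if the backlog is purely in the finalizer queue) is to cap the default client session cache:
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLSessionContext;

    public class CapSslSessionCache {
        public static void main(String[] args) throws Exception {
            // Limit how many client-side SSL sessions are cached and how long they live.
            SSLSessionContext clientSessions = SSLContext.getDefault().getClientSessionContext();
            clientSessions.setSessionCacheSize(1000);  // arbitrary cap; tune for your workload
            clientSessions.setSessionTimeout(60 * 60); // seconds
        }
    }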
