JVM full GC can't unload classes even when permgen is full - garbage-collection

Our production server went OOM because permgen was full. Using jmap -permstat to inspect the permgen area, we found many classes loaded by com.sun.xml.ws.client.WSServiceDelegate$DelegatingLoader. The loaded classes are com.sun.proxy.$ProxyXXX, where XXX is an increasing integer sequence.
The stack trace for this class loading is as follows:
Eventually, the JVM went OOM; a full GC could not reclaim any permgen memory.
What is strange is that if I click System GC in VisualVM, the classes are unloaded and the usage of permgen goes down.
Our JDK version is 1.7.0_80, and we have added -XX:+CMSClassUnloadingEnabled. These are our GC flags:
-XX:+ExplicitGCInvokesConcurrent
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=60
-XX:+UseParNewGC
-XX:+CMSParallelRemarkEnabled
-XX:+UseCMSCompactAtFullCollection
-XX:CMSFullGCsBeforeCompaction=0
-XX:+CMSClassUnloadingEnabled
-XX:MaxTenuringThreshold=18
-XX:+UseCMSInitiatingOccupancyOnly
-XX:SurvivorRatio=4
-XX:ParallelGCThreads=16
Our code has been running unchanged for a long time; the most recent change was a WebLogic patch. This really confuses me. Could someone help me with this issue? Many thanks!

This is a known bug: https://github.com/javaee/metro-jax-ws/issues/1161
Every time a JAX-WS client is created (for instance, with the JAX-WS RI 2.2 library bundled in WebLogic Server 12.1.3), the client proxy classes are loaded into a fresh classloader such as com.sun.xml.ws.client.WSServiceDelegate$DelegatingLoader#1:
[Loaded com.sun.proxy.$Proxy979 from com.sun.xml.ws.client.WSServiceDelegate$DelegatingLoader]
Solution/Workaround:
Replace the JAX-WS client library with a version in which this bug is fixed.
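Until the library can be replaced, one common mitigation is to create the JAX-WS Service (and the port obtained from it) once and reuse it rather than building a new client per request, so new DelegatingLoader instances and $ProxyXXX classes are not generated on every call. A minimal sketch, assuming a hypothetical WSDL location and service QName that are not from the original post:

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class CachedStockService {
    // Assumed endpoint details -- replace with your own WSDL URL and QName.
    private static final URL WSDL =
            CachedStockService.class.getResource("/wsdl/stock.wsdl");
    private static final QName SERVICE_QNAME =
            new QName("http://example.com/stock", "StockService");

    // Created once. In the affected JAX-WS RI versions, every call to
    // Service.create(...) builds a new DelegatingLoader and new $ProxyXXX
    // classes that are never unloaded, so the Service must not be
    // recreated for each request.
    private static final Service SERVICE = Service.create(WSDL, SERVICE_QNAME);

    public static Service instance() {
        return SERVICE;
    }
}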

Related

ApplicationShutdownHooks Memory leak

A java.lang.ApplicationShutdownHooks object might be causing a memory leak.
After the Spring Boot application runs for a long time, this memory leak appears and the system crashes.
If you use Log4j 2.13+, you can set -Dlog4j2.isWebapp=true; if you use an older 2.x Log4j version, set -Dlog4j.shutdownHookEnabled=false instead.
Otherwise, Log4j registers shutdown hooks via DefaultShutdownCallbackRegistry.
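A minimal sketch of passing the property at startup (the jar name and launch style are assumptions; the same property can also be placed in a log4j2.component.properties file on the classpath):

# Log4j 2.13 or newer
java -Dlog4j2.isWebapp=true -jar my-spring-boot-app.jar
# older Log4j 2.x
java -Dlog4j.shutdownHookEnabled=false -jar my-spring-boot-app.jar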

Issue with heap memory peaking

I've been having trouble with my webapp. My heap memory peaks to nearly its maximum size over about 30 minutes, and then it crashes my system.
I have googled and tried nearly everything. I have been monitoring my heap memory using Java VisualVM, JConsole and Oracle Java Mission Control (I know it's outdated).
So here is what I have tried until now:
1. Monitored heap memory to see if there is a specific thread running at a specific time that peaks the memory. (This is not the case, as it doesn't peak at specific times.)
2. Increased my heap memory size.
3. Followed the instructions from http://karunsubramanian.com/websphere/top-4-java-heap-related-issues-and-how-to-fix-them/
So my questions are:
1. Is there any tool that can help me see whether I have a memory leak, and where it comes from?
2. Has anyone experienced the same issue?
3. Any pointers on how to manage this kind of problem?
By the way, I am quite new to this area, so please be kind.
Environment: Tomcat 7 on Windows Server 2012, Java 7.
If you need more information, please comment.
You need to configure the JVM to create a heap dump when an OutOfMemoryError occurs:
-XX:+HeapDumpOnOutOfMemoryError
Then analyze the heap dump to find which classes are using the memory.
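A minimal sketch of how these options might be added for Tomcat on Windows when Tomcat is started via startup.bat; the dump path is an assumption, so point it at a directory with enough free disk space (when Tomcat runs as a Windows service, the same flags go into the service's Java options instead):

rem bin\setenv.bat (create the file if it does not exist)
set CATALINA_OPTS=%CATALINA_OPTS% -XX:+HeapDumpOnOutOfMemoryError
set CATALINA_OPTS=%CATALINA_OPTS% -XX:HeapDumpPath=C:\dumps

Once a dump (.hprof) file is written, open it in Eclipse Memory Analyzer (MAT) or VisualVM and look at the dominator tree / biggest retained objects to see which classes hold the memory.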

Jboss-6.1 Application running very slow

My application is running on JBoss 6.1, and after a few days it runs very slowly. This is the situation I face every week; to work around it, I kill the Java process, clear the temp and work folders, and restart JBoss. Is there any other way to clean up the memory / manage the application? Kindly give me suggestions for both Linux and Windows platforms.
Kindly help, anyone.
Thanks & Regards,
Sharath
Based on the RAM size of your system, you can increase the following parameters in run.conf (for Linux) or run.conf.bat (for Windows):
-Xms, -Xmx, -XX:MaxPermSize.
-Xms512M -Xmx1024M -XX:MaxPermSize=128M
The -Xmx flag specifies the maximum memory allocation pool for the Java Virtual Machine (JVM), while -Xms specifies the initial memory allocation pool.
-XX:MaxPermSize sets the size of the Permanent Generation.
The Permanent Generation is where class files are kept; these are the result of compiled classes and JSP pages. If this space fills up, a full garbage collection is triggered. If the full garbage collection cannot clean out old, unreferenced classes and there is no room left to expand the permanent space, an Out-of-Memory error (OOME) is thrown and the JVM will crash.
Hope you are aware of these three flags.
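A minimal sketch of what the change might look like in run.conf on Linux; the block below follows the default run.conf pattern, and the sizes are examples only, to be tuned to the machine's RAM:

# $JBOSS_HOME/bin/run.conf -- default JVM options if JAVA_OPTS is not set
if [ "x$JAVA_OPTS" = "x" ]; then
   JAVA_OPTS="-Xms512M -Xmx1024M -XX:MaxPermSize=256M"
fi

On Windows, the equivalent values go into the JAVA_OPTS line in run.conf.bat.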

Apparent classloader leak in Play! 2.3.4

I have walked several heap snapshots of a Play 2.3.4 application in JProfiler and VisualVM (ironically, VisualVM seems to be more helpful) and have found that upon a Play! reload, classloaders are not properly replaced: several copies of old classloaders, still holding old instances of old classes, remain in memory. After several application reloads, the application crashes with an out-of-memory error (the heap consistently exhausts before the permanent generation, probably due to the memory-intensive nature of the application).
While tracking down GC roots, I found the following implicating evidence. There are 4 instances of PlayRun$$anonfun$10$$anon$2 with live objects, whereas, as I understand it, there should only ever be 1; this is underscored by the observation that each of these classloader instances contains a duplicate copy of my application's classes.
GC Roots of 4 PlayRun$$anonfun$10$$anon$2 instances retained in memory by thread stack frames:
contextClassLoader of BoneCP-keep-alive-scheduler
inheritedAccessControlContext of play-akka.actor.default-dispatcher-27
contextClassLoader of play-akka.actor.default-dispatcher-32
contextClassLoader of pool-16-thread-3 (java.util.concurrent executor service?)
Why are these other threads retaining references to obsolete Play! application classloaders? Doesn't Play! shut down dependent threads like these to safeguard against this? Is it possible that some phase of the reload process failed to execute properly, resulting in this bad object-retention state?
The application is built on top of Play! 2.3.4 and SBT 0.13.6. This problem did not occur prior to upgrading from Play! 2.2.2 / SBT 0.13.1.
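For context on why those thread roots matter: a thread captures its creator's contextClassLoader when it is constructed, so any pool or scheduler started from application code and never shut down on reload keeps the old application classloader (and every class it loaded) reachable. A hypothetical illustration, not taken from Play's or the application's actual code:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LeakyComponent {
    // Worker threads created here inherit the current application
    // classloader as their contextClassLoader (compare the
    // "pool-16-thread-3" GC root above).
    private final ExecutorService pool = Executors.newFixedThreadPool(3);

    // Unless this is invoked when the application stops (for example from a
    // Play lifecycle stop hook), the live worker threads keep the old
    // classloader reachable across reloads.
    public void stop() {
        pool.shutdownNow();
    }
}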

SocketException between our Load Balancers and Tomcat during Garbage Collection

We have noticed the following problem: whenever our Tomcat JVM performs a full GC, requests to create a connection between the LB and Tomcat fail. This is very problematic, since these requests never get the chance to reach the application server.
This problem occurred even when we pointed one Tomcat directly at the other without any LB in between.
Is there any setting in the JVM / Tomcat / Linux that would make the HTTP connection wait until the GC ends, so that the application JVM can receive the request?
We are using Java 6, Tomcat 7, and Ubuntu Linux.
Thanks,
Yosi
Have you looked into using the concurrent garbage collector via the -XX:+UseConcMarkSweepGC option? It performs most garbage collection work in the background, so that there aren't nearly as many (if any) "stop the world" full GCs.
You may need to enable concurrent garbage collection as described in http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html
-XX:+UseConcMarkSweepGC
Also try other GC configs.
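A minimal sketch of a CMS configuration for that Java 6 / Tomcat 7 setup; the occupancy threshold is an assumed starting point to tune, and the last flag only reports how long the application is actually stopped, so you can correlate pauses with the connection failures:

-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:+CMSParallelRemarkEnabled
-XX:CMSInitiatingOccupancyFraction=70
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+PrintGCApplicationStoppedTime

These can be appended to CATALINA_OPTS like any other JVM option.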
