JAXB performance issue with WebLogic 12.1.2's eclipselink.jar - jaxb

When generating a several-hundred-page docx file with embedded images using docx4j on WebLogic 12.1.2, performance is about 5 times slower than the same operation on WebLogic 12.1.3 running on the same virtual machine with the same JVM configuration -- about 20 minutes on 12.1.2 vs. 4 minutes on 12.1.3.
I ran top to get CPU and memory stats and jstat -gc to get garbage collection stats. While generating the doc, the CPU pegs at 100% on 12.1.2 vs. 15% on 12.1.3. Eden usage grows rapidly on 12.1.2, so it gets garbage collected frequently, vs. growing quite slowly on 12.1.3. I also ran jstack several times while the operation was running to view the thread stacks. Nine times out of ten, on 12.1.2 the stack shows that docx4j is calling EclipseLink's JAXB and that JAXB is class loading -- a lot of class loading that takes a lot of CPU time (possibly related to: "Do I have a JAXB classloader leak but don't have direct control over docx4j").
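For reference, that kind of sampling can be done along these lines (the pid 12345 is a placeholder):
jstat -gc 12345 5000
jstack 12345 > stack.txt
jstat -gc prints GC statistics every 5000 ms; jstack snapshots all thread stacks for inspection.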
The issue has been isolated to eclipselink.jar, which contains the JAXB libraries: when the eclipselink.jar from the WebLogic 12.1.3 shared libraries (EclipseLink v2.5.2) is copied over the WebLogic 12.1.2 version (EclipseLink v2.4.2) and the docx generation is run again on WebLogic 12.1.2, performance is good, just as on 12.1.3.
Question:
Can the performance of WebLogic 12.1.2 w/ EclipseLink 2.4.2 be improved without swapping out libraries? For instance, are there any JVM options that may help?
Current JVM options:
-Xms4096m -server -Xmx4096m -XX:MaxPermSize=512m -XX:MaxGCPauseMillis=69 -XX:ParallelGCThreads=8 -XX:ThreadStackSize=2048 -XX:SurvivorRatio=32 -XX:+DisableExplicitGC -XX:+AggressiveHeap -Xloggc:/var/tmp/gc.log -Djava.awt.headless=true

You could try creating a WebLogic Shared Library containing EclipseLink 2.5.2 and referencing it from your application deployed on WebLogic 12.1.2:
http://blog.bdoughan.com/2012/10/updating-eclipselink-in-weblogic.html
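A sketch of the application side, assuming the upgraded eclipselink.jar has already been deployed to 12.1.2 as a shared library named eclipselink (the library name and package filter below are placeholders) -- in META-INF/weblogic-application.xml:

<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
  <!-- pull in the deployed shared library -->
  <library-ref>
    <library-name>eclipselink</library-name>
  </library-ref>
  <!-- prefer its EclipseLink classes over the server's bundled copy -->
  <prefer-application-packages>
    <package-name>org.eclipse.persistence.*</package-name>
  </prefer-application-packages>
</weblogic-application>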

Related

Netty webclient memory leak in tomcat server

I am observing a swap memory issue on our Tomcat servers, which are installed on Linux machines. When I collected a heap dump, I got this while analyzing it:
16 instances of "io.netty.buffer.PoolArena$HeapArena", loaded by "org.apache.catalina.loader.ParallelWebappClassLoader @ 0x7f07994aeb58" occupy 201,697,824 (15.40%) bytes.
I have seen in the blog post "Memory accumulated in netty PoolChunk" that adding -Dio.netty.allocator.type=unpooled showed a significant reduction in memory. Where do we need to add this property on our Tomcat servers?
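For a standard Tomcat install, JVM system properties typically go into CATALINA_OPTS in $CATALINA_BASE/bin/setenv.sh (create the file if it does not exist; catalina.sh sources it on startup); a minimal sketch:

CATALINA_OPTS="$CATALINA_OPTS -Dio.netty.allocator.type=unpooled"

After adding it, restart Tomcat and verify the flag appears in the process arguments (e.g. via ps or jps -v).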

JVM GC behaviour on heap dump and unnecessary heap usage

We have a problem tuning the memory management of our JVMs. The very same application runs on the k8s cluster, but one pod's JVM heap usage rises to ~95%; when we try to get a heap dump on that pod, somehow GC runs, heap usage drops suddenly, and we are left with a tiny heap dump.
I think the old space has grown unnecessarily and GC has not reclaimed memory (for nearly 15 hours). Unfortunately we can't see what is occupying the space, because the heap dump is so small once GC is forced.
All 3 pods have memory of 1500m; here is the JVM heap usage percentage graph (3 pods, green being the problematic one):
Details:
openjdk 15.0.1 2020-10-20
OpenJDK Runtime Environment AdoptOpenJDK (build 15.0.1+9)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 15.0.1+9, mixed mode, sharing)
JVM Parameters:
-XX:MaxRAMPercentage=75
-XX:InitialRAMPercentage=75
-server
-Xshare:off
-XX:MaxMetaspaceSize=256m
-Dsun.net.inetaddr.ttl=60
-XX:-OmitStackTraceInFastThrow
-XX:+ShowCodeDetailsInExceptionMessages
The questions are:
Why is a full GC triggered when we try to get a heap dump?
What is the motivation behind the GC not reclaiming memory, causing the application to run with heap usage between ~70% and ~95%, when the JVM can work perfectly well with only 10%?
What can be done to force the JVM to GC more aggressively to avoid this situation? And should that be done in a production environment?
The JVM heap dump procedure has 2 modes:
live objects - this mode executes a Full GC alongside the heap dump. This is the default option.
all objects - the heap dump includes all objects on the heap, both reachable and unreachable.
The heap dump mode can usually be chosen via a tool-specific option, as in the jmap example below.
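For example, with jmap the mode is selected by the live option (the pid 12345 is a placeholder):

jmap -dump:live,format=b,file=heap.hprof 12345
jmap -dump:format=b,file=heap.hprof 12345

The first command dumps live objects only and runs a Full GC first; the second dumps all objects, reachable and unreachable.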
Answering your questions
Why is a full GC triggered when we try to get a heap dump?
Answered above
What is the motivation behind the GC not reclaiming memory, causing the application to run with heap usage between ~70% and ~95%, when the JVM can work perfectly well with only 10%?
Reclaiming memory requires CPU resources and impacts application latency. While the JVM is operating within its memory limits, it will mostly avoid expensive GC cycles.
Recent container-driven development is changing some things in the JVM GC department, but the statement above still holds for the default GC configuration.
What can be done to force the JVM to GC more aggressively to avoid this situation? And should that be done in a production environment?
The original question lacks a concrete problem statement, but general advice is:
manage memory limits per container (the JVM derives its heap size from the container limits unless they are overridden explicitly)
forcing GC periodically is possible, though it is unlikely to be the solution to any problem
G1GC has a wide range of tuning options relevant for containers (see the sketch below)
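As an illustration of the last two points, JDK 12+ G1 can be configured to run periodic GC cycles and return unused committed memory; a minimal sketch (the interval is illustrative):

-XX:+UseG1GC
-XX:G1PeriodicGCInterval=60000

G1PeriodicGCInterval is in milliseconds; a GC cycle is attempted if none has run within that window.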

IBM J9 View nursery and tenure areas using JMX

Is there a way, using JMX (e.g. jConsole), to view the tenure and nursery areas in the IBM J9 JVM?
I connected to an IBM WebSphere instance (which is using the gencon GC policy - I checked this in the logs using verbose GC) and I can see a few Memory Pools:
Memory Pool "Java heap"
Memory Pool "JIT code cache"
Memory Pool "class storage"
Memory Pool "JIT data cache"
Memory Pool "miscellaneous non-heap storage"
Unfortunately I can't find any way to view the tenured and nursery areas.
I checked on HotSpot, and there are explicit memory pools for the Eden, Survivor and Old generations.
Is there a way to view those areas in the J9 JVM using JMX?
Details about my JVM:
Java(TM) SE Runtime Environment (build pxa6460_26sr8ifix-20140630_01(SR8+IX90144+IV62044))
IBM J9 VM (build 2.6, JRE 1.6.0 Linux amd64-64 Compressed References 20140409_195736 (JIT enabled, AOT enabled)
J9VM - R26_Java626_SR8_20140409_1526_B195736
JIT - r11.b06_20140409_61252
GC - R26_Java626_SR8_20140409_1526_B195736_CMPRSS
J9CL - 20140409_195736)
JCL - 20140406_01
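For reference, the pools listed above are the standard platform MemoryPoolMXBeans -- the same data jConsole displays. A minimal sketch of enumerating them from inside the JVM:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class ListPools {
    public static void main(String[] args) {
        // Print every memory pool this JVM exposes over JMX, with its current usage.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + " (" + pool.getType() + "): " + pool.getUsage());
        }
    }
}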
There's no way to get the tenure and nursery areas of the IBM J9 JVM via JMX.
However, IBM provides some consumability tools for parsing verbosegc files (and many other tools too!):
https://www.ibm.com/developerworks/java/jdk/tools/gcmv/
You can load a verbose GC file into this and view either the raw data, structured data, or line plots.
Incidentally, in the verbosegc file the tenured and nursery stats begin with tags such as:
<mem type="nursery"
<mem type="tenured"
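If verbose GC logging is not already enabled, on J9 it can typically be switched on with (the log path is illustrative):

-verbose:gc -Xverbosegclog:/var/tmp/verbosegc.log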

Java OutOfMemoryError in Windows Azure Virtual Machine

When I run my Java application on a Windows Azure Ubuntu 12.04 VM with 4 cores at 1.6 GHz and 7 GB of RAM, I get the following out-of-memory error after a few minutes.
java.lang.OutOfMemoryError: GC overhead limit exceeded
I have a swap size of 15 GB, and the max heap size is set to 2 GB. I am using Oracle Java 1.6. Increasing the max heap size only delays the out-of-memory error.
It seems the JVM is not doing garbage collection.
However, when I run the same Java application on my local Windows 8 PC (Core i7) with the same JVM parameters, it runs fine. The heap size never exceeds 1 GB.
Is there any extra setting on a Windows Azure Linux VM for running Java apps?
On the Azure VM, I used the following JVM parameter
-XX:+HeapDumpOnOutOfMemoryError
to get a heap dump. The heap dump shows that an actor mailbox and Camel messages are taking up all of the 2 GB.
In my Akka application, I have used Akka Camel Redis to publish processed messages to a Redis channel.
The out-of-memory error goes away when I stub out the above Camel actor. It looks as though the Akka Camel Redis actor is not performant on the VM, which has a slower CPU clock speed than my Xeon CPU.
Shing
The GC throws this exception when too much time is spent in garbage collection without collecting anything. I believe the default settings are 98% of CPU time being spent on GC with only 2% of heap being recovered.
This is to prevent applications from running for an extended period of time while making no progress because the heap is too small.
You can turn this off with the command line option -XX:-UseGCOverheadLimit
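For reference, a minimal invocation combining the flags mentioned in this thread (the heap size, dump path, and jar name are illustrative):

java -Xmx2g -XX:-UseGCOverheadLimit -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp -jar app.jar

Note that disabling the overhead limit only trades the early OutOfMemoryError for longer GC thrashing; the underlying retention still has to be fixed.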

Running multiple instances of a jar at the same time: memory issue

Right now I am running multiple instances of a jar (code written in Scala) at the same time on a cluster with 24 cores and 64 GB of memory, running Ubuntu 11.04 (GNU/Linux 2.6.38-15-generic x86_64). I observe heavy memory usage that is super-linear in the number of instances I run. To be more specific, here is what I am doing:
Write the code in Scala and use sbt to package it into a jar.
Log in to the cluster and use screen to open a new screen session.
Open multiple windows in this screen.
In each window, run java -cp myjar.jar main.scala.MyClass
What I observe is that when I run only 7 instances, about 10 GB of memory is used and everything is fine. When I run 14 instances, memory is quickly eaten up, all 64 GB are occupied, and the machine slows down dramatically; it is even difficult to log in. Monitoring the machine with htop, I can see that only a few cores are running at a time. Can anyone tell me what is happening to my program and how to fix it so that I can use the computational resources efficiently? Thanks!
To use the computational resources efficiently, you would have to start one jar that starts multiple threads in one JVM. If you start 14 instances of the same jar, you have 14 isolated JVMs running, each with its own heap.
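A minimal sketch of that approach, where doWork stands in for whatever main.scala.MyClass does (a hypothetical placeholder):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MultiRunner {
    // Placeholder for the work one instance of main.scala.MyClass performs.
    static void doWork(int id) {
        System.out.println("worker " + id + " on thread " + Thread.currentThread().getName());
    }

    public static void main(String[] args) throws InterruptedException {
        int workers = args.length > 0 ? Integer.parseInt(args[0]) : 14;
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            final int id = i;
            pool.submit(() -> doWork(id)); // all workers share one heap, one JIT, one GC
        }
        pool.shutdown();                         // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.DAYS); // wait for the workers to finish
    }
}

This way a single JVM's -Xmx bounds the total heap usage, instead of 14 heaps growing independently.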
Get the IDs of all Java processes using jps.
Find the most heavyweight process using jmap.
Get a heap dump of that process using the same jmap.
Analyze the heap usage with jhat (example commands below).
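A sketch of those steps on the shell (the pid 12345 is a placeholder):

jps -l
jmap -heap 12345
jmap -dump:format=b,file=heap.bin 12345
jhat heap.bin

jhat then serves the dump for browsing at http://localhost:7000 by default.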
Alternatively, you could copy the dump locally and explore it with tools like the Eclipse Memory Analyzer Open Source Project.
If, after solving this issue, you totally loved these shell-like tools (as I do), go through the complete list of Java troubleshooting tools - it will save you a lot of time, so you can go to the pub earlier instead of staying late debugging memory/CPU issues.
