Heap Usage (MB)
Metric      Actual
Max:        8126
Used:       2526 (31%)
Committed:  8126
Init:       8192
Non Heap Usage (MB)
Metric      Actual
Max:        2144
Used:       200 (9%)
Committed:  326
Init:       23
Thread Usage
Metric      Actual
Live:       585
Daemon:     557 (95%)
I've already set up my server with this configuration:
set "JAVA_OPTS=-Xms8g -Xmx8g -XX:MaxPermSize=2g -XX:+UseParallelGC"
set "JAVA_OPTS=%JAVA_OPTS% -Djava.net.preferIPv4Stack=true"
set "JAVA_OPTS=%JAVA_OPTS% -Djboss.modules.system.pkgs=org.jboss.byteman"
My problem is that sometimes the server hangs by itself and I need to restart it manually. Thread usage is always above 90% when the server starts. Is that normal? What should I do to avoid this kind of problem, and what are the causes?
-XX:MaxPermSize=2g
This is very high. PermGen rarely needs anywhere near 2 GB, and -XX:MaxPermSize was removed entirely in Java 8 (replaced by Metaspace).
-Xms8g -Xmx8g -XX:+UseParallelGC
This combination accepts full-GC pauses of up to about 8 seconds: the parallel collector optimizes for throughput, not latency, and its stop-the-world pauses grow roughly with heap size.
Thread usage is always above 90% when the server starts
What do you mean by this?
Then you'll need to figure out what actually happens when the server "hangs on itself". Check the CPU usage of the Java process when this happens.
When there is almost no CPU usage, it could be a deadlock, or the application could be waiting on something else. A thread dump will help you here.
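For example, a minimal capture with the JDK's own tools, assuming a POSIX shell (jps and jstack work the same from a Windows cmd prompt, which the batch-style config above suggests; the jboss-modules match is an assumption about how the process shows up in jps):

# Find the JBoss PID, then dump all threads with lock information
PID=$(jps -l | grep jboss-modules | awk '{print $1}')
jstack -l "$PID" > threaddump.txt
# jstack prints a "Found one Java-level deadlock" section if threads are mutually stuck
grep -A 10 "Found one Java-level deadlock" threaddump.txt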
When it is high, it could be either application code or GC related. Have a look at the GC logs and run a profiler like async-profiler.
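A sketch of both steps, assuming a JDK 7/8-era JVM (implied by the PermGen flag above) and async-profiler 2.x unpacked locally; the PID and duration are placeholders:

# GC logging: one timestamped line per collection, written to gc.log
JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log"
# async-profiler (Linux/macOS): 30-second CPU profile rendered as a flame graph
./profiler.sh -d 30 -f flame.html <pid>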
Also have a look at other system activity, like swapping.
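On a Linux host, for instance, vmstat makes swap pressure easy to spot (the batch syntax in the question suggests Windows, where perfmon's memory counters are the analogue):

# si = KB/s swapped in, so = KB/s swapped out; both should stay near zero
vmstat 1 5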
Try disabling explicit GC (-XX:+DisableExplicitGC) so that stray System.gc() calls can't force collections, and let the JVM collect on its own schedule; the unwanted, unused garbage will still be cleared.
Something like this (sample config for JBoss on a server with 8 GB RAM running a web app):
JAVA_OPTS="-server -Xms7800m -Xmx7800m -XX:NewSize=5632m -XX:MaxNewSize=5632m -XX:+UseParNewGC -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=50 -XX:+UseCMSInitiatingOccupancyOnly -XX:ConcGCThreads=4 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/mydirectory/project/projectgclog-$(date +%Y_%m_%d-%H_%M).log -XX:+DisableExplicitGC -Djava.net.preferIPv4Stack=true"
This way, the server periodically sweeps out the unwanted garbage and keeps running without errors.
Hope this helps.
Related
We have a Java microservice in our application which is connected to Postgres as well as Phoenix. We are using Spring Boot 2.x.
The problem is that while running an endurance test against the application for about 8 hours, we observed that the used heap keeps increasing even though we applied the recommended VM-argument suggestions; it looks like a memory leak. We analysed the heap dump, but the root cause is not clear to us. Can some experts help based on the results?
The VM arguments that we are actually using are:
-XX:ConcGCThreads=8 -XX:+DisableExplicitGC -XX:InitialHeapSize=536870912 -XX:InitiatingHeapOccupancyPercent=45 -XX:MaxGCPauseMillis=1000 -XX:MaxHeapFreeRatio=70 -XX:MaxHeapSize=536870912 -XX:MinHeapFreeRatio=40 -XX:ParallelGCThreads=16 -XX:+PrintAdaptiveSizePolicy -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:StringDeduplicationAgeThreshold=1 -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseG1GC -XX:+UseStringDeduplication
We expected the used heap to stay flat in the GC log, but the memory is never released and consumption keeps increasing.
Heap dump and GC graph: (screenshots not reproduced here)
I'm not sure which tool you are using above, but I would be looking for the dominator hierarchy in the heap. Eclipse MAT is a good tool for analysing heap dumps: it can point you toward what's actually holding the memory, and you can decide whether to categorise that as a leak or not. Regardless of the label you attach, if the application is going to crash after a while because it runs out of memory, then it is a problem.
This blog also discusses diagnosing this type of problem.
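If you need a fresh dump to load into MAT, a minimal capture with standard JDK tools (the PID and output path are placeholders):

# Dumps only live (reachable) objects, which forces a full GC first and keeps the file smaller
jcmd <pid> GC.heap_dump /tmp/heap.hprof
# Equivalent with jmap
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>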
We recently upgraded Alfresco to 5.2.G with Solr 4, Tomcat 7.0.78 and Java 1.8.0_111. Environment: RHEL 7, virtual machine, 32-core CPU, 32 GB RAM. The application starts without errors, but within 2-3 hours performance becomes slow and we see high CPU utilization.
Can anyone suggest which basic tuning parameters need to change at the OS, JVM, Alfresco and Solr level? Below are the JVM arguments added in Tomcat:
JAVA_OPTS="-server -Xms24576m -Xmx24576m -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC
-XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC
-XX:ParallelGCThreads=6 -XX:+UseCompressedOops -XX:+CMSClassUnloadingEnabled
-Djava.awt.headless=true -Dalfresco.home=/opt/new/alfresco -Dcom.sun.management.jmxremote
-Dsun.security.ssl.allowUnsafeRenegotiation=true -XX:ReservedCodeCacheSize=2048m"
and alfresco-global.properties (line breaks added for readability):
cifs.serverName=?
system.thumbnail.generate=false
system.enableTimestampPropagation=false
system.workflow.engine.activiti.enabled=false
sync.mode=OFF
system.workflow.engine.jbpm.enabled=false
removed-index.recovery.mode=FULL
I had a similar issue. Keep in mind that when checking CPU on a multi-core machine with the 'top' command, it shows the combined usage over all cores; press '1' to show individual cores.
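For a quick non-interactive per-core view, mpstat (from the sysstat package) also works:

# Per-core utilization, one sample per second, five samples
mpstat -P ALL 1 5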
I also applied the patch suggested above, but it didn't make much difference (there is also an issue with that patch on Alfresco 5.2.g).
I then tried re-indexing all my content, which seemed to speed things up a lot.
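For reference, a full reindex with Solr 4 boils down to deleting the index and letting Solr retrack the repository. This is only a sketch: the service name and index path are assumptions (check data.dir in your solrcore.properties), and back everything up first.

service tomcat stop
rm -rf /opt/new/alfresco/alf_data/solr4/index/*
service tomcat start   # Solr rebuilds the index by retracking the whole repository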
There is still a lot of usage, but only during working hours. Once everyone goes home, it drops back to almost 0%.
Another thing that really slowed down my Alfresco responses was a corrupted metadata database. After I restored a backup, it performed much faster.
I also disabled a lot of unnecessary features, including thumbnail generation.
I'm having memory issues with my production webapp.
I have Tomcat running on an AWS EC2 t2.medium instance (2-core 64-bit CPU + 4 GB RAM).
This is some info from javamelody:
OS: OS Linux, 3.13.0-87-generic , amd64/64 (2 cores)
Java: Java(TM) SE Runtime Environment, 1.8.0_91-b14
JVM: Java HotSpot(TM) 64-Bit Server VM, 25.91-b14, mixed mode
Tomcat "http-nio-8080": Busy threads = 4 / 200 ++++++++++++
Bytes received = 8.051
Bytes sent = 206.411.053
Request count = 3.204
Error count = 70
Sum of processing times (ms) = 540.398
Max processing time (ms) = 12.319
Memory: Non heap memory = 130 Mb (Perm Gen, Code Cache),
Buffered memory = 0 Mb,
Loaded classes = 12,258,
Garbage collection time = 1,394 ms,
Process cpu time = 108,100 ms,
Committed virtual memory = 5,743 Mb,
Free physical memory = 142 Mb,
Total physical memory = 3,952 Mb,
Free swap space = 0 Mb,
Total swap space = 0 Mb
Free disk space: 27.457 Mb
And my application goes into:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory.
I applied the following config options, but it seems to be failing again:
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms1024m -Xmx3072m -XX:PermSize=128m -XX:MaxPermSize=256m"
Is this config OK for my Linux setup?
For further information: my database and file system run on another t2.medium instance (Windows, 2-core CPU + 4 GB RAM).
Thanks, and sorry for my English.
EDITED:
The problem is still happening. The weirdest thing is that the logs show no heavy process running, and it happened early in the morning (so few people were connected to the application).
In the past I ran the application in a Windows environment and none of this happened. I thought a Linux instance would be better, but it is driving me crazy.
The log:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000f2d80000, 43515904, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 43515904 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/ubuntu/hs_err_pid20233.log
Now my config is this at setenv.sh:
export CATALINA_OPTS="-Dfile.encoding=Cp1252 -Xms2048m -Xmx2048m -server"
And I don't know if it makes any sense, but the hs_err file has this line:
Memory: 4k page, physical 4046856k(102712k free), swap 0k(0k free)
Is this config OK for your Linux configuration? Well, we don't know what else is running on the machine and how much memory is used by other processes, so the answer is "it depends". However, here's how you can figure out the correct setting yourself:
If you want to know right away whether the server has enough memory, set -Xmx and -Xms to the same value. This way you'd run into out-of-memory conditions right when you start the server, not at some random time in the future. Maybe your OS can only allocate 2.8 GB instead of 3. (The same goes for the permsize parameters in case you're still running Java 7; otherwise remove them.)
You might also want to add -XX:+AlwaysPreTouch to the list of your parameters, so that you can be sure the memory has been allocated right from the beginning.
And lastly, you don't want to set JAVA_OPTS (it is used for every JVM start, including the shutdown command); use CATALINA_OPTS instead, which is only used for starting Tomcat.
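Putting those three points together, a setenv.sh along these lines (the 2.5 GB heap is an assumption, leaving headroom for Metaspace, thread stacks and the OS on a 4 GB instance):

# $CATALINA_BASE/bin/setenv.sh
# Equal -Xms/-Xmx fails fast at startup if the memory isn't available;
# -XX:+AlwaysPreTouch touches every page so the allocation is real, not lazy
export CATALINA_OPTS="-Dfile.encoding=UTF-8 -server -Xms2560m -Xmx2560m -XX:+AlwaysPreTouch"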
I'm running a Socket.IO server with Node.JS, which normally uses about 400 MB of memory, because there's a lot of data being cached to send to clients. However, after a couple of hours it suddenly starts growing to 1.4 GB of usage over about 40 minutes. Someone told me to use heapdump to find if there is a memory leak.
The problem is that the heapdump only turned out to be 317 MB and nothing in it looks out of the ordinary, so I'm stuck with debugging. I've also run it with nodetime, which says that the V8 heap usage is around 400 MB, but the total V8 heap size is 1.4 GB.
How do I find out where the remaining 1 GB comes from?
Maybe node-memwatch could help you?
https://github.com/lloyd/node-memwatch
From its Readme:
node-memwatch is here to help you detect and find memory leaks in Node.JS code. It provides:
- A leak event, emitted when it appears your code is leaking memory.
- A stats event, emitted occasionally, giving you data describing your heap usage and trends over time.
- A HeapDiff class that lets you compare the state of your heap between two points in time, telling you what has been allocated, and what has been released.
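If you want to try it, the install is a one-liner (package name per that repo's README; unmaintained forks such as memwatch-next also exist):

npm install memwatch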
Currently I'm running an application in Tomcat 7 with the following JVM arguments:
-Dcatalina.home=E:\Tomcat
-Dcatalina.base=E:\Tomcat
-Djava.endorsed.dirs=E:\Tomcat\endorsed
-Djava.io.tmpdir=E:\TomcatE\temp
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=E:\Tomcat\conf\logging.properties
-XX:MaxPermSize=512m
-XX:PermSize=512m
-XX:+UseConcMarkSweepGC
-XX:NewSize=7g
-XX:MaxTenuringThreshold=31
-XX:CMSInitiatingOccupancyFraction=90
-XX:+UseCMSInitiatingOccupancyOnly
-XX:SurvivorRatio=6
-XX:TargetSurvivorRatio=90
-verbose:gc -XX:+PrintGCDetails
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintGCDateStamps
-Xloggc:E:\Tomcat7\gc.log
I'm using CMS as the garbage collector and the behavior seems very strange. Even with 13 GB of old generation, when a major collection is performed (I guess at 90% occupancy, per -XX:CMSInitiatingOccupancyFraction=90), CMS is not able to clean a large number of objects: at least 7 GB stays occupied. I don't believe the application has that many long-lived objects (not sure!). Isn't CMS supposed to release much more space? Or could it be something related to fragmentation?
Because of this behavior I'm getting frequent CMS cycles, which I would like to reduce.
Even using a low-pause GC, the application sometimes stops for 15-30 seconds... How can I decrease pause times with CMS?
Could it be a good idea to run several JVMs instead of one with a 20 GB heap?
Thanks a lot
First, you can capture a heap dump of live objects with:
jmap -dump:live,format=b,file=heap.bin ${pid}
and then find the long-lived objects with Eclipse MAT.
Second, because the heap size is bigger than 8 GB, you can try the Garbage-First (G1) collector.
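A sketch of the corresponding flag change: drop the CMS and generation-sizing options above (G1 sizes its regions itself) and set a pause goal instead; the 200 ms target is just an assumption to tune:

-XX:+UseG1GC
-XX:MaxGCPauseMillis=200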