Out of memory caused by TrimmableHeapBuffer - memory-leaks

I am facing Out Of Memory issues on a Glassfish 4.0 (Build 89) server.
The Grizzly version from \glassfish\glassfish\modules\nucleus-grizzly-all.jar is
version="2.3.1"
Whenever I face this issue, there is always at least one thread with the following dump:
"http-listener-2(18)" - Thread t#20756
java.lang.Thread.State: RUNNABLE
at java.nio.ByteBuffer.wrap(ByteBuffer.java:373)
at java.nio.ByteBuffer.wrap(ByteBuffer.java:396)
at org.glassfish.grizzly.memory.HeapBuffer.toByteBuffer0(HeapBuffer.java:1008)
at org.glassfish.grizzly.memory.HeapBuffer.toByteBuffer(HeapBuffer.java:874)
at org.glassfish.grizzly.memory.HeapBuffer.toByteBuffer(HeapBuffer.java:866)
at org.glassfish.grizzly.ssl.SSLConnectionContext.unwrap(SSLConnectionContext.java:172)
at org.glassfish.grizzly.ssl.SSLBaseFilter.unwrapAll(SSLBaseFilter.java:353)
at org.glassfish.grizzly.ssl.SSLBaseFilter.handleRead(SSLBaseFilter.java:252)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:838)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:564)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:544)
at java.lang.Thread.run(Thread.java:748)
This happens many times per day.
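If it helps to see what is actually holding on to the heap when this happens, a class histogram of the live set can be taken from the command line. This is a generic JDK technique, not something from the original report; <pid> stands for the GlassFish process id.
jmap -histo:live <pid>            # instance counts and bytes per class; forces a full GC first
jcmd <pid> GC.class_histogram     # same information via jcmd on JDK 7+
If -XX:+HeapDumpOnOutOfMemoryError is set, the resulting .hprof file can also be opened in a heap analyzer such as Eclipse MAT to see what is retaining the HeapBuffer instances.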

Related

Application is taking more time to process JNI weak references during the remark phase of G1GC

The application runs into unexpected behavior due to this long GC, and I am trying to bring the GC time below 500 ms.
Snippet from GC logs:
2020-03-17T16:50:04.505+0900: 1233.742: [GC remark
2020-03-17T16:50:04.539+0900: 1233.776: [GC ref-proc
2020-03-17T16:50:04.539+0900: 1233.776: [SoftReference, 0 refs, 0.0096740 secs]
2020-03-17T16:50:04.549+0900: 1233.786: [WeakReference, 3643 refs, 0.0743530 secs]
2020-03-17T16:50:04.623+0900: 1233.860: [FinalReference, 89 refs, 0.0100470 secs]
2020-03-17T16:50:04.633+0900: 1233.870: [PhantomReference, 194 refs, 9 refs, 0.0168580 secs]
2020-03-17T16:50:04.650+0900: 1233.887: [JNI Weak Reference, 0.9726330 secs], 1.0839410 secs], 1.1263670 secs]
The application is running on Java 7 with the JVM options below:
CommandLine flags: -XX:+AggressiveOpts -XX:GCLogFileSize=52428800 -XX:+HeapDumpOnOutOfMemoryError -XX:InitialHeapSize=4294967296
-XX:+ManagementServer -XX:MaxHeapSize=8589934592 -XX:MaxPermSize=805306368 -XX:MaxTenuringThreshold=15 -XX:NewRatio=5
-XX:NumberOfGCLogFiles=30 -XX:+OptimizeStringConcat -XX:PermSize=268435456 -XX:+PrintGC -XX:+PrintGCDateStamps
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintReferenceGC -XX:+UseCompressedOops
-XX:+UseFastAccessorMethods -XX:+UseG1GC -XX:+UseGCLogFileRotation -XX:+UseStringCache
Changing parameters such as NewRatio, MaxTenuringThreshold, and InitialHeapSize changes the frequency of such long GCs, but one or two still occur.
Is there any way to figure out what is contributing to the long processing time of JNI weak references?
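One experiment that is often suggested for long reference-processing phases (an assumption on my part, not part of the original flag set) is to let G1 process references with multiple threads, since reference processing is single-threaded by default on Java 7:
-XX:+ParallelRefProcEnabled
Combined with the already-enabled -XX:+PrintReferenceGC output, the per-reference-type timings before and after the change can be compared directly.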

Inconsistent Tomcat response time

My Tomcat (8.5.8, behind Apache HTTPD) sometimes (< 1 percent of requests) shows a high waited time (reported by Glowroot).
JVM Thread Stats
CPU time: 422.3 milliseconds
Blocked time: 0.0 milliseconds
Waited time: 26,576.0 milliseconds
Allocated memory: 73.3 MB
The stack trace below shows Tomcat waiting on a latch. At that time my Tomcat host's memory, CPU, disk I/O, and garbage collection all looked fine.
I tried issuing the same request, and the same data (compressed size 2.5 MB, original size about 25 MB) was returned. Most of the time it was okay (< 2 seconds), but a few times it took long (> 20 seconds).
com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeString(UTF8JsonGenerator.java:432)
com.fasterxml.jackson.core.json.UTF8JsonGenerator._writeStringSegments(UTF8JsonGenerator.java:1148)
com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2003)
org.springframework.security.web.context.OnCommittedResponseWrapper$SaveContextServletOutputStream.write(OnCommittedResponseWrapper.java:540)
org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:96)
org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:369)
org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:391)
org.apache.catalina.connector.OutputBuffer.append(OutputBuffer.java:713)
org.apache.catalina.connector.OutputBuffer.flushByteBuffer(OutputBuffer.java:808)
org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:351)
org.apache.coyote.Response.doWrite(Response.java:517)
org.apache.coyote.ajp.AjpProcessor$SocketOutputBuffer.doWrite(AjpProcessor.java:1469)
org.apache.coyote.ajp.AjpProcessor.access$900(AjpProcessor.java:54)
org.apache.coyote.ajp.AjpProcessor.writeData(AjpProcessor.java:1353)
org.apache.tomcat.util.net.SocketWrapperBase.write(SocketWrapperBase.java:361)
org.apache.tomcat.util.net.SocketWrapperBase.writeBlocking(SocketWrapperBase.java:419)
org.apache.tomcat.util.net.SocketWrapperBase.doWrite(SocketWrapperBase.java:670)
org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.doWrite(NioEndpoint.java:1259)
org.apache.tomcat.util.net.NioSelectorPool.write(NioSelectorPool.java:157)
org.apache.tomcat.util.net.NioBlockingSelector.write(NioBlockingSelector.java:114)
org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.awaitWriteLatch(NioEndpoint.java:1109)
org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.awaitLatch(NioEndpoint.java:1106)
java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
sun.misc.Unsafe.park(Native Method)
TIMED_WAITING
What is the reason for the waiting? A connectivity issue with the client? A slow client? A connectivity issue between Apache HTTPD and Tomcat? Not enough file descriptors?
What is the latch here?
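For context on the latch itself: Tomcat's blocking NIO write parks the request thread on a java.util.concurrent.CountDownLatch until the poller thread signals that the socket is writable again. A much simplified sketch of that pattern (my own illustration, not Tomcat's actual code):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Simplified model of a blocking write over a non-blocking socket:
// the writer parks on a latch; a selector/poller thread counts it down
// once the kernel send buffer has room again.
class WriteLatchDemo {
    private volatile CountDownLatch writeLatch;

    // called by the request thread after a non-blocking write wrote 0 bytes
    boolean awaitWritable(long timeoutMs) throws InterruptedException {
        writeLatch = new CountDownLatch(1);
        // time spent parked here shows up as "waited time" in a profiler
        return writeLatch.await(timeoutMs, TimeUnit.MILLISECONDS);
    }

    // called by the poller thread when the selector reports OP_WRITE
    void socketWritable() {
        CountDownLatch latch = writeLatch;
        if (latch != null) {
            latch.countDown();
        }
    }
}
Because the thread only parks when the socket's send buffer is full, long waits of this kind usually point at a slow reader downstream (the client, or the Apache HTTPD/AJP hop) rather than at Tomcat itself.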

How to find application suspension time from GC log files

I am new to garbage collection; please help me answer the following questions with clear explanations.
I want to find the application suspension time and suspension count from GC log files for different JVMs, of different versions:
SUN
JRockit
IBM
A. For SUN I am using the JVM options
-Xloggc:gc.log -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+UseParNewGC -XX:+PrintGCDetails -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
B. For JRockit I am using the JVM options
-Xms100m -Xmx100m -Xns50m -Xss200k -Xgc:genconpar -Xverbose:gc -Xverboselog:gc_jrockit.log
My questions are:
Q1. What is the suspension time of an application, and why does it occur?
Q2. How can I tell from the logs that a suspension occurred?
Q3. Is the suspension time of an application equal to the sum of the GC times?
For example:
2013-09-06T23:35:23.382-0700: [GC 150.505: [ParNew
Desired survivor size 50331648 bytes, new threshold 2 (max 15)
- age 1: 28731664 bytes, 28731664 total
- age 2: 28248376 bytes, 56980040 total
: 688128K->98304K(688128K), 0.2166700 secs] 697655K->163736K(10387456K), 0.2167900 secs] [Times: user=0.44 sys=0.04, real=0.22 secs]
2013-09-06T23:35:28.044-0700: 155.167: [GC 155.167: [ParNew
Desired survivor size 50331648 bytes, new threshold 15 (max 15)
- age 1: 22333512 bytes, 22333512 total
- age 2: 27468336 bytes, 49801848 total
: 688128K->71707K(688128K), 0.0737140 secs] 753560K->164731K(10387456K), 0.0738410 secs] [Times: user=0.30 sys=0.02, real=0.07 secs]
suspensionTime = 0.2167900 secs + 0.0738410 secs
i. If yes, do I need to add the times for every GC that occurs?
ii. If no, please explain in detail, for different collectors and with example logs, which entries count as a suspension and which do not.
Q4. Can we say the GC times "0.2167900, 0.0738410" are equal to GC pauses, i.e. TotalGCPause = 0.2167900 + 0.0738410?
Q5. Can we calculate the suspension time using only the above flags, or do we need extra flags like -XX:+PrintGCApplicationStoppedTime for SUN?
Q6. I have seen a tool, dynaTrace, that calculates suspension time and count for SUN without using the flag -XX:+PrintGCApplicationStoppedTime.
If you want the most precise information about the amount of time your application was stopped due to GC activity, you should go with -XX:+PrintGCApplicationStoppedTime.
-XX:+PrintGCApplicationStoppedTime enables the printing of the amount of time application threads have been stopped as the result of an internal HotSpot VM operation (GC and safe-point operations).
But for practical daily usage, the information provided by the GC logs is sufficient. You can use the approach described in your question 3 to determine the time spent in GC.
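For reference, with -XX:+PrintGCApplicationStoppedTime enabled, each safepoint produces a line of this form (the figure is made up for illustration):
Total time for which application threads were stopped: 0.0123456 seconds
If only the GC log itself is available, the approach from question 3 can also be scripted. Below is a minimal sketch (my own illustration, not an existing tool) that sums the "real=... secs" figures from logs in the format shown above; for CMS it also counts concurrent phases, which do not stop the application, so treat the result as an upper bound.
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sums every "real=<seconds> secs" figure in a HotSpot GC log as a rough
// approximation of total suspension time. Usage: java GcPauseSum gc.log
public class GcPauseSum {
    public static void main(String[] args) throws Exception {
        Pattern real = Pattern.compile("real=([0-9.]+) secs");
        double total = 0.0;
        int entries = 0;
        for (String line : Files.readAllLines(Paths.get(args[0]), StandardCharsets.UTF_8)) {
            Matcher m = real.matcher(line);
            while (m.find()) {
                total += Double.parseDouble(m.group(1));
                entries++;
            }
        }
        System.out.printf("%d GC entries, %.3f s total (upper bound)%n", entries, total);
    }
}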

Scrolling UIImageViews getting "Low Memory Warning"

My application is relatively simple - basically a UIScrollView to look at a couple of hundred (large) JPEGs. Yet it is crashing consistently with a "Low Memory Warning".
The scroll view consists of three UIImageViews (previousPage, currentPage, and nextPage). Upon starting, and every time the current page is scrolled, I "reset" the three UIImageViews with new UIImages.
NSString *previousPath = [[NSBundle mainBundle] pathForResource:previousName ofType:@"jpg"];
previousPage.image = [UIImage imageWithContentsOfFile:previousPath];
currentPage.image = [UIImage imageWithContentsOfFile:currentPath];
nextPage.image = [UIImage imageWithContentsOfFile:nextPath];
Running in Allocations, the number of UIImage objects #living stays constant through the running of the app, but the number of #transitory UIImage objects can grow quite high.
Is this a problem? Is there some way that I can be 'releasing' the UIImage objects? Am I right in thinking this is the source of what must be a memory leak?

Is there a leak in AVPlayer's init method?

I am working on an app that makes extensive use of AVFoundation. Recently I did some leak checking with Instruments. The "Leaks" instrument was reporting a leak at a point in the code where I was instantiating a new AVPlayer, like this:
player1 = [AVPlayer playerWithPlayerItem:playerItem1];
To reduce the problem, I created an entirely new Xcode project for a single-view application, using ARC, and put in the following line:
AVPlayer *player = [[AVPlayer alloc] init];
This produces the same leak report in Instruments. Below is the stack trace. Does anybody know why a simple call to [[AVPlayer alloc] init] would cause a leak? Although I am using ARC, I tried turning it off and inserting the corresponding [player release]; instruction, and it made no difference. This seems to be specific to AVPlayer.
0 libsystem_c.dylib malloc
1 libsystem_c.dylib strdup
2 libnotify.dylib token_table_add
3 libnotify.dylib notify_register_check
4 AVFoundation -[AVPlayer(AVPlayerMultitaskSupport) _iapdExtendedModeIsActive]
5 AVFoundation -[AVPlayer init]
6 TestApp -[ViewController viewDidLoad] /Users/jason/Synaptic Revival/Project Field Trip/software development/TestApp/TestApp/ViewController.m:22
7 UIKit -[UIViewController view]
--- 2 frames omitted ---
10 UIKit -[UIWindow makeKeyAndVisible]
11 TestApp -[AppDelegate application:didFinishLaunchingWithOptions:] /Users/jason/Synaptic Revival/Project Field Trip/software development/TestApp/TestApp/AppDelegate.m:24
12 UIKit -[UIApplication _callInitializationDelegatesForURL:payload:suspended:]
--- 3 frames omitted ---
16 UIKit _UIApplicationHandleEvent
17 GraphicsServices PurpleEventCallback
18 CoreFoundation __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__
--- 3 frames omitted ---
22 CoreFoundation CFRunLoopRunInMode
23 UIKit -[UIApplication _run]
24 UIKit UIApplicationMain
25 TestApp main /Users/jason/software development/TestApp/TestApp/main.m:16
26 TestApp start
This 48-byte leak is confirmed by Apple as a known issue; it lives not only in AVPlayer but also in UIScrollView (I have an app that happens to use both components).
Please see this thread for details:
Memory leak every time UIScrollView is released
Here's the link to Apple's answer on the thread (you may need a developer ID to sign in):
https://devforums.apple.com/thread/144449?start=0&tstart=0
Apple's brief quote:
This is a known bug that will be fixed in a future release.
In the meantime, while all leaks are obviously undesirable this isn't going to cause any user-visible problems in the real world. A user would have to scroll roughly 22,000 times in order to leak 1 megabyte of memory, so it's not going to impact daily usage.
It seems any component that refers to notify_register_check and notify_register_mach_port will cause this issue.
Currently no obvious workaround or fix can be found. It is confirmed that this issue remains in iOS 5.1 and 5.1.1. Hopefully Apple will fix it in iOS 6, because it is really annoying and destructive.
