Is there a leak in AVPlayer's init method?

I am working on an app that makes extensive use of AVFoundation. Recently I did some leak checking with Instruments. The Leaks instrument was reporting a leak at a point in the code where I was instantiating a new AVPlayer, like this:
player1 = [AVPlayer playerWithPlayerItem:playerItem1];
To reduce the problem, I created an entirely new Xcode project for a single-view application, using ARC, and put in the following line.
AVPlayer *player = [[AVPlayer alloc] init];
This produces the same leak report in Instruments. Below is the stack trace. Does anybody know why a simple call to [[AVPlayer alloc] init] would cause a leak? Although I am using ARC, I tried turning it off and inserting the corresponding [player release]; instruction, and it makes no difference. This seems to be specific to AVPlayer.
0 libsystem_c.dylib malloc
1 libsystem_c.dylib strdup
2 libnotify.dylib token_table_add
3 libnotify.dylib notify_register_check
4 AVFoundation -[AVPlayer(AVPlayerMultitaskSupport) _iapdExtendedModeIsActive]
5 AVFoundation -[AVPlayer init]
6 TestApp -[ViewController viewDidLoad] /Users/jason/Synaptic Revival/Project Field Trip/software development/TestApp/TestApp/ViewController.m:22
7 UIKit -[UIViewController view]
--- 2 frames omitted ---
10 UIKit -[UIWindow makeKeyAndVisible]
11 TestApp -[AppDelegate application:didFinishLaunchingWithOptions:] /Users/jason/Synaptic Revival/Project Field Trip/software development/TestApp/TestApp/AppDelegate.m:24
12 UIKit -[UIApplication _callInitializationDelegatesForURL:payload:suspended:]
--- 3 frames omitted ---
16 UIKit _UIApplicationHandleEvent
17 GraphicsServices PurpleEventCallback
18 CoreFoundation __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__
--- 3 frames omitted ---
22 CoreFoundation CFRunLoopRunInMode
23 UIKit -[UIApplication _run]
24 UIKit UIApplicationMain
25 TestApp main /Users/jason/software development/TestApp/TestApp/main.m:16
26 TestApp start

This 48-byte leak is confirmed by Apple as a known issue, which lives not only in AVPlayer but also in UIScrollView (I have an app that happens to use both components).
Please see this thread for details:
Memory leak every time UIScrollView is released
Here's the link to Apple's answer in the thread (you may need a developer ID to sign in):
https://devforums.apple.com/thread/144449?start=0&tstart=0
Apple's brief quote:
This is a known bug that will be fixed in a future release.
In the meantime, while all leaks are obviously undesirable this isn't going to cause any user-visible problems in the real world. A user would have to scroll roughly 22,000 times in order to leak 1 megabyte of memory, so it's not going to impact daily usage.
It seems that any component that uses notify_register_check or notify_register_mach_port can trigger this issue.
Currently there is no obvious workaround or fix. (The 48-byte figure is consistent with Apple's arithmetic above: roughly 22,000 leaked allocations add up to about 1 MB.) It is confirmed that the issue remains in iOS 5.1 and 5.1.1. Hopefully Apple will fix it in iOS 6, because it is really annoying and destructive.

Related

Finalizer Blocked Issue

Hi, I have searched through the various posts and answers regarding a perpetually blocked finalizer thread. Several seem useful, but I need additional clarity.
In my application, API calls come from a web service to an NT service. I am working in a test environment and can reproduce this "effect" over and over. In my tests I follow this general procedure:
Run a load test in Visual Studio (10 users) to make a pattern of API calls. (My time for the load test is set to 99 hours, so that I can run it continuously and simply abort the test when I want to stop the load, and check on the conditions of the process.)
Start Perfmon and monitor key statistics: private bytes, virtual bytes, threads, handles, % Time in GC, etc.
Stop the load test.
Attach WinDbg to the process. Check to see if something is going wrong (!finalizequeue, etc.).
If nothing is wrong, detach and restart the load test.
I follow this general procedure and alter the test to exercise various API calls, to see what those APIs do to the memory profile of the application.
As I was going through this, I noticed that occasionally when I stop the load test, I get a "thread was being aborted" exception. This is also seen, rarely, in logs from customer environments. It seems to happen when an API call is made and then the client disconnects for whatever reason.
However, after getting this exception, I notice that the finalizer of my process is hung. (The NT service, not w3wp.)
Regardless of how long the process is left unburdened and unused, this is the top of the output from !finalizequeue:
0:002> !finalizequeue
SyncBlocks to be cleaned up: 0
Free-Threaded Interfaces to be released: 0
MTA Interfaces to be released: 0
STA Interfaces to be released: 0
----------------------------------
generation 0 has 15 finalizable objects (0c4b3db8->0c4b3df4)
generation 1 has 9 finalizable objects (0c4b3d94->0c4b3db8)
generation 2 has 771 finalizable objects (0c4b3188->0c4b3d94)
Ready for finalization 178 objects (0c4b3df4->0c4b40bc)
I can add to the objects "Ready For Finalization" by making API calls, but the Finalizer thread never seems to move on and empty the list.
The finalizer thread shown above is thread 002. Here is the call-stack:
0:002> !clrstack
OS Thread Id: 0x29bc (2)
Child SP IP Call Site
03eaf790 77b4f29c [DebuggerU2MCatchHandlerFrame: 03eaf790]
0:002> kb 2000
# ChildEBP RetAddr Args to Child
00 03eae910 773a2080 00000001 03eaead8 00000001 ntdll!NtWaitForMultipleObjects+0xc
01 03eaeaa4 77047845 00000001 03eaead8 00000000 KERNELBASE!WaitForMultipleObjectsEx+0xf0
02 03eaeaf0 770475f5 05b72fb8 00000000 ffffffff combase!MTAThreadWaitForCall+0xd5 [d:\rs1\onecore\com\combase\dcomrem\channelb.cxx # 7290]
03 03eaeb24 77018457 03eaee5c 042c15b8 05b72fb8 combase!MTAThreadDispatchCrossApartmentCall+0xb5 [d:\rs1\onecore\com\combase\dcomrem\chancont.cxx # 227]
04 (Inline) -------- -------- -------- -------- combase!CSyncClientCall::SwitchAptAndDispatchCall+0x38a [d:\rs1\onecore\com\combase\dcomrem\channelb.cxx # 6050]
05 03eaec40 76fbe16b 03eaee5c 03eaee34 03eaee5c combase!CSyncClientCall::SendReceive2+0x457 [d:\rs1\onecore\com\combase\dcomrem\channelb.cxx # 5764]
06 (Inline) -------- -------- -------- -------- combase!SyncClientCallRetryContext::SendReceiveWithRetry+0x29 [d:\rs1\onecore\com\combase\dcomrem\callctrl.cxx # 1734]
07 (Inline) -------- -------- -------- -------- combase!CSyncClientCall::SendReceiveInRetryContext+0x29 [d:\rs1\onecore\com\combase\dcomrem\callctrl.cxx # 632]
08 03eaec9c 77017daa 05b72fb8 03eaee5c 03eaee34 combase!DefaultSendReceive+0x8b [d:\rs1\onecore\com\combase\dcomrem\callctrl.cxx # 590]
09 03eaee10 76f72fa5 03eaee5c 03eaee34 03eaf3d0 combase!CSyncClientCall::SendReceive+0x68a [d:\rs1\onecore\com\combase\dcomrem\ctxchnl.cxx # 767]
0a (Inline) -------- -------- -------- -------- combase!CClientChannel::SendReceive+0x7c [d:\rs1\onecore\com\combase\dcomrem\ctxchnl.cxx # 702]
0b 03eaee3c 76e15eea 05b59e04 03eaef48 17147876 combase!NdrExtpProxySendReceive+0xd5 [d:\rs1\onecore\com\combase\ndr\ndrole\proxy.cxx # 1965]
0c 03eaf358 76f73b30 76f5bd70 76f84096 03eaf390 rpcrt4!NdrClientCall2+0x53a
0d 03eaf378 7706313f 03eaf390 00000008 03eaf420 combase!ObjectStublessClient+0x70 [d:\rs1\onecore\com\combase\ndr\ndrole\i386\stblsclt.cxx # 217]
0e 03eaf388 77026d85 05b59e04 03eaf3d0 011d0940 combase!ObjectStubless+0xf [d:\rs1\onecore\com\combase\ndr\ndrole\i386\stubless.asm # 171]
0f 03eaf420 77026f30 011d0930 73b1a9e0 03eaf4e4 combase!CObjectContext::InternalContextCallback+0x255 [d:\rs1\onecore\com\combase\dcomrem\context.cxx # 4401]
10 03eaf474 73b1a88b 011d0940 73b1a9e0 03eaf4e4 combase!CObjectContext::ContextCallback+0xc0 [d:\rs1\onecore\com\combase\dcomrem\context.cxx # 4305]
11 03eaf574 73b1a962 73b689d0 03eaf60c b42b9326 clr!CtxEntry::EnterContext+0x252
12 03eaf5ac 73b1a9a3 73b689d0 03eaf60c 00000000 clr!RCW::EnterContext+0x3a
13 03eaf5d0 73b1eed3 03eaf60c b42b936a 740ecf60 clr!RCWCleanupList::ReleaseRCWListInCorrectCtx+0xbc
14 03eaf62c 73b6118f b42b90f6 03eaf790 00000000 clr!RCWCleanupList::CleanupAllWrappers+0x119
15 03eaf67c 73b61568 03eaf790 73b60f00 00000001 clr!SyncBlockCache::CleanupSyncBlocks+0xd0
16 03eaf68c 73b60fa9 b42b9012 03eaf790 73b60f00 clr!Thread::DoExtraWorkForFinalizer+0x75
17 03eaf6bc 73a7b4c9 03eaf7dc 011a6248 03eaf7dc clr!FinalizerThread::FinalizerThreadWorker+0xba
18 03eaf6d0 73a7b533 b42b91fe 03eaf7dc 00000000 clr!ManagedThreadBase_DispatchInner+0x71
19 03eaf774 73a7b600 b42b915a 73b7a760 73b60f00 clr!ManagedThreadBase_DispatchMiddle+0x7e
1a 03eaf7d0 73b7a758 00000001 00000000 011b3120 clr!ManagedThreadBase_DispatchOuter+0x5b
1b 03eaf7f8 73b7a81f b42b9ebe 73b7a760 00000000 clr!ManagedThreadBase::FinalizerBase+0x33
1c 03eaf834 73af15a1 00000000 00000000 00000000 clr!FinalizerThread::FinalizerThreadStart+0xd4
1d 03eaf8d0 753c62c4 011ae320 753c62a0 c455bdb0 clr!Thread::intermediateThreadProc+0x55
1e 03eaf8e4 77b41f69 011ae320 9d323ee5 00000000 kernel32!BaseThreadInitThunk+0x24
1f 03eaf92c 77b41f34 ffffffff 77b6361e 00000000 ntdll!__RtlUserThreadStart+0x2f
20 03eaf93c 00000000 73af1550 011ae320 00000000 ntdll!_RtlUserThreadStart+0x1b
By repeating this several times, I have correlated getting the "thread was being aborted" exception, and getting a hung finalizer with this call-stack each time. Can anyone provide a clarification of what the finalizer is waiting on, and anything else related that might help identify a solution for this?
Thanks! Feel free to ask questions if you need more information.
Edit (additions follow):
My supervisor sent me the code for System.ComponentModel.Component after reading the analysis by Hans, and I need to try and get clarification on a certain point. (Even if the code that I was sent was the wrong version somehow.)
Here is the finalizer for System.ComponentModel.Component:
~Component() {
    Dispose(false);
}
Here is Dispose(bool):
protected virtual void Dispose(bool disposing) {
    if (disposing) {
        lock(this) {
            if (site != null && site.Container != null) {
                site.Container.Remove(this);
            }
            if (events != null) {
                EventHandler handler = (EventHandler)events[EventDisposed];
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }
    }
}
Hopefully I am not making a dumb blunder here, but if the finalizer thread only calls Dispose(false), then it cannot possibly block on anything. If that is true, then am I barking up the wrong tree? Should I be looking for some other class?
How can I use the dump file that I have to determine the actual object type that the finalizer is hanging on?
2nd Edit:
I ran my load test and monitored Finalization Survivors until I started seeing the number rise. I then stopped the load test. The result was that Finalization Survivors were "stuck" at 88 even after I forced a GC a few times, and the number never went down. I ran iisreset and restarted SQL Server, but the statistic remained stuck at 88.
I took a dump of the process and captured the !finalizequeue output, and while I was puzzling over that, I noticed that Perfmon suddenly registered a drop in Finalization Survivors about 15 minutes after I took the dump.
I took another dump, recaptured the !finalizequeue output, and compared it to the previous one. Most of it was the same, with the following differences:
Before After Diff Type
4 0 -4 System.Transactions.SafeIUnknown
6 0 -6 OurCompany.Xpedite.LogOutResponseContent
20 0 -20 System.Net.Sockets.OverlappedCache
8 2 -6 System.Security.SafeBSTRHandle
21 3 -18 System.Net.SafeNativeOverlapped
6 4 -2 Microsoft.Win32.SafeHandles.SafeFileHandle
8 9 1 System.Threading.ThreadPoolWorkQueueThreadLocals
19 1 -18 System.Net.Sockets.OverlappedAsyncResult
7 2 -5 System.Data.SqlClient.SqlDataAdapter
7 2 -5 System.Data.DataSet
13 7 -6 OurCompany.Services.Messaging.Message
79 13 -66 Microsoft.Win32.SafeHandles.SafeAccessTokenHandle
6 4 -2 System.IO.FileStream
24 3 -21 System.Data.SqlClient.SqlConnection
40 22 -18 Microsoft.Win32.SafeHandles.SafeWaitHandle
17 3 -14 System.Data.SqlClient.SqlCommand
14 4 -10 System.Data.DataColumn
7 2 -5 System.Data.DataTable
21 20 -1 System.Threading.Thread
73 68 -5 System.Threading.ReaderWriterLock
9/16/2022: Additional information leading to a cause and possible solution:
By examining the call stack of the finalizer thread, frame 3's variables include an entry called pOXIDEntry, which contains the thread number of the target thread that the finalizer is waiting on. In that debugging session the finalizer was thread 2 (OSID 382c), and in the variables passed to MTAThreadDispatchCrossApartmentCall it was targeting thread 5 (OSID 2e30). Note also that this target thread was marked (mysteriously) as being an STA, rather than an MTA like all of the other thread-pool workers.
When a COM object that is created on an STA thread is cleaned up, marshalling is done to the target thread as mentioned here: https://learn.microsoft.com/en-us/troubleshoot/windows/win32/descriptions-workings-ole-threading-models
The problem in this case is that the code has erroneously set Thread.CurrentThread.ApartmentState = ApartmentState.STA for some unknown reason (this is under investigation).
When that line of code is executed, the thread is indeed marked as being an STA, but it does not automatically get a message pump to process messages that are sent to it. As such it will never receive a message sent to it, and hence never reply.
The issue of the blocked finalizer does not always happen because, usually, all COM objects are explicitly disposed on the API thread that created them. So in virtually all of the normal processing cases, the finalizer thread is not involved in the cleanup of these COM objects.
The one rare exception is when a Thread.Abort occurs on one of these threads, which can be imposed on the thread by code in the WSE2 client library. Depending on exactly when this happens, a COM object can be left for the finalizer to clean up. It is only in this case that the finalizer will see this object, see that it was created on an STA thread, and attempt to marshal the call to clean it up. When that happens, the worker thread that was erroneously marked as STA will never reply, and hence the finalizer will be blocked at that point.
This issue is now resolved.
Ultimately, the source of this issue is that calls were being made in one of our Win32 client libraries to CoInitialize using the STA model. This potential issue was identified by Microsoft prior to 2006 and detailed in depth in a blog post by Microsoft Principal Developer Tess Ferrandez at this address: https://www.tessferrandez.com/blog/2006/03/26/net-memory-leak-unblock-my-finalizer.html
As detailed in the post, COM can flip a ThreadPool worker thread over to the STA model, but threadpool threads were not designed to support this thread model.
The application worked most of the time because in almost all cases the COM objects were forcefully disposed on the API thread that created them, and the finalizer was not involved in the cleanup. The rare exception was when the WSE2 client library forced a Thread.Abort on one of our API threads and that happened to interrupt the thread at just the moment that made a forceful dispose impossible; in those cases the finalizer was involved.
The fix for this issue was to search throughout our client library for calls to CoInitialize(nil) and convert those calls, and the underlying code, to use the multithreaded model: essentially, change them to CoInitializeEx(nil, COINIT_MULTITHREADED);
After initializing COM with the MTA model, there was no longer any need for the finalizer to do any marshalling in these Thread.Abort cases, and the issue was resolved.
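For illustration, here is a minimal C sketch of the kind of change described above. The original client library may not be written in C, and the function names and error handling here are hypothetical; only CoInitialize, CoInitializeEx, and COINIT_MULTITHREADED come from the fix itself:
#include <windows.h>
#include <objbase.h>

/* Before the fix (hypothetical worker-thread setup): CoInitialize(NULL) enters a
   single-threaded apartment, which needs a message pump that thread-pool threads
   never run, so cross-apartment cleanup calls to this thread hang forever. */
static void InitComForWorkerThread_Old(void)
{
    CoInitialize(NULL);   /* implicitly COINIT_APARTMENTTHREADED */
}

/* After the fix: join the multithreaded apartment instead, so COM objects created
   on this thread can be released without marshalling back to it. */
static void InitComForWorkerThread_New(void)
{
    HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);
    if (FAILED(hr)) {
        /* handle the error as appropriate for the host process */
    }
    /* Each successful initialization still needs a matching CoUninitialize()
       when the thread is finished with COM. */
}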

What is PML4 short for?

In the Xen source file ./xen/include/asm-x86/config.h, I saw this memory-layout comment:
/*
 * Meng: Xen-definitive guide: P81
 * Memory layout:
 *  0x0000000000000000 - 0x00007fffffffffff [128TB, 2^47 bytes, PML4:0-255]
 *    Guest-defined use (see below for compatibility mode guests).
 *  0x0000800000000000 - 0xffff7fffffffffff [16EB]
 *    Inaccessible: current arch only supports 48-bit sign-extended VAs.
 *  0xffff800000000000 - 0xffff803fffffffff [256GB, 2^38 bytes, PML4:256]
I'm confused about what PML4 is short for. I know that x86_64 only uses 48 of its 64 address bits, but what does PML4 stand for? Knowing that may help me understand the numbers after it.
Thanks!
It's short for Page Map Level 4, the top-level table in the x86-64 four-level paging scheme. A bit of explanation can be found here. Basically it's just the way AMD decided to label the page tables: each of the 512 PML4 entries maps 512 GB of virtual address space, which is why entries 0-255 cover exactly the lower 128 TB in the layout above.
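To make the PML4:0-255 and PML4:256 annotations concrete, here is a small C sketch (not taken from the Xen source) that splits a canonical 48-bit virtual address into its four page-table indices, 9 bits per level plus a 12-bit page offset:
#include <stdint.h>
#include <stdio.h>

/* x86-64 4-level paging: 512 entries per table, so 9 index bits per level. */
static unsigned pml4_index(uint64_t va) { return (unsigned)((va >> 39) & 0x1ff); } /* bits 47..39 */
static unsigned pdpt_index(uint64_t va) { return (unsigned)((va >> 30) & 0x1ff); } /* bits 38..30 */
static unsigned pd_index(uint64_t va)   { return (unsigned)((va >> 21) & 0x1ff); } /* bits 29..21 */
static unsigned pt_index(uint64_t va)   { return (unsigned)((va >> 12) & 0x1ff); } /* bits 20..12 */

int main(void)
{
    /* Addresses taken from the Xen layout comment above. */
    uint64_t guest_top = 0x00007fffffffffffULL;  /* end of the guest range  */
    uint64_t xen_start = 0xffff800000000000ULL;  /* start of the Xen range  */

    printf("guest_top: PML4=%u PDPT=%u PD=%u PT=%u\n",   /* prints PML4=255 */
           pml4_index(guest_top), pdpt_index(guest_top),
           pd_index(guest_top), pt_index(guest_top));
    printf("xen_start: PML4=%u\n", pml4_index(xen_start)); /* prints PML4=256 */
    return 0;
}
Running it shows that 0x00007fffffffffff falls in PML4 entry 255 and 0xffff800000000000 in entry 256, matching the PML4 annotations in the comment.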

CoreText memory leak from CTFramesetterCreateWithAttributedString

I'm coming across a problematic leak which, thanks to Instruments, appears to be coming from CTFramesetterCreateWithAttributedString. The call stack is below.
1 CoreText -[_CTNativeGlyphStorage prepareWithCapacity:preallocated:]
2 CoreText -[_CTNativeGlyphStorage initWithCount:]
3 CoreText +[_CTNativeGlyphStorage newWithCount:]
4 CoreText TTypesetterAttrString::Initialize(__CFAttributedString const*)
5 CoreText TTypesetterAttrString::TTypesetterAttrString(__CFAttributedString const*)
6 CoreText TFramesetterAttrString::TFramesetterAttrString(__CFAttributedString const*)
7 CoreText CTFramesetterCreateWithAttributedString
The code generating this call stack is:
CFAttributedStringRef attrRef = (CFAttributedStringRef)self.attributedString;
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(attrRef);
CFRelease(attrRef);
...
CFRelease(framesetter);
self.attributedString gets released elsewhere. I feel like I'm correctly releasing everything else... Given all this, where could the leak be coming from? It's fairly significant for my purposes - 6-10 MB a pop. Thanks for any help.
It turns out this problem was substantially addressed by not using an ivar for a CFMutableArrayRef. We were using the Akosma Objective-C wrapper for Core Text, and in the AKOMultiColumnTextView class there is a CFMutableArrayRef _frames ivar which, for some reason, was hanging around even after CFRelease calls. Not keeping the _frames ivar is a bit more expensive, but it has drastically improved memory usage. Thanks Neevek for your contributions.
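For comparison, here is a hedged C sketch of the pattern that avoids a long-lived frame cache: the framesetter and each CTFrame are created, used, and released within a single layout pass instead of being stored in a CFMutableArrayRef ivar. The function name and single-rectangle geometry are made up for illustration; only the Core Text and Core Graphics calls are real API:
#include <CoreText/CoreText.h>
#include <CoreGraphics/CoreGraphics.h>

/* Hypothetical layout-and-draw pass: nothing created here outlives the call,
   so there is no _frames-style cache for leaked frames to hide in. */
static void DrawAttributedString(CGContextRef ctx,
                                 CFAttributedStringRef attrString,
                                 CGRect bounds)
{
    CTFramesetterRef framesetter =
        CTFramesetterCreateWithAttributedString(attrString);
    if (framesetter == NULL)
        return;

    CGPathRef path = CGPathCreateWithRect(bounds, NULL);
    CTFrameRef frame =
        CTFramesetterCreateFrame(framesetter,
                                 CFRangeMake(0, 0), /* length 0 = lay out the whole string */
                                 path, NULL);
    if (frame != NULL) {
        CTFrameDraw(frame, ctx);
        CFRelease(frame);        /* released immediately, not stored in an ivar */
    }
    CGPathRelease(path);
    CFRelease(framesetter);
}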

MonoTouch symbols to show up in Instruments

Newbie to MonoTouch and 'native' iPhone development here. I'm trying to debug some memory leaks and having trouble doing so. My app is UIImage-heavy and I'm definitely leaking images, but I have trouble finding where.
I'm trying to use Instruments to find where, but it looks like I can't see the symbols, just raw addresses. Any idea how to get Instruments to see the correct call stack? It would seem to me that since MonoTouch doesn't JIT-compile, it should be able to generate symbols for the code it generates.
Example of a call stack in Instruments:
0 libSystem.B.dylib calloc
1 libobjc.A.dylib _internal_class_createInstanceFromZone
2 libobjc.A.dylib class_createInstance
3 CoreFoundation +[NSObject(NSObject) allocWithZone:]
4 CoreFoundation +[NSObject(NSObject) alloc]
5 UIKit GetPartImages
6 UIKit -[UITextFieldRoundedRectBackgroundView _updateImages]
7 UIKit -[UITextFieldBackgroundView initWithFrame:active:]
8 UIKit -[UITextField setBorderStyle:]
9 0xdf40ced
10 0xdf41846
11 0xdf40f0e
12 0xdf3fc75
13 0x88b2051
14 iPhone mono_jit_runtime_invoke
15 iPhone mono_runtime_invoke
16 iPhone mono_runtime_invoke_array
17 iPhone ves_icall_InternalInvoke
18 0xcfc5d93
19 0xcfc57c5
20 0xcfc5648
21 0xcfc9083
22 0xcfc8f2d
23 0xcfc8d93
24 0xdcb02c1
25 0xdcb025b
26 0xdcb0034

Leaking in [AVPlayer addBoundaryTimeObserverForTimes]

I have an instance of AVPlayer in my application. I use the time boundary observing feature:
[self setTimeObserver:[player addBoundaryTimeObserverForTimes:watchedTimes
                                                         queue:NULL
                                                    usingBlock:^{
    NSLog(@"A: %i", [timeObserver retainCount]);
    [player removeTimeObserver:timeObserver];
    NSLog(@"B: %i", [timeObserver retainCount]);
    [self setTimeObserver:nil];
}]];
The problem is that, according to Instruments, I am leaking some arrays and values somewhere around this code. I checked the retain count of the time-observing token returned by AVPlayer at the places marked A and B in the sample code. At point A the retain count is 2; at point B it increases to 3 (!). Adding a local autorelease pool does not change anything. I know that retain count is not a reliable metric, but this seems fishy. Any ideas about why the retain count increases, or about my leaks? The stack trace at the leak point looks like this:
0 libSystem.B.dylib calloc
1 libobjc.A.dylib _internal_class_createInstanceFromZone
2 libobjc.A.dylib class_createInstance
3 CoreFoundation __CFAllocateObject2
4 CoreFoundation +[__NSArrayI __new::]
5 CoreFoundation -[__NSPlaceholderArray initWithObjects:count:]
6 CoreFoundation +[NSArray arrayWithObjects:count:]
7 CoreFoundation -[NSArray sortedArrayWithOptions:usingComparator:]
8 CoreFoundation -[NSArray sortedArrayUsingComparator:]
9 AVFoundation -[AVPlayerOccasionalCaller initWithPlayer:times:queue:block:]
10 AVFoundation -[AVPlayer addBoundaryTimeObserverForTimes:queue:usingBlock:]
If I understand things correctly, AVPlayerOccasionalCaller is the “opaque” object returned by addBoundaryTimeObserverForTimes:queue:usingBlock:, or the time observer.
Do not use -retainCount.
The absolute retain count of an object is meaningless.
You should call release exactly the same number of times that you caused the object to be retained. No less (unless you like leaks) and, certainly, no more (unless you like crashes).
See the Memory Management Guidelines for full details.
In this specific case, the retain count you are printing is entirely irrelevant. removeTimeObserver: is probably retaining and autoreleasing the object. Doesn't really matter; it is an implementation detail.
When using the Leaks template in Instruments, note that the Allocations instrument is configured to record reference counts. When you have detected a "leak", look at the list of reference-count events for that object. There will likely be a stack where some code of yours triggers an extra retain. If not, it might be a framework bug.
