Leaking in [AVPlayer addBoundaryTimeObserverForTimes]

I have an instance of AVPlayer in my application. I use the time boundary observing feature:
[self setTimeObserver:[player addBoundaryTimeObserverForTimes:watchedTimes
                                                         queue:NULL
                                                    usingBlock:^{
    NSLog(@"A: %i", [timeObserver retainCount]);
    [player removeTimeObserver:timeObserver];
    NSLog(@"B: %i", [timeObserver retainCount]);
    [self setTimeObserver:nil];
}]];
The problem is that, according to Instruments, I am leaking some arrays and values somewhere around this code. I checked the retain count of the time-observing token returned by AVPlayer at the places marked A and B in the sample code. At point A the retain count is 2; at point B it increases to 3 (!). Adding a local autorelease pool does not change anything. I know that retain count is not a reliable metric, but this seems fishy. Any ideas about why the retain count increases, or about my leaks? The stack trace at the leak point looks like this:
0 libSystem.B.dylib calloc
1 libobjc.A.dylib _internal_class_createInstanceFromZone
2 libobjc.A.dylib class_createInstance
3 CoreFoundation __CFAllocateObject2
4 CoreFoundation +[__NSArrayI __new::]
5 CoreFoundation -[__NSPlaceholderArray initWithObjects:count:]
6 CoreFoundation +[NSArray arrayWithObjects:count:]
7 CoreFoundation -[NSArray sortedArrayWithOptions:usingComparator:]
8 CoreFoundation -[NSArray sortedArrayUsingComparator:]
9 AVFoundation -[AVPlayerOccasionalCaller initWithPlayer:times:queue:block:]
10 AVFoundation -[AVPlayer addBoundaryTimeObserverForTimes:queue:usingBlock:]
If I understand things correctly, AVPlayerOccasionalCaller is the “opaque” object returned by addBoundaryTimeObserverForTimes:queue:usingBlock:, i.e. the time observer.

Do not use -retainCount.
The absolute retain count of an object is meaningless.
You should call release exactly the same number of times that you caused the object to be retained. No less (unless you like leaks) and, certainly, no more (unless you like crashes).
See the Memory Management Guidelines for full details.
In this specific case, the retain count you are printing is entirely irrelevant. removeTimeObserver: is probably retaining and autoreleasing the object. Doesn't really matter; it is an implementation detail.
When using the Leaks template in Instruments, note that the Allocations instrument is configured to record reference counts. When you have detected a "leak", look at the list of reference-count events for that object. There will likely be a stack where some code of yours triggers an extra retain. If not, it might be a framework bug.

Related

sys.getrefcount() returning much greater value than expected in Python 3

I am learning about the GIL in Python and tried to run sys.getrefcount(), and received a value of 148. This might be a very simple question; any help would be appreciated.
Why is the value 148 and not 2?
import sys
c = 1
print(sys.getrefcount(c))
148
Your Python code isn't the only thing running. Much of the Python standard library is written in Python, and depending on which shell you use that can cause quite a few modules to be imported before the first thing you type. Here under CPython 3.10.0's IDLE:
>>> import sys
>>> len(sys.modules)
159
So just getting to the prompt imported 159(!) modules "under the covers".
"Small" integer objects are shared, across uses, by the CPython implementation. So every instance of 3 across all those modules adds to 3's refcount. Here are some others:
>>> for i in range(-10, 11):
...     print(i, sys.getrefcount(i))
-10 3
-9 3
-8 3
-7 3
-6 3
-5 9
-4 5
-3 12
-2 25
-1 190
0 914
1 804
2 363
3 144
4 202
5 83
6 83
7 38
8 128
9 54
10 64
So 3 is "pretty popular", but 0 is the easy winner. Nothing else is using, e.g., -10 or -9, though.
But do note that knowing this is of no actual value to you. Whether and when Python shares immutable objects is implementation-defined, and can (and does!) change across releases.
int is special.
Its values are very numerous (pun intended) and small, which is the worst case as far as object overhead goes (it wastes time to allocate, GCs become slower because they have more heap objects to scan, and it wastes time to reference-count and deallocate). Typically, language runtimes go to pretty great lengths to optimize special cases like int, bool, etc.
Depending on the particular implementation of Python, it's possible that int objects are represented as:
1. Regular, run-of-the-mill heap-allocated objects (i.e., no special optimizations).
2. As regular heap-allocated objects, but with a pool of shared objects used to represent all the most common values (e.g., every instance of 1 is the same object, referenced everywhere a 1 is used).
3. As a tagged pointer, which involves no heap allocation at all (for suitably small integer values).
In case 2 or 3, its reference count will not be what you might expect, had it been a "normal" object.
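For instance, CPython uses strategy 2 for small integers (roughly -5 through 256 in current releases). A minimal sketch, assuming CPython, to observe the sharing and its effect on reference counts:
import sys

# Small ints come from a shared pool, so equal values are the very
# same object (a CPython implementation detail; do not rely on it).
a = 100
b = 100
print(a is b)        # True: both names point at the one cached 100

# Runtime-computed values outside the cache are normally distinct objects.
n = 10
big1 = n ** 20
big2 = n ** 20
print(big1 is big2)  # typically False: two separate allocations

# Every new reference to a cached int shows up in its refcount.
before = sys.getrefcount(100)
extra = 100          # bind one more name to the shared object
print(sys.getrefcount(100) - before)  # 1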

Which 3GPP spec identifies the maximum number of PDP contexts a UE can activate?

So I know that the network limit on the max number of contexts a UE can activate is 11. However, I can't find in the 3GPP specs where this is stated explicitly. I've been searching for hours now and can't find anything. Can anyone point me in the right direction?
Okay, I think I've figured it out. It turns out I needed a little more time to keep digging. The limit is specified indirectly, through the number of available NSAPIs that can be assigned. The number of NSAPIs is limited by the definition of the NSAPI IE in TS 24.008 (I'm using Rel-4 for this; the situation may have changed in later releases), section 10.5.6.2. The definition allocates 4 bits to the NSAPI, which encodes 11 NSAPIs and 5 reserved values. Since each PDP context must be allocated a distinct NSAPI, there can be at most 11 PDP contexts, because only 11 NSAPIs are available.
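To make the arithmetic concrete, here is a small Python sketch of that value space (assuming the Rel-4 TS 24.008 10.5.6.2 coding, where values 0-4 are reserved and 5-15 are assignable NSAPIs):
# The NSAPI IE allocates 4 bits, i.e. 16 code points in total.
NSAPI_BITS = 4
all_values = range(2 ** NSAPI_BITS)   # 0 .. 15
reserved = set(range(5))              # 0-4 are reserved values
assignable = [v for v in all_values if v not in reserved]

print(assignable)        # [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
print(len(assignable))   # 11 -> at most 11 simultaneous PDP contexts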

Spring webservice, single WSDL but different WS providers, performance issue

I am facing a performance issue. In my project, I have a webservice client that calls a hardware entity to get its status and other parameter values. I am using SOAP-based Spring WS.
I have approximately 5000 devices that I need to call in parallel, using 100-500 threads at a time.
With a single call, it takes less than 5 seconds per device, which is expected.
But with multi-threading, the per-device time keeps increasing, from 5 seconds to 30 seconds and eventually to more than 100 seconds. The whole run takes more than 30 minutes for all devices, while the requirement is less than 2 minutes.
We have a different URI for each device, obtained dynamically, so we use Spring's WebServiceTemplate method marshalSendAndReceive(String uri, Object requestPayload, WebServiceMessageCallback requestCallback).
The WebServiceTemplate object is a singleton.
There is only one WSDL, but each device is a different WS provider.
I read somewhere that it might be an issue with marshallers, so I increased the number of marshaller objects for the singleton WebServiceTemplate, but that didn't work either.
Please share any ideas for solving this issue. If you need more info, let me know what I missed.
Elaborating a bit more on the question:
Thanks hagrawal. Yes, threads by themselves cannot increase the response time, but somewhere the threads are losing time in a way I cannot pin down; the time is spent in the actual web service calls to the devices. I measured the start and end time of each call and found that for the first few hundred devices the time taken is less than 3-4 seconds, but after that it keeps increasing for subsequent devices.
I also checked the JVM and could not find any memory issue, but I did find many threads blocked, repeatedly. It looks like these blocked threads consume most of the time. I took a stack trace of the blocked threads, shown below.
pool-111757-thread-1 [13184] (BLOCKED)
sun.security.ssl.Handshaker.calculateConnectionKeys line: 1266
sun.security.ssl.Handshaker.calculateKeys line: 1112
sun.security.ssl.ClientHandshaker.serverHelloDone line: 1078
sun.security.ssl.ClientHandshaker.processMessage line: 348
sun.security.ssl.Handshaker.processLoop line: 979
sun.security.ssl.Handshaker.process_record line: 914
sun.security.ssl.SSLSocketImpl.readRecord line: 1062
sun.security.ssl.SSLSocketImpl.performInitialHandshake line: 1375
sun.security.ssl.SSLSocketImpl.startHandshake line: 1403
sun.security.ssl.SSLSocketImpl.startHandshake line: 1387
org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket line: 275
org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket line: 254
org.apache.http.impl.conn.HttpClientConnectionOperator.connect line: 123
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect line: 318
Again, just to be clear: the time that keeps increasing is in the method that calls the devices' actual web services.

Ada : Variant size in record type

I'm having some trouble with record types in Ada.
I'm using Sequential_IO to read a binary file. To do that, I have to use a type whose size evenly divides the file's size. In my case I need a structure of 50 bytes, so I created a type like this ("Vecteur" is an array of 3 Float):
type Double_Byte is mod 2 ** 16;
for Double_Byte'Size use 16;

type Triangle is
   record
      Normal      : Vecteur(1..3);
      P1          : Vecteur(1..3);
      P2          : Vecteur(1..3);
      P3          : Vecteur(1..3);
      Byte_count1 : Double_Byte;
   end record;
When I use the type Triangle, its size is 52 bytes, but when I add up the sizes of its components I find 50 bytes. Because my file's size is not a multiple of 52, I get execution errors. I don't know how to fix this size. I ran some tests and I think the problem comes from Double_Byte: when I remove it from the record the size is 48 bytes, and when I put it back it is 52 bytes again.
Thank you for your help.
Given Simon's latest comment, it may be impossible to do this portably using Sequential_IO; namely, reading the file on some machines (which don't support unaligned accesses) may leave half its contents unaligned and therefore liable to fail when you access them.
I can't help feeling that a better solution is to divorce the file format (which is fixed by compatibility with other systems) from the machine format (which is not). And therefore moving to Stream_IO and writing your own Read and Write primitives where necessary (e.g. to pack the odd sized Double_Byte component into 2 bytes, whatever its representation in memory) would be a more robust solution.
Then you can guarantee a file format compatible with other systems, and an internal memory format guaranteed to work.
The compiler is in no way obligated to use a specific size for Triangle unless you specify it. As you don't, it chooses whatever size it sees fit for fast access to the data. Even if you specify representation details for every component type of the record, the compiler might still choose to use more space for the record itself than necessary.
Considering the sizes you give, it seems obvious that one component of Vecteur has 4 bytes, which gives a total payload of 50 bytes for Triangle. The compiler now chooses to add 2 bytes padding, so that the record size is a multiple of the size of a 4-byte word. You can override this behavior with:
for Triangle'Size use 50 * 8;
This will force the compiler to use only 50 bytes for the record. As this is a tight fit, there is only one way to represent the record, and no further specification is necessary. If you do need to specify how exactly the record is represented, you can use a record representation clause.
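As a cross-check (outside Ada), the 50-versus-52 arithmetic is easy to reproduce with Python's struct module on a typical machine with 4-byte float alignment: twelve 4-byte floats plus one 16-bit value pack to 50 bytes, and padding the layout to 4-byte alignment rounds it up to 52:
import struct

# 12 x 4-byte float (Normal, P1, P2, P3) plus one 2-byte unsigned value.
packed = struct.calcsize('<12fH')    # '<' = no alignment, no padding
print(packed)                        # 50

# Native alignment with the layout padded out to float alignment,
# mimicking the 2 bytes the compiler appends to the record.
padded = struct.calcsize('@12fH0f')  # trailing '0f' pads to 4 bytes
print(padded)                        # 52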
Edit:
The representation clause specifies the size for the type. However, each object of this type may still take up more space unless you additionally specify
pragma Pack (Triangle);
Edit 2:
After Simon's comment, I had a closer look at this and realized that there is a far better and cleaner solution. Instead of setting the 'Size and using pragma Pack, do this:
for Triangle use record at mod 2;
   Normal      at  0 range 0 .. 95;
   P1          at 12 range 0 .. 95;
   P2          at 24 range 0 .. 95;
   P3          at 36 range 0 .. 95;
   Byte_count1 at 48 range 0 .. 15;
end record;
The initial mod 2 defines that the record is to be aligned at a multiple of 2 bytes. This eliminates the padding at the end without the need of pragma Pack (which is not guaranteed to work the same way on every compiler).

Is there a leak in AVPlayer's init method?

I am working on an app that makes extensive use of AVFoundation. Recently I did some leak checking with Instruments. The Leaks instrument was reporting a leak at a point in the code where I was instantiating a new AVPlayer, like this:
player1 = [AVPlayer playerWithPlayerItem:playerItem1];
To reduce the problem, I created an entirely new Xcode project for a single-view application, using ARC, and put in the following line:
AVPlayer *player = [[AVPlayer alloc] init];
This produces the same leak report in Instruments. Below is the stack trace. Does anybody know why a simple call to [[AVPlayer alloc] init] would cause a leak? Although I am using ARC, I tried turning it off and inserting the corresponding [player release]; instruction, and it makes no difference. This seems to be specific to AVPlayer.
0 libsystem_c.dylib malloc
1 libsystem_c.dylib strdup
2 libnotify.dylib token_table_add
3 libnotify.dylib notify_register_check
4 AVFoundation -[AVPlayer(AVPlayerMultitaskSupport) _iapdExtendedModeIsActive]
5 AVFoundation -[AVPlayer init]
6 TestApp -[ViewController viewDidLoad] /Users/jason/Synaptic Revival/Project Field Trip/software development/TestApp/TestApp/ViewController.m:22
7 UIKit -[UIViewController view]
--- 2 frames omitted ---
10 UIKit -[UIWindow makeKeyAndVisible]
11 TestApp -[AppDelegate application:didFinishLaunchingWithOptions:] /Users/jason/Synaptic Revival/Project Field Trip/software development/TestApp/TestApp/AppDelegate.m:24
12 UIKit -[UIApplication _callInitializationDelegatesForURL:payload:suspended:]
--- 3 frames omitted ---
16 UIKit _UIApplicationHandleEvent
17 GraphicsServices PurpleEventCallback
18 CoreFoundation __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__
--- 3 frames omitted ---
22 CoreFoundation CFRunLoopRunInMode
23 UIKit -[UIApplication _run]
24 UIKit UIApplicationMain
25 TestApp main /Users/jason/software development/TestApp/TestApp/main.m:16
26 TestApp start
This 48-byte leak is confirmed by Apple as a known issue, which lives not only in AVPlayer but also in UIScrollView (I have an app that happens to use both components).
Please see this thread for details:
Memory leak every time UIScrollView is released
Here's the link to Apple's answer on the thread (you may need a developer ID to sign in):
https://devforums.apple.com/thread/144449?start=0&tstart=0
Apple's brief quote:
This is a known bug that will be fixed in a future release.
In the meantime, while all leaks are obviously undesirable this isn't going to cause any user-visible problems in the real world. A user would have to scroll roughly 22,000 times in order to leak 1 megabyte of memory, so it's not going to impact daily usage.
It seems any component that refers to notify_register_check and notify_register_mach_port will cause this issue.
Currently no obvious workaround or fix can be found. It is confirmed that this issue remains in iOS 5.1 and 5.1.1. Hopefully Apple can fix it in iOS 6, because it is really annoying and destructive.
