Does NSMutableArray first retain and then release twice when sent a remove message? - nsmutablearray

This is a question related to this one.
When removing an object from a mutable array, I noticed that a 'retain' message might be sent to that object, so I searched and found the question above, where w.m gave an answer mentioning that the internal implementation of NSMutableArray might first retain an object and then release it twice when removing it.
My question is: is there any evidence for this? Or does anybody know any related details?
I ran into this while analyzing the following log. I know there is something wrong with my code, but my concern is not the bug itself; it is whether it is a fact that 'there would be some retain work when removeObject is called'.
Exception Type: EXC_BAD_INSTRUCTION (SIGILL)
Exception Codes: 0x0000000000000001, 0x0000000000000000
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Application Specific Information:
objc[299]: FREED(id): message retain sent to freed object=0x23f62a0
Thread 0 Crashed: Dispatch queue: com.apple.main-thread
0 libobjc.A.dylib 0x9a3694fd _objc_error + 116
1 libobjc.A.dylib 0x9a369533 __objc_error + 52
2 libobjc.A.dylib 0x9a36783a _freedHandler + 58
3 com.apple.CoreFoundation 0x9879a8cb -[NSMutableArray removeObject:range:identical:] + 331
4 com.apple.CoreFoundation 0x9879a770 -[NSMutableArray removeObject:] + 96

It doesn't matter what NSMutableArray does internally. It is of no concern to you. As long as it follows the memory management rules, i.e. it retains anything it needs to keep for later, and releases only things it has retained, it doesn't matter if it also retains and releases things 20 extra times in random places. Adding an extra retain-release pair never reduces the correctness of a program.
If you are getting a crash, then you are doing something wrong in your code.


Delay in marking items in queue as exception - BluePrism

I have encountered something that has basically left me scratching my head (BP 6.4):
Description:
I have a file with about 5000 cases. The first thing the process does is add these cases to the queue. But before it adds them, it cross-references each case with a database to check whether any of them are already closed. If there are cases in the file that are already closed (in my file there are 321 such cases), the process does the following: it first adds all cases that are NOT closed to the queue, then adds all cases that are closed. The process then marks all the already-closed cases as exceptions in the queue.
Issue:
I'm seeing bizarre behaviour at this stage: when the process marks the closed cases (321 cases from my file) in the queue as exceptions, not all of them get marked. I always get about 40-odd cases that don't get marked as exceptions. But if I check a few hours later, they are marked. As I am stopping the process after the cases are added and marked in the queue, none of these cases have been worked; they just seem to take time before they get marked as exceptions.
Has anyone seen this behaviour?
As answered in the comments:
once the 40-odd cases are marked as exception, the date/time of the exception is set to exactly the time when the initial cases were marked as exception (well, I say exactly; round about the same time) and not 2-3 hours later when these cases actually got marked
Knowing this fact, it is likely that what you're observing is something purely cosmetic. Unless in your workflow you require this data to be up-to-date in real time, it is unlikely that this affects workflows' functionality in any material way.
While trying to investigate the issue (and navigating in Control Room), Blue Prism threw the following exception:
Below is the actual exception detail (from detail section)
************** Exception Text **************
System.NullReferenceException: Object reference not set to an instance of an object.
at System.Windows.Forms.ListViewItem.set_Selected(Boolean value)
at AutomateUI.ctlWorkQueueList.SetSelectedQueue(Predicate`1 pred)
at AutomateUI.ctlWorkQueueList.set_SelectedId(Guid value)
at AutomateUI.ctlWorkQueueManagement.SelectQueue(QueueGroupMember q)
at AutomateUI.ctlControlRoom.ChangePanel(TreeNode node)
at AutomateUI.ctlControlRoom.HandleAfterSelect(Object sender, TreeViewEventArgs e)
at System.Windows.Forms.TreeView.OnAfterSelect(TreeViewEventArgs e)
at System.Windows.Forms.TreeView.TvnSelected(NMTREEVIEW* nmtv)
at System.Windows.Forms.TreeView.WmNotify(Message& m)
at System.Windows.Forms.TreeView.WndProc(Message& m)
at AutomateControls.Trees.FlickerFreeTreeView.WndProc(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
It looks like ListViewItem threw a NullReferenceException. An environment or connection issue could be causing this; I will contact support to investigate further.
That said, as per esqew's comment, this looks like a cosmetic issue rather than an actual bug.

go on OS X - Two libraries call system functions

I am finding it difficult to write something with a more or less common UI, at least for Mac. My application has to have a tray icon and be able to show system notifications.
The issue is the goroutines themselves. Any call to UI frameworks on Mac must be made from the main thread, or at least in a thread-safe manner.
The issue arises when I am already running the UI (well, for a GUI application that is a must, no?) and try to show a notification. The reason seems to be that the systray package's Init function has to be locked to the main thread using runtime.LockOSThread and never releases it. If I then try to show a notification, which also requires runtime.LockOSThread, it causes the following error:
2016-01-11 22:56:27.973 main[30162:4094392] *** Assertion failure in +[NSUndoManager _endTopLevelGroupings], /Library/Caches/com.apple.xbs/Sources/Foundation/Foundation-1256.1/Misc.subproj/NSUndoManager.m:359
2016-01-11 22:56:27.974 main[30162:4094392] +[NSUndoManager(NSInternal) _endTopLevelGroupings] is only safe to invoke on the main thread.
2016-01-11 22:56:27.977 main[30162:4094392] (
0 CoreFoundation 0x00007fff8d42bae2 __exceptionPreprocess + 178
1 libobjc.A.dylib 0x00007fff8bb03f7e objc_exception_throw + 48
2 CoreFoundation 0x00007fff8d42b8ba +[NSException raise:format:arguments:] + 106
3 Foundation 0x00007fff8cb4c88c -[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:] + 198
4 Foundation 0x00007fff8cad24c1 +[NSUndoManager(NSPrivate) _endTopLevelGroupings] + 170
5 AppKit 0x00007fff8514206a -[NSApplication run] + 844
6 main 0x0000000004166200 nativeLoop + 128
7 main 0x0000000004165bca _cgo_8c6479959095_Cfunc_nativeLoop + 26
8 main 0x000000000405a590 runtime.asmcgocall + 112
)
2016-01-11 22:56:27.977 main[30162:4094392] *** Assertion failure in +[NSUndoManager _endTopLevelGroupings], /Library/Caches/com.apple.xbs/Sources/Foundation/Foundation-1256.1/Misc.subproj/NSUndoManager.m:359
2016-01-11 22:56:27.978 main[30162:4094392] An uncaught exception was raised
2016-01-11 22:56:27.978 main[30162:4094392] +[NSUndoManager(NSInternal) _endTopLevelGroupings] is only safe to invoke on the main thread.
Is there a workaround for that? All I could think of so far is to put the UI and notifications into separate binaries and make them communicate with the main process over some sort of IPC. But I may be missing something.
Since there is not enough traction on this question, I've decided to post the solution I found while trying to work around this issue. I won't mark it as the answer yet, since someone else may provide a better one.
I moved one of the UI processes (namely the part that uses systray) into another binary, which I start using cmd := exec.Command(...) and cmd.Start(); I then pipe stdin and stdout and communicate with the child process through those.
The example code can be found on GitHub. Warning: there is an error in this gist where, after the child exits, the main process will start burning through CPU cycles. Feel free to fix it yourself.
The reason I did not want to go with RPC is that it would become slightly too complex for what I want to achieve, and it does not provide an easy way to do two-way communication.
It looks like the two libraries that you are using both correctly use runtime.LockOSThread to make main-thread-only API calls; unfortunately, to use more than one such library, you'll have to do something fancier than the example code that either provides. You'll need to write your own main thread / main.Main-invoked message loop that handles calls to multiple MTO APIs.
runtime.LockOSThread is part of the solution to operating with APIs such as this; the golang wiki has a page about how to use it to interact with "call from main thread only" APIs.
An extremely short description of how your program should change:
You'll want to use runtime.LockOSThread in main.init to make sure that the main thread is running main.Main; main.Main should be refactored into two parts:
1) start a goroutine or goroutines that run what previously was in main.Main;
2) enter a message loop that receives, on one or more channels, requests to take certain main-thread actions.

Why use nextTimeout in sp_session_process_events()?

I am writing a Spotify app in C#.
I am currently verifying that the sp_session_process_events() call is working properly.
Trying to be very scientific about it, I've been using the out parameter nextTimeout to try and prevent the need for the lib to call NotifyMainThreadCallback.
The calls seem to be as frequent with that feature as without it. The value of nextTimeout does not seem to be valid every time either. Below is a short example where I am only calling sp_session_process_events when required by NotifyMainThreadCallback.
00:00:08.299: - NotifyMainThreadCallback
00:00:08.312: sp_session_process_events() next process requested in 1000 ms
00:00:08.376: - NotifyMainThreadCallback
00:00:08.381: - NotifyMainThreadCallback
00:00:08.389: sp_session_process_events() next process requested in 922 ms
00:00:08.396: - UserinfoUpdatedCallback
00:00:08.401: - NotifyMainThreadCallback
00:00:08.409: sp_session_process_events() next process requested in 15 ms
00:00:08.415: - MetadataUpdatedCallback
00:00:08.419: sp_session_process_events() next process requested in 891 ms
So why use the nextTimeout at all? As far as I can see it can be ignored.
The next_timeout value is there to prevent you from calling sp_session_process_events too frequently, and is not necessarily intended to reduce the number of main thread 'wake-ups'. I don't see anything unusual about the timeout values you're seeing.
The notify_main_thread callback is often invoked from sp_session_process_events, which you should be calling from the main thread anyway. This shouldn't cause you huge problems. I suppose you could add some extra logic to stay in the event loop rather than signalling in those cases, but that could require more synchronisation than you already have.

Is NSIndexPath threadsafe?

Apple's multithreading docs don't list NSIndexPath as thread-safe or not! As an immutable class, I'd generally expect it to be thread-safe.
Previously, I'm sure the documentation used to state that NSIndexPath instances were shared and globally unique. That seems to have disappeared now though, leading me to suspect that design was revised for iOS5 / Mac OS X 10.7.
I'm seeing quite a lot of crash reports from customers on Mac OS X 10.6 (Snow Leopard) which appear to be crashing trying to access an index path. Thus I wonder: are the actual instances thread safe, but that the logic for pulling them out of the shared cache isn't? Does anybody have any insight?
Here's an example stack trace BTW:
Dispatch queue: com.apple.root.default-priority
0 libobjc.A.dylib 0x96513f29 _cache_getImp + 9
1 libobjc.A.dylib 0x965158f0 class_respondsToSelector + 59
2 com.apple.CoreFoundation 0x948bcb49 ___forwarding___ + 761
3 com.apple.CoreFoundation 0x948bc7d2 _CF_forwarding_prep_0 + 50
4 com.apple.Foundation 0x994b10c5 -[NSIndexPath compare:] + 93
5 com.apple.Foundation 0x99415686 _NSCompareObject + 76
6 com.apple.CoreFoundation 0x948af61c __CFSimpleMergeSort + 236
7 com.apple.CoreFoundation 0x948af576 __CFSimpleMergeSort + 70
8 com.apple.CoreFoundation 0x948af38c CFSortIndexes + 252
9 com.apple.CoreFoundation 0x948fe80d CFMergeSortArray + 125
10 com.apple.Foundation 0x994153d3 _sortedObjectsUsingDescriptors + 639
11 com.apple.Foundation 0x994150d8 -[NSArray(NSKeyValueSorting) sortedArrayUsingDescriptors:] + 566
To me, that looks like an NSIndexPath instance trying to compare itself to a deallocated instance.
So far the best answer I have is as I suspect:
As of OS X 10.7 and iOS 5, NSIndexPath is thread safe. Prior to that, instances are thread safe because they are immutable, but the shared retrieval of existing instances is not.
To my method which returns index paths on-demand, I did this:
- (NSIndexPath *)indexPath;
{
    NSIndexPath *result = … // create the path as appropriate
    return [[result retain] autorelease];
}
Since implementing that last line of code, we've had no more crash reports from index paths.
The index paths are created by -indexPathByAddingIndex: or +indexPathWithIndex:.
The results I am seeing make me pretty certain that (prior to 10.7/iOS5) these methods are returning an existing NSIndexPath instance. That instance is not retained by the current thread in any way though, so the thread which first created the instance (main in our case) is releasing the path (likely through popping the autorelease pool) and leaving our worker thread with a dangling pointer, which crashes when used, as seen in the question.
It's all a bit terrifying, because if my analysis is correct, the retain/autorelease dance I've added is simply replacing one race condition with another, less-likely one.
Prior to 10.7/iOS5, I can think of only one true workaround: Limit all creation of index paths to the main thread. That could be rather slow if such code gets called a lot, so could be improved — at the cost of memory — by maintaining some kind of instance cache of your own for background threads to use. If the cache is retaining a path, then you know it won't be deallocated by the main thread.
Apple don't specifically list NSIndexPath as thread safe, but they do say that immutable classes are generally safe and mutable ones generally aren't. Since NSIndexPath is immutable it's safe to assume it's thread safe.
But "thread safe" doesn't mean it can't cause crashes by being released on one thread before you use it on another. Thread safety generally just means that the mutator methods contain locking to prevent glitches when two threads set properties concurrently (which is why classes without mutator methods are generally thread-safe, although lazy getters and shared instances can also cause problems).
It sounds like your bug is more likely due to using an autorelease pool or some other mechanism that causes your object to be released at a time outside your control. You should probably ensure that any concurrently accessed objects are stored in properties of long-lived classes so that you can control their lifespan.
Creating an autoreleased object and accessing it from another thread after you've removed all strong references to it is a dangerous racing game that is likely to cause hard-to-trace crashes regardless of whether the object in question is "thread safe".

Interview Question on .NET Threading

Could you describe two methods of synchronizing multi-threaded write access performed
on a class member?
Could anyone help me understand what this question is asking and what the right answer is?
When you change data in C#, something that looks like a single operation may be compiled into several instructions. Take the following class:
public class Number {
private int a = 0;
public void Add(int b) {
a += b;
}
}
When you build it, you get the following IL code:
IL_0000: nop
IL_0001: ldarg.0
IL_0002: dup
// Pushes the value of the private variable 'a' onto the stack
IL_0003: ldfld int32 Simple.Number::a
// Pushes the value of the argument 'b' onto the stack
IL_0008: ldarg.1
// Adds the top two values of the stack together
IL_0009: add
// Sets 'a' to the value on top of the stack
IL_000a: stfld int32 Simple.Number::a
IL_000f: ret
Now, say you have a Number object and two threads call its Add method like this:
number.Add(2); // Thread 1
number.Add(3); // Thread 2
If you want the result to be 5 (0 + 2 + 3), there's a problem. You don't know when these threads will execute their instructions. Both threads could execute IL_0003 (pushing zero onto the stack) before either executes IL_000a (actually changing the member variable) and you get this:
a = 0 + 2; // Thread 1
a = 0 + 3; // Thread 2
The last thread to finish 'wins' and at the end of the process, a is 2 or 3 instead of 5.
So you have to make sure that one complete set of instructions finishes before the other set. To do that, you can:
1) Lock access to the class member while it's being written, using one of the many .NET synchronization primitives (like lock, Mutex, ReaderWriterLockSlim, etc.) so that only one thread can work on it at a time.
2) Push write operations into a queue and process that queue with a single thread. As Thorarin points out, you still have to synchronize access to the queue if it isn't thread-safe, but it's worth it for complex write operations.
There are other techniques. Some (like Interlocked) are limited to particular data types, and there are even more (like the ones discussed in Non-blocking synchronization and Part 4 of Joseph Albahari's Threading in C#), though they are more complex: approach them with caution.
In multithreaded applications, there are many situations where simultaneous access to the same data can cause problems. In such cases synchronization is required to guarantee that only one thread has access at any one time.
I imagine they mean using the lock-statement (or SyncLock in VB.NET) vs. using a Monitor.
You might want to read this page for examples and an understanding of the concept. If you have no experience with multithreaded application design, however, that will likely become apparent quickly should your new employer put you to the test. It's a fairly complicated subject, with many possible pitfalls such as deadlock.
There is a decent MSDN page on the subject as well.
There may be other options, depending on the type of member variable and how it is to be changed. Incrementing an integer for example can be done with the Interlocked.Increment method.
As an exercise and demonstration of the problem, try writing an application that starts 5 simultaneous threads, each incrementing a shared counter a million times. The intended end result would be 5 million, but that is (probably) not what you will end up with :)
Edit: made a quick implementation myself (download). Sample output:
Unsynchronized counter demo:
expected counter = 5000000
actual counter = 4901600
Time taken (ms) = 67
Synchronized counter demo:
expected counter = 5000000
actual counter = 5000000
Time taken (ms) = 287
There are a couple of ways, several of which are mentioned previously.
ReaderWriterLockSlim is my preferred method. It gives you database-style locking and allows for lock upgrading (although the syntax for that was incorrect in MSDN the last time I looked, and is very non-obvious).
lock statements. You treat a read like a write and just prevent access to the variable
Interlocked operations. These perform an operation on a value type in an atomic step. They can be used for lock-free threading (I really wouldn't recommend this).
Mutexes and Semaphores (haven't used these)
Monitor statements (this is essentially how the lock keyword works)
While I don't mean to denigrate other answers, I would not trust anything that does not use one of these techniques. My apologies if I have forgotten any.
