Using AFIncrementalStore without NSFetchedResultsController or NSArrayController - core-data

I'm trying to fetch some data without NSFetchedResultsController or NSArrayController. The problem is that due to AFIncrementalStore's asynchronous nature, I'm always getting stale data (or no data at all if it hasn't been fetched yet).
Is there any recommended approach to handle this problem?

I think the preferred approach to this problem is to use NSFetchedResultsController (at least on iOS). If you really don't want to use it, you might want to subscribe to one of these notifications, depending on the granularity of changes you expect to receive:
NSManagedObjectContextObjectsDidChangeNotification
NSManagedObjectContextDidSaveNotification
and then refetch the data when one of these notifications is posted.
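A minimal sketch of that approach, assuming a hypothetical refetchData method that re-runs your NSFetchRequest and reloads the UI:

// Subscribe in e.g. viewDidLoad; self.managedObjectContext is the context
// backing your UI.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(contextDidSave:)
                                             name:NSManagedObjectContextDidSaveNotification
                                           object:self.managedObjectContext];

- (void)contextDidSave:(NSNotification *)notification
{
    // The save may happen on a background queue, so hop to the main queue
    // before touching the UI.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self refetchData]; // hypothetical: re-execute the fetch and reload
    });
}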

You can also listen for AFIncrementalStore's own notifications, which tell you when remote data has been fetched, in order to update the UI and your objects:
AFIncrementalStoreContextDidFetchRemoteValues, etc.
https://github.com/AFNetworking/AFIncrementalStore/blob/master/AFIncrementalStore/AFIncrementalStore.m#L29
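A sketch of observing it (the notification name constant is declared by AFIncrementalStore; the handler name and refetchData, as in the sketch above, are hypothetical):

[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(remoteValuesDidFetch:)
                                             name:AFIncrementalStoreContextDidFetchRemoteValues
                                           object:self.managedObjectContext];

- (void)remoteValuesDidFetch:(NSNotification *)notification
{
    // Remote values for an earlier fetch request have now arrived;
    // refetch so the UI picks them up.
    [self refetchData];
}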

Related

How to update a bunch of data with Event Sourcing

I'm wondering how to update a bunch of data for an aggregate under the Event Sourcing concept.
In a traditional application I would take some data such as name, date of birth, etc. and put it into an existing object; as I understand it, in the ES concept this approach is wrong, so should I perform different events to update different parts of the aggregate root? If so, how do I build the REST API? How do I handle validation?
In a traditional application I would take some data such as name, date of birth, etc. and put it into an existing object; as I understand it, in the ES concept this approach is wrong,
Short answer: that approach is fine -- what changes in event sourcing is how you keep track of the changes in your service.
A way to think of a stream of events is a sequence of patch-documents. There's nothing wrong with changing multiple fields in a single patch document, and that is fine in events as well.
This question is really too broad for SO. You should google “event sourcing basics in azure” to find detailed articles, github projects, videos, and other responses to these questions.
In general, in Event Sourcing there are two main ideas you need: Messages and Events. A typical process (not the only option, but a common one) is as follows. A message is created by your UI, which makes a request for a change to be made to an AR. Validation for that message is done at the message creation source.
The message is then sent to an API, where it is validated again since you can't trust all possible senders. The request is processed, resulting in changes made to an AR. An event is then created describing the changes made, and that event is placed on an event source (Azure Event Hub, Kafka, Kinesis, a DB, whatever). This list of events is kept forever and describes each and every change made to that AR throughout time, including the initial creation request. To get the current state of the AR, just add up all the events.
The key idea that is confusing when learning Event Sourcing is the two different types of “events”. Messages ask for a change to be made, Events record that a change has been made.
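As a toy illustration of "add up all the events" (the event shapes here are invented for the example), replaying a stream of patch-like events over an empty state yields the AR's current state:

// Each event is a patch-like dictionary of field changes plus a type tag.
NSArray *events = @[
    @{@"type": @"PERSON_REGISTERED", @"name": @"Ada", @"birthdate": @"1815-12-10"},
    @{@"type": @"NAME_CHANGED",      @"name": @"Ada Lovelace"},
];

// Fold the events in order to reconstruct the current state.
NSMutableDictionary *state = [NSMutableDictionary dictionary];
for (NSDictionary *event in events) {
    NSMutableDictionary *changes = [event mutableCopy];
    [changes removeObjectForKey:@"type"];      // keep only the field changes
    [state addEntriesFromDictionary:changes];  // apply the patch
}
// state is now {name: "Ada Lovelace", birthdate: "1815-12-10"}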
As already answered, the batch update approach is fine.
I suggest focusing on the event consumption code. If all you have on your read side is a complete aggregate representation, then a generic *_UPDATED event is OK.
But if parts of your system are interested only in a particular part of your aggregate, you might want to update that part separately, so those systems don't have to analyze all events and dig for particular data.
For example, some demographic analysis system may only be interested in the birthdate. It would be much easier for that system to listen for a BIRTHDATE_SET event and ignore all others.
Fine-grained events like this also reduce coupling, because they require less knowledge of the internal event data structure.
It feels like you still have an active record way of looking at things.
You should model the things that happen to your entity as events rather than the impact of things happening.
So to my mind all of that data might be gathered in a "Person was registered" event, but an "Address added" event might also exist, in which case your single command might end up appending two events to the event stream.

Read/Write custom objects on multiple threads

I need to be able to grab objects from Core Data and keep them in a mutable array in memory in order to avoid constant fetching and slow UI/UX. The problem is that I grab the objects on other threads. I also write to these objects at times on other threads. Because of this I can't just save the NSManagedObjects in an array and call something like myManagedObjectContext.performBlock or myObject.managedObjectContext.performBlock, since you are not supposed to pass MOCs between threads.
I was thinking of using a custom object to hold the data I need from the Core Data objects. This feels a little stupid since I already made a Model/NSManagedObject class for the entities, and since the custom object would be mutable it still would not be thread safe. This means I would have to do something like a serial queue for object manipulation on multiple threads? So, for example, any time I want to read/write/delete an object I have to throw it into my object serialQueue.
This all seems really nasty so I am wondering are there any common design patterns for this problem or something similar? Is there a better way of doing this?
I doubt you need custom objects between Core Data and your UI. There is a better answer:
Your UI should read from the managed objects that are associated with the main thread (which it sounds like you are doing).
When you make changes on another thread those changes will update the objects that are on your main thread. That is what Core Data is designed to do.
You just need to listen to those changes and have your UI react to them.
There are several ways to do this:
NSFetchedResultsController. Kind of like your mutable array, but it has a delegate that it will notify when objects change. Highly recommended.
Listen for KVO changes on the property that you are displaying in your UI. Whenever the property changes you get a KVO notification and can react to it. More code but also more narrowly focused.
Listen for NSManagedObjectContextDidSaveNotification events via the NSNotification center and react to the notification. The objects that are being changed will be in the userInfo of the notification.
Of the three, using an NSFetchedResultsController is usually the right answer. With that in place you just change what you need to change on other threads, save the context, and you are done. The UI will update itself.
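A minimal setup sketch (the entity name "Item", the sort key "name", and self.mainContext are placeholders):

NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Item"];
request.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"name"
                                                          ascending:YES]];

NSFetchedResultsController *frc =
    [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                        managedObjectContext:self.mainContext
                                          sectionNameKeyPath:nil
                                                   cacheName:nil];
frc.delegate = self; // implement NSFetchedResultsControllerDelegate callbacks

NSError *error = nil;
if (![frc performFetch:&error]) {
    NSLog(@"Fetch failed: %@", error);
}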
One pattern is to pass along only the object IDs (NSManagedObjectID instances are immutable and thus thread safe) and re-fetch them on the main thread. This way every NSManagedObject will belong to the appropriate thread.
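A sketch of that pattern (the context and array names are placeholders):

// On the background queue: capture only the IDs; the managed objects
// themselves must not cross threads.
NSArray *objectIDs = [fetchedObjects valueForKey:@"objectID"];

// On the main thread: re-materialize each object in the main context.
dispatch_async(dispatch_get_main_queue(), ^{
    for (NSManagedObjectID *objectID in objectIDs) {
        NSError *error = nil;
        NSManagedObject *object = [mainContext existingObjectWithID:objectID
                                                              error:&error];
        if (object) {
            // safe to read/display the main-thread instance here
        }
    }
});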
Alternatively, you can use mergeChangesFromContextDidSaveNotification, which will update the objects on the main thread with the changes made on the secondary thread. You'd still need to fetch for new objects, though.
The "caveat" is that you need to save the secondary context in order to get your hands on a notification like this. Also, any newly created but not yet saved objects on the main thread will be lost after applying the merge; however, this might not pose problems if your main thread only consumes Core Data objects.

RestKit: two consecutive enqueueBatchOfObjectRequestOperations not directly mapped to Core Data

I'm using RestKit 0.20.2 in combination with MagicalRecord (important for any potential context problems you might think of).
My app saves tickets (entity) with items (entity), and each item has a tax (entity). My use case is this: I need to sync Core Data with my web server when the iPad reconnects to the internet after a long period of being unable to send the data (for whatever reason).
My problem is syncing a lot of objects (it can go from 100 to 1,000 or even more). To be able to post that many objects without timeouts, I set the RestKit concurrency:
[RKObjectManager sharedManager].operationQueue.maxConcurrentOperationCount = 3;
Now this is working absolutely fine. But my problem is that I have a lot of redundant entities that sync with every item.
For instance, each item has a tax, but I only have two taxes in my model that need to be synced with the web service and then sent as a relationship of the item (I only send the ID of the tax). So to circumvent that problem, for each postItem I check whether the related tax has an ID: if yes, I can send the item directly with the tax relationship in it; if not, I need to sync the tax first, then the item with the returned tax ID.
The workaround works as expected too. But again there is a problem: RestKit isn't saving the newly created tax ID between two requests, so instead of sending the tax once, it sends it every time it encounters it inside an item, and only once all the operations are done does it save the newly created tax IDs.
To improve that I dug a bit into RestKit and found:
- (void)enqueueBatchOfObjectRequestOperations:(NSArray *)operations
                                     progress:(void (^)(NSUInteger numberOfFinishedOperations, NSUInteger totalNumberOfOperations))progress
                                   completion:(void (^)(NSArray *operations))completion
So now I'm building RKManagedObjectRequestOperations for my tax entities and batching them.
Then I sync the items. It's more efficient, and I don't need to set a dependency between operations (because I need them executed in a certain order: taxes, then items, then the whole ticket).
PROBLEM: Between the two batch enqueues, RestKit doesn't map the result of the first batch immediately, even if I explicitly call
[[RKManagedObjectStore defaultStore].mainQueueManagedObjectContext saveToPersistentStore:&error]
It isn't mapped because after the first batch of taxes I send all the items, and I can see that the tax IDs aren't set; yet after all the batches are done, I can clearly see them mapped correctly in my Core Data file. So I thought it was a context issue, but when I dig into RestKit, and more specifically into
appropriateObjectRequestOperationWithObject:(id)object
                                     method:(RKRequestMethod)method
                                       path:(NSString *)path
                                 parameters:(NSDictionary *)parameters
I can see, at line 580:
NSManagedObjectContext *managedObjectContext = [object respondsToSelector:@selector(managedObjectContext)] ? [object managedObjectContext] : self.managedObjectStore.mainQueueManagedObjectContext;
That sets the main-queue context (and not the object's own context) for the operations (I've checked with breakpoints), so calling save or saveToPersistentStore should propagate the changes from child contexts to the main queue and... this is where I lost hope and turned to Stack Overflow ;)
As usually happens, I found the solution after posting on SO :)
The problem was that the RKManagedObjectRequestOperations were all created before RestKit actually sent the information, so the context was the same for all the requests (as mentioned in the appropriateObjectRequestOperationWithObject method), and the changes weren't propagated because the context reference was the "old" reference.
To have the information available in the subsequent requests, I just build the RKManagedObjectRequestOperations in the enqueueBatchOfObjectRequestOperations completion block, and now everything works fine with the newly created tax IDs ;)
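In code, the fix looks roughly like this (the Tax/Item classes, the unsynced arrays, and the nil paths are placeholders; the key point is that the item operations are only constructed inside the completion block, after the tax responses have been mapped and saved):

RKObjectManager *manager = [RKObjectManager sharedManager];

// Build one POST operation per unsynced tax.
NSMutableArray *taxOperations = [NSMutableArray array];
for (Tax *tax in unsyncedTaxes) {
    [taxOperations addObject:
        [manager appropriateObjectRequestOperationWithObject:tax
                                                      method:RKRequestMethodPOST
                                                        path:nil
                                                  parameters:nil]];
}

[manager enqueueBatchOfObjectRequestOperations:taxOperations
                                      progress:nil
                                    completion:^(NSArray *operations) {
    // The tax IDs are mapped back into Core Data by now, so operations
    // built here see contexts that already contain them.
    NSMutableArray *itemOperations = [NSMutableArray array];
    for (Item *item in unsyncedItems) {
        [itemOperations addObject:
            [manager appropriateObjectRequestOperationWithObject:item
                                                           method:RKRequestMethodPOST
                                                             path:nil
                                                       parameters:nil]];
    }
    [manager enqueueBatchOfObjectRequestOperations:itemOperations
                                          progress:nil
                                        completion:nil];
}];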

Core Data: Deleting causes 'NSObjectInaccessibleException' from NSOperation with a reference to a deleted object

My application has NSOperation subclasses that fetch and operate on managed objects. My application also periodically purges rows from the database, which can result in the following race condition:
A background operation fetches a bunch of objects (from a thread-specific context). It will iterate over these objects and do something with their properties.
A bunch of rows are deleted in the main managed object context.
The background operation accesses a property on an object that was deleted from the main context. This results in an 'NSObjectInaccessibleException', reason: 'CoreData could not fulfill a fault'
Ideally, the objects fetched by the NSOperation could still be operated on even if one is deleted in the main context. The best ways I can think of to achieve this are:
Call [request setReturnsObjectsAsFaults:NO] to ensure that Core Data won't try to fulfill a fault for an object that no longer exists in the main context. The problem here is I may need to access the object's relationships, which (to my understanding) will still be faulted.
Iterate through the managed objects up front and copy the properties I will need into separate non-managed objects. The problem here is that (I think) I will need to synchronize/lock this part, in case an object is deleted in the main context before I can finish copying.
Am I missing something obvious? It doesn't seem like what I'm trying to accomplish is too out of the ordinary. Thanks for your help.
You said each thread has its own context. That's good. However, they also need to stay synchronized with changes to each other (how depends on their hierarchy).
Are they all assigned to the same persistent store coordinator, or do they have parent/child relationships?
Siblings should monitor NSManagedObjectContextObjectsDidChangeNotification from other siblings. Parents will automatically get notified when a child context saves.
I ended up mitigating this by performing both fetches and deletes on the same queue.
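For example, with a single serial dispatch queue owning all database work (the names are illustrative):

// All fetch and purge work funnels through one serial queue, so a purge can
// never run in the middle of a background fetch/iteration.
dispatch_queue_t storeQueue = dispatch_queue_create("com.example.store", DISPATCH_QUEUE_SERIAL);

dispatch_async(storeQueue, ^{
    // fetch objects and operate on their properties
});

dispatch_async(storeQueue, ^{
    // purge rows; this runs only after the block above has finished
});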
Great question; I can only provide a partial answer and would really like to know this too. Unfortunately, your own solution is more of a workaround than a real answer. What if the background operation is very long and you can't resort to running it on the main thread?
One thing I can say is that you don't have to call [request setReturnsObjectsAsFaults:NO], since the fetch request will load the data into the row cache and will not go back to the database when a fault fires for one of the fetched objects (see Apple's documentation for NSFetchRequest). This doesn't help with relationships, though.
I've tried the following:
On the NSManagedObjectContextWillSave notification, wait for the current background task to finish and prevent new tasks from starting, with something like:
- (void)contextWillSave:(NSNotification *)notification {
    dispatch_sync(self.backgroundQueue, ^{
        self.suspendBackgroundOperation = YES;
    });
}
Unset suspendBackgroundOperation on the NSManagedObjectContextDidSave notification.
However, the dispatch_sync call introduces possible deadlocks, so this doesn't really work either (see my related question). Plus it would still block the main thread until a potentially lengthy background operation finishes.

What's the best way to keep count of the data set size information in Core Data?

Right now, whenever I need to access my data set size (and that can be quite frequent), I perform a countForFetchRequest on the managedObjectContext. Is this a bad thing to do? Should I maintain the count locally instead? The reason I went this route is to ensure I get a 100% correct answer. With Core Data being accessed from more than one place (for example, through NSFetchedResultsController as well), it's hard to keep an accurate count locally.
-countForFetchRequest: is always evaluated in the persistent store. When using the SQLite store, this will result in IO being performed.
Suggested strategy:
Cache the count returned from -countForFetchRequest:.
Observe NSManagedObjectContextObjectsDidChangeNotification for your own context.
Observe NSManagedObjectContextDidSaveNotification for related contexts.
For the simple case (no fetch predicate) you can update the count from the information contained in the notification without additional IO (see the sketch after this list).
Alternately, you can invalidate your cached count and refresh via -countForFetchRequest: as necessary.
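A sketch of the cache-and-update strategy for the no-predicate case (the entity name is a placeholder, and a real implementation would also filter the notification's sets by entity):

// Seed the cache with one real count against the store.
NSFetchRequest *countRequest = [NSFetchRequest fetchRequestWithEntityName:@"Item"];
NSError *error = nil;
self.cachedCount = [context countForFetchRequest:countRequest error:&error];

// Keep it current from the change notification, with no additional IO.
[[NSNotificationCenter defaultCenter] addObserverForName:NSManagedObjectContextObjectsDidChangeNotification
                                                  object:context
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    NSUInteger inserted = [note.userInfo[NSInsertedObjectsKey] count];
    NSUInteger deleted  = [note.userInfo[NSDeletedObjectsKey] count];
    self.cachedCount = self.cachedCount + inserted - deleted;
}];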
