NSFetchedResultsController: keep sort order same as IN predicate - core-data

I have a list of uids, and I need to get Core Data objects with those uids:
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"uid IN %@", uids];
[request setPredicate:predicate];
Now, I want to keep the objects in the same order as the listed uids. So if the array is [3, 1, 2], then I want the returned objects to be in that order as well.
Not setting a sort descriptor isn't an option with NSFetchedResultsController, so what should I set the sort descriptor to?

The sort descriptor can be set on a NSFetchRequest instance.
[request setSortDescriptors:descriptors];
where descriptors is an array of NSSortDescriptor instances.
A NSSortDescriptor can be created as follows:
NSSortDescriptor *sortDescriptorForKey = [[NSSortDescriptor alloc] initWithKey:@"someKey" ascending:YES]; // YES or NO based on your needs
As for your question: what determines the order of your uids? In my experience (I hadn't thought about this until now), you cannot express an arbitrary, hand-picked ordering as a sort descriptor, because the descriptor has to be evaluated by the persistent store. If the uid order follows some rule (for example, descending creation date), you could replicate that rule as a sort descriptor on the NSFetchRequest.
Maybe someone else has other ideas.

Objects in a persistent store are unordered. Typically you should impose order at the controller or view layer, based on an attribute such as creation date. If there is order inherent in your data, you need to explicitly model that.(https://developer.apple.com/library/mac/documentation/cocoa/conceptual/coredata/articles/cdFAQ.html)
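Since the store cannot evaluate a hand-picked uid order, one workaround is to fetch with any store-evaluable sort descriptor and then re-sort in memory against the uids array. A sketch (assuming uids holds values comparable to the uid attribute); note that NSFetchedResultsController will still display its own sort order, so this only helps when you can present the re-sorted array yourself:

```objc
NSArray *fetched = [context executeFetchRequest:request error:NULL];
NSArray *ordered = [fetched sortedArrayUsingComparator:
    ^NSComparisonResult(id obj1, id obj2) {
        // Order objects by the position of their uid in the uids array.
        NSUInteger idx1 = [uids indexOfObject:[obj1 valueForKey:@"uid"]];
        NSUInteger idx2 = [uids indexOfObject:[obj2 valueForKey:@"uid"]];
        if (idx1 < idx2) return NSOrderedAscending;
        if (idx1 > idx2) return NSOrderedDescending;
        return NSOrderedSame;
    }];
```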

Related

iCloud Core Data Not Synching All Records On Second Device

When I create a sqlite core data store in iCloud and seed on one device, it works fine and has around 6,000 records. When I create the store on the second device, it syncs, but only ends up with around 4,000 records (presumably from the transaction logs). My sync code in response to NSPersistentStoreDidImportUbiquitousContentChangesNotification reads...
NSUndoManager * undoManager = [context undoManager];
[undoManager disableUndoRegistration];
[context mergeChangesFromContextDidSaveNotification:info];
[context processPendingChanges];
[undoManager enableUndoRegistration];
However, when I insert a debug statement...
NSSet *updatedObjects = [[info userInfo] objectForKey:NSUpdatedObjectsKey];
NSSet *deletedObjects = [[info userInfo] objectForKey:NSDeletedObjectsKey];
NSSet *insertedObjects = [[info userInfo] objectForKey:NSInsertedObjectsKey];
NSLog(@"found transaction log. updated %lu, deleted %lu, inserted %lu", (unsigned long)updatedObjects.count, (unsigned long)deletedObjects.count, (unsigned long)insertedObjects.count);
I get an unexpected '0' for EVERYTHING, yet I know it's syncing something because the database has around 4,000 records in it.
Does anyone have any advice on what might be going on here?
Thanks
Ray
If you have a look at the sqlite store itself, you will see that iOS adds columns and tables to keep its own bookkeeping. There's no way to add a primary key in code, unless you calculate one yourself and store it in a property; but then again, Core Data doesn't care about it. A precalculated ID only serves you within your application logic.
For the iCloud part, you are right: iCloud transfers data by means of transaction logs. The logs are copied to each device, and the sqlite store gets populated from them.
The documentation says transaction logs are kept for a reasonable amount of time and then discarded. That could be why you end up with only 4,000 records instead of 6,000.
Also (it is not your case, but worth noting), in my experience it is better to introduce iCloud from the beginning, when all devices are empty. If you introduce iCloud when devices are already partially filled with their own data, you can run into weird problems.
If you haven't already, I also suggest turning on iCloud debugging in the Xcode scheme.
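For reference, Core Data's iCloud logging can be turned on with a launch argument in the scheme's Arguments tab (log level 3 is the most verbose; this is the argument documented for the iOS 7-era integration):

```
-com.apple.coredata.ubiquity.logLevel 3
```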
My personal opinion: synchronizing multiple databases is quite a difficult task, and I still wonder how it can be done correctly by only sharing transaction logs. In fact, at least on iOS 5, Core Data with iCloud was completely unreliable, and dangerous.

Core Data - How often to fetch objects?

So I do a lot of fetching of my objects. At startup, for instance, I set an unread count for the badge on a tab. To get this unread count I need to fetch my data-model objects to see which of them have the unread flag. So there we have one fetch. Then right after that method I do another fetch of all my data-model objects to do something else. And then on the view controller I need to display my data-model objects, so I do another fetch there, and so on.
So there are a lot of calls like this: NSArray *dataModelObjects = [moc executeFetchRequest:request error:&error];
This seems kind of redundant to me. Since I will be working a lot with my data-model objects, can I not just fetch them once in the application and access them through an instance variable whenever I need them? But I always want up-to-date data, and data-model objects can be added and/or deleted.
Am I making any sense on what I want to achieve here?
One of the concepts and benefits of Core Data is that you don't need to hit the database every time you need an object; that's what NSManagedObjectContext is for. It holds the objects retrieved from the store, so if you ask again for an object you've already fetched, it will be really fast.
And every change made to those objects in the NSManagedObjectContext is automatically visible to you.
But if changes were made in the store behind the context's back, they may not be reflected in the NSManagedObjectContext, so you'll have to refresh the objects. You can read about keeping objects up to date in Apple's Core Data documentation.
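For the badge example in particular, note that you don't need to materialize any objects just to get a count; countForFetchRequest: asks the store directly. A sketch (the entity name and unread attribute are assumptions based on the question):

```objc
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Message"]; // hypothetical entity
request.predicate = [NSPredicate predicateWithFormat:@"unread == YES"];

NSError *error = nil;
NSUInteger unreadCount = [moc countForFetchRequest:request error:&error];
// Use unreadCount for the tab badge without fetching full objects.
```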

Preferred way to store a child object in Azure Table Storage

I did a little experiment with storing child objects in Azure Table Storage today.
Something like Person.Project where Person is the table entity and Person is just a POCO. The only way I was able to achieve this was by serializing the Project into byte[]. It might be what is needed, but is there another way around?
Thanks
Rasmus
Personally I would prefer to store the Project in a different table with the same partition key as its parent Person. That ensures the person and its underlying projects are stored in the same storage cluster. On the code side, I would like to have some attributes on the reference properties, for example [Reference(typeof(Person))] and [Collection(typeof(Project))], and in the data-context class I could use an extension method to retrieve the child elements on demand.
In terms of the original question though, you certainly can store both parent and child in the same table - were you seeing an error when trying to do so?
One other thing you sacrifice by separating out parent and child into separate tables is the ability to group updates into a transaction. Say you created a new 'person' and added a number of projects for that person, if they are in the same table with same partition key you can send the multiple inserts as one atomic operation. With a multi-table approach, you're going to have to manage atomicity yourself (if that's a requirement of your data consistency model).
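That single-table transactional write can be sketched as an entity group transaction, assuming the classic Microsoft.WindowsAzure.Storage SDK, a CloudTable reference named table, and hypothetical PersonEntity/ProjectEntity types deriving from TableEntity that share a partition key:

```csharp
// All entities in one batch must share a partition key; the batch then
// commits atomically (up to 100 operations per batch).
TableBatchOperation batch = new TableBatchOperation();
batch.Insert(person);
foreach (ProjectEntity project in projects)
{
    batch.Insert(project);
}
table.ExecuteBatch(batch);
```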
I'm presuming that when you say person is just a POCO you mean Project is just a POCO?
My preferred method is to store the child object in its own Azure table with the same partition key and row key as the parent. The main reason is that this allows you to run queries against the child object if you have to. You can't run a single query that uses properties from both parent and child, but at least you can query the child entity on its own. Another advantage is that the child can take up more space: the limit on how much data you can store in a single property is smaller than the amount you can store in a whole row.
If neither of these things is a concern for you, then what you've done is perfectly acceptable.
I have come across a similar problem and have implemented a generic object flattener/recomposer API that will flatten your complex entities into flat EntityProperty dictionaries and make them writeable to Table Storage, in the form of DynamicTableEntity.
Same API will then recompose the entire complex object back from the EntityProperty dictionary of the DynamicTableEntity.
Have a look at: https://www.nuget.org/packages/ObjectFlattenerRecomposer/
Usage:
//Flatten complex object (of type ie. Order) and convert it to EntityProperty Dictionary
Dictionary<string, EntityProperty> flattenedProperties = EntityPropertyConverter.Flatten(order);
// Create a DynamicTableEntity and set its PK and RK
DynamicTableEntity dynamicTableEntity = new DynamicTableEntity(partitionKey, rowKey);
dynamicTableEntity.Properties = flattenedProperties;
// Write the DynamicTableEntity to Azure Table Storage using the client SDK
// (assuming a CloudTable reference named "table"):
table.Execute(TableOperation.Insert(dynamicTableEntity));
// Read the entity back from Azure Table Storage as a DynamicTableEntity using the same PK and RK:
TableResult result = table.Execute(TableOperation.Retrieve<DynamicTableEntity>(partitionKey, rowKey));
DynamicTableEntity entity = (DynamicTableEntity)result.Result;
//Convert the DynamicTableEntity back to original complex object.
Order order = EntityPropertyConverter.ConvertBack<Order>(entity.Properties);

Pre-cache a fetch with Core Data?

I've got an application using Core Data (successfully), and I have an object graph with a few root nodes, each of which has a to-many relationship to tens of thousands of children. During my app's run, based on user input, I do the following (pseudocode):
Find the user selected root node,
Then fetch all its children, in ascending alphabetic order based on a property.
The trick here is that this data is constant. My Core Data store is read-only.
Doing this fetch is very time consuming. How can I pre-cache these results so that I completely avoid that fetch and sort? The fetched results are used to fill a UITableView. When a user selects a cell of that table view it drills down into deeper data in the Core Data store; that data doesn't need to be precached.
If you are using NSFetchedResultsController with your table view, you can set a cacheName. You can set a fetchBatchSize on the NSFetchRequest to bring the results in chunks. And also make sure the attribute you're sorting on is indexed in the Core Data model.
This code comes from the Core Data Programming Guide (chapter "Core Data Performance") and shows how to prefetch a relationship.
NSManagedObjectContext *context = /* get the context */;
NSEntityDescription *employeeEntity = [NSEntityDescription entityForName:@"Employee"
                                                  inManagedObjectContext:context];
NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:employeeEntity];
[request setRelationshipKeyPathsForPrefetching:[NSArray arrayWithObject:@"department"]];
Also, use a fetch limit to cap what is actually returned by the fetch. That will help performance.
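Putting these suggestions together on one request (the entity name, sort key, cache name, and numeric values are placeholders, not from the question):

```objc
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Child"]; // hypothetical entity
request.sortDescriptors = @[[[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES]];
request.fetchBatchSize = 50;  // fault rows in as the table view scrolls
request.fetchLimit = 500;     // hypothetical cap on what the fetch returns

NSFetchedResultsController *frc =
    [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                        managedObjectContext:context
                                          sectionNameKeyPath:nil
                                                   cacheName:@"ChildrenCache"]; // persists section/ordering info
```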

Core Data - Limiting number of saved objects

Is there a way to limit the number of saved objects in Core Data, like a stack where the oldest objects are automatically deleted?
No, that is something you would need to implement in code. Just before you call -save: on the NSManagedObjectContext, you can query the context for how many objects are going to be inserted and perform your custom logic at that point.
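A sketch of that pre-save logic (the entity name "LogEntry", the createdAt attribute, and the cap of 100 are all assumptions for illustration):

```objc
NSUInteger cap = 100;
NSError *error = nil;

// How many objects exist plus how many this save will insert.
NSFetchRequest *countRequest = [NSFetchRequest fetchRequestWithEntityName:@"LogEntry"];
NSUInteger existing = [context countForFetchRequest:countRequest error:&error];
NSUInteger incoming = [[context insertedObjects] count];

if (existing + incoming > cap) {
    // Delete the oldest entries to make room before saving.
    NSFetchRequest *oldest = [NSFetchRequest fetchRequestWithEntityName:@"LogEntry"];
    oldest.sortDescriptors = @[[[NSSortDescriptor alloc] initWithKey:@"createdAt" ascending:YES]];
    oldest.fetchLimit = existing + incoming - cap;
    for (NSManagedObject *object in [context executeFetchRequest:oldest error:&error]) {
        [context deleteObject:object];
    }
}
[context save:&error];
```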
