Use NSPersistentContainer viewContext to save to disk? - core-data

Is it safe to use the viewContext to save into the persistent store if I have large amounts of data? For example, I have 1000 records in my temporary background context, which is a child of the viewContext of the NSPersistentContainer. Once I am done saving all 1000 records in the background context, I want to save them through the viewContext to persist them to the database. Is this the right approach, or should I create a background context for saving to the persistent store?

Generally, I would use a background context for saving large amounts of data and let the main context pick up the changes from the persistent store.
I try to use the main context as a read-only context as much as possible in apps, and use background or child contexts for saving and editing.
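A minimal sketch of that arrangement, assuming an NSPersistentContainer named container and a placeholder entity called "RecordEntity" (both names are illustrative, not from the question):

import CoreData

// Let the read-only view context automatically pick up changes saved by background contexts.
container.viewContext.automaticallyMergesChangesFromParent = true

container.performBackgroundTask { context in
    // This context is rooted at the persistent store coordinator, not the viewContext,
    // so saving it writes to the store without blocking the main queue.
    for name in namesToImport {   // namesToImport: [String], a placeholder for the source data
        let record = NSEntityDescription.insertNewObject(forEntityName: "RecordEntity", into: context)
        record.setValue(name, forKey: "name")
    }
    do {
        try context.save()
    } catch {
        print("Background save failed: \(error)")
    }
}

With this setup the viewContext stays a read-only main-queue context and simply merges whatever the background import saves.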

Related

Hazelcast Map reload on demand

Hazelcast 3.2-RC1 evaluation:
I am not able to find any Hazelcast API to re-load, i.e. trigger the MapLoader (loadAllKeys(), loadAll()), on demand.
I see this auto-load (ALL) happens only when the server starts, but I need a way to re-load on demand when required, to re-synchronize with the underlying database.
Map.clear() clears all the data, but I can't find any way to re-load automatically; do I have to write additional code to populate the data and push it to the cache?
Can someone advise if there are any workarounds?
Thanks
The documentation says that the MapStore is called if a key is not in memory. So after you clear the map, it will be re-populated simply by calling get() on it. You will only have the data in memory that is actually used.
On the other hand, the MapStore is called "when the map is first touched/used". Maybe you can create a new Hazelcast map and switch to that new map.
See http://www.hazelcast.org/docs/latest/manual/html-single/hazelcast-documentation.html#persistence for more information.
Regards
Thorsten

Preferred way to store a child object in Azure Table Storage

I did a little experiment with storing child objects in Azure Table Storage today.
Something like Person.Project, where Person is the table entity and Person is just a POCO. The only way I was able to achieve this was by serializing the Project into a byte[]. It might be what is needed, but is there another way around it?
Thanks
Rasmus
Personally I would prefer to store the Project in a different table with the same partition key as its parent, i.e. its Person's partition key. This ensures that the person and underlying projects will be stored in the same storage cluster. On the code side, I would like to have some attributes on top of the reference properties, for example [Reference(typeof(Person))] and [Collection(typeof(Project))], and in the data context class I can use some extension method to retrieve the child elements on demand.
In terms of the original question though, you certainly can store both parent and child in the same table - were you seeing an error when trying to do so?
One other thing you sacrifice by separating parent and child into separate tables is the ability to group updates into a transaction. Say you created a new 'person' and added a number of projects for that person: if they are in the same table with the same partition key, you can send the multiple inserts as one atomic operation. With a multi-table approach, you're going to have to manage atomicity yourself (if that's a requirement of your data consistency model).
I'm presuming that when you say person is just a POCO you mean Project is just a POCO?
My preferred method is to store the child object in its own Azure table with the same partition key and row key as the parent. The main reason is that this allows you to run queries against the child object if you have to. You can't run just one query that uses properties from both parent and child, but at least you can run queries against the child entity. Another advantage is that the child class can take up more space: the limit on how much data you can store in a single property is smaller than the amount you can store in a whole row.
If neither of these things are a concern for you, then what you've done is perfectly acceptable.
I have come across a similar problem and have implemented a generic object flattener/recomposer API that will flatten your complex entities into flat EntityProperty dictionaries and make them writeable to Table Storage, in the form of DynamicTableEntity.
Same API will then recompose the entire complex object back from the EntityProperty dictionary of the DynamicTableEntity.
Have a look at: https://www.nuget.org/packages/ObjectFlattenerRecomposer/
Usage:
//Flatten complex object (of type e.g. Order) and convert it to an EntityProperty dictionary
Dictionary<string, EntityProperty> flattenedProperties = EntityPropertyConverter.Flatten(order);
// Create a DynamicTableEntity and set its PK and RK
DynamicTableEntity dynamicTableEntity = new DynamicTableEntity(partitionKey, rowKey);
dynamicTableEntity.Properties = flattenedProperties;
// Write the DynamicTableEntity to Azure Table Storage using the client SDK;
// 'table' here is assumed to be an existing CloudTable reference
table.Execute(TableOperation.Insert(dynamicTableEntity));
//Read the entity back from AzureTableStorage as DynamicTableEntity using the same PK and RK
DynamicTableEntity entity = [Read from Azure using the PK and RK];
//Convert the DynamicTableEntity back to original complex object.
Order order = EntityPropertyConverter.ConvertBack<Order>(entity.Properties);

Prepopulated stored data in iOS4

I work with iPhone iOS 4.3.
In my project I need a read-only, prepopulated table of data (say a table with 20 rows and 20 fields).
This data has to be fetched by key on the row.
What is the better approach: Core Data, archives, SQLite, or something else? And how can I prepare and store this table?
Thank you.
I would use Core Data for that. Drawback: you have to write a program (desktop or iOS) to populate the persistent store.
To see how to use a pre-populated store, have a look at Apple's Recipes sample code.
The simplest approach would be to use an NSArray of NSDictionary objects and then save the array to disk as a plist. Include the plist in your build and then open it read-only from the app bundle at runtime.
Each "row" would be the element index of the array which would return a dictionary object wherein each "column" would be a key-value pair.
I've done this two different ways:
1. Saved all my data as dictionaries in a plist, then deserialized everything and loaded it into the app during startup.
2. Created a program during development that populates the Core Data db, saved that db into the app bundle, and then copied the db during app startup into the Documents folder for use as the persistent store (see the sketch below).
Both options are relatively easy, and if your initial data requirements get very large, the second has also proven to be the most performant for me.
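A minimal sketch of the copy-on-first-launch step from option 2, assuming a pre-built store named "Seed.sqlite" was added to the app bundle at build time (the file name is a placeholder):

import Foundation

// Copy the read-only bundled store into a writable location before opening the Core Data stack.
func installSeedStoreIfNeeded() -> URL? {
    let fm = FileManager.default
    guard let documentsURL = try? fm.url(for: .documentDirectory, in: .userDomainMask,
                                         appropriateFor: nil, create: true) else { return nil }
    let storeURL = documentsURL.appendingPathComponent("Seed.sqlite")
    if !fm.fileExists(atPath: storeURL.path),
       let seedURL = Bundle.main.url(forResource: "Seed", withExtension: "sqlite") {
        try? fm.copyItem(at: seedURL, to: storeURL)
    }
    return storeURL   // point the persistent store coordinator at this URL
}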

Multithreaded UnitOfWork in NHibernate

We are working on a C# Windows service using NHibernate which is supposed to process a batch of records.
The service has to process about 6000-odd records, and it's taking about 3 hours at present. There are a lot of DB hits incurred, and while we are trying to minimize these, we are also exploring multithreading options to improve performance.
We are using the UnitOfWork pattern to access the NHibernate session.
This is roughly how the service looks:
public class BatchService
{
    public void DoWork()
    {
        StartUnitOfWork();
        foreach (var record in recordsToBeProcessed)
        {
            Process(record);
            // Perform lots of db operations
        }
        StopUnitOfWork();
    }
}
We were thinking of using the Task Parallel Library to try to process these records in batches (using the Parallel.ForEach() method).
From what I have read about NHibernate so far, we should provide each thread with a separate NHibernate session.
My question is how we supply this, considering the UnitOfWork pattern only allows one session to be available.
Should I be looking at wrapping a UnitOfWork around the processing of a single record?
Any help much appreciated.
Thanks
The best way is to start a new unit of work for each thread and use a thread-static contextual session (NHibernate.Context.ThreadStaticSessionContext). You must be aware of detached entities.
The easiest way is to wrap the processing of each record in its own unit of work and then run each UOW on its own thread. You need to make sure that each UOW and session is started, used, and completed on a single thread.
To gain performance you could split the batch of records into smaller batches, wrap the processing of these smaller batches in UOWs, and execute them on separate threads.
Depending on your workload, using a second-level cache (memcached/membase) might dramatically improve performance (e.g. if you need to read some records from the DB while processing each record).

CoreData design pattern: persisting a single object of many, or: how many NSManagedObjectContexts should I have?

I'm converting an app from SQLitePersistentObjects to CoreData.
In the app, I have a class that I generate many* instances of from an XML file retrieved from my server. The UI can trigger actions that will require me to save some* of those objects until the next invocation of the app.
Other than having a single NSManagedObjectContext for each of these objects (shared only with their subservient objects, which can include blobs), I can't see how I can have fine-grained control (i.e. at the object level) over which objects are persisted. If I try to have a single context for all newly created objects, I get an exception when I try to move one of my objects to a new context so I can persist it on its own. I'm guessing this is because the objects it owns are left in the 'old' context.
The other option I see is to have a single context, persist all my objects, and then delete the ones I don't need later; this feels like it's going to hit the database too much, but maybe Core Data does magic.
So:
1. Am I missing something basic about the way my Core Data app should be architected?
2. Is having a context per object a good design pattern?
3. Is there a better way to move objects between contexts, to avoid 2?
* where "many" means "tens, maybe hundreds, not thousands" and "some" is at least one order of magnitude less than "many"
Also cross posted to the Apple forums.
Core Data is really not an object persistence framework. It is an object graph management framework that just happens to be able to persist that graph to disk (see this previous SO answer for more info). So trying to use Core Data to persist just some of the objects in an object graph is going to be working against the grain. Core Data would much rather manage the entire graph of all objects that you're going to create. So, the options are not perfect, but I see several (including some you mentioned):
You could create all the objects in the Core Data context, then delete the ones you don't want to save. Until you save the context, everything is in-memory so there won't be any "going back to the database" as you suggest. Even after saving to disk, Core Data is very good at caching instances in the contexts' row cache and there is surprisingly little overhead to just letting it do its thing and not worrying about what's on disk and what's in memory.
If you can create all the objects first, then do all the processing in memory before deciding which objects to save, you can create a single NSManagedObjectContext whose persistent store coordinator has only an in-memory persistent store. When you decide which objects to save, you can then add a persistent (XML/binary/SQLite) store to the persistent store coordinator, assign the objects you want to save to that store (using the context's -assignObject:toPersistentStore: method), and then save the context (a sketch of this follows the list below).
You could create all the objects outside of Core Data, then copy the objects to-be-saved into a Core Data context.
You can create all the objects in a single in-memory context and write your own methods to copy those objects' properties and relationships to a new context to save just the instances you want. Unless the entities in your model have many relationships, this isn't that hard (see this page for tips on migrating objects from one store to another using a multi-pass approach; it describes the technique in the context of versioning managed object models and is no longer needed in 10.5 for that purpose, but the technique would apply to your use case as well).
Personally, I would go with option 1 -- let Core Data do its thing, including managing deletions from your object graph.
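For reference, a minimal sketch of option 2 in Swift (the original answer predates Swift, but the API calls are the same); model, storeURL, and the build closure are placeholders for whatever creates your object graph:

import CoreData

// Sketch only: start with an in-memory store, build everything, then add an on-disk
// store and assign just the objects worth keeping to it before saving.
func persistSelectedObjects(model: NSManagedObjectModel,
                            storeURL: URL,
                            build: (NSManagedObjectContext) -> [NSManagedObject]) throws {
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)

    // Only an in-memory store to begin with, so nothing touches disk yet.
    _ = try coordinator.addPersistentStore(ofType: NSInMemoryStoreType,
                                           configurationName: nil, at: nil, options: nil)

    let context = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    context.persistentStoreCoordinator = coordinator

    // Create all objects in memory; `build` returns the subset to keep.
    let objectsToKeep = build(context)

    // Add the persistent store and assign only the chosen objects to it.
    let sqliteStore = try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                                         configurationName: nil,
                                                         at: storeURL, options: nil)
    for object in objectsToKeep {
        context.assign(object, to: sqliteStore)
    }
    try context.save()
}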
