Core Data in-memory saving

I am building an app which uses a TabBarController and has multiple views showing the SAME data, but in different ways. One of the views is a TableView and the other is a Map view.
The data comes from a server and I would like to have a way to store this data in which it is accessible from multiple view controllers (have a "single source of truth"). I believe that Core Data is a good choice, especially because I find the NSFetchedResultsController class rather convenient to work with when dealing with table views.
The data only needs to be around while the app is being used, so I am thinking about using Core Data without actually saving anything to disk. I saw that there is an in-memory store type, which I believe is what I need. However, I found that just by inserting a new entity into my context (not yet calling context.save()), the NSFetchedResultsController can already detect the changes and update my UI.
Question 1:
Is it really necessary to call context.save() when using the In-memory store type?
I believe it might be necessary in the case of multiple contexts.
Question 2:
If it is not necessary to call context.save(), does it even matter what persistent store type I use?
Any help is appreciated!

Question 1: Is it really necessary to call context.save() when using the In-memory store type?
I believe it might be necessary in the case of multiple contexts.
That's correct. If you use the common pattern of having one context for the UI and a different one to handle incoming network data, you'll need to save changes for updates from one context to be available in the other.
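For illustration, here is a minimal sketch of that pattern, assuming an NSPersistentContainer whose model is named "Model" and a hypothetical "Place" entity: the background context's save() is what makes the insert visible to the view context (and to any NSFetchedResultsController attached to it).

```swift
import CoreData

// A sketch: "Model" and the "Place" entity are hypothetical names.
let container = NSPersistentContainer(name: "Model")
let description = NSPersistentStoreDescription()
description.type = NSInMemoryStoreType   // see Question 2 below
container.persistentStoreDescriptions = [description]
container.loadPersistentStores { _, error in
    if let error = error { fatalError("Failed to load store: \(error)") }
}

// Let the UI context pick up changes saved by other contexts.
container.viewContext.automaticallyMergesChangesFromParent = true

container.performBackgroundTask { context in
    let place = NSEntityDescription.insertNewObject(forEntityName: "Place",
                                                    into: context)
    place.setValue("Berlin", forKey: "name")
    try? context.save()   // without this save, the viewContext never sees "Berlin"
}
```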
Question 2: If it is not necessary to call context.save(), does it even matter what persistent store type I use?
If you use an in-memory store, your persistent store type is NSInMemoryStoreType. It matters in that choosing the right store type is how you get it to be an in-memory store.
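If you build the stack by hand rather than with NSPersistentContainer, a sketch of the classic coordinator-based setup looks like this; "Model" is a hypothetical model file name, and the store type string is the only thing that differs from an on-disk stack:

```swift
import CoreData

// "Model" is a placeholder for your .xcdatamodeld name.
let modelURL = Bundle.main.url(forResource: "Model", withExtension: "momd")!
let model = NSManagedObjectModel(contentsOf: modelURL)!
let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)

do {
    try coordinator.addPersistentStore(ofType: NSInMemoryStoreType,
                                       configurationName: nil,
                                       at: nil,        // no file URL needed
                                       options: nil)
} catch {
    fatalError("Could not add in-memory store: \(error)")
}

let context = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
context.persistentStoreCoordinator = coordinator
```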
Keep in mind that using an in-memory store means that users won't be able to use the app offline in any way. Whether that's important depends on your app, but it can be useful to let people view older data when they don't have a network connection.

Related

Core Data: make object-by-object copy of database

I would like to make a backup copy of my Core Data database, without using either the File Manager to make a copy of the sqlite file, or using the Persistent Store Coordinator's migratePersistentStore method (for reasons that are too long to explain here). What I want to do is to open a new persistent store with the same MOMD as the original file, create a new Managed Object Context, then iterate over all the objects in my database and insert them into the new context.
This will work for simple entities, but the problem is that my model has about 20 entities, many of them with one-to-many and many-to-many relationships. The slightly more complex solution would be to insert every object into the new MOC, then hold all the new Managed Objects in memory and use them to tie up all the relationships between the objects in a subsequent pass. But it seems like a really messy solution.
Is there a clean, elegant way to achieve this, that might work for any kind of data model, not just a customized solution for my own model, and without having to hold every object in memory at the same time?
Thanks.
Copying the persistent store is far and away the easiest way to do this – I'd suggest revisiting your reasons against it or explaining what they are.
Copying objects from one context to another – from one on-disk persistent store to another – doesn't necessarily hold them all in memory at the same time. Core Data can turn them into faults.
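A rough sketch of what such a first pass could look like, assuming source and target contexts attached to the old and new stores (names and batch size are illustrative; relationships would still need a later pass):

```swift
import CoreData

// Copy the attributes of one entity, re-faulting source objects as we go
// so that memory use stays bounded.
func copyAttributes(entityName: String,
                    from sourceContext: NSManagedObjectContext,
                    to targetContext: NSManagedObjectContext) throws {
    let request = NSFetchRequest<NSManagedObject>(entityName: entityName)
    request.fetchBatchSize = 100   // only a window of rows is realized at once

    for object in try sourceContext.fetch(request) {
        let copy = NSEntityDescription.insertNewObject(forEntityName: entityName,
                                                       into: targetContext)
        for (name, _) in object.entity.attributesByName {
            copy.setValue(object.value(forKey: name), forKey: name)
        }
        // Turn the source object back into a fault so its row can be released.
        sourceContext.refresh(object, mergeChanges: false)
    }
    try targetContext.save()
}
```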

Should I use NSFileWrappers in UIManagedDocument?

I am trying to store a plist and several binary files (let's say images) as part of a UIManagedDocument. The names of the binary files are an attribute in Core Data and I don't need to enumerate them, just access the right one when showing the related entity.
The file structure that I want to have is:
- <File yyyyMMdd-HHmmss>.extdoc
  - StoreContent
    - persistentStore
  - AdditionalContent
    - ListStatus.plist (used to store per-document defaults)
    - Images
      - uuid1.png
      - uuid2.png
      - ...
      - uuidn.png
So far, I have successfully followed the instructions in How do I save additional content into my UIManagedDocument file packages?, but when I try to add the binary files there are some things that I don't know how to do.
Should I treat the URL /the/path/File yyyyMMdd-HHmmss.extdoc/AdditionalContent (the default one provided with readAdditionalContentFromURL:error:) as an NSFileWrapper? Are there any advantages/disadvantages vs just using the URLs? I find it more complicated to use the file wrapper, since the plist has to be read using the file wrapper accessors and NSCoder (I guess), and for the files I have to store the file wrapper for the Images directory and then obtain the corresponding node with objectForKey (I assume). But Apple's Document-Based Apps Programming Guide for iOS, discussing custom formats instead of NSData or NSFileWrapper, states: "Keep in mind that your code will have to duplicate what UIDocument does for you, and so you must deal with greater complexity and a greater possibility of error." Am I misunderstanding this?
Per-document defaults are declared as properties: the setter modifies the NSDictionary that maps the plist and marks the document as updated, and the getter accesses the dictionary with the proper key. How do I expose the ability to read/write the binary files? Should I add methods to my subclass of UIManagedDocument, such as - (void)writeImage:(NSString *)uuid; and - (UIImage *)readImage:(NSString *)uuid;? And should I keep this data in memory until the document is saved? How?
Assuming that NSFileWrapper is the way to go, if I plan to use this document with iCloud, should I use file coordinators with the file wrapper? If so, how?
Any source code for each question will be greatly appreciated. Thank you.
P.S.: I know that I could save some binary data inside of Core Data, but I don't feel comfortable with that solution. Among other reasons, I'd rather store the PNG data for image files than a serialized version of UIImage, which won't be compatible with NSImage if I want to create a desktop app.
I'd like to say that, in general I rather like UIManagedDocument. It has a few advantages over raw Core Data. For example, it sets up the entire core data stack for you automatically. It also sets up nested managed object contexts for you, so you get free background saving. None of that is particularly earth-shattering, but it's a lot of functionality from a tiny amount of code.
I haven't played around with saving additional information...but here are my thoughts.
First, you shouldn't need to treat the new URL as a file wrapper. You should just be able to do regular file operations on the provided URL. Just make sure you have everything implemented properly in additionalContentForURL:error:, writeAdditionalContent:toURL:originalContentsURL:error: and readAdditionalContentFromURL:error:. The read and write operations need to be symmetric. And you should probably snapshot your data in additionalContentForURL:error: so that everything will be saved in a known, good state (since the save operations are asynchronous).
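In Swift, an unverified sketch of those symmetric overrides might look like this (the listStatus dictionary and plist name are hypothetical, and directory creation/error handling are glossed over):

```swift
import UIKit

class MyDocument: UIManagedDocument {
    var listStatus: [String: Any] = [:]   // hypothetical per-document defaults

    // Snapshot the extra state; this runs before the asynchronous write,
    // so the saved data is in a known, good state.
    override func additionalContent(for absoluteURL: URL) throws -> Any {
        return listStatus
    }

    override func writeAdditionalContent(_ content: Any,
                                         to absoluteURL: URL,
                                         originalContentsURL: URL?) throws {
        let plistURL = absoluteURL.appendingPathComponent("ListStatus.plist")
        let data = try PropertyListSerialization.data(fromPropertyList: content,
                                                      format: .xml,
                                                      options: 0)
        try data.write(to: plistURL)
    }

    override func readAdditionalContent(from absoluteURL: URL) throws {
        let plistURL = absoluteURL.appendingPathComponent("ListStatus.plist")
        let data = try Data(contentsOf: plistURL)
        listStatus = try PropertyListSerialization.propertyList(
            from: data, options: [], format: nil) as? [String: Any] ?? [:]
    }
}
```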
As an alternative, have you considered using the "Allows External Storage" flag on the binary attribute in your data model instead of saving the files manually? Depending on the size of the binary data, Core Data will then automatically store the values outside the SQLite file. I looked at the release notes, and I didn't see anything saying you couldn't use this feature with iCloud. That might be the easiest fix.
Attacking a side point for the moment (as I have not had ANY good experience with UIManagedDocument).
You can save the binary data inside of Core Data for an iOS 5.0+ application using the external file reference. Then you can save the PNG of the image to Core Data directly and not need to worry about a UIManagedDocument or about bloating the sqlite file.
There is nothing stopping you from storing the PNG instead of a UIImage.
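For example, a sketch assuming a hypothetical "Image" entity whose Binary Data attribute "pngData" has "Allows External Storage" checked in the model editor:

```swift
import CoreData
import UIKit

// Core Data decides per value whether the bytes live in SQLite or in an
// external file next to the store; the calling code is the same either way.
func storeImage(_ image: UIImage, in context: NSManagedObjectContext) {
    let record = NSEntityDescription.insertNewObject(forEntityName: "Image",
                                                     into: context)
    record.setValue(image.pngData(), forKey: "pngData") // raw PNG, not an archived UIImage
    try? context.save()
}
```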
One other thought. You may need to use an NSFileCoordinator for the read and write operations. Technically, any read or write operations in the iCloud container need to use a file coordinator (to coordinate with the iCloud sync service; this prevents accidentally corrupting a file by reading it while another process is writing to it).
I know that UIDocument wraps most of its input and output methods automatically. I'd guess that these methods are similarly wrapped (since they give you a URL to use); however, the docs aren't very clear.
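A coordinated read might look roughly like this; imageURL is assumed to point at a file inside the document package in the iCloud container:

```swift
import Foundation

func coordinatedRead(of imageURL: URL) -> Data? {
    var data: Data?
    var coordinatorError: NSError?
    let coordinator = NSFileCoordinator(filePresenter: nil)
    // The accessor block runs once it is safe to read the file.
    coordinator.coordinate(readingItemAt: imageURL, options: [],
                           error: &coordinatorError) { url in
        data = try? Data(contentsOf: url)
    }
    return data
}
```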

Drupal: Avoid database when dealing with node type info?

I'm writing a Drupal module that deals with creating new nodes from CSV files. The way I've been doing it currently, the user provides a node type, and my module goes to the database to find the fields for that node.
After the user matches the node fields to the CSV fields, I want to validate the data. This requires finding out the types of the node fields. I'm not entirely sure how to do that. (Maybe look at the content_node_field table?)
Then, I have to create the nodes. Currently, the module creates a new stdClass object, populates it with the necessary data, and saves it.
But what if I could abstract away from the database entirely and avoid dealing with it? What if I asked the user to point to a node of this type that already exists? I could node_load() this node and use it to determine the node fields. When it comes time to save the nodes, I could use the "seed" node to figure out what the structure of the new nodes needs to be.
One downside: this requires at least one node of this type to exist before the module can function.
Also, would this be slower than accessing the db directly?
I fear that over time, db names could change, and content types could be defined across multiple tables. By working only from a pre-existing node, I could get around many of these issues. Right?
Surely node_load will be hitting the database anyway? The node fields are stored in the database so if you need to get them, at some point you have to talk to the database. Given that some page loads on Drupal invoke hundreds (or even thousands!) of database queries I really wouldn't worry about one or two!
Table names are unlikely to change and the schema should stay fixed between point versions of Drupal at least. It would be better practice to use the API to get the data you want if it is possible though, and this would give better protection against change. I don't know if that's possible.

How to implement CQS with in-memory changes?

Having watched this video by Greg Young on DDD
http://www.infoq.com/interviews/greg-young-ddd
I was wondering how you could implement Command-Query Separation (CQS) with DDD when you have in-memory changes?
With CQS you have two repositories, one for commands, one for queries.
As well as two object groups, command objects and query objects.
Command objects only have methods, and no properties that could expose the shape of the objects, and aren't to be used to display data on the screen.
Query objects on the other hand are used to display data to the screen.
In the video the commands always go to the database, and so you can use the query repository to fetch the updated data and redisplay on the screen.
Could you use CQS with something like an edit screen in ASP.NET, where changes are made in memory and the screen needs to be updated several times with the changes before the changes are persisted to the database?
For example
I fetch a query object from the query repository and display it on the screen
I click edit
I refetch a query object from the query object repository and display it on the form in edit mode
I change a value on the form, which autoposts back and fetches the command object and issues the relevant command
WHAT TO DO: I now need to display the updated object, as the command made changes to the calculated fields. As the command object has not been saved to the database, I can't use the query repository. And with CQS I'm not meant to expose the shape of the command object to display on the screen. How would you get a query object back with the updated changes to display on the screen?
A couple of possible solutions I can think of are to have a session repository, or a way of getting a query object from the command object.
Or does CQS not apply to this type of scenario?
It seems to me that in the video changes get persisted straight away to the database, and I haven't found an example of DDD with CQS that addresses the issue of batching changes to a domain object and updating the view of the modified domain object before finally issuing a command to save the domain object.
So what it sounds like you want here is a more granular command.
E.g. the user interacts with the web page (let's say doing a checkout with a shopping cart).
The multiple pages gathering information are building up a command. The command does not get sent until the user actually checks out, at which point all the information is sent up in a single command to the domain; let's call it a "CheckOut" command.
Presentation models are quite helpful at abstracting this type of interaction.
Hope this helps.
Greg
If you really want to use CQS for this, I would say that both the Query repo and the Write repo have a reference to the same backing store. Usually this reference is via an external database – but in your case it could be a List<T> or similar.
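A toy sketch of that idea (all names hypothetical): both repositories close over the same in-memory store, so a query issued right after a command sees its effects without any database round trip.

```swift
// Shared in-memory backing store, standing in for the database.
final class OrderStore {
    var orders: [String: Int] = [:]   // orderId -> total
}

final class OrderCommandRepository {
    private let store: OrderStore
    init(store: OrderStore) { self.store = store }
    func applyDiscount(orderId: String, amount: Int) {
        store.orders[orderId, default: 0] -= amount
    }
}

final class OrderQueryRepository {
    private let store: OrderStore
    init(store: OrderStore) { self.store = store }
    func total(orderId: String) -> Int? { store.orders[orderId] }
}
```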
Also for the rest of your concerns ...
These are more so concerns with eventual consistency as opposed to CQRS. You do not need to be eventually consistent with CQRS: you can make the processing of the command also write to the reporting store (or use the same physical store for both, as mentioned) in a consistent fashion. I actually recommend people do this as their base architecture and later come through and introduce eventual consistency where needed, as there are costs associated with it.
In memory, you would usually use the Observer design pattern.
Actually, you always want to use this pattern but most databases don't offer an efficient way to call a method in your app when something in the DB changes.
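A minimal sketch of that pattern (names hypothetical): the store holds its observers weakly and pings them after every mutating command, and each view then re-reads its query model.

```swift
protocol StoreObserver: AnyObject {
    func storeDidChange()
}

final class ObservableStore {
    // Weakly held observers, so views can deallocate freely.
    private var observers: [() -> StoreObserver?] = []

    func addObserver(_ observer: StoreObserver) {
        observers.append { [weak observer] in observer }
    }

    // Call this at the end of every command that mutates the store.
    func notifyObservers() {
        for observer in observers.compactMap({ $0() }) {
            observer.storeDidChange()
        }
    }
}
```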
The Unit of Work design pattern from Patterns of Enterprise Application Architecture matches CQS very well - it is basically a big Command that persist stuff in the database.
JdonFramework is a CQRS DDD Java framework; it supplies a domain events + asynchronous pattern. More details: https://jdon.dev.java.net/

Concerns about Core Data

I'm getting ready to dive into my first Core Data adventure. While evaluating the framework, two questions came up that really got me thinking about whether to use Core Data at all for this project or to stick with SQLite.
My app will heavily rely upon importing data from an external source. I'm aware that one can import into Core Data but handling complex relationships seems complicated and tedious. Is there an easy way to accomplish complex imports?
The app has to be able to execute complex queries spanning multiple tables or having multiple conditions. Building these predicates and expressions simply scares me...
Is it worth taking the plunge and using Core Data, or should I stick with SQLite?
As I and others have said before, Core Data is really an object-graph management framework. It manages the relationships between model objects, including constraints on their cardinality, and manages cascading deletes etc. It also manages constraints on individual attributes. Core Data just happens to also be able to persist that object graph to disk. It can do this in a number of formats, including XML, binary, and via SQLite. Thus, Core Data is really orthogonal to SQLite. If your task is dealing with an embedded SQL-compatible database, go with SQLite. If your task is managing the model layer of an MVC app, go with Core Data. In specific answers to your questions:
There is no magic that can automatically import complex data into any model. That said, it is relatively easy in Core Data. Taking a multi-pass approach and using the SQLite backend can help with memory consumption by allowing you to keep only a subset of the data in memory at a time. If the data sets can be kept in memory, you can write a custom persistent store format that reads/writes directly to your legacy data format from within Core Data (see the Atomic Store Programming Guide).
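As an illustration of the memory-bounded, multi-pass idea (the "Person" entity, keys, and batch size are all hypothetical):

```swift
import CoreData

// First pass: insert plain attribute data, flushing to the store in
// batches so the context never holds the whole data set in memory.
func importRows(_ rows: [[String: Any]], into context: NSManagedObjectContext) throws {
    for (index, row) in rows.enumerated() {
        let person = NSEntityDescription.insertNewObject(forEntityName: "Person",
                                                         into: context)
        person.setValue(row["name"], forKey: "name")

        if (index + 1) % 500 == 0 {
            try context.save()    // flush the batch to the store
            context.reset()       // drop the realized objects from memory
        }
    }
    try context.save()            // relationships would be wired in a later pass
}
```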
Building a complex NSPredicate declaratively is somewhat verbose but shouldn't scare you. The Predicate Programming Guide is a good place to start. You can, of course, also write predicates using a string format, much like a string-formatted SQL statement. It's worth noting that, as described above, the predicates in Core Data are on the objects and object graph, not on the SQL tables. If you really want to think at the level of tables, stick with SQLite and write your own wrapper.
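For example, both styles for a hypothetical "Order" entity with a to-one customer relationship and a to-many items relationship:

```swift
import CoreData

let request = NSFetchRequest<NSManagedObject>(entityName: "Order")

// String format, traversing a to-one and aggregating over a to-many:
request.predicate = NSPredicate(
    format: "customer.city == %@ AND items.@count > %d", "Berlin", 3)

// The same predicate, built declaratively:
let city = NSPredicate(format: "customer.city == %@", "Berlin")
let size = NSPredicate(format: "items.@count > %d", 3)
request.predicate = NSCompoundPredicate(andPredicateWithSubpredicates: [city, size])
```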
I can't really speak to your first point.
However, regarding your second point, using Core Data means you don't have to really worry about complex queries since you can just pretend that all the relationships are properly established in memory already (Apple's implementation details aside). It doesn't matter how complex a join it might be in a database environment because you really aren't in a database environment. If you need to get the fourth child of the grandparent of your current object and then find that child's pet's name and breed, all you do is traverse up the object tree in code using a series of messages or properties. No worries about joins or anything. The only problem is it might be really slow depending on your objects' relationships, but I can't really speak accurately to that since I haven't actually implemented anything using Core Data (I've just read about it extensively on Apple's and others' websites).
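To make that concrete, a toy sketch (entity and key names are hypothetical): what would be a join in SQL is just a key path here.

```swift
import CoreData

func petDescription(of child: NSManagedObject) -> String {
    // SQL would join child -> pet; Core Data just follows the relationship.
    let petName  = child.value(forKeyPath: "pet.name")  as? String ?? "unknown"
    let petBreed = child.value(forKeyPath: "pet.breed") as? String ?? "unknown"
    return "\(petName) (\(petBreed))"
}
```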
If the data importer from an external source is written based on the same Core Data model (for the targeted/destination side of the import), nothing will be conceptually different compared to using/updating the same data through the Core Data stack from your actual application.
If you create the data importer without using the Core Data stack, make sure you thoroughly learn the DB schema that would be generated/expected by the Core Data based model. There is nothing magic there – just make sure you follow how the cross-entity relationships are implemented and how entity hierarchies are stored.
I recently had to create a data importer from an Access database into a Core Data based SQLite store, as a .NET app. Once my destination Core Data model was defined, I created a small app that populated the SQLite store with randomly generated entities (including all the expected relationships). Then I reverse engineered how Core Data actually created the SQLite store for the model and how it handles the relationships by learning from the generated and persisted data. Then I implemented the .NET based importer/data-transformer according to my observations. At the end, I got a perfectly Core Data friendly data store that could be opened and modified from the application that was using the Core Data stack on Mac OS X.
