Core Data Migration Multiple Passes? - core-data

I am implementing a Core Data model version change for one of my iPad applications. Apparently some users of my app have databases larger than 1 GB. As a result, a "lightweight" migration blows up the memory. Therefore, I was trying to do a custom migration in multiple passes (as suggested by Apple). However, I am not sure how to divide one mapping model into several smaller mapping models (ideally one per entity), since in the generated mapping model the entity mappings are all related.
I won't be able to post an image because I am new to Stack Overflow.
Inside the mapping model I added two more mappings. For one DataMedia, I need to create two ASData objects to store the media binary data in a separate table. The large data is initially stored in the "DataMedia" table (in the worst case that table is almost 800 MB).
So here are my questions:
1. What is the best way to do this migration without blowing up the memory?
2. Is a multiple-pass migration a solution? If so, how do I divide entity mappings that have relationships to each other into separate mapping models? Does that mean I need to implement the relationship mappings manually?
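For reference, a multi-pass heavy migration generally means running NSMigrationManager once per mapping model against the same source and destination stores. A minimal sketch, assuming you have already split the mapping model by hand (the mapping-model URLs, store URLs, and models below are placeholders for your own files):

```swift
import CoreData

/// Run one migration pass per mapping model so only a slice of the data is
/// processed at a time. All URLs and models are placeholders.
func migrateInPasses(sourceModel: NSManagedObjectModel,
                     destinationModel: NSManagedObjectModel,
                     mappingModelURLs: [URL],
                     sourceStoreURL: URL,
                     destinationStoreURL: URL) throws {
    for url in mappingModelURLs {
        guard let mapping = NSMappingModel(contentsOf: url) else { continue }
        // A fresh manager per pass keeps each migration's working set small.
        let manager = NSMigrationManager(sourceModel: sourceModel,
                                         destinationModel: destinationModel)
        try manager.migrateStore(from: sourceStoreURL,
                                 sourceType: NSSQLiteStoreType,
                                 options: nil,
                                 with: mapping,
                                 toDestinationURL: destinationStoreURL,
                                 destinationType: NSSQLiteStoreType,
                                 destinationOptions: nil)
    }
}
```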

Related

GAE datastore data model recommendation for nested "same kind" relations

I have followed the Bookshelf App tutorial (in Node.js) by Google, and instead of a book catalogue I would like to model a production part catalogue.
A part consists of "sub"-parts and tasks.
Every "sub"-part can again have "sub"-parts and tasks (manufacturing steps).
Current implementation: at the moment I have only two kinds, Parts and Tasks.
The relation between parts is managed via a property storing the unique key (parentId) of the parent part in its child part. A bigger headache I have at the moment (for example) is that a price change of a highly nested sub-part would recursively need to update all parent parts...
Question: What would be the recommended datastore design for such an application?
It should solve, or be more efficient at, the following:
If I change a "sub-sub-sub"-part's price, this needs to change the price of all parent parts according to the chosen calculation methodology.
It should not be limited in the depth of sub-parts (I did read that the limit on Datastore "nested entity values" is 20, but I probably did not understand that correctly).
It should not be limited to 1 write per second per "entity group" (a part and all its sub-parts). I've read about this limit, but I am not sure whether it also applies to Transactions (which I think you can do on entity groups).
One potential solution is to avoid storing aggregate prices in Datastore entirely. Instead, the "price" on each part or task should include only the cost of that item itself, not of its sub-parts.
Calculate the total price on the fly when needed, adding up the entire tree of parts/sub-parts/tasks. Store the result in memcache if you want to speed up the calculation (but make sure to delete the memcache key when updating prices).
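A minimal sketch of that aggregate-on-read idea, written here in Swift for illustration; Part, childrenOf, and PriceCache are hypothetical stand-ins for the Datastore entities and memcache, not real GAE APIs:

```swift
import Foundation

// Illustrative stand-ins for the Datastore entity and memcache.
struct Part {
    let id: String
    let ownPrice: Double        // cost of this part/task only, no sub-parts
}

final class PriceCache {
    private var cache: [String: Double] = [:]
    func get(_ id: String) -> Double? { cache[id] }
    func set(_ id: String, _ value: Double) { cache[id] = value }
    func invalidate(_ id: String) { cache[id] = nil }   // call when a price changes
}

/// Total price = own price plus the totals of all (recursively nested) children.
func totalPrice(of part: Part,
                childrenOf: (Part) -> [Part],
                cache: PriceCache) -> Double {
    if let cached = cache.get(part.id) { return cached }
    let total = part.ownPrice + childrenOf(part)
        .map { totalPrice(of: $0, childrenOf: childrenOf, cache: cache) }
        .reduce(0, +)
    cache.set(part.id, total)
    return total
}
```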

Cassandra Data sync issues

I've been researching Cassandra for over 2 weeks to get a full grasp of it. I've read almost everything on the web about Cassandra and am still not clear on some concepts. They are the following:
As per the documentation, we model our column families according to our queries. Hence we need to know our queries beforehand, which is not really possible in a real-world scenario. We can have a certain set of queries beforehand, but they keep changing with time. Hence, if I designed a model based on my previous queries, then when a new requirement comes in, I need to redesign the model. And, as I read in one SO thread, it's very hard to fix a bad Cassandra data model later. For example, I had a user model having fields, say:
name, age, phone, imei, address, state, city, registration_type, created_at
Currently, I need to filter (let's say) only by state, so I'll make state the partition key. Let's name the model UserByState.
Now, after 2-3 months, a requirement comes in to filter by created_at, so I'll create a model UserByCreatedAt with created_at as the partition key.
Now there are 2 problems:
a) If I create a new model when the requirement comes in, then I need to migrate the data into the new model, i.e. the new model needs to contain the previously written data as well. Hence I need to migrate the data from UserByState to UserByCreatedAt, i.e. I need to write a script to copy the data from UserByState to UserByCreatedAt. Correct me if I'm wrong!
If another new filtering requirement comes in, I'll be creating new models, then migrating again, and so on.
b) If I create the models beforehand as per the queries, I need to keep the data in sync; i.e. in the above case of users, I created 2 models for 2 queries:
UserByState and UserByCreatedAt
So do I need to issue 2 different write queries, i.e.
UserByState.create(row = value,......)
UserByCreatedAt.create(row = value,......)
And if I have other models, such as UserByGender and so on, do I need to apply different write queries to the different models manually, or does it happen on its own? This is where the problem of keeping the data in sync arises.
There is no free lunch in distributed systems, and you've hit some of the key limitations on the head.
If you want extremely performant writes that scale horizontally, you end up having to make concessions in other parts of the database. Cassandra chose to sacrifice flexibility in query patterns to ensure extremely fast access for well-defined query patterns.
When most users reach a situation where they need two very different and frequent query patterns, they build a second table and update both at once. To get atomicity with the multi-table writes, logged batching can be used to make sure that either all of the data is written or none of it is. Logged batching increases the cost, so this is yet another tradeoff with performance. Beyond that, the normal consistency-level tradeoffs all still apply.
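A rough sketch of what the dual write could look like: the CQL below inserts the same user into both query tables inside a single logged batch. The snake_case table names are assumed variants of the question's UserByState/UserByCreatedAt models, and since there is no official Swift Cassandra driver, the commented `session.execute` call is only a placeholder for whichever client you actually use:

```swift
// A logged batch guarantees that either both inserts eventually apply or
// neither does (at extra coordination cost versus two independent writes).
let dualWriteCQL = """
BEGIN BATCH
  INSERT INTO user_by_state (state, created_at, name, age, phone, imei, address, city, registration_type)
  VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?);
  INSERT INTO user_by_created_at (created_at, state, name, age, phone, imei, address, city, registration_type)
  VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?);
APPLY BATCH;
"""

// Placeholder for a real driver call, e.g.:
// try session.execute(dualWriteCQL, parameters: userValues + userValues)
```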
For moving data from the old table to the new one, Hadoop/Spark are good options. These are batch-based systems, so they will not provide low latency, but they are great for one-offs like rebuilding a table with a new index, and for cron-job operations.

Core Data: make object-by-object copy of database

I would like to make a backup copy of my Core Data database without using either the file manager to copy the SQLite file or the persistent store coordinator's migratePersistentStore method (for reasons that are too long to explain here). What I want to do is open a new persistent store with the same MOMD as the original file, create a new managed object context, then iterate over all the objects in my database and insert them into the new context.
This will work for simple entities, but the problem is that my model has about 20 entities, many of them with one-to-many and many-to-many relationships. The slightly more complex solution would be to insert every object into the new MOC, then hold all the new managed objects in memory and use them to tie up all the relationships between the objects in a subsequent pass. But it seems like a really messy solution.
Is there a clean, elegant way to achieve this, that might work for any kind of data model, not just a customized solution for my own model, and without having to hold every object in memory at the same time?
Thanks.
Copying the persistent store is far and away the easiest way to do this – I'd suggest revisiting your reasons against it or explaining what they are.
Copying objects from one context to another – from one on-disk persistent store to another – doesn't necessarily hold them all in memory at the same time. Core Data can turn them into faults.
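If you do go the object-by-object route, a rough sketch of the two-pass approach is below, assuming the source and target contexts are backed by separate on-disk stores sharing the same model. It copies attributes first while recording an objectID mapping, then rewires relationships, refreshing source objects back into faults as it goes. It is illustrative rather than production-ready (ordered relationships, validation, and error handling are left out):

```swift
import CoreData

/// Two-pass, model-generic copy between contexts that share a managed object model.
func copyAllObjects(from source: NSManagedObjectContext,
                    to target: NSManagedObjectContext,
                    model: NSManagedObjectModel) throws {
    var idMap: [NSManagedObjectID: NSManagedObjectID] = [:]
    let concreteEntities = model.entities.filter { !$0.isAbstract }

    // Pass 1: copy attributes only, remembering which copy corresponds to which original.
    for entity in concreteEntities {
        let request = NSFetchRequest<NSManagedObject>(entityName: entity.name!)
        request.fetchBatchSize = 200
        request.includesSubentities = false
        for original in try source.fetch(request) {
            let copy = NSEntityDescription.insertNewObject(forEntityName: entity.name!, into: target)
            for attributeName in entity.attributesByName.keys {
                copy.setValue(original.value(forKey: attributeName), forKey: attributeName)
            }
            try target.obtainPermanentIDs(for: [copy])
            idMap[original.objectID] = copy.objectID
            source.refresh(original, mergeChanges: false)   // turn the original back into a fault
        }
        try target.save()
    }

    // Pass 2: rewire relationships, resolving destinations through the objectID map.
    for entity in concreteEntities {
        let request = NSFetchRequest<NSManagedObject>(entityName: entity.name!)
        request.fetchBatchSize = 200
        request.includesSubentities = false
        for original in try source.fetch(request) {
            let copy = target.object(with: idMap[original.objectID]!)
            for (name, relationship) in entity.relationshipsByName {
                if relationship.isToMany {
                    let originals = (original.value(forKey: name) as? Set<NSManagedObject>) ?? []
                    let mapped = Set(originals.map { target.object(with: idMap[$0.objectID]!) })
                    copy.setValue(mapped, forKey: name)
                } else if let destination = original.value(forKey: name) as? NSManagedObject {
                    copy.setValue(target.object(with: idMap[destination.objectID]!), forKey: name)
                }
            }
            source.refresh(original, mergeChanges: false)
        }
        try target.save()
    }
}
```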

Strategies for search across disparate data sources

I am building a tool that searches people based on a number of attributes. The values for these attributes are scattered across several systems.
As an example, dateOfBirth is stored in a SQL Server database as part of system ABC. That person's sales region assignment is stored in some horrible legacy database. Other attributes are stored in a system only accessible over an XML web service.
To make matters worse, the legacy database and the web service can be really slow.
What strategies and tips should I consider for implementing a search across all these systems?
Note: Although I posted an answer, I'm not confident it's a great answer. I don't intend to accept my own answer unless no one else gives better insight.
You could consider using an indexing mechanism to retrieve and locally index the data across all the systems, and then perform your searches against the index. Searches would be an awful lot faster and more reliable.
Of course, this just shifts the problem from one part of your system to another - now your indexing mechanism has to handle failures and heterogeneous systems, but that may be an easier problem to solve.
Another factor is how often the data changes. If you have to query data in real-time that goes stale very quickly, then indexing may not be practical.
If you can get away with a restrictive search, start by returning a list based on the search criteria corresponding to the fastest data source. Then join up those records with the other systems and remove records which don't match the search criteria.
If you have to implement OR logic, this approach is not going to work.
While not an actual answer, this might at least get you partway to a workable solution. We had a similar situation at a previous employer - lots of data sources, different ways of accessing those data sources, different access permissions, military/government/civilian sources, etc. We used Mule, which is built around the Enterprise Service Bus concept, to connect these data sources to our application. My details are a bit sketchy, as I wasn't the actual implementor, just an integrator, but what we did was define a channel in Mule. Then you write a simple integration piece to go between the channel and the data source, and between the application and the channel. The integration piece does the work of making the actual query and formatting the results, so we had a generic SQL integration piece for accessing databases, and for things like web services we had some base classes that implemented common functionality, so the actual customization of the integration pieces was a lot less work than it sounds. The application could then query the channel, which would handle accessing the various data sources, transform the results into normalized XML, and return them to the application.
This had a lot of advantages for our situation. We could include new data sources for existing queries by simply connecting them to the channel - the application didn't have to know or care what data sources were there, as it only looked at the data from the channel. Since data can be pushed or pulled from the channel, we could have a data source update the application when, for example, it was updated.
It took a while to get it configured and working, but once we got it going, we were pretty successful with it. In our demo setup, we ended up with 4 or 5 applications acting as both producers and consumers of data, and connecting to maybe 10 data sources.
Have you thought of moving the data into a separate structure?
For example, Lucene stores the data to be searched in a schema-less inverted index. You could have a separate program that retrieves data from all your different sources and puts it in a Lucene index. Your search could work against this index, and the search results could contain a unique identifier and the system it came from.
http://lucene.apache.org/java/docs/
(There are implementations in other languages as well)
Have you taken a look at YQL? It may not be the perfect solution, but it might give you a starting point to work from.
Well, for starters I'd parallelize the queries to the different systems. That way you minimize the overall query time.
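A minimal sketch of that fan-out in Swift, where PersonAttributeSource is an invented abstraction over each back end (the SQL Server database, the legacy database, the XML web service); any async client would slot in behind it:

```swift
import Foundation

// Illustrative abstraction over one back-end system; not a real API.
protocol PersonAttributeSource {
    func attributes(forPersonID id: String) async throws -> [String: String]
}

/// Query every source concurrently and merge the partial attribute maps, so
/// total latency is roughly that of the slowest source, not the sum of all.
func fetchPerson(id: String,
                 from sources: [PersonAttributeSource]) async -> [String: String] {
    await withTaskGroup(of: [String: String].self) { group in
        for source in sources {
            group.addTask {
                // A failed or slow source just contributes nothing here;
                // real code would add timeouts and error reporting.
                (try? await source.attributes(forPersonID: id)) ?? [:]
            }
        }
        var merged: [String: String] = [:]
        for await partial in group {
            merged.merge(partial) { current, _ in current }
        }
        return merged
    }
}
```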
You might also want to think about caching and aggregating the search attributes for subsequent queries in order to speed things up.
You have the option of creating an aggregation service or middleware that aggregates all the different systems so that you can provide a single interface for querying. If you do that, this is where I'd apply the caching and parallelization optimizations mentioned previously.
However, with all of that you will need to weigh the development time, deployment time, and long-term benefits of the effort against migrating the old legacy database to a faster, more modern one. You haven't said how tied into other systems those databases are, so that may not be a very viable option in the short term.
EDIT: in response to data going out of date: you can consider caching if you don't need the data to always match the database in real time. Also, if some data doesn't change very often (e.g. dates of birth), then you should cache it. If you employ caching, you could make your system configurable as to which tables/columns to include in or exclude from the cache, and you could give each table/column a customizable cache timeout with an overall default.
Use Pentaho/Kettle to copy all of the data fields that you can search on and display into a local MySQL database:
http://www.pentaho.com/products/data_integration/
Create a batch script to run nightly and update your local copy. Maybe even every hour. Then, write your query against your local MySQL database and display the results.

Concerns about Core Data

I'm getting ready to dive into my first Core Data adventure. While evaluating the framework, two questions came up that really got me thinking about whether to use Core Data at all for this project or to stick with SQLite.
My app will heavily rely upon importing data from an external source. I'm aware that one can import into Core Data but handling complex relationships seems complicated and tedious. Is there an easy way to accomplish complex imports?
The app has to be able to execute complex queries spanning multiple tables or having multiple conditions. Building these predicates and expressions simply scares me...
Is it worth taking the plunge and using Core Data, or should I stick with SQLite?
As I and others have said before, Core Data is really an object-graph management framework. It manages the relationships between model objects, including constraints on their cardinality, and manages cascading deletes etc. It also manages constraints on individual attributes. Core Data just happens to also be able to persist that object graph to disk. It can do this in a number of formats, including XML, binary, and via SQLite. Thus, Core Data is really orthogonal to SQLite. If your task is dealing with an embedded SQL-compatible database, go with SQLite. If your task is managing the model layer of an MVC app, go with Core Data. In specific answers to your questions:
There is no magic that can automatically import complex data into any model. That said, it is relatively easy in Core Data. Taking a multi-pass approach and using the SQLite backend can help with memory consumption by allowing you to keep only a subset of the data in memory at a time. If the data sets can be kept in memory, you can write a custom persistent store format that reads/writes directly to your legacy data format from within Core Data (see the Atomic Store Programming Guide).
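One common way to keep only a subset of the data in memory during an import is to process it in batches and save/reset the context between batches. A minimal sketch; the entity name is a placeholder and the record dictionaries are assumed to use keys matching the attribute names:

```swift
import CoreData

/// Import records in fixed-size batches, saving and resetting the context
/// after each batch so memory use stays roughly constant.
func importRecords(_ records: [[String: Any]],
                   into context: NSManagedObjectContext,
                   batchSize: Int = 500) throws {
    for batchStart in stride(from: 0, to: records.count, by: batchSize) {
        try autoreleasepool {
            for record in records[batchStart..<min(batchStart + batchSize, records.count)] {
                let object = NSEntityDescription.insertNewObject(forEntityName: "ImportedRecord",
                                                                 into: context)
                for (key, value) in record {
                    object.setValue(value, forKey: key)   // assumes keys match attribute names
                }
            }
            try context.save()   // flush this batch to the store
            context.reset()      // drop the in-memory objects before the next batch
        }
    }
}
```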
Building a complex NSPredicate declaratively is somewhat verbose but shouldn't scare you. The Predicate Programming Guide is a good place to start. You can, of course, also write predicates using a string format, much like a string-formatted SQL statement. It's worth noting that, as described above, the predicates in Core Data are on the objects and object graph, not on the SQL tables. If you really want to think at the level of tables, stick with SQLite and write your own wrapper.
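For instance, a multi-condition fetch that spans a relationship can be assembled from smaller predicates; the Person/Order entity and key names here are hypothetical:

```swift
import CoreData

// Hypothetical model: Person has `age`, `lastName`, and a to-many
// relationship `orders`, where each Order has a `total` attribute.
let adults = NSPredicate(format: "age >= %d", 18)
let bigSpenders = NSPredicate(format: "ANY orders.total > %@", NSNumber(value: 500))
let byName = NSPredicate(format: "lastName BEGINSWITH[cd] %@", "Sm")

// Combine the pieces; this plays the role of a multi-condition WHERE clause,
// but it is expressed against the object graph, not against SQL tables.
let request = NSFetchRequest<NSManagedObject>(entityName: "Person")
request.predicate = NSCompoundPredicate(andPredicateWithSubpredicates: [adults, bigSpenders, byName])
```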
I can't really speak to your first point.
However, regarding your second point, using Core Data means you don't have to really worry about complex queries since you can just pretend that all the relationships are properly established in memory already (Apple's implementation details aside). It doesn't matter how complex a join it might be in a database environment because you really aren't in a database environment. If you need to get the fourth child of the grandparent of your current object and then find that child's pet's name and breed, all you do is traverse up the object tree in code using a series of messages or properties. No worries about joins or anything. The only problem is it might be really slow depending on your objects' relationships, but I can't really speak accurately to that since I haven't actually implemented anything using Core Data (I've just read about it extensively on Apple's and others' websites).
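To illustrate that kind of traversal, with Person and Pet as purely hypothetical NSManagedObject subclasses (in a real project they would be generated from your model):

```swift
import CoreData

// Hypothetical generated NSManagedObject subclasses, for illustration only.
final class Person: NSManagedObject {
    @NSManaged var parent: Person?
    @NSManaged var children: NSOrderedSet?   // ordered to-many relationship
    @NSManaged var pet: Pet?
}
final class Pet: NSManagedObject {
    @NSManaged var name: String?
    @NSManaged var breed: String?
}

// "Fourth child of the grandparent, then that child's pet" is just a chain of
// property accesses; Core Data faults in related objects as you touch them.
func petDescription(for person: Person) -> String? {
    guard let grandparent = person.parent?.parent,
          let children = grandparent.children?.array as? [Person],
          children.count > 3,
          let pet = children[3].pet else { return nil }
    return "\(pet.name ?? "unnamed") (\(pet.breed ?? "unknown breed"))"
}
```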
If the data importer from an external source is written against the same Core Data model (for the target/destination side of the import), nothing will be conceptually different compared to using/updating the same data through the Core Data stack from your actual application.
If you create the data importer without using the Core Data stack, make sure you learn the DB schema that is generated/expected by the Core Data model well. There is nothing magic there - just make sure you follow how the cross-entity relationships are implemented and how entity hierarchies are stored.
I recently had to create a data importer from an Access database into a Core Data based SQLite store as a .NET app. Once my destination Core Data model was defined, I created a small app that populated the SQLite store with randomly generated entities (including all the expected relationships). Then I reverse engineered how Core Data actually created the SQLite store for the model and how it handles the relationships by learning from the generated and persisted data. Then I implemented the .NET based importer/data-transformer according to my observations. In the end, I got a perfectly Core Data friendly data store that could be opened and modified from the application that was using the Core Data stack on Mac OS X.

Resources