I am developing an application that is rolled out in stages. Each sprint introduces database changes, so Core Data migration has been implemented. So far we have had 3 staged releases. Whenever the upgrade is done one version at a time, the application runs fine. But whenever I try to upgrade from version 1 to version 3, an 'unable to add persistent store' error occurs. Can someone help me with the issue?
Core Data migration does not have a concept of versions in the way you might expect. As far as Core Data is concerned there are only two versions: the version of the NSPersistentStore and the version you are currently using.
To use lightweight migration, you must test every version of your store and make sure that it will migrate to the current version directly. If it does not, then you cannot use lightweight migration for that specific use case, and you either need to develop a mapping model or come up with another solution.
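A minimal sketch of one way to run that test, assuming you already have the store URL (storeURL) and the current NSManagedObjectModel (currentModel) at hand: check the store's metadata against the current model to see whether the store can be opened as-is, and therefore whether a migration would be attempted at all.

NSError *error = nil;
NSDictionary *metadata =
    [NSPersistentStoreCoordinator metadataForPersistentStoreOfType:NSSQLiteStoreType
                                                               URL:storeURL
                                                             error:&error];
// If the current model cannot open this store as-is, Core Data will try to
// migrate it; for lightweight migration, the mapping inference must succeed
// for every shipped store version, including v1 -> v3 directly.
BOOL compatible = [currentModel isConfiguration:nil
                    compatibleWithStoreMetadata:metadata];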
Personally, on iOS, I avoid heavy migration as it is very expensive in terms of memory and time. If I cannot use a lightweight migration, I will most often explore export/import solutions (exporting to JSON, for example, and importing into the new model) or look at refreshing the data from the server.
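As an illustration of the export half of that idea (the entity name, context, and exportURL below are hypothetical, not from the answer), you can fetch attribute dictionaries and serialize them, provided the attribute types are JSON-representable:

NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Item"]; // hypothetical entity
request.resultType = NSDictionaryResultType; // fetch raw attribute dictionaries, not managed objects
NSError *error = nil;
NSArray *rows = [context executeFetchRequest:request error:&error];
// Note: this only works directly for JSON-representable attribute types
// (strings, numbers); dates and binary data need converting first.
NSData *json = [NSJSONSerialization dataWithJSONObject:rows options:0 error:&error];
[json writeToURL:exportURL atomically:YES];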
My problem is that I am trying to change an attribute's data type during automatic lightweight migration, and automatic lightweight Core Data migration does not support data type changes. I resolved this issue by resetting the data type to the older one.
I have an app with multiple updates on the App Store already. A funny thing happened: I thought that lightweight migration happens automatically; however, I recently discovered that I need to add
NSDictionary *storeOptions = @{NSMigratePersistentStoresAutomaticallyOption: @YES, NSInferMappingModelAutomaticallyOption: @YES};
to my persistentStoreCoordinator, which shook my confidence when I realized I already have 5 Core Data model versions.
The question is: when I add the above line in the next version of the app, is it going to work for everyone when they update? Because right now, everything that happens when they open the app is a fancy CRASH.
Thx
It will work if automatic lightweight migration is possible for the migration you're trying to perform. Whether this will work depends on the differences between the model used for the existing data and the current version of the model. Many common changes permit automatic lightweight migration but not all. You'll need to review the docs on this kind of migration and decide whether it will work in your case.
If it doesn't work, there are other ways to handle it, for example by creating a mapping file to tell Core Data how to make changes that it can't infer automatically.
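For reference, a sketch of what passing those options looks like when adding the store (the coordinator and storeURL variables are assumed to already exist):

NSDictionary *options = @{NSMigratePersistentStoresAutomaticallyOption: @YES,
                          NSInferMappingModelAutomaticallyOption: @YES};
NSError *error = nil;
NSPersistentStore *store =
    [coordinator addPersistentStoreWithType:NSSQLiteStoreType
                              configuration:nil
                                        URL:storeURL
                                    options:options
                                      error:&error];
if (!store) {
    // A nil store here is the "fancy CRASH" scenario: inspect the error
    // instead of letting the app fall over.
    NSLog(@"Migration failed: %@", error);
}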
I want to run a Core Data migration that applies a value transformation on a property, specifically mapping one string value to another, which I don't believe can be handled by a lightweight migration.
Eventually (but not in the next release of my app), I want to add iCloud sync. I read that iCloud sync requires you to use only lightweight migrations. Can I use a non-lightweight migration now and then integrate iCloud sync later, and will doing so make things harder for me later?
Yes, you can implement iCloud later, i.e. after the non-lightweight migration, and no, things should not be harder for you later. iCloud does not use your versioned models to construct the final managed object model; it just takes the final one. It is only the migration itself that iCloud would not support.
That being said, I have had dismal experiences with iCloud and Core Data. Don't say you have not been warned.
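As for the string-to-string transformation in the question, the usual route is a mapping model whose entity mapping names a custom NSEntityMigrationPolicy subclass. A hedged sketch, where the class name, the "status" attribute, and the "old"/"new" values are all hypothetical:

@interface StatusMigrationPolicy : NSEntityMigrationPolicy
@end

@implementation StatusMigrationPolicy
- (BOOL)createDestinationInstancesForSourceInstance:(NSManagedObject *)sInstance
                                      entityMapping:(NSEntityMapping *)mapping
                                            manager:(NSMigrationManager *)manager
                                              error:(NSError **)error
{
    // Let the superclass copy the instance across first.
    if (![super createDestinationInstancesForSourceInstance:sInstance
                                              entityMapping:mapping
                                                    manager:manager
                                                      error:error]) {
        return NO;
    }
    // Then rewrite the migrated value ("old" -> "new" is purely illustrative).
    NSArray *destinations =
        [manager destinationInstancesForEntityMappingNamed:mapping.name
                                           sourceInstances:@[sInstance]];
    for (NSManagedObject *dInstance in destinations) {
        if ([[dInstance valueForKey:@"status"] isEqual:@"old"]) {
            [dInstance setValue:@"new" forKey:@"status"];
        }
    }
    return YES;
}
@end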
We have several legacy SQL Server databases that we occasionally make schema changes to. We currently have a utility written in C++ that allows users to update their DBs with these schema changes. The utility currently generates dynamic SQL to create all DB objects. I am looking into redoing this and thought EF migrations might be a good way to go. I have read up a bit on the subject and I have a general idea of how it works, but I'm having a bit of a hard time figuring out how I would set it up to replace our current procedure (or if it is even possible).
Currently, a client could be on any one of a number of previous versions. I'm assuming I would have to go back to the oldest possible version and create my model/initial migration from that, then generate incremental migrations for each version change in order to support updates from all versions. Is that a correct assumption? Also, our clients could currently be using SQL Server 2000, 2005, or 2008. Would this have any effect on how I would set things up (or on whether I even could)?
Further, the goal is to create a utility with a (C#, probably WPF) UI that the user can use to manipulate the migrations (up or down, preferably). I've seen a lot of examples of how to manipulate migrations from the command line within Package Manager, but not a lot on how to build a utility with a friendly UI for upgrading/downgrading DBs in production. Also, I have not seen anything that shows how to create stored procedures in a migration (our DBs rely on some stored procedures). I'm assuming that, if nothing else, I can use the Sql() method to run a SQL query that creates an SP. Is that correct? Is there a better way?
I know my questions are a bit non-specific and I apologize for that. But I'm still in the beginning processes of learning this and I'd like to get an idea of whether or not this is a good way to go. Any guidance would be greatly appreciated.
Thanks,
Dennis
Firstly, on SQL Server support, Entity Framework doesn't really support SQL Server 2000. See this question:
EntityFramework SQL Server 2000?
On the question of supporting multiple versions, you have the right idea: generate an initial migration for the oldest version first, then incrementally alter the model and generate migrations to support the later versions. This will be a pain, as the migrations are opinionated about how they represent the model in the database, and you will be doing a lot of messing about to end up with a model and a set of migrations that fully represent it. Specific concerns are indexes, column lengths, data types, stored procedures, triggers, functions, and partitioning.
The Sql() function gets you around most issues, though functions like CreateIndex and AlterColumn are also helpful in the migrations.
For automating this, the migrations are available as PowerShell cmdlets, which are themselves just .NET objects and so can be called programmatically.
As this question is a year old, I assume you will have made a decision on whether to do this. My opinion is that it is hard to see that it's worth the effort. If you were re-platforming the code base that uses this database to Entity Framework then it would make sense. Otherwise there are bound to be better tools out there for database version management. My first port of call would be Redgate.
I'm working on a node.js server, and using MongoDB with node-mongo-native.
I'm looking for a db migration framework, similar to Rails migrations. Any recommendations?
I'm not aware of a specific native Node.js tool for doing MongoDB migrations .. but you do have the option of using tools written in other languages (for example, Mongoid Rails Migrations).
It's worth noting that the approach to schema design and data modelling in MongoDB is different from relational databases. In particular, there is no requirement for a collection to have a consistent or predeclared schema, so many of the traditional migration actions, such as adding and removing columns, are not required.
However .. migrations which involve data transformations can still be useful.
If your application is expecting data to be in a certain format (eg. you want to split a "name" field into "first name" and "last name") there are several strategies you could use if the idea of using migration tools written in another programming language isn't appealing:
handle data differences in your application logic, so old and new data formats are both acceptable (perhaps "upgrading" records to match a newer format as they are updated)
write a script to do a one-off data migration
contribute MongoDB helpers to node-migrate
I've just finished writing a basic migration framework based on node-mongo-native: https://github.com/afloyd/mongo-migrate. It will allow you to migrate up and down, as well as migrate up or down to a specific revision number. It was initially based on node-migrate, but obviously needed to be changed a bit to make it work.
The revision history is stored in MongoDB rather than on the file system as in node-migrate, allowing collaboration on the same project using a single database. Otherwise, each developer running migrations could cause the same migration to run more than once against a database.
The migrations themselves are file-based, which also helps with collaboration on a single project where each developer is (or is not) using the same database. When each dev runs the migration, all migration files not already run against his/her database will be run.
Check out the documentation for more info.
I recently submitted an upgrade of my app which included a lightweight Core Data migration (including new fields in existing tables and a couple of new tables). I followed every tip regarding this migration, including some I found on this site.
I thoroughly tested the update on three different devices and it all went ok!!!
However, this update is crashing on all my devices, and probably for all my customers. I can't explain why this is happening.
Could you please help me understand this debacle?
To truly test your app and migration, you need to run your original app to create a data store that conforms to the original data model. Then you need to run your new app, opening the data store that was generated with the original app. This can be a real pain, and it is easier (at least initially) to do in the Simulator because you have more control over the file system and can swap in a saved original data store. On a device you need to regenerate the original data store for each test.
If you are testing on your own development devices, then you have already migrated your data store. Is it possible that your test devices created their data stores with the new data model and never actually performed a migration?
I generally only use automatic migration during beta testing, for quick revisions; other than that I always use a mapping model, so that you have control.
The other issue is that if your model shifts far enough between releases, auto migration from v1 to v2 could be fine, and v2 to v3 could be OK, but v1 to v3 could be too drastic to be inferred. By making mapping models for those jumps, you retain control of the migration, as in the sketch below.
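A minimal sketch of running such a mapping-model migration explicitly, assuming sourceModel, destinationModel, storeURL, and tempURL already exist:

// Look up the hand-built mapping model for this source/destination pair.
NSMappingModel *mapping =
    [NSMappingModel mappingModelFromBundles:nil
                             forSourceModel:sourceModel
                           destinationModel:destinationModel];
NSMigrationManager *manager =
    [[NSMigrationManager alloc] initWithSourceModel:sourceModel
                                   destinationModel:destinationModel];
NSError *error = nil;
// Migrate to a temporary URL; replace the original store file on success.
BOOL ok = [manager migrateStoreFromURL:storeURL
                                  type:NSSQLiteStoreType
                               options:nil
                      withMappingModel:mapping
                      toDestinationURL:tempURL
                       destinationType:NSSQLiteStoreType
                    destinationOptions:nil
                                 error:&error];

For a v1 store, you would run this once per step (v1 to v2, then v2 to v3), picking the matching pair of models and mapping model each time, rather than relying on a direct v1-to-v3 inference.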