I recently submitted an upgrade of my app which included a lightweight Core Data migration (new fields in existing tables and a couple of new tables). I followed every tip regarding this migration, including some I found on this site.
I thoroughly tested the update on three different devices and it all went ok!!!
However, this update is crashing on all my devices and probably for all my customers. I can't explain why this is happening.
Could you please help me understand this debacle?
To truly test your app and migration, you need to run your original app to create a data store according to the original data model. Then you need to run your new app, opening the data store that was generated with the original app. This can be a real pain and is easier (at least initially) to do in the Simulator, because you have more control over the file system and can swap in a saved original data store. On a device you need to regenerate the original data store for each test.
If you are testing on your own development devices then you have already migrated your data store. Is it possible that your test devices created their data stores with the new data model - and never actually performed a migration?
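If it helps, one way to be sure a test actually exercises the migration path is to check whether the on-disk store still matches the old model before you open it. Here is a small Swift sketch; the store URL and model are assumed to be supplied by the caller:

    import CoreData

    // Returns true if the store at storeURL was written with an older model
    // version and will therefore need a migration when it is opened.
    func storeNeedsMigration(at storeURL: URL, to model: NSManagedObjectModel) -> Bool {
        guard let metadata = try? NSPersistentStoreCoordinator.metadataForPersistentStore(
            ofType: NSSQLiteStoreType, at: storeURL, options: nil) else {
            return false // no store on disk yet, so there is nothing to migrate
        }
        return !model.isConfiguration(withName: nil, compatibleWithStoreMetadata: metadata)
    }

If this returns false on your test devices before you run the new build, the store was already created with the new model and no migration is actually being tested.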
I generally only use automatic migration during beta testing, for quick revisions; other than that I always use a mapping model, so that you have control.
The other issue is that if your model shifts far enough between releases, auto migration from v1 to v2 could be fine, and v2 to v3 could be OK, but v1 to v3 could be too drastic to be inferred. By making mapping models for them, you retain control of the migration.
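To make that concrete, here is a rough Swift sketch of running a migration through an explicit mapping model with NSMigrationManager; the source and destination models and store URLs are assumed to be resolved elsewhere:

    import CoreData

    // Migrates the SQLite store at sourceURL to destinationURL using a custom
    // mapping model bundled with the app, rather than letting Core Data infer one.
    func migrateStore(at sourceURL: URL,
                      toDestinationURL destinationURL: URL,
                      sourceModel: NSManagedObjectModel,
                      destinationModel: NSManagedObjectModel) throws {
        guard let mapping = NSMappingModel(from: [Bundle.main],
                                           forSourceModel: sourceModel,
                                           destinationModel: destinationModel) else {
            return // no custom mapping model found for this pair of versions
        }
        let manager = NSMigrationManager(sourceModel: sourceModel,
                                         destinationModel: destinationModel)
        try manager.migrateStore(from: sourceURL,
                                 sourceType: NSSQLiteStoreType,
                                 options: nil,
                                 with: mapping,
                                 toDestinationURL: destinationURL,
                                 destinationType: NSSQLiteStoreType,
                                 destinationOptions: nil)
    }

For the v1 to v3 case you would either ship a dedicated v1-to-v3 mapping model or run v1-to-v2 and v2-to-v3 in sequence.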
I'm currently learning how to use the CloudKit framework, and there is a lack of documentation or examples showing how to sync Core Data and CloudKit.
I have watched all the WWDC videos (2014, 2015, 2016) dedicated to CloudKit, but none of them tells us how to implement syncing with Core Data. I can't find any fresh examples, tutorials, or books showing how to implement this syncing.
I know that it is effective to use CloudKit's Operations API (not the Convenience API) and to subscribe to changes, as stated in the new WWDC 2016 videos dedicated to CloudKit, but mapping to Core Data is a real problem.
For example, let's say I would like to create an app similar to the Notes app. While offline, the user can create notes and work with them, saving them to the Core Data database. When the device goes online, the app checks what changed on the server and saves newly created records to the server (CloudKit).
When the app starts, it also fetches changes from CloudKit and, if there are any, updates the local cache (Core Data) with them.
I would appreciate a common pattern for syncing. Where should the Core Data syncing methods be placed, and what should they look like?
Any information or help on this would be appreciated.
I'm using Swift 3, Xcode 8, and iOS 10.
As of iOS 13, there are new APIs that simplify this synchronization for developers. I would recommend watching the WWDC19 session about the new synchronization between Core Data and CloudKit. Please note that these new APIs only work on iOS 13+.
Video: https://developer.apple.com/videos/play/wwdc2019/202/
In short, you need to start using NSPersistentCloudKitContainer instead of NSPersistentContainer. This will let the syncing work automatically, using automatic conflict resolution with a last-writer-wins merge strategy. If you want to build a good working app, you'll also need to make some modifications to improve the syncing for your app.
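A minimal sketch of that setup, assuming the iCloud/CloudKit capability is enabled for the target; the model name and container identifier below are placeholders:

    import CoreData

    func makeCloudKitBackedContainer() -> NSPersistentCloudKitContainer {
        // "Model" is a placeholder for the name of your .xcdatamodeld file.
        let container = NSPersistentCloudKitContainer(name: "Model")

        // Optionally point the store at a specific CloudKit container
        // (the identifier here is a made-up example).
        if let description = container.persistentStoreDescriptions.first {
            description.cloudKitContainerOptions = NSPersistentCloudKitContainerOptions(
                containerIdentifier: "iCloud.com.example.Notes")
        }

        // Merge remote changes into the view context as they arrive.
        container.viewContext.automaticallyMergesChangesFromParent = true

        container.loadPersistentStores { _, error in
            if let error = error {
                fatalError("Failed to load persistent stores: \(error)")
            }
        }
        return container
    }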
Official documentation can be found at:
Setting Up Core Data with CloudKit
Syncing a Core Data Store with CloudKit
Data modeling for collaboration (conflict-free replicated data type)
At the end of the session they also demonstrated an example of better sync merging than the default 'last-writer-wins' merge strategy. The use of Causal Trees allows multiple users to edit the same string (and to some extent other types of data) without losing any data. I would really recommend everyone read the article from Archagon that describes how this works and how to implement it (also with CloudKit syncing, but not the new automatic one). As demonstrated in the session, you can also implement this with the new automatic syncing between Core Data and CloudKit.
Core Data already provides the user with the ability to sync to iCloud. There's no need to use CloudKit.
Design For Core Data In iCloud
But yes, Core Data with iCloud has been deprecated. Even so, it has not been discontinued, and there are no immediate plans at Apple to discontinue it; they just want to discourage its use. But it also has problems with rationalising updates from multiple devices.
In any case, I have been looking into the question of how to do this with CloudKit myself. There are two answers; the first is to use the following:
Seam on GitHub
The second is to do it manually:
Designing for CloudKit
The key here is that CloudKit needs the record metadata to be able to handle record updates reliably, so you have to save that metadata in your Core Data database. The CKRecord class includes a method encodeSystemFields(with:) which will encode those fields into a Data value that can be stored in your database, and then you can use the appropriate decoder when you need to restore the CKRecord.
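A short Swift sketch of that round trip (how you store the resulting Data in your entity is up to you; this uses the pre-iOS 12 keyed archiver initializers to match the Swift 3 / iOS 10 setup mentioned in the question):

    import CloudKit

    // Archives a CKRecord's system fields (record ID, change tag, timestamps)
    // into Data that can be stored in a binary attribute of a Core Data entity.
    func archivedSystemFields(of record: CKRecord) -> Data {
        let data = NSMutableData()
        let coder = NSKeyedArchiver(forWritingWith: data)
        coder.requiresSecureCoding = true
        record.encodeSystemFields(with: coder)
        coder.finishEncoding()
        return data as Data
    }

    // Recreates a CKRecord "shell" from previously archived system fields,
    // ready to have its custom fields set again and be saved back to CloudKit.
    func record(fromArchivedSystemFields data: Data) -> CKRecord? {
        let coder = NSKeyedUnarchiver(forReadingWith: data)
        coder.requiresSecureCoding = true
        defer { coder.finishDecoding() }
        return CKRecord(coder: coder)
    }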
Anyway, I am about to start doing this myself. I'll update this with more information when I have it.
Apple has recently published a guide that seems to answer this question. Check out Apple's Maintaining a Local Cache of CloudKit Records to see how to store CloudKit data on device.
While this guide doesn't provide sample code to write to the device, it does answer the rest of the question. This tells you how to fetch changes from CloudKit and create data which can be stored on the device.
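For the fetching side, the core of the guide's approach is a CKFetchRecordZoneChangesOperation driven by a saved server change token. A rough Swift sketch, leaving the Core Data write-back and durable token storage to the caller; the zone and callback shapes are assumptions for illustration:

    import CloudKit

    // Fetches incremental changes from one private-database zone since the given
    // change token and reports changed and deleted records to the caller.
    func fetchChanges(in zoneID: CKRecordZone.ID,
                      since token: CKServerChangeToken?,
                      recordChanged: @escaping (CKRecord) -> Void,
                      recordDeleted: @escaping (CKRecord.ID) -> Void,
                      completion: @escaping (CKServerChangeToken?, Error?) -> Void) {
        let config = CKFetchRecordZoneChangesOperation.ZoneConfiguration()
        config.previousServerChangeToken = token

        let operation = CKFetchRecordZoneChangesOperation(
            recordZoneIDs: [zoneID],
            configurationsByRecordZoneID: [zoneID: config])

        var latestToken: CKServerChangeToken? = token
        operation.recordChangedBlock = { record in recordChanged(record) }
        operation.recordWithIDWasDeletedBlock = { recordID, _ in recordDeleted(recordID) }
        operation.recordZoneFetchCompletionBlock = { _, changeToken, _, _, _ in
            latestToken = changeToken
        }
        operation.fetchRecordZoneChangesCompletionBlock = { error in
            completion(latestToken, error)
        }

        CKContainer.default().privateCloudDatabase.add(operation)
    }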
I have an app with multiple updates on the App Store already. A funny thing happened: I thought that lightweight migration happens automatically; however, I recently discovered that I need to add
NSDictionary *storeOptions = @{NSMigratePersistentStoresAutomaticallyOption: @YES, NSInferMappingModelAutomaticallyOption: @YES};
to my persistentStoreCoordinator, which shook my confidence when I realized I already have 5 Core Data models.
The question is: when I add the above line to the next version of the app, is it going to work for everyone when they update? Because right now, everything that happens when they open the app... is a fancy CRASH.
Thx
It will work if automatic lightweight migration is possible for the migration you're trying to perform. Whether this will work depends on the differences between the model used for the existing data and the current version of the model. Many common changes permit automatic lightweight migration but not all. You'll need to review the docs on this kind of migration and decide whether it will work in your case.
If it doesn't work, there are other ways to handle it, for example by creating a mapping model to tell Core Data how to make changes that it can't infer automatically.
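In Swift, the equivalent of the Objective-C options shown in the question looks roughly like this; the coordinator and store URL are whatever your stack already uses:

    import CoreData

    // Adds the SQLite store with automatic lightweight migration enabled.
    func addStore(at storeURL: URL, to coordinator: NSPersistentStoreCoordinator) throws {
        let options: [AnyHashable: Any] = [
            NSMigratePersistentStoresAutomaticallyOption: true,
            NSInferMappingModelAutomaticallyOption: true
        ]
        try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                           configurationName: nil,
                                           at: storeURL,
                                           options: options)
    }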
I want to run a Core Data migration that applies a value transformation on a property, specifically mapping one string value to another, which I don't believe can be handled by a lightweight migration.
Eventually (but not in the next release of my app), I want to add iCloud sync. I read that iCloud sync requires you to only use lightweight migrations. Can I use a non-lightweight migration now and then integrate iCloud sync later, and will doing so make things harder for me later?
Yes, you can implement iCloud later, i.e. after the non-lightweight migration. No, things should not be harder for you later. iCloud does not need your intermediate versioned models to construct the final managed object model; it just takes the final one. It is the migration itself that iCloud would not support.
That being said, I have had dismal experiences with iCloud and Core Data. Don't say you have not been warned.
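Coming back to the value-transforming part of the question: the usual route is a mapping model whose entity mapping points at a custom NSEntityMigrationPolicy subclass. A minimal Swift sketch; the entity name, attribute name, and the specific string mapping are made-up placeholders:

    import CoreData

    // Referenced from the mapping model as the custom policy for the "Note"
    // entity mapping; rewrites one string value during migration.
    class NoteStatusMigrationPolicy: NSEntityMigrationPolicy {
        override func createDestinationInstances(forSource sInstance: NSManagedObject,
                                                 in mapping: NSEntityMapping,
                                                 manager: NSMigrationManager) throws {
            try super.createDestinationInstances(forSource: sInstance, in: mapping, manager: manager)

            guard let destination = manager.destinationInstances(
                forEntityMappingName: mapping.name,
                sourceInstances: [sInstance]).first else { return }

            // Map the old string value to the new one.
            if sInstance.value(forKey: "status") as? String == "in progress" {
                destination.setValue("active", forKey: "status")
            }
        }
    }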
I have two Azure Websites set up - one that serves the client application with no database, another with a database and WebApi solution that the client gets data from.
I'm about to add a new table to the database and populate it with data using a temporary Seed method that I only plan on running once. I'm not sure what the best way to go about it is though.
Right now I have the database initializer set to MigrateDatabaseToLatestVersion and I've tested this update locally several times. Everything seems good to go but the update / seed method takes about 6 minutes to run. I have some questions about concurrency while migrating:
What happens when someone performs CRUD operations against the database while business logic and tables are being updated in this 6-minute window? I mean - the time between when I hit "publish" from VS, and when the new bits are actually deployed. What if the seed method modifies every entry in another table, and a user adds some data mid-seed that doesn't get hit by this critical update? Should I lock the site while doing it just in case (far from ideal...)?
Any general guidance on this process would be fantastic.
Operations like creating a new table or adding new columns should have only minimal impact on the performance and be transparent, especially if the application applies the recommended pattern of dealing with transient faults (for instance by leveraging the Enterprise Library).
Mass updates or reindexing could cause contention and affect the application's performance or even cause errors. Depending on the case, transient fault handling could work around that as well.
Concurrent modifications to data that is being upgraded could cause problems that would be more difficult to deal with. These are some possible approaches:
Maintenance window
The simplest and safest approach is to take the application offline, back up the database, upgrade the database, update the application, test, and bring the application back online.
Read-only mode
This approach avoids making the application completely unavailable, by keeping it online but disabling any feature that changes the database. The users can still query and view data while the application is updated.
Staged upgrade
This approach is based on carefully planned sequences of changes to the database structure and data and to the application code so that at any given stage the application version that is online is compatible with the current database structure.
For example, let's suppose we need to introduce a "date of last purchase" field to a customer record. This sequence could be used:
Add the new field to the customer record in the database (without updating the application). Set the new field's default value to NULL.
Update the application so that for each new sale, the date of last purchase field is updated. For old sales the field is left unchanged, and the application at this point does not query or show the new field.
Execute a batch job on the database to update this field for all customers where it is still NULL. A delay could be introduced between updates so that the system is not overloaded.
Update the application to start querying and showing the new information.
There are several variations of this approach, such as the concept of "expansion scripts" and "contraction scripts" described in Zero-Downtime Database Deployment. This could be used along with feature toggles to change the application's behavior dynamically as the upgrade stages are executed.
New columns could be added to records to indicate that they have been converted. The application logic could be adapted to deal with records in the old version and in the new version concurrently.
The Entity Framework may impose some additional limitations in the options, because it generates the SQL statements on behalf of the application, so you would have to take that into consideration when planning the stages.
Staging environment
Changing the production database structure and executing mass data changes is risky business, especially when it must be done in a specific sequence while data is being entered and changed by users. Your options to revert mistakes can be severely limited.
It would be necessary to do extensive testing and simulation in a separate staging environment before executing the upgrade procedures on the production environment.
I agree with the maintenance window idea from Fernando. But here is the approach I would take given your question.
Make sure your database is backed up before doing anything (I am assuming it's SQL Azure)
Put up a maintenance page on the Client Application
Run the migration via Visual Studio against your database (I am assuming you are doing this through the console) or a unit test
Publish the website/web api websites
Verify your changes.
The main thing with the seed method in Entity Framework is that it's easy to get it wrong, and without a proper backup while running against prod you could get yourself into trouble real fast. I would probably run it through your test database/environment first (if you have one) to verify that what you want is happening.
We have several legacy SQL Server databases that we occasionally make schema changes to. We currently have a utility written in C++ that allows users to update their DBs with these schema changes. The utility currently generates dynamic SQL to create all DB objects. I am looking into redoing this and thought EF migrations might be a good way to go. I have read up a bit on the subject and I have a general idea of how it works, but I'm having a bit of a hard time figuring out how I would set it up to replace our current procedure (or if it is even possible).
Currently, a client could be on any one of a number of previous versions. I'm assuming I would have to go back to the oldest possible version and create my model/initial migration from that, then generate incremental migrations for each version change in order to support updates from all versions. Is that a correct assumption? Also, currently our clients could be using SQL Server 2000, 2005, or 2008. Would this have any effect on how I would set things up (or if I even could)?
Further, the goal is to create a utility with a (C# - probably WPF) UI that the user can use to manipulate the migrations (up or down, preferably). I've seen a lot of examples of how to manipulate migrations from the command line within Package Manager, but not a lot on how to create a utility with a friendly UI for upgrading/downgrading DBs in production. Also, I have not seen anything that shows how to create stored procedures in a migration (our DBs rely on some stored procedures). I'm assuming that, if nothing else, I can use the Sql() method to generate a SQL query to create an SP. Is that correct? Is there a better way?
I know my questions are a bit non-specific and I apologize for that. But I'm still in the beginning processes of learning this and I'd like to get an idea of whether or not this is a good way to go. Any guidance would be greatly appreciated.
Thanks,
Dennis
Firstly, on SQL Server support, Entity Framework doesn't really support SQL Server 2000. See this question:
EntityFramework SQL Server 2000?
On the question of supporting all the multiple versions, you have the right idea about needing to generate an initial migration for the oldest version first then incrementally altering the model and generating migrations to support the later versions. This will be a pain as the migrations are opinionated about how they represent the model in the database and you will be doing a lot of messing about to end up with a model and a set of migrations that fully represent that. Specific concerns are indexes, column lengths, data types, stored procedures, triggers, functions, partitioning.
The Sql() function gets you around most issues, though also helpful in the migrations are functions like CreateIndex and AlterColumn.
For automating this, the migrations are definitely available as PowerShell cmdlets, which are themselves just .NET objects and so can be called programmatically.
As this question is a year old, I assume you will have made a decision on whether to do this. My opinion is that it is hard to see that it's worth the effort. If you were re-platforming the code base that uses this database to Entity Framework then it would make sense. Otherwise there are bound to be better tools out there for database version management. My first port of call would be Redgate.