Xcode SwiftUI -- when Core Data entities are deleted, app crashes on My Mac (Designed for iPad) - core-data

Whenever I make entity changes to Core Data (especially deleting an entity) -- anything other than adding attributes -- the simulator crashes.
Deleting Derived Data does not help... I assume because the Simulator stores data from previous builds. On iOS Simulators this can be solved by deleting the app instance on the simulator; but with the "My Mac (Designed for iPad)" proxy this cannot be done... or am I missing something?
The only thing that helped was to find the app's .sqlite file and delete it (not easy to find), which forces the app to reset everything.
Any other suggestions, or is this a bug?

From your description it sounds like you're editing the data model in the Xcode model editor. Basically, you can't just do that unless you configure your app to migrate existing data from the old model to the new one. When the app launches, Core Data needs to match the data it already has (the persistent store) to the data model the app ships with. If they don't match and Core Data can't figure out how to automatically convert the data, the app crashes.
For relatively simple changes, Core Data can figure out what to do and takes care of things. For other changes, it can't do that, so it's up to you.
If you don't need to keep old data (like, you're still developing the app and you only have test data), then what you're doing is normal. Delete the older version of the app and start fresh. There's no need to update your test data to match the new model.
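If hunting down the .sqlite file is the painful part, a programmatic reset during development can help. A minimal sketch, assuming an NSPersistentContainer whose store is configured but not yet loaded (the function name and the reset flag are hypothetical, not part of any API):

import CoreData

// Development-only reset: destroy the on-disk store before loading it,
// so the app starts from an empty database that matches the new model.
func destroyStoreIfNeeded(for container: NSPersistentContainer, reset: Bool) {
    guard reset, let url = container.persistentStoreDescriptions.first?.url else { return }
    do {
        try container.persistentStoreCoordinator.destroyPersistentStore(
            at: url, ofType: NSSQLiteStoreType, options: nil)
    } catch {
        print("Failed to destroy store: \(error)")
    }
}

Call it before loadPersistentStores, gated behind a debug flag, so a release build never wipes user data.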
If you do need to keep old data, you need to create a new version of the data model but keep the old one around. Core Data knows how to handle multiple versions of the data model; you'll tell it which one is current, and the others will all be old versions. Then, depending on what exact changes you made in the model, you can migrate the data to the new version. This is a whole topic on its own and if that's the case, please post a new question with the exact details of your changes and someone may be able to help.
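For the simple cases, Core Data's lightweight (automatic) migration handles the conversion for you. A minimal sketch of opting in; note these two options are already the defaults on NSPersistentStoreDescription, so setting them mostly documents intent (the model name "Model" is a placeholder):

import CoreData

let container = NSPersistentContainer(name: "Model") // placeholder model name
if let description = container.persistentStoreDescriptions.first {
    // Migrate the existing store to the current model version on load.
    description.shouldMigrateStoreAutomatically = true
    // Infer the mapping model for simple changes; renames need renaming IDs
    // set in the model editor, and complex changes need a custom mapping model.
    description.shouldInferMappingModelAutomatically = true
}
container.loadPersistentStores { _, error in
    if let error = error {
        // Failing here usually means the store and model can't be reconciled
        // automatically and a manual migration is required.
        fatalError("Store load failed: \(error)")
    }
}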

Related

Releasing new Core Data Schema to iCloud production

I have an app out in the App Store, and I am working on a lightweight migration (adding new attributes and new entities, not deleting anything). From extensive research, I know that I need to add a new version of my current Core Data Model for the local version of the data model. Anyone who updates their app and only uses the local data will automatically be migrated over.
However, I cannot find anything about what happens when I update the iCloud schema (from icloud.developer.apple.com). Mainly, I'm concerned about users who are on older versions of the app and are using iCloud. When I update the schema on the iCloud website, will users on an older version of the app lose their current data, or be unable to sync because their local schema differs from the iCloud one?
Also, I'm using an NSPersistentCloudKitContainer for syncing the Core Data with CloudKit.
Any help is greatly appreciated as I do not want to mess up anyone's data!
No, their data will still be on iCloud and they can continue to use your app.
Once your schema is deployed to the Production environment, you cannot change the types of records or delete them, so all your changes are purely additive to the current schema settings and do not affect users who have not updated the app yet.
I had a similar question previously and was quite anxious about updating my app's schema, but everything went well - no problems for users and no data was lost.
Do not forget to initialize your new schema from the app and deploy the changes to Production in the iCloud dashboard.
You can initialize your schema in your AppDelegate when you initialize your NSPersistentCloudKitContainer, with the following code:
let options = NSPersistentCloudKitContainerSchemaInitializationOptions()
try? container.initializeCloudKitSchema(options: options)
After that, you can comment out these lines until the next update of your Core Data model.
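Rather than commenting the call in and out by hand, one common pattern is to guard it so it only runs in development builds. A minimal sketch, assuming the usual NSPersistentCloudKitContainer setup (the container name "Model" is a placeholder):

let container = NSPersistentCloudKitContainer(name: "Model") // placeholder name
container.loadPersistentStores { _, error in
    if let error = error {
        fatalError("Store load failed: \(error)")
    }
}
#if DEBUG
// Push the current Core Data model to the CloudKit Development environment.
// Deploy the changes to Production from the iCloud dashboard afterwards.
do {
    try container.initializeCloudKitSchema(options: [])
} catch {
    print("Schema initialization failed: \(error)")
}
#endif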
You can check that all changes have been uploaded in the iCloud dashboard by clicking Deploy Schema Changes - a confirmation window will list all the model changes that are about to be deployed.
It is also possible to change your schema directly in the iCloud dashboard, but it is not as convenient (unless you only need to add a single record type).
Since schema changes do not affect existing users, I usually promote them to Production before I submit the app for review, once all testing related to the new record types is done and I am not planning to change anything further.

Cosmos DB - Microsoft.Azure.Documents.AddressResolver.EnsureRoutingMapPresent

I've been getting some odd issues with Cosmos DB as part of a data migration. The migration consisted of deleting and recreating our production collection and then using the Azure Cosmos DB migration tool to copy documents from our Development collection.
I wanted a full purge of the data already in the production collection rather than copying the new documents on top, so to achieve this I did the following process…
Deleted the production collection, named “Production_Products”
Recreated the Production collection with the same name and partition key
Using the Azure Cosmos DB Data Migration Tool, I copied the documents from our development collection into the newly created and empty production collection “Production_Products”
Once the migration was complete we tested the website and we kept getting the following error…
Microsoft.Azure.Documents.NotFoundException: at
Microsoft.Azure.Documents.AddressResolver.EnsureRoutingMapPresent
This was very confusing, as we could query the data from Azure with no problem. After multiple application restarts and checking the config, we created a new collection “Production_Products_Test” and repeated the migration steps.
This worked fine. Later in the day we reverted our changes by recreating a new collection with the original name “Production_Products”, and that failed. We had to revert back to using the “_Test” collection.
Can anyone offer any insight into why this is happening?
Based on the comments.
The DocumentClient maintains address caches. If you delete and recreate the collection externally (not through the DocumentClient, or at least not through that particular DocumentClient instance, since you describe having many services), the address cache held by that instance becomes invalid. Newer versions of the SDK contain fixes that react to this and refresh the cache (see the change log at https://learn.microsoft.com/azure/cosmos-db/sql-api-sdk-dotnet).
SDK 2.1.3 is rather old (more than two years), and the recommendation is to update it (2.10.3 is the latest at this point).
The reason those caches are invalidated is that when you delete and recreate a collection, the new collection has a different ResourceId.
Having said that, there is one scenario that won't be easily fixed: if, when you delete and recreate a collection, your code does operations using ResourceIds (for example, using the SelfLinks) instead of the names/ids. In that case, if you are caching or holding a reference to the ResourceId of the previous collection, those requests will fail. Instead, you should use the names/ids through UriFactory.
Normally in these cases, knowing the full stack trace of the exception (not just the name of the type) helps in understanding what exactly is going on.

The best way to publish a new version to Azure apps/services?

Say I have one Azure app which calls one Azure API service. Now I need to update both applications to a newer version, in the most extensive sense: the database is not compatible, and the API has changes to existing method signatures that are not compatible with old-version invocations either. I use Visual Studio's publish profile to deploy directly. The problem I've been facing is that during the publish process, although it takes only a few seconds, there are still active end users doing things on the web app and making API calls. I've personally seen the results in such situations be unstable and unpredictable, and the saved data might simply be corrupt.
So is there a better way to achieve some sort of 'flash update' which causes absolutely no side effects for end users? Thanks.
You should look at a different deployment strategy. First update the database, perhaps making new columns accept null values; then deploy the new API next to the current one and validate it; then switch traffic from the current version to the new one. Do the same for the website. This is the blue-green deployment strategy: it requires some more effort, but it solves the downtime and errors. https://www.martinfowler.com/bliki/BlueGreenDeployment.html
For the web app, you should use deployment slots: deploy your new version to a staging slot and, once you are ready, it is just a matter of pointing the site URL to the new slot. This takes no time at all.
For the database, I believe you should freeze updates, take a backup, and let users work in read-only mode; once you finish all your DB migration and changes, point the application to the new database and that is it.

Access 2013 web app - restoring previous app snapshot package without reverting data (structured staging environment)

I have a reasonably complex Access 2013 web app which is now in production on hosted O365 SharePoint. I would like to take a backup (package snapshot) into a test environment, and then migrate it to production once development is complete (I certainly don't want to do development on the production system!). The problem is that the snapshot also backs up all data, so uploading the new package over the top of the existing package in the SharePoint app repository reverts the data to the time of the snapshot as well. Alternatively, rolling back to the original snapshot if there are issues would lose all data entered after the new package was applied.
I can easily get a second version of the app going by saving it as a new application, etc., but this creates a new product ID in the app store. We also use an Access 2013 desktop .accdb frontend that hooks directly into the Azure SQL database to do all the things the web app can't provide (formatted reports, etc.), so I can't just create a new app every time without dealing with all of the credential and database renaming issues.
So my question is: does anybody know how to safely operate a test environment for Access 2013 web app development? One needs to be able to apply an updated version, or roll back to the old one if there are problems, without rolling back the data. With the desktop client I can obviously just save a new copy of the .accdb file every time. I don't mind creating a new instance of, or link to, the app on SharePoint each time; however, this obviously generates a totally new database (server name, DB location, login IDs, etc.) as well. You would hope there is a way to upload and replace your app without touching the data; otherwise, how can you develop without working directly in production?
Any answers would be really appreciated.
Thanks.

WCF model class from metadata not updating

I have a WCF service that uses a separate project for a DAL, which I have a reference to, and I can access the DAL's entity objects through the service like so:
[OperationContract]
GeoLocations GetLocations();
This returns a GeoLocations object.
The issue is that I have updated the DAL because my database has changed, and I see all the new fields in the code; however, when I do a 'view source' on GeoLocations I see the following file:
GeoLocations [from metadata]
... which doesn't contain any of the new fields, and is locked in the IDE.
I have tried cleaning the project, deleting all the DLLs, etc., but I still see the old class.
How can I update this with the new properties?
Thanks.
I have faced this issue many times, and I found the problem is usually the following:
1. When you change the DAL, it is necessary to build that project first.
2. The WCF project has a reference to the DAL, so you need to build it too and verify that it has the updated DLL.
3. Now go to the project or application where you consume the WCF service and update the service reference.
A couple of things:
When updating the service reference, depending on how large the consumed service is, all the property definitions may not be updated right away. Also, after the service reference has been updated, I recommend building the project before continuing. This seems to avoid the previous issue.
