I've been running into some odd issues with Cosmos DB as part of a data migration. The migration consisted of deleting and recreating our production collection and then using the Azure Cosmos DB Data Migration Tool to copy documents from our development collection.
I wanted a full purge of the data already in the production collection rather than copying the new documents on top, so to achieve this I did the following:
1. Deleted the production collection, named “Production_Products”.
2. Recreated the production collection with the same name and partition key.
3. Using the Azure Cosmos DB Data Migration Tool, copied the documents from our development collection into the newly created and empty production collection “Production_Products”.
Once the migration was complete, we tested the website and kept getting the following error:
Microsoft.Azure.Documents.NotFoundException: at
Microsoft.Azure.Documents.AddressResolver.EnsureRoutingMapPresent
This was very confusing, as we could query the data from Azure with no problem. After multiple application restarts and checking the config, we created a new collection, “Production_Products_Test”, and repeated the migration steps.
This worked fine. Later in the day we reverted our changes by recreating a collection with the original name “Production_Products”, and that failed. We had to revert to using the “_Test” collection.
Can anyone offer any insight into why this is happening?
Based on the comments:
The DocumentClient maintains address caches. If you delete and recreate the collection externally (not through the DocumentClient, or at least not through that particular DocumentClient instance, since you describe having many services), the address cache held by that instance becomes invalid. Newer versions of the SDK contain fixes that react to this and refresh the cache (see the change log here: https://learn.microsoft.com/azure/cosmos-db/sql-api-sdk-dotnet).
SDK 2.1.3 is rather old (more than two years), and the recommendation would be to update it (2.10.3 is the latest at this point).
The reason those caches are invalidated is that when you delete and recreate a collection, the new collection has a different ResourceId.
Having said that, there is one scenario that won't be easily fixed: when you delete and recreate a collection while your code is doing operations using ResourceIds (for example, using SelfLinks) instead of the names/ids. In that case, if you are caching or holding a reference to the ResourceId of the previous collection, those requests will fail. Instead, you would need to use the names/ids through UriFactory, as in the sketch below.
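A minimal sketch of the difference (assuming the v2.x .NET SDK; the database name and Product type are placeholders): name-based links survive a delete/recreate, while SelfLinks do not.

using System;
using Microsoft.Azure.Documents.Client;

// Placeholder document type for illustration only.
class Product { public string id { get; set; } }

class Example
{
    static void Query(DocumentClient client)
    {
        // Name-based link: resolved by database/collection id, so it keeps
        // working after the collection is deleted and recreated.
        Uri collectionUri = UriFactory.CreateDocumentCollectionUri(
            "MyDatabase", "Production_Products");
        var query = client.CreateDocumentQuery<Product>(collectionUri);

        // Fragile alternative: collection.SelfLink embeds the ResourceId of
        // the old collection and starts returning NotFound after a recreate.
    }
}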
Normally in these cases, knowing the full stack trace of the exception (not just the name of the type) helps in understanding exactly what is going on.
Whenever I make entity changes to Core Data (especially deleting an entity), other than adding attributes, the simulator crashes.
Deleting Derived Data does not help... I assume because the simulator stores data from previous builds. On iOS simulators this can be solved by deleting the app instance on the simulator, but when using the "My Mac (Designed for iPad)" destination this cannot be done... or am I missing something?
The only thing that helped was to find the app's SQLite file and delete it (not easy to find), which forces the project to reset everything.
Any other suggestions, or is this a bug?
From your description it sounds like you're editing the data model in the Xcode model editor. Basically, you can't just do that unless you configure your app to migrate existing data from the old model to the new one. When the app launches, Core Data needs to match the data it already has (the persistent store) to the data model the app has. If they don't match and Core Data can't figure out how to automatically convert the data, the app crashes.
For relatively simple changes, Core Data can figure out what to do and takes care of things. For other changes, it can't do that, so it's up to you.
If you don't need to keep old data (like, you're still developing the app and you only have test data), then what you're doing is normal. Delete the older version of the app and start fresh. There's no need to update your test data to match the new model.
If you do need to keep old data, you need to create a new version of the data model but keep the old one around. Core Data knows how to handle multiple versions of the data model; you'll tell it which one is current, and the others will all be old versions. Then, depending on what exact changes you made in the model, you can migrate the data to the new version. This is a whole topic on its own and if that's the case, please post a new question with the exact details of your changes and someone may be able to help.
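For the simple, additive cases, opting in to lightweight migration is usually enough. A minimal sketch (the model name "Model" is a placeholder; NSPersistentContainer actually enables these options by default, they are just shown explicitly here):

import CoreData

let container = NSPersistentContainer(name: "Model")
if let description = container.persistentStoreDescriptions.first {
    // Attempt to migrate the existing store to the current model version.
    description.shouldMigrateStoreAutomatically = true
    // Let Core Data infer the mapping for simple changes (added attributes, etc.).
    description.shouldInferMappingModelAutomatically = true
}
container.loadPersistentStores { _, error in
    if let error = error {
        // A failure here usually means the change is too complex for
        // lightweight migration and needs a custom mapping model.
        fatalError("Store failed to load: \(error)")
    }
}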
I have an app out in the App Store, and I am working on a lightweight migration (adding new attributes and new entities, not deleting anything). From extensive research, I know that I need to add a new version of my current Core Data Model for the local version of the data model. Anyone who updates their app and only uses the local data will automatically be migrated over.
However, I cannot find anything about what happens when I update the iCloud schema (from icloud.developer.apple.com). Mainly, I'm concerned about users who are on older versions of the app and are using iCloud. When I update the schema on the iCloud website, will users on an older version of the app lose their current data, or be unable to sync their data because their local schema differs from the iCloud one?
Also, I'm using an NSPersistentCloudKitContainer for syncing Core Data with CloudKit.
Any help is greatly appreciated as I do not want to mess up anyone's data!
No, their data will still be on iCloud, and they can continue to use your app.
Once your schema is deployed to the Production environment, you cannot change the types of records or delete them, so all your changes can only be additions to the current schema and do not affect users who have not yet updated the app.
I had a similar question previously and was quite anxious about updating my app's schema, but everything went well: no problems for users and no data was lost.
Do not forget to initialize your new schema from the app and deploy the changes to Production in the iCloud dashboard.
You can initialize the schema in your AppDelegate, where you initialize your NSPersistentCloudKitContainer, with the following code:
// No special options needed; this pushes the current Core Data model
// to the CloudKit schema in the Development environment.
let options = NSPersistentCloudKitContainerSchemaInitializationOptions()
try? container.initializeCloudKitSchema(options: options)
After that, you can comment out these lines until the next update of your Core Data model.
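An alternative to commenting the lines out, as a sketch, is to guard them so schema initialization is only compiled into debug builds:

#if DEBUG
// Push the current model to the CloudKit Development environment;
// never compiled into release builds shipped to users.
do {
    try container.initializeCloudKitSchema(options: [])
} catch {
    print("CloudKit schema initialization failed: \(error)")
}
#endif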
You can check that all the changes have been uploaded in the iCloud dashboard by clicking Deploy Schema Changes: you will see a confirmation window with all the changes to the model that will be deployed.
It is also possible to change your schema directly in the iCloud dashboard, but it is not as convenient (unless you just need to add a single record type).
Since schema changes do not affect existing users, I usually move them to Production before I submit the app for review, but only after all testing related to the new record types is done and I am not planning to change anything there.
We recently migrated a batch of document updates from a pre-production server up to our production server. We'd attempted to use content staging, which had worked mostly OK in the past, but this time it failed with a lot of "parent record not found" errors. Our outsourced developer used the Documents tab of the Staging module to sync subtrees across, but a few files got missed or didn't work correctly the first time. So I'm trying to move them now, and I'm running into a problem.
After expanding the content tree and clicking on the document in the Documents tab, and selecting the correct target server (we've got bi-directional staging set up), we're getting an error: Dependent task 'Move document See Things Differently' failed: Synchronization server error: Exception occurred: SyncHelper.ServerError: Document node not found, please synchronize the document node first.
Looking at the tasks listed, I don't even see a Move document task anywhere queued up for the target server.
Is there any way I can move this document up to our production instance? I've looked at the site export as an alternative, but it doesn't look like I can export just this one page. Am I going to have to recreate the page on Production instead?
The best way to attempt this sync is to clear out all the staging tasks and do a full sync from the root of the website. Most likely what happened to some of the documents reported as "moved..." is that the pages were reordered, which means every document below that document's parent gets updated at that level. So simply moving or reordering one document out of 10 will trigger 10 staging tasks. If you don't sync those to the production site, the order there will be off relative to the staging site.
I have had problems similar to this before.
This typically works:
1. Create a copy of the document and put it in the same location in the content tree.
2. Delete the original document.
3. Make any changes to the new document's name, URL, aliases, etc. (remove the '1' for example).
4. Push the new document with Kentico staging.
It's a bit of a hack, but sometimes necessary.
Brenden is right on target about clearing the staging tasks listed under "All tasks" before you try syncing again. We've run into these errors on our sites when we've tried pushing a large number of docs from staging to production. What worked for us was deleting all pending and failed "Pages" tasks, then, in the "Pages" content tree, navigating to the first child level and syncing "Current page" for each level down to the closest parent directory, and then syncing "Current subtree".
For instance, if the problem doc is in, say, the "18" directory, select Articles and sync "Current page", do the same for 2016 and then 01, and for 18 sync "Current subtree".
The best way is to use Kentico's built-in Staging module, and use it to sync the objects first and then the pages.
I have never faced any problem moving a large number of nodes (around 8,000). That's the best possible approach.
If your website has a large number of custom table items, say 50K, I would do an export/import of the table instead; syncing that many entries has usually given connection timeout errors before.
Thanks,
Chetan
For some reason, when I published my site to Azure from Visual Studio, it created a new DB. My DB was named something like MySiteDB, and it created a database called mysite_db. I finally found the issue in the publish settings and changed it so that it pointed to the right database. I then deleted the old database, since it was essentially empty. However, the site is still trying to connect to the old database (that is, mysite_db). Where else could this be set? I've searched the entire solution for mysite_db and it's nowhere to be found.
During a regular publish to Azure with Web Deploy, I had checked "Execute Code First Migrations", which I had done before.
But this time "Use this connection string at runtime" was also checked, and I published without noticing it. As a result, the remote Azure DB was wiped and instead seeded with what looks like a default database, with ASP.NET membership tables and a _Migrations table that only has migrations related to the Identity tables.
The production data, along with the DB structures, is gone, and I had not yet set up backup on Azure (doing it now).
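For context, a rough sketch of what the "Execute Code First Migrations" option wires up (this assumes an EF6 code-first setup; the context and configuration class names here are hypothetical):

using System.Data.Entity;
using System.Data.Entity.Migrations;

// Hypothetical code-first context and migrations configuration, standing
// in for whatever the real project defines.
public class MyDbContext : DbContext { }

public class Configuration : DbMigrationsConfiguration<MyDbContext>
{
    public Configuration() { AutomaticMigrationsEnabled = false; }
}

public static class Startup
{
    public static void ConfigureDatabase()
    {
        // Roughly what the publish option configures (via a web.config
        // transform): on first use of the context, any pending migrations
        // run against whatever connection string the context resolves at
        // runtime, which is why an unnoticed "use this connection string
        // at runtime" value can point them at the wrong database.
        Database.SetInitializer(
            new MigrateDatabaseToLatestVersion<MyDbContext, Configuration>());
    }
}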
Is there a way to restore the database from some sort of automatic backup on Azure? I have the Web edition with the 1 GB size selected, and I do not see any options.
This suggests that the Web edition would not have any daily backup, but also that the Web edition was discontinued as of April, yet I still have it: http://msdn.microsoft.com/en-us/library/jj650016.aspx
And another question: I understand everything that happened, but it seems extremely dangerous that it's so easy to wipe out the whole database, with no warning from Visual Studio and no notification when publishing to Azure. Is there anything that can be done to prevent dumb but very costly errors like this?
TIA