Azure DocumentDB "Owner resource does not exist"

I'm getting the error Microsoft.Azure.Documents.DocumentClientException: Message: {"Errors":["Owner resource does not exist"]}. This is my scenario: when I deploy my web app to Azure and try to get a document from DocumentDB, it throws this error. The DocumentDB database exists in Azure and contains the document I'm looking for.
The weird part is that this works fine from my local machine (running from VS). I'm using the same settings in Azure and locally. Does anybody have an idea about this?
Thanks

Owner resource does not exist
occurs when you have given a wrong database name.
For example, while reading a document with client.readDocument(..), where client is a DocumentClient instance, the database name given in the docLink is wrong.
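In the .NET SDK the same mistake is easy to make, because the database and collection names are baked into the document URI. A minimal sketch (inside an async method, assuming client is an existing DocumentClient; the names are illustrative): if "MyDatabase" doesn't exactly match the real database id, including case, the read fails with this error.
using System;
using Microsoft.Azure.Documents.Client;

// The URI embeds the database and collection ids; both must match exactly.
// Resulting link: dbs/MyDatabase/colls/MyCollection/docs/my-document-id
Uri documentUri = UriFactory.CreateDocumentUri("MyDatabase", "MyCollection", "my-document-id");
var response = await client.ReadDocumentAsync(documentUri);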

This error definitely appears to be related to reading a database/collection/document that does not exist. I got the exact same error for a database that did exist but whose name I had entered in lower case; the error appears to happen regardless of your partition key.
The best solution I could come up with for now is to wrap the
var response = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(database, collection, "documentid"));
call in a try/catch. Not very elegant, and I would much rather the response came back with some more details, but this is Microsoft for ya.
Something like the below should do the trick.
Model myDoc = null;
try
{
    var response = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(database, collection, document));
    myDoc = (Model)(dynamic)response.Resource;
}
catch (DocumentClientException ex) when (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
{
    // The database, collection, or document does not exist; myDoc stays null.
}
if (myDoc != null)
{
    // do your work here
}
That way you get a better idea of the error, and can then create the missing resource so you don't get the error anymore.
A few resources I had to go through before coming to this conclusion:
https://github.com/DamianStanger/DocumentDbDemo
Azure DocumentDB Read Document Resource Not Found

I had the same problem. I found that Visual Studio 2017 was publishing using my Release configuration instead of the Test configuration I had selected. In my case the Release configuration had a different Cosmos DB database name, one that doesn't exist, which resulted in the "Owner resource does not exist" error when I published to my Azure test server. Super frustrating, and a terrible error message.
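A cheap safeguard against this (a sketch; "CosmosDatabaseName" is a hypothetical key, use whatever your configuration actually calls it) is to trace the resolved database name at startup, so a wrong build configuration shows up immediately in the log stream:
using System.Configuration;
using System.Diagnostics;

// Hypothetical setting key; adjust to your app's configuration.
var databaseName = ConfigurationManager.AppSettings["CosmosDatabaseName"];
Trace.TraceInformation($"Cosmos DB database in use: {databaseName}");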

It may also be caused by an attachment on a document that cannot be found.
This is a common scenario when you move Cosmos DB contents using the Azure Cosmos DB Data Migration Tool, which moves all the documents with their complete definition but, unfortunately, not the actual attachment content.
This results in a document that states it has an attachment, and even states the attachment link, but no attachment can be found at that link because the tool has not moved it.
Now I wrap my code as follows
try
{
    var attachments = client.CreateAttachmentQuery(attachmentLink, options);
    [...]
}
catch (DocumentClientException ex)
{
    throw new Exception("Cannot retrieve attachment of document", ex);
}
to get a meaningful hint of what is going on.
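To detect these broken links proactively, one option (a sketch, not from the original answer; documentLink is assumed to be the self-link of the migrated document) is to enumerate the attachments and try to read each one's media content:
using System;
using System.Linq;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Enumerate the document's attachments and probe their media links.
var attachments = client.CreateAttachmentQuery(documentLink).AsEnumerable();
foreach (Attachment attachment in attachments)
{
    try
    {
        // Throws if the attachment metadata exists but its content was never moved.
        var media = await client.ReadMediaAsync(attachment.MediaLink);
    }
    catch (DocumentClientException ex)
    {
        Console.WriteLine($"Attachment '{attachment.Id}' has no readable content: {ex.StatusCode}");
    }
}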

I ran into this because my dependency injection configuration had two instances of CosmosClient being created for different databases, so any code trying to query the first database was actually running against the second database.
My fix was to create a CosmosClientCollection class with named instances of CosmosClient.
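A minimal sketch of that idea (the class body and the "orders"/"billing" names are illustrative, not the original code): each database gets its own named CosmosClient, so a consumer has to ask for the right one explicitly.
using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

// Holds one CosmosClient per logical database, keyed by name.
public class CosmosClientCollection
{
    private readonly IReadOnlyDictionary<string, CosmosClient> _clients;

    public CosmosClientCollection(IReadOnlyDictionary<string, CosmosClient> clients)
    {
        _clients = clients;
    }

    public CosmosClient Get(string name) => _clients[name];
}

// Registration (e.g. in Program.cs / Startup.cs):
services.AddSingleton(_ => new CosmosClientCollection(new Dictionary<string, CosmosClient>
{
    ["orders"] = new CosmosClient(ordersConnectionString),
    ["billing"] = new CosmosClient(billingConnectionString),
}));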

Related

Azure Function does not connect to Cosmos DB

I keep getting an error in Application Insights when trying to run my Azure Function. It works locally on my computer but does not work in Azure. Here is the error message:
System.InvalidOperationException at Microsoft.Azure.WebJobs.Extensions.CosmosDB.CosmosDBTriggerAttributeBindingProvider.ResolveConnectionString
I have added CosmosDbTrigger as a connection string in the settings configuration, so I do not understand what's wrong. The function is triggered when a change is made in the Cosmos DB: if I change something in the DB while running the function locally on my PC, it works. It's only in the Azure portal that it keeps failing. The triggers are also working, so I think it's only the function that's the problem.
There are many other discussions of a similar issue; see this link and check this link, and try rechecking your configuration file. The binding resolves the connection string like this:
internal string ResolveConnectionString(string unresolvedConnectionString, string propertyName)
{
    // First, resolve the string.
    if (!string.IsNullOrEmpty(unresolvedConnectionString))
    {
        string resolvedString = _configuration.GetConnectionStringOrSetting(unresolvedConnectionString);

        if (string.IsNullOrEmpty(resolvedString))
        {
            throw new InvalidOperationException($"Unable to resolve app setting for property '{nameof(CosmosDBTriggerAttribute)}.{propertyName}'. Make sure the app setting exists and has a valid value.");
        }

        return resolvedString;
    }

    // If that didn't exist, fall back to options.
    return _options.ConnectionString;
}
Also, check your _options.ConnectionString value and set it dynamically if needed:
builder.Services.PostConfigure<CosmosDBOptions>(options =>
{
    options.ConnectionString = "dynamically set the connection string value here";
});
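Given that the exception is thrown from ResolveConnectionString, the value passed as ConnectionStringSetting must be the name of an app setting (or connection string) that exists in the deployed Function App; note that local.settings.json is not published to Azure, so a setting that only lives there resolves locally but fails in the portal. A sketch of the binding side (database, collection, and function names are illustrative):
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ChangeFeedFunction
{
    [FunctionName("ChangeFeedFunction")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "MyDatabase",                  // illustrative
            collectionName: "MyCollection",              // illustrative
            ConnectionStringSetting = "CosmosDbTrigger", // must exist in the Function App's application settings
            LeaseCollectionName = "leases")] IReadOnlyList<Document> input,
        ILogger log)
    {
        log.LogInformation($"Documents modified: {input.Count}");
    }
}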

Revit model locked after relinquishing ownership through Revit API with a BIM360 project

We have a process where we check out all elements in the Revit model using worksharing, to make sure everything is available when processing the data. After processing the data it's time to relinquish all the elements again, but here we get an error when doing this on larger projects on BIM360.
The method used to relinquish the elements is this one:
var relinquishOptions = new RelinquishOptions(true);

try
{
    // Relinquish everything
    WorksharingUtils.RelinquishOwnership(doc, relinquishOptions, null);
}
catch (Exception e)
{
    MsgBox.ShowMsg(e.ToString(), "Relinquish ownership failed");
    throw;
}
This triggers an error message in the "sync with central" dialog from BIM360 (screenshot not included).
After this, the model can't be synced with the central model on BIM360. Using the Revit API to sync with central, we get this exception:
Autodesk.Revit.Exceptions.CentralModelContentionException: The central model is being accessed by another client.
The model is locked by our user, and the only way to "unblock" it is to re-upload the file to BIM360.
This only happens with files on BIM360, and only with larger files; the file in question is around 350 MB.
Are we missing something?
I can't seem to find any documentation on the Revit API mentioning restrictions on the integration with BIM360 that could explain the issue.
Do any of you know why this is happening?
Thank you in advance!
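For reference, relinquishing can also be requested as part of the synchronize-with-central call itself, rather than via a separate WorksharingUtils.RelinquishOwnership step. A sketch (we have not verified that this behaves differently on BIM360):
// Relinquish everything as part of the synchronization itself.
var transactOptions = new TransactWithCentralOptions();
var syncOptions = new SynchronizeWithCentralOptions();
syncOptions.SetRelinquishOptions(new RelinquishOptions(true));
doc.SynchronizeWithCentral(transactOptions, syncOptions);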

Core Data and CloudKit integration issue when renaming relationship (code 134110)

I currently have an app using Core Data in the App Store: the app allows people to record their water and sailing activities (think of it as Strava for sailors). I have not updated the app for 3 years; it still seems to work fine on the latest iOS versions, but I recently planned to improve it.
I am currently working on an update, and need to change the data model and schema. I would like an automatic lightweight migration. I renamed some entities, properties and relationships, but I made sure to put the previous IDs in the Renaming ID field in the editor.
I want to take this opportunity to sync the updated schema to CloudKit. I followed the instructions in the Apple Developer documentation to set up the sync. I also made sure to initialize the schema using initializeCloudKitSchema(). When I visit the dashboard, I see the correct schema. The container is only in development mode, not pushed to production.
When I launch the app with a SQLite file generated by the shipping app, the migration seems to work: the data is still there and correct. I can navigate the app normally, and when I visit the CloudKit dashboard, the data is correctly saved.
But sometimes, the app crashes at launch with the following error:
Fatal error: Unresolved error Error Domain=NSCocoaErrorDomain Code=134110 "An error occurred during persistent store migration."
UserInfo={reason=CloudKit integration forbids renaming 'crewMembers' to 'sailors'.
Older devices can't process the new relationships.
NSUnderlyingException=CloudKit integration forbids renaming 'crewMembers' to 'sailors'.
Older devices can't process the new relationships.}}}
The entities concerned were renamed, as were the relationships; the relationship is many-to-many and optional on both sides. This occurs even if I reset the CloudKit development container. I don't really have a clear idea of when it appears (it seems random: after I update some data, or after I update the Core Data model). Any idea why the app is crashing? I would like, as much as possible, to keep the new naming for my entities and relationships.
SKPRCrewMemberMO renamed to Sailor
SKPRTrackMO renamed to Activity
crewMembers <<--->> tracks renamed to sailors <<--->> activities
Here are some screenshots of the previous and updated data models for the entity at the origin of the migration issue, as well as some code for my Core Data stack initialization and the console error I'm getting.
PS: the app is used by a few hundred people. That's not a lot, but still, some of them have dozens of recorded activities and I don't want to break anything or lose or corrupt data. I could launch a new app, but users would lose their progress, as it's only saved locally in a shared container (an app group was used because I wanted to share the Core Data store with an Apple Watch extension). And I would lose the user base and App Store-related things.
private init() {
    container = NSPersistentCloudKitContainer(name: "Skipper")

    guard let description = container.persistentStoreDescriptions.first else {
        fatalError("###\(#function): Failed to retrieve a persistent store description.")
    }

    // Required for NSPersistentCloudKitContainer mirroring.
    description.setOption(true as NSNumber, forKey: NSPersistentHistoryTrackingKey)
    description.setOption(true as NSNumber, forKey: NSPersistentStoreRemoteChangeNotificationPostOptionKey)

    let id = "iCloud.com.alepennec.sandbox20201013"
    let options = NSPersistentCloudKitContainerOptions(containerIdentifier: id)
    description.cloudKitContainerOptions = options

    container.loadPersistentStores(completionHandler: { (storeDescription, error) in
        if let error = error as NSError? {
            fatalError("Unresolved error \(error), \(error.userInfo)")
        }
    })

    // Development-only: pushes the managed object model to the CloudKit schema.
    do {
        try container.initializeCloudKitSchema()
    } catch {
        print("Unable to initialize CloudKit schema: \(error.localizedDescription)")
    }

    container.viewContext.automaticallyMergesChangesFromParent = true
    container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
}

Files in server root not loading properly in Azure

I'm working on an app that, instead of a database, uses the file system in the server's root directory. It's basically a note application that allows me to save notes. Each note is a serialized object of the Note class, stored at a path like \Data\Notes\MyUsername\Title.txt.
When I test this on localhost through IIS Express, everything works fine and I can easily step through it.
However, once I publish the app to Azure, the folder structure is still there (I made a test controller that uses Directory.GetFiles() and Directory.GetDirectories() to simulate folder browsing, so I'm sure the files are there), but the file simply doesn't get loaded.
The loading method being called:
public T Load<T>(string filePath) where T : new()
{
    try
    {
        // Read the whole serialized note and deserialize it.
        using (var reader = new StreamReader(filePath))
        {
            var rawDb = reader.ReadToEnd();
            return JsonConvert.DeserializeObject<T>(rawDb);
        }
    }
    catch
    {
        // Any IO or deserialization failure falls back to the default value.
        return default(T);
    }
}
Since I can't normally debug the app on Azure, I tried to dump as much info as I could through ViewData, and even there everything looks okay and the paths match, but the deserialized object is still null. This is only when trying to open an existing note WITHOUT creating a new one first (more on that later).
Additionally, like I said, new notes get saved in the folder structure, and there's a note sidebar on the left that allows users to switch between notes. The note browser is nothing more than a list collected with a .GetFiles() of that folder.
On Azure this works normally, and if I were to delete a note manually it would be removed from the sidebar as well.
Now here's the kicker. On localhost, adding a note adds it to the sidebar and I can switch between notes normally.
Adding a note on Azure makes all views display only that new note, regardless of which note I open, and the new note does NOT get stored in the structure (I don't know where it ended up at all!), even though the path is defined normally at that point and it should save just like it does on localhost.
var model = new ViewNoteModel()
{
    Note = Load<Note>($@"{NotePath}\{Title}.txt"), // works on localhost, fails on Azure on many levels. Title is a URL param.
    MyNotes = GetMyNotes() // works fine, reads the right directory locally and on Azure
};
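One thing worth checking (an assumption on my part, not a confirmed cause): Title comes straight from the URL, so characters that are legal in a route but invalid or normalized differently in a Windows file name could make the path resolve differently on Azure than under IIS Express. A sketch of defensive path construction; NotePaths is a hypothetical helper:
using System;
using System.IO;

// Hypothetical helper: builds a note path from a URL-supplied title,
// rejecting characters that are invalid in Windows file names.
public static class NotePaths
{
    public static string Build(string notesRoot, string title)
    {
        if (title.IndexOfAny(Path.GetInvalidFileNameChars()) >= 0)
            throw new ArgumentException("Title contains invalid file name characters.", nameof(title));

        // Path.Combine avoids hand-built separators that can differ between environments.
        return Path.Combine(notesRoot, title + ".txt");
    }
}
The call site would then be Note = Load<Note>(NotePaths.Build(NotePath, Title)).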
To summarize:
Everything works fine on localhost; the important part doesn't work on Azure.
If no new note is created but an existing note is opened, the correct note gets loaded (based on the URL param) on localhost; on Azure it breaks and loads a default Note object (not null, just the default-constructor data, since that's required by JsonConvert).
If a new note is created, on localhost you'll see it and can still open all other notes; on Azure you will see only the new note, regardless of which note is picked.
It's really strange and I have no idea what could cause this. I thought it had something to do with Azure handling requests differently, so maybe the controller pushes the view before the model is completely initialized, but that doesn't make sense since there's nothing async here.
However, the fact that it loads a note that doesn't exist on the server is even more absurd, and I have no explanation for that.
Additionally, this issue is not linked to a session. I logged in through my phone and it showed the fake note there as well, right away.
P.S. Before you say anything about storage, please note this: our university grants us a very limited Azure subscription, a simple lowest-tier App Service and a 5 DTU SQL server, and 99% of the rest is locked out of our subscription. This is why I'm storing stuff on the server, not because I believe it's the smart thing to do.

DocumentDB Data Migration Tool can't migrate from db to db

I'm using the DocumentDB Data Migration Tool to migrate a DocumentDB database to a newly created DocumentDB database. The connection string verification says it is OK.
It doesn't work: no data is transferred (= 0), but no failure is written in the log file either (Failed = 0).
Here is what I've done. I've tried many things, such as:
migrating / transferring a collection to a JSON file
migrating to a partitioned / non-partitioned DocumentDB database
for the target indexing policy, taking the source indexing policy (the JSON from the Azure DocumentDB collection settings)
...
Actually nothing is working, but I have no error logs. Maybe a problem with the DocumentDB version?
Thanks in advance for your help.
After debugging the solution from the tool's repo, I figured out that the tool fails silently if you mistype the database's name, like I did.
DocumentDBClient just returns an empty async enumerator:
var database = await TryGetDatabase(databaseName, cancellation);

if (database == null)
    return EmptyAsyncEnumerator<IReadOnlyDictionary<string, object>>.Instance;
I can import from an Azure Cosmos DB DocumentDB API collection using the DocumentDB Data Migration Tool.
Besides, based on my test, if the name of the collection specified for the source DocumentDB does not exist, no data will be transferred and no error log is written.
Please make sure the source collection you specified exists. And if possible, try creating a new collection, importing data into it, and checking whether data can be transferred.
I've faced the same problem, and after some investigation I found that the internal document structure was changed. Therefore, after migration with the tool, the documents are present but can't be found with Data Explorer (though with Query Explorer, using SELECT *, they are visible).
I migrated the collection through the Mongo API using MongoChef.
#fguigui: To help troubleshoot this, could you please re-run the same data migration operation using the command-line option? Just launch dt.exe from the same folder as the Data Migration Tool to see the required syntax. Then, after you launch it with the required parameters, please paste the output here and I'll take a look at what's broken.
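For reference, a dt.exe invocation of roughly this shape (endpoint, key, and names are placeholders) exports a source collection to a JSON file, which makes silent failures easier to spot than in the GUI:
dt.exe /s:DocumentDB /s.ConnectionString:"AccountEndpoint=https://<account>.documents.azure.com:443/;AccountKey=<key>;Database=<database>" /s.Collection:<collection> /t:JsonFile /t.File:export.json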
