I'm using the DocumentDB Data Migration Tool to migrate a DocumentDB database to a newly created DocumentDB database. Verifying the connection strings reports that they are OK.
It doesn't work: no data is transferred (Transferred = 0), but no failure is written to the log file either (Failed = 0).
Here is what I have tried so far:
* migrating/exporting a collection to a JSON file
* migrating to a partitioned/non-partitioned DocumentDB database
* for the target indexing policy, using the source indexing policy (the JSON taken from the Azure portal, in the DocumentDB collection settings)
...
So far nothing is working, but I have no error logs. Could it be a problem with the DocumentDB version?
Thanks in advance for your help.
After debugging the solution from the tool's repo, I figured out that the tool fails silently if you mistype the database's name, as I did.
DocumentDBClient just returns an empty async enumerator.
var database = await TryGetDatabase(databaseName, cancellation);
if (database == null)
    // database not found: silently yield no documents instead of failing
    return EmptyAsyncEnumerator<IReadOnlyDictionary<string, object>>.Instance;
I can import from an Azure Cosmos DB DocumentDB API collection using the DocumentDB Data Migration Tool.
Besides, based on my test, if the collection name specified for the source DocumentDB does not exist, no data is transferred and no error log is written.
Please make sure the source collection you specified exists. If possible, try creating a new collection, importing data from this new collection, and checking whether data is transferred.
I've faced the same problem, and after some investigation found that the internal document structure had changed. Therefore, after migration with the tool, the documents are present but can't be found with Data Explorer (though with Query Explorer, using select *, they are visible).
I've migrated the collection through the Mongo API using MongoChef.
@fguigui: To help troubleshoot this, could you please re-run the same data migration operation using the command-line option? Just launch dt.exe from the same folder as the Data Migration Tool to see the required syntax. Then, after you launch it with the required parameters, please paste the output here and I'll take a look at what's broken.
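For reference, exporting a collection to a JSON file from the command line looks roughly like this (the connection string, collection, and file names below are placeholders; launch dt.exe without arguments to see the exact options your version supports):
dt.exe /s:DocumentDB /s.ConnectionString:"AccountEndpoint=https://myaccount.documents.azure.com:443/;AccountKey=<key>;Database=mydb" /s.Collection:mycollection /t:JsonFile /t.File:export.json /t.Overwrite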
I did my homework before posting this question.
So the case is that I want to create a utility in my Node.js application that will move specific collections from my main database to an archive database and vice versa. I am using MongoDB Atlas for my application. I have done my research and found two possible ways: one is to create a mongodump and store it, and the other is to create a backup file myself with my Node application and upload it to the archive database. The latter approach would cause me to lose my collection indexes.
I am planning to use mongodump for this purpose but can't find a resource that shows how to achieve it. Any help would be appreciated. Also, if anyone has experience with a similar situation, I am open to suggestions as well.
I recently created a mongodump & mongorestore wrapper for Node.js: node-mongotools.
What does it mean?
You have to install the mongo binaries on your host by following the official MongoDB documentation (example), and then you can use node-mongotools to call them from Node.js.
Here is an example; the tool's documentation contains more details:
const { MongoTools } = require("node-mongotools"); // named export assumed; see the package README

const mt = new MongoTools();
// uri: MongoDB connection string, path: directory where the dump is written
const dumpResult = await mt.mongodump({ uri, path })
    .catch(console.log);
Can I write an UPDATE statement for Azure Cosmos DB? The SQL API supports queries, but what about updates?
In particular, I am looking to update documents without having to retrieve the whole document. I have the ID for the document and I know the exact path I want to update within the document. For example, say that my document is
{
  "id": "9e576f8b-146f-4e7f-a68f-14ef816e0e50",
  "name": "Fido",
  "weight": 35,
  "breed": "pomeranian",
  "diet": {
    "scoops": 3,
    "timesPerDay": 2
  }
}
and I want to update diet.timesPerDay to 1 where the ID is "9e576f8b-146f-4e7f-a68f-14ef816e0e50". Can I do that using the Azure SQL API without completely replacing the document?
The Cosmos DB SQL language only supports the SELECT statement.
Partially updating a document isn't supported via the SQL language or the SQL API.
People have made it clear that this is a feature they want, so there is a highly upvoted request for it that can be found here: https://feedback.azure.com/forums/263030-azure-cosmos-db/suggestions/6693091-be-able-to-do-partial-updates-on-document
Microsoft has already started working on it, so for now the only thing you can do is wait.
To elaborate on this: partially updating a document server-side in Cosmos DB isn't possible; instead, you do whatever you need to do in memory. In order to literally UPDATE the document, you have to retrieve the entire document from Cosmos DB, update the property or properties that you need to change, and then call the 'Replace' method in the Cosmos DB SDK to replace the document in question. Alternatively, you can use 'Upsert', which checks whether the document already exists and performs a 'Replace' if it does or an 'Insert' if it doesn't.
NOTE: Make sure you have the latest version of the document before you commit an update!
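To illustrate that read-update-replace flow, here is a rough sketch using the .NET DocumentDB SDK (the database/collection names and the client variable are placeholders, not from the question); it passes the document's ETag as an access condition so the replace fails if someone else changed the document in the meantime:
// using Microsoft.Azure.Documents; using Microsoft.Azure.Documents.Client;
var docUri = UriFactory.CreateDocumentUri("mydb", "pets", "9e576f8b-146f-4e7f-a68f-14ef816e0e50");

// 1. Read the current document (this also brings back its ETag).
var response = await client.ReadDocumentAsync(docUri);
Document doc = response.Resource;

// 2. Update the nested property in memory.
var diet = doc.GetPropertyValue<Dictionary<string, object>>("diet");
diet["timesPerDay"] = 1;
doc.SetPropertyValue("diet", diet);

// 3. Replace, but only if the document hasn't changed since we read it.
await client.ReplaceDocumentAsync(docUri, doc, new RequestOptions
{
    AccessCondition = new AccessCondition
    {
        Type = AccessConditionType.IfMatch,
        Condition = doc.ETag
    }
});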
UPDATE:
Cosmos DB support for partial updates is now generally available (GA). You can read more here.
You can use the Patch API for partial updates.
Refer: https://learn.microsoft.com/en-us/azure/cosmos-db/partial-document-update
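For example, with the .NET SDK (Microsoft.Azure.Cosmos v3.23 or later), a partial update of the document above could look roughly like the sketch below; the partition key value is an assumption, since the question doesn't show the container's partition key path:
// using Microsoft.Azure.Cosmos;
ItemResponse<dynamic> patched = await container.PatchItemAsync<dynamic>(
    "9e576f8b-146f-4e7f-a68f-14ef816e0e50",                     // document id
    new PartitionKey("9e576f8b-146f-4e7f-a68f-14ef816e0e50"),   // assumed partition key (/id)
    new[]
    {
        // only this path is sent; the rest of the document stays untouched
        PatchOperation.Replace("/diet/timesPerDay", 1)
    });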
As of 25 May 2021, this feature is available in private preview and will hopefully be generally available soon.
https://azure.microsoft.com/en-us/updates/partial-document-update-for-azure-cosmos-db-in-private-preview/
Use Robo3T to update/delete multiple records with MongoDB commands. Make sure your query includes the partition key.
db.getCollection('ListOfValues').updateMany({type:'PROPOSAL', key:"CITI"},{$set: {"key":"CITIBANK"}})
db.getCollection('ListOfValues').deleteMany({type:'PROPOSAL', key:"CITI"})
I am having the same error, Microsoft.Azure.Documents.DocumentClientException: Message: {"Errors":["Owner resource does not exist"]}. This is my scenario: when I deploy my web app to Azure and try to get a document from DocumentDB, it throws this error. The DocumentDB database exists in Azure and contains the document I am looking for.
The weird thing is that from my local machine (running from VS) this works fine. I'm using the same settings in Azure and locally. Does somebody have an idea about this?
Thanks
Owner resource does not exist
occurs when you have given a wrong database name.
For example, while reading a document with client.readDocument(..), where client is a DocumentClient instance, the database name given in the docLink is wrong.
This error definitely appears to be related to reading a database/collection/document that does not exist. I got the exact same error for a database that did exist, but I had entered the name in lower case. The error appears to happen regardless of your partition key.
The best solution I could come up with for now is to wrap the
var response = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(database, collection, "documentid"));
call in a try/catch. It's not very elegant, and I would much rather the response came back with some more details, but this is Microsoft for ya.
Something like the below should do the trick.
Model myDoc = null;
try
{
    var response = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(database, collection, document));
    myDoc = (Model)(dynamic)response.Resource;
}
catch (DocumentClientException)
{
    // the database, collection, or document does not exist ("Owner resource does not exist")
}
if (myDoc != null)
{
    // do your work here
}
That way you get a better idea of the error, and can then create the missing resource so you don't get the error anymore.
A few resources I had to go through before coming to this conclusion:
https://github.com/DamianStanger/DocumentDbDemo
Azure DocumentDB Read Document Resource Not Found
I had the same problem. I found that Visual Studio 2017 was publishing using my Release configuration instead of the Test configuration I had selected. In my case, the Release configuration had a different Cosmos DB database name that didn't exist, which resulted in the "owner resource does not exist" error when I published to my Azure test server. Super frustrating and a terrible error message.
It may also be caused by a document attachment that cannot be found.
This is a common scenario when you move Cosmos DB contents using the Azure Cosmos DB Data Migration Tool, which moves all the documents with their complete definition but unfortunately not the actual attachment content.
This therefore results in a document that states it has an attachment, and also states the attachment link, but no attachment can be found at that link because the tool has not moved it.
Now I wrap my code as follows
try
{
    var attachments = client.CreateAttachmentQuery(attachmentLink, options);
    [...]
}
catch (DocumentClientException ex)
{
    throw new Exception("Cannot retrieve attachment of document", ex);
}
to have a meaningful hint of what is going on.
I ran into this because my dependency injection configuration had two instances of CosmosClient being created for different databases. Thus any code trying to query the first database was running against the second database.
My fix was to create a CosmosClientCollection class with named instances of CosmosClient.
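Roughly, such a collection class could look like the sketch below (the class name, property names, and registration code are my own illustration, not from the original answer):
// using Microsoft.Azure.Cosmos;
// Registered once as a singleton; each named client points at a different database/account.
public class CosmosClientCollection
{
    public CosmosClientCollection(CosmosClient mainClient, CosmosClient archiveClient)
    {
        Main = mainClient;
        Archive = archiveClient;
    }

    public CosmosClient Main { get; }
    public CosmosClient Archive { get; }
}

// services.AddSingleton(sp => new CosmosClientCollection(
//     new CosmosClient(mainConnectionString),
//     new CosmosClient(archiveConnectionString)));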
After reading the documentation, I was expecting the __updatedAt field to be set automatically by Azure Mobile Services. Apparently it isn't.
Should I configure something extra?
Other options that I see (to do for each table):
* add an extra line to the Node.js update(item, user, request) function:
item.__updatedAt = new Date();
* create an update trigger in the database
Anybody have experience with this?
Thx!
The __updatedAt column is updated by a trigger created in the underlying SQL Server database, so it should be updated any time a row is updated. Note that this requires a database operation to occur for it to be updated.
When I POST data and query a table with the database set to Dev data storage (emulator), it works.
When I POST data to a table with the data in the Azure database (I have an account), it works.
When I GET data from the table with the data in the Azure database, it does not work.
In both cases the code is the same, except for the key and account credentials.
Is there anything I should do to query?
var query = azure.TableQuery
.select().from('dummytable').where('PartitionKey eq ?', key);
Can anyone suggest why the query is not working?
Should there be anything else that needs to be done?
From Storage Explorer it works; I am able to see the entities.
Only from the program am I unable to get the response. But in the same program the "PUT" operation works.
The same was happening to me. I upgraded the azure npm package from 0.6.1 to 0.6.7 and now it works; hope this helps.
I would look at the value of your partition key. There are some values Azure has issues with that aren't on the list of invalid characters. For example, before SDK 1.7 you could safely insert a % in a key, but if you queried for it specifically it wouldn't work. To test whether this is the problem, try running your query without the filter and make sure your row is returned.
After reading the MSDN mailing lists, I upgraded the azure npm package to the latest version, 0.6.7, and it works. It looks to be an issue with the azure package.