This is my scenario:
I want to have one single collection where I can insert and query documents using the SQL API and create vertices and edges using the Graph API, all in the same collection.
I believed that was possible, taking into account what I've read in the documentation.
My first try was using Microsoft.Azure.Graphs.dll
With this approach I was able to do CRUD operations with the SQL API and execute gremlin scripts against the collection.
It is important to note that the documents created with the SQL API insert command were used as vertices. Then I created edges using the Graph API connecting the documents created with the SQL API. This works as expected.
The only problem I had was that if the document contains an array property, the Graph API returned an error: Invalid cast from 'System.String' to 'Newtonsoft.Json.Linq.JObject'.
After investigating I was told that the Microsoft.Azure.Graphs.dll was deprecated and I should use Gremlin.Net instead.
I have not read anywhere that this assembly is deprecated, and even in the official documentation and examples I can see it being used.
Is it really deprecated? Should I not use it?
Then this was my second try:
I moved to Gremlin.Net.
First issue: I was connecting to a collection originally created for the SQL API. When I tried to connect with the Gremlin.Net client, it told me it could not connect to the server.
When I created another database and collection for the Graph API, I was able to connect.
Conclusion: it's not possible to use Gremlin.Net with a collection created with the SQL API.
Or is it possible to activate the Gremlin endpoint on a database created for the SQL API?
Using the new collection for the Graph API, I tried again to create documents with the SQL API and then connect using the Graph API. I see that in this case both endpoints are created: SQL API + Gremlin.
I had a few problems making this work. For example, I had to set the GraphSON writer and reader to version 2; otherwise I received a null reference exception.
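For reference, this is roughly how I ended up configuring the client (account, database, collection and key are placeholders):

using Gremlin.Net.Driver;
using Gremlin.Net.Structure.IO.GraphSON;

var gremlinServer = new GremlinServer(
    "myaccount.gremlin.cosmosdb.azure.com", 443, enableSsl: true,
    username: "/dbs/mydb/colls/mycoll", password: "<primary key>");

// GraphSON 2 is what Cosmos DB understands; leaving the defaults
// (GraphSON 3) is what produced the null reference exception
var client = new GremlinClient(gremlinServer,
    new GraphSON2Reader(), new GraphSON2Writer(),
    GremlinClient.GraphSON2MimeType);

var results = await client.SubmitAsync<dynamic>("g.V('aaa')");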
Second issue: I was able to create documents with the SQL API, but I had the same problem with the array property (example document: { "id": "aaa", "Prop": [ "1", "2" ] }).
I get the same error when I query with gremlin: g.V('aaa') => Invalid cast from 'System.String' to 'Newtonsoft.Json.Linq.JObject'.
Conclusion: my first issue with the previous library was not solved with the new one.
Third issue: the JSON returned when I query the edges or vertices with the SQL API is different from what I received when using Microsoft.Azure.Graphs.dll. It looks like the Cosmos DB engine handles the gremlin scripts differently depending on the assembly. Which JSON format should I expect?
Notes:
- Why do I need to create vertices using the SQL API?
Because I want to create properties with custom objects, and I'm not able to do that with the Graph API.
Example: { "Id": "aaa", "Custom": { "Id": 456, "Code": { "Id": 555, "Name": "Hi" } } }
- Why do I want to query the graph using the SQL API?
Because I want to access my custom properties.
Because I want to paginate using tokens. (The gremlin range() step is not performant: it traverses all the vertices/edges from 0 up to the last page I want. See the sketch below.)
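To illustrate the kind of token-based pagination I mean, here is a rough sketch with the SQL API (v3 .NET SDK; the container variable and the query are assumptions):

using Microsoft.Azure.Cosmos;

// page through documents with a continuation token instead of gremlin's
// range(), which re-traverses everything before the requested page
var query = new QueryDefinition("SELECT * FROM c");
FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
    query,
    continuationToken: previousToken,   // null for the first page
    requestOptions: new QueryRequestOptions { MaxItemCount = 20 });

FeedResponse<dynamic> page = await iterator.ReadNextAsync();
string nextToken = page.ContinuationToken;   // hand back to the caller for the next page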
Does anyone have any information about these situations?
Help will be appreciated.
I know this post is a bit older, but it looks like it is possible to use the Graph and SQL APIs together. The key is adopting Gremlin's underlying storage format when using the SQL API for querying/manipulating. Please check this post out:
https://vincentlauzon.com/2017/09/05/hacking-accessing-a-graph-in-cosmos-db-with-sql-documentdb-api/
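In short: a vertex created through the Gremlin endpoint is stored as a plain document in which every property value is wrapped in an array of value objects. Read back through the SQL API, a vertex looks roughly like this (the shape below is my paraphrase of that post; the property entry ids are generated GUIDs):

{
  "id": "aaa",
  "label": "person",
  "name": [ { "id": "<generated-guid>", "_value": "Fido" } ]
}

So documents inserted via the SQL API have to follow this same shape if the Graph API is expected to traverse them.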
So I'm trying to do something I expected to be simple - using a Logic App to insert a json object into Cosmos DB.
I've created a CosmosDB (based on Core SQL API) and created a container called Archive with the partition key /username (which is in my json object).
Now to Logic App.
First, I YOLO'ed the simple approach.
Which gave me error: "The input content is invalid because the required properties - 'id; ' - are missing"
But my json object doesn't have an id field. Maybe use the IsUpsert parameter? I don't feel like manipulating my input document to add a field called 'id'.
Which gave me error: "One of the specified inputs is invalid" Okay - feels even worse.
I now tried to use the other logic app connector (Not V2, but the original).
which gave me error: "The partition key supplied in x-ms-partitionkey header has fewer components than defined in the the collection."
I saw that this connector (unlike the V2 one) has a Partition Key Value parameter in the UI, which I added to pass the value of my username,
which gave me error "The input content is invalid because the required properties - 'id; ' - are missing".
At this point I thought, let me just give the bloody machine what it wants, and so I added "id" to my json object.
and yes that actually worked.
So my questions are:
With the Logic Apps connectors, are you simply unable to insert json objects into Cosmos DB without that magic field "id" in the message payload?
If the partition key is required, why is it not available from the V2 connector's parameter UI?
Thanks.
The v1 version of this connector doesn't have a partition key parameter because it's so old that Cosmos DB didn't have partitioned containers yet. You will want to use the v2 preview; I think it will go GA at some point soon because it's been in preview for a while now.
And yes, with Cosmos DB you can't insert anything without its id, so you will need to generate one in whatever calls your endpoint and pass it in the body of your http request.
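Alternatively, if you don't control the caller, you can stamp the id inside the Logic App itself, for example with a Compose action using an expression along these lines (assuming the trigger body holds your json object):

setProperty(triggerBody(), 'id', guid())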
Can I write an UPDATE statement for Azure Cosmos DB? The SQL API supports queries, but what about updates?
In particular, I am looking to update documents without having to retrieve the whole document. I have the ID for the document and I know the exact path I want to update within the document. For example, say that my document is
{
  "id": "9e576f8b-146f-4e7f-a68f-14ef816e0e50",
  "name": "Fido",
  "weight": 35,
  "breed": "pomeranian",
  "diet": {
    "scoops": 3,
    "timesPerDay": 2
  }
}
and I want to update diet.timesPerDay to 1 where the ID is "9e576f8b-146f-4e7f-a68f-14ef816e0e50". Can I do that using the Azure SQL API without completely replacing the document?
The Cosmos DB SQL language only supports the Select statement.
Partially updating a document isn't supported via the sql language or the SQL API.
People have made it clear that that's a feature they want so there is a highly upvoted request for it that can be found here: https://feedback.azure.com/forums/263030-azure-cosmos-db/suggestions/6693091-be-able-to-do-partial-updates-on-document
Microsoft has already started to work on it, so for now the only thing you can do is wait.
To elaborate more on this: partially updating a document in Cosmos DB on the server isn't possible; instead, you do whatever you need to do in memory. In order to literally UPDATE the document, you will have to retrieve the entire document from Cosmos DB, update the property/properties that you need to update, and then call the 'Replace' method in the Cosmos DB SDK to replace the document in question. Alternatively, you can use 'Upsert', which checks whether the document already exists and performs a 'Replace' if it does, or an 'Insert' if it doesn't.
NOTE : Make sure you have the latest version of the document before you commit an update!
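A minimal sketch of that read-modify-replace cycle (v3 .NET SDK; the container variable is assumed, and /breed is assumed to be the partition key):

using Microsoft.Azure.Cosmos;

// read the current document
ItemResponse<dynamic> response = await container.ReadItemAsync<dynamic>(
    "9e576f8b-146f-4e7f-a68f-14ef816e0e50", new PartitionKey("pomeranian"));

// modify it in memory
dynamic doc = response.Resource;
doc.diet.timesPerDay = 1;

// replace it; the ETag check makes the call fail if someone else updated
// the document in between, which covers the note above
await container.ReplaceItemAsync<dynamic>(
    doc, (string)doc.id, new PartitionKey("pomeranian"),
    new ItemRequestOptions { IfMatchEtag = response.ETag });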
UPDATE:
Cosmos DB support for partial updates is now generally available. You can use the Patch API for partial updates.
Refer: https://learn.microsoft.com/en-us/azure/cosmos-db/partial-document-update
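A hedged sketch of what that looks like (v3 .NET SDK; again assuming /breed as the partition key):

using Microsoft.Azure.Cosmos;

// update a single path without sending the whole document
await container.PatchItemAsync<dynamic>(
    id: "9e576f8b-146f-4e7f-a68f-14ef816e0e50",
    partitionKey: new PartitionKey("pomeranian"),
    patchOperations: new[] { PatchOperation.Set("/diet/timesPerDay", 1) });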
(Earlier note: as of 25.05.2021 this feature was still in private preview.)
https://azure.microsoft.com/en-us/updates/partial-document-update-for-azure-cosmos-db-in-private-preview/
Use Robo3T to update/delete multiple records with MongoDB commands (this applies to Cosmos DB accounts using the MongoDB API). Make sure your query includes the partition key.
db.getCollection('ListOfValues').updateMany({type:'PROPOSAL', key:"CITI"},{$set: {"key":"CITIBANK"}})
db.getCollection('ListOfValues').deleteMany({type:'PROPOSAL', key:"CITI"})
I am working on a Microsoft Azure-based application where I am using the Azure Cosmos DB trigger to get the change feed from the collections. I have nested records in a single collection, and from the UI users can modify the nested records. Now, my requirement is to get information about the record that was modified from the UI, but the Cosmos trigger is returning all the data from the collection, whereas I want to get just the single modified record from the nested collection. Any suggestions how this can be done, if feasible? Returning the whole collection takes too much time for the UI to load.
I published a tutorial in the Cosmos DB documentation. It uses CreateDocumentChangeFeedQuery; a sketch of how the query is set up is below, followed by the relevant fragment.
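For context, the query in the fragment below would be created along these lines (v2 SDK; database/collection names and the partition key range id are assumptions):

using Microsoft.Azure.Documents.Client;

var query = client.CreateDocumentChangeFeedQuery(
    UriFactory.CreateDocumentCollectionUri("mydb", "mycoll"),
    new ChangeFeedOptions
    {
        PartitionKeyRangeId = pkRangeId,   // from ReadPartitionKeyRangeFeedAsync
        StartFromBeginning = true
    });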
var results = await query.ExecuteNextAsync<dynamic>().ConfigureAwait(false);
if (results.Count > 0)
    docs.AddRange(results.Where(doc => doc.resourceType == resourceType));
This allows you to filter the change feed down to just the documents you care about.
I'm using the DocumentDB Data Migration Tool to migrate a DocumentDB database to a newly created DocumentDB database. Verifying the connection strings says they are ok.
It doesn't work: no data is transferred (= 0), but no failure is written in the log file either (Failed = 0).
Here is what I've done. I've tried many things, such as:
migrate / transfer a collection to a json file
migrate to a partitioned / non-partitioned documentdb db
for the target indexing policy I've taken the source indexing policy (json taken from Azure, documentdb collection settings).
...
Actually nothing's working, but I have no error logs. Maybe a problem with the documentdb version?
Thanx in advance for your help.
After debugging the solution from the tool's repo, I figured out the tool fails silently if you mistype the database's name, like I did.
DocumentDBClient just returns an empty async enumerator.
var database = await TryGetDatabase(databaseName, cancellation);
if (database == null)
    return EmptyAsyncEnumerator<IReadOnlyDictionary<string, object>>.Instance;
I can import from an Azure Cosmos DB DocumentDB API collection using DocumentDB Data Migration tool.
Besides, based on my test, if the name of the collection that we specify for the source DocumentDB does not exist, no data will be transferred and no error log is written.
Please make sure the source collection that you specified exists. And if possible, you can try to create a new collection, import data from this new collection, and check whether data can be transferred.
I've faced the same problem, and after some investigation I found that the internal document structure had changed. Therefore, after migration with the tool the documents are present but can't be found with the Data Explorer (although with the Query Explorer, using select *, they are visible).
I migrated the collection through the Mongo API using MongoChef.
@fguigui: To help troubleshoot this, could you please re-run the same data migration operation using the command line option? Just launch dt.exe from the Data Migration Tool's folder to see the required syntax. Then, after you launch it with the required parameters, please paste the output here and I'll take a look at what's broken.
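For reference, a typical command line looks roughly like this (endpoint, key and names are placeholders; note that the database name lives inside the connection string, which is exactly where a typo makes the tool silently find nothing):

dt.exe /s:DocumentDB /s.ConnectionString:"AccountEndpoint=https://myaccount.documents.azure.com:443/;AccountKey=<key>;Database=mydb" /s.Collection:mycollection /t:JsonFile /t.File:export.json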
I am currently working on an inventory management software in Node js and MongoDB. I am pretty new to MongoDB, having worked in Oracle and MySQL for most of my projects.
Is it possible to create a separate database schema for every client who uses my software, with each client having access only to his copy of the database schema and collections?
The equivalent of selecting data in an Oracle database would be
Select * from User1.table,
Select * from User2.table etc
Also, if it is possible, how would it be implemented using a Node.js MongoDB client like mongoose?
I looked at MongoDB documentation, but it talks mainly about adding users to a database for authorization.
I apologize if it seems like a silly question, but I'd appreciate it if someone could point me in the right direction for this.
Before starting to invest a lot of time in the development of your project, check out other possible approaches to the scenario that you are trying to build.
I did a quick search on SO and found some additional threads with similar scenarios:
MongoDB Database vs. Collection
MongoDB Web App - Database per User
Additional info about mongoose database creation
Whenever you call the connect method on the mongoose object, you are either connecting to an existing database or you are creating it in case it doesn't already exist.
You could have a function that takes a name argument and creates databases programmatically:
function createDatabase(name) {
  if (typeof name !== 'string') {
    return false;
  }
  // createConnection (rather than connect) gives you a separate
  // connection object per client database, so several can coexist
  return mongoose.createConnection('mongodb://localhost/' + name);
}
Also, be aware that a database will be created when you first insert a record in a collection of that particular database.
It is not sufficient to only connect to the database, you also have to insert a record.
As per my previous example, you could also pass a schema parameter to the function, tailored to each user's profile, and fire an insert statement after you connect to that database.