How to delete a document from a collection in Cosmos DB using Java?

How can I delete a document from a collection?
AsyncDocumentClient client = getDBClient();
RequestOptions options = new RequestOptions();
options.setPartitionKey(new PartitionKey("143003"));
client.deleteDocument(String.format("dbs/test-lin/colls/application/docs/%s", document.id()), options);
I am trying to delete a set of documents from the collection based on a condition. I have set the partition key, and I am using the read-write keys (so there is no permission issue).
This code executes without errors, yet the document is not deleted from the collection.
How can I fix this?

You should call subscribe(). The publisher does not do anything until someone subscribes.
client.deleteDocument(String.format("dbs/test-lin/colls/application/docs/%s", document.id()), options).subscribe();
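Note that a bare subscribe() is fire-and-forget: if the process exits before the network call completes, the delete can still be lost. A minimal sketch of one way to wait for the result, assuming the RxJava 1 Observable returned by the async SDK (getDBClient() and document are from the question's code):

// Same delete as above, but block the calling thread until it finishes.
AsyncDocumentClient client = getDBClient();
RequestOptions options = new RequestOptions();
options.setPartitionKey(new PartitionKey("143003"));
client.deleteDocument(
        String.format("dbs/test-lin/colls/application/docs/%s", document.id()),
        options)
    .toBlocking()   // force the lazy Observable to execute
    .single();      // waits for the single response and rethrows any failure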

Related

DocumentDB: Bulk-import stored procedure: insert documents with multiple partition keys in Cosmos DB

I am working on a bulk-insert stored procedure in Cosmos DB using the document client. The challenge I am facing is that I need to insert documents in bulk that may have different partition keys.
Is there any way to achieve this?
I am currently using the code below:
Uri uri = UriFactory.CreateStoredProcedureUri("test-db", "test-collection", "sp_bulk_insert");
RequestOptions options = new RequestOptions { PartitionKey = new PartitionKey("patient") };
var result = await _client.ExecuteStoredProcedureAsync<string>(uri, options, patientInfo, pageInfo);
return result;
However, the pageInfo object has the partition key "page", while the PartitionKey given in RequestOptions is "patient", the partition key of the patientInfo object.
When I try to execute the stored procedure, it gives the following error:
Requests originating from scripts cannot reference partition keys other than the one for which client request was submitted
Stored procedures are scoped to a single partition key, so this is not possible. There is also no reason to use stored procedures for bulk operations; you are better off using the .NET SDK v3 and leveraging its bulk support: https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/BulkSupport
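If you are working in Java, as in the question at the top of this page, the Java SDK v4 offers similar bulk support: each operation carries its own partition key, so documents with different partition keys can be submitted in one bulk stream. A rough sketch, assuming the azure-cosmos 4.x SDK (the endpoint, key, and the two objects are placeholders):

import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.models.CosmosBulkOperations;
import com.azure.cosmos.models.CosmosItemOperation;
import com.azure.cosmos.models.PartitionKey;
import reactor.core.publisher.Flux;

// Build one bulk operation per document; each may target a different partition key.
CosmosAsyncClient client = new CosmosClientBuilder()
    .endpoint(endpoint)
    .key(key)
    .buildAsyncClient();
CosmosAsyncContainer container = client.getDatabase("test-db").getContainer("test-collection");
Flux<CosmosItemOperation> ops = Flux.just(
    CosmosBulkOperations.getCreateItemOperation(patientInfo, new PartitionKey("patient")),
    CosmosBulkOperations.getCreateItemOperation(pageInfo, new PartitionKey("page")));
container.executeBulkOperations(ops)
    .doOnNext(r -> System.out.println(r.getResponse().getStatusCode()))
    .blockLast(); // wait until every operation has completed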
Thanks.

How to show unique keys on Cosmos DB container?

This link implies that unique keys can be seen on a Cosmos DB container by looking at its settings. However, I can't seem to find them using either the portal or Storage Explorer. How can you view the unique keys on an existing Cosmos DB container? I have a document that fails to load due to a key violation, which should be impossible, so I need to confirm what the keys are.
A slightly easier way to view your Cosmos DB unique keys is to look at the ARM template for your resource.
On your Cosmos DB account, click Settings > Export template, let the template be generated, and view it online once complete. You will find the keys under the "uniqueKeyPolicy" label.
There, the unique key policy should be visible like this:
"uniqueKeyPolicy": {
"uniqueKeys": [
{
"paths": [
"/name",
"/country"
]
},
{
"paths": [
"/users/title"
]
}
]
}
However, like you, I could not see it in the portal; it may be a bug.
As a workaround, you could use the Cosmos DB SDK to read the unique key policy. See my Java sample code:
// Read the collection and print the paths of each unique key in its policy.
ResourceResponse<DocumentCollection> response1 = documentClient.readCollection("dbs/db/colls/test", null);
DocumentCollection coll = response1.getResource();
UniqueKeyPolicy uniqueKeyPolicy = coll.getUniqueKeyPolicy();
Collection<UniqueKey> uniqueKeyCollections = uniqueKeyPolicy.getUniqueKeys();
for (UniqueKey uniqueKey : uniqueKeyCollections) {
    System.out.println(uniqueKey.getPaths());
}
Here is the basic code that worked for me. It writes the collection out in JSON format; I think this is similar to what you see in the portal, but it skips or omits the uniqueKeyPolicy information.
As a side note, I think I found a bug or odd behavior: inserting a new document can throw a unique index constraint violation, but updates do not.
// Read the endpoint, key, and target names from app settings.
this.EndpointUrl = ConfigurationManager.AppSettings["EndpointUrl"];
this.PrimaryKey = ConfigurationManager.AppSettings["PrimaryKey"];
string dbname = ConfigurationManager.AppSettings["dbname"];
string containername = ConfigurationManager.AppSettings["containername"];
this.client = new DocumentClient(new Uri(EndpointUrl), PrimaryKey);
// Printing the collection serializes it as JSON, minus the uniqueKeyPolicy section.
DocumentCollection collection = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri(dbname, containername));
Console.WriteLine("\n4. Found Collection \n{0}\n", collection);
Support for showing the unique key policy in collection properties will be added soon. Meanwhile, you can use DocumentDBStudio to see the unique keys on a collection. Once a unique key policy is set, it cannot be modified.
Regarding the odd behavior: can you please share a full, isolated repro and explain the expected and actual behavior?
You can also view the ARM template in the Azure portal; as the accepted answer says, you will find the unique keys under the "uniqueKeyPolicy" label.

Cosmos DB randomly succeeds and fails on the same query, saying it is cross-partition when it isn't

I have a collection with the partition key "flightConversationId".
I am doing a very simple query, BY THE PARTITION KEY FIELD:
SELECT * from root WHERE root.flightConversationId="b36d13c0-cbec-11e7-a4ad-8fcedf370f98"
When running this query via the Node.js SDK, it works one second and fails the next with the error:
Cross partition query is required but disabled. Please set x-ms-documentdb-query-enablecrosspartition to true, specify x-ms-documentdb-partitionkey, or revise your query to avoid this exception.
I realize I could enable cross-partition querying, but I do not need cross-partition queries. What is going on?
This situation seemed to resolve itself over time.
My theory is that when we deleted a collection and recreated it with a new partition key, it took a long time for all remnants of the original collection to be fully deleted from the cloud, and some requests were going to the "old" collection, which had the same name as the "new" one.
You have to explicitly scope the query to a partition by providing a FeedOptions or RequestOptions object with a partitionKey property. Using the partition key in your WHERE clause isn't enough without that explicit scope. This is C#, but it should be the same object model:
https://learn.microsoft.com/en-us/azure/cosmos-db/documentdb-partition-data
Document result = await client.ReadDocumentAsync(
UriFactory.CreateDocumentUri("db", "coll", "XMS-001-FE24C"),
new RequestOptions { PartitionKey = new PartitionKey("XMS-0001") });
JSDoc:
http://azure.github.io/azure-documentdb-node/global.html#RequestOptions
http://azure.github.io/azure-documentdb-node/global.html#FeedOptions
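For parity with the main question on this page, here is roughly how the same explicit scope looks in the Java async SDK used above. A sketch, assuming the rx AsyncDocumentClient, with a placeholder collection link:

// Scope the query to one partition explicitly; the WHERE clause alone is not enough.
FeedOptions queryOptions = new FeedOptions();
queryOptions.setPartitionKey(new PartitionKey("b36d13c0-cbec-11e7-a4ad-8fcedf370f98"));
client.queryDocuments(
        "dbs/db/colls/coll",
        "SELECT * FROM root r WHERE r.flightConversationId = 'b36d13c0-cbec-11e7-a4ad-8fcedf370f98'",
        queryOptions)
    .subscribe(page -> page.getResults().forEach(doc -> System.out.println(doc.getId())));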

CouchDB conflicts when supplying my own IDs during large inserts using _bulk_docs

The same code works fine when letting CouchDB auto-generate UUIDs. I am starting off with a completely empty database, yet I keep getting this:
error: conflict
reason: Document update conflict
To reiterate, I am posting new documents to an empty database, so I am not sure how I can get update conflicts when nothing is being updated. Even stranger, the conflicting documents still show up in the DB with only a single revision, but overall there are missing records.
I am trying to insert about 38,000 records with _bulk_docs in batches of 100. I am getting these records (100 at a time) from a RETS server; each record already has a unique ID that I want to use for the CouchDB _id instead of an auto-generated UUID. I am using a promise-based library to get the records and axios to insert them into CouchDB. After getting each batch of 100, I run this code to add an _id to each record before inserting:
let batch = [];
batch = records.results.map((listing) => {
let temp = listing;
temp._id = listing.ListingKey;
return temp;
});
Then insert:
axios.post('http://127.0.0.1:5984/rets_store/_bulk_docs', { docs: batch })
This is all inside a function that I call recursively.
I know this probably won't be enough to see the issue, but I thought I'd start here. I know for sure it has something to do with my map() and adding _id = ListingKey.
Thanks!

UserInfoProvider.DeleteUser() vs DeleteData(whereCondition)

I know that DeleteUser() will run procedures to delete all relationships, etc. Will the private internal DeleteData() with a where condition also delete all relationships, or will it just try to delete the main record from the table? If any relational data exists, will it throw an error?
If you call UserInfoProvider.DeleteData(), it won't delete the related data; it just executes the object's deletion SQL query. It won't even look for the cms.user.removedependencies query.
On the other hand, calling DeleteData() on an info object will cause the related data to be deleted.
If you need to bulk-delete users, first retrieve them from the database using an object query (make sure you restrict the columns; UserID should be enough), then iterate through the collection calling Delete() on each of them:
foreach (var user in UserInfoProvider.GetUsers().Where("UserEnabled=0").Columns("UserID").TypedResult.Items)
{
user.Delete();
}
