I can use the TableClient SDK (for Azure Tables) to create, update, retrieve, delete, etc.
But I'm not sure how to update multiple records in a single transaction.
I don't see any documentation on this anywhere (other than a mere mention of transactions as a possible design pattern when working with Azure Tables).
How do I do this?
Reference to the package: https://www.nuget.org/packages/Azure.Data.Tables/
Found the answer on my own.
Use TableTransactionAction.
Then call tableClient.SubmitTransaction(actionsList).
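To make that concrete for future readers, here is a minimal sketch; the table name, keys, and Status property are made up. Note that every entity in one transaction must share the same partition key, and a single transaction is limited to 100 actions:

```csharp
using System;
using System.Collections.Generic;
using Azure;
using Azure.Data.Tables;

string connectionString = "<storage-connection-string>"; // placeholder
var tableClient = new TableClient(connectionString, "orders"); // hypothetical table

// Every entity in a single transaction must have the same PartitionKey,
// and one transaction may contain at most 100 actions.
var actions = new List<TableTransactionAction>
{
    new TableTransactionAction(TableTransactionActionType.UpdateMerge,
        new TableEntity("orders-2023", "order-001") { ["Status"] = "Shipped" }),
    new TableTransactionAction(TableTransactionActionType.UpdateMerge,
        new TableEntity("orders-2023", "order-002") { ["Status"] = "Shipped" }),
};

// All actions are applied atomically: they all succeed or all fail together.
Response<IReadOnlyList<Response>> result = tableClient.SubmitTransaction(actions);
```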
Can I delete all documents in one partition of a collection in Cosmos DB with one query?
In my Cosmos DB database I have more than 100,000 notifications because of a bug in my code. I've now fixed the bug and need to delete these notifications. I'm trying to find the best way to delete them. Could you help me?
You need to write code or use a third-party tool like Cerebrata, which can delete all documents matching a query.
Disclaimer: I do not work for Cerebrata, nor do I get paid for recommending them; I am simply a happy user of their product.
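If you go the code route, here is a minimal sketch using the Cosmos DB .NET SDK v3; the database, container, and partition key value are hypothetical, and throttling/retry handling is omitted:

```csharp
using Microsoft.Azure.Cosmos;

var client = new CosmosClient("<connection-string>"); // placeholder
Container container = client.GetContainer("mydb", "notifications"); // hypothetical names

// Query only the ids, scoped to the single partition you want to empty.
var partitionKey = new PartitionKey("user-123"); // hypothetical partition key value
var query = new QueryDefinition("SELECT c.id FROM c");

using FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
    query, requestOptions: new QueryRequestOptions { PartitionKey = partitionKey });

while (iterator.HasMoreResults)
{
    foreach (var doc in await iterator.ReadNextAsync())
    {
        // There is no single-query bulk delete in the SQL API,
        // so this issues one delete request per document.
        await container.DeleteItemAsync<dynamic>((string)doc.id, partitionKey);
    }
}
```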
I'd like to know if Azure Search offers any way to trigger an Azure Function when a document gets indexed or inserted into Azure Search, or whether there are any other events I can take advantage of.
I'd like to avoid a timed event that continuously scans Azure Search for new documents.
If you're using an indexer, you can add a skillset with a WebApiSkill to invoke your Azure Function for each inserted document. However, there are no transactional consistency guarantees: a document for which your function is invoked is not guaranteed to end up successfully inserted into the index.
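For illustration, a sketch of wiring up such a skillset with the .NET SDK (Azure.Search.Documents); the service endpoint, function URL, field names, and skillset name are all placeholders, and the shape of your function's input/output is up to you:

```csharp
using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

var indexerClient = new SearchIndexerClient(
    new Uri("https://<search-service>.search.windows.net"),
    new AzureKeyCredential("<admin-key>"));

// The indexer calls this function for each batch of documents it processes.
var webApiSkill = new WebApiSkill(
    inputs: new[] { new InputFieldMappingEntry("text") { Source = "/document/content" } },
    outputs: new[] { new OutputFieldMappingEntry("status") { TargetName = "webhookStatus" } },
    uri: "https://<function-app>.azurewebsites.net/api/OnDocumentIndexed?code=<key>")
{
    HttpMethod = "POST",
    BatchSize = 1 // one call per document
};

var skillset = new SearchIndexerSkillset("notify-skillset", new[] { webApiSkill });
await indexerClient.CreateOrUpdateSkillsetAsync(skillset);
// Then reference "notify-skillset" from your indexer's skillset name.
```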
Unfortunately, there isn't a great way to do this today. Eugene's suggestion will work, but it isn't very efficient, and it does have the limitation that the document might not actually make it into the index if something goes wrong later in the indexer. If you're interested in a better-defined option for this scenario, please vote on the following UserVoice item about triggered events for Azure Cognitive Search: https://feedback.azure.com/forums/263029-azure-search/suggestions/10095111-azure-search-alerts
The MS Azure documentation doesn't say anything about this. The official bulk executor documentation covers only insert and update operations, not delete. There is a suggested server-side JavaScript stored procedure, which sounds very good, but it requires you to pass in the partition key value. That doesn't work if your documents are spread across millions of logical partitions.
This is a very simple business need. While migrating a huge volume of data into a SQL API Cosmos collection, if you insert some wrong data, there seems to be no way to delete it other than restoring to a previous state. I have been exploring this for a few hours now but couldn't find a solution. I even raised a case with MS support; they directed me to some .NET code, which does not look straightforward. What if someone doesn't know .NET?
Can't we easily bulk delete documents spread across several logical partitions in the Cosmos DB SQL API? It's very frustrating.
I hope you can provide some accurate details: how to achieve this, with simple, straightforward sample code and steps. I hope MS and Cosmos DB experts will share their views as well.
You have obviously already made some effort to find a solution, beyond the two scenarios below:
1. Bulk delete stored procedure: https://github.com/Azure/azure-cosmosdb-js-server/blob/master/samples/stored-procedures/bulkDelete.js
2. Bulk delete executor (a trimmed .NET sketch follows below):
.NET: https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started/blob/master/BulkDeleteSample/BulkDeleteSample/Program.cs
Java: https://github.com/Azure/azure-cosmosdb-bulkexecutor-java-getting-started/blob/master/samples/bulkexecutor-sample/src/main/java/com/microsoft/azure/cosmosdb/bulkexecutor/bulkdelete/BulkDeleter.java
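Here is what the .NET bulk delete sample boils down to, heavily trimmed; the endpoint, key, names, and the (partitionKey, id) tuples are placeholders, and the linked Program.cs has the full setup:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.CosmosDB.BulkExecutor;
using Microsoft.Azure.CosmosDB.BulkExecutor.BulkDelete;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

var client = new DocumentClient(new Uri("<endpoint>"), "<key>"); // placeholders
DocumentCollection collection = await client.ReadDocumentCollectionAsync(
    UriFactory.CreateDocumentCollectionUri("mydb", "mycoll")); // hypothetical names

var bulkExecutor = new BulkExecutor(client, collection);
await bulkExecutor.InitializeAsync();

// Each tuple is (partition key value, document id) for one document to delete,
// so documents from many different logical partitions can go in one call.
var pkIdTuples = new List<Tuple<string, string>>
{
    Tuple.Create("partition-1", "doc-1"),
    Tuple.Create("partition-2", "doc-2"),
};

BulkDeleteResponse response = await bulkExecutor.BulkDeleteAsync(pkIdTuples);
Console.WriteLine($"Deleted {response.NumberOfDocumentsDeleted} documents");
```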
So far, only the official solutions above are supported. Another workaround is Cosmos DB's TTL (time to live). I assume you have your own logic to judge which part of the data is correct and which part is wrong and should be deleted; you could set a TTL on the bad documents so that they are removed automatically as soon as they expire.
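A minimal sketch of that TTL workaround with the Cosmos DB .NET SDK v3; the c.isBad filter and all names here are made-up stand-ins for your own logic:

```csharp
using Microsoft.Azure.Cosmos;

var client = new CosmosClient("<connection-string>"); // placeholder
Container container = client.GetContainer("mydb", "mycoll"); // hypothetical names

// 1. TTL must be enabled on the container first (-1 = on, with no default expiry).
ContainerProperties props = (await container.ReadContainerAsync()).Resource;
props.DefaultTimeToLive = -1;
await container.ReplaceContainerAsync(props);

// 2. Stamp a short ttl on every document your own logic flags as wrong.
var query = new QueryDefinition("SELECT * FROM c WHERE c.isBad = true"); // hypothetical filter
using FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);
while (iterator.HasMoreResults)
{
    foreach (var doc in await iterator.ReadNextAsync())
    {
        doc.ttl = 60; // Cosmos DB deletes the document ~60 seconds after this write
        await container.ReplaceItemAsync(doc, (string)doc.id);
    }
}
```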
Has anyone tried this? It looks like a good solution in Java:
https://github.com/Azure/azure-cosmosdb-bulkexecutor-java-getting-started#bulk-delete-api
You could achieve this by writing a batch job that deletes documents overnight based on some date configuration. Here is an article on how to do it:
https://medium.com/@vaibhav.medavarapu/bulk-delete-documents-from-azure-cosmos-db-using-asp-net-core-8bc95dd20411
I've recently started working with Azure Cosmos DB and Azure Functions. While reading the documentation at https://learn.microsoft.com/pl-pl/azure/cosmos-db/change-feed-processor I found something that is quite hard for me to understand. Is it actually possible to share a change feed between many functions, so that they are all triggered by one and the same DB operation? What is the lease collection, and what problem does it solve? What is the purpose of a lease? I'd like a basic explanation of these terms. The link I provided says it is possible to share a lease between two functions, but then it also says that a lease object has an owner property.
Yes, you can have multiple functions triggered by the same change. However, this requires separate leases for each of them. The leases can live in the same lease collection, but they need different prefixes; there is a setting for that. In Azure Functions it's the leaseCollectionPrefix attribute property.
Leases are really just documents like any other in Cosmos DB. They are used to keep track of the consumers of the change feed processor and to save checkpoints, so the consumers know where to continue if your app restarts.
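To make the prefix setting concrete, here is a minimal sketch of two C# functions consuming the same container's change feed through one shared lease collection. All names are made up, and this assumes the v3 Functions Cosmos DB extension, where the attribute property is LeaseCollectionPrefix:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ChangeFeedConsumers
{
    // Consumer A: its checkpoints live in lease documents prefixed "fnA".
    [FunctionName("FunctionA")]
    public static void RunA(
        [CosmosDBTrigger("mydb", "items",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases",
            LeaseCollectionPrefix = "fnA",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
        ILogger log)
    {
        log.LogInformation($"FunctionA received {changes.Count} changes");
    }

    // Consumer B reads the very same change feed, isolated by a different prefix,
    // so both functions are triggered by the same write.
    [FunctionName("FunctionB")]
    public static void RunB(
        [CosmosDBTrigger("mydb", "items",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases",
            LeaseCollectionPrefix = "fnB")] IReadOnlyList<Document> changes,
        ILogger log)
    {
        log.LogInformation($"FunctionB received {changes.Count} changes");
    }
}
```

Because each prefix gets its own lease documents, the two functions checkpoint independently and both receive every change.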
I have been reading docs and articles on PouchDB/CouchDB/Cloudant, but I am not able to put this simple architecture together in my head. I need help!
So there are many users on the app. Each user has a separate database (which I read is the usual approach in a PouchDB/CouchDB/Cloudant setup).
Now let's focus on a single user. This user has some remote data already present on our server (CouchDB). He has 3 separate docs stored.
He accesses docs 1 and 2 from browser 1, and docs 2 and 3 from browser 2.
The content in both browsers must be in sync.
Should I be using the sync API of PouchDB? As far as I've read, it syncs the whole database. How can I use this API to sync only a subset of the central database? Is filtered replication the answer here?
Also, I don't want to push both docs in a single call. He can access docs as he needs them.
What is the correct approach to implementing this logic with Pouch/Couch databases? If you can explain with a little code, that would be great. I just need the basic ideas.
Is this kind of problem easily solvable in the upcoming releases of CouchDB 2.0 and pouchdb-find?
Thanks a lot!
If you take a look at the PouchDB replication documentation, you should see options.doc_ids. This parameter lets you set up replication for a specific set of document IDs only. In your scenario, that would solve your problem.