I have a cloud function which listens for document updates within a collection. If a special field is updated, I want to move this document to another collection.
Can I simply update the location of the document?
Or do I have to write a transaction which contains a get(), write() and delete() of the document, or is there a better solution?
This seems like a common use case, but I can't find any documentation for it.
Firestore currently doesn't offer a "move" operation. You'll have to do what you proposed in your question: copy the document to the new collection and delete the original yourself, inside a transaction.
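For example, inside a Cloud Function you could run the copy and the delete with the Admin SDK. A minimal sketch, assuming placeholder collection names and a hypothetical moveDocument() helper:

const admin = require('firebase-admin');
const db = admin.firestore();

// "Move" a document by copying it and deleting the original in one
// transaction, so the delete only happens if the copy succeeds.
async function moveDocument(docId) {
    const srcRef = db.collection('sourceCollection').doc(docId);  // placeholder
    const destRef = db.collection('targetCollection').doc(docId); // placeholder
    await db.runTransaction(async (t) => {
        const snap = await t.get(srcRef);
        if (!snap.exists) throw new Error('source document missing');
        t.set(destRef, snap.data()); // copy to the new collection
        t.delete(srcRef);            // remove the original
    });
}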
Firebase Firestore does not support moving data; it's better to copy the document to the new location and then delete the existing one.
If your data is transactional in nature, make sure you execute the operations in a transaction block (delete only when the copy has succeeded).
You can also use .validate() rules to ensure the delete happens only if the data exists in the copied node.
I have a scenario where we have items saved in one DocumentDB collection, e.g. under /items/{documentId}. The document looks similar to:
{
    id: [guid],
    rating: 5,
    numReviews: 1
}
I have a second document collection under /user-reviews/{userIdAsPartitionKey}/{documentId}
The document will look like so:
{
    id: [guid],
    itemId: [guidFromItemsCollection],
    userId: [userId],
    rating: 4
}
Upon upload of this document, I want a trigger to fire which takes this new user rating document as input, retrieves the relevant document from the items collection, and transforms the items document based on the new data.
The crux of my problem is: how can I trigger off a document upsert, and how can I retrieve and modify a document from another collection, all within a Function App?
I've investigated the following links, which tease at the idea of triggers being possible on Cosmos DB, but the table suggests we can't hook up a trigger to a DocumentDB upload.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-documentdb
If it's not possible to set this up directly, my assumption is that I should have a middle-tier service handling the upsert (currently using DocumentClient from the client side), which can kick off this processing itself, but I like the simplicity of serverless Function Apps if possible.
Operations are scoped to a collection. You cannot trigger an operation in Collection B from an event in Collection A.
You'd either need to implement this in your app tier (as you suggested) or... store both types of documents in the same collection (a common scenario). You might need to add some type of doctype property to help filter your queries, but since documents are schema-free, you can store heterogeneous documents in the same collection.
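For example, the discriminator could be as simple as this (the doctype property and its values are illustrative, not a DocumentDB convention):

// Two heterogeneous documents in the same collection, distinguished by a
// doctype property that queries can filter on:
const item = { id: 'abc', doctype: 'item', rating: 5, numReviews: 1 };
const review = { id: 'def', doctype: 'review', itemId: 'abc', rating: 4 };
// e.g. SELECT * FROM c WHERE c.doctype = 'review'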
Also: you mentioned an Azure Function. Within a function, there's nothing stopping you from making multiple database calls (e.g. when something happens in Collection A and causes your function to be called, your function can perform an operation in Collection B). Just note that this won't be transactional.
I know this is a pretty old question.
The Change Feed was built for this exact scenario.
In today's Azure Portal, there's even a menu option in the Cosmos DB blade that lets you create a Function triggered by changes in one collection, so you can detect and react to changes, e.g. by creating a document in another collection.
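For illustration, such a Change Feed-triggered Function might look roughly like this; the binding setup and collection names are assumptions, not taken from the question:

// index.js -- assumes a function.json with a cosmosDBTrigger bound to the
// "user-reviews" collection; "documents" is the batch of created/updated
// docs delivered by the Change Feed.
module.exports = async function (context, documents) {
    for (const review of documents) {
        context.log(`review ${review.id} touches item ${review.itemId}`);
        // Look up the matching item (via an input binding or the SDK),
        // recompute rating/numReviews, and write it back to "items".
        // As noted above, the two operations are not transactional.
    }
};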
I have a database called "development-records" that has a MapReduce view with a "dbcopy" declaration, which copies the view output into a new database called "development-chained".
When we update the view in "development-records", we follow the usual steps:
1. Create a duplicate copy of the design document that we want to change, for example by adding _OLD to its name: _design/fetch_OLD.
2. Put the new or 'incoming' design document into the database, using a name with the suffix _NEW: _design/fetch_NEW.
3. Query the fetch_NEW view, to ensure that it starts to build.
4. Poll the _active_tasks endpoint and wait until the index has finished building (see the sketch after this list).
5. Put a duplicate copy of the new design document into _design/fetch.
6. Delete the design document _design/fetch_NEW.
7. Delete the design document _design/fetch_OLD.
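Step 4, for example, can be scripted. A rough sketch using Node 18+'s built-in fetch, run as an ES module; the server URL and credentials are placeholders:

// Poll _active_tasks until no indexer task for the given design doc remains.
const server = 'https://myaccount.cloudant.com'; // placeholder
const headers = {
    Authorization: 'Basic ' + Buffer.from('USER:PASS').toString('base64'),
};

async function waitForIndex(designDoc) {
    for (;;) {
        const res = await fetch(`${server}/_active_tasks`, { headers });
        const tasks = await res.json();
        const building = tasks.some(
            (t) => t.type === 'indexer' && t.design_document === designDoc
        );
        if (!building) return;
        await new Promise((r) => setTimeout(r, 5000)); // re-check every 5s
    }
}

await waitForIndex('_design/fetch_NEW');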
The problem is that the documents in the dbcopy target database "development-chained" don't seem to be updated -- all the old records stay. Is there a way to trigger the dbcopy database to perform the MapReduce again?
Unfortunately, according to the official Cloudant documentation, "The dbcopy feature can cause problems under some circumstances." Use of this feature is strongly discouraged, and it has since been removed from the documentation. I hope knowing that helps a little. The new documentation is hard to find.
I am working on a Node.js app, and I've been searching for a way around using the Model.save() function, because I want to save many documents at the same time and it would be a waste of network and processing to do it one by one.
I found a way to bulk insert. However, my model has two properties that make a document unique: an ID and a HASH (I am getting this info from an API, so I believe I need both to make a document unique). So I want an already existing document to be updated instead of a new one inserted.
Is there any way to do that? I was reading about making concurrent calls to save the objects using Q, but I still think this would generate unwanted load on the Mongo server, wouldn't it? Does Mongo or Mongoose have a method to bulk insert or update like it does with insert?
Thanks in advance
I think you are looking for the Bulk.find(<query>).upsert().update(<update>) function.
You can use it this way:
var bulk = db.yourCollection.initializeUnorderedBulkOp();
for (<your for statement>) {
    bulk.find({ID: <your id>, HASH: <your hash>}).upsert().update({<your update fields>});
}
bulk.execute(<your callback>);
For each document in your loop, it will look for a document matching the {ID: <your id>, HASH: <your hash>} criteria. Then:
If it finds one, it will update that document using {<your update fields>}
Otherwise, it will create a new document
As you wanted, it will not make a round trip to the Mongo server on each iteration of the for loop. Instead, a single call is made on the bulk.execute() line.
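For example, through Mongoose you can reach the driver's bulk API via Model.collection. A rough sketch, where the Item model and docsFromApi array are assumptions (and the exact execute() signature varies with driver version):

// Bulk upsert keyed on ID + HASH through a Mongoose model's underlying
// driver collection; one round trip happens at execute() time.
const bulk = Item.collection.initializeUnorderedBulkOp();

for (const doc of docsFromApi) {
    bulk.find({ ID: doc.ID, HASH: doc.HASH })
        .upsert()
        .update({ $set: doc }); // update operators are required here
}

bulk.execute(function (err, result) {
    if (err) return console.error(err);
    console.log('upserted %d, modified %d', result.nUpserted, result.nModified);
});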
So I've been trying to move data from one database to another. I've already moved the documents, but I need to clear the ones I've already moved from the old database. I've been using ektorp's executeBulk to perform bulk operations, but for some reason I keep getting document update conflicts when I try to delete in bulk by inserting _deleted.
I might be doing it wrong; here is what I did:
1. Fetch the documents in bulk, with include docs. (For some reason, this doesn't work with just the id and rev.)
2. Add the _deleted field to each document.
3. Post them using executeBulk.
It works for some documents, but I keep getting document update conflicts for others.
Any solutions/suggestions please?
This is the preferred way of deleting docs in bulk:
List<Object> bulkDocs = ...
MyClass toBeDeleted = ...
// BulkDeleteDocument wraps the doc's current _id and _rev together with
// _deleted: true, so CouchDB sees a proper deletion stub.
bulkDocs.add(BulkDeleteDocument.of(toBeDeleted));
db.executeBulk(bulkDocs);
If you only need a way to delete/update docs in bulk, and you don't necessarily need to implement it in your own software, you can use the great couchapp at:
https://github.com/harthur/costco
You need to upload it to your own server with a couchapp deployment tool, and use a function like
function(doc) {
if(doc.istodelete) // replace this or remove to delete all docs
return null;
}
Read the instructions and examples in the repository.
Is it possible to have CouchDB update or change fields on the fly when you create/update a doc? For example, in the design document's validate_doc_update:
function(newDoc, oldDoc, userCtx) {
}
Within that function I can throw errors like:
if(!newDoc.user_email || !newDoc.user_name || !newDoc.user_password){
    throw({forbidden : 'all fields required'});
}
My question is: how would I reassign a field? I tried this:
newDoc.user_password = "changed";
with "changed" being some new value or a hashed value. My overall goal is to build a user registration/login system with Node and CouchDB, and I have not found very good examples.
The validate_doc_update function cannot have any side effects and cannot change the document before storage. It only has the power to block an update or to let it through. This is important, because the function is not only called when a user requests an update, but also when changes are replicated from one CouchDB instance to another. So the function can be called multiple times for one document.
However, CouchDB now supports Document Update Handlers that can modify a document or even build it from scratch. These can be used to convert non-JSON input data into usable documents. You can find some documentation in the CouchDB Wiki.
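For illustration, a minimal update handler sketch; the handler name, URL, and field are assumptions based on the question:

// In a design document, registered under "updates" (e.g. as "set_password")
// and called as: POST /db/_design/app/_update/set_password/<docId>
function (doc, req) {
    if (!doc) {
        // No document with that id; refuse instead of creating one here.
        return [null, 'missing document'];
    }
    var body = JSON.parse(req.body);
    // A real implementation would hash the value before storing it.
    doc.user_password = body.user_password;
    return [doc, 'password updated']; // [doc to save, response body]
}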
Before you build your own user registration/login system, I'd suggest you look into the built-in CouchDB security features (if you haven't - some information here). They might not be enough for you (e.g. if you need email validation or something similar), but maybe you can build on them.