Firestore - Cloud functions - Get uid of user who deleted a document - node.js

Within an onWrite Cloud Function, I am trying to get the UID of the authenticated user who deleted a document in Firestore (not the Realtime Database, which doesn't have this issue). The reason is... I am trying to create a log of all actions performed on documents in a collection. I have to use Cloud Functions because the client could hypothetically create/edit/delete a document and then prevent the corresponding log entry from being sent.
I have seen in other Stack Overflow questions such as:
Firestore - Cloud Functions - Get uid
Getting the user id from a Firestore Trigger in Cloud Functions for Firebase?
that Firestore will not include any auth data in the onWrite function, and that the accepted workaround is to have fields like updated_by, created_by, created_at, updated_at in the document being created/updated, verified using Firebase security rules. This works well for documents being inserted or updated, but deleted documents in onWrite Cloud Functions only have change.before data and no change.after data, so you have no way to see who deleted the document; at best you can see who last updated it before deletion.
I am in the middle of trying out some workarounds, as follows (but they have serious drawbacks):
Sending an update to a document right before it is to be deleted. Issues -> might have timing issues, debounce issues, and requires messy permissions to ensure that a document is only deleted if it has the preceding update.
Updating the document with a field that tags it for deletion and watching for this tag in a Cloud Function that then does the deleting (a sketch of this approach appears below). Issues -> leads to a very noticeable lag before the item is deleted.
Does anyone have a better way of doing something like this? Thanks!
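For reference, a minimal sketch of that second workaround, assuming a docs collection, an audit_log collection, and client-written pendingDelete / pendingDeleteBy fields that security rules verify against request.auth.uid (all of these names are illustrative):

    const functions = require('firebase-functions');
    const admin = require('firebase-admin');
    admin.initializeApp();

    // Fires when a document in "docs" is updated; if the client has tagged it
    // for deletion, record who asked for the delete, then remove the document.
    exports.handleTaggedDelete = functions.firestore
      .document('docs/{docId}')
      .onUpdate(async (change, context) => {
        const after = change.after.data();
        if (!after.pendingDelete) {
          return null;
        }
        // Security rules are assumed to enforce pendingDeleteBy == request.auth.uid.
        await admin.firestore().collection('audit_log').add({
          action: 'delete',
          docId: context.params.docId,
          uid: after.pendingDeleteBy,
          at: admin.firestore.FieldValue.serverTimestamp(),
        });
        return change.after.ref.delete();
      });

The lag mentioned in the question comes from the round trip through the Cloud Function; the document only disappears once the function has run.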

Don't delete the document at all. Just add a field called "deleted", set it to true, and add another field with the UID of the user that deleted it. Use these new fields in your queries to decide if you want to deal with deleted documents or not for any given query.
Use a different document in a separate collection that records deletions. Query that collection whenever you need to know if a document has been deleted. Or create a different record in a different database that marks the deletion.
There are really no other options. Either use the existing document or create a new document to record the change. Existing documents are the only things that can be queried - you can't query data that doesn't exist in Firestore.
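A minimal client-side sketch of the soft-delete approach, using the Firebase web SDK; the field and collection names are only examples, and security rules would need to verify that deletedBy matches request.auth.uid:

    const db = firebase.firestore();

    // Mark the document as deleted instead of removing it.
    function softDelete(docRef, uid) {
      return docRef.update({
        deleted: true,
        deletedBy: uid,
        deletedAt: firebase.firestore.FieldValue.serverTimestamp(),
      });
    }

    // Queries then exclude (or include) soft-deleted documents explicitly.
    const visibleDocs = db.collection('docs').where('deleted', '==', false);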

Related

Locking documents in firestore or don't allow two users edit a document at the same time

Stack: Node.js, Express.js, Firebase DB, Vue.js
Question:
How do I lock a Firestore document? I want to prevent two users from editing the same document at the same time on the front end.
Example: a user fetches a document by some id and starts editing; the editing takes about 10 minutes because there are a lot of inputs, and then a second user comes along and tries to edit the same document by id. How do I prevent that?
My solution: create a collection that stores the id of any document currently being edited. Whenever a user tries to edit a document, check that its id does not exist in that collection, and on save remove the id from the collection.
Is my solution good?
Maybe there are other solutions...
There is no pessimistic locking built into Firestore, but I'd typically implement this by adding a field to the document that is being locked: something like currentEditor, with the UID of the current editor as its value.
To manipulate this field you'll want to use a transaction to prevent users from overwriting each other's data, and you'll then want to use server-side security rules to enforce this.
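A minimal sketch of that idea with the Firebase web SDK, assuming a docs collection and a currentEditor field (both names are illustrative); the transaction fails if someone else already holds the lock:

    const db = firebase.firestore();

    // Claim the edit lock inside a transaction so two clients can't both win.
    async function acquireLock(docId, uid) {
      const ref = db.collection('docs').doc(docId);
      return db.runTransaction(async (tx) => {
        const snap = await tx.get(ref);
        const current = snap.get('currentEditor');
        if (current && current !== uid) {
          throw new Error('Document is currently being edited by someone else');
        }
        tx.update(ref, {
          currentEditor: uid,
          lockedAt: firebase.firestore.FieldValue.serverTimestamp(),
        });
      });
    }

    // Clear the lock on save or cancel.
    function releaseLock(docId) {
      return db.collection('docs').doc(docId).update({
        currentEditor: firebase.firestore.FieldValue.delete(),
      });
    }

Security rules would additionally have to reject writes to a locked document from anyone other than its currentEditor, and you'd want some expiry based on the lockedAt timestamp so an abandoned edit doesn't lock the document forever.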

How to do distributed transaction cordination around SQL API and GraphDB in CosmosDB?

I have a Customer container with items representing a single customer in the SQL API (DocumentDB) in Cosmos DB. I also have a Gremlin API (GraphDB) with the customers' shopping cart data. Both of these are temporary/transient: the customer can choose to clear the shopping cart, which deletes the temporary customer and the shopping cart data.
Currently I make separate calls, one to the SQL API (DocumentDB) and one to the Gremlin API (GraphDB), which works, but I want to do both as a single transaction (ACID principle). To delete a customer, I call the Gremlin API to delete the shopping cart data, then call the SQL API to delete the customer. But if deleting the customer with the SQL API (the second step) fails, I want to roll back the changes made in the first call, i.e. restore the shopping cart data that was deleted. In the T-SQL world, this is done with a commit and rollback.
How can I achieve distributed transaction coordination around the delete operations of the customer and shoppingcart data?
Since you don't have transactions in Cosmos DB across different collections (only within a single partition of one container), this won't be directly possible.
The next best thing could be to use the Change Feed. It gets triggered whenever an item is inserted or changed. But: it does not get triggered on deletes. So you need another little workaround, namely "soft deletes": you add a flag to the document ("to-be-deleted" etc.) and set its TTL to something very soon. That does trigger the change feed, and from there you can delete the item in the other collection.
Is all that better than what you currently have? Honestly, not really if you ask me.
Update: to add to the point regarding commit/rollback: this also does not exist in Cosmos DB. One possible workaround that comes to mind:
Update the elements in the shoppingcart collection: set a to-be-deleted flag to true and set the TTL for those elements to something like now() + 5 minutes.
Delete the element in the customer collection. If this works, all good.
If the deletion failed, update the shoppingcart elements again: remove the to-be-deleted flag and remove the TTL so Cosmos DB won't automatically delete them.
Of course, you also need to update any queries you run against your shoppingcart to exclude any elements with the deletion flag in place.
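A rough sketch of steps 1 and 3 with the @azure/cosmos Node.js SDK; the database, container, and field names are assumptions, and per-item TTL only works if TTL is enabled on the container:

    const { CosmosClient } = require('@azure/cosmos');

    const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING);
    const cart = client.database('shop').container('shoppingcart');

    // Step 1: flag the cart item and let Cosmos DB expire it automatically.
    async function markCartItemForDeletion(id, partitionKey) {
      const { resource: doc } = await cart.item(id, partitionKey).read();
      doc.toBeDeleted = true;
      doc.ttl = 300; // seconds until Cosmos DB removes the item on its own
      await cart.item(id, partitionKey).replace(doc);
    }

    // Step 3: the customer delete failed, so undo the flag and the TTL.
    async function undoCartItemDeletion(id, partitionKey) {
      const { resource: doc } = await cart.item(id, partitionKey).read();
      delete doc.toBeDeleted;
      delete doc.ttl;
      await cart.item(id, partitionKey).replace(doc);
    }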

Cloudant/CouchDB triggers an event by deleting a document

I am trying "update handlers" to catch create/update/delete events in IBM cloudant. It works when a document is created or updated, but not deleted. Is there any other way I can catch an event that a document is deleted and then create a document in another database to record this event? Thank you.
If you want to monitor a CouchDB/Cloudant database for changes, take a look at the /_changes feed: http://docs.couchdb.org/en/2.0.0/api/database/changes.html. You could implement an app that continuously monitors the feed and "logs" the desired information whenever a document is inserted, updated or deleted. For some programming languages there are libraries (such as https://www.npmjs.com/package/follow for Node.js) that make it easy to manage/process the feed.
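A small sketch using the follow and nano packages; the database names and URL are placeholders. Note that a deleted change carries no document body, only the id and sequence number:

    const follow = require('follow');
    const nano = require('nano')('http://localhost:5984');
    const auditDb = nano.db.use('audit_log');

    // Watch the source database and record every deletion in audit_log.
    follow({ db: 'http://localhost:5984/mydb', since: 'now' }, (err, change) => {
      if (err) {
        return console.error(err);
      }
      if (change.deleted) {
        auditDb.insert({
          event: 'deleted',
          docId: change.id,
          seq: change.seq,
          at: new Date().toISOString(),
        });
      }
    });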

Purging documents in Couchbase Lite

I have a mobile app using Couchbase Lite. When the user logs out, I want to remove some of the documents on the device: the user-specific documents. I do not want to remove all of the documents. Documents have a purgeDocument() method that I thought I could call on those user-specific documents.
The problem is that the purged documents are not re-synced down to the device if the user logs back in and a pull replication is run.
Based on the little I know of the CouchDB sync protocol, it makes sense that those are not re-synced down, because there are no newer sequence updates on those user-specific documents to trigger a re-sync.
How should I approach this problem?
Possibilities
Delete the whole database (including common documents) and lose performance.
Somehow reset the last sequence for the replicator and hope the replicator does not transfer the already-downloaded docs over the wire. (Probably would screw up CBL)
Have separate databases, one that stores the user-specific docs and one that contains common docs. Databases can have filtered replicators (by channel), so it would be feasible to partition the incoming data into separate databases. The problem would be the seamless reference loading between documents of differing databases when using CBLModel object wrappers.
As I understand from the official documentation, in the subsection Purging documents, you are not retrieving the document again simply because it has not been modified/updated (in short, its rev is the same) on the server side.
You can try creating a dummy document again with the same type and, for example, username (or whatever you are using to identify the user's configuration) when the user logs back into your app, so that you trigger the pull replication from the server. You will probably get a conflict, which can easily be resolved by taking the revision from the server.
I hope this idea helps a little.
UPDATE AFTER COMMENT
The idea is to store the id and type of the user's documents you're going to purge somewhere. That way you can create a new dummy document with those two fields when the user logs in again. Perhaps this new dummy document triggers the pull replication.
NOTE: I haven't tried this method. I am just guessing what it might be a work around to your problem.
I would suggest that your backend modify the selected documents - this could be just a timestamp update - upon user login, which will post the new revisions to the device.
You can keep purging the documents when the user logs in.
To solve the problem of re-syncing specific documents, I think the easiest way is to use filtered replication where the filter is the document ID.
These document IDs can be created in a way that can be derived, for example something like UserDocument::.
Now, when the user logs in, you can start a one-shot replication with the document IDs as the filter. This can only be done as a one-shot. When this one-shot replication finishes, you can start replication again by changing the settings of the replication (changing the filter/channel).
The following URL from Couchbase explains filtering replication by document IDs:
https://developer.couchbase.com/documentation/mobile/1.4/guides/couchbase-lite/native-api/replication/index.html#filtering-by-document-ids
Try a push after purging the document with Couchbase Lite, which allows you to pull the document from the server again at a later point.

Efficient way to read+write data from CouchDB

I am implementing an application that includes a user who logs in to access a document stored in a hosted CouchDB store. The user provides their credentials to the app, and once the app authenticates them, the app then has two jobs:
Get the Document ID associated with that user's data
Update the "lastOpened" value stored in that document
I am having to do those two things in a way that seems rather inefficient: I read a View which maps the app's user identifier (their email address in this case) to their Document ID. Once I have the Document ID (and have added it to the session for later use) I then have to request the Document, uptick the "lastOpened" value, then save the Document back to the store.
That looks like 3 trips to the database to me: 1. Get the Document ID from the View, 2. Get the Document, using that ID, 3. Save the updated Document.
Is there any way to reduce that work to fewer database trips?
If you can change the document structure, you could use the user's login name as the document ID. That way, you don't have to use a view. Using update handlers, you could even do all the work in one request.
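A rough sketch of what that could look like with nano; the database name, design document, and handler name are all made up for illustration:

    const nano = require('nano')('http://localhost:5984');
    const db = nano.db.use('users');

    // One-time setup: a design document whose update handler bumps lastOpened.
    // The handler runs inside CouchDB, so the whole read-modify-write is a
    // single HTTP request from the app's point of view.
    const designDoc = {
      _id: '_design/users',
      updates: {
        touch:
          "function (doc, req) { if (!doc) { return [null, 'missing']; } " +
          "doc.lastOpened = new Date().toISOString(); return [doc, 'ok']; }",
      },
    };

    async function setup() {
      await db.insert(designDoc);
    }

    // Per login: with the user's email as the document _id, this is one trip,
    // equivalent to PUT /users/_design/users/_update/touch/user%40example.com
    function touchUser(email) {
      return db.atomic('users', 'touch', email);
    }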
That looks like 3 trips to the database to me: 1. Get the Document ID from the View, 2. Get the Document, using that ID, 3. Save the updated Document.
Is there any way to reduce that work to fewer database trips?
You can fetch the document from a view by adding the ?include_docs=true query parameter to the request. So two steps instead of three.
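For example, with nano (the view and database names here are placeholders):

    const nano = require('nano')('http://localhost:5984');
    const db = nano.db.use('users');

    // One request returns both the mapped row and the full document.
    async function findUserDoc(email) {
      const result = await db.view('users', 'by_email', {
        key: email,
        include_docs: true,
      });
      return result.rows.length ? result.rows[0].doc : null;
    }

You still need a second request to write the updated lastOpened value back, unless you also move that write into an update handler as suggested in the previous answer.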
