Purging documents in Couchbase Lite

I have a mobile app using Couchbase Lite. When the user logs out, I want to remove some of the documents on the device, specifically the user-specific documents, but not all of the documents. Documents have a purgeDocument() method that I thought I could call on those user-specific documents.
The problem is that the purged documents are not re-synced down to the device when the user logs back in and a pull replication runs.
Based on the little I know of the CouchDB sync protocol, it makes sense that they are not re-synced, because there are no newer sequence updates on those user-specific documents to trigger a re-sync.
How should I approach this problem?
Possibilities
Delete the whole database (including the common documents) and lose performance.
Somehow reset the last sequence for the replicator and hope the replicator does not transfer the already-downloaded docs over the wire. (This would probably corrupt CBL's internal state.)
Have separate databases: one that stores the user-specific docs and one that contains the common docs. Databases can have filtered replications (by channel), so it would be feasible to partition the incoming data into separate databases. The problem would be seamless reference loading between documents in different databases when using CBLModel object wrappers.

As I understand from the official documentation, in the subsection Purging documents, you are not retrieving the document again simply because it has not been modified/updated on the server side (in short, its rev is the same).
You could try to create a dummy document with the same type and, for example, username (or whatever you are using to identify the user's configuration) when the user logs back into your app, so that you trigger the pull replication from the server. You will probably get a conflict, which can easily be resolved by taking the revision from the server.
I hope this idea helps a little.
UPDATE AFTER COMMENT
The idea is to store somewhere the id and type of the user's documents you're going to purge. That way you can create a new dummy document with those two fields when the user logs in again. This new dummy document may trigger the pull replication.
NOTE: I haven't tried this method; I am just guessing at what might be a workaround for your problem.
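A minimal sketch of that idea, assuming you record each document's _id and type before calling purgeDocument() (the field names, the purge_log structure, and the dummy marker are all illustrative assumptions, not part of the Couchbase Lite API):

```python
# Sketch: remember which user docs were purged, then recreate stubs on the
# next login so the pull replication sees a conflicting revision that can be
# resolved by taking the server's version. All field names are assumptions.

def record_purged(doc):
    """Return the minimal info to keep before purging a user doc."""
    return {"_id": doc["_id"], "type": doc["type"]}

def make_dummy(purged_entry, username):
    """Build a stub document to save on login, hopefully triggering a
    pull conflict that is resolved from the server's revision."""
    return {
        "_id": purged_entry["_id"],
        "type": purged_entry["type"],
        "username": username,
        "dummy": True,  # marker so app code can ignore the stub
    }

purge_log = [record_purged({"_id": "user::alice::prefs", "type": "prefs"})]
stub = make_dummy(purge_log[0], "alice")
```

Whether saving the stub actually triggers the pull depends on how the server resolves the conflict, which is exactly the untested part of this workaround.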

I would suggest that your backend modify the selected documents upon user login (this could be just a timestamp update), which will create new revisions and push them down to the device.
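A sketch of that server-side touch, assuming a CouchDB-style _bulk_docs endpoint and a last_login field (both the endpoint usage and the field name are assumptions for illustration):

```python
import json
import time

def touch_docs(docs, now=None):
    """Bump a timestamp on each user document so a new revision is
    created; the device's next pull replication then re-downloads them."""
    now = now if now is not None else int(time.time())
    for d in docs:
        d["last_login"] = now
    return {"docs": docs}  # payload shape for POST /db/_bulk_docs

# Each doc needs its current _rev for the bulk update to succeed.
payload = touch_docs([{"_id": "user::alice::prefs", "_rev": "1-abc"}],
                     now=1700000000)
body = json.dumps(payload)
```

The trade-off is that every login writes new revisions for all of the user's documents, even if nothing else changed.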

You can keep purging the documents when the user logs in.
To solve the problem of re-syncing specific documents, I think the easiest way is to use filtered replication where the filter is a list of document IDs.
These document IDs can be created in a manner that makes them derivable. For example, they can be of the form UserDocument::.
When the user logs in, you can start a one-shot replication with those document IDs as the filter. This can only be done as a one-shot replication; when it finishes, you can start the replication again with its original settings (filter/channel).
The following Couchbase URL explains filtering replication by document IDs:
https://developer.couchbase.com/documentation/mobile/1.4/guides/couchbase-lite/native-api/replication/index.html#filtering-by-document-ids
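A sketch of deriving the filter list. The answer's ID pattern was cut off after "UserDocument::", so the full UserDocument::&lt;username&gt;::&lt;name&gt; convention used here is an assumption:

```python
def user_doc_ids(username, doc_names):
    """Derive the document IDs for one user's documents, to pass as the
    documentIDs filter of a one-shot pull replication."""
    return [f"UserDocument::{username}::{name}" for name in doc_names]

ids = user_doc_ids("alice", ["prefs", "profile"])
# Pass `ids` as the replication's documentIDs property, run it one-shot,
# then restart the continuous replication with its normal filter/channel.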

Try a push after purging the document; Couchbase Lite then allows you to pull the document from the server again at a later point.

Related

Firestore - Cloud functions - Get uid of user who deleted a document

I am trying to get the UID within an onWrite cloud function of any authenticated user who deletes a document in firestore (not the real time database which doesn't have this issue). The reason is... I am trying to create a log of all actions performed on documents in a collection. I have to use cloud functions as the client could hypothetically create/edit/delete a document and then prevent the corresponding log entry from being sent.
I have seen in other Stack Overflow questions like:
Firestore - Cloud Functions - Get uid
Getting the user id from a Firestore Trigger in Cloud Functions for Firebase?
that Firestore will not include any auth data in the onWrite function, and that the accepted workaround is to have fields like updated_by, created_by, created_at, and updated_at in the document being created/updated, verified using Firebase security rules. This is great for documents being inserted or updated, but deleted documents in onWrite cloud functions only have change.before data and no change.after data, meaning you have no way to see who deleted the document; at best you can see who last updated it before deletion.
I am in the middle of trying out some workarounds, but they have serious drawbacks:
Sending an update to a document right before it is deleted. Issues: might have timing and debounce issues, and requires messy permissions to ensure that a document is only deleted if it has the preceding update.
Updating it with a field that tags it for deletion, and watching for this tag in a cloud function that then does the deleting. Issues: leads to a very noticeable lag before the item is deleted.
Does anyone have a better way of doing something like this? Thanks!
Don't delete the document at all. Just add a field called "deleted", set it to true, and add another field with the UID of the user that deleted it. Use these new fields in your queries to decide if you want to deal with deleted documents or not for any given query.
Use a different document in a separate collection that records deletions. Query that collection whenever you need to know if a document has been deleted. Or create a different record in a different database that marks the deletion.
There are really no other options. Either use the existing document or create a new document to record the change. Existing documents are the only things that can be queried - you can't query data that doesn't exist in Firestore.
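The soft-delete option can be sketched as follows; the field names deleted and deleted_by are illustrative assumptions, and this in-memory version only shows the query rule, not the Firestore calls:

```python
def soft_delete(doc, uid):
    """Mark a document as deleted instead of removing it, recording
    which user performed the deletion."""
    doc["deleted"] = True
    doc["deleted_by"] = uid
    return doc

def visible(docs):
    """Filter out soft-deleted documents for normal queries; in Firestore
    this would be a where("deleted", "==", False)-style query."""
    return [d for d in docs if not d.get("deleted")]

docs = [{"id": 1}, soft_delete({"id": 2}, uid="uid_123")]
```

A security rule would then need to forbid true deletes and require deleted_by to match the requester's UID, so the audit field cannot be forged.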

Locking documents in firestore or don't allow two users edit a document at the same time

stack: NodeJS, ExpressJS, Firebase DB, VueJs
Question:
How do I lock a Firestore doc? I want to prevent two users from editing the same document at the same time on the front-end.
Example: a user fetches a document by some id and starts editing; editing takes around 10 minutes because there are a lot of inputs. Then a second user comes along and tries to edit the same document by id. How do I prevent that?
My solution: create a collection storing the ids of currently edited documents. Whenever a user tries to edit a document, check that its id is not in the collection; on save, remove the id from the collection.
Is my solution good?
Maybe there are other solutions...
There is no pessimistic locking built into Firestore, but I'd typically implement this by adding a field to the document that is being locked: something like currentEditor, whose value is the UID of the current editor.
To manipulate this field you'll want to use a transaction to prevent users from overwriting each other's data, and you'll then want to use server-side security rules to enforce this.
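The lock acquisition reduces to a compare-and-set on that field. This in-memory sketch only shows the rule; in Firestore the read-check-write would run inside a transaction (currentEditor is the field name suggested above, everything else is assumed):

```python
def try_acquire(doc, uid):
    """Set currentEditor to uid only if the document is unlocked or
    already held by this user. Returns True when the lock is held."""
    holder = doc.get("currentEditor")
    if holder in (None, uid):
        doc["currentEditor"] = uid
        return True
    return False

def release(doc, uid):
    """Clear the lock, but only if this user actually holds it."""
    if doc.get("currentEditor") == uid:
        doc["currentEditor"] = None

doc = {}
first = try_acquire(doc, "alice")   # alice gets the lock
second = try_acquire(doc, "bob")    # bob is refused while alice holds it
```

You would also want a lock timestamp so an abandoned edit session (browser closed mid-edit) does not keep the document locked forever.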

Avoid local changes after replication

I have a Notes application that is used offline on a local replica most of the time.
Users can create and update documents.
On the server, an agent processes all new documents.
The idea is that, once the agent has processed the documents, the users are no longer allowed to update them.
In general, this is quite simple to set up by setting Author access on the documents processed by the agent.
But, because users work on the local replica and the agent runs on the server, this scenario is possible:
user creates document offline
replication of document (creating of doc on server)
agent runs on server / user updates document locally
replication of document (updating author access locally / updating changes on server) ==> causes a save conflict or inconsistent data
Is there a way to make sure that the user can no longer update a document once it has been replicated to the server?
Or is there a way to force the agent to run on replication and immediately replicate the access update?
I was thinking of creating a button the user can click to replicate/update all documents, but since users may forget to click it, I prefer the default replication settings, so that everything is replicated whenever possible.
When I investigated a few years ago, replication does a "pull", then a "push", so doing something on the server won't work. There are a couple of options:
A separate "flag" document that the server processing updates, instead of updating the actual document. This would allow the updates to trigger a second round of processing.
Store a config document / environment variable with the last replicated date, and check against that in the form's queryModeChange and queryOpen (if in edit mode). You can then prevent editing if the document was created before the last replicated date.
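The second option's check can be sketched like this (a Python stand-in for the queryModeChange/queryOpen logic; the variable names are assumptions):

```python
from datetime import datetime

def may_edit(doc_created, last_replicated):
    """A document created before the last replication has already been
    pushed to the server (and possibly processed by the agent), so the
    user may no longer edit it locally."""
    return doc_created >= last_replicated

last_rep = datetime(2024, 1, 10, 12, 0)   # stored last-replicated date
old_doc = datetime(2024, 1, 9, 9, 0)      # replicated already: locked
new_doc = datetime(2024, 1, 10, 15, 0)    # not yet replicated: editable
```

This is conservative: it locks documents as soon as they have been replicated, even if the server agent has not actually processed them yet.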
Instead of using Author fields for the "wrong" reason, I'd add a non-editable Status field with values like "Initial", "Ready", and whatever else you might need. Then replication should be set up differently, using a selection formula that only replicates documents with Status != "Initial". The user might have two buttons to save a document: one just saves to the local database, and the other also changes the status to "Ready". Once Status = "Ready", the user can no longer modify the document.
By the way, did you set document replication to "Merge conflicts"? You might reduce the number of conflicts considerably.
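The Status-gated scheme above can be sketched as two predicates (a Python stand-in for the Notes selection formula and form logic; the status values are taken from the answer, the function names are assumptions):

```python
def replicate_selector(doc):
    """Stand-in for the selective replication formula
    SELECT Status != "Initial": only documents the user explicitly
    marked Ready (or beyond) ever leave the local replica."""
    return doc.get("Status") != "Initial"

def can_edit(doc):
    """The user may modify a document only while it is still Initial,
    i.e. before it has been handed over for server processing."""
    return doc.get("Status") == "Initial"

draft = {"Status": "Initial"}
ready = {"Status": "Ready"}
```

Because an "Initial" document never replicates, the server agent and the local user can never race on the same revision, which removes the save-conflict scenario from the question.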
One alternative would be to set up the form so that the user never actually saves the document locally. Instead, the document is emailed to the server where an agent triggered by mail delivery performs the actual update. When the agent is done with the update, it sends an email back to the user telling him/her that the updates are available and instructing them to replicate in order to retrieve them. If the Notes client is actually being used for email, you can probably even put a button into the email and say "Click here to replicate and open your document".

Efficient way to read+write data from CouchDB

I am implementing an application that includes a user who logs in to access a document stored in a hosted CouchDB store. The user provides their credentials to the app, and once the app authenticates them, the app then has two jobs:
Get the Document ID associated with that user's data
Update the "lastOpened" value stored in that document
I am having to do those two things in a way that seems rather inefficient: I read a View which maps the app's user identifier (their email address in this case) to their Document ID. Once I have the Document ID (and have added it to the session for later use) I then have to request the Document, uptick the "lastOpened" value, then save the Document back to the store.
That looks like 3 trips to the database to me: 1. Get the Document ID from the View, 2. Get the Document, using that ID, 3. Save the updated Document.
Is there any way to reduce that work to fewer database trips?
If you can change the document structure, you could use the user's login name as the document ID. That way, you don't have to use a view. Using update handlers, you could even do all the work in one request.
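A sketch of that single-request shape, assuming the login name is used as the document ID and an update handler named touch lives in a design document named app (all of these names are hypothetical):

```python
def update_handler_url(base, db, ddoc, handler, doc_id):
    """Build the URL for a CouchDB update handler invocation: a single
    PUT/POST to this URL lets the server-side handler read and modify
    the document, so no separate fetch round-trip is needed."""
    return f"{base}/{db}/_design/{ddoc}/_update/{handler}/{doc_id}"

url = update_handler_url("https://couch.example.com", "appdb", "app",
                         "touch", "alice@example.com")  # login name as ID
```

The handler itself is a JavaScript function in the design document that sets lastOpened and returns the updated doc; with the user's email as the document ID, the view lookup disappears entirely.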
That looks like 3 trips to the database to me: 1. Get the Document ID from the View, 2. Get the Document, using that ID, 3. Save the updated Document.
Is there any way to reduce that work to fewer database trips?
You can fetch the document from a view by adding the ?include_docs=true query parameter to the request. So two steps instead of three.
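For example, a single view request can return both the mapping row and the full document body (the database, design document, and view names here are made up for illustration):

```python
from urllib.parse import urlencode

def view_url(base, db, ddoc, view, key):
    """Build a CouchDB view query that returns each row's full document
    alongside it, via the include_docs=true parameter."""
    qs = urlencode({"key": f'"{key}"', "include_docs": "true"})
    return f"{base}/{db}/_design/{ddoc}/_view/{view}?{qs}"

url = view_url("https://couch.example.com", "appdb", "app", "by_email",
               "alice@example.com")
```

One GET to this URL replaces steps 1 and 2 from the question; only the save of the updated document remains as a second trip.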

Can't Reindex All Search Indexes

I've recently deleted 120,000 Users from my Liferay database using an automated script. Before that however, I manually deleted 2 Users from the database using DELETE FROM User_ WHERE userId=1234567 - just to see what might happen with any associations that User might have had.
The User was deleted, but all other table rows holding that userId (1234567) remained. Fine.
So now I'm at a point where I'd like to reindex all search indexes to get a current list of users, but LR throws an exception:
08:07:41,922 ERROR [http-bio-20110-exec-290][LuceneIndexer:136] Error encountered while reindexing
com.liferay.portal.kernel.search.SearchException: com.liferay.portal.NoSuchUserException: No User exists with the key {contactId=1234568}
    at com.liferay.portal.kernel.search.BaseIndexer.getDocument(BaseIndexer.java:179)
    at com.liferay.portlet.usersadmin.util.ContactIndexer$1.performAction(ContactIndexer.java:203)
    at com.liferay.portal.kernel.dao.orm.BaseActionableDynamicQuery.performActionsInSingleInterval(BaseActionableDynamicQuery.java:309)
    at com.liferay.portal.kernel.dao.orm.BaseActionableDynamicQuery.performActi
This contactId seems to be one higher than the userId of the created user (I could be wrong about that).
So my question is, how can I fix this problem so I can perform the reindex?
Liferay EE 6.2
Tomcat 7.0.33
SQL Server
I found out that the contactId for my manually deleted user was still in the Contact_ table. I deleted that row and can now perform the reindex; all users and user groups appear after reindexing.
From LR:
Rule #1 with using Liferay: the database is not yours. You should never be in it, and you should never be issuing SQL against it. The Liferay API is the only way to modify data. Period.
The Liferay API supports user deletion. Had you used the Liferay API, the users would have been deleted and your indexes, etc., would have been fine.
Okay, I know that's going to come across as a little aggressive or something, but it's important. The whole Liferay system depends upon its data, so any time you tweak the data manually it potentially breaks the system. If you dig through the actual process that the Liferay API performs for a user deletion, you'd see that the "delete from user_ where ..." is just a small part.
I always tell people new to Liferay to just forget that the database exists. It's definitely their database, not yours, and it's not to be messed with.
