Avoid local changes after replication - lotus-notes

I have a Notes application that is used offline on a local replica most of the time.
Users can create and update documents.
On the server, an agent processes all new documents.
The idea is that, once the agent has processed the documents, the users are no longer allowed to update them.
In general, this is quite simple to set up by setting Author access on the documents processed by the agent.
But, because users work on the local replica and the agent runs on the server, this scenario is possible:
1. User creates a document offline.
2. Replication (the document is created on the server).
3. The agent runs on the server / the user updates the document locally.
4. Replication (the Author access is updated locally / the local changes are pushed to the server) ==> causes a save conflict or inconsistent data.
Is there a way to make sure that the user can no longer update a document once it has been replicated to the server?
Or is there a way to force the agent to run on replication and immediately replicate the access update?
I was thinking of creating a button the user can click to replicate/update all documents, but to guard against users who forget to click the button, I would rather rely on the default replication settings so that everything is replicated whenever possible.
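For illustration, the locking step the agent performs might look roughly like the sketch below. This is a hedged Java example, not the actual agent; the item names "Processed" and "DocEditors" and the "[Admin]" role are assumptions.

import lotus.domino.*;

// Sketch of a server agent that locks documents after processing them by
// replacing the entries in an Authors item. Item names ("Processed",
// "DocEditors") and the "[Admin]" role are placeholders, not the real design.
public class LockProcessedDocuments extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext context = session.getAgentContext();
            DocumentCollection docs = context.getUnprocessedDocuments();

            Document doc = docs.getFirstDocument();
            while (doc != null) {
                // ... business processing of the new document goes here ...

                doc.replaceItemValue("Processed", "1");

                // Once the Authors item no longer lists the creator, users with
                // Author access in the ACL can read but no longer edit the document.
                Item editors = doc.replaceItemValue("DocEditors", "[Admin]");
                editors.setAuthors(true);

                doc.save(true, false);
                Document next = docs.getNextDocument(doc);
                doc.recycle();
                doc = next;
            }
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}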

When I investigated this a few years ago, replication does a "pull" and then a "push", so doing something on the server during replication won't work. There are a couple of options:
1. A separate "flag" document which the server-side processing updates, instead of updating the actual document. This would allow later user updates to trigger a second round of processing (a sketch follows this list).
2. Store a config document / environment variable with the last replicated date, and check against that in the form's QueryModeChange and QueryOpen events (if in edit mode). You can then prevent editing if the document was created before the last replicated date.
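A minimal sketch of the first option, assuming a Java agent: instead of touching the user's document, the server records the processing in a separate flag document keyed by the UNID. The form and item names ("ProcessedFlag", "ParentUNID", "ProcessedAt") are illustrative only; the form's QueryOpen/QueryModeChange would then refuse edit mode when a matching flag document exists locally.

import java.util.Date;
import lotus.domino.*;

// Sketch: create a flag document for a processed document instead of modifying it.
public class ProcessedFlags {
    public static void flagAsProcessed(Session session, Document doc) throws NotesException {
        Database db = doc.getParentDatabase();
        Document flag = db.createDocument();
        flag.replaceItemValue("Form", "ProcessedFlag");
        flag.replaceItemValue("ParentUNID", doc.getUniversalID());
        flag.replaceItemValue("ProcessedAt", session.createDateTime(new Date()));
        flag.save(true, false);
    }
}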

Instead of using Author fields for the "wrong" reason, I'd add a non-editable Status field, with values like "Initial", "Ready", and all the rest you might need. Then, replication should be set up differently, using a formula that only replicates documents with Status!="Initial". The user might have 2 buttons to save a document: one just saves to the local database and the other also changes the status to Ready. Once Status="Ready", the user can no longer modify the document.
By the way, did you set document replication to "Merge conflicts"? You might reduce the number of conflicts considerably.

One alternative would be to set up the form so that the user never actually saves the document locally. Instead, the document is emailed to the server, where an agent triggered by mail delivery performs the actual update. When the agent is done with the update, it sends an email back to the user saying that the updates are available and instructing them to replicate in order to retrieve them. If the Notes client is actually being used for email, you can probably even put a button into the email and say "Click here to replicate and open your document".
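A hedged sketch of what the mail-triggered part could look like as a Java agent running on "after new mail has arrived"; the item names, the notification wording, and sending a doclink back are assumptions, and the "click here to replicate" button would still have to be added separately.

import lotus.domino.*;

// Sketch: an agent triggered by mail delivery applies the mailed-in update and
// notifies the sender that the result is ready to be replicated.
public class ProcessMailedDocuments extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext context = session.getAgentContext();
            DocumentCollection mails = context.getUnprocessedDocuments();

            Document mail = mails.getFirstDocument();
            while (mail != null) {
                // ... apply the mailed-in changes to the server copy here ...

                // Notify the original sender and include a doclink to the processed document.
                Document notice = context.getCurrentDatabase().createDocument();
                notice.replaceItemValue("Form", "Memo");
                notice.replaceItemValue("Subject", "Your update has been processed");
                RichTextItem body = notice.createRichTextItem("Body");
                body.appendText("Please replicate to retrieve the update: ");
                body.appendDocLink(mail, "Processed document");
                notice.send(mail.getItemValueString("From"));

                Document next = mails.getNextDocument(mail);
                mail.recycle();
                mail = next;
            }
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}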

Related

Locking documents in firestore or don't allow two users edit a document at the same time

stack: NodeJS, ExpressJS, Firebase DB, VueJs
Question:
How do I lock a Firestore document? I don't want to allow two users to edit the same document on the front end.
Example: on the front end, a user fetches a document by some id and starts editing; editing takes about 10 minutes because there are a lot of inputs. Then a second user comes along and tries to edit the same document by id. How do I prevent that?
My solution: create a collection storing the ids of currently edited documents. Whenever a user tries to edit a document, check that its id does not exist in that collection, and on the save button remove the id from the collection again.
Is my solution good?
Maybe there are other solutions...
There is no pessimistic locking built into Firestore, but I'd typically implement this by adding a field to the document that is being locked. Something like currentEditor, whose value is the UID of the current editor.
To manipulate this field you'll want to use a transaction to prevent users from overwriting each other's data, and you'll then want to use server-side security rules to enforce this.
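As a rough illustration of the transaction part, here is a hedged server-side sketch using the Firestore Java SDK (the same runTransaction pattern exists in the Node and web SDKs). The collection name "documents" and the lock-release logic are assumptions, and the security rules still have to be written separately.

import com.google.api.core.ApiFuture;
import com.google.cloud.firestore.*;

// Sketch: claim the "currentEditor" lock field inside a transaction so that two
// users cannot grab the same document at the same time.
public class EditorLock {

    public static boolean claimLock(Firestore db, String docId, String uid) throws Exception {
        DocumentReference ref = db.collection("documents").document(docId);
        ApiFuture<Boolean> result = db.runTransaction(transaction -> {
            DocumentSnapshot snapshot = transaction.get(ref).get();
            String currentEditor = snapshot.getString("currentEditor");
            if (currentEditor != null && !currentEditor.isEmpty() && !currentEditor.equals(uid)) {
                return false; // someone else is already editing
            }
            transaction.update(ref, "currentEditor", uid);
            return true;
        });
        return result.get();
    }

    public static void releaseLock(Firestore db, String docId) throws Exception {
        // Clearing the field on save (or on a timeout) lets the next editor in.
        db.collection("documents").document(docId).update("currentEditor", "").get();
    }
}

On top of the transaction, the security rules can then restrict document updates to requests where the caller's UID matches currentEditor (or the field is empty).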

Purging documents in Couchbase Lite

I have a mobile app using Couchbase Lite. When the user logs out, I want to remove some of the documents on the device: the user-specific documents. I do not want to remove all of the documents. Documents have a purgeDocument() method that I thought I could call on those user-specific documents.
The problem is that the purged documents are not re-synced down to the device if the user logs back in and a pull replication is run.
Based on the little I know of CouchDB sync protocol, it makes sense that those are not re-synced down because there are not newer Sequence updates on those user-specific documents to trigger a re-sync.
How should I approach this problem?
Possibilities:
1. Delete the whole database (including common documents) and lose performance.
2. Somehow reset the last sequence for the replicator and hope the replicator does not transfer the already-downloaded docs over the wire. (Probably would screw up CBL.)
3. Have separate databases, one that stores the user-specific docs and one that contains common docs. Databases can have filtered replicators (by channel), so it would be feasible to partition the incoming data into separate databases. The problem would be the seamless reference loading between documents of differing databases when using CBLModel object wrappers.
As I understand from the official documentation, in the subsection on purging documents, you are not retrieving the document again simply because it has not been modified/updated (in short, its rev is the same) on the server side.
You can try to create a dummy document again with the same type and, for example, username (or whatever you are using to identify the user's configuration) when the user logs back into your app, so that you trigger the pull replication from the server. You will probably get a conflict that can easily be resolved by taking the revision from the server.
I hope this idea helps a little.
UPDATE AFTER COMMENT
The idea is to store somewhere the id and type of the user's documents you're going to purge. That way you can create a new dummy document with those two fields when the user logs in again. Perhaps this new dummy document triggers the pull replication.
NOTE: I haven't tried this method. I am just guessing what it might be a work around to your problem.
I would suggest that your backend modifies the selected documents - this could be just a timestamp update - upon user login, which will push the new revisions to the device.
You can keep purging the documents when the user logs in.
To solve the problem of re-syncing specific documents, I think the easiest way is to use filtered replication where the filter is the document ID.
These document IDs can be created in a manner that can be derived later, for example something like UserDocument::.
Now, when the user logs in, you can start a one-shot replication with the document IDs as the filter. Filtering by document ID can only be done as a one-shot replication. When this one-shot replication finishes, you can start replication again with the normal settings (a different filter/channel).
The following Couchbase documentation explains filtered replication by document ID:
https://developer.couchbase.com/documentation/mobile/1.4/guides/couchbase-lite/native-api/replication/index.html#filtering-by-document-ids
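A hedged sketch of the purge-on-logout / pull-by-ID-on-login idea with the Couchbase Lite 1.x Java/Android API; the database handle, the Sync Gateway URL, and the way the user-specific IDs are derived are all assumptions.

import java.net.URL;
import java.util.List;
import com.couchbase.lite.CouchbaseLiteException;
import com.couchbase.lite.Database;
import com.couchbase.lite.Document;
import com.couchbase.lite.replicator.Replication;

// Sketch: purge the user-specific documents on logout, then pull exactly those
// document IDs again on login with a one-shot, doc-ID-filtered replication.
public class UserDocSync {

    public static void purgeUserDocs(Database database, List<String> userDocIds)
            throws CouchbaseLiteException {
        for (String id : userDocIds) {
            Document doc = database.getExistingDocument(id);
            if (doc != null) {
                doc.purge(); // removes the doc locally without creating a tombstone
            }
        }
    }

    public static void restoreUserDocs(Database database, URL syncGatewayUrl,
            List<String> userDocIds) {
        Replication pull = database.createPullReplication(syncGatewayUrl);
        pull.setDocIds(userDocIds); // doc-ID filtering works with one-shot replications
        pull.setContinuous(false);
        pull.start();
    }
}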
Try a push after purging the document with Couchbase Lite, which allows you to pull the document from the server at a later point.

Can't Reindex All Search Indexes

I've recently deleted 120,000 Users from my Liferay database using an automated script. Before that however, I manually deleted 2 Users from the database using DELETE FROM User_ WHERE userId=1234567 - just to see what might happen with any associations that User might have had.
The User was deleted, but all other table rows holding that userId (1234567) remained. Fine.
So now I'm at a point where I'd like to reindex all search indexes to get a current list of users, but LR throws an exception:
08:07:41,922 ERROR [http-bio-20110-exec-290][LuceneIndexer:136] Error encountered while reindexing
com.liferay.portal.kernel.search.SearchException: com.liferay.portal.NoSuchUserException: No User exists with the key {contactId=1234568}
    at com.liferay.portal.kernel.search.BaseIndexer.getDocument(BaseIndexer.java:179)
    at com.liferay.portlet.usersadmin.util.ContactIndexer$1.performAction(ContactIndexer.java:203)
    at com.liferay.portal.kernel.dao.orm.BaseActionableDynamicQuery.performActionsInSingleInterval(BaseActionableDynamicQuery.java:309)
    at com.liferay.portal.kernel.dao.orm.BaseActionableDynamicQuery.performActi
This contactId seems to be one higher than the userId for any user created (I could be wrong about that).
So my question is, how can I fix this problem so I can perform the reindex?
Liferay EE 6.2
Tomcat 7.0.33
SQL Server
I found out the contactId for my manually deleted user was still in the Contact_ table. I deleted that row from the table and can now perform the reindex. I can now see all the users & user groups after reindexing.
From LR:
Rule #1 with using Liferay: the database is not yours, you should never be in it, and you should never be issuing SQL against it. The Liferay API is the only way to modify data. Period.
The Liferay API supports user deletion. Had you used the Liferay API, the users would have been deleted and your indexes, etc., would have been fine.
Okay, I know that's going to come across as a little aggressive or something, but it's important. The whole Liferay system depends upon its data, so any time you tweak the data manually it potentially breaks the system. If you dig through the actual process that the Liferay API goes through for a user deletion, you'd see that the "delete from user_ where ..." is just a small part.
I always tell people new to Liferay to just forget that the database exists. It's definitely their database, not yours, and it's not to be messed with.
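For reference, a minimal sketch of doing the same bulk deletion through the Liferay 6.2 service API instead of SQL; how the user IDs are selected is left to your own logic.

import com.liferay.portal.service.UserLocalServiceUtil;

// Sketch: delete users through the service layer so that associated data
// (Contact_ rows, memberships, search index entries, ...) is cleaned up with them.
public class BulkUserDeletion {
    public static void deleteUsers(long[] userIds) throws Exception {
        for (long userId : userIds) {
            UserLocalServiceUtil.deleteUser(userId);
        }
    }
}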

Check whether a document is opened by another user in XPages

I'm working with XPages on the following scenario.
I have an agent that updates the value of one of the fields of a data source from a Notes view. Sometimes, while one user has the data source open via an XPage, another user runs the agent at the same time. The agent runs and updates the field on the data source, but on the XPages side we then catch the exception that the document was modified by another user and cannot save the XPage.
I would like to prevent this from the agent side. Is there a way to know, from the agent, that the document is currently opened by a user, so that the agent won't update the value on that data source?
Thanks for your help.
First of all: mixing agents and XPages is more trouble than it is worth; you are better off converting your agent code into a Java class (and paying off the technical debt accumulated in the agent over time).
One BIG reason: an agent and XPages do not share anything other than the document in memory (if handed over) on that one user's session.
If you launch the agent from an XPage, you can use an applicationScope variable (e.g. a java.util.HashMap) that you fill with the UNID and user name when a user opens a document. Before you launch the agent, you check the scope to see whether the UNID is in there with a different user name. If it is, don't run the agent.
You need to build a mechanism to expire and renew these locks, otherwise you end up with dead lock entries.
If the agent is launched directly or on schedule things get a little more complicated. You could implement a web service servlet that handles the locks since both XPages and agents can talk to a web service.
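In either case, the lock registry itself can be a small Java class. Here is a hedged sketch; the class name, the 10-minute expiry, and where you store it (applicationScope or behind a servlet) are assumptions.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an in-memory lock registry keyed by document UNID. An XPage registers
// a lock when a user opens a document; the agent (or the code launching it) asks
// the registry before updating. Locks expire so dead entries clean themselves up.
public class DocumentLockRegistry {

    private static final long LOCK_TTL_MS = 10L * 60L * 1000L; // assumed 10-minute expiry

    private final Map<String, Lock> locks = new ConcurrentHashMap<String, Lock>();

    private static final class Lock {
        final String userName;
        final long acquiredAt = System.currentTimeMillis();
        Lock(String userName) { this.userName = userName; }
        boolean isExpired() { return System.currentTimeMillis() - acquiredAt > LOCK_TTL_MS; }
    }

    // Called when a user opens a document in edit mode (and again to renew the lock).
    public void lock(String unid, String userName) {
        locks.put(unid, new Lock(userName));
    }

    // Called when the user saves or closes the document.
    public void unlock(String unid) {
        locks.remove(unid);
    }

    // The agent checks this before touching a document.
    public boolean isLockedByOtherUser(String unid, String userName) {
        Lock lock = locks.get(unid);
        if (lock == null || lock.isExpired()) {
            locks.remove(unid); // drop stale entries as a side effect
            return false;
        }
        return !lock.userName.equals(userName);
    }
}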

How to make an emailed document form make changes to my local database?

I have this sample application regarding Change Requests.
When the form is saved, it sends the form as an email to the listed approvers.
The form has two actions - Approve and Reject.
Let's say the approver approves the CR. That will update the emailed form document, but the document that resides in my local database won't be updated. Is there a way for me to automatically update the document in my local database once the recipient (approver) has approved or rejected the document form?
Not automatically, but you can add logic to the approve and reject actions to update the database.
If this database is shared on a server, one way would be to make it a mail-in database. Your approval actions could then trigger an email that goes to that mail-in database's address. Your database would then need an agent to process the emails, perhaps simply parsing the subject line, which could contain the UNID or some key that says which document to update, along with the response of approved or rejected. This would work in a distributed environment.
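A hedged Java sketch of such a mail-processing agent; the subject-line format "UNID|Approved", the ApprovalStatus item name, and the error handling are assumptions.

import lotus.domino.*;

// Sketch: an agent in the mail-in database parses each incoming email's subject
// (assumed format "UNID|Approved" or "UNID|Rejected") and updates the matching
// Change Request document.
public class ProcessApprovalMails extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext context = session.getAgentContext();
            Database db = context.getCurrentDatabase();
            DocumentCollection mails = context.getUnprocessedDocuments();

            Document mail = mails.getFirstDocument();
            while (mail != null) {
                String subject = mail.getItemValueString("Subject");
                String[] parts = subject.split("\\|");
                if (parts.length == 2) {
                    Document target = db.getDocumentByUNID(parts[0]);
                    target.replaceItemValue("ApprovalStatus", parts[1]);
                    target.save(true, false);
                }
                Document next = mails.getNextDocument(mail);
                mail.recycle();
                mail = next;
            }
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}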
If the environment is not distributed, say everyone is always on the same network connected to the same Notes server, then you could write some Lotusscript code to update the remote database directly.
Remember the context that you'll be in. When the emailed form is open in an approver's Notes client, he or she doesn't have access to your local databases. So you'll need to have a place on the server that the response action can update.
The safest design for a highly distributed workflow application (replicas on multiple servers and local replicas on users' laptops) is to have the approvals and updates posted as new response documents, rather than updating the main WF document directly. The WF document should then compute its status based on the responses. Finally, an agent running on ONE server can post the status updates to the document and archive the responses.
This construct will eliminate (or reduce significantly) the possibility of replication and save conflicts. It is particularly needed for WF items that require multiple approvals from people who are disconnected or connected to different servers.
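A hedged sketch of what that single-server status agent might do for one workflow document, assuming the response documents carry a Decision item; the status values used here ("Approved", "Rejected", "Pending") are placeholders.

import lotus.domino.*;

// Sketch: derive the workflow status from the response documents and write it
// back to the main WF document from one designated server only.
public class WorkflowStatusUpdater {

    public static void updateStatus(Document wfDoc, int requiredApprovals) throws NotesException {
        DocumentCollection responses = wfDoc.getResponses();
        int approvals = 0;
        boolean rejected = false;

        Document response = responses.getFirstDocument();
        while (response != null) {
            String decision = response.getItemValueString("Decision");
            if ("Approved".equals(decision)) {
                approvals++;
            } else if ("Rejected".equals(decision)) {
                rejected = true;
            }
            Document next = responses.getNextDocument(response);
            response.recycle();
            response = next;
        }

        String status = rejected ? "Rejected"
                : (approvals >= requiredApprovals ? "Approved" : "Pending");
        wfDoc.replaceItemValue("Status", status);
        wfDoc.save(true, false);
    }
}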
