I'm working with XPages in the following scenario.
I have an agent that updates the value of one of the fields of a document that an XPage uses as its data source (picked from a Notes view). Sometimes one user has the document open via the XPage while another user runs the agent at the same time. The agent runs and updates the field, but on the XPages side we then catch the "document has been modified by another user" exception and cannot save the XPage.
I would like to prevent this from the agent side. Is there a way for the agent to know that the document is currently open by a user, so that the agent won't update that document?
Thanks for your help.
First of all: mixing agents and XPages is more trouble than it is worth; you are better off converting your agent code into a Java class (and paying down the technical debt accumulated in the agent over time).
One BIG reason: an agent and an XPage do not share anything other than the document in memory (if it is handed over) on that one user's session.
If you launch the agent from an XPage, you can use an applicationScope variable (e.g. a java.util.HashMap) that you fill with the UNID and user name when a user opens a document. Before you launch the agent, check the scope to see whether that UNID is in there under a different user name. If it is, don't run the agent.
You need to build a mechanism to expire and renew these locks, otherwise you end up with dead lock entries.
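A minimal sketch of what such a lock helper could look like, assuming it lives in applicationScope (for example as an application-scoped managed bean so every session sees the same map); the class name DocLocks and the 15-minute timeout are made up for the example:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical lock helper for the applicationScope idea: the XPage records
    // a lock when a user opens the document and removes it on close/save; the
    // code that launches the agent asks isLockedByOther() first.
    public class DocLocks {

        // Assumption: a lock older than 15 minutes is treated as stale.
        private static final long LOCK_TTL_MS = 15L * 60L * 1000L;

        private final Map<String, Entry> locks = new ConcurrentHashMap<String, Entry>();

        private static class Entry {
            final String userName;
            final long acquired = System.currentTimeMillis();
            Entry(String userName) { this.userName = userName; }
            boolean isExpired() { return System.currentTimeMillis() - acquired > LOCK_TTL_MS; }
        }

        // Record (or refresh) the lock when a user opens the document.
        public void acquire(String unid, String userName) {
            locks.put(unid, new Entry(userName));
        }

        // Remove the lock when the XPage is closed or the document is saved.
        public void release(String unid) {
            locks.remove(unid);
        }

        // True if another user holds a non-expired lock on this document.
        public boolean isLockedByOther(String unid, String userName) {
            Entry e = locks.get(unid);
            if (e == null || e.isExpired()) {
                locks.remove(unid); // clean out stale entries as we go
                return false;
            }
            return !e.userName.equals(userName);
        }
    }

Registered as an application-scoped bean, the XPage's open/close events call acquire()/release(), and the button that launches the agent checks isLockedByOther() before doing anything.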
If the agent is launched directly or on a schedule, things get a little more complicated. You could implement a web service (or servlet) that handles the locks, since both XPages and agents can talk to a web service.
I have a Notes application that is used offline on a local replica most of the time.
Users can create and update documents.
On the server, an agent processes all new documents.
The idea is that, once the agent has processed the documents, the users are no longer allowed to update them.
In general, this is quite simple to set up by setting Author access on the documents processed by the agent.
But, because users work on the local replica and the agent runs on the server, this scenario is possible:
user creates document offline
replication of document (creation of doc on server)
agent runs on server / user updates document locally
replication of document (updating author access locally / updating changes on server) ==> Causes save conflict or inconsistent data
Is there a way to make sure that the user can no longer update a document once it has been replicated to the server?
Or is there a way to force the agent to run on replication and immediately replicate the access update?
I was thinking of creating a button the user can click to replicate/update all documents, but because users may forget to click the button, I prefer to rely on the default replication settings to make sure everything is replicated whenever possible.
When I investigated this a few years ago, replication does a "pull" and then a "push", so doing something on the server won't work. There are a couple of options.
A separate "flag" document which server processing updates, instead of updating the actual document. This would allow for updates causing a second set of processing.
Store a config document / environment variable with the last replicated date, and check against that in the Form's queryModeChange and queryOpen (if editMode). You can then prevent editing if the document was created before the last replicated date.
Instead of using Author fields for the "wrong" reason, I'd add a non-editable Status field, with values like "Initial", "Ready", and all the rest you might need. Then, replication should be set up differently, using a formula that only replicates documents with Status!="Initial". The user might have 2 buttons to save a document: one just saves to the local database and the other also changes the status to Ready. Once Status="Ready", the user can no longer modify the document.
By the way, did you set document replication to "Merge conflicts"? You might reduce the number of conflicts considerably.
One alternative would be to set up the form so that the user never actually saves the document locally. Instead, the document is emailed to the server where an agent triggered by mail delivery performs the actual update. When the agent is done with the update, it sends an email back to the user telling him/her that the updates are available and instructing them to replicate in order to retrieve them. If the Notes client is actually being used for email, you can probably even put a button into the email and say "Click here to replicate and open your document".
Does IBM Domino track the last login date for web users (username/password and internet certificate)? I know the access logs contain this information, but I wanted to know if there is something built into Domino (maybe in the Address Book). I'm trying to come up with a method to disable web accounts that have not accessed a Domino server in a specified time period.
Thanks,
Kev
The User Activity area in the Database Properties picks up from the log.nsf, which is where this information is stored. But, typically, the log.nsf will only have a few days' worth of information. When I've had this requirement before, I've manually captured it via a custom login page or an initUser function I've had in applications.
One of the easiest solutions is to trigger an action from a live web page that generates a database.nsf?openagent event, for example from an image tag or a small background request whose URL points at the agent.
Ideally you'd have the agent print a content type and a response, but if you don't, browsers cope pretty well with invalid responses from servers.
Inside your "myagent" you will have the user's name available to you, so you can write it to a document.
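A rough sketch of what such a Java agent could look like; the form and item names ("WebLogin", "UserName", "LastLogin") are made up for the example, and a real agent would probably update one document per user rather than creating a new one on every hit:

    import lotus.domino.AgentBase;
    import lotus.domino.Database;
    import lotus.domino.DateTime;
    import lotus.domino.Document;
    import lotus.domino.Session;

    // Java agent run via ?OpenAgent: records the authenticated web user's login.
    public class JavaAgent extends AgentBase {
        public void NotesMain() {
            try {
                Session session = getSession();
                Database db = session.getAgentContext().getCurrentDatabase();

                // For a web-run agent this is the authenticated web user.
                String user = session.getEffectiveUserName();

                Document doc = db.createDocument();
                doc.replaceItemValue("Form", "WebLogin");
                doc.replaceItemValue("UserName", user);
                DateTime now = session.createDateTime("Today");
                now.setNow();
                doc.replaceItemValue("LastLogin", now);
                doc.save(true, false);

                // Print a content type and a tiny response back to the browser.
                getAgentOutput().println("Content-Type: text/plain");
                getAgentOutput().println("ok");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }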
Your next challenge will be getting the agent to trigger, but not too often; ideally only on login.
When a user uses a custom login form, it submits the username/password and the redirection URL via POST. You could change that to ...?openagent&nexturl=/blablabla.nsf
Your tiny little agent would then run one time, and only one time, upon login and update a document in your custom logging database.
That's a developer's solution.
There are also admin solutions. The server does keep track of active web sessions, but it does not drop them into the log.nsf the way it does when a Notes session ends. I don't think it would be too much work from an admin standpoint to get that information; there are a lot of event triggers available to you. It's just been way too long since I worked on any server where anyone cared about statistics.
I'm looking to modify one of my existing portlets, which is used concurrently by many users, so that it automatically polls for updates and pulls down the latest data. That way users don't have to refresh the page to see new data. In other words, it automatically checks for new data every 10 seconds and refreshes the display.
Almost like a chat client, but it pulls down a JSON object every 10 seconds asynchronously.
No problem. On the browser side, query <portlet:resourceURL/>; this goes to the resource-serving phase of your portlet. From there you can deliver any content type you want (much like a servlet).
On the server side, you'll need to query for updated data from all the different users, but that's something independent of the portlet spec and rather considered business logic.
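For the resource-serving part, a minimal sketch of a JSR 286 portlet's serveResource() method that answers the poll with JSON might look like this (fetchLatestDataAsJson() is a placeholder for your own business logic):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.portlet.GenericPortlet;
    import javax.portlet.ResourceRequest;
    import javax.portlet.ResourceResponse;

    public class PollingPortlet extends GenericPortlet {

        // Called whenever the browser hits the <portlet:resourceURL/> URL.
        @Override
        public void serveResource(ResourceRequest request, ResourceResponse response)
                throws IOException {
            response.setContentType("application/json");
            PrintWriter out = response.getWriter();
            out.print(fetchLatestDataAsJson(request.getRemoteUser()));
            out.flush();
        }

        // Placeholder: return whatever changed for this user since the last poll.
        private String fetchLatestDataAsJson(String user) {
            return "{\"user\":\"" + user + "\",\"items\":[]}";
        }
    }

On the browser side, a simple timer (e.g. setInterval every 10 seconds) that fetches the resource URL and re-renders the portlet's data area is enough.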
I have this sample application regarding Change Requests.
If the form is saved, it will send a form as an email to the listed approvers.
The form has 2 actions - Approve and Reject.
Let's say the approver approves the CR. That updates the emailed form document, but the document that resides in my local database won't be updated. Is there a way for me to update the documents in my local database automatically once the recipient (approver) has approved or rejected the form?
Not automatically, but you can add logic to the approve and reject actions to update the database.
If this database is shared on a server, one way would be to make it a mail-in database. Your approval actions could then trigger an email that goes to that mail-in database's address. Your database would then need an agent to process the emails, perhaps simply parsing the subject line, which could contain the UNID or some key identifying which document to update, along with the response of approved or rejected. This would work in a distributed environment.
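A rough sketch of what that mail-processing agent could look like as a Java agent; the subject-line convention ("<UNID>|Approved" or "<UNID>|Rejected") and the item name "ApprovalStatus" are assumptions made for the example:

    import lotus.domino.AgentBase;
    import lotus.domino.AgentContext;
    import lotus.domino.Database;
    import lotus.domino.Document;
    import lotus.domino.DocumentCollection;
    import lotus.domino.Session;

    // Agent in the mail-in database, triggered on new mail / unprocessed documents.
    public class JavaAgent extends AgentBase {
        public void NotesMain() {
            try {
                Session session = getSession();
                AgentContext ctx = session.getAgentContext();
                Database db = ctx.getCurrentDatabase();

                DocumentCollection mails = ctx.getUnprocessedDocuments();
                Document mail = mails.getFirstDocument();
                while (mail != null) {
                    String subject = mail.getItemValueString("Subject");
                    String[] parts = subject.split("\\|");
                    if (parts.length == 2) {
                        try {
                            // Find the Change Request by UNID and record the decision.
                            Document cr = db.getDocumentByUNID(parts[0]);
                            cr.replaceItemValue("ApprovalStatus", parts[1]);
                            cr.save(true, false);
                        } catch (Exception notFound) {
                            // UNID not found in this database; skip and keep going.
                        }
                    }
                    ctx.updateProcessedDoc(mail);
                    Document next = mails.getNextDocument(mail);
                    mail.recycle();
                    mail = next;
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }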
If the environment is not distributed, say everyone is always on the same network connected to the same Notes server, then you could write some LotusScript code to update the remote database directly.
Remember the context you'll be in. When the emailed form is open in an approver's Notes client, he or she doesn't have access to your local databases, so you'll need a place on the server that the response action can update.
The safest design for a highly distributed workflow application (replicas on multiple servers and local replicas on users' laptops) is to have the approvals and updates posted as new response documents rather than as direct updates to the main WF document. The WF document should then compute its status based on the responses. Finally, an agent running on ONE server can post the status updates to the document and archive the responses, as sketched below.
This construct will eliminate (or reduce significantly) the possibility of replication and save conflicts. It is particularly needed for WF items that require multiple approvals from people who are disconnected or connected to different servers.
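As an illustration of the status computation that the single-server agent could run, here is a minimal sketch; the item names ("Decision", "Status") and the rule (any rejection wins, otherwise approved once enough approvals are in) are assumptions:

    import lotus.domino.Document;
    import lotus.domino.DocumentCollection;

    public class WorkflowStatus {

        // Walk the response documents of a WF document and derive its status.
        public static void updateStatus(Document wfDoc, int requiredApprovals) throws Exception {
            DocumentCollection responses = wfDoc.getResponses();
            int approvals = 0;
            boolean rejected = false;

            Document resp = responses.getFirstDocument();
            while (resp != null) {
                String decision = resp.getItemValueString("Decision");
                if ("Rejected".equals(decision)) {
                    rejected = true;
                } else if ("Approved".equals(decision)) {
                    approvals++;
                }
                resp = responses.getNextDocument(resp);
            }

            if (rejected) {
                wfDoc.replaceItemValue("Status", "Rejected");
            } else if (approvals >= requiredApprovals) {
                wfDoc.replaceItemValue("Status", "Approved");
            } else {
                wfDoc.replaceItemValue("Status", "Pending");
            }
            wfDoc.save(true, false);
        }
    }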
The administrator of the web application I am working on asks for an "I want to know everything" log. He wants to track everything the other users did while they were connected (logged in) to the web app:
What pages he/she visited.
What actions he/she performed.
On what entities (JPA Entities) he/she performed actions.
At what exact time he/she performed a given action, if it was successful.
What attribute of a given record he/she modified.
The user principals of this user.
All I could do so far is export a CSV file where the administrator finds the user principals and the times the user logged in and logged out.
I also created an example history table in the database, populated by an EclipseLink Customizer, to track changes to a corresponding table. (The problem with this EclipseLink Customizer is that it is not flexible: database tables change over time (attributes are added/removed), and so do their corresponding entities, and the user does not want to modify things twice, once in the main table and again in the history table.)
Could a third-party library such as log4j do that?
Are there any alternatives, solutions, or better practices related to my issue?
Best regards.
Look into interceptors/listeners, both for JSF (to get the page/action) and JPA (to get the data accessed). In your interceptors/listeners you could then log to a file using log4j; see the sketch after the links below.
EclipseLink Listeners
Hibernate Interceptors (for comparison)
JSF Listeners
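As an illustration of the JPA side, here is a minimal sketch of a generic entity listener that writes an audit line for every insert/update/delete via log4j. The class name, the "audit" logger, and the currentUser() lookup are placeholders; you would attach it per entity with @EntityListeners(AuditLogListener.class), and for attribute-level change tracking you would use something like EclipseLink's DescriptorEventListener instead:

    import javax.persistence.PostPersist;
    import javax.persistence.PostRemove;
    import javax.persistence.PostUpdate;
    import org.apache.log4j.Logger;

    public class AuditLogListener {

        private static final Logger LOG = Logger.getLogger("audit");

        @PostPersist
        public void created(Object entity) {
            LOG.info(currentUser() + " created " + entity);
        }

        @PostUpdate
        public void updated(Object entity) {
            LOG.info(currentUser() + " updated " + entity);
        }

        @PostRemove
        public void removed(Object entity) {
            LOG.info(currentUser() + " deleted " + entity);
        }

        // Placeholder: resolve the logged-in principal from your JSF/JAAS session.
        private String currentUser() {
            return "unknown";
        }
    }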