Hibernate session synchronization - query cache issue - multithreading

The following code of mine is facing a synchronization issue with Hibernate sessions. There are a few parallel threads in my code, each owning its own Hibernate session. The problem is that changes made by one session are not perceived by the others, for reasons unknown to me. The code is located in github here
The problem :
Here I explain it with three threads: PRODUCER, CONSUMER_1, CONSUMER_2. CONSUMER_1 waits for the producer to finish its work; even after that, at the end, it doesn't see the changes made by the PRODUCER thread. Why is that?
package org.example.hibernate;
import org.example.hibernate.model.User;
import org.example.hibernate.util.HibernateUtil;
import java.util.Random;
public class Main {
/**
* This object acts as a synchronisation semaphore between threads.
* (Note: I am aware that waiting within a Hibernate session is discouraged.)
* Here it is used to show that the consumer tries to read/get after the
* producer has successfully completed the transaction.
* So the producer notifies the waiting threads via this object
*/
public static final Object LOCK = new Object();
/**
* The user ID is the primary key; a random int is suffixed to preserve uniqueness.
* Here, the producer saves an object with this ID, then the consumers try to read it
*/
private static final String USER_ID = "user-" + new Random().nextInt(10000);
/**
* This is the producer thread; it inserts a record and then notifies the
* other waiting threads.
*/
private static Thread PRODUCER = new Thread("producer") {
// this creates a user and notifies threads waiting for the event
@Override
public void run() {
HibernateUtil.getInstance().executeInSession(new Runnable() {
@Override
public void run() {
User user = new User();
user.setId(USER_ID);
user.setName("name-" + USER_ID);
user.save();
}
});
// outside the session
synchronized (LOCK) {
print("Notifying all consumers");
LOCK.notifyAll();
}
print("dying...");
}
};
/**
* This thread tries to read first; if it misses, it waits for the producer to
* notify it, and after receiving the notification it tries to read again
*/
private static Thread CONSUMER_1 = new Thread("consumer_one"){
// this thread checks if data is available (a user with the specific ID);
// if not, it waits for the producer to notify it
@Override
public void run() {
HibernateUtil.getInstance().executeInSession(new Runnable() {
@Override
public void run() {
try {
User readUser = User.getById(USER_ID);
if(readUser == null) { // data not available
synchronized (LOCK) {
print("Data not available, Waiting for the producer...");
LOCK.wait(); // wait for the producer
print("Data available");
}
print("waiting for some more time....");
Thread.sleep(2 * 1000);
print("Enough of waiting... now going to read");
}
readUser = User.getById(USER_ID);
if(readUser == null) {
// why does this happen??
throw new IllegalStateException(
Thread.currentThread().getName()
+ " : This shouldn't be happening!!");
} else {
print("SUCCESS: Read user :" + readUser);
}
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
});
print("dying...");
}
};
/**
* this thread waits for the producer to notify it, then tries to read
*/
private static Thread CONSUMER_2 = new Thread("consumer_two"){
@Override
public void run() {
HibernateUtil.getInstance().executeInSession(new Runnable() {
@Override
public void run() {
try {
synchronized (LOCK) {
print("Data not available, Waiting for the producer...");
LOCK.wait(); // wait for the producer notification
print("Data available");
}
print("waiting for some more time....");
Thread.sleep(2 * 1000);
print("Enough of waiting... now going to read");
User readUser = User.getById(USER_ID);
if(readUser == null) {
throw new IllegalStateException(
Thread.currentThread().getName() +
" : This shouldn't be happening!!");
} else {
print("SUCCESS :: Read user :" + readUser);
}
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
});
print("dying...");
}
};
/**
* Just another print method, to include a timestamp and the thread name
* @param msg the message to print
*/
public static void print(String msg) {
System.out.println(Thread.currentThread().getName() + " : "
+ System.currentTimeMillis()+ " : "+ msg);
}
public static void main(String[] args) throws InterruptedException {
// Initialise hibernate in main thread
HibernateUtil.getInstance();
PRODUCER.start();
CONSUMER_1.start();
CONSUMER_2.start();
PRODUCER.join();
CONSUMER_1.join();
CONSUMER_2.join();
print("Exiting....");
}
}
And the output:
INFO: HHH000232: Schema update complete
[main] INFO org.example.hibernate.util.HibernateUtil - Hibernate Initialised..
consumer_two : 1415036718712 : Data not available, Waiting for the producer...
[producer] INFO org.example.hibernate.util.HibernateUtil - Starting the transaction...
[consumer_two] INFO org.example.hibernate.util.HibernateUtil - Starting the transaction...
[consumer_one] INFO org.example.hibernate.util.HibernateUtil - Starting the transaction...
consumer_one : 1415036718831 : Data not available, Waiting for the producer...
[producer] INFO org.example.hibernate.util.HibernateUtil - Committing the transaction...
producer : 1415036718919 : Notifying all consumers
producer : 1415036718919 : dying...
consumer_one : 1415036718919 : Data available
consumer_one : 1415036718919 : waiting for some more time....
consumer_two : 1415036718919 : Data available
consumer_two : 1415036718919 : waiting for some more time....
[producer] INFO org.example.hibernate.util.HibernateUtil - Session was closed...
consumer_one : 1415036720919 : Enough of waiting... now going to read
consumer_two : 1415036720920 : Enough of waiting... now going to read
Nov 03, 2014 11:15:20 PM com.mchange.v2.c3p0.stmt.GooGooStatementCache assimilateNewCheckedOutStatement
INFO: Multiply prepared statement! select user0_.id as id1_0_0_, user0_.name as name2_0_0_ from user user0_ where user0_.id=?
java.lang.IllegalStateException: consumer_one : This shouldn't be happening!!
at org.example.hibernate.Main$2$1.run(Main.java:79)
at org.example.hibernate.util.HibernateUtil.executeInSession(HibernateUtil.java:60)
at org.example.hibernate.Main$2.run(Main.java:61)
[consumer_one] INFO org.example.hibernate.util.HibernateUtil - Committing the transaction...
[consumer_one] INFO org.example.hibernate.util.HibernateUtil - Session was closed...
consumer_one : 1415036720931 : dying...
consumer_two : 1415036720940 : SUCCESS :: Read user :User{id='user-422', name='name-user-422'} org.example.hibernate.model.User#4666d804
consumer_two : 1415036720943 : dying...
[consumer_two] INFO org.example.hibernate.util.HibernateUtil - Committing the transaction...
[consumer_two] INFO org.example.hibernate.util.HibernateUtil - Session was closed...
main : 1415036720943 : Exiting....
Here is my Hibernate config:
<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration SYSTEM "classpath://org/hibernate/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<property name="hibernate.hbm2ddl.auto">update</property>
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="connection.url">jdbc:mysql://localhost:3306/hib_ex</property>
<property name="connection.username">hibuser</property>
<property name="connection.password">hibpass</property>
<!-- JDBC connection pool (use the built-in) -->
<property name="connection.pool_size">10</property>
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.timeout">1800</property>
<property name="hibernate.c3p0.max_statements">50</property>
<property name="connection.provider_class"> org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">false</property>
<property name="hibernate.cache.use_minimal_puts">true</property>
<property name="dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.cache.provider_class">org.hibernate.cache.internal.NoCachingRegionFactory</property>
<property name="show_sql">false</property>
<mapping class="org.example.hibernate.model.User" />
</session-factory>
</hibernate-configuration>
The Hibernate utility:
public enum HibernateUtil {
INSTANCE;
private final Logger LOG = LoggerFactory.getLogger(HibernateUtil.class);
private final String CONFIG_FILE = "hibernate.xml";
private final SessionFactory sessionFactory;
HibernateUtil(){
LOG.info("Initialising hibernate...");
URL configUrl = getClass().getClassLoader().getResource(CONFIG_FILE);
final Configuration configuration = new Configuration();
try {
configuration.configure(configUrl);
ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
.applySettings(configuration.getProperties())
.build();
sessionFactory = configuration.buildSessionFactory(serviceRegistry);
LOG.info("Hibernate Initialised..");
} catch (Exception e){
throw new IllegalStateException("Could not init hibernate!", e);
}
}
public Session getSession(){
if(sessionFactory.getCurrentSession() != null
&& sessionFactory.getCurrentSession().isOpen()) {
return sessionFactory.getCurrentSession();
} else {
LOG.info("Opening a session");
return sessionFactory.openSession();
}
}
public void executeInSession(Runnable runnable){
Session session = getSession();
Transaction transaction = session.getTransaction();
if(!transaction.isActive()){
LOG.info("Starting the transaction...");
transaction.begin();
}
try {
runnable.run();
} catch (Exception e){
e.printStackTrace();
} finally {
if(transaction.isActive()) {
LOG.info("Committing the transaction...");
transaction.commit();
} else {
LOG.info("Transaction was committed...");
}
if(session.isOpen()){
LOG.info("Closing the session...");
session.close();
} else {
LOG.info("Session was closed...");
}
}
}
public static HibernateUtil getInstance(){
return INSTANCE;
}
}
Please help me understand:
- Why does CONSUMER_1's User.getById(userId) return null even after the PRODUCER thread's transaction successfully completes?
- How is CONSUMER_2's User.getById(userId) able to get the object at almost the same moment that CONSUMER_1 is getting null?
To save your valuable time, get the complete code from the github repo.

My guess is that your database transaction isolation guarantees repeatable reads. Since consumer 1 starts by reading the entity and finds it is null, and then later executes the same query in the same transaction, the same result is returned: null. Transactions run in isolation; that's the I in ACID. Your transactions should be as short as possible. You shouldn't leave the transaction and the session open when you find that the expected entity is not available. So, instead of doing
open session and transaction
get entity
wait for entity to be available
get entity again
close the transaction and session
you should do
open session and transaction
get entity
close the transaction and session
wait for entity to be available
open session and transaction
get entity again
close the transaction and session
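For the question's code, a minimal sketch of that second flow could look like this (it reuses HibernateUtil.executeInSession, User.getById and LOCK from the snippets above; readAfterProducer is a name I made up, and the wait/notify handshake is kept as in the original):
private static User readAfterProducer() throws InterruptedException {
    final User[] holder = new User[1];
    // first attempt: its own short session + transaction, closed right after the read
    HibernateUtil.getInstance().executeInSession(new Runnable() {
        @Override
        public void run() {
            holder[0] = User.getById(USER_ID);
        }
    });
    if (holder[0] == null) {
        synchronized (LOCK) {
            LOCK.wait(); // no session or transaction is open while we wait
        }
        // second attempt: a brand-new session + transaction, begun after the
        // producer's commit, so even a repeatable-read transaction sees the row
        HibernateUtil.getInstance().executeInSession(new Runnable() {
            @Override
            public void run() {
                holder[0] = User.getById(USER_ID);
            }
        });
    }
    return holder[0];
}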

Related

How to abort a Task in JavaFX?

Is it possible to abort a Task in JavaFX? My Task could run into situations where I want to cancel the rest of the operations within it.
I would need to return a value, somehow, so I can handle the cause of the abort in the JFX Application Thread.
Most of the related answers I've seen refer to handling an already-cancelled Task, but not how to manually cancel it from within the Task itself.
The cancel() method seems to have no effect, as both messages below are displayed:
public class LoadingTask<Void> extends Task {
@Override
protected Object call() throws Exception {
Connection connection;
// ** Connect to server ** //
updateMessage("Contacting server ...");
try {
connection = DataFiles.getConnection();
} catch (SQLException ex) {
updateMessage("ERROR: " + ex.getMessage());
ex.printStackTrace();
cancel();
return null;
}
// ** Check user access ** //
updateMessage("Verifying user access ...");
try {
String username = System.getProperty("user.name");
ResultSet resultSet = connection.createStatement().executeQuery(
SqlQueries.SELECT_USER.replace("%USERNAME%", username));
// If user doesn't exist, block access
if (!resultSet.next()) {
}
} catch (SQLException ex) {
}
return null;
}
}
An example would be greatly appreciated.
Why not just let the task go into a FAILED state if it fails? All you need (I also corrected the errors with the type of the task and the return type of the call method) is
public class LoadingTask extends Task<Void> {
@Override
protected Void call() throws Exception {
Connection connection;
// ** Connect to server ** //
updateMessage("Contacting server ...");
connection = DataFiles.getConnection();
// ** Check user access ** //
updateMessage("Verifying user access ...");
String username = System.getProperty("user.name");
ResultSet resultSet = connection.createStatement().executeQuery(
SqlQueries.SELECT_USER.replace("%USERNAME%", username));
// I am not at all sure what this is supposed to do....
// If user doesn't exist, block access
if (!resultSet.next()) {
}
return null;
}
}
Now if an exception is thrown by DataFiles.getConnection(), the call method terminates immediately with an exception (the remainder is not executed) and the task enters a FAILED state. If you need access to the exception in case something goes wrong, you can do:
LoadingTask loadingTask = new LoadingTask();
loadingTask.setOnFailed(e -> {
Throwable exc = loadingTask.getException();
// do whatever you need with exc, e.g. log it, inform user, etc
});
loadingTask.setOnSucceeded(e -> {
// whatever you need to do when the user logs in...
});
myExecutor.execute(loadingTask);

hazelcast ScheduledExecutorService lost tasks after node shutdown

I'm trying to use hazelcast ScheduledExecutorService to execute some periodic tasks. I'm using hazelcast 3.8.1.
I start one node and then the other, and the tasks are distributed between both nodes and properly executed.
If I shutdown the first node, then the second one will start to execute the periodic tasks that were previously on the first node.
The problem is that if I stop the second node instead of the first, its tasks are not rescheduled to the first one. This happens even if I have more nodes: if I shut down the last node to receive tasks, those tasks are lost.
The shutdown is always done with Ctrl+C.
I've created a test application, with some sample code from the hazelcast examples and some pieces of code I've found on the web. I start two instances of this app.
public class MasterMember {
/**
* The logger.
*/
final static Logger logger = LoggerFactory.getLogger(MasterMember.class);
public static void main(String[] args) throws Exception {
Config config = new Config();
config.setProperty("hazelcast.logging.type", "slf4j");
config.getScheduledExecutorConfig("scheduler").
setPoolSize(16).setCapacity(100).setDurability(1);
final HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
Runtime.getRuntime().addShutdownHook(new Thread() {
HazelcastInstance threadInstance = instance;
@Override
public void run() {
logger.info("Application shutdown");
for (int i = 0; i < 12; i++) {
logger.info("Verifying whether it is safe to close this instance");
boolean isSafe = getResultsForAllInstances(hzi -> {
if (hzi.getLifecycleService().isRunning()) {
return hzi.getPartitionService().forceLocalMemberToBeSafe(10, TimeUnit.SECONDS);
}
return true;
});
if (isSafe) {
logger.info("Verifying whether cluster is safe.");
isSafe = getResultsForAllInstances(hzi -> {
if (hzi.getLifecycleService().isRunning()) {
return hzi.getPartitionService().isClusterSafe();
}
return true;
});
if (isSafe) {
System.out.println("is safe.");
break;
}
}
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
threadInstance.shutdown();
}
private boolean getResultsForAllInstances(
Function<HazelcastInstance, Boolean> hazelcastInstanceBooleanFunction) {
return Hazelcast.getAllHazelcastInstances().stream().map(hazelcastInstanceBooleanFunction).reduce(true,
(old, next) -> old && next);
}
});
new Thread(() -> {
try {
Thread.sleep(10000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
IScheduledExecutorService scheduler = instance.getScheduledExecutorService("scheduler");
scheduler.scheduleAtFixedRate(named("1", new EchoTask("1")), 5, 10, TimeUnit.SECONDS);
scheduler.scheduleAtFixedRate(named("2", new EchoTask("2")), 5, 10, TimeUnit.SECONDS);
scheduler.scheduleAtFixedRate(named("3", new EchoTask("3")), 5, 10, TimeUnit.SECONDS);
scheduler.scheduleAtFixedRate(named("4", new EchoTask("4")), 5, 10, TimeUnit.SECONDS);
scheduler.scheduleAtFixedRate(named("5", new EchoTask("5")), 5, 10, TimeUnit.SECONDS);
scheduler.scheduleAtFixedRate(named("6", new EchoTask("6")), 5, 10, TimeUnit.SECONDS);
}).start();
new Thread(() -> {
try {
// delays init
Thread.sleep(20000);
while (true) {
IScheduledExecutorService scheduler = instance.getScheduledExecutorService("scheduler");
final Map<Member, List<IScheduledFuture<Object>>> allScheduledFutures =
scheduler.getAllScheduledFutures();
// check if the subscription already exists as a task, if so, stop it
for (final List<IScheduledFuture<Object>> entry : allScheduledFutures.values()) {
for (final IScheduledFuture<Object> objectIScheduledFuture : entry) {
logger.info(
"TaskStats: name {} isDone() {} isCanceled() {} total runs {} delay (sec) {} other statistics {} ",
objectIScheduledFuture.getHandler().getTaskName(), objectIScheduledFuture.isDone(),
objectIScheduledFuture.isCancelled(),
objectIScheduledFuture.getStats().getTotalRuns(),
objectIScheduledFuture.getDelay(TimeUnit.SECONDS),
objectIScheduledFuture.getStats());
}
}
Thread.sleep(15000);
}
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}).start();
while (true) {
Thread.sleep(1000);
}
// Hazelcast.shutdownAll();
}
}
And the task
public class EchoTask implements Runnable, Serializable {
/**
* serialVersionUID
*/
private static final long serialVersionUID = 5505122140975508363L;
final Logger logger = LoggerFactory.getLogger(EchoTask.class);
private final String msg;
public EchoTask(String msg) {
this.msg = msg;
}
@Override
public void run() {
logger.info("--> " + msg);
}
}
Am I doing something wrong?
Thanks in advance.
-- EDIT --
Modified (and updated above) the code to use a logger instead of System.out. Added logging of task statistics and fixed the usage of the Config object.
The logs:
Node1_log
Node2_log
Forgot to mention that I wait until all the tasks are running on the first node before starting the second one.
Bruno, thanks for reporting this; it really is a bug. Unfortunately it was not as obvious with multiple nodes as it is with just two. As you figured out in your answer, it's not losing the task, but rather keeping it cancelled after a migration. Your fix, however, is not safe, because a Task can be cancelled and have a null Future at the same time, e.g. when you cancel the master replica, the backup, which never had a future, just gets the result. The fix is very close to what you did: in prepareForReplication(), when in migrationMode, we avoid setting the result. I will push a fix for that shortly; just running a few more tests. It will be available in master and later versions.
I logged an issue with your finding, if you don't mind: https://github.com/hazelcast/hazelcast/issues/10603. You can keep track of its status there.
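Roughly, the fix described above could look like this inside ScheduledExecutorContainer (a hypothetical sketch, not the actual patch; publishTaskResult stands in for whatever the real replication code does with the result, and only tasks and ScheduledTaskDescriptor come from the snippet below):
void prepareForReplication(boolean migrationMode) {
    for (ScheduledTaskDescriptor descriptor : tasks.values()) {
        // ... existing per-task replication bookkeeping ...
        if (!migrationMode) {
            // Publish the (possibly cancelled) result only for plain backup
            // replication. During a migration, carrying a cancelled result
            // over is what blocked rescheduling on the new owner.
            publishTaskResult(descriptor); // hypothetical helper
        }
    }
}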
I was able to do a quick fix for this issue by changing the ScheduledExecutorContainer class of the hazelcast project (using the 3.8.1 source code), namely the promoteStash() method. Basically I added a condition for the case where the task was cancelled on a previous migration of data.
I don't know the possible side effects of this change, or if this is the best way to do it!
void promoteStash() {
for (ScheduledTaskDescriptor descriptor : tasks.values()) {
try {
if (logger.isFinestEnabled()) {
logger.finest("[Partition: " + partitionId + "] " + "Attempt to promote stashed " + descriptor);
}
if (descriptor.shouldSchedule()) {
doSchedule(descriptor);
} else if (descriptor.getTaskResult() != null && descriptor.getTaskResult().isCancelled()
&& descriptor.getScheduledFuture() == null) {
// tasks that were cancelled when this node migrated them away are
// not rescheduled once they are sent back to this node...
logger.fine("[Partition: " + partitionId + "] " + "Attempt to promote stashed canceled task "
+ descriptor);
descriptor.setTaskResult(null);
doSchedule(descriptor);
}
descriptor.setTaskOwner(true);
} catch (Exception e) {
throw rethrow(e);
}
}
}

Threading in Spring

I'm trying to do some optimization in my code and would like to spawn a thread for a time-consuming operation. While implementing that optimization I ran into an issue which was driving me crazy. I simplified it and created a test case for that specific issue. (I'm using SpringJUnit4ClassRunner, so the transaction is properly started at the beginning of the testCRUD method.)
Could someone help me understand why foundParent is null in the thread?
private Semaphore sema = new Semaphore(0, false);
private long parentId;
@Test
public void testCRUD() {
//create
DBParent parent = null;
{
parent = new DBParent();
parentDao.persist(parent);
parentId = parent.getId();
assertTrue(parentId > 0);
parentDao.flush();
}
(new Thread(
new Runnable() {
public void run()
{
System.out.println("Start adding childs !");
DBParent foundParent = parentDao.findById(parentId);
assertTrue(foundParent != null); //ASSERTION FAILS HERE !!!!
System.out.println("Releasing semaphore !");
sema.release();
System.out.println("End adding childs !");
}
})).start();
try {
System.out.println("Acquiring semaphore !");
sema.acquire();
}
catch (InterruptedException e) {
e.printStackTrace();
}
}
=============================EDITED===================================
As one comment suggested, I created a threadManager bean which spawns the thread. Here is the code of the threadManager:
public class ThreadManager {
@Transactional(propagation=Propagation.REQUIRES_NEW)
public void executeTask(String Name, Runnable task) {
(new Thread(task, Name)).start();
}
}
Then in the previous test, instead of starting the thread manually, I just post the task to the thread manager like this:
@Autowired private ParentDao parentDao;
@Autowired private ThreadManager threadManager;
private Semaphore sema = new Semaphore(0, false);
private long parentId;
@Test
public void testCRUD() {
//create
DBParent parent = null;
{
parent = new DBParent();
parentDao.persist(parent);
parentId = parent.getId();
assertTrue(parentId > 0);
parentDao.flush();
}
threadManager.executeTask("BG processing...",
new Runnable() {
public void run()
{
System.out.println("Start adding childs !");
DBParent foundParent = parentDao.findById(parentId);
assertTrue(foundParent != null); //ASSERTION FAILS HERE !!!!
System.out.println("Releasing semaphore !");
sema.release();
System.out.println("End adding childs !");
}
});
try {
System.out.println("Acquiring semaphore !");
sema.acquire();
}
catch (InterruptedException e) {
e.printStackTrace();
}
}
Unfortunately this doesn't work either !!! :-(
The transaction context is bound to the thread. So the code in the spawned thread doesn't run in the same transaction context as the code in the initial thread. So, due to transaction isolation (the I in ACID), the spawned thread doesn't see what the initial thread's transaction is inserting into the database.
You can bind a Spring transaction to a new thread, to run transactions & Hibernate/JPA access in it. But this has to be a different TX and JPA/HB session from the other threads.
Spring's code for OpenSessionInViewFilter is a reasonable example of how to bind a Hibernate session to Spring's TX management. You can strip it down to fairly minimal code.
See:
org.springframework.orm.hibernate3.support.OpenSessionInViewFilter
OpenSessionInViewFilter.doFilterInternal() -- this is where it actually binds it
TransactionSynchronizationManager.bindResource()
TransactionSynchronizationManager.unbindResource()
TransactionSynchronizationManager.getResource()
In one project (IIRC) I wrapped this functionality into a 'ServerThreadHb' class, to setup & save previous thread-bindings on construction -- with a restore() method to be called in a finally block, to restore previous bindings.
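A minimal sketch of that binding, modeled on what OpenSessionInViewFilter.doFilterInternal() does (Spring 3 / Hibernate 3 API, with imports from org.hibernate, org.springframework.orm.hibernate3 and org.springframework.transaction.support; it assumes a sessionFactory is injected next to the DAOs):
Runnable task = new Runnable() {
    public void run() {
        // open and bind a session for THIS thread, the way
        // OpenSessionInViewFilter.doFilterInternal() does
        Session session = SessionFactoryUtils.getSession(sessionFactory, true);
        TransactionSynchronizationManager.bindResource(
                sessionFactory, new SessionHolder(session));
        try {
            // DAO calls here now use this thread's own session; note the
            // parent is still invisible until the test's transaction commits
            DBParent foundParent = parentDao.findById(parentId);
            System.out.println("found: " + foundParent);
            sema.release();
        } finally {
            TransactionSynchronizationManager.unbindResource(sessionFactory);
            SessionFactoryUtils.closeSession(session);
        }
    }
};
new Thread(task, "BG processing...").start();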
For your posted code sample, there isn't much point in running the work on a separate thread, since you synchronously wait for it to be done. However, I assume you were planning to remove that constraint & extend the functionality.

PostSaveDocument call agent asynchronously

I have an Xpage page with a single Notes document datasource.
After saving a document I want to (conditionally) trigger an agent. The agent takes some time to process and we don't want the user to have to wait for the result, so it should be executed asynchronously.
I've managed to get it working from client side JS by using an XHR to the agent URL, but I would like to do it server side so I can control the "Next page" better. When using .run() or .runonserver() the client waits till the agent completes.
Any idea how I could trigger an agent (from SSJS) on PostSaveDocument without the client waiting for the result?
Take a look at the Thread and Jobs application on OpenNTF.org. There are nice demos of running tasks in the background; check it here
As Martin suggested, I used the JobScheduler example on OpenNTF and modified it to suit my needs. The resulting code can be found below. Any comments or improvements are welcome.
import java.security.AccessController;
import java.security.PrivilegedAction;
import lotus.domino.Agent;
import lotus.domino.Database;
import lotus.domino.NotesException;
import lotus.domino.Session;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.IJobChangeEvent;
import org.eclipse.core.runtime.jobs.Job;
import org.eclipse.core.runtime.jobs.JobChangeAdapter;
import com.ibm.domino.xsp.module.nsf.ThreadSessionExecutor;
public class JobRunner {
public static void start(String dbPath, String agentName, String paramDocId) {
synchronized (JobRunner.class) {
runningJob = new ISPJob(dbPath, agentName, paramDocId);
runningJob.addJobChangeListener(new JobChangeAdapter() {
public void done(IJobChangeEvent event) {
System.out.println("Done event");
runningJob = null;
}
});
AccessController.doPrivileged(new PrivilegedAction<Object>() {
public Object run() {
runningJob.schedule();
return null;
}
});
}
}
private static ISPJob runningJob;
private static final class ISPJob extends Job {
private ThreadSessionExecutor<IStatus> executor;
private String docId;
private String dbPath;
private String agentName;
public ISPJob(String paramDbPath, String paramAgentName, String paramDocId) {
super(paramDocId);
this.docId = paramDocId;
this.dbPath = paramDbPath;
this.agentName = paramAgentName;
this.executor = new ThreadSessionExecutor<IStatus>() {
@Override
protected IStatus run(Session session) throws NotesException {
System.out.println("Job started" + docId);
System.out.println(" >> Session created: "
+ session.getUserName() + ", Effective User:"
+ session.getEffectiveUserName());
Database db = session.getDatabase(null,dbPath);
if (db != null) {
try {
if (!db.isOpen()) db.open();
if (db.isOpen()) {
System.out.println(" >> Database opened: "
+ db.getTitle());
Agent agent = db.getAgent(agentName);
try {
System.out.println(" >> Agent Started: " + agent.getName());
agent.run(docId);
System.out.println(" >> Agent Ran: " + agent.getName());
} finally {
agent.recycle();
}
}
} finally {
db.recycle();
}
}
System.out.println("Job completed");
return Status.OK_STATUS;
}
};
}
protected IStatus run(IProgressMonitor monitor) {
try {
return executor.run();
} catch (Exception ex) {
return Status.CANCEL_STATUS;
}
}
};
}
You could use a session bean (so it won't get destroyed) that kicks off a Java thread, as in the sketch below. Or you could issue a server console command in code. Or you could implement a DOTS listener.
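For the session-bean route, the core of it is just fire-and-forget (a hypothetical sketch in plain Java; the bean and method names are made up, and the background code would still need its own Domino session, e.g. via ThreadSessionExecutor as in the accepted answer above):
public class AgentKicker {
    // start the long-running work and return immediately,
    // so the XPage doesn't wait for the agent to finish
    public void kickOff(Runnable agentCall) {
        Thread worker = new Thread(agentCall, "background-agent");
        worker.setDaemon(true); // don't keep the JVM alive for it
        worker.start();
    }
}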
This may or may not be an option depending on your application requirements, but I am having good success calling a function in the onClientLoad event, which essentially kicks off the process after the XPage has fully loaded.

Jackrabbit and concurrent modification

After doing some performance testing of our application, which uses Jackrabbit, we are facing a huge problem with concurrent modification of Jackrabbit's repository. The problem appears when we add or edit nodes from multiple threads. I then wrote a very simple test which shows that the problem is not in our environment.
Here it is:
Simple Stateless Bean
@Stateless
@Local(TestFacadeLocal.class)
@Remote(TestFacadeRemote.class)
public class TestFacadeBean implements TestFacadeRemote, TestFacadeLocal {
public void doAction(int name) throws Exception {
new TestSynch().doAction(name);
}
}
Simple class
public class TestSynch {
public void doAction(int name) throws Exception {
Session session = ((Repository) new InitialContext().
lookup("java:jcr/local")).login(
new SimpleCredentials("username", "pwd".toCharArray()));
List<Node> added = new ArrayList<Node>();
Node folder = session.getRootNode().getNode("test");
for (int i = 0; i <= 100; i++) {
Node child = folder.addNode("" + System.currentTimeMillis(),
"nt:folder");
child.addMixin("mix:versionable");
added.add(child);
}
// saving batch changes
session.save();
//checking in all created nodes
for (Node node : added) {
session.getWorkspace().getVersionManager().checkin(node.getPath());
}
}
}
And Test class
public class Test {
private int c = 0;
private int countAll = 50;
private ExecutorService executor = Executors.newFixedThreadPool(5);
public ExecutorService getExecutor() {
return executor;
}
public static void main(String[] args) {
Test test = new Test();
try {
test.start();
} catch (Exception e) {
e.printStackTrace();
}
}
private void start() throws Exception {
long time = System.currentTimeMillis();
TestFacadeRemote testBean = (TestFacadeRemote) getContext().
lookup( "test/TestFacadeBean/remote");
for (int i = 0; i < countAll; i++) {
getExecutor().execute(new TestInstallerThread(i, testBean));
}
getExecutor().shutdown();
while (!getExecutor().isTerminated()) {
try {
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
System.out.println(c + " shutdown " +
(System.currentTimeMillis() - time));
}
class TestInstallerThread implements Runnable {
private int number = 0;
TestFacadeRemote testBean;
public TestInstallerThread(int number, TestFacadeRemote testBean) {
this.number = number;
this.testBean = testBean;
}
@Override
public void run() {
try {
System.out.println("Installing data " + number);
testBean.doAction(number);
System.out.println("STOP" + number);
} catch (Exception e) {
e.printStackTrace();
c++;
}
}
}
public Context getContext() throws NamingException {
Properties properties = new Properties();
//init props
..............
return new InitialContext(properties);
}
}
If I initialize the executor with 1 thread in the pool, everything completes without any error. If I initialize the executor with 5 threads, I sometimes get errors:
on the client:
java.lang.RuntimeException: javax.transaction.RollbackException: [com.arjuna.ats.internal.jta.transaction.arjunacore.commitwhenaborted] [com.arjuna.ats.internal.jta.transaction.arjunacore.commitwhenaborted] Can't commit because the transaction is in aborted state
at org.jboss.aspects.tx.TxPolicy.handleEndTransactionException(TxPolicy.java:198)
on the server, at the beginning, this warning:
ItemStateReferenceCache [ItemStateReferenceCache.java:176] overwriting cached entry 187554a7-4c41-404b-b6ee-3ce2a9796a70
and then:
javax.jcr.RepositoryException: org.apache.jackrabbit.core.state.ItemStateException: there's already a property state instance with id 52fb4b2c-3ef4-4fc5-9b79-f20a6b2e9ea3/{http://www.jcp.org/jcr/1.0}created
at org.apache.jackrabbit.core.PropertyImpl.restoreTransient(PropertyImpl.java:195) ~[jackrabbit-core-2.2.7.jar:2.2.7]
at org.apache.jackrabbit.core.ItemSaveOperation.restoreTransientItems(ItemSaveOperation.java:879) [jackrabbit-core-2.2.7.jar:2.2.7]
We have tried synchronizing this method, and other workflows for handling the multithreaded calls as a single thread. Nothing helps.
And one more thing: when we ran a similar test without the EJB layer, all worked fine.
It looks like the container wraps it in its own transaction and then everything crashes.
Maybe somebody has faced such a problem.
Thanks in advance.
From the Jackrabbit Wiki:
The JCR specification explicitly states that a Session is not thread-safe (JCR 1.0 section 7.5 and JCR 2.0 section 4.1.2). Hence, Jackrabbit does not support multiple threads concurrently reading from or writing to the same session. Each session should only ever be accessed from one thread.
...
If you need to write to the same node concurrently, then you need to use multiple sessions, and use JCR locking to ensure there is no conflict.
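With the standard JCR 2.0 API that could look roughly like the following (a sketch only, with imports from javax.jcr and javax.jcr.lock; it assumes the /test node carries the mix:lockable mixin, and exception handling is left to the caller):
void addFolderWithLock(Repository repository) throws RepositoryException {
    // each thread gets its own session: sessions must never be shared
    Session session = repository.login(
            new SimpleCredentials("username", "pwd".toCharArray()));
    try {
        LockManager lockManager = session.getWorkspace().getLockManager();
        // serialize concurrent writers on the shared parent node
        lockManager.lock("/test", false /* isDeep */, true /* sessionScoped */,
                60 /* timeoutHint, seconds */, null /* ownerInfo */);
        try {
            Node folder = session.getNode("/test");
            Node child = folder.addNode(
                    String.valueOf(System.currentTimeMillis()), "nt:folder");
            child.addMixin("mix:versionable");
            session.save();
        } finally {
            lockManager.unlock("/test");
        }
    } finally {
        session.logout();
    }
}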
