I have a table 'temp':
Code:
CREATE TABLE `temp` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`student_id` bigint(20) unsigned NOT NULL,
`current` tinyint(1) NOT NULL DEFAULT '1',
`closed_at` datetime NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `unique_index` (`student_id`,`current`,`closed_at`),
KEY `studentIndex` (`student_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
The corresponding Java POJO is http://pastebin.com/JHZwubWd . The table has a unique constraint so that only one record per student can be active.
I have test code which continually adds records for a student (each time marking the older active record as inactive and adding a new active one) and which also, in a different thread, accesses some random (unrelated) table.
Code:
public static void main(String[] args) throws Exception {
    final SessionFactory sessionFactory = new AnnotationConfiguration().configure().buildSessionFactory();
    ExecutorService executorService = Executors.newFixedThreadPool(1);
    int runs = 0;
    while (true) {
        Temp testPojo = new Temp();
        testPojo.setStudentId(1L);
        testPojo.setCurrent(true);
        testPojo.setClosedAt(new Date(0));
        add(testPojo, sessionFactory);
        Thread.sleep(1500);
        executorService.submit(new Callable<Object>() {
            @Override
            public Object call() throws Exception {
                Session session = sessionFactory.openSession();
                // Dummy code to print the number of users in the system.
                // The idea is to "touch" the DB/session in this background thread.
                System.out.println("No of users: " + session.createCriteria(User.class).list().size());
                session.close();
                return null;
            }
        });
        if (runs++ > 100) {
            break;
        }
    }
    executorService.shutdown();
    executorService.awaitTermination(1, TimeUnit.MINUTES);
}
private static void add(final Temp testPojo, final SessionFactory sessionFactory) throws Exception {
    Session dbSession = null;
    Transaction transaction = null;
    try {
        dbSession = sessionFactory.openSession();
        transaction = dbSession.beginTransaction();
        // Mark all previous states of the student as not current.
        List<Temp> oldActivePojos = (List<Temp>) dbSession.createCriteria(Temp.class)
                .add(Restrictions.eq("studentId", testPojo.getStudentId()))
                .add(Restrictions.eq("current", true))
                .list();
        for (final Temp oldActivePojo : oldActivePojos) {
            oldActivePojo.setCurrent(false);
            oldActivePojo.setClosedAt(new Date());
            dbSession.update(oldActivePojo);
            LOG.debug(String.format(" Updated old state as inactive:%s", oldActivePojo));
        }
        if (!oldActivePojos.isEmpty()) {
            dbSession.flush();
        }
        LOG.debug(String.format(" saving state:%s", testPojo));
        dbSession.save(testPojo);
        LOG.debug(String.format(" new state saved:%s", testPojo));
        transaction.commit();
    } catch (Exception exception) {
        LOG.fatal(String.format("Exception in adding state: %s", testPojo), exception);
        if (transaction != null) {
            transaction.rollback();
        }
    } finally {
        if (dbSession != null) {
            dbSession.close();
        }
    }
}
Upon running the code, after a few runs, I get a unique-index constraint exception. It happens because, for some strange reason, the query does not find the latest active record but instead some older, stale active record, and tries marking it as inactive before saving (even though the DB already contains a newer active record).
Notice that both pieces of code share the same SessionFactory and work on totally different tables. My guess is that some internal cache state gets dirty. If I use two different SessionFactory instances for the foreground and background threads, it works fine.
Another weird thing: in the background thread (where I print the number of users), if I wrap the read in a transaction (even though it is only a read operation), the code works fine! So it looks like I need to wrap all DB operations (read or write) in a transaction for it to work in a multithreaded environment.
Can someone point out the issue?
Yes, basically, transaction demarcation is always needed:
Hibernate documentation says:
Database, or system, transaction boundaries are always necessary. No communication with the database can occur outside of a database transaction (this seems to confuse many developers who are used to the auto-commit mode). Always use clear transaction boundaries, even for read-only operations. Depending on your isolation level and database capabilities this might not be required, but there is no downside if you always demarcate transactions explicitly.
When trying to reproduce your setup I experienced some problems caused by the lack of transaction demarcation (though not the same as yours). Further investigation showed that sometimes, depending on the connection pool configuration, add() is executed in the same database transaction as the previous call(). Adding beginTransaction()/commit() to call() fixed that problem. This behaviour can be responsible for your problem, since, depending on the transaction isolation level, add() can work with a stale snapshot of the database taken at the beginning of the transaction, i.e. during the previous call().
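For illustration, here is a minimal sketch of that fix against the code from the question (same sessionFactory and User class; the only change is the explicit transaction around the read):
executorService.submit(new Callable<Object>() {
    @Override
    public Object call() throws Exception {
        Session session = sessionFactory.openSession();
        Transaction tx = null;
        try {
            // explicit demarcation, even though this is a read-only operation
            tx = session.beginTransaction();
            System.out.println("No of users: " + session.createCriteria(User.class).list().size());
            tx.commit();
        } catch (Exception e) {
            if (tx != null) {
                tx.rollback();
            }
            throw e;
        } finally {
            session.close();
        }
        return null;
    }
});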
Related
Upon deletion of my custom item LELettering, in which I have a view with ARRegister lines, I want to:
- Reverse the payment application for every invoice in my ARRegister lines, for the payment also included in my ARRegister lines.
- Clear the LetteringCD I put in a custom field in my ARRegister extension.
When I do either one alone, it works.
My problem is when I do both: reverseApplication() does its job, but as a side effect it updates the ARRegister records when I call the reverseApplication method from the ARPaymentEntry.
This leads to the error "Another process has updated the ARRegister record and your changes will be lost" when I try to update the ARRegister records to clear my custom field LetteringCD.
I think the problem is that my view Lines is not refreshed once reverseApplication is called, so it still holds the not-yet-updated ARRegister records.
I tried ClearQueryCache() but it doesn't seem to work. How do I force a refresh on my view Lines so I can update the records again?
public PXSelect<LELettering> Piece;
public PXSelect<ARRegister> Lines;

protected virtual void LELettering_RowDeleting(PXCache sender, PXRowDeletingEventArgs e)
{
    // Cancel the lettering by removing every LetteringCD from the ARRegister lines
    // and reversing the payment applications.
    cancelLettering();
}

protected void cancelLettering()
{
    reverseApplication();
    eraseLetteringCD();
}

protected void reverseApplication()
{
    string refNbr = "";
    List<ARRegister> lines = new List<ARRegister>();
    foreach (ARRegister line in PXSelect<ARRegister, Where<ARRegisterLeExt.lettrageCD,
        Equal<Required<ARRegisterLeExt.lettrageCD>>>>.Select(this, Piece.Current.LetteringCD))
    {
        if (line.DocType == "PMT") refNbr = line.RefNbr;
        else lines.Add(line);
    }
    ARPaymentEntry graphPmt = getGraphPayment(refNbr, "PMT");
    foreach (ARAdjust line in graphPmt.Adjustments_History.Select())
    {
        graphPmt.Adjustments_History.Current = line;
        graphPmt.reverseApplication.Press();
    }
    graphPmt.release.Press();
    graphPmt.Actions.PressSave();
}

// Here is my problem
protected void eraseLetteringCD()
{
    foreach (var line in Lines.Select())
    {
        line.GetItem<ARRegister>().GetExtension<ARRegisterLeExt>().LettrageCD = null;
        Lines.Current = Lines.Update(line);
    }
    Actions.PressSave();
}

protected ARPaymentEntry getGraphPayment(string refNbr, string docType)
{
    ARPaymentEntry graphPmt = CreateInstance<ARPaymentEntry>();
    ARPayment pmt = PXSelect<ARPayment, Where<ARPayment.refNbr, Equal<Required<ARPayment.refNbr>>,
        And<ARPayment.docType, Equal<Required<ARPayment.docType>>>>>
        .Select(this, refNbr, docType);
    if (pmt == null) throw new PXException(Constantes.errNotFound);
    graphPmt.Document.Current = pmt;
    return graphPmt;
}
Edit:
The problem comes from the fact that the ARRegister records are saved twice, once by the reverse payment application and once in eraseLetteringCD, but I don't know how to avoid this in my case.
Some things I might try...
I see that there are multiple graphs involved. The second graph will need to refresh its results before it can process. There are a few ways of doing this that I try...
You can try to clear the query cache as shown below. My guess is that when you call Lines.Select it returns an old cached value:
protected void cancelLettering()
{
    reverseApplication();
    Lines.Cache.ClearQueryCache();
    eraseLetteringCD();
}
If the select is still returning cached results, I find it helpful to locate the cached row myself, as shown below. The reverse approach would be to use PXSelectReadonly<> for the foreach select statement, since that should return the records from the DB rather than any cached values.
protected void eraseLetteringCD()
{
    // Also try PXSelectReadonly<> in place of Lines.Select()
    foreach (ARRegister line in Lines.Select())
    {
        // Get the cached row
        var cachedRow = (ARRegister)Lines.Cache.Locate(line) ?? line;
        cachedRow.GetExtension<ARRegisterLeExt>().LettrageCD = null;
        Lines.Update(cachedRow);
    }
    Actions.PressSave();
}
After the first PressSave you could also try to clear the cache and the query cache to set up for the next call. Ideally, if possible, do just one persist.
If the first graph is running a long operation, you can make the code pause and wait for the operation to complete. This is useful when using multiple graphs: the second graph's persist should not run until the first long-running process has finished. Example using the ID of your graphPmt instance:
graphPmt.release.Press();
PXLongOperation.WaitCompletion(graphPmt.UID);
I am faced with the following issue: I actively use the DominoDocument class (a wrapped Document) in my projects, particularly as the basis for my business model objects.
Very often I need to access / iterate my business model objects as an Anonymous user, with the underlying lotus.domino.Document retrieved via the SessionAsSigner session object (for example in REST services, or in an xAgent, etc.).
The behavior of the restoreWrappedDocument() method in such cases really breaks the flexibility of this architecture: the method tries to restore the wrapped document with the access rights of the current execution environment, and of course that causes ACL errors.
Let’s consider the following code snippet as example:
public void test3() {
    try {
        System.out.println(">>>>>");
        System.out.println(">>>>> START");
        lotus.domino.Database db = AppBean.getSessionAsSigner().getDatabase(
                AppBean.getInstance().getContactsDBserverName(), AppBean.getInstance().getContactsDBname(), false);
        Document notesDoc = db.getAllDocuments().getFirstDocument();
        String dbName = notesDoc.getParentDatabase().getServer() + "!!" + notesDoc.getParentDatabase().getFilePath();
        DominoDocument ds = DominoDocument.wrap(dbName, notesDoc, null, "exception", false, "UseWeb", null);
        System.out.println(">> 1 >> " + ds.getValue("form"));
        ds.getDocument().recycle();
        try {
            ds.restoreWrappedDocument();
        } catch (Throwable e2) {
            System.out.println(">> 2 - exception - >> " + e2.toString());
            e2.printStackTrace();
        }
        try {
            System.out.println(">> 3 >> " + ds.getValue("form"));
        } catch (Throwable e3) {
            System.out.println(">> 3 - exception - >> " + e3.toString());
        }
        System.out.println(">>>>> END");
        System.out.println(">>>>>");
    } catch (Exception e) {
        e.printStackTrace();
    }
}
1) Scenario 1: executing this code as an authenticated user that has access to the target DB gives the following result:
So the method works as expected and everything is perfect.
2) Scenario 2: executing this code as an Anonymous user causes an exception (which, in general, is expected):
You can clearly see that restoreWrappedDocument() executes some helper methods in order to get the DB, and of course that is done with the current user's access level (Anonymous).
Possible solutions:
The obvious solution is to add custom logic to my business object model that performs a custom restore (basically based on the server and DB names plus the document's UNID or NoteID), for example:
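// Rough sketch of such a custom restore; serverName, filePath and unid are
// placeholders for the coordinates the business object keeps, and AppBean
// is the helper from the snippet above.
lotus.domino.Database db = AppBean.getSessionAsSigner().getDatabase(serverName, filePath, false);
lotus.domino.Document restored = db.getDocumentByUNID(unid);
// ...then re-attach "restored" to the business model object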
What I am very curious about: is there any smarter or built-in way of restoring wrapped documents with SessionAsSigner rights?
Thanks!
I don't think there's a proper way to do this, other than your option 1, for better or for worse.
However, and I'm not saying this is a good idea, it seems like DominoDocument likely gets to its session through the current request map. If you want to be tricky, you could try temporarily swapping session out for sessionAsSigner in the request scope, calling restoreWrappedDocument, and then swapping it back.
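Purely as an illustration of that idea (untested; it assumes the XPages variable resolver consults the request map first, and ExtLibUtil comes from the Extension Library):
// Sketch: temporarily shadow "session" with sessionAsSigner in the
// request map, restore the wrapped document, then swap it back.
FacesContext ctx = FacesContext.getCurrentInstance();
Map requestMap = ctx.getExternalContext().getRequestMap();
Object originalSession = requestMap.get("session");
requestMap.put("session", ExtLibUtil.resolveVariable(ctx, "sessionAsSigner"));
try {
    ds.restoreWrappedDocument();
} finally {
    requestMap.put("session", originalSession); // always swap back
}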
A solution with a Helper class using Java Reflection:
(Incomplete, missing some parts)
package ch.hasselba.xpages;

import java.lang.reflect.Field;

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;

import com.ibm.xsp.FacesExceptionEx;
import com.ibm.xsp.model.domino.DominoUtils;
import com.ibm.xsp.model.domino.wrapped.DominoDocument;

public class DominoDocumentUtil {
    private static final long serialVersionUID = 1L;
    private transient final Field wrappedObj;
    private transient final DominoDocument dominoDoc;

    public DominoDocumentUtil(DominoDocument doc) throws SecurityException, NoSuchFieldException {
        dominoDoc = doc;
        wrappedObj = doc.getClass().getDeclaredField("_wrappedObject");
        wrappedObj.setAccessible(true);
    }

    public void restoreWrappedDocument(Database db) throws IllegalArgumentException, IllegalAccessException {
        try {
            Document doc = DominoUtils.getDocumentById(db, dominoDoc.getDocumentId(),
                    dominoDoc.isAllowDeletedDocs());
            this.wrappedObj.set(dominoDoc, doc);
        } catch (NotesException ne) {
            throw new FacesExceptionEx(ne.getMessage());
        }
    }
}
To use the class you can call the restoreWrappedDocument method with a database opened with sessionAsSigner:
DominoDocumentUtil util = new DominoDocumentUtil(ds);
util.restoreWrappedDocument(db);
I have a database manager class that manages access to the database. It contains the connection pool and two DAOs, each for a different table. It looks something like this:
public class ActivitiesDatabase {
    private final ConnectionSource connectionSource;
    private final Dao<JsonActivity, String> jsonActivityDao;
    private final Dao<AtomActivity, String> atomActivityDao;

    private ActivitiesDatabase() {
        try {
            connectionSource = new JdbcPooledConnectionSource(Consts.JDBC);
            TableUtils.createTableIfNotExists(connectionSource, JsonActivity.class);
            jsonActivityDao = DaoManager.createDao(connectionSource, JsonActivity.class);
            TableUtils.createTableIfNotExists(connectionSource, AtomActivity.class);
            atomActivityDao = DaoManager.createDao(connectionSource, AtomActivity.class);
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    public long insertAtom(String id, String content) throws SQLException {
        long additionTime = System.currentTimeMillis();
        atomActivityDao.createIfNotExists(new AtomActivity(id, content, additionTime));
        return additionTime;
    }

    public long insertJson(String id, String content) throws SQLException {
        long additionTime = System.currentTimeMillis();
        jsonActivityDao.createIfNotExists(new JsonActivity(id, content, additionTime));
        return additionTime;
    }

    public AtomResult getAtomEntriesBetween(long from, long to) throws SQLException {
        long updated = System.currentTimeMillis();
        PreparedQuery<AtomActivity> query = atomActivityDao.queryBuilder().limit(500L)
                .orderBy(Activity.UPDATED_FIELD, true)
                .where().between(Activity.UPDATED_FIELD, from, to).prepare();
        return new AtomResult(atomActivityDao.query(query), updated);
    }

    public JsonResult getJsonEntriesBetween(long from, long to) throws SQLException {
        long updated = System.currentTimeMillis();
        PreparedQuery<JsonActivity> query = jsonActivityDao.queryBuilder().limit(500L)
                .orderBy(Activity.UPDATED_FIELD, true)
                .where().between(Activity.UPDATED_FIELD, from, to).prepare();
        return new JsonResult(jsonActivityDao.query(query), updated);
    }
}
In addition, I have two threads running that use the same database manager. Each thread writes to a different table. There are also threads that read from the database; a reading thread can read from any table.
I noticed in the ConnectionSource documentation that it is not thread-safe.
My question is: should I synchronize the functions that write to the database?
Would the answer be different if both writing threads wrote to the same table?
I noticed in the ConnectionSource documentation that it is not thread-safe.
Right, but you are using JdbcPooledConnectionSource, which is thread-safe.
Should I synchronize the functions that write to the database?
You shouldn't have a problem with ORMLite doing this. However, you need to make sure that your database supports multiple concurrent updates. For example, you won't have a problem if you are using MySQL, Postgres, or Oracle. You'll need to read up on H2 multithreading to see which options you need for that to work.
Would the answer be different if both writing threads wrote to the same table?
That would increase the concurrency so (uh) maybe? Again it depends on the database type.
You may use a connection pool for multithreaded work with ORMLite; see the JdbcPooledConnectionSource javadoc.
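A minimal sketch of that setup, assuming an H2 database (the URL, and the MULTI_THREADED flag from older H2 versions, are illustrative assumptions):
// One pooled, thread-safe connection source shared by all threads.
JdbcPooledConnectionSource pool =
        new JdbcPooledConnectionSource("jdbc:h2:mem:activities;MULTI_THREADED=1");
pool.setMaxConnectionsFree(5);  // keep up to 5 idle connections around
pool.setTestBeforeGet(true);    // validate connections before handing them out

// DAOs created from the pool can then be used from multiple threads.
Dao<JsonActivity, String> jsonDao = DaoManager.createDao(pool, JsonActivity.class);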
The error is: "The context cannot be used while the model is being created."
I'm using this code:
Parallel.Invoke(AddDataParallel);

private void AddDataParallel()
{
    Parallel.For(1001, 2001, delegate(int i)
    {
        User user = new User();
        user.UserName = "user" + i;
        _userService2.Add(user);
    });
}
The error occurs here:
public T Add(T entity)
{
    return _entities.Add(entity); // The context cannot be used while the model is being created.
}
Why?
You seem to use only one context instance (wrapped in _userService2). But an ObjectContext (or DbContext) is not thread-safe, as per MSDN. See the Remarks section:
The ObjectContext class is not thread safe. The integrity of data objects in an ObjectContext cannot be ensured in multithreaded scenarios.
So you have to re-design your insert scenario. Parallelization against a database is always tricky, since you make yourself your own concurrent user. If you want fast inserts, take a look at BulkInsert.
I found some strange Hibernate behavior that I cannot explain.
If I create an object in the default thread inside a transaction and flush manually, I cannot find it in another thread.
If I create the object in a separate thread under the same conditions, everything is all right.
Here is the code that I described above:
// transaction template with propagation REQUIRED
ttNew.execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        Assert.assertEquals(envStDao.getAll().size(), 0);
        g = new Group();
        g.setDescription("trial");
        // in the debugger I get id = 1
        groupDao.save(g);
        groupDao.flush();
        accDao.flush();
    }
});

// second stage, right after the first - searching for the group
Thread t2 = new Thread(new Runnable() {
    @Override
    public void run() {
        ttNew.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // here I get NULL!
                Group gg = groupDao.get(1);
            }
        });
    }
});
t2.start();
t2.join();
If I wrap the first block of code in a thread, just like the second one, I do get the group.
Any ideas?
I run the above code in a JUnit test. The DAO objects use HibernateTemplate.
Due to transaction isolation, you cannot see uncommitted data from another transaction. You have two different transactions here, so one cannot see the uncommitted data of the other.
The default isolation level is READ COMMITTED. A flush doesn't mean a commit; the commit happens only at the end of the transaction. So when you flush the data in the first transaction, the data is written to the DB but not committed, and transaction 2 cannot see it.
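To make the timing concrete, here is the first block again with the commit point marked (a sketch using the same names as in the question):
// flush() sends the INSERT to the database, but the row stays invisible
// to other READ COMMITTED transactions until the commit.
ttNew.execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        groupDao.save(g);   // INSERT is queued
        groupDao.flush();   // INSERT is executed, still uncommitted
    }
}); // <-- the commit happens here, when execute() returns

// only from this point on can the second thread's transaction see the group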