Is my application service, which obtains a lock using the JDBC LockRepository, supposed to run inside a @Transactional method?
We have a sample application service that updates a JdbcRepository, and since this application can run on multiple JVMs (headless), we needed a global lock to serialize those updates.
I looked at your test and was hoping my use case would work too. ... JdbcLockRegistryDifferentClientTests
My config has a DefaultLockRepository and a JdbcLockRegistry:
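Something along these lines (a trimmed-down sketch; the bean method names and the injected DataSource are placeholders, not my exact config):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jdbc.lock.DefaultLockRepository;
import org.springframework.integration.jdbc.lock.JdbcLockRegistry;
import org.springframework.integration.jdbc.lock.LockRepository;

@Configuration
public class LockConfig {

    @Bean
    public DefaultLockRepository lockRepository(DataSource dataSource) {
        return new DefaultLockRepository(dataSource);
    }

    @Bean
    public JdbcLockRegistry jdbcLockRegistry(LockRepository lockRepository) {
        return new JdbcLockRegistry(lockRepository);
    }
}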
I launched my application (java -jar boot.jar) in two terminals to simulate this. When I obtain a lock and issue a tryLock() without @Transactional on my application service, both of them get the lock (albeit one after the other) almost immediately. I expected one of them to NOT get it for at least 10 seconds (the default expiry).
Service (Instance-1) {
    Obtain("KEY-1")
    tryLock()
    DoWork()
    unlock();
    close();
}

Service (Instance-2) {
    Obtain("KEY-1")
    tryLock() <-- wait until the lock expires or the unlock happens
    DoWork()
    unlock();
    close();
}
I also noticed in DefaultLockRepository that the transaction scope (if not inherited) is only around the individual JDBC operation.
When I change my service to
@Transactional
Service (Instance-1) {
    Obtain("KEY-1")
    tryLock()
    DoWork()
    unlock();
    close();
}
It works as expected.
I am quite sure I missed something, but I expect my lock operation to honor the global lock (the fact that a lock row exists in the JDBC store with an expiration) until an unlock or expiration.
Is my understanding incorrect ?
This works as designed. I didn't configure the DefaultLockRepository correctly, and the default TTL was shorter than my service's (artificial wait) lock duration. My apologies. :) Josh Long helped me figure this out :)
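For reference, the fix was simply to raise the TTL on the repository above the longest expected hold time, along these lines (a sketch; the value is only illustrative):

DefaultLockRepository lockRepository = new DefaultLockRepository(dataSource);
// time-to-live of the lock record, in milliseconds
lockRepository.setTimeToLive(30000);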
You have to use different client ids. The same id means the same client, and that is for a special use case. Use different client ids since they are different instances.
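For example, something like this (a sketch; the id strings are arbitrary, and by default a random UUID is used when no id is given):

// on the first instance
DefaultLockRepository lockRepository = new DefaultLockRepository(dataSource, "instance-1");

// on the second instance
DefaultLockRepository lockRepository = new DefaultLockRepository(dataSource, "instance-2");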
The behavior here is subtle (or obvious once you see how this works), and the general lack of documentation is unhelpful, so here's my experience.
I created a lock table by looking at the SQL in DefaultLockRepository, which appeared to imply a composite primary key of REGION, LOCK_KEY and CLIENT_ID - THIS WAS WRONG.
I subsequently found the SQL script in the spring-integration-jdbc JAR, where I could see that the composite primary key MUST BE on just REGION and LOCK_KEY, as @ArtemBilan says.
The reason is that the lock doesn't care about the client, obviously, so the primary key must be just the REGION and LOCK_KEY columns. These columns are used when acquiring a lock, and it is the key violation that occurs when another client attempts to insert the same lock row that keeps other client IDs out.
This also implies that, again as @ArtemBilan says, each client instance must have a unique ID, which is the default behavior when no ID is specified at construction time.
Related
I am using Spring Batch partitioning to do parallel processing, with Hibernate and Spring Data JPA for the database. For the partitioned step, the reader, processor and writer are step-scoped, so I can inject the partition key and range (from-to) into them. Now, in the processor, I have one synchronized method and expected this method to run once at a time, but that is not the case.
I set it to have 10 partitions; all 10 item readers read the right partitioned range. The problem comes with the item processor. The code below has the same logic I use.
public class AccountProcessor implements ItemProcessor<Item, Item> {

    private AccountRepository accountRepo; // injected

    @Override
    public Item process(Item item) {
        createAccount(item);
        return item;
    }

    // account has unique constraints on username, gender, and email
    /*
    When one thread executes this method, it will create one account
    and save it. If the next thread comes in and tries to save the same account,
    it should find the account created by the first thread and do an update.
    But that doesn't happen; instead findIfExist returns null
    and it tries to do another insert of the duplicate data.
    */
    private synchronized void createAccount(Item item) {
        Account account = accountRepo.findIfExist(item.getUsername(), item.getGender(), item.getEmail());
        if (account == null) {
            // account doesn't exist yet
            account = new Account();
            account.setUsername(item.getUsername());
            account.setGender(item.getGender());
            account.setEmail(item.getEmail());
            account.setMoney(10000);
        } else {
            account.setMoney(account.getMoney() - 10);
        }
        accountRepo.save(account);
    }
}
The expected output is that only one thread runs this method at any given time, so that there is no duplicate insertion in the db and no DataIntegrityViolationException.
The actual result is that the second thread can't find the first account, tries to create a duplicate account and save it to the db, which causes a DataIntegrityViolationException (unique constraint error).
Since I synchronized the method, threads should execute it in order: the second thread should wait for the first thread to finish and then run, which means it should be able to find the first account.
I have tried many approaches, like a volatile set containing all unique accounts, doing saveAndFlush to make commits ASAP, and using ThreadLocal, but none of these works.
Need some help.
Since you made the item processor step-scoped, you don't really need synchronization as each step will have its own instance of the processor.
But it looks like you have a design problem rather than an implementation issue. You are trying to synchronize threads to act in a certain order in a parallel setup. When you decide to go parallel and divide the data into partitions, giving each worker (either local or remote) a partition to work on, you must accept that these partitions will be processed in an undefined order and that there should be no relation between the records of each partition or between the work done by each worker.
When one thread executes this method, it will create one account
and save it. If the next thread comes in and tries to save the same account,
it should find the account created by the first thread and do an update. But that doesn't happen; instead findIfExist returns null and it tries to do another insert of the duplicate data.
That's because the transaction of thread 1 may not be committed yet, hence thread 2 won't find the record you think has been inserted by thread 1.
It looks like you are trying to create or update some accounts with a partitioned setup. I'm not sure if this setup is suitable for the problem at hand.
As a side note, I would not call accountRepo.save(account); in an item processor but rather do that in an item writer.
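For example, a rough sketch of a writer doing the save (this assumes the processor is changed to return the Account rather than the raw item, that AccountRepository is a Spring Data repository exposing saveAll, and uses the List-based write signature of Spring Batch 4):

import java.util.List;
import org.springframework.batch.item.ItemWriter;

public class AccountItemWriter implements ItemWriter<Account> {

    private final AccountRepository accountRepo;

    public AccountItemWriter(AccountRepository accountRepo) {
        this.accountRepo = accountRepo;
    }

    @Override
    public void write(List<? extends Account> accounts) {
        // called once per chunk, inside the chunk's transaction
        accountRepo.saveAll(accounts);
    }
}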
Hope this helps.
I have a lot of C++11 threads running which all need database access at some time. In main I initialize the database connection and open the database. The Qt documentation says that queries are not thread-safe, so I use a global mutex while a QSqlQuery exists inside a thread.
This works, but is it guaranteed to work, or will I run into problems at some point?
A look at the documentation tells us that:
A connection can only be used from within the thread that created it.
Moving connections between threads or creating queries from a
different thread is not supported.
So you do indeed need one connection per thread. I solved this by generating dynamic names based on the thread:
auto name = "my_db_" + QString::number((quint64)QThread::currentThread(), 16);
if (QSqlDatabase::contains(name)) {
    return QSqlDatabase::database(name);
} else {
    auto db = QSqlDatabase::addDatabase("QSQLITE", name);
    // open the database, set up tables, etc.
    return db;
}
In case you use threads not managed by Qt, make use of QThreadStorage to generate a name per thread:
// must be static, to be the same for all threads
static QThreadStorage<QString> storage;

QString name;
if (storage.hasLocalData()) {
    name = storage.localData();
} else {
    // simple way to get a random name
    name = "my_db_" + QUuid::createUuid().toString();
    storage.setLocalData(name);
}
Important: SQLite may or may not be able to handle multithreading. See https://sqlite.org/threadsafe.html. As far as I know, the SQLite embedded into Qt is thread-safe, as that's the default, and I could not find any flags that disable it in the source code. But if you are using a different SQLite version, make sure it does actually support threads.
You can write a class with SQL functions and use signals and slots to do the queries and get the results from the database.
It's thread-safe, and there is no need to use a mutex.
The approach you chose is not a good one. You should use a shared QSqlDatabase object instead of a QSqlQuery. Please check the next example of multithreaded database access. If that is not clear to you, please let me know and I will explain more.
I'm dealing with a situation where multiple threads are accessing this method
using (var tx = StateManager.CreateTransaction())
{
    var item = await reliableDictionary.GetAsync(tx, key);
    ... // Do work on a copy of item
    await reliableDictionary.SetAsync(tx, key, item);
    await tx.CommitAsync();
}
Single-threaded this works well, but when I try accessing the dictionary this way using multiple threads I encounter a System.TimeoutException.
The only way I've been able to get around it is to use LockMode.Update on the GetAsync(...) method. Has anyone here experienced something like this?
I'm wondering if there is a way to read with snapshot isolation, which would allow a read with no lock on it, as opposed to a read with a shared lock on the record.
I've tried doing this with both a shared transaction, as shown above, and individual transactions for the get and the set. Any help would be appreciated.
The default lock when reading is a shared lock (taken by GetAsync).
If you want to write, you need an exclusive lock. You can't get it if shared locks exist.
Getting the first lock as an update lock prevents this, like you noticed.
Snapshot isolation happens when enumerating records, which you're not doing with GetAsync.
More info here.
Is there any way to determine if an object is locked in C#? I am in the unenviable position, by design, where I'm reading from a queue inside a class, and I need to dump the contents into a collection in the class. But that collection is also read from and written to via an interface outside the class. So obviously there may be a case where the collection is being written to at the same time I want to write to it.
I could program around it, say using a delegate, but it would be ugly.
You can always call the static TryEnter method on the Monitor class using a value of 0 as the time to wait. If the object is locked, the call will return false.
However, the problem here is that you need to make sure the list you are trying to synchronize access to is actually the object being locked on.
It's generally bad practice to use the object whose access is being synchronized as the object to lock on (it exposes too much of the internal details of an object).
Remember, the lock could be on anything else, so just calling this on that list is pointless unless you are sure that list is what is being locked on.
Monitor.TryEnter will succeed if the object isn't locked, and will return false if at this very moment the object is locked. However, note that there's an implicit race here: the instant this method returns, the object may not be locked any more.
I'm not sure that a static call to TryEnter with a time of 0 guarantees that the lock will not be acquired if it is available. The solution I used to verify in debug mode that the sync variable was locked was the following:
#if DEBUG
// Make sure we're inside a lock of the SyncRoot by trying to lock it.
// If we're able to lock it, that means that it wasn't locked in the first
// place. Afterwards, we release the lock if we had obtained it.
bool acquired = false;
try
{
    acquired = Monitor.TryEnter(SyncRoot);
}
finally
{
    if (acquired)
    {
        Monitor.Exit(SyncRoot);
    }
}
Debug.Assert(acquired == false, "The SyncRoot is not locked.");
#endif
Monitor.IsEntered
Determines whether the current thread holds the lock on the specified object.
Available since 4.5
Currently you may call Monitor.TryEnter to inspect whether an object is locked or not.
In .NET 4.0 the CLR team is going to add a "lock inspection API".
Here is a quotation from Rick Byers' article:
lock inspection
We're adding some simple APIs to ICorDebug which allow you to explore managed locks (Monitors). For example, if a thread is blocked waiting for a lock, you can find what other thread is currently holding the lock (and if there is a time-out).
So, with this API you will be able to check:
1) Which thread is holding the lock on an object?
2) Who's waiting for it?
Hope this helps.
What I need is a system I can define simple objects in (say, a "Server" that can have "Operating System" and "Version" fields, alongside other metadata such as IP, MAC address, etc.).
I'd like to be able to request objects from the system in a safe way, such that if I define that a "Server", for example, can be used by 3 clients concurrently, then when 4 clients ask for a Server at the same time, one will have to wait until a server is freed.
Furthermore, I need to be able to perform requests in some sort of query-style, for example allocate(type=System, os='Linux', version=2.6).
Language doesn't matter too much, but Python is an advantage.
I've been googling for something like this for the past few days and came up with nothing; maybe there's a better name for this kind of system that I'm not aware of.
Any recommendations?
Thanks!
Resource limitation in concurrent applications - like your "up to 3 clients" example - is typically implemented by using semaphores (or more precisely, counting semaphores).
You usually initialize a semaphore with some "count" - that's the maximum number of concurrent accesses to that resource - and you decrement this counter every time a client starts using that resource and increment it when a client finishes using it. The implementation of semaphores guarantees the "increment" and "decrement" operations will be atomic.
You can read more about semaphores on Wikipedia. I'm not too familiar with Python but I think these two links can help:
Python Threading Library
Semaphore Objects in Python.
For Java there is a very good standard library that has this functionality:
http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/package-summary.html
Just create a class with a Semaphore field:
import java.util.concurrent.Semaphore;

class Server {
    private static final int MAX_AVAILABLE = 100;
    // static, so all callers share the same pool of permits
    private static final Semaphore available = new Semaphore(MAX_AVAILABLE, true);

    // ... put all other fields (OS, version) here...

    private Server() {}

    // add a factory method
    public static Server getServer() throws InterruptedException {
        available.acquire();
        // ... do the rest here (and release() the permit when the server is freed)
        return new Server();
    }
}
Edit:
If you want things to be more "configurable", look into using AOP techniques, i.e. create a semaphore-based synchronization aspect.
Edit:
If you want a completely standalone system, I guess you can try to use any modern DB system (e.g. PostgreSQL) that supports row-level locking as the semaphore. For example, create 3 rows, each representing a server, select a free one with locking (e.g. "select * from server where is_used = 'N' for update"), mark the selected server as used, unmark it at the end, and commit the transaction.
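A rough JDBC sketch of that idea (the table name, the column names, and the "limit 1" are assumptions, not a definitive implementation):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ServerAllocator {

    // Returns the id of a free server matching the query, or null if none is free right now.
    // SELECT ... FOR UPDATE ensures two clients cannot grab the same row concurrently.
    public static String allocate(Connection conn, String os, String version) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "select id from server where is_used = 'N' and os = ? and version = ? "
                + "limit 1 for update")) {
            ps.setString(1, os);
            ps.setString(2, version);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    conn.rollback();
                    return null; // nothing free; the caller can wait and retry
                }
                String id = rs.getString("id");
                try (PreparedStatement upd = conn.prepareStatement(
                        "update server set is_used = 'Y' where id = ?")) {
                    upd.setString(1, id);
                    upd.executeUpdate();
                }
                conn.commit(); // row lock released; the server is now reserved for this client
                return id;
            }
        }
    }

    // Marks the server as free again when the client is done with it.
    public static void release(Connection conn, String id) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement upd = conn.prepareStatement(
                "update server set is_used = 'N' where id = ?")) {
            upd.setString(1, id);
            upd.executeUpdate();
        }
        conn.commit();
    }
}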