@Stateless or @Singleton instead of static helper class? - multithreading

I'm maintaining some older JEE code which runs fine but uses static helper classes, where an EntityManager is passed into the methods from the calling EJB(s), like this:
public class StaticHelper {

    public static void helpingOut(EntityManager entityManager, String value) {
        // e.g. insert value
    }
}
Since this doesn't seem to fit JEE very well and is not nice to unit-test, I've converted these helpers to @Stateless EJBs like so:
@Stateless
public class StatelessHelper {

    @PersistenceContext(unitName = "SuperUnit")
    private EntityManager entityManager;

    public void helpingOut(String value) {
        // e.g. insert value
    }
}
That way I can inject a mocked helper into the calling EJB with CDI-Unit.
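For illustration, a rough sketch of such a test, assuming CDI-Unit's CdiRunner with Mockito on the classpath; CallingEjb and its doBusinessWork method are hypothetical names, not code from the project:

@RunWith(CdiRunner.class)
public class CallingEjbTest {

    @Produces
    @Mock
    StatelessHelper statelessHelper; // mocked helper, produced instead of the real bean

    @Inject
    CallingEjb callingEjb; // hypothetical EJB under test that delegates to the helper

    @Test
    public void delegatesToHelper() {
        callingEjb.doBusinessWork("42");
        Mockito.verify(statelessHelper).helpingOut("42");
    }
}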
Now, depending on the load, 1-3 instances of that stateless helper are created by the container, which isn't a problem at all I would say. Still, I thought about a @Singleton using either @ConcurrencyManagement(ConcurrencyManagementType.BEAN) or @Lock(LockType.READ) to make it multithreaded - but this doesn't seem to be a good idea since EntityManager is not thread-safe. Or does the passage quoted below still apply?
"...The container serializes calls to each stateful and stateless
session bean instance. Most containers will support many instances of
a session bean executing concurrently; however, each instance sees
only a serialized sequence of method calls. Therefore, a stateful or
stateless session bean does not have to be coded as reentrant..."

Business methods in Java EE (or, under its more recent name, Jakarta EE) should be implemented in @Stateless beans. That is what they are meant for. So the approach you describe fits the Java EE paradigm perfectly.
@Singleton beans are meant for instances that hold application-wide state.
Using singletons for beans containing business methods is a model from other technologies, such as Spring or Guice. In those, the business methods are not synchronized, so you have to make sure every class-level attribute is thread safe. That is not the Java EE model, in which (by specification) only one thread is allowed to access a given session bean instance at any specific time, and that is what makes it safe to use with an EntityManager.
This doesn't happen with @Singleton beans, so to use them concurrently you have to tune them with the @ConcurrencyManagement and @Lock annotations, as in the sketch below.
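For completeness, a minimal sketch of what such tuning looks like (container-managed concurrency with read/write locks; CachedConfig is a made-up example, not something you need for your case):

@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER) // the default
public class CachedConfig {

    private final Map<String, String> config = new HashMap<>();

    @Lock(LockType.READ) // many threads may read concurrently
    public String get(String key) {
        return config.get(key);
    }

    @Lock(LockType.WRITE) // writers get exclusive access
    public void put(String key, String value) {
        config.put(key, value);
    }
}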
What you just did is just right.
Clarification
It looks like I said singleton session beans are not thread safe. What I meant is that concurrent access to the single instance is not allowed by default (the other way round from Spring or Guice), and so they can become bottlenecks for business methods. To allow concurrent access you have to tune them with the aforementioned @ConcurrencyManagement.

I created a simple project to check/test how the container handles transactions in SLSBs and singletons. The cases I covered are:
Using a @PersistenceContext EntityManager inside an SLSB
Using a DataSource directly inside an SLSB
Using a DataSource inside a @Singleton
Below are the conclusions of the test.
EntityManager (DashboardEM)
The EntityManager is reliable. With the default isolation level it is enough to avoid inconsistent data.
When a concurrency exception happens, the container rolls back, so handle system exceptions appropriately in order not to lose data. In our case an OptimisticLockException is thrown, so we resend the point to the dashboard (see the sketch after this list).
A "copy" of the EntityManager is injected into each SLSB instance by the container. From then on, the EntityManager is responsible for data consistency.
An SLSB is safe in the sense that the container guarantees only one thread at a time can execute a given instance (but different instances run concurrently in separate threads).
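A hedged sketch of the "resend the point" idea, not the project's actual code: DashboardService and addPoint are hypothetical names, and the retry relies on the container wrapping the OptimisticLockException in an EJBException when the REQUIRES_NEW transaction rolls back.

// Hypothetical SLSB business interface; addPoint runs with REQUIRES_NEW inside the bean.
public interface DashboardService {
    void addPoint(long dashboardId, int point);
}

public class DashboardClient {

    public void sendPoint(DashboardService service, long dashboardId, int point) {
        try {
            service.addPoint(dashboardId, point);
        } catch (EJBException e) {
            // an optimistic-lock conflict rolled the transaction back; retry once
            if (e.getCause() instanceof OptimisticLockException) {
                service.addPoint(dashboardId, point); // real code would loop with a limit / back-off
            } else {
                throw e;
            }
        }
    }
}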
@Singleton (DSSegmentSingleton)
It only makes sense to use a @Singleton if you are using a DataSource directly. With @Lock(WRITE) you effectively raise the isolation between threads/instances to SERIALIZABLE.
It creates a bottleneck: all the threads (clients) have to wait for each other in order to execute the method.
You lose the benefit of having multiple stateless instances in the pool (which is what lets multiple clients do work at the same time).
With a @Singleton the execution time increases as the number of concurrent clients goes up: if 100 clients call the singleton at the same time, the last client waits roughly 99 x the single execution time.
Other solutions when you don't have an EntityManager
Use the isolation level directly in the DB (SELECT ... FOR UPDATE). See DashboardDSSelectForUpdate.
Use @TransactionManagement(BEAN) and change the isolation level of the connection, e.g. conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE). See DashboardDSTxBean and the sketch below.
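A rough sketch of the bean-managed variant; the JNDI name, table and class name are made up for the example and are not the project's code:

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class SerializableInsertSketch {

    @Resource(lookup = "java:/jdbc/ExampleDS") // made-up JNDI name
    private DataSource dataSource;

    @Resource
    private UserTransaction userTransaction;

    public void insertSerialized(String value) throws Exception {
        userTransaction.begin();
        try (Connection conn = dataSource.getConnection()) {
            // raise the isolation level for this connection before doing the work
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO EXAMPLE_TABLE (VAL) VALUES (?)")) {
                ps.setString(1, value);
                ps.executeUpdate();
            }
            userTransaction.commit();
        } catch (Exception e) {
            userTransaction.rollback();
            throw e;
        }
    }
}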
See also
The source code of the test project
JSR-318 EJB 3.1 Spec - Enterprise JavaBean 3.1 Specification
Concurrency in a Singleton - 34.2.1.2 Managing Concurrent Access in a Singleton SB
Isolation Levels - Real example of Dirty Read, Phantom Read and Non Repeatable Read
Reentrant Lock - Simple explanation of reentrant locks in Java.
OptimisticLockException - Example when OptimisticLockException is thrown.

Related

How to avoid breaking encapsulation when using dependency injection

After reading and watching some videos on dependency injection I still don't understand how to use it properly without breaking encapsulation.
Note: I read How to use Dependency Injection without breaking encapsulation? but I'm still not 100% sure.
My code is a very simple implementation of a thread pool, which contains objects of class Worker, a package-private class that I don't want to expose to the outside world (and it's really none of their concern).
My thread pool constructor requires a parameter Worker[] workers (I don't need a factory since I know in advance exactly how many workers I need).
Since my Worker class is package-private, I thought that the right way to construct the thread pool would be to implement a static factory method in the ThreadPool class as follows:
public static ThreadPool createThreadPool(int numOfWorkers,
                                          BlockingQueue<Runnable> jobQueue,
                                          ThreadFactory threadFactory) {
    Worker[] workers = new Worker[numOfWorkers];
    for (int i = 0; i < workers.length; i++) {
        // worker needs the factory in order to provide itself as Runnable
        workers[i] = new Worker(jobQueue, threadFactory, i);
    }
    return new ThreadPool(workers, jobQueue);
}
So, is creating all these new objects in the static factory method the right way to hide the Worker class from other packages, or is there something I'm missing here?
Dependency Injection would mean hiding the creation of the Workers from the ThreadPool. Ideally, Runnables should be passed into the ThreadPool constructor, and the ThreadPool shouldn't even know that the Runnables happen to be Workers.
Creation of the Workers should occur in the composition root.
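A rough sketch of what that could look like, assuming the composition root lives where Worker is visible (same package, or Worker made accessible to it); the class name is made up:

// Only the composition root knows that the Runnables are Workers;
// ThreadPool itself just takes Runnables.
public final class PoolCompositionRoot {

    public static ThreadPool createThreadPool(int numOfWorkers,
                                              BlockingQueue<Runnable> jobQueue,
                                              ThreadFactory threadFactory) {
        Runnable[] workers = new Runnable[numOfWorkers];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Worker(jobQueue, threadFactory, i);
        }
        return new ThreadPool(workers, jobQueue); // ThreadPool constructor now takes Runnable[]
    }
}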

Concurrency in Message Driven Bean - Thread safe Java EE5 vs. EE6

I have a situation where I need a set of operations to be enclosed in a single transaction and to be thread safe, driven from an MDB.
If thread A executes instruction 1, I do not want other threads to be able to read the same data that thread A is processing.
In the code below, since the IMAGE table contains duplicated data coming from different sources, this would lead to a duplicated INFRANCTION, a situation that needs to be avoided.
The current solution I found is to declare a new transaction for each new message and synchronize the entire transaction.
Simplifying the code:
@Stateless
public class InfranctionBean {

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void checkInfranction(String plate) {
        imageBean.getImage(plate);                 // 1. read from table IMAGE
        infranctionBean.insertInfranction(plate);  // 2. insert into table INFRANCTION
        imageBean.deleteImage(plate);              // 3. delete from table IMAGE
    }
}
@MessageDriven
public class ImageReceiver {

    private static final Object lock = new Object();

    public void onMessage(Message msg) {
        String plate = msg.plate;
        synchronized (lock) {
            infranctionBean.checkInfranction(plate);
        }
    }
}
I am aware that using synchronized blocks inside an EJB is not recommended by the EJB specification. It can even lead to problems if the application server runs in a two-node cluster.
It seems EE6 introduced a solution for this scenario: the EJB singleton.
In this case, my solution would be something like this:
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
@Singleton
public class InfranctionBean {

    @Lock(LockType.WRITE)
    public void checkInfranction(String plate) {
        // 1...
        // 2...
        // 3...
    }
}
And from the MDB the synchronized block would no longer be necessary, since the container handles the concurrency.
With @Lock(WRITE) the container guarantees single-threaded access inside checkInfranction().
My question is: how can I handle this situation in EE5? Is there a cleaner solution without using a synchronized block?
Environment: Java 5, jboss-4.2.3.GA, Oracle 10.
ACTUAL SOLUTION
@Stateless
public class InfranctionBean {

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void checkInfranction(String plate) {
        imageBean.lockImageTable();                // 1. lock table IMAGE in exclusive mode
        imageBean.getImage(plate);                 // 2. read from table IMAGE
        infranctionBean.insertInfranction(plate);  // 3. insert into table INFRANCTION
        imageBean.deleteImage(plate);              // 4. delete from table IMAGE
    }
}

@MessageDriven
public class ImageReceiver {

    public void onMessage(Message msg) {
        infranctionBean.checkInfranction(msg.plate);
    }
}
With 20,000 incoming messages (half of them simultaneous) the application seems to work OK.
@Lock(WRITE) is only a lock within a single application/JVM, so unless you can guarantee that only one application/JVM is accessing the data, you're not getting much protection anyway. If you're only looking for single application/JVM protection, the best solution in EE 5 would be a ReadWriteLock or perhaps a synchronized block. (The EJB specification has language to dissuade applications from doing this to avoid compromising the thread management of the server, so take care that you don't block indefinitely, that you don't ignore interrupts, etc.)
If you're looking for a more robust cross-application/JVM solution, I would use database locks or isolation levels rather than trying to rely on JVM synchronized primitives. That is probably the best solution regardless of the EJB version being used.
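A hedged EE 5 style sketch of the single-JVM option above, using a plain ReentrantLock with a bounded tryLock (rather than the ReadWriteLock mentioned, since there is no read-only path here); the EJB plumbing is simplified and the text-message handling is made up:

@MessageDriven
public class ImageReceiver implements MessageListener {

    private static final ReentrantLock LOCK = new ReentrantLock();

    @EJB
    private InfranctionBean infranctionBean;

    public void onMessage(Message msg) {
        try {
            if (!LOCK.tryLock(30, TimeUnit.SECONDS)) {
                // bounded wait instead of blocking a server thread forever;
                // the runtime exception typically causes the message to be redelivered
                throw new RuntimeException("timed out waiting for infraction lock");
            }
            try {
                infranctionBean.checkInfranction(((TextMessage) msg).getText()); // plate as text, for the example
            } finally {
                LOCK.unlock();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag instead of ignoring it
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}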

Spring Batch thread-safe ItemReader (process indicator pattern)

I've already implemented remote chunking using AMQP (RabbitMQ). Now I need to run parallel jobs from within a web container.
My simple controller (testJob uses remote chunking):
@Controller
public class JobController {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    private Job testJob;

    @RequestMapping("/job/test")
    public void test() {
        JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
        jobParametersBuilder.addDate("date", new Date());
        try {
            jobLauncher.run(testJob, jobParametersBuilder.toJobParameters());
        } catch (JobExecutionAlreadyRunningException | JobRestartException
                | JobParametersInvalidException | JobInstanceAlreadyCompleteException e) {
            e.printStackTrace();
        }
    }
}
testJob reads data from the filesystem (master chunk) and sends it to the remote chunk (slave chunk). The problem is that the ItemReader is not thread safe.
There are some practical limitations of using multi-threaded Steps for some common Batch use cases. Many participants in a Step (e.g. readers and writers) are stateful, and if the state is not segregated by thread, then those components are not usable in a multi-threaded Step. In particular most of the off-the-shelf readers and writers from Spring Batch are not designed for multi-threaded use. It is, however, possible to work with stateless or thread safe readers and writers, and there is a sample (parallelJob) in the Spring Batch Samples that show the use of a process indicator (see Section 6.12, “Preventing State Persistence”) to keep track of items that have been processed in a database input table.
I've looked at the parallelJob sample in the Spring Batch GitHub repository:
https://github.com/spring-projects/spring-batch/blob/master/spring-batch-samples/src/main/java/org/springframework/batch/sample/common/StagingItemReader.java
I'm a bit confused about the process indicator pattern. Where can I find more detailed information about this pattern?
If all you're concerned with is that the ItemReader instance would be shared across job invocations, you can declare the ItemReader as step scoped and you'll get a new instance per invocation, which removes the threading concerns.
But to answer your direct question about the process indicator pattern, I'm not sure where good documentation on it by itself is. There is a sample of its implementation in the Spring Batch Samples (the parallel job uses it).
The idea behind it is that you add a status to the records you are going to process. At the beginning of the job/step you mark those records as in process. As the records are committed, you mark them as processed. This removes the need to track the state in the reader, since your state is actually in the db (your query only looks for records marked as in process).
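A rough sketch of that idea with plain JDBC, outside of Spring Batch; the STAGING table, its PROCESSED column and the status values are made up for the example:

public class ProcessIndicatorSketch {

    public void run(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            // 1. At the start of the step, claim the records to work on.
            try (Statement s = conn.createStatement()) {
                s.executeUpdate("UPDATE STAGING SET PROCESSED = 'IN_PROCESS' WHERE PROCESSED = 'NEW'");
            }
            // 2. The reader only ever sees records it has claimed, so no state lives in the reader.
            try (Statement s = conn.createStatement();
                 ResultSet rs = s.executeQuery(
                         "SELECT ID, PAYLOAD FROM STAGING WHERE PROCESSED = 'IN_PROCESS'")) {
                while (rs.next()) {
                    long id = rs.getLong("ID");
                    // ... process rs.getString("PAYLOAD") ...
                    // 3. As each item (or chunk) is committed, mark it done so a restart skips it.
                    try (PreparedStatement ps = conn.prepareStatement(
                            "UPDATE STAGING SET PROCESSED = 'DONE' WHERE ID = ?")) {
                        ps.setLong(1, id);
                        ps.executeUpdate();
                    }
                }
            }
        }
    }
}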

Can different threads access different methods of a specific instance of a stateless EJB?

@Stateless
public class MyBean1 {

    public void method1() {
        // method implementation
    }

    public void method2() {
        // method implementation
    }
}
Consider a specific instance of MyBean1. Then we know that method1() or method2() cannot be accessed by multiple threads at the same time. But, while method1() is being accessed by a thread, can method2() be accessed by another thread?
I think that section 4.3.14 of the EJB 3.1 spec gives the answer.
4.3.14 Serializing Session Bean Methods
The following requirements apply to Stateless and Stateful session beans.
See Section 4.8.5 for Singleton session bean concurrency requirements.
The container serializes calls to each stateful and stateless session bean instance.
Most containers will support many instances of a session bean executing concurrently;
however, each instance sees only a serialized sequence of method calls.
Therefore, a stateful or stateless session bean does not have to be coded as reentrant.
The container must serialize all the container-invoked callbacks
(that is, the business method interceptor methods, lifecycle callback interceptor methods,
timeout callback methods, beforeCompletion, and so on), and it must serialize these callbacks
with the client-invoked business method calls.
....
As far as I understand the EJB spec, you should use singletons if you want fine-grained control over concurrency (bean-managed or container-managed).
Let's modify your example a bit
@Stateless
public class MyBean1 {

    @Resource
    private SessionContext sessionContext;

    public void method1() {
        // method implementation
        // As a side effect, something is written into a database
        // using an XA data source,
        // and a message is sent using XA JMS
        // (both under control of an XA transaction)
    }

    public int method2(int i) {
        return i * i;
    }
}
For example, the session context is used to get the UserTransaction and getCallerPrincipal. They are not necessarily always the same (when two clients call the EJB). As for the UserTransaction: this one is bound to the current thread (see the Javadoc).
As the session context is stored in a field (and not passed as an argument to each individual method), the same EJB instance cannot be accessed by two different clients at the same time.
Therefore the specification requires the container to serialize calls to the same instance.
If you look at method2, a purely functional implementation without any side-effect, there is no need for EJBs.

What does threadsafe mean?

Recently I tried to access a textbox from a thread (other than the UI thread) and an exception was thrown. It said something about the "code not being thread safe" and so I ended up writing a delegate (a sample from MSDN helped) and calling it instead.
But even so I didn't quite understand why all the extra code was necessary.
Update:
Will I run into any serious problems if I check
Controls.CheckForIllegalCrossThread..blah =true
Eric Lippert has a nice blog post entitled What is this thing you call "thread safe"? about the definition of thread safety as found of Wikipedia.
Three important things extracted from the links:
“A piece of code is thread-safe if it functions correctly during
simultaneous execution by multiple threads.”
“In particular, it must satisfy the need for multiple threads to
access the same shared data, …”
“…and the need for a shared piece of data to be accessed by only one
thread at any given time.”
Definitely worth a read!
In the simplest of terms threadsafe means that it is safe to be accessed from multiple threads. When you are using multiple threads in a program and they are each attempting to access a common data structure or location in memory several bad things can happen. So, you add some extra code to prevent those bad things. For example, if two people were writing the same document at the same time, the second person to save will overwrite the work of the first person. To make it thread safe then, you have to force person 2 to wait for person 1 to complete their task before allowing person 2 to edit the document.
Wikipedia has an article on Thread Safety.
This definitions page (you have to skip an ad - sorry) defines it thus:
In computer programming, thread-safe describes a program portion or routine that can be called from multiple programming threads without unwanted interaction between the threads.
A thread is an execution path of a program. A single-threaded program will only have one thread and so this problem doesn't arise. Virtually all GUI programs have multiple execution paths and hence threads: there are at least two, one for processing the display of the GUI and handling user input, and at least one other for actually performing the operations of the program.
This is done so that the UI is still responsive while the program is working by offloading any long running process to any non-UI threads. These threads may be created once and exist for the lifetime of the program, or just get created when needed and destroyed when they've finished.
As these threads will often need to perform common actions - disk i/o, outputting results to the screen etc. - these parts of the code will need to be written in such a way that they can handle being called from multiple threads, often at the same time. This will involve things like:
Working on copies of data
Adding locks around the critical code
Opening files in the appropriate mode - so if reading, don't open the file for write as well.
Coping with not having access to resources because they're locked by other threads/processes.
Simply, thread-safe means that a method or class instance can be used by multiple threads at the same time without any problems occurring.
Consider the following method:
private int myInt = 0;

public int AddOne()
{
    int tmp = myInt;
    tmp = tmp + 1;
    myInt = tmp;
    return tmp;
}
Now thread A and thread B both would like to execute AddOne(). But A starts first and reads the value of myInt (0) into tmp. Now, for some reason, the scheduler decides to halt thread A and defer execution to thread B. Thread B now also reads the value of myInt (still 0) into its own variable tmp. Thread B finishes the entire method, so in the end myInt = 1. And 1 is returned. Now it's thread A's turn again. Thread A continues. And adds 1 to tmp (tmp was 0 for thread A). And then saves this value in myInt. myInt is again 1.
So in this case the method AddOne() was called two times, but because the method was not implemented in a thread-safe way the value of myInt is not 2, as expected, but 1 because the second thread read the variable myInt before the first thread finished updating it.
Creating thread-safe methods is very hard in non-trivial cases. And there are quite a few techniques. In Java you can mark a method as synchronized, which means that only one thread can execute that method at a given time. The other threads wait in line. This makes a method thread safe, but if there is a lot of work to be done in a method, then this wastes a lot of time. Another technique is to mark only a small part of a method as synchronized by creating a lock or semaphore and locking this small part (usually called the critical section). There are even some methods that are implemented as lock-less yet thread safe, which means that they are built in such a way that multiple threads can race through them at the same time without ever causing problems; this can be the case when a method only executes one atomic call. Atomic calls are calls that can't be interrupted and can only be done by one thread at a time.
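To make this concrete, here is a Java rendering of two of those techniques applied to the AddOne example above (the original snippet looks like C#, so this is a translation for illustration, not the author's code):

public class Counter {

    private int myInt = 0;
    private final AtomicInteger myAtomicInt = new AtomicInteger(0);

    // Option 1: mark the whole method synchronized so only one thread runs it at a time.
    public synchronized int addOne() {
        myInt = myInt + 1;
        return myInt;
    }

    // Option 2: no locking at all; a single atomic read-modify-write call.
    public int addOneAtomic() {
        return myAtomicInt.incrementAndGet();
    }
}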
A real-world example for the layman:
Let's suppose you have a bank account with internet and mobile banking, and your account has only $10.
You perform a balance transfer to another account using mobile banking, and in the meantime you do online shopping using the same bank account.
If this bank account is not thread safe, then the bank allows you to perform both transactions at the same time, and the bank will go bankrupt.
Thread safe means that an object's state doesn't become inconsistent when multiple threads try to access the object simultaneously.
You can get more explanation from the book "Java Concurrency in Practice":
A class is thread‐safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.
A module is thread safe if it guarantees it can maintain its invariants in the face of multi-threaded and concurrent use.
Here, a module can be a data structure, class, object, method/procedure, or function; basically, a scoped piece of code and related data.
The guarantee can potentially be limited to certain environments, such as a specific CPU architecture, but must hold for those environments. If there is no explicit delimitation of environments, then it is usually taken to imply that it holds for all environments in which the code can be compiled and executed.
Thread-unsafe modules may function correctly under multi-threaded and concurrent use, but this is often more down to luck and coincidence than careful design. Even if a module does not break for you, it may break when moved to other environments.
Multi-threading bugs are often hard to debug. Some of them only happen occasionally, while others manifest aggressively - this too, can be environment specific. They can manifest as subtly wrong results, or deadlocks. They can mess up data-structures in unpredictable ways, and cause other seemingly impossible bugs to appear in other remote parts of the code. It can be very application specific, so it is hard to give a general description.
Thread safety: a thread-safe program protects its data from memory consistency errors. In a highly multi-threaded program, a thread-safe program does not cause any side effects with multiple read/write operations from multiple threads on the same objects. Different threads can share and modify object data without consistency errors.
You can achieve thread safety by using advanced concurrency API. This documentation page provides good programming constructs to achieve thread safety.
Lock Objects support locking idioms that simplify many concurrent applications.
Executors define a high-level API for launching and managing threads. Executor implementations provided by java.util.concurrent provide thread pool management suitable for large-scale applications.
Concurrent Collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
Atomic Variables have features that minimize synchronization and help avoid memory consistency errors.
ThreadLocalRandom (in JDK 7) provides efficient generation of pseudorandom numbers from multiple threads.
Refer to java.util.concurrent and java.util.concurrent.atomic packages too for other programming constructs.
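A small sketch (made up for this answer) tying some of these constructs together: an executor pool writing into a concurrent collection with atomic counters, so no explicit synchronization is needed:

public class WordCounter {

    public static void main(String[] args) throws InterruptedException {
        ConcurrentMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (String word : List.of("a", "b", "a", "c", "a", "b")) {
            // each task does a thread-safe "create if absent, then increment"
            pool.submit(() ->
                counts.computeIfAbsent(word, w -> new AtomicInteger()).incrementAndGet());
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(counts); // e.g. {a=3, b=2, c=1}
    }
}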
Producing thread-safe code is all about managing access to shared mutable state. When mutable state is published or shared between threads, access to it needs to be synchronized to avoid bugs like race conditions and memory consistency errors.
I recently wrote a blog about thread safety. You can read it for more information.
You are clearly working in a WinForms environment. WinForms controls exhibit thread affinity, which means that the thread on which they are created is the only thread that can be used to access and update them. That is why you will find examples on MSDN and elsewhere demonstrating how to marshal the call back onto the main thread.
Normal WinForms practice is to have a single thread that is dedicated to all your UI work.
I find the concept of reentrancy (http://en.wikipedia.org/wiki/Reentrancy_%28computing%29) to be what I usually think of as unsafe threading: a method has, and relies on, a side effect such as a global variable.
For example, I have seen code that formatted floating-point numbers to strings; if two of these run in different threads, the global value of decimalSeparator can be permanently changed to '.'.
//built-in global set to a locale-specific value (here a comma)
decimalSeparator = ','

function FormatDot(value : real):
    //save the current decimal character
    temp = decimalSeparator
    //set the global value to '.'
    decimalSeparator = '.'
    //format() uses decimalSeparator behind the scenes
    result = format(value)
    //put the original value back
    decimalSeparator = temp
    return result
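A thread-safe rendering of the same idea in Java, as a sketch: no shared global is mutated, each thread keeps its own formatter (DecimalFormat itself is not thread safe, hence the ThreadLocal).

public class DotFormatter {

    // one formatter per thread; Locale.ROOT always uses '.' as the decimal separator
    private static final ThreadLocal<DecimalFormat> DOT_FORMAT =
            ThreadLocal.withInitial(() ->
                    new DecimalFormat("0.##", DecimalFormatSymbols.getInstance(Locale.ROOT)));

    public static String formatDot(double value) {
        return DOT_FORMAT.get().format(value);
    }
}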
To understand thread safety, read the sections below (taken from Java Concurrency in Practice):
4.3.1. Example: Vehicle Tracker Using Delegation
As a more substantial example of delegation, let's construct a version of the vehicle tracker that delegates to a thread-safe class. We store the locations in a Map, so we start with a thread-safe Map implementation, ConcurrentHashMap. We also store the location using an immutable Point class instead of MutablePoint, shown in Listing 4.6.
Listing 4.6. Immutable Point class used by DelegatingVehicleTracker.
class Point {
    public final int x, y;

    public Point() {
        this.x = 0;
        this.y = 0;
    }

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
Point is thread-safe because it is immutable. Immutable values can be freely shared and published, so we no longer need to copy the locations when returning them.
DelegatingVehicleTracker in Listing 4.7 does not use any explicit synchronization; all access to state is managed by ConcurrentHashMap, and all the keys and values of the Map are immutable.
Listing 4.7. Delegating Thread Safety to a ConcurrentHashMap.
public class DelegatingVehicleTracker {
    private final ConcurrentMap<String, Point> locations;
    private final Map<String, Point> unmodifiableMap;

    public DelegatingVehicleTracker(Map<String, Point> points) {
        this.locations = new ConcurrentHashMap<String, Point>(points);
        this.unmodifiableMap = Collections.unmodifiableMap(locations);
    }

    public Map<String, Point> getLocations() {
        return this.unmodifiableMap; // users cannot update a point (x, y) because Point is immutable
    }

    public Point getLocation(String id) {
        return locations.get(id);
    }

    public void setLocation(String id, int x, int y) {
        if (locations.replace(id, new Point(x, y)) == null) {
            throw new IllegalArgumentException("invalid vehicle name: " + id);
        }
    }
}
If we had used the original MutablePoint class instead of Point, we would be breaking encapsulation by letting getLocations publish a reference to mutable state that is not thread-safe. Notice that we've changed the behavior of the vehicle tracker class slightly; while the monitor version returned a snapshot of the locations, the delegating version returns an unmodifiable but “live” view of the vehicle locations. This means that if thread A calls getLocations and thread B later modifies the location of some of the points, those changes are reflected in the Map returned to thread A.
4.3.2. Independent State Variables
We can also delegate thread safety to more than one underlying state variable as long as those underlying state variables are independent, meaning that the composite class does not impose any invariants involving the multiple state variables.
VisualComponent in Listing 4.9 is a graphical component that allows clients to register listeners for mouse and keystroke events. It maintains a list of registered listeners of each type, so that when an event occurs the appropriate listeners can be invoked. But there is no relationship between the set of mouse listeners and key listeners; the two are independent, and therefore VisualComponent can delegate its thread safety obligations to two underlying thread-safe lists.
Listing 4.9. Delegating Thread Safety to Multiple Underlying State Variables.
public class VisualComponent {
    private final List<KeyListener> keyListeners =
            new CopyOnWriteArrayList<KeyListener>();
    private final List<MouseListener> mouseListeners =
            new CopyOnWriteArrayList<MouseListener>();

    public void addKeyListener(KeyListener listener) {
        keyListeners.add(listener);
    }

    public void addMouseListener(MouseListener listener) {
        mouseListeners.add(listener);
    }

    public void removeKeyListener(KeyListener listener) {
        keyListeners.remove(listener);
    }

    public void removeMouseListener(MouseListener listener) {
        mouseListeners.remove(listener);
    }
}
VisualComponent uses a CopyOnWriteArrayList to store each listener list; this is a thread-safe List implementation particularly suited for managing listener lists (see Section 5.2.3). Each List is thread-safe, and because there are no constraints coupling the state of one to the state of the other, VisualComponent can delegate its thread safety responsibilities to the underlying mouseListeners and keyListeners objects.
4.3.3. When Delegation Fails
Most composite classes are not as simple as VisualComponent: they have invariants that relate their component state variables. NumberRange in Listing 4.10 uses two AtomicIntegers to manage its state, but imposes an additional constraint—that the first number be less than or equal to the second.
Listing 4.10. Number Range Class that does Not Sufficiently Protect Its Invariants. Don't do this.
public class NumberRange {
    // INVARIANT: lower <= upper
    private final AtomicInteger lower = new AtomicInteger(0);
    private final AtomicInteger upper = new AtomicInteger(0);

    public void setLower(int i) {
        // Warning - unsafe check-then-act
        if (i > upper.get()) {
            throw new IllegalArgumentException(
                    "Can't set lower to " + i + " > upper");
        }
        lower.set(i);
    }

    public void setUpper(int i) {
        // Warning - unsafe check-then-act
        if (i < lower.get()) {
            throw new IllegalArgumentException(
                    "Can't set upper to " + i + " < lower");
        }
        upper.set(i);
    }

    public boolean isInRange(int i) {
        return (i >= lower.get() && i <= upper.get());
    }
}
NumberRange is not thread-safe; it does not preserve the invariant that constrains lower and upper. The setLower and setUpper methods attempt to respect this invariant, but do so poorly. Both setLower and setUpper are check-then-act sequences, but they do not use sufficient locking to make them atomic. If the number range holds (0, 10), and one thread calls setLower(5) while another thread calls setUpper(4), with some unlucky timing both will pass the checks in the setters and both modifications will be applied. The result is that the range now holds (5, 4)—an invalid state. So while the underlying AtomicIntegers are thread-safe, the composite class is not. Because the underlying state variables lower and upper are not independent, NumberRange cannot simply delegate thread safety to its thread-safe state variables.
NumberRange could be made thread-safe by using locking to maintain its invariants, such as guarding lower and upper with a common lock. It must also avoid publishing lower and upper to prevent clients from subverting its invariants.
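As a sketch of that fix (a simplified version of the locking approach described above; JCIP shows a similar class), guarding both fields with the object's intrinsic lock makes the check-then-act sequences atomic:

public class ThreadSafeNumberRange {
    // INVARIANT: lower <= upper, guarded by "this"
    private int lower = 0;
    private int upper = 0;

    public synchronized void setLower(int i) {
        if (i > upper) {
            throw new IllegalArgumentException("Can't set lower to " + i + " > upper");
        }
        lower = i;
    }

    public synchronized void setUpper(int i) {
        if (i < lower) {
            throw new IllegalArgumentException("Can't set upper to " + i + " < lower");
        }
        upper = i;
    }

    public synchronized boolean isInRange(int i) {
        return i >= lower && i <= upper;
    }
}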
If a class has compound actions, as NumberRange does, delegation alone is again not a suitable approach for thread safety. In these cases, the class must provide its own locking to ensure that compound actions are atomic, unless the entire compound action can also be delegated to the underlying state variables.
If a class is composed of multiple independent thread-safe state variables and has no operations that have any invalid state transitions, then it can delegate thread safety to the underlying state variables.
