Scala synchronization best practice - multithreading

I've started to learn Scala and Akka as an actor model. In an imperative language like C, I can use several different methods for synchronizing threads on, for example, a binary tree: a simple semaphore or mutex, atomic operations, and so on.
Scala, however, is a functional, object-oriented language which can implement an actor model using the Akka library (for example). How should synchronization be implemented in Scala? Let's say I have a binary tree which my program is supposed to traverse and perform different operations upon. How should I make sure that two different actors aren't, for example, deleting the same node simultaneously?

If you want to do synchronized access to data structures, just use the synchronized method of AnyRef to synchronize a block. For example:
object Test {
  private val myMap = collection.mutable.Map.empty[Int, Int]

  def set(key: Int, value: Int): Unit = synchronized { myMap(key) = value }
  def get(key: Int): Option[Int] = synchronized { myMap.get(key) }
}
However, the point of using actors is to avoid threads blocking one another, which would hurt scalability. The Actor way of managing mutable state is for state to be private to Actor instances, and only updated or accessed in response to messages. This is a more complex design, something like:
// define case classes Get, Set, and Value here.
class MapHolderActor extends Actor {
  private val myMap = collection.mutable.Map.empty[Int, Int]

  def receive = {
    case Get(key)        => sender ! Value(myMap.get(key))
    case Set(key, value) => myMap(key) = value
  }
}

On a high level, you can use an Actor as a mutex: it processes all incoming messages one by one.
On the lower levels (but not at the actor level), nothing stops you from using plain old Java concurrency primitives.
Use immutable data structures, as @Tanmay proposed, so there is no in-place modification and thus no data races (see the sketch after this list).
There are transactors (although deprecated in the most recent Akka versions).
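As a minimal sketch (not from the original answers) combining the "plain java.util.concurrent primitives" and "immutable data structures" points above: an immutable Map shared through an AtomicReference and updated with a compare-and-set retry loop, so readers never block and writers never expose a half-applied update. The class name is illustrative only.
import java.util.concurrent.atomic.AtomicReference

// Sketch: an immutable Map shared via an AtomicReference. Readers never
// block; writers retry with a fresh snapshot if another thread won the race.
final class AtomicMap[K, V] {
  private val ref = new AtomicReference(Map.empty[K, V])

  def put(key: K, value: V): Unit = {
    var done = false
    while (!done) {
      val current = ref.get()
      val updated = current + (key -> value)
      done = ref.compareAndSet(current, updated)
    }
  }

  def get(key: K): Option[V] = ref.get().get(key)
}
The same retry-on-conflict idea works for any persistent structure (an immutable binary tree, for instance), which is one lock-free way to approach the original question.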

Related

Appropriate use of synchronizing or locking a segment of code

Given a singleton class Employee with 2 methods
int getSalary()
void updateSalary(int increment)
Do I need to synchronize or lock both of these functions, or use an atomic salary variable?
If yes, then the question is that in this way we would have to synchronize all the functions we define in a multithreaded environment. So why not just make synchronization standard, since today no real-world application is single-threaded?
With a singleton we always have to be very careful because, being a single instance, it can naturally be shared between threads. Making functions synchronized is one way, but it is not an efficient way. We need to think about other aspects of concurrency, like immutability and atomic classes.
class Employee {
    // singleton instantiation omitted
    private final AtomicInteger sal = new AtomicInteger(0);

    int getSalary() {
        return sal.get();
    }

    void updateSalary(int increment) {
        sal.addAndGet(increment);
    }
}
This solves the problem: we do not need to synchronize every method of the singleton class.
We do not have to mark every function of every class as synchronized, but we do have to be careful whenever a function modifies or reads state and could be invoked concurrently; in such cases, start thinking about synchronization. With singleton classes, we always have to be careful.

Use ArrayBuffer in sequentially executed Threads?

I have two Futures, the second of which starts after the first has ended. Both write to the same ArrayBuffer instance, but since they are executed serially (not at the same time), I consider them not to be acting concurrently.
However, I know there is the @volatile annotation for variables shared among two or more threads (@volatile disables caching).
Since, after the first thread finishes, there might be some caching going on inside the ArrayBuffer instance that makes it impossible for the second thread to see the ArrayBuffer's real state, I am not sure whether it is safe to use ArrayBuffer this way.
Is it true that caching might be a problem in my situation, and if this is the case: is there a recommended way to make ArrayBuffer use @volatile internally?
It should be fine iff (if-and-only-if) you propagate it [the array] through the future:
val futureA = Future {
  val buf = ArrayBuffer(…)
  update(buf)
  buf
}

val futureB = futureA map { buf =>
  moreUpdates(buf)
  buf
}

futureB foreach println // print the result of the transformations
This is OK from a memory-safety point of view because the completion of futureA happens-before the onComplete callback is invoked (virtually all transformations on Future are implemented on top of onComplete; in this case, map).
The problem is not caching, per se, but the fact that an ArrayBuffer is a composite, with several subfields that have to be updated in concert to assure correct operation. You will need to use thread synchronization tools to ensure this.
import scala.collection.mutable.ArrayBuffer

class ArrayBufferWrapper[T](ab: ArrayBuffer[T]) {
  def add(item: T): Unit = this.synchronized {
    ab += item
  }
}
By wrapping the ArrayBuffer, the synchronization ensures its internal fields are safely published to whichever thread calls add, and the add operations themselves become thread-safe.
No, it is not safe.
This is exactly the reason why they invented functional programming. If you are using Scala anyway, you might as well take advantage of the paradigm it offers.
Avoid using mutable structures, or, at least, in the rare cases when you have to use them, do not let them escape the local scope. Then you won't ever have to deal with problems like this; they just will not exist anymore.
Tell us more about what you are trying to do, and I am sure someone will suggest a design or two that does not involve two threads mutating the same structure.
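For instance, one design in that spirit, sketched here with hypothetical firstStage and secondStage functions standing in for the real work: each stage returns a new immutable Vector, and the result of the first stage reaches the second through the Future rather than through shared mutable state.
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical stage functions standing in for the two pieces of work.
def firstStage(): Vector[Int] = Vector(1, 2, 3)
def secondStage(acc: Vector[Int]): Vector[Int] = acc ++ Vector(4, 5, 6)

// Each stage produces a new immutable Vector; nothing is mutated in place.
val result: Future[Vector[Int]] = Future(firstStage()).map(secondStage)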

How to make atomic exchange -- Scala way?

Problem
I have such code
var ls = src.iter.toList
src.iter = ls.iterator
(this is part of the copy constructor of my iterator wrapper), which reads the source iterator and, in the next line, sets it back. The problem is that those two lines have to be atomic (especially if you consider that I change the source of the copy constructor -- I don't like it, but well...).
I've read about Actors but I don't see how they fit here -- they look more like a mechanism for asynchronous execution. I've read about Java solutions and using them in Scala, for example: http://naedyr.blogspot.com/2011/03/atomic-scala.html
My question is: what is the most Scala way to make some operations atomic? I don't want to use some heavy artillery for this, and also I would not like to use some external resources. In other words -- something that looks and feels "right".
I kind of like the solution presented in the above link, because this is exactly what I do -- exchange references. And if I understand correctly, I would guard only those two lines, and other code would not have to be altered! But I will wait for a definitive answer.
Background
Because for every Nth question, instead of an answer I read "but why do you use...", here is the background:
How to copy iterator in Scala? :-)
I need to copy an iterator (make a fork), and that solution is the most "right" one I have read about. The problem is that it destroys the original iterator.
Solutions
Locks
For example here:
http://www.ibm.com/developerworks/java/library/j-scala02049/index.html
The only problem I see here is that I have to put a lock on those two lines, and on every other usage of iter. It is a minor thing now, but when I add some code, it will be easy to forget to add the additional lock.
I am not saying "no", but I have no experience, so I would like to get an answer from someone who is familiar with Scala to point me in a direction -- which solution is the best for such a task, in the long run.
Immutable iterator
While I appreciate the explanation by Paradigmatic, I don't see how such an approach fits my problem. The thing is that the IteratorWrapper class has to wrap an iterator -- i.e. the raw iterator should be hidden within the class (usually by making it private). Methods such as hasNext() and next() should be wrapped as well. Normally next() alters the state of the object (the iterator), so in the case of an immutable IteratorWrapper it would have to return both a new IteratorWrapper and the status of next() (successful or not). Another solution would be returning null if the raw next() fails; either way, this makes using such an IteratorWrapper not very handy.
Worse, there is still no easy way to copy such an IteratorWrapper.
So either I am missing something, or the classic approach of making a piece of code atomic is actually cleaner, because all the burden is contained inside the class, and the user does not have to pay the price for the way IteratorWrapper handles the data (the raw iterator in this case).
The Scala approach is to favor immutability whenever possible (and it's very often possible). Then you no longer need copy constructors, locks, mutexes, etc.
For example, you can convert the iterator to a List at object construction. Since lists are immutable, you can safely share them without having to lock:
class IteratorWrapper[A](iter: Iterator[A]) {
  val list = iter.toList
  def iteratorCopy = list.iterator
}
Here, the IteratorWrapper is also immutable. You can safely pass it around. But if you really need to change the wrapped iterator, you will need more demanding approaches. For instance you could:
Use locks
Transform the wrapper into an Actor
Use STM (Akka or other implementations).
Clarifications: I lack information on your problem constraints. But here is how I understand it.
Several threads must simultaneously traverse an Iterator. A possible approach is to copy it before passing the reference to the threads. However, Scala practice aims at sharing immutable objects that do not need to be copied.
With the copy strategy, you would write something like:
// A single iterator producer
class Producer {
  val iterator: Iterator[Foo] = produceIterator(...)
}

// Several consumers, living on different threads
class Consumer(p: Producer) {
  def consumeIterator = {
    val iteratorCopy = copy(p.iterator) // BROKEN !!!
    while (iteratorCopy.hasNext) {
      doSomething(iteratorCopy.next)
    }
  }
}
However, it is difficult (or slow) to implement a copy method which is thread-safe. A possible solution using immutability would be:
class Producer {
  val list: List[Foo] = produceIterator(...).toList
  def iteratorCopy = list.iterator
}

class Consumer(p: Producer) {
  def consumeIterator = {
    val iteratorCopy = p.iteratorCopy
    while (iteratorCopy.hasNext) {
      doSomething(iteratorCopy.next)
    }
  }
}
The producer calls produceIterator once, at construction. It is immutable because its state is only a list, which is also immutable. iteratorCopy is also thread-safe, because the list is not modified when creating the copy (so several threads can traverse it simultaneously without having to lock).
Note that calling list.iterator does not traverse the list, so it will not decrease performance in any way (as opposed to really copying the iterator each time).
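If the wrapper really must keep a mutable iterator (the "Use locks" option above), a minimal sketch is to keep every access to the raw iterator behind the wrapper's own lock, so callers cannot forget to lock the two-line swap. The class and method names here are illustrative, not from the question:
// Sketch only: all access to the raw iterator goes through synchronized
// methods, so the read-and-replace in fork() is atomic with respect to
// next() and hasNext.
class LockedIteratorWrapper[A](initial: Iterator[A]) {
  private var iter: Iterator[A] = initial

  // Drain the current iterator, reinstall an equivalent one, and return
  // an independent copy -- all under the wrapper's lock.
  def fork(): Iterator[A] = synchronized {
    val ls = iter.toList
    iter = ls.iterator
    ls.iterator
  }

  def next(): A = synchronized { iter.next() }
  def hasNext: Boolean = synchronized { iter.hasNext }
}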

How can I execute multiple tasks in Scala?

I have 50,000 tasks and want to execute them with 10 threads.
In Java I would create Executors.newFixedThreadPool(10), pass Runnables to it, then wait for everything to be processed. Scala, as I understand it, is especially useful for this task, but I can't find a solution in the docs.
Scala 2.9.3 and later
The simplest approach is to use the scala.concurrent.Future class and associated infrastructure. The scala.concurrent.future method asynchronously evaluates the block passed to it and immediately returns a Future[A] representing the asynchronous computation. Futures can be manipulated in a number of non-blocking ways, including mapping, flatMapping, filtering, recovering errors, etc.
For example, here's a sample that creates 10 tasks, where each tasks sleeps an arbitrary amount of time and then returns the square of the value passed to it.
import scala.concurrent.{future, Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val tasks: Seq[Future[Int]] = for (i <- 1 to 10) yield future {
  println("Executing task " + i)
  Thread.sleep(i * 1000L)
  i * i
}

val aggregated: Future[Seq[Int]] = Future.sequence(tasks)

val squares: Seq[Int] = Await.result(aggregated, 15.seconds)
println("Squares: " + squares)
In this example, we first create a sequence of individual asynchronous tasks that, when complete, each provide an Int. We then use Future.sequence to combine those async tasks into a single async task -- swapping the position of the Future and the Seq in the type. Finally, we block the current thread for up to 15 seconds while waiting for the result. In the example, we use the global execution context, which is backed by a fork/join thread pool. For non-trivial examples, you would probably want to use an application-specific ExecutionContext.
Generally, blocking should be avoided whenever possible. There are other combinators available on the Future class that can help you program in an asynchronous style, including onSuccess, onFailure, and onComplete.
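For instance, a non-blocking way to consume the aggregated result from the example above (a sketch using onComplete instead of Await.result):
import scala.util.{Failure, Success}

// Register a callback instead of blocking the current thread.
aggregated.onComplete {
  case Success(squares) => println("Squares: " + squares)
  case Failure(error)   => println("At least one task failed: " + error)
}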
Also, consider investigating the Akka library, which provides actor-based concurrency for Scala and Java, and interoperates with scala.concurrent.
Scala 2.9.2 and prior
The simplest approach is to use Scala's Future class, which is a sub-component of the Actors framework. The scala.actors.Futures.future method creates a Future for the block passed to it. You can then use scala.actors.Futures.awaitAll to wait for all tasks to complete.
For example, here's a sample that creates 10 tasks, where each tasks sleeps an arbitrary amount of time and then returns the square of the value passed to it.
import scala.actors.Futures._

val tasks = for (i <- 1 to 10) yield future {
  println("Executing task " + i)
  Thread.sleep(i * 1000L)
  i * i
}

val squares = awaitAll(20000L, tasks: _*)
println("Squares: " + squares)
You want to look at either the Scala actors library or Akka. Akka has cleaner syntax, but either will do the trick.
So it sounds like you need to create a pool of actors that know how to process your tasks. An Actor can basically be any class with a receive method - from the Akka tutorial (http://doc.akkasource.org/tutorial-chat-server-scala):
class MyActor extends Actor {
  def receive = {
    case "test" => println("received test")
    case _      => println("received unknown message")
  }
}

val myActor = Actor.actorOf[MyActor]
myActor.start
You'll want to create a pool of actor instances and fire your tasks to them as messages. Here's a post on Akka actor pooling that may be helpful: http://vasilrem.com/blog/software-development/flexible-load-balancing-with-akka-in-scala/
In your case, one actor per task may be appropriate (actors are extremely lightweight compared to threads so you can have a LOT of them in a single VM), or you might need some more sophisticated load balancing between them.
EDIT:
Using the example actor above, sending it a message is as easy as this:
myActor ! "test"
The actor will then output "received test" to standard output.
Messages can be of any type, and when combined with Scala's pattern matching, you have a powerful pattern for building flexible concurrent applications.
In general Akka actors will "do the right thing" in terms of thread sharing, and for the OP's needs, I imagine the defaults are fine. But if you need to, you can set the dispatcher the actor should use to one of several types:
* Thread-based
* Event-based
* Work-stealing
* HawtDispatch-based event-driven
It's trivial to set a dispatcher for an actor:
class MyActor extends Actor {
  self.dispatcher = Dispatchers.newExecutorBasedEventDrivenDispatcher("thread-pool-dispatch")
    .withNewThreadPoolWithBoundedBlockingQueue(100)
    .setCorePoolSize(10)
    .setMaxPoolSize(10)
    .setKeepAliveTimeInMillis(10000)
    .build
}
See http://doc.akkasource.org/dispatchers-scala
In this way, you could limit the thread pool size, but again, the original use case could probably be satisfied with 50K Akka actor instances using default dispatchers and it would parallelize nicely.
This really only scratches the surface of what Akka can do. It brings a lot of what Erlang offers to the Scala language. Actors can monitor other actors and restart them, creating self-healing applications. Akka also provides Software Transactional Memory and many other features. It's arguably the "killer app" or "killer framework" for Scala.
If you want to "execute them with 10 threads", then use threads. Scala's actor model, which is usually what people is speaking of when they say Scala is good for concurrency, hides such details so you won't see them.
Using actors, or futures with all you have are simple computations, you'd just create 50000 of them and let them run. You might be faced with issues, but they are of a different nature.
Here's another answer similar to mpilquist's response, but without the deprecated API and including the thread settings via a custom ExecutionContext:
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

val numJobs = 50000
val numThreads = 10

// customize the execution context to use the specified number of threads
implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(numThreads))

// define the tasks
val tasks = for (i <- 1 to numJobs) yield Future {
  // do something more fancy here
  i
}

// aggregate and wait for the final result
val aggregated = Future.sequence(tasks)
val oneToNSum = Await.result(aggregated, 15.seconds).sum

What does threadsafe mean?

Recently I tried to access a textbox from a thread (other than the UI thread) and an exception was thrown. It said something about the "code not being thread safe", so I ended up writing a delegate (a sample from MSDN helped) and calling it instead.
But even so I didn't quite understand why all the extra code was necessary.
Update:
Will I run into any serious problems if I check
Controls.CheckForIllegalCrossThread..blah =true
Eric Lippert has a nice blog post entitled What is this thing you call "thread safe"? about the definition of thread safety as found on Wikipedia.
Three important things extracted from the links:
“A piece of code is thread-safe if it functions correctly during simultaneous execution by multiple threads.”
“In particular, it must satisfy the need for multiple threads to access the same shared data, …”
“…and the need for a shared piece of data to be accessed by only one thread at any given time.”
Definitely worth a read!
In the simplest of terms threadsafe means that it is safe to be accessed from multiple threads. When you are using multiple threads in a program and they are each attempting to access a common data structure or location in memory several bad things can happen. So, you add some extra code to prevent those bad things. For example, if two people were writing the same document at the same time, the second person to save will overwrite the work of the first person. To make it thread safe then, you have to force person 2 to wait for person 1 to complete their task before allowing person 2 to edit the document.
Wikipedia has an article on Thread Safety.
This definitions page (you have to skip an ad - sorry) defines it thus:
In computer programming, thread-safe describes a program portion or routine that can be called from multiple programming threads without unwanted interaction between the threads.
A thread is an execution path of a program. A single-threaded program will only have one thread, and so this problem doesn't arise. Virtually all GUI programs have multiple execution paths, and hence threads: there are at least two, one for processing the display of the GUI and handling user input, and at least one other for actually performing the operations of the program.
This is done so that the UI is still responsive while the program is working by offloading any long running process to any non-UI threads. These threads may be created once and exist for the lifetime of the program, or just get created when needed and destroyed when they've finished.
As these threads will often need to perform common actions - disk i/o, outputting results to the screen etc. - these parts of the code will need to be written in such a way that they can handle being called from multiple threads, often at the same time. This will involve things like:
Working on copies of data
Adding locks around the critical code
Opening files in the appropriate mode - so if reading, don't open the file for write as well.
Coping with not having access to resources because they're locked by other threads/processes.
Simply, thread-safe means that a method or class instance can be used by multiple threads at the same time without any problems occurring.
Consider the following method:
private int myInt = 0;

public int AddOne()
{
    int tmp = myInt;
    tmp = tmp + 1;
    myInt = tmp;
    return tmp;
}
Now thread A and thread B both would like to execute AddOne(), but A starts first and reads the value of myInt (0) into tmp. Now, for some reason, the scheduler decides to halt thread A and defer execution to thread B. Thread B now also reads the value of myInt (still 0) into its own variable tmp. Thread B finishes the entire method, so in the end myInt = 1, and 1 is returned. Now it's thread A's turn again. Thread A continues, adds 1 to tmp (tmp was 0 for thread A), and then saves this value in myInt. myInt is again 1.
So in this case the method AddOne() was called two times, but because the method was not implemented in a thread-safe way the value of myInt is not 2, as expected, but 1 because the second thread read the variable myInt before the first thread finished updating it.
Creating thread-safe methods is very hard in non-trivial cases, and there are quite a few techniques. In Java you can mark a method as synchronized, which means that only one thread can execute that method at a given time; the other threads wait in line. This makes a method thread-safe, but if there is a lot of work to be done in a method, then this wastes a lot of time. Another technique is to 'mark only a small part of a method as synchronized' by creating a lock or semaphore and locking this small part (usually called the critical section). There are even some methods that are implemented as lock-free yet thread-safe, which means that they are built in such a way that multiple threads can race through them at the same time without ever causing problems; this can be the case when a method only executes one atomic call. Atomic calls are calls that can't be interrupted and can only be done by one thread at a time.
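To illustrate the two techniques just mentioned, here is a sketch (in Scala, this page's main language) of the same counter made thread-safe first with a lock and then with a single atomic call:
import java.util.concurrent.atomic.AtomicInteger

class LockedCounter {
  private var myInt = 0
  // Only one thread at a time can run this body, so the
  // read-modify-write sequence can no longer interleave.
  def addOne(): Int = synchronized {
    myInt += 1
    myInt
  }
}

class AtomicCounter {
  private val myInt = new AtomicInteger(0)
  // A single atomic call; no lock needed.
  def addOne(): Int = myInt.incrementAndGet()
}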
A real-world example for the layman:
Let's suppose you have a bank account with internet and mobile banking, and your account has only $10.
You perform a balance transfer to another account using mobile banking, and in the meantime you do online shopping using the same bank account.
If this bank account is not thread-safe, the bank allows you to perform the two transactions at the same time, and then the bank will go bankrupt.
Thread-safe means that an object's state doesn't become inconsistent when multiple threads try to access the object simultaneously.
You can get more explanation from the book "Java Concurrency in Practice":
A class is thread‐safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.
A module is thread-safe if it guarantees it can maintain its invariants in the face of multi-threaded and concurrent use.
Here, a module can be a data-structure, class, object, method/procedure or function. Basically scoped piece of code and related data.
The guarantee can potentially be limited to certain environments such as a specific CPU architecture, but must hold for those environments. If there is no explicit delimitation of environments, then it is usually taken to imply that it holds for all environments that the code can be compiled and executed.
Thread-unsafe modules may function correctly under multi-threaded and concurrent use, but this is often more down to luck and coincidence than careful design. Even if a module does not break for you now, it may break when moved to other environments.
Multi-threading bugs are often hard to debug. Some of them only happen occasionally, while others manifest aggressively - this too, can be environment specific. They can manifest as subtly wrong results, or deadlocks. They can mess up data-structures in unpredictable ways, and cause other seemingly impossible bugs to appear in other remote parts of the code. It can be very application specific, so it is hard to give a general description.
Thread safety: A thread-safe program protects its data from memory consistency errors. In a highly multi-threaded program, a thread-safe program does not cause any side effects with multiple read/write operations from multiple threads on the same objects. Different threads can share and modify object data without consistency errors.
You can achieve thread safety by using advanced concurrency API. This documentation page provides good programming constructs to achieve thread safety.
Lock Objects support locking idioms that simplify many concurrent applications.
Executors define a high-level API for launching and managing threads. Executor implementations provided by java.util.concurrent provide thread pool management suitable for large-scale applications.
Concurrent Collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
Atomic Variables have features that minimize synchronization and help avoid memory consistency errors.
ThreadLocalRandom (in JDK 7) provides efficient generation of pseudorandom numbers from multiple threads.
Refer to java.util.concurrent and java.util.concurrent.atomic packages too for other programming constructs.
Producing Thread-safe code is all about managing access to shared mutable states. When mutable states are published or shared between threads, they need to be synchronized to avoid bugs like race conditions and memory consistency errors.
I recently wrote a blog about thread safety. You can read it for more information.
You are clearly working in a WinForms environment. WinForms controls exhibit thread affinity, which means that the thread in which they are created is the only thread that can be used to access and update them. That is why you will find examples on MSDN and elsewhere demonstrating how to marshall the call back onto the main thread.
Normal WinForms practice is to have a single thread that is dedicated to all your UI work.
I find the concept of reentrancy (http://en.wikipedia.org/wiki/Reentrancy_%28computing%29) to be what I usually think of when considering unsafe threading: a method that has and relies on a side effect, such as a global variable.
For example, I have seen code that formatted floating-point numbers to strings; if two of these calls run in different threads, the global value of decimalSeparator can be permanently changed to '.'.
// built-in global set to a locale-specific value (here a comma)
decimalSeparator = ','

function FormatDot(value: real):
    // save the current decimal character
    temp = decimalSeparator
    // set the global value to '.'
    decimalSeparator = '.'
    // format() uses decimalSeparator behind the scenes
    result = format(value)
    // put the original value back
    decimalSeparator = temp
    return result
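One re-entrant alternative, sketched in Scala with java.text.DecimalFormat (the decimalSeparator global and format() above belong to the original pseudocode, not a real API): build all formatting state locally on each call, so concurrent callers share nothing.
import java.text.{DecimalFormat, DecimalFormatSymbols}

// Re-entrant: all formatting state is local to the call, so two threads
// formatting at the same time cannot affect each other.
def formatDot(value: Double): String = {
  val symbols = new DecimalFormatSymbols()
  symbols.setDecimalSeparator('.')
  new DecimalFormat("0.##", symbols).format(value)
}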
To understand thread safety, read the sections below:
4.3.1. Example: Vehicle Tracker Using Delegation
As a more substantial example of delegation, let's construct a version of the vehicle tracker that delegates to a thread-safe class. We store the locations in a Map, so we start with a thread-safe Map implementation, ConcurrentHashMap. We also store the location using an immutable Point class instead of MutablePoint, shown in Listing 4.6.
Listing 4.6. Immutable Point class used by DelegatingVehicleTracker.
class Point {
    public final int x, y;

    public Point() {
        this.x = 0;
        this.y = 0;
    }

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
Point is thread-safe because it is immutable. Immutable values can be freely shared and published, so we no longer need to copy the locations when returning them.
DelegatingVehicleTracker in Listing 4.7 does not use any explicit synchronization; all access to state is managed by ConcurrentHashMap, and all the keys and values of the Map are immutable.
Listing 4.7. Delegating Thread Safety to a ConcurrentHashMap.
public class DelegatingVehicleTracker {
    private final ConcurrentMap<String, Point> locations;
    private final Map<String, Point> unmodifiableMap;

    public DelegatingVehicleTracker(Map<String, Point> points) {
        this.locations = new ConcurrentHashMap<String, Point>(points);
        this.unmodifiableMap = Collections.unmodifiableMap(locations);
    }

    public Map<String, Point> getLocations() {
        return this.unmodifiableMap; // User cannot update Point(x, y) because Point is immutable
    }

    public Point getLocation(String id) {
        return locations.get(id);
    }

    public void setLocation(String id, int x, int y) {
        if (locations.replace(id, new Point(x, y)) == null) {
            throw new IllegalArgumentException("invalid vehicle name: " + id);
        }
    }
}
If we had used the original MutablePoint class instead of Point, we would be breaking encapsulation by letting getLocations publish a reference to mutable state that is not thread-safe. Notice that we've changed the behavior of the vehicle tracker class slightly; while the monitor version returned a snapshot of the locations, the delegating version returns an unmodifiable but “live” view of the vehicle locations. This means that if thread A calls getLocations and thread B later modifies the location of some of the points, those changes are reflected in the Map returned to thread A.
4.3.2. Independent State Variables
We can also delegate thread safety to more than one underlying state variable as long as those underlying state variables are independent, meaning that the composite class does not impose any invariants involving the multiple state variables.
VisualComponent in Listing 4.9 is a graphical component that allows clients to register listeners for mouse and keystroke events. It maintains a list of registered listeners of each type, so that when an event occurs the appropriate listeners can be invoked. But there is no relationship between the set of mouse listeners and key listeners; the two are independent, and therefore VisualComponent can delegate its thread safety obligations to two underlying thread-safe lists.
Listing 4.9. Delegating Thread Safety to Multiple Underlying State Variables.
public class VisualComponent {
    private final List<KeyListener> keyListeners =
        new CopyOnWriteArrayList<KeyListener>();
    private final List<MouseListener> mouseListeners =
        new CopyOnWriteArrayList<MouseListener>();

    public void addKeyListener(KeyListener listener) {
        keyListeners.add(listener);
    }

    public void addMouseListener(MouseListener listener) {
        mouseListeners.add(listener);
    }

    public void removeKeyListener(KeyListener listener) {
        keyListeners.remove(listener);
    }

    public void removeMouseListener(MouseListener listener) {
        mouseListeners.remove(listener);
    }
}
VisualComponent uses a CopyOnWriteArrayList to store each listener list; this is a thread-safe List implementation particularly suited for managing listener lists (see Section 5.2.3). Each List is thread-safe, and because there are no constraints coupling the state of one to the state of the other, VisualComponent can delegate its thread safety responsibilities to the underlying mouseListeners and keyListeners objects.
4.3.3. When Delegation Fails
Most composite classes are not as simple as VisualComponent: they have invariants that relate their component state variables. NumberRange in Listing 4.10 uses two AtomicIntegers to manage its state, but imposes an additional constraint—that the first number be less than or equal to the second.
Listing 4.10. Number Range Class that does Not Sufficiently Protect Its Invariants. Don't do this.
public class NumberRange {
    // INVARIANT: lower <= upper
    private final AtomicInteger lower = new AtomicInteger(0);
    private final AtomicInteger upper = new AtomicInteger(0);

    public void setLower(int i) {
        // Warning -- unsafe check-then-act
        if (i > upper.get()) {
            throw new IllegalArgumentException(
                "Can't set lower to " + i + " > upper");
        }
        lower.set(i);
    }

    public void setUpper(int i) {
        // Warning -- unsafe check-then-act
        if (i < lower.get()) {
            throw new IllegalArgumentException(
                "Can't set upper to " + i + " < lower");
        }
        upper.set(i);
    }

    public boolean isInRange(int i) {
        return (i >= lower.get() && i <= upper.get());
    }
}
NumberRange is not thread-safe; it does not preserve the invariant that constrains lower and upper. The setLower and setUpper methods attempt to respect this invariant, but do so poorly. Both setLower and setUpper are check-then-act sequences, but they do not use sufficient locking to make them atomic. If the number range holds (0, 10), and one thread calls setLower(5) while another thread calls setUpper(4), with some unlucky timing both will pass the checks in the setters and both modifications will be applied. The result is that the range now holds (5, 4)—an invalid state. So while the underlying AtomicIntegers are thread-safe, the composite class is not. Because the underlying state variables lower and upper are not independent, NumberRange cannot simply delegate thread safety to its thread-safe state variables.
NumberRange could be made thread-safe by using locking to maintain its invariants, such as guarding lower and upper with a common lock. It must also avoid publishing lower and upper to prevent clients from subverting its invariants.
If a class has compound actions, as NumberRange does, delegation alone is again not a suitable approach for thread safety. In these cases, the class must provide its own locking to ensure that compound actions are atomic, unless the entire compound action can also be delegated to the underlying state variables.
If a class is composed of multiple independent thread-safe state variables and has no operations that have any invalid state transitions, then it can delegate thread safety to the underlying state variables.
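A sketch of the common-lock fix described above, written in Scala rather than the book's Java (the class name and the use of require are illustrative): both fields are guarded by the object's intrinsic lock, so each check-then-act sequence becomes atomic.
class SafeNumberRange {
  // INVARIANT: lower <= upper, guarded by this object's lock
  private var lower = 0
  private var upper = 0

  def setLower(i: Int): Unit = synchronized {
    require(i <= upper, s"Can't set lower to $i > upper")
    lower = i
  }

  def setUpper(i: Int): Unit = synchronized {
    require(i >= lower, s"Can't set upper to $i < lower")
    upper = i
  }

  def isInRange(i: Int): Boolean = synchronized {
    i >= lower && i <= upper
  }
}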
