Synchronizing WaitOnAddress with WakeByAddressSingle - multithreading

I need a dispatch loop in C++/CX that takes tasks as input and dispatches them in a queue on a different thread. I managed to put together a simple implementation that consists of the following:
void Threading::DispatchQueue::Add(TaskType task)
{
    // The scope causes the lock to be released before WakeByAddressSingle.
    // This mutex is used for synchronizing queue operations, so it shouldn't
    // be needed for WakeByAddressSingle.
    {
        std::lock_guard<std::mutex> lock(_queue_mux);
        GetQueue().push(task);
    }
    WakeByAddressSingle(&CompareAddress);
}
void Threading::DispatchQueue::Runner()
{
    while (true) {
        TaskType task = DequeueTask();
        if (task != nullptr) {
            task();
        }
        else {
            Log::LogMessage(this->GetType()->FullName, Level::Debug, "While is running!");
            // The scope guarantees that the lock is released before the thread is
            // put to sleep. Note that the thread must not sleep while holding the
            // lock; otherwise the Add(TaskType) method could never acquire it,
            // and the thread would never wake up.
            {
                std::lock_guard<std::mutex> lock(_queue_mux);
                if (!GetQueue().empty()) {
                    continue;
                }
            }
            WaitOnAddress(&CompareAddress, &UndesiredValue, sizeof(CompareAddress), -1);
        }
    }
}
Notice that queue operations in the two methods are synchronized, with DequeueTask defined as follows:
Threading::DispatchQueue::TaskType Threading::DispatchQueue::DequeueTask()
{
    std::lock_guard<std::mutex> lock(_queue_mux);
    QueueType& queue = GetQueue();
    if (queue.empty()) {
        return nullptr;
    }
    TaskType task = queue.front();
    queue.pop();
    return task;
}
This implementation has been working fine in most situations, except when Add calls WakeByAddressSingle before the thread is put to wait. Take the following example:

1. The Runner acquires the mutex and checks whether the queue is empty. Let's assume it is. At this point, the block of code terminates and the lock is relinquished.
2. At this moment Add kicks in, acquires the lock, pushes a task to the queue, and calls WakeByAddressSingle. Notice that the Runner thread is not sleeping yet.
3. The Runner thread gets CPU time again and goes to sleep with WaitOnAddress.
4. As the task was already added and WakeByAddressSingle already called, the thread never wakes up until another task is added.

The problem is that I cannot acquire locks (I think?) before WaitOnAddress, because if the thread goes to sleep while holding the lock, the lock will never be released. Is there some sort of paradigm that can be used in this situation? How do you synchronize WakeByAddressSingle and WaitOnAddress?
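A common way out of this lost-wakeup window is to make the value that WaitOnAddress watches encode the state change itself: bump a counter under the same mutex, snapshot it before deciding to sleep, and let WaitOnAddress compare against the snapshot. Below is a minimal sketch of that pattern reusing the question's TaskType/GetQueue/DequeueTask helpers; the counter name _wake_count and the free-function layout are illustrative, not part of the original code:

#include <windows.h>   // WaitOnAddress/WakeByAddressSingle; link Synchronization.lib
#include <mutex>

LONG _wake_count = 0;      // the value WaitOnAddress watches
std::mutex _queue_mux;

void Add(TaskType task)
{
    {
        std::lock_guard<std::mutex> lock(_queue_mux);
        GetQueue().push(task);
        InterlockedIncrement(&_wake_count); // change the watched value under the lock
    }
    WakeByAddressSingle(&_wake_count);
}

void Runner()
{
    while (true) {
        TaskType task = DequeueTask();
        if (task != nullptr) {
            task();
            continue;
        }
        LONG observed;
        {
            std::lock_guard<std::mutex> lock(_queue_mux);
            if (!GetQueue().empty())
                continue;           // a task raced in; skip the wait
            observed = _wake_count; // snapshot while the queue is known empty
        }
        // Sleeps only if _wake_count still equals the snapshot. An Add()
        // that ran between the check above and this call changed the value,
        // so WaitOnAddress returns immediately instead of missing the wake.
        WaitOnAddress(&_wake_count, &observed, sizeof(observed), INFINITE);
    }
}

The same idea works with the queue size itself as the watched value, as long as it is an integer of a size WaitOnAddress accepts (1, 2, 4, or 8 bytes).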

Related

C++ thread: how to send message to other long-live thread?

I have a server listening to some port, and I create several detached threads.
Not only will the server itself run forever, but the detached threads will also run forever.
// pseudocode
void t1_func()
{
    for (;;)
    {
        if (notified from server)
            dosomething();
    }
}

thread t1(t1_func);
thread t2(...);

for (;;)
{
    // read from accepted socket
    string msg = socket.read_some(...);
    // notify thread 1 and thread 2
}
Since I am new to multithreading, I don't know how to implement such a notification in the server, or how to check for it in the detached threads.
Any helpful tips will be appreciated.
The easiest way to do this is with std::condition_variable.
std::condition_variable will block until another thread calls notify_one or notify_all on it (spurious wakeups are also possible, which is why waits are usually paired with a predicate).
Here is your t1_func implemented using condition variables:
#include <condition_variable>
#include <mutex>

std::condition_variable t1_cond;

void t1_func()
{
    // wait requires a std::unique_lock
    std::mutex mtx;
    std::unique_lock<std::mutex> lock{ mtx };
    while (true)
    {
        t1_cond.wait(lock);
        doSomething();
    }
}
The wait method takes a std::unique_lock but the lock doesn't have to be shared to notify the thread. When you want to wake up the worker thread from the main thread you would call notify_one or notify_all like this:
t1_cond.notify_one();
If you want to have the thread wake up after a certain amount of time you could use wait_for instead of wait.
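Because wait() can return spuriously and a notify_one() issued before the thread reaches wait() is simply lost, a more robust variant shares the mutex between notifier and waiter and guards a flag. This is a sketch of that pattern; the names t1_mtx, t1_notified, and notify_t1 are illustrative additions:

#include <condition_variable>
#include <mutex>

std::condition_variable t1_cond;
std::mutex t1_mtx;          // shared between notifier and waiter
bool t1_notified = false;   // the actual condition being waited on

void t1_func()
{
    while (true)
    {
        std::unique_lock<std::mutex> lock{ t1_mtx };
        t1_cond.wait(lock, [] { return t1_notified; }); // ignores spurious wakeups
        t1_notified = false;
        lock.unlock();       // don't hold the lock while working
        doSomething();
    }
}

void notify_t1()
{
    {
        std::lock_guard<std::mutex> lock{ t1_mtx };
        t1_notified = true;  // set the condition under the lock
    }
    t1_cond.notify_one();
}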

Async networking race condition

I am writing a client for a networked application, and I would like to separate receiving and processing of messages into different threads.
This is my solution at the moment:
Mutex mutex;
Queue queue;

recv()
{
    while (true)
    {
        messages = receive_some_messages();
        mutex.lock();
        queue.add(messages);
        mutex.unlock();
        process.notify();
    }
}

proc()
{
    while (true)
    {
        block_until_notify();
        Queue to_process;
        mutex.lock();
        to_process.add( queue.take_all() );
        mutex.unlock();
        foreach (message in to_process)
        {
            process_message(message);
        }
    }
}
This has a race condition, however:

1. recv receives a lot of messages and puts them in queue.
2. recv notifies proc.
3. proc takes all messages from queue and starts working.
4. recv receives some more messages and puts them in queue.
5. recv notifies proc, but as proc is still working this does nothing.
6. proc completes its iteration.
7. proc blocks - there are still unprocessed messages in queue.
I can think of several methods of fixing it, however none are favorable.
Solution 1
I could adapt proc to keep the mutex locked during processing:
proc()
{
    while (true)
    {
        block_until_notify();
        Queue to_process;
        mutex.lock();
        to_process.add( queue.take_all() );
        foreach (message in to_process)
        {
            process_message(message);
        }
        mutex.unlock();
    }
}
But this would mean the threads run exclusively: either recv or proc is active, but not both.
Solution 2
I could remove the block and notify.
recv()
{
    while (true)
    {
        messages = receive_messages();
        mutex.lock();
        queue.add(messages);
        mutex.unlock();
    }
}

proc()
{
    while (true)
    {
        Queue to_process;
        mutex.lock();
        to_process.add( queue.take_all() );
        mutex.unlock();
        foreach (message in to_process)
        {
            process_message(message);
        }
    }
}
But this means that proc will run in a busy-wait loop, blocking only while recv is adding messages to queue.
The question
I would like a solution where proc and recv do not run exclusively and without busy-waiting.
Does anybody have any idea on what I could do?
I think you can get by if the consumer checks whether the queue is empty before blocking again after draining it.

proc()
{
    while (true)
    {
        Queue to_process;
        mutex.lock();
        if (queue.empty()) {
            mutex.unlock();
            block_until_notify();
            mutex.lock();
        }
        to_process.add( queue.take_all() );
        mutex.unlock();
        foreach (message in to_process)
        {
            process_message(message);
        }
    }
}
I believe that this fixes the race condition you mentioned.
Your block_until_notify() function is probably a condition variable. The way to do this is to change that function so that it locks the mutex, checks whether the queue is empty, and only then waits for a notification. If the queue is not empty, you proceed with processing it. After processing the queue you return to block_until_notify and repeat the process, again checking whether the queue is empty before blocking.
If you aren't using a condition variable of some kind then I'd suggest using a semaphore. On a Windows system you'd call ReleaseSemaphore every time messages were added to the queue. The receiver would call WaitForSingleObject on the semaphore handle. That would be done in a loop and the loop would continue to repeat, even if the queue is empty, until the wait blocks.
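Here is a sketch of the condition-variable approach described above; the concrete element type and the names recv_push/proc are illustrative, not taken from the pseudocode:

#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>

void process_message(const std::string& message); // defined elsewhere

std::mutex mtx;
std::condition_variable cv;
std::deque<std::string> queue;

void recv_push(std::string msg)
{
    {
        std::lock_guard<std::mutex> lock(mtx);
        queue.push_back(std::move(msg));
    }
    cv.notify_one();
}

void proc()
{
    while (true)
    {
        std::deque<std::string> to_process;
        {
            std::unique_lock<std::mutex> lock(mtx);
            // Blocks only while the queue is empty. A notify that arrives
            // while we are processing is never lost, because the queue is
            // re-checked before waiting.
            cv.wait(lock, [] { return !queue.empty(); });
            to_process.swap(queue); // drain under the lock
        }
        for (auto& message : to_process)
            process_message(message); // processing happens without the lock
    }
}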

C++11 - Managing worker threads

I am new to threading in C++11 and I am wondering how to manage worker threads (using the standard library) to perform some task and then die off. I have a pool of threads vector<thread *> thread_pool that maintains a list of active threads.
Let's say I launch a new thread and add it to the pool using thread_pool.push_back(new thread(worker_task)), where worker_task is defined as follows:
void worker_task()
{
    this_thread::sleep_for(chrono::milliseconds(1000));
    cout << "Hello, world!\n";
}
Once the worker thread has terminated, what is the best way to reliably remove the thread from the pool? The main thread needs to run continuously and cannot block on a join call. I am more confused about the general structure of the code than the intricacies of synchronization.
Edit: It looks like I misused the concept of a pool in my code. All I meant was that I have a list of threads that are currently running.
You can use std::thread::detach to "separate the thread of execution from the thread object, allowing execution to continue independently. Any allocated resources will be freed once the thread exits."
If each thread should make its state visible, you can move this functionality into the thread function.
#include <list>
#include <mutex>
#include <string>
#include <thread>
#include <utility>

std::mutex mutex;
using strings = std::list<std::string>;
strings info;

strings::iterator insert(std::string value) {
    std::unique_lock<std::mutex> lock{mutex};
    return info.insert(info.end(), std::move(value));
}

auto erase(strings::iterator p) {
    std::unique_lock<std::mutex> lock{mutex};
    info.erase(p);
}

template <typename F>
void async(F f) {
    std::thread{[f] {
        auto p = insert("...");
        try {
            f();
        } catch (...) {
            erase(p);
            throw;
        }
        erase(p);
    }}.detach();
}

Locking C++11 std::unique_lock causes deadlock exception

I'm trying to use a C++11 std::condition_variable, but when I try to lock the unique_lock associated with it from a second thread I get an exception "Resource deadlock avoided". The thread that created it can lock and unlock it, but not the second thread, even though I'm pretty sure the unique_lock shouldn't be locked already at the point the second thread tries to lock it.
FWIW I'm using gcc 4.8.1 in Linux with -std=gnu++11.
I've written a wrapper class around the condition_variable, unique_lock and mutex, so nothing else in my code has direct access to them. Note the use of std::defer_lock; I already fell into that trap :-).
class Cond {
private:
    std::condition_variable cCond;
    std::mutex cMutex;
    std::unique_lock<std::mutex> cULock;

public:
    Cond() : cULock(cMutex, std::defer_lock)
    {}

    void wait()
    {
        std::ostringstream id;
        id << std::this_thread::get_id();
        H_LOG_D("Cond %p waiting in thread %s", this, id.str().c_str());
        cCond.wait(cULock);
        H_LOG_D("Cond %p woke up in thread %s", this, id.str().c_str());
    }

    // Returns false on timeout
    bool waitTimeout(unsigned int ms)
    {
        std::ostringstream id;
        id << std::this_thread::get_id();
        H_LOG_D("Cond %p waiting (timed) in thread %s", this, id.str().c_str());
        bool result = cCond.wait_for(cULock, std::chrono::milliseconds(ms))
                      == std::cv_status::no_timeout;
        H_LOG_D("Cond %p woke up in thread %s", this, id.str().c_str());
        return result;
    }

    void notify()
    {
        cCond.notify_one();
    }

    void notifyAll()
    {
        cCond.notify_all();
    }

    void lock()
    {
        std::ostringstream id;
        id << std::this_thread::get_id();
        H_LOG_D("Locking Cond %p in thread %s", this, id.str().c_str());
        cULock.lock();
    }

    void release()
    {
        std::ostringstream id;
        id << std::this_thread::get_id();
        H_LOG_D("Releasing Cond %p in thread %s", this, id.str().c_str());
        cULock.unlock();
    }
};
My main thread creates a RenderContext, which has a thread associated with it. From the main thread's point of view, it uses the Cond to signal the rendering thread to perform an action and can also wait on the Cond for the rendering thread to complete that action. The rendering thread waits on the Cond for the main thread to send rendering requests, and uses the same Cond to tell the main thread it has completed an action if necessary. The error I'm getting occurs when the rendering thread tries to lock the Cond to check/wait for render requests, at which point it shouldn't be locked at all (because the main thread is waiting on it), let alone by the same thread. Here's the output:
DEBUG: Created window
DEBUG: OpenGL 3.0 Mesa 9.1.4, GLSL 1.30
DEBUG: setScreen locking from thread 140564696819520
DEBUG: Locking Cond 0x13ec1e0 in thread 140564696819520
DEBUG: Releasing Cond 0x13ec1e0 in thread 140564696819520
DEBUG: Entering GLFW main loop
DEBUG: requestRender locking from thread 140564696819520
DEBUG: Locking Cond 0x13ec1e0 in thread 140564696819520
DEBUG: requestRender waiting
DEBUG: Cond 0x13ec1e0 waiting in thread 140564696819520
DEBUG: Running thread 'RenderThread' with id 140564575180544
DEBUG: render thread::run locking from thread 140564575180544
DEBUG: Locking Cond 0x13ec1e0 in thread 140564575180544
terminate called after throwing an instance of 'std::system_error'
what(): Resource deadlock avoided
To be honest I don't really understand what a unique_lock is for and why condition_variable needs one instead of using a mutex directly, so that's probably the cause of the problem. I can't find a good explanation of it online.
Foreword: An important thing to understand with condition variables is that they can be subject to random, spurious wake ups. In other words, a CV can exit from wait() without anyone having called notify_*() first. Unfortunately there is no way to distinguish such a spurious wake up from a legitimate one, so the only solution is to have an additional resource (at the very least a boolean) so that you can tell whether the wake up condition is actually met.
This additional resource should be guarded by a mutex too, usually the very same you use as a companion for the CV.
The typical usage of a CV/mutex pair is as follows:
std::mutex mutex;
std::condition_variable cv;
Resource resource;

void produce() {
    // note how the lock only protects the resource, not the notify() call;
    // in practice this makes little difference, you just get to release the
    // lock a bit earlier, which slightly improves concurrency
    {
        std::lock_guard<std::mutex> lock(mutex); // use the lightweight lock_guard
        make_ready(resource);
    }
    // the point is: notify_*() don't require a locked mutex
    cv.notify_one(); // or notify_all()
}

void consume() {
    std::unique_lock<std::mutex> lock(mutex);
    while (!is_ready(resource))
        cv.wait(lock);
    // note how the lock still protects the resource, in order to exclude other threads
    use(resource);
}
Compared to your code, notice how several threads can call produce()/consume() simultaneously without worrying about a shared unique_lock: the only shared things are mutex/cv/resource and each thread gets its own unique_lock that forces the thread to wait its turn if the mutex is already locked by something else.
As you can see, the resource can't really be separated from the CV/mutex pair, which is why I said in a comment that your wrapper class wasn't really fitting IMHO, since it indeed tries to separate them.
The usual approach is not to make a wrapper for the CV/mutex pair as you tried to, but for the whole CV/mutex/resource trio. Eg. a thread-safe message queue where the consumer threads will wait on the CV until the queue has messages ready to be consumed.
If you really want to wrap just the CV/mutex pair, you should get rid of your lock()/release() methods, which are unsafe (from a RAII point of view), and replace them with a single lock() method returning a unique_lock:

std::unique_lock<std::mutex> lock() {
    return std::unique_lock<std::mutex>(cMutex);
}
This way you can use your Cond wrapper class in rather the same way as what I showed above:
Cond cond;
Resource resource;

void produce() {
    {
        auto lock = cond.lock();
        make_ready(resource);
    }
    cond.notify(); // or notifyAll()
}

void consume() {
    auto lock = cond.lock();
    while (!is_ready(resource))
        cond.wait(lock);
    use(resource);
}
But honestly I'm not sure it's worth the trouble: what if you want to use a recursive_mutex instead of a plain mutex? Well, you'd have to make a template out of your class so that you can choose the mutex type (or write a second class altogether, yay for code duplication). And anyway you don't gain much since you still have to write pretty much the same code in order to manage the resource. A wrapper class only for the CV/mutex pair is too thin a wrapper to be really useful IMHO. But as usual, YMMV.

What's the difference between deadlock and livelock?

Can somebody please explain with examples (of code) what is the difference between deadlock and livelock?
Taken from http://en.wikipedia.org/wiki/Deadlock:

In concurrent computing, a deadlock is a state in which each member of a group of actions is waiting for some other member to release a lock.

A livelock is similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, none progressing. Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing.

A real-world example of livelock occurs when two people meet in a narrow corridor, and each tries to be polite by moving aside to let the other pass, but they end up swaying from side to side without making any progress because they both repeatedly move the same way at the same time.

Livelock is a risk with some algorithms that detect and recover from deadlock. If more than one process takes action, the deadlock detection algorithm can be repeatedly triggered. This can be avoided by ensuring that only one process (chosen randomly or by priority) takes action.
Livelock

A thread often acts in response to the action of another thread. If the other thread's action is also a response to the action of another thread, then livelock may result.

As with deadlock, livelocked threads are unable to make further progress. However, the threads are not blocked — they are simply too busy responding to each other to resume work. This is comparable to two people attempting to pass each other in a corridor: Alphonse moves to his left to let Gaston pass, while Gaston moves to his right to let Alphonse pass. Seeing that they are still blocking each other, Alphonse moves to his right, while Gaston moves to his left. They're still blocking each other, and so on...

The main difference between livelock and deadlock is that the threads are not blocked; instead, they keep trying to respond to each other continuously.

Picture two circles (threads or processes) each trying to give way to the other by moving left and right, yet never managing to get past each other.
All the content and examples here are from Operating Systems: Internals and Design Principles, William Stallings, 8th Edition.
Deadlock: A situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something.

For example, consider two processes, P1 and P2, and two resources, R1 and R2. Suppose that each process needs access to both resources to perform part of its function. Then it is possible to have the following situation: the OS assigns R1 to P2, and R2 to P1. Each process is waiting for one of the two resources. Neither will release the resource that it already owns until it has acquired the other resource and performed the function requiring both resources. The two processes are deadlocked.

Livelock: A situation in which two or more processes continuously change their states in response to changes in the other process(es) without doing any useful work.

Starvation: A situation in which a runnable process is overlooked indefinitely by the scheduler; although it is able to proceed, it is never chosen.

Suppose that three processes (P1, P2, P3) each require periodic access to resource R. Consider the situation in which P1 is in possession of the resource, and both P2 and P3 are delayed, waiting for that resource. When P1 exits its critical section, either P2 or P3 should be allowed access to R. Assume that the OS grants access to P3 and that P1 again requires access before P3 completes its critical section. If the OS grants access to P1 after P3 has finished, and subsequently alternately grants access to P1 and P3, then P2 may indefinitely be denied access to the resource, even though there is no deadlock situation.
APPENDIX A - TOPICS IN CONCURRENCY
Deadlock Example
If both processes set their flags to true before either has executed the while statement, then each will think that the other has entered its critical section, causing deadlock.
/* PROCESS 0 */
flag[0] = true;         // <- get lock 0
while (flag[1])         // <- is lock 1 free?
    /* do nothing */;   // <- no? then wait (1 second, for example) and test again.
                        //    on more sophisticated setups we can ask
                        //    to be woken when lock 1 is freed
/* critical section */; // <- do what we need (this will never happen)
flag[0] = false;        // <- release our lock

/* PROCESS 1 */
flag[1] = true;
while (flag[0])
    /* do nothing */;
/* critical section */;
flag[1] = false;
Livelock Example
/* PROCESS 0 */
flag[0] = true;         // <- get lock 0
while (flag[1]) {
    flag[0] = false;    // <- instead of sleeping, we do useless work
                        //    needed by the lock mechanism
    /* delay */;        // <- wait for a second
    flag[0] = true;     // <- and restart the useless work again
}
/* critical section */; // <- do what we need (this will never happen)
flag[0] = false;

/* PROCESS 1 */
flag[1] = true;
while (flag[0]) {
    flag[1] = false;
    /* delay */;
    flag[1] = true;
}
/* critical section */;
flag[1] = false;
[...] consider the following sequence of events:

1. P0 sets flag[0] to true.
2. P1 sets flag[1] to true.
3. P0 checks flag[1].
4. P1 checks flag[0].
5. P0 sets flag[0] to false.
6. P1 sets flag[1] to false.
7. P0 sets flag[0] to true.
8. P1 sets flag[1] to true.

This sequence could be extended indefinitely, and neither process could enter its critical section. Strictly speaking, this is not deadlock, because any alteration in the relative speed of the two processes will break this cycle and allow one to enter the critical section. This condition is referred to as livelock. Recall that deadlock occurs when a set of processes wishes to enter their critical sections but no process can succeed. With livelock, there are possible sequences of executions that succeed, but it is also possible to describe one or more execution sequences in which no process ever enters its critical section.
Not content from the book anymore.
And what about spinlocks?
Spinlock is a technique to avoid the cost of the OS lock mechanism. Typically you would do:
try
{
    lock = beginLock();
    doSomething();
}
finally
{
    endLock();
}
A problem starts to appear when beginLock() costs much more than doSomething(). In very exaggerated terms, imagine what happens when beginLock costs 1 second but doSomething costs just 1 millisecond.
In this case, if you just waited 1 millisecond, you would avoid being hindered for 1 second.
Why would beginLock cost so much? If the lock is free it does not cost a lot (see https://stackoverflow.com/a/49712993/5397116), but if the lock is not free the OS will "freeze" your thread, set up a mechanism to wake it when the lock is freed, and then wake it again in the future.
All of this is much more expensive than a few loops checking the lock. That is why it is sometimes better to do a "spinlock".
For example:
void beginSpinLock(lock)
{
    if (lock) loopFor(1 millisecond);
    else
    {
        lock = true;
        return;
    }

    if (lock) loopFor(2 milliseconds);
    else
    {
        lock = true;
        return;
    }

    // the important thing is that the part above never
    // causes the thread to sleep.
    // It is "burning" the time slice of this thread.
    // Hopefully for good.

    // some implementations fall back to the OS lock mechanism
    // after a few tries
    if (lock) return beginLock(lock);
    else
    {
        lock = true;
        return;
    }
}
If your implementation is not careful, you can fall into livelock, spending all the CPU on the lock mechanism.
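For reference, here is a minimal runnable C++ version of the same idea (a sketch, not taken from the answer above); the key detail the pseudocode glosses over is that the test-and-set must be atomic:

#include <atomic>
#include <thread>

class SpinLock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    void lock() {
        int spins = 0;
        // Atomically test-and-set; spin while someone else holds the lock.
        while (flag.test_and_set(std::memory_order_acquire)) {
            // Burn a few iterations first, then yield to the OS so we do
            // not waste whole time slices if the holder is descheduled.
            if (++spins > 1000) std::this_thread::yield();
        }
    }
    void unlock() { flag.clear(std::memory_order_release); }
};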
Also see:
https://preshing.com/20120226/roll-your-own-lightweight-mutex/
Is my spin lock implementation correct and optimal?
Summary:

Deadlock: a situation where nobody progresses, doing nothing (sleeping, waiting, etc.). CPU usage will be low;
Livelock: a situation where nobody progresses, but CPU is spent on the lock mechanism and not on your calculation;
Starvation: a situation where one process never gets the chance to run, by pure bad luck or by some property of its own (low priority, for example);
Spinlock: a technique for avoiding the cost of waiting for the lock to be freed.
DEADLOCK

Deadlock is a condition in which a task waits indefinitely for conditions that can never be satisfied:

- the task claims exclusive control over shared resources
- the task holds resources while waiting for other resources to be released
- tasks cannot be forced to relinquish resources
- a circular waiting condition exists

LIVELOCK

Livelock conditions can arise when two or more tasks depend on and use the same resource, causing a circular dependency condition where those tasks continue running forever, thus blocking all lower-priority tasks from running (these lower-priority tasks experience a condition called starvation).
Maybe these two examples will illustrate the difference between a deadlock and a livelock:
Java example for a deadlock:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockSample {
    private static final Lock lock1 = new ReentrantLock(true);
    private static final Lock lock2 = new ReentrantLock(true);

    public static void main(String[] args) {
        Thread threadA = new Thread(DeadlockSample::doA, "Thread A");
        Thread threadB = new Thread(DeadlockSample::doB, "Thread B");
        threadA.start();
        threadB.start();
    }

    public static void doA() {
        System.out.println(Thread.currentThread().getName() + " : waits for lock 1");
        lock1.lock();
        System.out.println(Thread.currentThread().getName() + " : holds lock 1");
        try {
            System.out.println(Thread.currentThread().getName() + " : waits for lock 2");
            lock2.lock();
            System.out.println(Thread.currentThread().getName() + " : holds lock 2");
            try {
                System.out.println(Thread.currentThread().getName() + " : critical section of doA()");
            } finally {
                lock2.unlock();
                System.out.println(Thread.currentThread().getName() + " : does not hold lock 2 any longer");
            }
        } finally {
            lock1.unlock();
            System.out.println(Thread.currentThread().getName() + " : does not hold lock 1 any longer");
        }
    }

    public static void doB() {
        System.out.println(Thread.currentThread().getName() + " : waits for lock 2");
        lock2.lock();
        System.out.println(Thread.currentThread().getName() + " : holds lock 2");
        try {
            System.out.println(Thread.currentThread().getName() + " : waits for lock 1");
            lock1.lock();
            System.out.println(Thread.currentThread().getName() + " : holds lock 1");
            try {
                System.out.println(Thread.currentThread().getName() + " : critical section of doB()");
            } finally {
                lock1.unlock();
                System.out.println(Thread.currentThread().getName() + " : does not hold lock 1 any longer");
            }
        } finally {
            lock2.unlock();
            System.out.println(Thread.currentThread().getName() + " : does not hold lock 2 any longer");
        }
    }
}
Sample output:
Thread A : waits for lock 1
Thread B : waits for lock 2
Thread A : holds lock 1
Thread B : holds lock 2
Thread B : waits for lock 1
Thread A : waits for lock 2
Java example for a livelock:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LivelockSample {
    private static final Lock lock1 = new ReentrantLock(true);
    private static final Lock lock2 = new ReentrantLock(true);

    public static void main(String[] args) {
        Thread threadA = new Thread(LivelockSample::doA, "Thread A");
        Thread threadB = new Thread(LivelockSample::doB, "Thread B");
        threadA.start();
        threadB.start();
    }

    public static void doA() {
        try {
            while (!lock1.tryLock()) {
                System.out.println(Thread.currentThread().getName() + " : waits for lock 1");
                Thread.sleep(100);
            }
            System.out.println(Thread.currentThread().getName() + " : holds lock 1");
            try {
                while (!lock2.tryLock()) {
                    System.out.println(Thread.currentThread().getName() + " : waits for lock 2");
                    Thread.sleep(100);
                }
                System.out.println(Thread.currentThread().getName() + " : holds lock 2");
                try {
                    System.out.println(Thread.currentThread().getName() + " : critical section of doA()");
                } finally {
                    lock2.unlock();
                    System.out.println(Thread.currentThread().getName() + " : does not hold lock 2 any longer");
                }
            } finally {
                lock1.unlock();
                System.out.println(Thread.currentThread().getName() + " : does not hold lock 1 any longer");
            }
        } catch (InterruptedException e) {
            // can be ignored here for this sample
        }
    }

    public static void doB() {
        try {
            while (!lock2.tryLock()) {
                System.out.println(Thread.currentThread().getName() + " : waits for lock 2");
                Thread.sleep(100);
            }
            System.out.println(Thread.currentThread().getName() + " : holds lock 2");
            try {
                while (!lock1.tryLock()) {
                    System.out.println(Thread.currentThread().getName() + " : waits for lock 1");
                    Thread.sleep(100);
                }
                System.out.println(Thread.currentThread().getName() + " : holds lock 1");
                try {
                    System.out.println(Thread.currentThread().getName() + " : critical section of doB()");
                } finally {
                    lock1.unlock();
                    System.out.println(Thread.currentThread().getName() + " : does not hold lock 1 any longer");
                }
            } finally {
                lock2.unlock();
                System.out.println(Thread.currentThread().getName() + " : does not hold lock 2 any longer");
            }
        } catch (InterruptedException e) {
            // can be ignored here for this sample
        }
    }
}
Sample output:
Thread B : holds lock 2
Thread A : holds lock 1
Thread A : waits for lock 2
Thread B : waits for lock 1
Thread B : waits for lock 1
Thread A : waits for lock 2
Thread A : waits for lock 2
Thread B : waits for lock 1
Thread B : waits for lock 1
Thread A : waits for lock 2
Thread A : waits for lock 2
Thread B : waits for lock 1
...
Both examples force the threads to acquire the locks in different orders.
While the deadlocked threads simply wait for the other lock, the livelocked threads do not really wait - they desperately try to acquire the lock without any chance of getting it. Every try consumes CPU cycles.
Imagine you have thread A and thread B. They are both synchronized on the same object, and inside this block there's a global variable they are both updating:
static boolean commonVar = false;
Object lock = new Object();
...

void threadAMethod() {
    ...
    while (commonVar == false) {
        synchronized (lock) {
            ...
            commonVar = true;
        }
    }
}

void threadBMethod() {
    ...
    while (commonVar == true) {
        synchronized (lock) {
            ...
            commonVar = false;
        }
    }
}
So, when thread A enters the while loop and holds the lock, it does what it has to do and sets commonVar to true. Then thread B comes in, enters its while loop, and since commonVar is now true, it is able to hold the lock. It does so, executes the synchronized block, and sets commonVar back to false. Now thread A gets its new CPU window; it was about to quit the while loop, but thread B has just set commonVar back to false, so the cycle repeats all over again. The threads do something (so they're not blocked in the traditional sense), but for pretty much nothing.
It may also be worth mentioning that a livelock does not necessarily have to appear here. I'm assuming that the scheduler favours the other thread once the synchronized block finishes executing. Most of the time, I think it's a hard-to-hit scenario and depends on many things happening under the hood.
I just want to share some knowledge.

Deadlocks

A set of threads/processes is deadlocked if each thread/process in the set is waiting for an event that only another process in the set can cause.

The important thing here is that the other process is also in the same set. That means the other process is also blocked, and no one can proceed.

Deadlocks occur when processes are granted exclusive access to resources.

These four conditions must be satisfied for a deadlock to occur:

1. Mutual exclusion condition (each resource is assigned to one process)
2. Hold and wait condition (a process holding resources can ask for other resources at the same time)
3. No preemption condition (previously granted resources cannot forcibly be taken away) # this condition depends on the application
4. Circular wait condition (there must be a circular chain of two or more processes, each waiting for a resource held by the next member of the chain) # it will happen dynamically

If all of these conditions hold, a deadlock may occur; the standard remedy is to break one of them, for example the circular wait, as in the sketch below.
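As an illustration (C++ here, though the conditions above are language-agnostic), std::scoped_lock acquires several mutexes with a built-in deadlock-avoidance algorithm, which breaks the circular-wait condition; the names r1/r2/p1/p2 are hypothetical:

#include <mutex>

std::mutex r1, r2; // the two shared resources

void p1() {
    // Locks both mutexes with deadlock avoidance (std::lock under the
    // hood), regardless of the argument order.
    std::scoped_lock lock(r1, r2);
    // ... use both resources ...
}

void p2() {
    std::scoped_lock lock(r2, r1); // reversed order is still safe
    // ... use both resources ...
}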
LiveLock

Each thread/process repeats the same state again and again but doesn't progress further. It is similar to a deadlock in that the process cannot enter the critical section. However, in a deadlock processes wait without doing anything, while in a livelock the processes try to proceed but keep returning to the same state again and again.

(In a deadlocked computation there is no possible execution sequence which succeeds. In a livelocked computation there are successful computations, but there are also one or more execution sequences in which no process enters its critical section.)

Difference between deadlock and livelock

When a deadlock happens, no execution happens at all. In a livelock, some executions happen, but those executions are not enough to enter the critical section.
