How to force a three-way deadlock with race condition - multithreading

What I have so far
I am writing a test case for a deadlock detection algorithm. We want to detect potential deadlock scenarios. For the moment, we're checking for locks being acquired out-of-order.
A simple two-way test for this is as follows:
Thread1 acquires Lock1
Thread2 acquires Lock2
Thread1 blocks trying to acquire Lock2
Thread2 blocks trying to acquire Lock1 <-- DEADLOCK
We rightly detect this as a deadlock. But if Thread2 is delayed a bit, it will not deadlock, yet we still rightly detect it as a potential deadlock:
Thread1 acquires Lock1
Thread1 acquires Lock2
Thread1 releases both locks
Thread2 acquires Lock2
Thread2 acquires Lock1
Execution continues as normal.
We detect this as a potential deadlock because Thread1 acquired these locks in a different order than Thread2. So if the timing is just right, a deadlock is possible.
Where I'm stuck
I am writing test cases around deadlocks involving three or more threads. I can easily force a three-way deadlock like this:
Thread1 acquires Lock1
Thread2 acquires Lock2
Thread3 acquires Lock3
Thread3 blocks acquiring Lock2
Thread2 blocks acquiring Lock1
Thread1 blocks acquiring Lock3
Again, we can rightly detect this as a deadlock.
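A minimal sketch of how that guaranteed deadlock can be pinned down in a test (this is an assumption about the harness, using C++ std::thread, std::mutex and a C++20 std::barrier so that every thread is guaranteed to hold its first lock before any of them requests its second):

#include <barrier>
#include <mutex>
#include <thread>

int main() {
    std::mutex lock1, lock2, lock3;
    std::barrier sync(3);                       // all three threads rendezvous here

    auto worker = [&](std::mutex& first, std::mutex& second) {
        std::lock_guard<std::mutex> hold_first(first);
        sync.arrive_and_wait();                 // every thread now holds its first lock
        std::lock_guard<std::mutex> hold_second(second); // closes the cycle: blocks forever
    };

    std::thread t1(worker, std::ref(lock1), std::ref(lock3)); // Thread1: Lock1 then Lock3
    std::thread t2(worker, std::ref(lock2), std::ref(lock1)); // Thread2: Lock2 then Lock1
    std::thread t3(worker, std::ref(lock3), std::ref(lock2)); // Thread3: Lock3 then Lock2
    t1.join(); t2.join(); t3.join();            // never returns: the deadlock is intentional
}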
But how would I change this test case from a guaranteed deadlock to a potential deadlock? How would I change timings to continue execution as normal?

The easiest way to provide an example where deadlock does not occur despite a different locking order is to separate the execution of the threads in time. You already did that for the two-thread case: the first thread acquires and releases all of its locks, and only then does the second thread start. For the three-thread case the situation is the same: you can run the first thread to completion, then the second, and then the third.
More complex scenarios are also possible:
Thread1 acquires Lock1
Thread3 acquires Lock3
Thread1 blocks acquiring Lock3
Thread3 acquires Lock2
Thread2 blocks acquiring Lock2
Thread3 finishes and releases Lock3 and Lock2
Thread2 acquires Lock2
Thread2 blocks acquiring Lock1
Thread1 acquires Lock3 released by Thread3, finishes and releases Lock1 and Lock3
Thread2 acquires Lock1, finishes and releases locks
But if you want a demonstration, the simple consecutive execution of the threads is the easiest one to understand.
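For example, a minimal sketch of that serialized three-thread case, assuming C++ std::thread and std::mutex: the lock orders still conflict (1 then 3, 2 then 1, 3 then 2), but because each thread runs to completion before the next starts, execution continues as normal and only a potential deadlock remains for the detector to report.

#include <mutex>
#include <thread>

int main() {
    std::mutex lock1, lock2, lock3;

    // Same cyclic lock orders as the deadlocking test above.
    auto worker = [&](std::mutex& first, std::mutex& second) {
        std::lock_guard<std::mutex> a(first);
        std::lock_guard<std::mutex> b(second);
        // ... do some work while holding both locks ...
    };

    // Joining each thread before starting the next separates them in time,
    // so the run always completes, yet the out-of-order acquisitions remain.
    std::thread t1(worker, std::ref(lock1), std::ref(lock3)); t1.join();
    std::thread t2(worker, std::ref(lock2), std::ref(lock1)); t2.join();
    std::thread t3(worker, std::ref(lock3), std::ref(lock2)); t3.join();
}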

Related

Can two wait operations executed by two separate threads on different semaphores be interleaved during execution?

I'm motivated by this quotation from "Concepts in Programming Languages" by John C. Mitchell:
"Atomicity prevents individual statements of one wait procedure from being
interleaved with individual statements of another wait on the same semaphore."
Wait and signal operations need to be atomic, which is often enforced by some "lower"-level mechanism for acquiring a lock: disabling interrupts, disabling preemption, test-and-set, and so on. But, conceptually, how can these locks be in some way "private" to each semaphore instance?
In other words, is it allowed, for example, that one thread acquires the lock at the beginning of a wait operation on one semaphore and is later preempted in the middle of that wait operation, after which another thread acquires the lock at the beginning of a wait operation on some other semaphore and enters the body of its wait operation, so that two threads are inside wait operations on different semaphores at the same time? Or, in short: are the wait operations on two different semaphores mutually exclusive?
My point is: if a thread acquires the lock in a wait operation on one semaphore s1, is it allowed for another thread to acquire the lock at the same time in a wait operation on another semaphore s2? I'm emphasizing that these are two different semaphore instances, not the same one.
For example:
class Semaphore {
    ...
public:
    void wait();
    ...
};

void Semaphore::wait() {
    lock();
    //POINT OF CONTINUATION FOR THREAD 2!//
    if (--val < 0) {
        //POINT OF PREEMPTION FOR THREAD 1!//
        block();
    }
    unlock();
}

Semaphore s1;
Semaphore s2;
...
So...
Is it allowed, at some point of execution, for one thread to be preempted while executing the wait operation on semaphore s1 at //POINT OF PREEMPTION FOR THREAD 1!//, with control transferring to another thread which then executes the wait operation on semaphore s2 at //POINT OF CONTINUATION FOR THREAD 2!//...
...or...
Is it allowed for the instructions of a wait operation on one semaphore to be interleaved with the instructions of a wait operation on another semaphore?
..or...
Is it allowed for more than one thread to be inside wait operations on different semaphores at the same time?
Sorry for my wordiness but I really struggle to clarify my question. Thanks in advance.
Yes, it's allowed. One of the reasons you would use two different locks, rather than using the same lock for everything, is to avoid unnecessary dependencies like this.
Is it allowed for the instructions of a wait operation on one semaphore to be interleaved with the instructions of a wait operation on another semaphore?
Absolutely.
Is it allowed for more than one thread to be inside wait operations on different semaphores at the same time?
Absolutely.
Prohibiting any of these things would hurt performance significantly for no benefit. Contention is the enemy of multi-threaded performance.
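For illustration, a sketch of what "private per instance" can look like, assuming a toy counting semaphore built on C++ std::mutex and std::condition_variable rather than the lock()/block() primitives from the question: each Semaphore object owns its own mutex, so a thread inside s1.wait() never keeps another thread out of s2.wait().

#include <condition_variable>
#include <mutex>

class Semaphore {
public:
    explicit Semaphore(int initial) : val(initial) {}

    void wait() {
        std::unique_lock<std::mutex> lk(m);      // locks only this instance's mutex
        cv.wait(lk, [this] { return val > 0; }); // sleeps without holding any global lock
        --val;
    }

    void signal() {
        std::lock_guard<std::mutex> lk(m);
        ++val;
        cv.notify_one();
    }

private:
    std::mutex m;                // private to this semaphore instance
    std::condition_variable cv;  // private to this semaphore instance
    int val;
};

Semaphore s1(0);
Semaphore s2(0);
// A thread blocked inside s1.wait() touches only s1.m, so another thread can
// be inside s2.wait() at exactly the same time.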

Condvar behaviour after signal, but before mutex release

I am trying to understand what guarantees I have just after I signalled a condvar.
The basic pattern of usage is, I believe, this (pseudocode):
Consumer Thread:
Mutex.Enter()
while(variable != ready)
Condvar.Wait()
Mutex.Exit()
Producer Thread:
Mutex.Enter()
variable = ready
Condvar.Broadcast()
[Unknown?]
Mutex.Exit()
My question is: what am I guaranteed about the [Unknown] point in the code above? I am still holding the mutex, but what can I know about the state of the consumer?
From the Man page, it is unclear to me what state the producer is in after it broadcasts/signals and before it releases the mutex.
From condition vars:
pthread_cond_wait() blocks the calling thread until the specified condition is signalled. This routine should be called while mutex is locked, and it will automatically release the mutex while it waits. After signal is received and thread is awakened, mutex will be automatically locked for use by the thread. The programmer is then responsible for unlocking mutex when the thread is finished with it.
So, when the producer is executing the [Unknown] section of code, the producer is holding the mutex, and the consumer is blocked on the mutex until the producer releases it.
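The same pattern rendered as a sketch in C++, assuming std::mutex and std::condition_variable in place of the pseudocode's Mutex and Condvar; at the marked point the producer still owns the mutex, so any consumer woken by the notification is stuck re-acquiring it inside wait() and cannot observe the new state yet.

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool ready = false;

void consumer() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return ready; });  // wait() re-locks m before returning
    // ... consume while holding the mutex ...
}

void producer() {
    std::unique_lock<std::mutex> lk(m);
    ready = true;
    cv.notify_all();
    // [Unknown] point: the producer still holds m. A consumer woken by the
    // notification is now blocked re-acquiring m inside cv.wait() and cannot
    // return from wait() until the unlock below.
    lk.unlock();
}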

What happens if a posix mutex is unlocked while multiple threads are blocked on this mutex?

thread 1:
lock mutex1
long time operation
unlock mutex1
thread2:
lock mutex1
...
thread3:
lock mutex1
...
thread4:
lock mutex1
...
threadn:
lock mutex1
...
When thread1 unlocks mutex1, which thread will be woken up? Is there a standard specification for it?
When you unlock a POSIX mutex using pthread_mutex_unlock, if several threads are waiting on the mutex, only one of them will wake up.
The documentation states:
If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, the scheduling policy shall determine which thread shall acquire the mutex.
When thread1 unlocks mutex1, which thread will be woken up?
One of the other threads that are currently blocked on the mutex. It is unspecified which of them.
Is there a standard specification for it?
No. Different systems may implement it in different ways. (Some might use a simple fifo order for waking the threads, others might use heuristics for deciding which thread to wake up).
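A small experiment that shows this, assuming POSIX threads (<pthread.h>): several threads block on one mutex, and the order in which they acquire it after the unlock is whatever the scheduling policy picks, so it typically varies between runs and platforms.

#include <cstdio>
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;

void* worker(void* arg) {
    long id = (long)arg;
    pthread_mutex_lock(&mutex1);    // blocks while thread1 holds mutex1
    std::printf("thread%ld acquired mutex1\n", id);
    pthread_mutex_unlock(&mutex1);
    return nullptr;
}

int main() {
    pthread_mutex_lock(&mutex1);    // thread1: lock mutex1
    pthread_t t[4];
    for (long i = 0; i < 4; ++i)
        pthread_create(&t[i], nullptr, worker, (void*)(i + 2));
    sleep(1);                       // "long time operation": the workers pile up on the mutex
    pthread_mutex_unlock(&mutex1);  // exactly one blocked thread acquires it next;
                                    // which one is up to the scheduling policy
    for (int i = 0; i < 4; ++i)
        pthread_join(t[i], nullptr);
    return 0;
}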

Does a thread lock another thread?

Singularity - If a thread managed to lock a mutex, it is assured that no other thread will be able to lock the thread until the original thread releases the lock.
Non-Busy Wait - If a thread attempts to lock a thread that was locked by a second thread, the first thread will be suspended (and will not consume any CPU resources) until the lock is freed by the second thread. At this time, the first thread will wake up and continue execution, having the mutex locked by it.
From: Multi-Threaded Programming With POSIX Threads
Question: I thought threads lock the mutex variables. Threads don't lock other threads?
What do the bold statements above mean? How can one thread lock other thread?
Corrections:
If a thread managed to lock a mutex, it is assured that no other thread will be able to lock the mutex until the original thread releases the lock.
Non-Busy Wait - If a thread attempts to lock a mutex that was locked by a second thread, the first thread will be suspended (and will not consume any CPU resources) until the lock is freed by the second thread. At this time, the first thread will wake up and continue execution, having the mutex locked by it.
It's a good thing you don't take for granted whatever you read on the internet; I also give you a thumbs up for paying attention to what you read.

some questions regarding pthread_mutex_lock

When thread1 has already acquired a lock on a mutex object and thread2 tries to acquire a lock on the same mutex object, thread2 will be blocked.
Here are my questions:
1. How will thread2 come to know that the mutex object has been unlocked?
2. Will thread2 try to acquire the lock at predefined intervals of time?
I sense a misunderstanding of how a mutex works. When thread2 tries to acquire a mutex that is already owned by thread1, the call that tries to take the mutex will not return until the mutex becomes available (unless you use a non-blocking or timed variant such as pthread_mutex_trylock() or pthread_mutex_timedlock()).
So thread2 does not need to loop there and keep trying to take the mutex (unless you are using a timed variant so you can abort the attempt based on some other condition, such as a cancel condition).
This is really OS dependent, but what usually happens is that thread2 gets suspended and put on a wait list maintained by the mutex. When the mutex becomes available, a thread on the mutex's wait list gets removed from the list and put back on the list of active threads. The OS can then schedule it as it usually would. thread2 is totally quiescent until it can acquire the mutex.
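A minimal sketch of that behaviour, assuming POSIX threads: thread2's single call to pthread_mutex_lock() simply does not return until the mutex becomes available, and there is no retry loop or polling interval in user code.

#include <cstdio>
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void* thread2_fn(void*) {
    // This one call blocks; the thread is suspended by the OS and consumes
    // no CPU until thread1 releases the mutex. There is no retry interval.
    pthread_mutex_lock(&m);
    std::printf("thread2: acquired the mutex\n");
    pthread_mutex_unlock(&m);
    return nullptr;
}

int main() {
    pthread_mutex_lock(&m);         // thread1 takes the mutex first
    pthread_t t2;
    pthread_create(&t2, nullptr, thread2_fn, nullptr);
    sleep(2);                       // thread2 sits blocked inside pthread_mutex_lock()
    pthread_mutex_unlock(&m);       // thread2's lock call now returns with the mutex held
    pthread_join(t2, nullptr);
    return 0;
}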

Resources