Assume you have two processes, P1 and P2. P1 has a high priority, P2 has a low priority. P1 and P2 have one shared semaphore (i.e., they both carry out waits and posts on the same semaphore). The processes can be interleaved in any arbitrary order (e.g. P2 could be started before P1).
Briefly explain whether the processes could deadlock when:
ii. both processes run on a Linux system as time sharing tasks
iii. both processes run on a Windows 7 system as variable tasks
iv. both processes run on a Windows 7 system as real-time tasks.
I think P1 and P2 can only result in priority inversion. According to one of the requirements of deadlock (circular wait: there is a circular chain of two or more processes, each waiting for a resource held by another process in the chain), priority inversion is not the same as deadlock. Besides, P1 and P2 share only one semaphore, which means there can be no circular wait, so they will never deadlock. Therefore, the answer in all cases is no.
Is that correct? If not, then what's the answer?
You are correct: no deadlock is possible with only one semaphore.
Deadlock between two processes can happen only if P1 holds a resource needed by P2 while also requiring a resource held by P2. Then P1 can't proceed until P2 releases its resource, and P2 can't proceed until P1 releases its resource: each is stuck waiting for the other, and neither lets the other move forward. As you already mentioned, the circular-wait condition can't be fulfilled with a single semaphore.
Also, P1 simply waiting for P2 to free the resource isn't priority inversion by itself. Priority inversion happens when some Px has a priority between P1 and P2, P1 waits for P2 to free the resource, and P2 in turn waits for Px because Px has the higher priority. In effect, P1 ends up waiting on the lower-priority Px even though it shares no resource (or anything else) with it.
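To make the setup in the question concrete, here is a minimal sketch (illustrative names; a POSIX named semaphore so that two separate processes can share it). Whatever the scheduling class, the worst case is that one process blocks until the other posts; with a single semaphore there is no second resource with which to close a cycle of waits.

```c
/* shared_sem.c -- illustrative sketch, not part of the original question.
 * Both P1 and P2 run this same program and share one POSIX named
 * semaphore (the name "/demo_sem" is made up). Compile with -pthread. */
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Open (or create) the shared semaphore with an initial value of 1. */
    sem_t *s = sem_open("/demo_sem", O_CREAT, 0644, 1);
    if (s == SEM_FAILED) { perror("sem_open"); return 1; }

    sem_wait(s);   /* worst case: block here until the other process posts;
                      with only one semaphore there is no way to form a
                      cycle of waits, hence no deadlock                    */
    printf("pid %d is in the critical section\n", (int)getpid());
    sleep(1);      /* stand-in for real work                               */
    sem_post(s);

    sem_close(s);
    return 0;
}
```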
Is it possible to implement a real FIFO mutex, meaning that it is guaranteed that a requesting thread is granted its request only if all "younger" threads have been granted their requests? Note that the order of granting the requests of two threads that requested at the same time is not relevant, meaning that the solution can .
If the answer is no, is it possible to guarantee the following condition:
Let x_t be the time at which thread x made its request.
x is younger by n than y if and only if y_t - x_t = n.
x is at least younger by n than y if and only if y_t - x_t >= n.
The condition is that there exists an n such that a thread is granted its request only if all threads at least younger than it by n have been granted their requests.
Note: my terminology may not be accurate. By "requesting" I mean requesting to acquire and lock the mutex. By "granting" I mean the mutex actually being locked by the specified thread.
Q: What does "in C" mean? You can do anything in C if you're willing to write the code.*
Are you asking whether a fair mutex can be provided by an operating system? That's almost trivially easy. The OS already must have a container for each mutex in which it stores the IDs of all the threads that are waiting for that mutex. All you have to do to ensure fairness is to change the wake-up strategy so that, when the mutex is released, the thread whose ID has been in the container the longest is always the one woken.
Are you asking whether you can implement a fair mutex in application code, by using one or more OS-provided unfair mutexes? That is possible, but it won't be as clean as an OS-provided fair mutex. One approach would be to have a pool of condition variables (CVs), and have each different thread use a different CV to await its turn to enter the mutex. When a thread tries and fails to acquire the mutex, it would grab a CV from the pool, put a pointer to the CV into a FIFO queue, and then it would wait on the CV.
Whenever a thread releases the mutex, it would signal the CV that has been waiting longest in the queue. The awakened thread would then return its CV to the pool and enter the mutex.
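Here is a minimal sketch of that approach, assuming POSIX threads. The fair_mutex_* names are made up, and instead of a separate pool each waiter simply carries its own condition variable on its stack, which serves the same purpose.

```c
/* Illustrative sketch only: a FIFO ("fair") mutex built from one ordinary
 * pthread mutex plus one condition variable per waiter. */
#include <pthread.h>
#include <stdbool.h>

struct waiter {
    pthread_cond_t cv;          /* this thread's private condition variable */
    struct waiter *next;        /* FIFO queue link                          */
};

typedef struct {
    pthread_mutex_t lock;       /* protects the fields below                */
    bool            busy;       /* is the fair mutex currently held?        */
    struct waiter  *head, *tail;/* oldest and newest waiters                */
} fair_mutex_t;

void fair_mutex_init(fair_mutex_t *m)
{
    pthread_mutex_init(&m->lock, NULL);
    m->busy = false;
    m->head = m->tail = NULL;
}

void fair_mutex_lock(fair_mutex_t *m)
{
    pthread_mutex_lock(&m->lock);
    if (!m->busy && m->head == NULL) {      /* uncontended: take it at once */
        m->busy = true;
        pthread_mutex_unlock(&m->lock);
        return;
    }
    /* Contended: append our waiter record to the FIFO queue and sleep on
     * our own condition variable until the unlocker hands the mutex to us. */
    struct waiter w;
    pthread_cond_init(&w.cv, NULL);
    w.next = NULL;
    if (m->tail) m->tail->next = &w; else m->head = &w;
    m->tail = &w;

    while (m->head != &w || m->busy)        /* tolerate spurious wakeups    */
        pthread_cond_wait(&w.cv, &m->lock);

    m->head = w.next;                       /* dequeue ourselves            */
    if (m->head == NULL) m->tail = NULL;
    m->busy = true;
    pthread_cond_destroy(&w.cv);
    pthread_mutex_unlock(&m->lock);
}

void fair_mutex_unlock(fair_mutex_t *m)
{
    pthread_mutex_lock(&m->lock);
    m->busy = false;
    if (m->head)                            /* wake the longest waiter only */
        pthread_cond_signal(&m->head->cv);
    pthread_mutex_unlock(&m->lock);
}
```

Note the extra check in fair_mutex_lock: a newcomer is allowed to take the mutex directly only when nobody is queued, so it cannot barge in ahead of threads that are already waiting.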
* But see Greenspun's Tenth Rule.
Scheduling scheme: preemptive priority scheduling
Situation:
A process L (low priority) acquires a spinlock on a resource R. While still in the critical section, L gets preempted because another process, H (higher priority), arrives in the ready queue.
However, H also needs to access resource R and so tries to acquire the spinlock, which results in H busy-waiting. Because spinlocks are used, H never actually enters the Wait state and is always either Running or Ready (the latter only if an even higher-priority process arrives in the ready queue), preventing L, or any process with a priority lower than H's, from ever executing.
A) All processes with priority less than H can be considered to be under Starvation
B) All processes with priority less than H, as well as the process H itself, can be considered to be in a deadlock. [But don't the processes have to be in the Wait state for the system to be considered deadlocked?]
C) All processes with priority less than H, as well as the process H itself, can be considered to be in a livelock. [But only the state of H changes constantly; all the low-priority processes remain in the Ready state. Don't the states of all processes need to change continuously (as part of a spinlock) for the system to be in a livelock?]
D) H alone can be considered to be in livelock, all lower priority processes are just under starvation, not in livelock.
E) H will not progress, but cannot be considered to be in livelock. All lower priority processes are just under starvation, not in livelock.
Which of the above statements are correct? Can you explain?
This is not a livelock, because the definition of livelock requires that "the states of the processes involved in the livelock constantly change with regard to one another", and here the states effectively do not change.
The first process (L) can be considered to be under processor starvation: if there were an additional processor, it could run on it, eventually release the lock, and let the second process run.
The situation can also be considered a deadlock, with two resources in the resource graph and two processes attempting to acquire those resources in opposite order: the first process owns the lock and needs the processor to proceed, while the second process owns the processor and needs the lock to proceed.
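If you want to reproduce the scenario on Linux, the sketch below is one way to do it under stated assumptions: two SCHED_FIFO threads standing in for L and H are pinned to one CPU and contend on a pthread spinlock. It needs the privilege to use SCHED_FIFO, and on a stock kernel the real-time throttling safeguard (/proc/sys/kernel/sched_rt_runtime_us) may eventually let L run, so the hang is only truly indefinite with throttling disabled.

```c
/* spin_inversion.c -- illustrative sketch; compile with -pthread and run
 * with the privileges needed for SCHED_FIFO. Expected behaviour: it hangs. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static pthread_spinlock_t lock;
static volatile unsigned long work;

static void *low_prio(void *arg)                /* plays the role of L      */
{
    (void)arg;
    pthread_spin_lock(&lock);                   /* L enters its critical
                                                   section                  */
    for (unsigned long i = 0; i < 2000000000UL; i++)
        work++;                                 /* long critical section; H
                                                   preempts L part-way in   */
    pthread_spin_unlock(&lock);                 /* never reached once H is
                                                   spinning                 */
    return NULL;
}

static void *high_prio(void *arg)               /* plays the role of H      */
{
    (void)arg;
    pthread_spin_lock(&lock);                   /* busy-waits at the higher
                                                   priority, so L never runs
                                                   again to release it      */
    pthread_spin_unlock(&lock);
    return NULL;
}

static pthread_t start(void *(*fn)(void *), int prio)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = prio };
    cpu_set_t cpus;
    pthread_t t;
    int err;

    CPU_ZERO(&cpus);
    CPU_SET(0, &cpus);                          /* L and H share CPU 0      */

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

    err = pthread_create(&t, &attr, fn, NULL);
    if (err) {                                  /* typically EPERM without
                                                   the right privileges     */
        fprintf(stderr, "pthread_create: %s\n", strerror(err));
        exit(1);
    }
    return t;
}

int main(void)
{
    pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);

    pthread_t l = start(low_prio, 10);          /* L: low RT priority       */
    sleep(1);                                   /* let L take the spinlock  */
    pthread_t h = start(high_prio, 20);         /* H preempts L on CPU 0    */

    pthread_join(h, NULL);                      /* expected to hang forever */
    pthread_join(l, NULL);
    return 0;
}
```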
I stumbled upon a problem in a multi-threading book in the library and I was wondering what I would need to do in order to minimize the number of semaphores.
What would you do in this situation?
Semaphores
Assume a process P0's execution depends on k other processes: P1, ..., Pk.
You need only one semaphore to synchronize the processes and satisfy this single constraint.
The semaphore S0 is initialized to 0, and P0 waits on S0 k times (in other words, it tries to acquire k resources).
Each of the k processes P1, ..., Pk posts (releases) S0 once at the end of its execution.
This guarantees that P0 starts executing only after all of the other k processes have completed (in any order and asynchronously).
In the link you provided, you need 4 semaphores; T1 does not need any semaphore because its execution depends on nobody else.
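A minimal sketch of this single-semaphore scheme, using POSIX semaphores and threads standing in for the processes (K and the worker bodies are made up for the example; for real processes you would use a named or process-shared semaphore):

```c
/* Illustrative sketch: P0 waits k times on one semaphore that every
 * prerequisite posts exactly once. Compile with -pthread. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define K 4                         /* number of prerequisite "processes"  */

static sem_t s0;                    /* the single semaphore, initialised 0 */

static void *worker(void *arg)      /* plays the role of Pi, i = 1..k      */
{
    long id = (long)arg;
    printf("P%ld finished its work\n", id);
    sem_post(&s0);                  /* each Pi releases S0 once at its end */
    return NULL;
}

static void *p0(void *arg)          /* plays the role of P0                */
{
    (void)arg;
    for (int i = 0; i < K; i++)     /* P0 waits k times on S0              */
        sem_wait(&s0);
    printf("P0 starts: all %d prerequisites are done\n", K);
    return NULL;
}

int main(void)
{
    pthread_t t0, t[K];

    sem_init(&s0, 0, 0);            /* S0 starts at 0                      */

    pthread_create(&t0, NULL, p0, NULL);
    for (long i = 0; i < K; i++)
        pthread_create(&t[i], NULL, worker, (void *)(i + 1));

    pthread_join(t0, NULL);
    for (int i = 0; i < K; i++)
        pthread_join(t[i], NULL);

    sem_destroy(&s0);
    return 0;
}
```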
This is an interview question.
How can you detect whether a program is in deadlock? Are there tools that can do that on Linux/Unix systems?
My idea:
If a program makes no progress and its status is still running, it is deadlocked. But other causes can produce the same symptom. Open-source tools such as Valgrind (Helgrind) can detect that. Right?
If you suspect a deadlock, run ps aux | grep <exe name>; if the PROCESS STATE CODE in the output is D (uninterruptible sleep), the process may be deadlocked.
As @daijo explained, say you have two threads T1 and T2 and two critical sections, each protected by semaphores S1 and S2. If T1 acquires S1 and T2 acquires S2, and each then tries to acquire the other lock before relinquishing the one it already holds, this leads to a deadlock; on doing a ps aux | grep <exe name>, the process state code may show D (i.e. uninterruptible sleep).
Tools:
Valgrind, lockdep (Linux kernel utility)
Check this link on types of deadlocks and how to avoid them:
http://cmdlinelinux.blogspot.com/2014/01/linux-kernel-deadlocks-and-how-to-avoid.html
Edit: a D in the ps aux output "could" mean the process is in a deadlock; from this Red Hat doc:
Uninterruptible Sleep State
An uninterruptible sleep state is one that won't handle a signal right away. It will wake only as a result of a waited-upon resource becoming available or after a time-out occurs during that wait (if the time-out is specified when the process is put to sleep).
I would suggest you look at Helgrind: a thread error detector.
The simplest example of such a problem is as follows.
Imagine some shared resource R, which, for whatever reason, is guarded by two locks, L1 and L2, which must both be held when R is accessed.
Suppose a thread acquires L1, then L2, and proceeds to access R. The implication of this is that all threads in the program must acquire the two locks in the order first L1 then L2. Not doing so risks deadlock.
The deadlock could happen if two threads -- call them T1 and T2 -- both want to access R. Suppose T1 acquires L1 first, and T2 acquires L2 first. Then T1 tries to acquire L2, and T2 tries to acquire L1, but those locks are both already held. So T1 and T2 become deadlocked.
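For reference, a minimal version of that scenario with pthreads (the T1/T2 and L1/L2 names follow the text above). Without any tool it simply hangs; it is exactly the kind of inconsistent lock ordering that Helgrind's lock-order checking is meant to flag, so it also makes a handy test case for valgrind --tool=helgrind.

```c
/* lock_order_deadlock.c -- illustrative sketch; compile with -pthread.
 * T1 takes L1 then L2, T2 takes L2 then L1, so the program is expected
 * to hang. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t l1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t l2 = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&l1);    /* T1 acquires L1 first...                 */
    sleep(1);                   /* give T2 time to acquire L2              */
    pthread_mutex_lock(&l2);    /* ...then blocks here waiting for L2      */
    pthread_mutex_unlock(&l2);
    pthread_mutex_unlock(&l1);
    return NULL;
}

static void *t2(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&l2);    /* T2 acquires L2 first...                 */
    sleep(1);
    pthread_mutex_lock(&l1);    /* ...then blocks here waiting for L1      */
    pthread_mutex_unlock(&l1);
    pthread_mutex_unlock(&l2);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, t1, NULL);
    pthread_create(&b, NULL, t2, NULL);
    pthread_join(a, NULL);      /* never returns: T1 and T2 are deadlocked */
    pthread_join(b, NULL);
    puts("not reached");
    return 0;
}
```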
We have 3 tasks running at different priorities: A (120), B (110), C (100).
A takes a mutex semaphore with the Inversion Safe flag.
Task B does a semTake, which causes Task A's priority to be elevated to 110.
Later, task C does a semTake. Task A's priority is now 100.
At this point, A releases the semaphore and C grabs it.
We notice that A's priority did not go back down to its original priority of 120.
Shouldn't A's priority be restored right away?
Ideally, when the inherited priority level is lowered, it will be done in a step-wise fashion. As each dependency that caused the priority level to be bumped up is removed, the inherited priority level should drop down to the priority level of the highest remaining dependency.
For example: task A (priority 100, bumped up to 80) holds two mutexes (X and Y) that tasks B (priority 90) and C (priority 80) are respectively pending on. When task A gives up mutex Y to task C, we might expect its priority to drop to 90. When it finally gives up mutex X to task B, we would expect its priority level to drop back to 100.
Priority inheritance does not work that way in VxWorks.
How it works depends on the version of VxWorks you are using.
pre-VxWorks 6.0
The priority level remains "bumped up" until the task that holds the lock on the mutex semaphore gives up its last inversion-safe mutex semaphore.
Using the example from above: when task A gives up mutex Y to task C, its priority remains at 80. After it gives up mutex X to task B, its priority drops back to 100 (skipping 90).
Let's throw curve ball #1 into the mix: what if task A also had a lock on mutex Z while all this was going on, but no one was pending on Z? In that case, the priority level remains at 80 until Z is given up; then it drops back to 100.
Why do it this way? It's simple, and for most cases it is good enough. However, it does mean that when "curve ball #1" comes into play, the priority remains elevated for longer than necessary.
VxWorks 6.0+
The priority level now remains elevated until the task that holds the lock on the mutex semaphore gives up its last inversion-safe mutex that contributed to raising the priority level.
This improvement avoids the "curve ball #1" problem. It does have its own limitations: for example, if task B and/or task C times out while waiting for task A to give up the semaphores, task A's priority level does not get recalculated until it gives up the semaphore.
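One way to see which behaviour your VxWorks version exhibits is to rebuild the X/Y example and print task A's priority after each semGive. The following is only a sketch of kernel-task code; the task names, priorities, delays and stack sizes are made up, and the expected values in the comments follow the description above.

```c
/* Illustrative VxWorks kernel-task sketch of the X/Y example. */
#include <vxWorks.h>
#include <semLib.h>
#include <taskLib.h>
#include <stdio.h>

static SEM_ID mutexX;
static SEM_ID mutexY;

static void taskA(void)
{
    int prio;

    semTake(mutexX, WAIT_FOREVER);
    semTake(mutexY, WAIT_FOREVER);

    taskDelay(60);                      /* let B and C pend and bump A      */
    taskPriorityGet(taskIdSelf(), &prio);
    printf("A holds X and Y: priority %d\n", prio);  /* expect 80           */

    semGive(mutexY);                    /* hand Y to C                      */
    taskPriorityGet(taskIdSelf(), &prio);
    printf("A gave up Y: priority %d\n", prio);  /* 80 pre-6.0, 90 on 6.0+  */

    semGive(mutexX);                    /* hand X to B                      */
    taskPriorityGet(taskIdSelf(), &prio);
    printf("A gave up X: priority %d\n", prio);  /* back to 100             */
}

static void taskB(void) { semTake(mutexX, WAIT_FOREVER); semGive(mutexX); }
static void taskC(void) { semTake(mutexY, WAIT_FOREVER); semGive(mutexY); }

void inversionDemo(void)
{
    mutexX = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
    mutexY = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);

    taskSpawn("tA", 100, 0, 8192, (FUNCPTR)taskA, 0,0,0,0,0,0,0,0,0,0);
    taskDelay(10);                      /* let A run and take X and Y       */
    taskSpawn("tB",  90, 0, 8192, (FUNCPTR)taskB, 0,0,0,0,0,0,0,0,0,0);
    taskSpawn("tC",  80, 0, 8192, (FUNCPTR)taskC, 0,0,0,0,0,0,0,0,0,0);
}
```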