Semaphore minimization - multithreading

I stumbled upon a problem in a multi-threading book in the library and I was wondering what I would need to do in order to minimize the number of semaphores.
What would you do in this situation?
Semaphores

Assume a process P0's execution depends on other k processes: P1,...,Pk.
You need only one semaphore to synchronize the processes and satisfy this single constraint.
The semaphore S0 is initialized to 0, and P0 waits on S0 k times (in other words, it tries to acquire k resources).
Each of the k processes P1, ..., Pk releases S0 when it finishes executing.
This will guarantee that P0 will start execution only after all the other k processes complete their execution (in any order and asynchronously).
In the link you provided, you need 4 semaphores; T1 does not need a semaphore because its execution does not depend on any other task.
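The scheme above can be sketched with a counting semaphore. This is a minimal illustration in Python (the same structure applies to POSIX `sem_init`/`sem_wait`/`sem_post` in C); the function and event-log names are made up for the demo.

```python
import threading

def run_with_one_semaphore(k):
    """P0 waits k times on S0; each of P1..Pk releases S0 once when done."""
    s0 = threading.Semaphore(0)          # S0 initialized to 0
    events = []
    log_lock = threading.Lock()          # protects the shared event log

    def worker(i):                       # stands in for process Pi
        with log_lock:
            events.append(("P", i))
        s0.release()                     # signal completion on exit

    def p0():
        for _ in range(k):               # acquire k "resources"
            s0.acquire()
        with log_lock:
            events.append(("P0", None))  # runs only after all k releases

    t0 = threading.Thread(target=p0)
    workers = [threading.Thread(target=worker, args=(i,))
               for i in range(1, k + 1)]
    t0.start()
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    t0.join()
    return events
```

However the k workers interleave, P0's entry is always last in the log, because each worker logs before releasing S0 and P0 logs only after all k acquires succeed.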

Related

Is it possible to implement a real FIFO mutex (in C)?

Is it possible to implement a real FIFO mutex, meaning that a requesting thread is guaranteed to be granted its request only if all "younger" threads have been granted their requests? Note that the order of granting the requests of two threads that requested at the same time is not relevant, meaning that the solution can grant them in either order.
If the answer is no, is it possible to guarantee the following condition:
Let x_t be the time in which thread x has requested.
x is younger by n than y if and only if y_t - x_t = n.
x is at least younger by n than y if and only if y_t - x_t >= n.
The condition is that there exists n so that a thread is granted its request only if all threads at least younger than it by n have been granted their request.
Note: my terminology may not be accurate. With "requesting" I mean requesting acquiring and locking the mutex. With "granting" I mean locking the mutex by the specified thread.
Q: What does "in C" mean? You can do anything in C if you're willing to write the code.*
Are you asking whether a fair mutex can be provided by an operating system? That's almost trivially easy. The OS already must have a container for each mutex in which it stores the IDs of all of the threads that are waiting for the mutex. All you have to do to ensure fairness is to change the scheduling strategy to always wake the thread whose ID has been in the container longer than any other when the mutex is released.
Are you asking whether you can implement a fair mutex in application code, by using one or more OS-provided unfair mutexes? That is possible, but it won't be as clean as an OS-provided fair mutex. One approach would be to have a pool of condition variables (CVs), and have each different thread use a different CV to await its turn to enter the mutex. When a thread tries and fails to acquire the mutex, it would grab a CV from the pool, put a pointer to the CV into a FIFO queue, and then it would wait on the CV.
Whenever a thread releases the mutex, it would signal the CV that's been waiting longest in the queue. The thread that was awakened would then return its CV to the pool and enter the mutex.
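The pool-of-CVs idea can be sketched as follows, in Python for brevity (the same shape works with `pthread_mutex_t` plus `pthread_cond_t` in C). The class and helper names are invented for this sketch, and it is illustrative rather than production code:

```python
import threading
from collections import deque

class FifoMutex:
    """A FIFO ("fair") mutex built from one unfair lock plus a per-waiter
    condition variable, handed off in strict queue order."""

    def __init__(self):
        self._lock = threading.Lock()   # protects the internal state
        self._held = False
        self._queue = deque()           # FIFO queue of [cv, granted] entries

    def acquire(self):
        with self._lock:
            if not self._held and not self._queue:
                self._held = True       # uncontended fast path
                return
            me = [threading.Condition(self._lock), False]
            self._queue.append(me)      # join the back of the line
            while not me[1]:            # guard against spurious wakeups
                me[0].wait()
            # _held stays True: the releaser handed the mutex directly to us

    def release(self):
        with self._lock:
            if self._queue:
                head = self._queue.popleft()  # longest-waiting thread
                head[1] = True
                head[0].notify()              # wake exactly that thread
            else:
                self._held = False

def count_with(mutex, n_threads=4, iters=500):
    """Exercise the mutex: concurrent increments of a shared counter."""
    total = [0]
    def work():
        for _ in range(iters):
            mutex.acquire()
            total[0] += 1               # critical section
            mutex.release()
    ts = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return total[0]
```

The key design choice is the direct handoff in `release`: the mutex is passed to the head of the queue without ever becoming free, so a late-arriving thread cannot barge in front of older waiters.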
* But see Greenspun's Tenth Rule.

Big-O of a multi-threading project

Let's suppose that I create a project with 2 threads.
Their time complexities are O(n!) and O(n) respectively, and they run at the same time.
When one of them returns what I want, both of them stop.
With that said, it would make sense that the complexity of the algorithm is O(n), even though one of the threads has a complexity of O(n!). Am I right?
P.S. I did my research but none of the answers serve my need, since all of them talk about a problem that is cut in half (O(n/2) per thread instead of O(n) with one thread), while I want to start solving 2 problems at once but both stop when the first one is done.
The analysis of this needs to be more careful.
The thread scheduler may not guarantee that all threads will get a "fair" amount of execution time. Imagine two threads that are both counting up from 1, but the thread scheduler wakes thread A up for 1 step, then B for 1 step, then A for 2 steps, then B for 1 step, then A for 4 steps, and so on.
Thread A will do exponentially more work than thread B in this case, because the scheduler gives it exponentially more time. So if thread B signals thread A to stop after B counts up to n, thread A will have counted up to 2^n − 1 (the sum 1 + 2 + 4 + … + 2^(n−1)). The scheduler could be even more unfair, so A's running time cannot be bounded by any function of n.
Given that, if thread A chooses to terminate itself after n! operations, then its running time can only be bounded by O(n!), because we can't guarantee that thread B will have completed its n operations and sent the termination signal within that time.
Now suppose the thread scheduler does guarantee that one thread is never favoured over another by more than some constant factor. In this case, the algorithm in thread B will send a signal to thread A after thread B completes O(n) steps. Since thread A can only complete O(n) steps in the same time (otherwise it would be favoured over thread B by more than a constant factor), then thread A will terminate in O(n) time.
That said, the fact that the algorithm in thread A checks for a signal and terminates when it receives one means that O(n!) cannot be derived as a tight upper bound just by analyzing thread A in isolation; its running time also depends on when the signal arrives. So, at least, there is no contradiction.
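The "check a stop signal each iteration" structure the answer relies on can be sketched like this (Python for brevity; the function names and the choice of a trivial counting workload are made up for the illustration):

```python
import threading
import math

def race(n):
    """Run two searches concurrently; the first to finish signals the
    other to stop. 'slow' plays the O(n!) thread, 'fast' the O(n) one."""
    stop = threading.Event()
    result = {}

    def fast():                          # O(n) work
        for _ in range(n):
            if stop.is_set():            # someone else already won
                return
        result.setdefault("winner", "fast")
        stop.set()                       # tell the other thread to quit

    def slow():                          # up to n! steps, but checks the signal
        for _ in range(math.factorial(n)):
            if stop.is_set():            # terminate early when signalled
                return
        result.setdefault("winner", "slow")
        stop.set()

    ts = [threading.Thread(target=fast), threading.Thread(target=slow)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return result
```

Note that which thread wins, and how many steps the slow thread completes before noticing the signal, are exactly the scheduler-dependent quantities the analysis above is about.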

Is this an example of a livelock or deadlock or starvation?

Scheduling Scheme : Preemptive Priority Scheduling
Situation :
A process L (low priority) acquires a spinlock on a resource R. While still in the critical section, L gets preempted by the arrival of another process, H (higher priority), in the ready queue.
However, H also needs resource R, so it tries to acquire the spinlock and goes into a busy wait. Because spinlocks are used, H never actually blocks: it is always in the Running or Ready state (the latter if an even higher-priority process arrives in the ready queue), preventing L, or any process with a priority lower than H's, from ever executing.
A) All processes with priority less than H can be considered to be under Starvation
B) All processes with priority less than H as well as the process H, can be considered to be in a deadlock. [But, then don't the processes have to be in Wait state for the system to be considered to be in a deadlock?]
C) All processes with priority less than H as well as the process H, can be considered to be in a livelock.[But, then only the state of H changes constantly, all the low priority process remain in just the Ready state. Don't the state of all processes need to change (as part of a spin lock) continuously if the system in livelock?]
D) H alone can be considered to be in livelock, all lower priority processes are just under starvation, not in livelock.
E) H will not progress, but cannot be considered to be in livelock. All lower priority processes are just under starvation, not in livelock.
Which of the above statements are correct? Can you explain?
This is not a livelock, because the definition of livelock requires that the "states of the processes involved in the livelock constantly change with regard to one another", and here the states effectively do not change.
The first process can be considered to be under processor starvation: if there were an additional processor, it could run on it, eventually release the lock, and let the second process proceed.
The situation can also be viewed as a deadlock, with two resources in the resource graph and two processes trying to acquire them in opposite orders: the first process owns the lock and needs the processor to proceed, while the second process owns the processor and needs the lock to proceed.

About deadlock in Linux and Windows

Assume you have two processes, P1 and P2. P1 has a high priority, P2 has a low priority. P1 and P2 have one shared semaphore (i.e., they both carry out waits and posts on the same semaphore). The processes can be interleaved in any arbitrary order (e.g. P2 could be started before P1).
Briefly explain whether the processes could deadlock when:
ii. both processes run on a Linux system as time sharing tasks
iii. both processes run on a Windows 7 system as variable tasks
iv. both processes run on a Windows 7 system as real-time tasks.
I think P1 and P2 can only result in priority inversion. According to one of the requirements for deadlock (circular wait: there is a circular chain of two or more processes, each waiting for a resource held by the next), priority inversion is not the same as deadlock. Besides, P1 and P2 share only one semaphore, so there can be no circular chain, and they will never deadlock. Therefore, the answer is no in all cases.
Is that correct? If not, then what's the answer?
You are correct, no deadlock is possible with only one semaphore.
Deadlock between two processes can happen only if P1 holds a resource needed by P2 while requiring a resource held by P2. Then P1 can't proceed until P2 releases its resource, and P2 can't proceed until P1 releases its resource: both are stuck waiting for each other, neither letting the other move forward. As you already mentioned, the circular-wait condition can't be fulfilled with one semaphore.
Also, P1 waiting for P2 to free a resource isn't priority inversion by itself. Priority inversion happens when some process Px has a priority between P1's and P2's: P1 waits for P2 to free a resource, but P2 cannot run because the higher-priority Px occupies the CPU. So P1 effectively waits on Px, which has a lower priority than P1 and shares no resources with it.
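The circular-wait pattern that one semaphore cannot produce needs two resources acquired in opposite orders. Here is a sketch of that pattern in Python (the names are invented for the demo); timed acquires stand in for the indefinite blocking of a real deadlock so the program terminates:

```python
import threading

def opposite_order_demo(timeout=0.2):
    """Two locks acquired in opposite orders: the classic circular wait.
    Timed acquires let the demo terminate instead of hanging forever."""
    a, b = threading.Lock(), threading.Lock()
    barrier = threading.Barrier(2)   # ensure both threads hold their first lock
    got = {}

    def p1():
        with a:                      # P1 holds resource A...
            barrier.wait()
            # ...and needs B; without a timeout this would block forever
            got["p1"] = b.acquire(timeout=timeout)
            if got["p1"]:
                b.release()
            barrier.wait()           # keep holding A until both sides recorded

    def p2():
        with b:                      # P2 holds resource B...
            barrier.wait()
            got["p2"] = a.acquire(timeout=timeout)  # ...and needs A
            if got["p2"]:
                a.release()
            barrier.wait()

    t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
    t1.start(); t2.start(); t1.join(); t2.join()
    return got
```

Both timed acquires fail, because each thread keeps its first lock until the other has recorded its result: that mutual blocking is the circular wait. With a single semaphore there is only one resource, so this cycle cannot form.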

How to detect and find out a program is in deadlock?

This is an interview question.
How to detect and find out if a program is in deadlock? Are there some tools that can be used to do that on Linux/Unix systems?
My idea:
If a program makes no progress while its status is still "running", it may be deadlocked. But other causes can produce the same symptom. Open-source tools such as Valgrind (specifically its Helgrind tool) can detect this. Right?
If you suspect a deadlock, run ps aux | grep <exe name>; if the PROCESS STATE CODE in the output is D (uninterruptible sleep), the process may be deadlocked.
As @daijo explained: say you have two threads T1 and T2, and two critical sections protected by semaphores S1 and S2 respectively. If T1 acquires S1 and T2 acquires S2, and then each tries to acquire the other lock before relinquishing the one it already holds, this will lead to a deadlock, and ps aux | grep <exe name> may show the process state code D (uninterruptible sleep).
Tools:
Valgrind (Helgrind), Lockdep (a Linux kernel utility)
Check this link on types of deadlocks and how to avoid them :
http://cmdlinelinux.blogspot.com/2014/01/linux-kernel-deadlocks-and-how-to-avoid.html
Edit: a D in the ps aux output "could" mean the process is in a deadlock; from this Red Hat doc:
Uninterruptible Sleep State
An uninterruptible sleep state is one that won't handle a signal right away. It will wake only as a result of a waited-upon resource becoming available or after a time-out occurs during that wait (if the time-out is specified when the process is put to sleep).
I would suggest you look at Helgrind: a thread error detector.
The simplest example of such a problem is as follows.
Imagine some shared resource R, which, for whatever reason, is guarded by two locks, L1 and L2, which must both be held when R is accessed.
Suppose a thread acquires L1, then L2, and proceeds to access R. The implication of this is that all threads in the program must acquire the two locks in the order first L1 then L2. Not doing so risks deadlock.
The deadlock could happen if two threads -- call them T1 and T2 -- both want to access R. Suppose T1 acquires L1 first, and T2 acquires L2 first. Then T1 tries to acquire L2, and T2 tries to acquire L1, but those locks are both already held. So T1 and T2 become deadlocked.
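The fix Helgrind's documentation implies is the lock-ordering discipline stated above: every thread takes L1 before L2. A minimal sketch of the corrected pattern (Python for brevity; names invented for the demo):

```python
import threading

def access_r_safely(iters=2000):
    """Both threads take L1 then L2, in that order, before touching R.
    A consistent acquisition order means the wait-for graph can never
    contain a cycle, so this cannot deadlock."""
    l1, l2 = threading.Lock(), threading.Lock()
    r = [0]                      # the shared resource R

    def worker():
        for _ in range(iters):
            with l1:             # always L1 first...
                with l2:         # ...then L2; never the reverse
                    r[0] += 1    # access R with both locks held

    ts = [threading.Thread(target=worker) for _ in range(2)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return r[0]
```

Running this under Helgrind reports no lock-order violation, whereas the T1/T2 version above would be flagged even on runs where the deadlock happens not to trigger.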
