I'm trying to learn about mutexes, semaphores and critical sections, and I'm uncertain about some things with semaphores. Is a semaphore the same as a critical section? The usual pattern for using a semaphore from semaphore.h is:
sem_t m;
sem_init(&m, 0, X); // initialize semaphore to X; what should X be?
sem_wait(&m);
// critical section here
sem_post(&m);
So my question really is: is the "// critical section here" part actually a critical section?
Semaphores are tools used to protect critical sections: to ensure that only one process or thread executes the critical section at a time.
In your example, the first process to execute sem_wait(&m) gets to execute its copy of the critical section; any other process that tries to execute its corresponding sem_wait will be blocked until our first process finishes its CS by executing sem_post. At that point, some other call to sem_wait will return, starting the process over again.
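To answer the "what should X be?" comment: initialize the count to 1 if you want mutual exclusion. Here is a minimal sketch (the worker and counter names are mine, not from your snippet) showing the semaphore used exactly that way:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t m;
long counter = 0;                   /* shared data the critical section protects */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&m);               /* enter the critical section */
        counter++;                  /* only one thread at a time gets here */
        sem_post(&m);               /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    sem_init(&m, 0, 1);             /* X = 1: the semaphore acts as a mutex */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with X = 1 */
    sem_destroy(&m);
    return 0;
}

Build with -pthread. If you initialize with X greater than 1, more than one thread can be between sem_wait and sem_post at once and increments can be lost, which is exactly why that region is only a critical section when X is 1.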
Recently, in an interview I gave, we were discussing critical sections.
The question asked was "what happens when we execute fork inside critical section? Will the resulting child process also execute the critical section simultaneously?"
We discussed the following possibilities:
Yes, both might execute the critical section simultaneously
fork() system call might be blocking for child process, allowing only parent to execute the critical section.
Compiler may be intelligent enough to identify this problem and might throw compilation error.
Unfortunately, I could not find more details about this on the internet. TIA.
Edited:
Adding the pseudocode for reference:
semaphore s;
s.wait(); // lock
/* critical-section */
pid = fork(); /* what will happen here in child/parent process? */
s.signal(); // unlock
For Linux semaphores, the second parameter of sem_init determines if it's a cross-process semaphore. You place those in shared memory, which is inherited by fork.
fork does not try to check existing semaphores, nor does it try to adjust the semaphore count. Semaphores can have counts >1, and will allow that many running threads. So a count of 2 would allow two threads to run - fork isn't going to guess.
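Here is a minimal sketch of that cross-process setup (Linux; error checking omitted; the mmap layout is my own assumption, not something from your pseudocode):

#define _DEFAULT_SOURCE
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Put the semaphore in anonymous shared memory so the parent and the
       forked child share the same object, including its count. */
    sem_t *s = mmap(NULL, sizeof(sem_t), PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    sem_init(s, 1, 1);                 /* pshared = 1: usable across processes */

    sem_wait(s);                       /* count goes 1 -> 0 */
    pid_t pid = fork();
    /* Both processes resume here, inside the "critical section".  fork()
       did not adjust the count: it is still 0, and the semaphore has no
       record that a second process now exists. */
    printf("%s is in the critical section\n", pid == 0 ? "child" : "parent");
    if (pid != 0) {                    /* only the parent releases the semaphore; */
        sem_post(s);                   /* if the child posted too, the count would */
        wait(NULL);                    /* end up above its initial value of 1      */
    }
    return 0;
}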
[edit] The old answer below assumed a Linux futex, which is more like a critical section.
"The" critical section is misleading. After the fork, both processes have their own critical sections. As a result, none of your 3 options apply.
what happens when we execute fork inside critical section? Will the resulting child process also execute the critical section simultaneously?
Yes, both might execute the critical section simultaneously
fork() system call might be blocking for child process, allowing only parent to execute the critical section.
Compiler may be intelligent enough to identify this problem and might throw compilation error.
Unfortunately, I could not find more details about this on the internet. TIA.
Any of those and more is possible in principle, but neither (2) nor (3) is implemented by any system I know. In particular, since you tagged Linux, GLibc's fork() doesn't have any special provision for interaction with semaphores, and GCC will not reject code on the grounds you suggest. (There is a matter of the (System V) semaphore adjustment, but that's not directly relevant.)
But (1) is not completely correct, either. It is true that if the fork() is successful then the child will start execution by returning from the fork into the critical section. Nothing specifically prevents that from happening while the parent is still inside the critical section itself, so it may be that both run in the critical section at the same time. On the other hand, it is possible that one of the two resulting processes does not get scheduled to actually execute any instructions until a time that happens to be after the other has exited the critical section. That would look a lot like (2) even though technically, neither process was blocked.
That is barely the tip of the iceberg of issues involved in such a situation, however.
OK. The example here uses the pthread library in C.
In a textbook I came across the following code:
//for thread 2
pthread_mutex_lock(&lock);
should_wake_up = 1;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&lock);
This code works fine. I just wonder whether the following code will also work?
//for thread 2
pthread_mutex_lock(&lock);
should_wake_up = 1;
pthread_mutex_unlock(&lock);
pthread_cond_signal(&cond); // signal the condition variable, but the lock is not held
What are the pros and cons of this second version?
PS. Suppose the cooperating thread has the code:
//for thread 1
pthread_mutex_lock(&lock);
while(!should_wake_up)
pthread_cond_wait(&cond, &lock);
pthread_mutex_unlock(&lock);
PS2. I came across another question which points out that if we don't want the signal to be lost, we must hold the lock to make sure that the associated predicate (in this case, should_wake_up) cannot change while the lock is held in thread 1. That does not seem to be the issue here. Link to the post: signal on condition variable without holding lock. I think his issue is that he forgot about locking, but my question is different.
For normal usage, you can unlock the mutex before signalling a condition variable.
The mutex protects the shared state (in this case the should_wake_up flag). Provided the mutex is locked when modifying the shared state, and when checking the shared state, pthread_cond_signal can be called without the mutex locked and everything will work as expected.
Under most circumstances, I would recommend calling pthread_cond_signal after calling pthread_mutex_unlock as a slight performance improvement. If you call pthread_cond_signal before pthread_mutex_unlock then the waiting thread can be woken by the signal before the mutex is unlocked, so the waiting thread then has to go back to sleep, as it blocks on the mutex that is still held by the signalling thread.
As far as I understand, a mutex is used to lock a critical section so that no other thread can access it while one thread is already using it. So a mutex prevents multiple threads from using or changing the data at the same time. But a semaphore allows N threads to enter the critical section and starts blocking from thread N+1. Won't the N threads try to change the data at the same time while they are inside the critical section?
The answer is yes: with a count of N you are violating the concept of a critical section. The N threads/LWPs will all be pounding away at the same data at the same time, producing undefined behavior.
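To see it concretely, here is a small sketch (my own toy example, not code from the question) where the semaphore is initialized to 2 and both threads sit inside the "critical section" at the same time:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

sem_t s;

void *worker(void *name) {
    sem_wait(&s);                      /* both threads get past this: count starts at 2 */
    printf("%s is inside\n", (char *)name);
    sleep(1);                          /* both are in here together; touching shared
                                          data at this point would be a data race */
    sem_post(&s);
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 2);                /* N = 2: admits two threads at once */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&s);
    return 0;
}

If you want mutual exclusion, initialize the count to 1; a count of N is for resources that really can be used by N threads at once (e.g. a pool of N connections), not for protecting one shared piece of data.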
Mutexes are used to protect critical sections. Let's say a down has already been done on a mutex, and while the thread that did it is in the CS, 10 other threads are right behind it and do a down on the mutex, putting themselves to sleep. When the first thread exits the critical section and does an up on the mutex, do all 10 threads wake up and just resume what they were about to do, namely enter the critical section? Wouldn't that mean that all 10 might end up in the critical section at the same time?
No, only one thread will wake up and take ownership of the mutex. The rest of them will remain asleep. Which thread is the one that wakes up is usually nondeterministic.
The above is a generalisation and the details of implementation will be different in each system. For example, in Java compare Object#notify() and Object#notifyAll().
I always get confused. Would someone explain what Reentrant means in different contexts? And why would you want to use reentrant vs. non-reentrant?
Take the pthread (POSIX) locking primitives: are they re-entrant or not? What pitfalls should be avoided when using them?
Is mutex re-entrant?
Re-entrant locking
A reentrant lock is one where a process can claim the lock multiple times without blocking on itself. It's useful in situations where it's not easy to keep track of whether you've already grabbed a lock. If a lock is non re-entrant you could grab the lock, then block when you go to grab it again, effectively deadlocking your own process.
Reentrancy in general is a property of code where it has no central mutable state that could be corrupted if the code was called while it is executing. Such a call could be made by another thread, or it could be made recursively by an execution path originating from within the code itself.
If the code relies on shared state that could be updated in the middle of its execution it is not re-entrant, at least not if that update could break it.
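A small illustration (my own hypothetical functions, not any API discussed here): the first version keeps its running total in hidden static state and is not re-entrant; the second takes the state from the caller and is.

#include <stdio.h>

/* Not re-entrant: the total is central mutable state shared by every caller.
   If the function is entered again before a call finishes (from another
   thread, recursively, or from a signal handler), that state can be
   corrupted. */
int accumulate_static(int x) {
    static int total = 0;
    total += x;
    return total;
}

/* Re-entrant: the state belongs to the caller and is passed in explicitly,
   so nested or concurrent calls cannot interfere with each other. */
int accumulate(int *total, int x) {
    *total += x;
    return *total;
}

int main(void) {
    int my_total = 0;
    printf("%d\n", accumulate(&my_total, 5));   /* 5  */
    printf("%d\n", accumulate(&my_total, 7));   /* 12 */
    return 0;
}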
A use case for re-entrant locking
A (somewhat generic and contrived) example of an application for a re-entrant lock might be:
You have some computation involving an algorithm that traverses a graph (perhaps with cycles in it). A traversal may visit the same node more than once due to the cycles or due to multiple paths to the same node.
The data structure is subject to concurrent access and could be updated for some reason, perhaps by another thread. You need to be able to lock individual nodes to deal with potential data corruption due to race conditions. For some reason (perhaps performance) you don't want to globally lock the whole data structure.
Your computation can't retain complete information on what nodes you've visited, or you're using a data structure that doesn't allow 'have I been here before' questions to be answered quickly. An example of this situation would be a simple implementation of Dijkstra's algorithm with a priority queue implemented as a binary heap or a breadth-first search using a simple linked list as a queue. In these cases, scanning the queue for existing insertions is O(N) and you may not want to do it on every iteration.
In this situation, keeping track of what locks you've already acquired is expensive. Assuming you want to do the locking at the node level a re-entrant locking mechanism alleviates the need to tell whether you've visited a node before. You can just blindly lock the node, perhaps unlocking it after you pop it off the queue.
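Here is a sketch of that "blindly lock the node" idea (my own toy graph and names; the other threads that make the per-node locks necessary are left out for brevity). Node 3 is reachable by two paths, so it gets enqueued, and therefore locked, twice by the same thread; a non-recursive mutex would deadlock on that second lock, while a recursive one just increments its count:

#include <pthread.h>
#include <stdio.h>

#define NODES 4
#define QCAP  16

/* Small DAG with two paths to node 3: 0 -> 1 -> 3 and 0 -> 2 -> 3 */
static const int adj[NODES][NODES] = {
    {0, 1, 1, 0},
    {0, 0, 0, 1},
    {0, 0, 0, 1},
    {0, 0, 0, 0},
};

static pthread_mutex_t node_lock[NODES];     /* one recursive lock per node */

int main(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    for (int i = 0; i < NODES; i++)
        pthread_mutex_init(&node_lock[i], &attr);

    int queue[QCAP], head = 0, tail = 0;

    /* Lock a node whenever it is enqueued, without checking whether this
       thread already holds its lock; unlock once per dequeue, so the lock
       and unlock counts stay balanced. */
    pthread_mutex_lock(&node_lock[0]);
    queue[tail++] = 0;

    while (head < tail) {
        int n = queue[head++];
        printf("visiting node %d\n", n);              /* "work" on the node */
        for (int m = 0; m < NODES; m++) {
            if (adj[n][m] && tail < QCAP) {
                pthread_mutex_lock(&node_lock[m]);    /* may be a re-lock */
                queue[tail++] = m;
            }
        }
        pthread_mutex_unlock(&node_lock[n]);
    }
    return 0;
}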
Re-entrant mutexes
A simple mutex is not re-entrant: only one thread can be in the critical section at a given time, and if you grab the mutex and then try to grab it again, a simple mutex doesn't have enough information to tell that you were the one holding it. To do this recursively you need a mechanism where each thread has a token, so you can tell who has grabbed the mutex. This makes the mutex mechanism somewhat more expensive, so you may not want to do it in all situations.
IIRC the POSIX threads API does offer the option of re-entrant and non re-entrant mutexes.
A re-entrant lock lets you write a method M that puts a lock on resource A and then call M recursively or from code that already holds a lock on A.
With a non re-entrant lock, you would need 2 versions of M, one that locks and one that doesn't, and additional logic to call the right one.
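A sketch of that single-version M in C (the names are mine), using a recursive pthread mutex so the one M can be called both from plain code and from code, including M itself, that already holds the lock on A:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t a_lock;              /* protects resource A */

/* One version of M is enough: if the caller (or a recursive call) already
   holds a_lock, locking it again just increases the ownership count
   instead of deadlocking. */
void M(int depth) {
    pthread_mutex_lock(&a_lock);
    printf("working on A at depth %d\n", depth);
    if (depth > 0)
        M(depth - 1);                /* re-locks a_lock in the same thread */
    pthread_mutex_unlock(&a_lock);
}

int main(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&a_lock, &attr);

    M(2);                            /* called without the lock held... */

    pthread_mutex_lock(&a_lock);     /* ...and with it already held */
    M(2);
    pthread_mutex_unlock(&a_lock);
    return 0;
}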
A reentrant lock is very well described in this tutorial.
The example in the tutorial is far less contrived than the one in the answer about traversing a graph. A reentrant lock is useful in very simple cases too.
The what and why of a recursive mutex need not be as complicated as the accepted answer makes it.
I would like to write down my understanding after some digging around the net.
First, you should realize that when talking about mutexes, multi-threading is necessarily involved too (a mutex is used for synchronization; I don't need a mutex if I only have one thread in my program).
Secondly, you should know the difference between a normal mutex and a recursive mutex.
Quoted from APUE:
(A recursive mutex is) a mutex type that allows the same thread to lock it multiple times without first unlocking it.
The key difference is that, within the same thread, relocking a recursive lock does not lead to deadlock, nor does it block the thread.
Does this mean that a recursive lock never causes deadlock?
No, it can still cause deadlock, just like a normal mutex, if you have locked it in one thread without unlocking it and then try to lock it in another thread.
Let's see some code as proof.
normal mutex with deadlock
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock;

void * func1(void *arg){
    printf("thread1\n");
    pthread_mutex_lock(&lock);   // locked and never unlocked
    printf("thread1 hey hey\n");
    return NULL;
}

void * func2(void *arg){
    printf("thread2\n");
    pthread_mutex_lock(&lock);   // whichever thread locks second blocks here forever
    printf("thread2 hey hey\n");
    return NULL;
}

int main(){
    pthread_mutexattr_t lock_attr;
    int error;
    pthread_mutexattr_init(&lock_attr);
    // error = pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_RECURSIVE);
    error = pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_DEFAULT);
    if(error){
        perror(NULL);
    }
    pthread_mutex_init(&lock, &lock_attr);

    pthread_t t1, t2;
    pthread_create(&t1, NULL, func1, NULL);
    pthread_create(&t2, NULL, func2, NULL);
    pthread_join(t2, NULL);
}
output:
thread1
thread1 hey hey
thread2
common deadlock example, no problem.
recursive mutex with deadlock
Just uncomment this line
error = pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_RECURSIVE);
and comment out the other one.
output:
thread1
thread1 hey hey
thread2
Yes, a recursive mutex can also cause deadlock.
normal mutex, relock in the same thread
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t lock;

void func3(){
    printf("func3\n");
    pthread_mutex_lock(&lock);   // relock while this thread already holds the lock
    printf("func3 hey hey\n");
}

void * func1(void *arg){
    printf("thread1\n");
    pthread_mutex_lock(&lock);
    func3();
    printf("thread1 hey hey\n");
    return NULL;
}

void * func2(void *arg){
    printf("thread2\n");
    pthread_mutex_lock(&lock);   // blocks forever: the lock is never released by thread 1
    printf("thread2 hey hey\n");
    return NULL;
}

int main(){
    pthread_mutexattr_t lock_attr;
    int error;
    pthread_mutexattr_init(&lock_attr);
    // error = pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_RECURSIVE);
    error = pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_DEFAULT);
    if(error){
        perror(NULL);
    }
    pthread_mutex_init(&lock, &lock_attr);

    pthread_t t1, t2;
    pthread_create(&t1, NULL, func1, NULL);
    sleep(2);
    pthread_create(&t2, NULL, func2, NULL);
    pthread_join(t2, NULL);
}
output:
thread1
func3
thread2
Deadlock in thread t1, in func3.
(I use sleep(2) to make it easier to see that the deadlock is first caused by the relock in func3.)
recursive mutex, relock in the same thread
Again, uncomment the recursive mutex line and comment out the other line.
output:
thread1
func3
func3 hey hey
thread1 hey hey
thread2
Deadlock in thread t2, in func2. See? func3 finishes and exits; relocking does not block the thread or lead to deadlock.
So, last question: why do we need it?
For recursive functions (called in multi-threaded programs where you want to protect some resource/data).
E.g. you have a multi-threaded program and call a recursive function in thread A. There is some data you want to protect inside that recursive function, so you use a mutex. Execution of that function is sequential within thread A, so the recursion would definitely relock the mutex. Using a normal mutex causes a deadlock; the recursive mutex was invented to solve this.
See an example from the accepted answer
When to use recursive mutex?.
Wikipedia explains the recursive mutex very well and it is definitely worth a read: Wikipedia: Reentrant_mutex