POSIX mutex protocol - what exactly does this spec mean?

In this documentation of POSIX mutex protocols - http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutexattr_getprotocol.html# - we can read the following section:
While a thread is holding a mutex which has been initialized with the
PTHREAD_PRIO_INHERIT or PTHREAD_PRIO_PROTECT protocol attributes, it
shall not be subject to being moved to the tail of the scheduling
queue at its priority in the event that its original priority is
changed, such as by a call to sched_setparam(). Likewise, when a
thread unlocks a mutex that has been initialized with the
PTHREAD_PRIO_INHERIT or PTHREAD_PRIO_PROTECT protocol attributes, it
shall not be subject to being moved to the tail of the scheduling
queue at its priority in the event that its original priority is
changed.
This probably is a reference to this fragment:
If a thread whose policy or priority has been modified other than by
pthread_setschedprio() is a running thread or is runnable, it then
becomes the tail of the thread list for its new priority.
(source - http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_08 , SCHED_FIFO description)
English is not my first language, so I'm having a hard time understanding what exactly it says...
Does it mean that when a thread's priority is boosted (due to the inheritance or ceiling protocol) it is not placed at the tail of the queue for the new priority, but at the head? Or maybe this describes the situation of a priority change due to a call to sched_setparam() (or a similar function) by this thread itself or by another thread? Or maybe it is just a strange description of the fact that such a thread executes at the priority inherited from the mutex, so no changes to its original priority make any difference?
I tried searching for a different description of this behavior, but all specs are just copies of the original; some of them use slightly different words, but that makes no difference at all.
Any ideas?

The text is not wonderfully easy to untangle...
...but I agree with you, there is the general rule that changes to policy/priority for a thread cause it to be put to the back of the relevant priority queue, except where that change is made by pthread_setschedprio()...
...and then there are the exceptions to that rule.
...so, if a pthread is holding a mutex and its priority is changed to avoid priority inversion, it seems reasonable for the thread not to be moved to the back of its priority queue.
...not so obvious is what this means:
Likewise, when a thread unlocks a mutex that has been initialized with the PTHREAD_PRIO_INHERIT or PTHREAD_PRIO_PROTECT protocol attributes, it shall not be subject to being moved to the tail of the scheduling queue at its priority in the event that its original priority is changed.
...I think the key here is the word "original". I think this means that if the thread's real priority has been changed (explicitly by some scheduling function) but the thread has continued running, then a later mutex unlock is not required to worry about it. I think this is for efficiency... the mutex code has to worry about its own priority issues, but not any others.
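For concreteness, here is a minimal sketch (not from the spec, just an illustration of the API the quoted page documents) of initializing a mutex with the PTHREAD_PRIO_INHERIT protocol:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mutex;

int main(void)
{
    pthread_mutexattr_t attr;
    int rc;

    pthread_mutexattr_init(&attr);

    /* Request priority inheritance: a thread holding this mutex runs at
       the priority of the highest-priority thread blocked on it. */
    rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc != 0)
        fprintf(stderr, "pthread_mutexattr_setprotocol: %d\n", rc);

    pthread_mutex_init(&mutex, &attr);
    pthread_mutexattr_destroy(&attr);

    /* ... create threads that lock and unlock `mutex` ... */

    pthread_mutex_destroy(&mutex);
    return 0;
}

(PTHREAD_PRIO_PROTECT is set up the same way, except that the ceiling priority is given explicitly with pthread_mutexattr_setprioceiling().)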

Related

Can it be assumed that `pthread_cond_signal` will wake the signaled thread atomically with regard to the mutex bound to the condition variable?

Quoting POSIX:
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().
"If predictable scheduling behavior is required". This could/would hint that locking the mutex bound to the condition variable right before calling pthread_cond_signal() should guarantee that the signaled thread will be woken up before any other thread manages to lock this mutex. Is this correct?
We will see if any PThreads guru has a more comprehensive answer, but as far as I can see, at least in the Linux manpage, you do not get fully predictable behavior. What you do get is a guarantee that if two threads wait on the same condition variable, the higher-priority thread gets to go first (at least, that should be true on Linux if one thread is SCHED_OTHER and the other is real-time SCHED_FIFO). That holds if you lock the mutex before signalling (with reservations for errors after a quick read of the manpage).
See
https://linux.die.net/man/3/pthread_cond_signal
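To make the pattern under discussion concrete, here is a minimal sketch of signalling while holding the mutex (the data_ready flag and the function names are assumptions for the example, not from the question):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool data_ready = false;     /* hypothetical predicate */

/* Signalling thread: holds the mutex across the signal, the pattern the
   quoted passage suggests "if predictable scheduling behavior is required". */
void produce(void)
{
    pthread_mutex_lock(&mutex);
    data_ready = true;
    pthread_cond_signal(&cond);     /* signalled while the mutex is held */
    pthread_mutex_unlock(&mutex);
}

/* Waiting thread: the loop is still required, because being woken and
   reacquiring the mutex are not one atomic step - another thread may
   lock the mutex and consume the data in between. */
void consume(void)
{
    pthread_mutex_lock(&mutex);
    while (!data_ready)
        pthread_cond_wait(&cond, &mutex);
    data_ready = false;
    pthread_mutex_unlock(&mutex);
}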
No, there is no guarantee the signalled thread will be woken up. Worse, if in the signalling thread you have a sequence like:
while (run_again) {
    pthread_mutex_lock(&mutex);
    /* prepare data */
    pthread_mutex_unlock(&mutex);
    pthread_cond_broadcast(&cond);
}
there is a reasonable chance that control will never be passed to the other threads waiting on the mutex, because of logic in the scheduler. You can find some examples to play with in this answer.
No.
The best reference I have found regarding the predictability is this one:
https://austin-group-l.opengroup.narkive.com/lKcmfoRI/predictable-scheduling-behavior-in-pthread-cond-broadcast
Basically, people wanted to guard against the possibility that threads do not get a fair chance to run. Apparently, it is not a problem for most producer-consumer scenarios, and it does not apply to pthread_cond_broadcast either. I would say it is useful only in limited cases.
Cppreference.com actually notes that notifying while still holding the lock may be a pessimization:
https://en.cppreference.com/w/cpp/thread/condition_variable/notify_all

Thread scheduling in unix

In the RR scheduling policy, what will happen if a low-priority thread locks a mutex and is preempted by the scheduler because another high-priority thread is waiting?
Will the lock held by the low-priority thread also be released?
For example, consider 3 threads running in a process with priorities 10, 20 and 30 under the RR scheduling policy.
Now at a given point in time the low-priority thread 1 locks the mutex and is still executing; meanwhile the high-priority thread pops in and also waits on the mutex held by thread 1. Now thread 2 comes into the picture, which also needs the same mutex locked by thread 1.
As far as I know, per the scheduling algorithm, threads sleeping or waiting for a mutex, semaphore etc. are removed from the run queue, and the other ones, even those with low priority, are allowed to execute. Is this correct? If so, in the above example the high-priority threads ultimately wait for the completion of the low-priority thread, which doesn't make any sense.
Is this how the system works when threads are designed like I said above?
Or should the thread priorities be set in such a way that high-priority ones will not depend on low-priority ones' mutexes?
Also, can anyone please explain how scheduling works at the process level? How do we set the priority for a process?
Typically, scheduling and locks are unrelated in any aspect other than that a "waiting thread is not scheduled until it's finished waiting". It would be rather daft to have a mutex that "stops other threads from accessing my data", but only if the other thread has the same or lower priority than the current thread.
The phenomenon of "a low-priority thread holds a lock that a high-priority thread needs" is called priority inversion, and it's a well-known scenario in computer theory.
There are some schemes that temporarily raise the priority of a lock-holding thread, until it releases the lock, to the highest priority of the waiting threads (or to the priority of the first waiting thread if it's higher than the current thread, or some other variation on that theme). This is done to combat priority inversion - but it has other drawbacks too, so it's not implemented in all OSes/schedulers (after all, it affects threads other than the one that is waiting too).
Edit:
The point of a mutex (or other similar locks) is that it prevents two threads from accessing the same resources at once. For example, say we want to update five different variables with some pretty lengthy processing (complicated math, fetching data from a serial port or a network drive, or some such); if we only updated two of the variables, some other process using them would get an invalid result, so we clearly can't "let go" of the lock partway through.
The high priority thread simply has to wait for all five variables to be updated and for the low priority thread to release the lock.
There is no simple workaround the application can do to "fix" this problem - beyond not holding the locks longer than necessary, of course. [And it may be that we can actually fix the problem described above by performing the lengthy processing OUTSIDE of the lock, and only doing the final "store it in the 5 variables" with the lock held, as in the sketch below. This would reduce the potential time that the high priority thread has to wait, but if the system is really busy, it won't really fix the problem.]
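A minimal sketch of that restructuring (the compute() helper and the five shared variables are placeholders for the example above, not anyone's real code):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared[5];               /* the five variables from the example */

/* Stand-in for the lengthy processing (math, serial port, network...). */
static void compute(int out[5])
{
    for (int i = 0; i < 5; i++)
        out[i] = i * i;
}

void update(void)
{
    int tmp[5];

    compute(tmp);                   /* long work done WITHOUT the lock held */

    pthread_mutex_lock(&lock);      /* short critical section: just the store */
    for (int i = 0; i < 5; i++)
        shared[i] = tmp[i];
    pthread_mutex_unlock(&lock);
}

This keeps the readers' invariant (all five variables always consistent) while shrinking the window during which a higher-priority thread can be blocked.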
There are whole PhD thesis written on this subject, and I'm not a specialist on "how to write a scheduler" - I have a fair idea how one works, just like I know how the engine in my car works - but if someone gave me a bunch of suitable basic shapes of steel and aluminium, and the required tools/workspace and told me to build an engine, I doubt it would work great...Same with a scheduler - I know what the parts are called, but not how to build one.
If a high priority thread needs to wait on a low priority thread (mutex, semaphore, etc.), the low priority thread is temporarily elevated to the same priority as the high priority thread.
The high priority thread will not get the lock it is requesting until the low priority thread unlocks it.
To avoid this we can use a semaphore, which any other thread may post to release; with a mutex that is not possible, since only the owning thread may unlock it.

Create a Mutex with Priority Inheritance in C#

I have a static list of objects.
During the program, many threads are created.
Immediately after each thread is created, it creates a new object and adds it to the static list.
There is another thread in the program that is responsible for iterating over the static list.
Suppose that a thread with a low priority, 'A', is accessing the list, and another thread with a higher priority, 'C', also asks for access to it. It may happen (in a rare case indeed) that a thread with medium priority, 'B', which also exists in the system, will get the CPU time of 'A'.
So 'C' will wait for 'B', contrary to common sense.
How can I get a lock on the list without running into this priority inversion problem?
Can the function 'Lock()' help?
Thank you!
That is, at worst, a short term priority inversion problem. Unless, of course, the low priority thread A holds the lock for a very long time. Thread C can't make progress because it's waiting on the lock. As Hans Passant said in his answer, the thread scheduler detects this problem and boosts the priority of the lower-priority thread so that it can release the lock. The first MSDN link he posted explains it quite well.
If your low priority thread A holds the lock for a very long time (i.e. it's doing complex calculations on the list) and that's causing problems in your application, then you can do one of the following:
Increase the priority of thread A
Have thread A lock the list, get an item, unlock the list, and then process the item
Lock the list, make a copy, unlock the list, and process the copy.
some variation on or combination of the above
In any case, the problem isn't the lock. The problem is coding the program so that a high-priority thread can be left waiting for a long time on a data structure to which a lower-priority thread needs exclusive access.
Priority inversion is a very generic problem on an operating system that uses thread priority to pick the next thread to schedule. Like Windows. The operating system thread scheduler has specific countermeasures against it, artificially bumping the thread priority when it detects an inversion problem, so the low-priority thread is allowed to run and given an opportunity to release the lock. The MSDN page that describes the feature is here. An old KB article with more details is here.

Advantages of using condition variables over mutex

I was wondering what is the performance benefit of using condition variables over mutex locks in pthreads.
What I found is : "Without condition variables, the programmer would need to have threads continually polling (possibly in a critical section), to check if the condition is met. This can be very resource consuming since the thread would be continuously busy in this activity. A condition variable is a way to achieve the same goal without polling." (https://computing.llnl.gov/tutorials/pthreads)
But it also seems that mutex calls are blocking (unlike spin-locks). Hence if a thread (T1) fails to get a lock because some other thread (T2) has the lock, T1 is put to sleep by the OS, and is woken up only when T2 releases the lock and the OS gives T1 the lock. The thread T1 does not really poll to get the lock. From this description, it seems that there is no performance benefit of using condition variables. In either case, there is no polling involved. The OS anyway provides the benefit that the condition-variable paradigm can provide.
Can you please explain what actually happens?
A condition variable allows a thread to be signaled when something of interest to that thread occurs.
By itself, a mutex doesn't do this.
If you just need mutual exclusion, then condition variables don't do anything for you. However, if you need to know when something happens, then condition variables can help.
For example, if you have a queue of items to work on, you'll have a mutex to ensure the queue's internals are consistent when accessed by the various producer and consumer threads. However, when the queue is empty, how will a consumer thread know when something is in there for it to work on? Without something like a condition variable it would need to poll the queue, taking and releasing the mutex on each poll (otherwise a producer thread could never put something on the queue).
Using a condition variable lets the consumer find that when the queue is empty it can just wait on the condition variable indicating that the queue has had something put into it. No polling - that thread does nothing until a producer puts something in the queue, then signals the condition that the queue has a new item.
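A minimal sketch of that producer/consumer shape (using a bare counter as a stand-in for the queue's contents; the names are illustrative, not from the answer):

#include <pthread.h>

static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t qcond = PTHREAD_COND_INITIALIZER;
static int items = 0;               /* stand-in for the queue's length */

void producer_put(void)
{
    pthread_mutex_lock(&qlock);
    items++;                        /* enqueue an item */
    pthread_cond_signal(&qcond);    /* "the queue has a new item" */
    pthread_mutex_unlock(&qlock);
}

void consumer_get(void)
{
    pthread_mutex_lock(&qlock);
    while (items == 0)                      /* no polling: wait() sleeps, */
        pthread_cond_wait(&qcond, &qlock);  /* releasing the mutex meanwhile */
    items--;                        /* dequeue an item */
    pthread_mutex_unlock(&qlock);
}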
You're looking for too much overlap in two separate but related things: a mutex and a condition variable.
A common implementation approach for a mutex is to use a flag and a queue. The flag indicates whether the mutex is held by anyone (a single-count semaphore would work too), and the queue tracks which threads are in line waiting to acquire the mutex exclusively.
A condition variable is then implemented as another queue bolted onto that mutex. Threads that got in line to wait to acquire the mutex can—usually once they have acquired it—volunteer to get out of the front of the line and get into the condition queue instead. At this point, you have two separate sets of waiters:
Those waiting to acquire the mutex exclusively
Those waiting for the condition variable to be signaled
When a thread holding the mutex exclusively signals the condition variable, for which we'll assume for now that it's a singular signal (unleashing no more than one waiting thread) and not a broadcast (unleashing all the waiting threads), the first thread in the condition variable queue gets shunted back over into the front (usually) of the mutex queue. Once the thread currently holding the mutex—usually the thread that signaled the condition variable—relinquishes the mutex, the next thread in the mutex queue can acquire it. That next thread in line will have been the one that was at the head of the condition variable queue.
There are many complicated details that come into play, but this sketch should give you a feel for the structures and operations in play.
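As a rough illustration of that sketch, here are the conceptual shapes described above (declarations only; emphatically not a working implementation, and the names are invented):

/* Conceptual only. */
struct thread;                      /* opaque: a waiting thread */

struct wait_queue {
    struct thread *head, *tail;     /* threads in FIFO order */
};

struct my_mutex {
    int held;                       /* the flag: is the mutex owned? */
    struct wait_queue acquirers;    /* threads waiting to acquire it */
};

struct my_condvar {
    struct my_mutex *m;             /* the mutex this condition is bolted onto */
    struct wait_queue sleepers;     /* threads that volunteered to wait here */
};

/* signal() moves the first sleeper into m->acquirers; unlock() hands the
   mutex to the head of m->acquirers. */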
If you are looking for performance, then start reading about "non-blocking / non-locking" thread synchronization algorithms. They are based upon atomic operations, which gcc is kind enough to provide. Look up gcc atomic operations. Our tests showed we could increment a global value with multiple threads using atomic operations orders of magnitude faster than locking with a mutex. Here is some sample code that shows how to add items to and remove items from a linked list from multiple threads at the same time without locking.
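The answer's sample-code links are not reproduced here, but a minimal sketch of the kind of atomic increment it describes, using the GCC __atomic builtins, could look like this:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        __atomic_fetch_add(&counter, 1, __ATOMIC_SEQ_CST); /* lock-free add */
    return NULL;
}

int main(void)
{
    pthread_t t[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);

    printf("%ld\n", counter);       /* always 4000000: no lost updates */
    return 0;
}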
For sleeping and waking threads, signals are much faster than condition variables. You use pthread_kill to send the signal, and sigwait to put the thread to sleep. We tested this too, with the same kind of performance benefits. Here is some example code.
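The example-code link is likewise not reproduced, but a minimal sketch of that signal-based sleep/wake scheme (SIGUSR1 chosen arbitrarily for the illustration) could be:

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static sigset_t set;

static void *sleeper(void *arg)
{
    int sig;

    (void)arg;
    sigwait(&set, &sig);            /* sleep until SIGUSR1 arrives */
    printf("woken by signal %d\n", sig);
    return NULL;
}

int main(void)
{
    pthread_t t;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &set, NULL); /* block it so sigwait() owns it;
                                               if sent early, it stays pending */

    pthread_create(&t, NULL, sleeper, NULL);
    pthread_kill(t, SIGUSR1);       /* wake the sleeping thread */
    pthread_join(t, NULL);
    return 0;
}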

Conditional Variable vs Semaphore

When to use a semaphore and when to use a conditional variable?
Locks are used for mutual exclusion. When you want to ensure that a piece of code is atomic, put a lock around it. You could theoretically use a binary semaphore to do this, but that's a special case.
Semaphores and condition variables build on top of the mutual exclusion provided by locks and are used for providing synchronized access to shared resources. They can be used for similar purposes.
A condition variable is generally used to avoid busy waiting (looping repeatedly while checking a condition) while waiting for a resource to become available. For instance, if you have a thread (or multiple threads) that can't continue onward until a queue is empty, the busy waiting approach would be to just do something like:
// pseudocode
while (!queue.empty())
{
    sleep(1);
}
The problem with this is that you're wasting processor time by having this thread repeatedly check the condition. Why not instead have a synchronization variable that can be signaled to tell the thread that the resource is available?
// pseudocode
syncVar.lock.acquire();
while (!queue.empty())
{
    syncVar.wait();
}
// do stuff with queue
syncVar.lock.release();
Presumably, you'll have a thread somewhere else that is pulling things out of the queue. When the queue is empty, it can call syncVar.signal() to wake up a random thread that is sitting asleep on syncVar.wait() (or there's usually also a signalAll() or broadcast() method to wake up all the threads that are waiting).
I generally use synchronization variables like this when I have one or more threads waiting on a single particular condition (e.g. for the queue to be empty).
Semaphores can be used similarly, but I think they're better used when you have a shared resource that can be available and unavailable based on some integer number of available things. Semaphores are good for producer/consumer situations where producers are allocating resources and consumers are consuming them.
Think about if you had a soda vending machine. There's only one soda machine and it's a shared resource. You have one thread that's a vendor (producer) who is responsible for keeping the machine stocked and N threads that are buyers (consumers) who want to get sodas out of the machine. The number of sodas in the machine is the integer value that will drive our semaphore.
Every buyer (consumer) thread that comes to the soda machine calls the semaphore's down() method to take a soda. This grabs a soda from the machine and decrements the count of available sodas by 1. If there are sodas available, the code just keeps running past the down() statement without a problem. If no sodas are available, the thread sleeps there, waiting to be notified when soda is made available again (when there are more sodas in the machine).
The vendor (producer) thread would essentially be waiting for the soda machine to be empty. The vendor gets notified when the last soda is taken from the machine (and one or more consumers are potentially waiting to get sodas out). The vendor restocks the soda machine with the semaphore's up() method; the available number of sodas is incremented each time, and the waiting consumer threads thereby get notified that more soda is available.
The wait() and signal() methods of a synchronization variable tend to be hidden within the down() and up() operations of the semaphore.
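In POSIX terms, down() and up() correspond to sem_wait() and sem_post(); a minimal sketch of the soda machine (thread creation omitted, names invented for the example):

#include <semaphore.h>

static sem_t sodas;                 /* counts the sodas in the machine */

void buyer(void)                    /* consumer: down() */
{
    sem_wait(&sodas);               /* take a soda, or sleep if there are none */
    /* ... enjoy the soda ... */
}

void vendor(int n)                  /* producer: up() */
{
    for (int i = 0; i < n; i++)
        sem_post(&sodas);           /* restock: each post may wake a waiter */
}

int main(void)
{
    sem_init(&sodas, 0, 0);         /* the machine starts empty */
    /* ... spawn vendor and buyer threads ... */
    sem_destroy(&sodas);
    return 0;
}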
Certainly there's overlap between the two choices. There are many scenarios where either a semaphore or a condition variable (or a set of condition variables) could serve your purposes. Both semaphores and condition variables are associated with a lock object that they use to maintain mutual exclusion, but then they provide extra functionality on top of the lock for synchronizing thread execution. It's mostly up to you to figure out which one makes the most sense for your situation.
That's not necessarily the most technical description, but that's how it makes sense in my head.
Let's reveal what's under the hood.
A condition variable is essentially a wait-queue that supports blocking-wait and wakeup operations, i.e. you can put a thread into the wait-queue and set its state to BLOCKED, and take a thread out of it and set its state to READY.
Note that to use a condition variable, two other elements are needed:
a condition (typically implemented by checking a flag or a counter)
a mutex that protects the condition
The protocol then becomes:
acquire the mutex
check the condition
if the condition says you must wait, block (which releases the mutex atomically); otherwise proceed and release the mutex
A semaphore is essentially a counter + a mutex + a wait queue, and it can be used as-is without external dependencies. You can use it either as a mutex or as a condition variable.
Therefore, a semaphore can be treated as a more sophisticated structure than a condition variable, while the latter is more lightweight and flexible.
Semaphores can be used to implement exclusive access to variables; however, they are meant to be used for synchronization. Mutexes, on the other hand, have semantics strictly related to mutual exclusion: only the process which locked the resource is allowed to unlock it.
Unfortunately you cannot implement synchronization with mutexes; that's why we have condition variables. Also notice that with condition variables you can unblock all the waiting threads in the same instant by using broadcast unlocking. This cannot be done with semaphores.
Semaphores and condition variables are very similar and are used mostly for the same purposes. However, there are minor differences that can make one preferable. For example, to implement barrier synchronization you would not be able to use a semaphore, but a condition variable is ideal.
Barrier synchronization is when you want all of your threads to wait until everyone has arrived at a certain point in the thread function. This can be implemented with a static variable, initially equal to the total number of threads, that is decremented by each thread when it reaches the barrier. That means we want each thread to sleep until the last one arrives. A semaphore would do the exact opposite! With a semaphore, each thread would keep running, and the last thread (which sets the semaphore value to 0) would go to sleep.
A condition variable, on the other hand, is ideal. When each thread gets to the barrier, we check whether our static counter is zero. If not, we put the thread to sleep with the condition variable's wait function. When the last thread arrives at the barrier, the counter is decremented to zero, and this last thread calls the condition variable's broadcast function, which wakes up all the other threads.
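A minimal sketch of that barrier (single-use, with the thread count assumed to be NTHREADS; real code might prefer pthread_barrier_t):

#include <pthread.h>

#define NTHREADS 4

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t c = PTHREAD_COND_INITIALIZER;
static int remaining = NTHREADS;    /* the counter described above */

void barrier_wait(void)
{
    pthread_mutex_lock(&m);
    if (--remaining == 0)
        pthread_cond_broadcast(&c);         /* last arrival wakes everyone */
    else
        while (remaining > 0)
            pthread_cond_wait(&c, &m);      /* earlier arrivals sleep here */
    pthread_mutex_unlock(&m);
}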
I file condition variables under monitor synchronization. I've generally seen semaphores and monitors as two different synchronization styles. There are differences between the two in terms of how much state data is inherently kept and how you want to model code - but there really isn't any problem that can be solved by one but not the other.
I tend to code towards monitor form; in most languages I work in that comes down to mutexes, condition variables, and some backing state variables. But semaphores would do the job too.
A semaphore needs to know its count upfront for initialization. There is no such requirement for condition variables.
Mutexes and condition variables can both be derived from semaphores.
For a mutex, the semaphore uses two states: 0 and 1.
For a condition variable, the semaphore uses a counter.
They are like syntactic sugar:
conditionalVar + mutex == semaphore
