Why isn't mutex_trylock safe for use in interrupts? - linux

Linux Kernel Development by Robert Love states:
A mutex cannot be acquired by an interrupt handler or bottom half, even with
mutex_trylock()
At http://landley.net/kdocs/htmldocs/kernel-locking.html, it's mentioned that
mutex_trylock() does not suspend your task but returns non-zero if it could lock the mutex on the first try or 0 if not. This function cannot be safely used in hardware or software interrupt contexts despite not sleeping.
I don't understand why it can't be used in such cases when it doesn't go to sleep?

Imagine you had a platform whose native, low-level primitive mutexes do not have a "try lock" operation. To implement a high-level mutex that does, you'd use a condition variable and a boolean "is locked" flag protected by the low-level mutex: the boolean is the high-level mutex itself, and the condition variable is what lets waiters sleep until it is released.
With that design, mutex_lock would be implemented as follows:
Acquire low-level mutex (this is a real lock operation on the primitive, implementation mutex).
If high-level mutex is held, do a condition wait for the high-level mutex.
Acquire high-level mutex (just locked = true;).
Release low-level mutex.
And mutex_unlock would be implemented as follows:
Acquire low-level mutex.
Release high-level mutex (just locked = false;)
Signal the condition variable.
Release the low-level mutex.
In that case, mutex_trylock would be implemented as follows:
Acquire low-level mutex.
Check if high-level mutex is held.
If so, release low-level mutex and return failure.
Take high-level mutex.
Release low-level mutex.
Return success.
Now imagine we're interrupted after step 2 but before step 3, while we still hold the low-level mutex. If the interrupt handler calls mutex_trylock() on the same mutex, its own step 1 must acquire that low-level mutex, which is a real, blocking lock operation on a lock held by the very code the interrupt preempted. That code cannot run again until the interrupt returns, so we deadlock even though mutex_trylock() nominally never sleeps.
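A minimal sketch of that design, assuming POSIX mutex/condition-variable primitives as the low-level layer (the hl_* names are purely illustrative):

#include <pthread.h>
#include <stdbool.h>

struct hl_mutex {
    pthread_mutex_t low;    /* low-level primitive mutex (PTHREAD_MUTEX_INITIALIZER) */
    pthread_cond_t  cond;   /* PTHREAD_COND_INITIALIZER */
    bool            locked; /* the "high-level" mutex itself, initially false */
};

void hl_lock(struct hl_mutex *m)
{
    pthread_mutex_lock(&m->low);              /* acquire low-level mutex */
    while (m->locked)
        pthread_cond_wait(&m->cond, &m->low); /* wait for high-level mutex */
    m->locked = true;                         /* acquire high-level mutex */
    pthread_mutex_unlock(&m->low);            /* release low-level mutex */
}

void hl_unlock(struct hl_mutex *m)
{
    pthread_mutex_lock(&m->low);
    m->locked = false;                        /* release high-level mutex */
    pthread_cond_signal(&m->cond);
    pthread_mutex_unlock(&m->low);
}

int hl_trylock(struct hl_mutex *m)
{
    pthread_mutex_lock(&m->low);              /* step 1: a real, blocking lock */
    if (m->locked) {                          /* step 2: check high-level mutex */
        pthread_mutex_unlock(&m->low);        /* step 3: release and fail */
        return 0;
    }
    m->locked = true;                         /* step 4: take high-level mutex */
    pthread_mutex_unlock(&m->low);            /* step 5: release low-level mutex */
    return 1;                                 /* step 6: success */
}

The "try" only applies to the high-level boolean; the lock in step 1 is still a real acquisition, which is exactly where an interrupt handler can deadlock against the code it preempted.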

Maybe it is because interrupts are not disabled in mutex_trylock(). If we have a scenario where a mutex is locked using mutex_trylock() in interrupt context and another interrupt comes in that tries to acquire the same mutex, it can result in a deadlock-like situation.

The reason is that we require the mutex semantics to allow for full Priority-Inheritance, even if the default implementation does not do that (PREEMPT_RT switches mutex to mutex_rt and then it does get to have PI).
Suppose your interrupt does mutex_trylock() successfully, then the interrupt, or rather the task that was interrupted, becomes the lock owner. Any contending mutex_lock() would try and PI-boost that owner. One might argue that due to interrupts being non-preemptible, the context is effectively a prio-ceiling and all is well. However, if the interrupt hit the idle task, we'll end up trying to boost the idle task, and that is a big no-no.
Also read the thread here:
https://lkml.kernel.org/r/20191218135047.GS2844@hirez.programming.kicks-ass.net

Related

Terminology question: mutex lock, spin lock, sleepable lock

All over Stack Overflow and the net I see folks distinguish mutexes and spinlocks roughly like this: a mutex is a mutual exclusion lock providing acquire() and release() functions, and if the lock is already taken, acquire() will allow the process to be preempted (put to sleep).
Nevertheless, A. Silberschatz in his Operating System Concepts says in section 6.5:
... The simplest of these tools is the mutex lock. (In fact, the term mutex is short for mutual exclusion.) We use the mutex lock to protect critical sections and thus prevent race conditions. That is, a process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section. The acquire() function acquires the lock, and the release() function releases the lock.
and then he describes a spinlock, though adding a bit later
The type of mutex lock we have been describing is also called a spinlock because the process “spins” while waiting for the lock to become available.
So a spinlock is just one type of mutex lock, as opposed to sleepable locks that allow a process to be preempted. That is, spinlocks and sleepable locks are all mutexes: locks acquired and released via acquire() and release() functions.
It seems totally logical to me to define mutex locks the way Silberschatz did (though a bit implicitly).
What approach would you agree with?
The type of mutex lock we have been describing is also called a spinlock because the process “spins” while waiting for the lock to become available.
Maybe you're misreading the book (that is, "The type of mutex lock we have been describing" might not refer to the exact passage you think it does), or the book is outdated. Modern terminology is quite clear in what a mutex is, but spinlocks get a bit muddy.
A mutex is a concurrency primitive that allows one agent at a time access to its resource, while the others have to wait until the exclusive access is released. How they wait is not specified and irrelevant: their process might go to sleep, get swapped out to disk, spin in a loop, or, if you are using cooperative concurrency (also known as "asynchronous programming"), pass control to the event loop as its 'waiting operation'.
A spinlock does not have a clear definition. It might be used to refer to:
A synonym for mutex (this is in my opinion wrong, but it happens).
A specific mutex implementation that always waits in a busy loop.
Any sort of busy-waiting loop waiting for a resource. A semaphore, for example, might also get implemented using a 'spinlock'.
I would consider any use of the word to refer to a (part of a) specific implementation of a concurrency primitive that waits in a busy loop to be correct, if a more general term is not appropriate. That is, use mutex (or whatever primitive you desire) unless you specifically want to talk about a busy-waiting concurrency primitive.
The words that one author uses in one book or manual will not always have the same exact meaning in every book and every manual. The meanings of the words evolve over time, and it can happen fast when the words are names for new ideas.
Not every book was written at the same time. Not every author is the same age or had the same teachers. It's just something you'll have to get used to.
"Mutex" was a name for a new idea not so very long ago.
In one book, it might mean nothing more than a thing that keeps two or more threads from entering the same critical section at the same time. In another book, it might refer to a specific type of object in a certain operating system or library that is used for that same purpose.
A spinlock is a lock/mutex whose implementation relies mainly on a spinning loop.
More advanced locks/mutexes may have spinning parts in their implementation, however those often last for no more than a few microseconds or so.
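To make the "spins in a loop" sense concrete, a minimal busy-waiting spinlock might look like this (a sketch using C11 atomics; no backoff, no fairness, not production quality):

#include <stdatomic.h>

typedef struct { atomic_flag locked; } spinlock_t;
#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static inline void spin_lock(spinlock_t *l)
{
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;   /* busy-wait: this is the "spin" */
}

static inline void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

A sleepable mutex offers the same acquire/release interface but parks the waiting thread in the scheduler instead of burning the CPU while it waits.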

Do lock(mutex) implementations usually try to determine for how long a mutex was locked and on which core? If not, why not?

When a lock (or try_lock) function of a mutex discovers that the mutex is already locked (by another thread (presumably)), could it try to determine whether the owning thread is (or was extremely recently) running on another core?
Knowing whether the owner is running gives an indication of a possible reason the thread still holds the lock (why it couldn't unlock it): if the thread owning the mutex is either "runnable" (waiting for an available core and time slice) or "sleeping" or waiting for I/O... then clearly spinning on the mutex is not useful.
I understand that non-recursive, non-"safe", non-priority-inheritance mutexes (what I call "anonymous" mutexes: mutexes that in practice can be unlocked by another thread, as they are just common semaphores) carry as little information as possible, but surely that could be determined for those "owned" mutexes that know the identity of the locking thread.
Another, possibly more useful, piece of information would be how long the mutex has been locked, but that would potentially require adding a field to the mutex object.
Is that ever done?
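For what it's worth, the bookkeeping described above is easy to sketch in user space by wrapping a plain pthread mutex; the dbg_* names and fields are hypothetical, purely to illustrate recording the owner and the lock time:

#include <pthread.h>
#include <stdio.h>
#include <time.h>

struct dbg_mutex {
    pthread_mutex_t m;          /* initialise with PTHREAD_MUTEX_INITIALIZER */
    pthread_t       owner;      /* valid only while locked */
    struct timespec locked_at;  /* valid only while locked */
};

void dbg_lock(struct dbg_mutex *d)
{
    pthread_mutex_lock(&d->m);
    d->owner = pthread_self();
    clock_gettime(CLOCK_MONOTONIC, &d->locked_at);
}

void dbg_unlock(struct dbg_mutex *d)
{
    pthread_mutex_unlock(&d->m);
}

int dbg_trylock(struct dbg_mutex *d)
{
    if (pthread_mutex_trylock(&d->m) == 0) {
        d->owner = pthread_self();
        clock_gettime(CLOCK_MONOTONIC, &d->locked_at);
        return 1;
    }
    /* Lock is busy: racy, diagnostic-only peek at how long it has been held.
     * The owner field could likewise be inspected with pthread_equal(). */
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    double held = (now.tv_sec - d->locked_at.tv_sec)
                + (now.tv_nsec - d->locked_at.tv_nsec) / 1e9;
    fprintf(stderr, "mutex busy, held for ~%.6f s\n", held);
    return 0;
}

This answers "how long" but not "on which core"; the latter needs scheduler knowledge that a portable user-space wrapper doesn't have.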

Why not to use mutex inside an interrupt

I have read through this post and noticed that in Clifford's answer he said that we shouldn't use a mutex in an interrupt. I know that in an interrupt we have to avoid too many instructions and delays etc., but I am not very clear about the reasons. Could anyone clarify why we have to avoid this?
In case we want to establish synchronous communication between two interrupt-driven threads, what other mechanisms can we use if a mutex is not allowed?
The original question you cite refers to code on an Atmel ATmega AVR - a simple 8-bit microcontroller. In that context, one can assume that the mutex mechanism is part of a simple RTOS.
In such a system, there is a thread context and an interrupt context. Interrupts are invoked by the hardware, while threads are scheduled by the RTOS scheduler. When an interrupt occurs, any thread is immediately pre-empted; the interrupt must run to completion and can only be preempted by a higher-priority interrupt (where nested interrupts are supported). All pending interrupts will run to completion before the scheduler can run.
Blocking on a mutex (or indeed on any blocking kernel object) is a scheduling event. If you were to make a blocking call in an interrupt, the scheduler would never run. In practice an RTOS would either ignore the blocking call, raise an exception, or enter a terminal error handler.
Some OSes such as SMX, Velocity or even WinCE have somewhat more complex interrupt architectures and support a variety of deferred interrupt handlers. Deferred interrupt handlers are run-to-completion handlers scheduled from an interrupt but running outside the interrupt context; the rules for blocking in such handlers may differ, but you would need to refer to the specific OS documentation. Without deferred interrupt handlers, the usual solution is to have a thread wait on some blocking object such as a semaphore, and have the interrupt itself do little more than cause the object to unblock (by giving a semaphore, for example).
Multi-processor/core and parallel processing systems are another issue altogether. Such systems are way beyond the scope of the question where the original comment was made, and beyond my experience; my comment may not apply to such a system, but there are no doubt additional complexities and considerations in any case.
A mutex is typically used to ensure that a resource is used by only one user at any given time.
When a thread needs to use a resource it attempts to get the mutex first to ensure the resource is available. If the mutex is not available then the thread typically blocks to wait for the mutex to become available.
While a thread owns the mutex, it prevents other threads from obtaining the mutex and interfering with its use of the resource. Higher priority threads are often the concern here because those are the threads that may preempt the mutex owner.
The RTOS kernel assigns ownership of the mutex to a particular thread and typically only the mutex owner can release the mutex.
Now let's imagine this from an interrupt handler's point of view.
If an interrupt handler attempts to get a mutex that is not available, what should it do? The interrupt handler cannot block like the thread (the kernel is not equipped to push the context of an interrupt handler or switch to a thread from an interrupt handler).
If the interrupt handler obtains the mutex, what higher priority code is there that could interrupt the interrupt handler and attempt to use the mutex? Is the interrupt handler going to release the mutex before completing?
How does the kernel assign ownership of the mutex to an interrupt handler? An interrupt handler is not a thread. If the interrupt handler does not release the mutex then how will the kernel validate that the mutex is being released by the owner?
So maybe you have answers for all those questions. Maybe you can guarantee that the interrupt handler runs only when the mutex is available, or that the interrupt handler will not block on the mutex. Or maybe you're trying to protect the resource access from an even higher-priority nested interrupt handler that also wants to use the resource. And maybe your kernel doesn't have any hangup with assigning ownership or restricting who releases the mutex. I guess if you've got all these questions answered then maybe you have a case for using a mutex within an interrupt handler.
But perhaps what you really need is a semaphore instead. One common application of a semaphore is to signal an event, and semaphores are very often used this way within interrupt handlers. The interrupt handler posts or sets the semaphore to signal that an event has occurred, and the threads pend on the semaphore to wait for the event condition. (A semaphore doesn't have the ownership restriction that a mutex has.) Event-signalling semaphores are one common way to establish synchronous communication between two interrupt-driven threads.
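As a rough illustration of that pattern (the kernel in the original question isn't named, so this sketch assumes FreeRTOS purely as a representative RTOS API; other kernels have equivalent calls):

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t xDataReady;   /* created once with xSemaphoreCreateBinary() */

void vHandlerTask(void *pvParameters)  /* thread context */
{
    (void)pvParameters;
    for (;;) {
        /* Block here until the ISR signals that an event occurred. */
        if (xSemaphoreTake(xDataReady, portMAX_DELAY) == pdTRUE) {
            /* ... process the event ... */
        }
    }
}

void vExampleISR(void)                 /* interrupt context */
{
    BaseType_t xWoken = pdFALSE;
    /* Do the minimum work here, then just signal the waiting thread. */
    xSemaphoreGiveFromISR(xDataReady, &xWoken);
    portYIELD_FROM_ISR(xWoken);        /* request a switch to the handler task if it was woken */
}

The ISR never blocks; all the waiting happens in the thread, where the scheduler can deal with it.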
The term "mutex" is often defined both as being the simplest form of synchronization between execution contexts, and also as being a construct that will not only check whether a resource is available, but wait for it to become available if it isn't, acquiring it as soon as it becomes available. These definitions are inconsistent, since the simplest forms of synchronization merely involve testing whether one has been granted ownership of a resource, and don't provide any in-built mechanism to wait for it if it isn't.
It is almost never proper to have code within an interrupt handler that waits for a resource to become available, unless the only things that could hold the resource would be higher-priority interrupts or hardware that will spontaneously release it. If the term "mutex" is only used to describe such constructs, then there would be very few cases where one could properly use a mutex within an interrupt handler. If, however, one uses the term "mutex" more broadly to refer to the simplest structures that will ensure that a piece of code that accesses a resource can only execute at times when no other piece of code anywhere in the universe will be accessing that resource, then the use of such constructs within interrupts is often not only proper, but required.
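In that broader sense, on a single-core microcontroller the "simplest structure" is often nothing more than briefly masking interrupts around the thread-side access. A sketch of that idea (disable_irq()/restore_irq() are hypothetical stand-ins for the platform's interrupt-control primitives, e.g. cli()/sei() on AVR):

#include <stdint.h>

volatile uint32_t shared_counter;       /* shared between ISR and main loop */

void timer_isr(void)                    /* interrupt context */
{
    shared_counter++;                   /* the main loop cannot preempt this */
}

uint32_t read_and_clear(void)           /* thread / main-loop context */
{
    uint32_t flags = disable_irq();     /* hypothetical: mask interrupts, save state */
    uint32_t value = shared_counter;
    shared_counter = 0;
    restore_irq(flags);                 /* hypothetical: restore previous mask */
    return value;
}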
While there might be unusual cases where there's some problem with using a mutex in an interrupt handler, it's quite common practice and there's nothing wrong with it.
It really only makes sense on systems with more than one core. With just a single core (and no hyper-threading), the mutex would never do anything anyway. If the core is running code that acquires a mutex that interrupt code can acquire, interrupts (or the subset of them that matter) are disabled anyway. So with just one core, the mutex would never see any contention.
However, with multiple cores, it's common to use mutexes to protect structures that communicate between interrupt and non-interrupt code. So long as you know what you're doing, and you have to if you're going to write interrupt handlers, there's nothing wrong with it.
How the mutex blocks and unblocks is heavily implementation dependent. It can put the CPU to sleep and be woken by an inter-processor interrupt. It can spin the CPU in some CPU-specific way.
Note that a totally unrelated concept that is often confused with this is using user-space mutexes in user-space signal handlers. That's a completely different question.

How can any thread usually release the non-recursive mutex no matter which thread originally took the mutex?

https://stackoverflow.com/a/189778/462608
In the case of non-recursive mutexes, there is no sense of ownership and any thread can usually release the mutex no matter which thread originally took the mutex.
What I have studied about Mutexes is that a thread acquires it when it wants to do something to a shared object, and when it completes whatever it wanted to do, it releases the lock. And meanwhile other threads can either sleep or spinlock.
What does the above quote mean by "any thread can usually release the mutex no matter which thread originally took the mutex."?
What's the point that I am missing?
This may differ between different thread implementations, but since you've tagged your question with "pthreads" I assume you're interested in pthread mutexes (and not vxworks mutexes, which is apparently what the link you provide describes).
So in pthreads the rule is that the same thread that locks a mutex must unlock it. You can set attributes on the mutex object to control whether an error is generated if this rule is violated, or whether the result is undefined behavior (say, for debug vs. release builds). See the manpage for the pthread_mutexattr_settype function for details.
The specification for unlocking a pthread_mutex_t by a thread that wasn't the one that locked it depends on the mutex type (at best it returns an error):
Attempting to unlock a mutex on a thread that didn't lock it is undefined behavior for the following mutex types:
PTHREAD_MUTEX_NORMAL
PTHREAD_MUTEX_DEFAULT
Attempting to unlock a mutex on a thread that didn't lock it returns an error (EPERM) for these types:
PTHREAD_MUTEX_ERRORCHECK
PTHREAD_MUTEX_RECURSIVE
See http://pubs.opengroup.org/onlinepubs/007904875/functions/pthread_mutex_lock.html for details.
The bottom line is that it's never OK to unlock a mutex on a different thread, even if things seem to work.
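A small pthreads demonstration of the behaviour described above: with PTHREAD_MUTEX_ERRORCHECK the wrong-thread unlock is reported as EPERM instead of silently appearing to work (assumes a POSIX system; compile with -pthread):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m;

static void *wrong_unlocker(void *arg)
{
    (void)arg;
    int rc = pthread_mutex_unlock(&m);   /* this thread never locked m */
    printf("unlock from non-owner: %s\n", rc == EPERM ? "EPERM" : "no error reported");
    return NULL;
}

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);              /* main thread is the owner */

    pthread_t t;
    pthread_create(&t, NULL, wrong_unlocker, NULL);
    pthread_join(t, NULL);

    pthread_mutex_unlock(&m);            /* legal: same thread that locked it */
    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return 0;
}

With PTHREAD_MUTEX_NORMAL or PTHREAD_MUTEX_DEFAULT the same wrong-thread unlock is undefined behaviour, which is precisely why it may "seem to work".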

Recursive Lock (Mutex) vs Non-Recursive Lock (Mutex)

POSIX allows mutexes to be recursive. That means the same thread can lock the same mutex twice and won't deadlock. Of course it also needs to unlock it twice, otherwise no other thread can obtain the mutex. Not all systems supporting pthreads also support recursive mutexes, but if they want to be POSIX conformant, they have to.
Other, higher-level APIs also usually offer mutexes, often called locks. Some systems/languages (e.g. Cocoa Objective-C) offer both recursive and non-recursive mutexes. Some languages only offer one or the other; e.g. in Java mutexes are always recursive (the same thread may "synchronize" twice on the same object). Depending on what other thread functionality they offer, not having recursive mutexes might be no problem, as you can easily write them yourself (I have already implemented recursive mutexes myself on the basis of simpler mutex/condition operations).
What I don't really understand: What are non-recursive mutexes good for? Why would I want to have a thread deadlock if it locks the same mutex twice? Even high level languages that could avoid that (e.g. testing if this will deadlock and throwing an exception if it does) usually don't do that. They will let the thread deadlock instead.
Is this only for cases, where I accidentally lock it twice and only unlock it once and in case of a recursive mutex, it would be harder to find the problem, so instead I have it deadlock immediately to see where the incorrect lock appears? But couldn't I do the same with having a lock counter returned when unlocking and in a situation, where I'm sure I released the last lock and the counter is not zero, I can throw an exception or log the problem? Or is there any other, more useful use-case of non recursive mutexes that I fail to see? Or is it maybe just performance, as a non-recursive mutex can be slightly faster than a recursive one? However, I tested this and the difference is really not that big.
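(For reference, such a hand-rolled recursive mutex can look roughly like this: a sketch on top of a plain pthread mutex and condition variable, with no error checking and purely illustrative names:)

#include <pthread.h>

struct rec_mutex {
    pthread_mutex_t lock;     /* protects the fields below */
    pthread_cond_t  released;
    pthread_t       owner;
    unsigned        count;    /* 0 means unowned */
};

void rec_lock(struct rec_mutex *m)
{
    pthread_mutex_lock(&m->lock);
    if (m->count > 0 && pthread_equal(m->owner, pthread_self())) {
        m->count++;                       /* recursive acquisition by the owner */
    } else {
        while (m->count > 0)              /* wait until the mutex becomes free */
            pthread_cond_wait(&m->released, &m->lock);
        m->owner = pthread_self();
        m->count = 1;
    }
    pthread_mutex_unlock(&m->lock);
}

void rec_unlock(struct rec_mutex *m)      /* simplification: assumes the caller is the owner */
{
    pthread_mutex_lock(&m->lock);
    if (--m->count == 0)
        pthread_cond_signal(&m->released);
    pthread_mutex_unlock(&m->lock);
}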
The difference between a recursive and non-recursive mutex has to do with ownership. In the case of a recursive mutex, the kernel has to keep track of the thread that actually obtained the mutex the first time around, so that it can detect the difference between recursion and a different thread that should block instead. As another answer pointed out, there is a question of the additional overhead of this, both in terms of memory to store this context and the cycles required for maintaining it.
However, there are other considerations at play here too.
Because the recursive mutex has a sense of ownership, the thread that grabs the mutex must be the same thread that releases the mutex. In the case of non-recursive mutexes, there is no sense of ownership and any thread can usually release the mutex no matter which thread originally took the mutex. In many cases, this type of "mutex" is really more of a semaphore action, where you are not necessarily using the mutex as an exclusion device but use it as synchronization or signaling device between two or more threads.
Another property that comes with a sense of ownership in a mutex is the ability to support priority inheritance. Because the kernel can track the thread owning the mutex and also the identity of all the blocker(s), in a priority threaded system it becomes possible to escalate the priority of the thread that currently owns the mutex to the priority of the highest priority thread that is currently blocking on the mutex. This inheritance prevents the problem of priority inversion that can occur in such cases. (Note that not all systems support priority inheritance on such mutexes, but it is another feature that becomes possible via the notion of ownership).
If you refer to classic VxWorks RTOS kernel, they define three mechanisms:
mutex - supports recursion, and optionally priority inheritance. This mechanism is commonly used to protect critical sections of data in a coherent manner.
binary semaphore - no recursion, no inheritance, simple exclusion; the taker and giver do not have to be the same thread, and a broadcast release is available. This mechanism can be used to protect critical sections, but it is also particularly useful for coherent signalling or synchronization between threads.
counting semaphore - no recursion or inheritance; acts as a coherent resource counter from any desired initial count, and threads only block when the net count against the resource is zero.
Again, this varies somewhat by platform - especially what they call these things, but this should be representative of the concepts and various mechanisms at play.
The answer is not efficiency. Non-reentrant mutexes lead to better code.
Example: A::foo() acquires the lock. It then calls B::bar(). This worked fine when you wrote it. But sometime later someone changes B::bar() to call A::baz(), which also acquires the lock.
Well, if you don't have recursive mutexes, this deadlocks. If you do have them, it runs, but it may break. A::foo() may have left the object in an inconsistent state before calling bar(), on the assumption that baz() couldn't get run because it also acquires the mutex. But it probably shouldn't run! The person who wrote A::foo() assumed that nobody could call A::baz() at the same time - that's the entire reason that both of those methods acquired the lock.
The right mental model for using mutexes: The mutex protects an invariant. When the mutex is held, the invariant may change, but before releasing the mutex, the invariant is re-established. Reentrant locks are dangerous because the second time you acquire the lock you can't be sure the invariant is true any more.
If you are happy with reentrant locks, it is only because you have not had to debug a problem like this before. Java has non-reentrant locks these days in java.util.concurrent.locks, by the way.
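The scenario above, reduced to a sketch with a single global lock (foo/bar/baz stand in for A::foo(), B::bar() and A::baz(); pthreads assumed):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void baz(void)                /* also needs the lock */
{
    pthread_mutex_lock(&lock);
    /* ... relies on the protected invariant holding here ... */
    pthread_mutex_unlock(&lock);
}

void bar(void)
{
    /* Originally lock-free; later someone changes it to call baz(): */
    baz();
}

void foo(void)
{
    pthread_mutex_lock(&lock);
    /* ... invariant temporarily broken here ... */
    bar();                    /* non-recursive lock: deadlocks, loudly and early;
                                 recursive lock: baz() runs on a broken invariant */
    /* ... invariant re-established before unlocking ... */
    pthread_mutex_unlock(&lock);
}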
As written by Dave Butenhof himself:
"The biggest of all the big problems with recursive mutexes is that
they encourage you to completely lose track of your locking scheme and
scope. This is deadly. Evil. It's the "thread eater". You hold locks for
the absolutely shortest possible time. Period. Always. If you're calling
something with a lock held simply because you don't know it's held, or
because you don't know whether the callee needs the mutex, then you're
holding it too long. You're aiming a shotgun at your application and
pulling the trigger. You presumably started using threads to get
concurrency; but you've just PREVENTED concurrency."
The right mental model for using mutexes: The mutex protects an invariant.
Why are you sure that this is really right mental model for using mutexes?
I think right model is protecting data but not invariants.
The problem of protecting invariants presents even in single-threaded applications and has nothing common with multi-threading and mutexes.
Furthermore, if you need to protect invariants, you may still use a binary semaphore, which is never recursive.
One main reason that recursive mutexes are useful is the case where the same thread accesses the protected methods multiple times. For example, if a mutex lock is protecting a bank account during a withdrawal, and there is also a fee associated with that withdrawal, then the same mutex has to be used.
The only good use case for a recursive mutex is when an object contains multiple methods, any of which may modify the content of the object and therefore must lock the object until the state is consistent again.
If the methods use other methods (i.e. addNewArray() calls addNewPoint(), and finalizes with recheckBounds()), and any of those functions by themselves needs to lock the mutex, then a recursive mutex is a win-win.
Any other case (working around bad coding, or even using it across different objects) is clearly wrong!
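A sketch of that "object with multiple locking methods" case, assuming a pthread mutex initialised with PTHREAD_MUTEX_RECURSIVE (names and sizes are illustrative only):

#include <pthread.h>

struct point_set {
    pthread_mutex_t lock;     /* created with the recursive type, see below */
    double xs[100], ys[100];
    int    count;
};

void add_point(struct point_set *s, double x, double y)
{
    pthread_mutex_lock(&s->lock);      /* safe even if the caller already holds it */
    s->xs[s->count] = x;
    s->ys[s->count] = y;
    s->count++;
    /* ... update bounds etc. ... */
    pthread_mutex_unlock(&s->lock);
}

void add_array(struct point_set *s, const double *x, const double *y, int n)
{
    pthread_mutex_lock(&s->lock);      /* outer acquisition */
    for (int i = 0; i < n; i++)
        add_point(s, x[i], y[i]);      /* inner, recursive acquisition */
    pthread_mutex_unlock(&s->lock);
}

/* Initialisation of the lock:
 *   pthread_mutexattr_t a;
 *   pthread_mutexattr_init(&a);
 *   pthread_mutexattr_settype(&a, PTHREAD_MUTEX_RECURSIVE);
 *   pthread_mutex_init(&s->lock, &a);
 */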
What are non-recursive mutexes good for?
They are absolutely good when you have to make sure the mutex is unlocked before doing something. This is because pthread_mutex_unlock can guarantee that the mutex is unlocked only if it is non-recursive.
#include <pthread.h>

void bar(void);               /* some external function, see below */

pthread_mutex_t g_mutex = PTHREAD_MUTEX_INITIALIZER;

void foo(void)
{
    pthread_mutex_lock(&g_mutex);
    // Do something.
    pthread_mutex_unlock(&g_mutex);
    bar();
}
If g_mutex is non-recursive, the code above is guaranteed to call bar() with the mutex unlocked.
This eliminates the possibility of a deadlock in case bar() happens to be an unknown external function which may well do something that results in another thread trying to acquire the same mutex. Such scenarios are not uncommon in applications built on thread pools, and in distributed applications where an interprocess call may spawn a new thread without the client programmer even realising it. In all such scenarios it's best to invoke such external functions only after the lock is released.
If g_mutex were recursive, there would simply be no way to make sure it is unlocked before making the call.
IMHO, most arguments against recursive locks (which are what I have used 99.9% of the time over roughly 20 years of concurrent programming) mix the question of whether they are good or bad with other software design issues that are quite unrelated. To name one, the "callback" problem, which is elaborated on exhaustively, and without any multithreading-related point of view, for example in the book Component Software - Beyond Object-Oriented Programming.
As soon as you have some inversion of control (e.g. events fired), you face re-entrance problems. Independent of whether there are mutexes and threading involved or not.
#include <functional>
#include <string>
#include <vector>

class EvilFoo {
    std::vector<std::string> data;
    std::vector<std::function<void(EvilFoo&)> > changedEventHandlers;
public:
    size_t registerChangedHandler(std::function<void(EvilFoo&)> handler) { // ...
    }
    void unregisterChangedHandler(size_t handlerId) { // ...
    }
    void fireChangedEvent() {
        // bad bad, even evil idea!
        for (auto& handler : changedEventHandlers) {
            handler(*this);
        }
    }
    void AddItem(const std::string& item) {
        data.push_back(item);
        fireChangedEvent();
    }
};
Now, with code like the above you get all error cases, which would usually be named in the context of recursive locks - only without any of them. An event handler can unregister itself once it has been called, which would lead to a bug in a naively written fireChangedEvent(). Or it could call other member functions of EvilFoo which cause all sorts of problems. The root cause is re-entrance.
Worst of all, this might not even be very obvious, as the re-entrance could occur over a whole chain of events firing events, until eventually we are back at our EvilFoo (non-local).
So, re-entrance is the root problem, not the recursive lock.
Now, if you felt more on the safe side using a non-recursive lock, how would such a bug manifest itself? In a deadlock whenever unexpected re-entrance occurs.
And with a recursive lock? The same way it would manifest itself in code without any locks.
So the evil part of EvilFoo are the events and how they are implemented, not so much a recursive lock. fireChangedEvent() would need to first create a copy of changedEventHandlers and use that for iteration, for starters.
Another aspect often coming into the discussion is the definition of what a lock is supposed to do in the first place:
Protect a piece of code from re-entrance
Protect a resource from being used concurrently (by multiple threads).
The way I do my concurrent programming, I have a mental model of the latter (protect a resource). This is the main reason why I am good with recursive locks. If some (member) function needs locking of a resource, it locks. If it calls another (member) function while doing what it does and that function also needs locking - it locks. And I don't need an "alternate approach", because the ref-counting of the recursive lock is quite the same as if each function wrote something like:
void EvilFoo::bar() {
    auto_lock lock(this); // this->lock_holder = this->lock_if_not_already_locked_by_same_thread()
    // do what we gotta do
    // ~auto_lock() { if (lock_holder) unlock() }
}
And once events or similar constructs (visitors?!) come into play, I do not hope to get all the ensuing design problems solved by some non-recursive lock.
