Could you have the following scenario in concurrent programs?
Suppose a thread acquires a lock to execute a critical section. Then, before the critical section is executed, the processor preempts the thread. The new thread that is scheduled needs the lock held by the old (preempted) thread, so it can't proceed and hangs until it is itself preempted. Is there a mechanism in operating systems to prevent threads from being preempted until the lock is released?
It is possible for a thread holding a mutex to be preempted while executing a critical section. If the thread that the OS switches to tries to acquire that mutex and finds it already locked, that thread should be context-switched out immediately. The thread scheduler should be smart enough not to switch back to it until it has switched back to the thread holding the mutex and the mutex has been released.
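For illustration, here is a minimal user-space sketch with POSIX threads (the thread names and timing are mine, error handling omitted): the contending thread is put to sleep inside pthread_mutex_lock() and consumes no CPU until the holder unlocks.

```c
/* Minimal sketch: the contender blocks inside pthread_mutex_lock() and
 * sleeps, consuming no CPU, until main() releases the mutex. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *contender(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);      /* already held: this thread is put to sleep */
    puts("contender: acquired the mutex");
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_mutex_lock(&lock);      /* hold the mutex first */
    pthread_create(&t, NULL, contender, NULL);
    sleep(1);                       /* contender is blocked, not spinning */
    pthread_mutex_unlock(&lock);    /* wakes the contender */
    pthread_join(t, NULL);
    return 0;
}
```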
If you are writing kernel code then yes, there are mechanisms for preventing a thread from being preempted.
For standard code there is no such thing. Some operations are atomic and are ensured atomic by the compiler and kernel, but right after those operations the thread may be preempted, and it can remain preempted for an undetermined amount of time (unless the system is a real-time system).
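In the Linux kernel, for instance, one such mechanism is preempt_disable()/preempt_enable(); a rough kernel-code sketch (the function name here is made up for illustration):

```c
/* Kernel-code sketch only (hypothetical function name): while preempt_count
 * is raised by preempt_disable(), the scheduler will not preempt this task;
 * preempt_enable() drops the count and reschedules if needed. There is no
 * equivalent for ordinary user-space code. */
#include <linux/preempt.h>

void touch_cpu_local_state(void)
{
        preempt_disable();      /* increments preempt_count */
        /* ... short critical section that must not be preempted ... */
        preempt_enable();       /* decrements it; may trigger a reschedule */
}
```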
Related
Linux kernel 2.6 introduced a new per-thread field---preempt_count---which is incremented/decremented whenever a lock is acquired/released. This field is used to allow kernel preemption: "If need_resched is set and preempt_count is zero, then a more important task is runnable and it is safe to preempt."
According to the "Linux Kernel Development" book by Robert Love:
"So when is it safe to reschedule? The kernel is capable of preempting a task running in the kernel so long as it does not hold a lock."
My question is: why isn't it safe to preempt a task running in the kernel while this task holds a lock?
If another task is scheduled and tries to grab the lock, it will block (or spin until its time slice ends), so we wouldn't get the two threads simultaneously inside the same critical section. Can anyone please outline a problematic scenario in case we do preempt a task that holds a lock in kernel-mode?
Thanks!
While this is an old question, the accepted answer isn't correct.
First of all the title is asking:
Why kernel preemption is safe only when preempt_count > 0?
This isn't correct; it's the opposite. Kernel preemption is disabled when preempt_count > 0, and enabled when preempt_count == 0.
Furthermore, the claim:
If another task is scheduled and tries to grab the lock, it will block (or spin until its time slice ends),
Is not always true.
Say you acquire a spin lock while preemption is enabled. A process switch happens, and in the context of the new process some softirq runs. Preemption is disabled while running softirqs. If one of those softirqs attempts to acquire your lock, it will never stop spinning because preemption is disabled. Thus you have a deadlock.
You have no control over whether the process that preempts yours will run softirqs or not. The preempt_count field where you disable softirqs is process-specific. Softirqs have to run with preemption disabled to preserve the per-cpu serialization of softirqs.
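To make this concrete, here is a kernel-code sketch (the names are made up, not taken from the question) of the usual remedy: take the lock with the bottom-half-disabling variant in process context, so a softirq on the same CPU can never interrupt the holder and spin on the lock it holds:

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(shared_lock);
static unsigned long shared_counter;

/* Process context: spin_lock_bh() disables softirqs (and preemption)
 * locally, so a softirq cannot run on this CPU while the lock is held. */
void update_from_process_context(void)
{
        spin_lock_bh(&shared_lock);
        shared_counter++;
        spin_unlock_bh(&shared_lock);
}

/* Softirq/tasklet context: softirqs are already serialized per CPU here,
 * so the plain variant is enough. */
void update_from_softirq(void)
{
        spin_lock(&shared_lock);
        shared_counter++;
        spin_unlock(&shared_lock);
}
```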
With the help of @Tsyvarev, I think I can now answer my own question and depict a problematic scenario in which we do preempt a task that holds a lock in kernel-mode.
Thread #1 holds a spin-lock and gets preempted.
Thread #2 is then scheduled, and spins to grab the spin-lock.
Now, if thread #2 is a conventional process, it will eventually finish its time slice. In that case, thread #1 will be scheduled again, release the lock, and we are all good.
But, if thread #2 is a real-time process of higher priority, thread #1 will never get to run again and we have a deadlock.
This answer is corroborated by another stackoverflow thread which cites the FreeBSD documentation:
While locks can protect most data in the case of a preemption, not all of the kernel is preemption safe. For example, if a thread holding a spin mutex is preempted and the new thread attempts to grab the same spin mutex, the new thread may spin forever as the interrupted thread may never get a chance to execute.
although the above quote doesn't explicitly explain why the "interrupted thread may never get a chance to execute" again.
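To make the scenario concrete, here is a user-space sketch of the hazard (entirely my own construction; it needs root for SCHED_FIFO, and stock Linux's RT-throttling default may eventually let the holder run, but conceptually thread #2 starves thread #1):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <unistd.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void pin_to_cpu0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);
}

/* Thread #1: normal priority, takes the spin-lock, then "works" long
 * enough to be preempted while still holding it. */
static void *holder(void *arg)
{
    (void)arg;
    pin_to_cpu0();
    while (atomic_flag_test_and_set(&lock))
        ;                           /* acquire the spin-lock */
    sleep(5);                       /* critical section */
    atomic_flag_clear(&lock);
    return NULL;
}

/* Thread #2: SCHED_FIFO on the same CPU, spins on the same lock; the
 * lower-priority holder cannot get CPU 0 back while this thread spins. */
static void *rt_spinner(void *arg)
{
    (void)arg;
    struct sched_param p = { .sched_priority = 50 };
    pin_to_cpu0();
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &p);
    while (atomic_flag_test_and_set(&lock))
        ;                           /* effectively spins forever */
    atomic_flag_clear(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, holder, NULL);
    sleep(1);                       /* let the holder grab the lock first */
    pthread_create(&t2, NULL, rt_spinner, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```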
I am new to the multithreading paradigm. While learning concurrency, every source I found says:
"The difference between mutex and binary semaphore is the ownership, i.e. a mutex can be signaled for release only by the thread who created it, while a semaphore can be signaled by any thread"
Consider a scenario where thread A has acquired a binary semaphore lock on resource x and is processing it. If any thread can signal a release of the lock on x, doesn't this open up the possibility of some thread releasing the lock while thread A is still using x?
Isn't there scope for inconsistency here, or am I missing something?
Of course, if threads arbitrarily acquire or release a semaphore, the result will be disastrous, and the fact that implementations do not prevent this does not imply that it is a useful scenario.
However, there might be real use cases if the threads involved use another mechanism to coordinate among themselves, while using the semaphore to hold off those threads that do not participate in that coordination.
Imagine you expand a use case in which one thread acquires the semaphore to perform a task, into parallel execution of that task. After acquiring the semaphore, several worker threads are spawned, each of them working on a different part of the data, thus naturally not interfering with one another. Then the last worker thread releases the semaphore, which removes the need for any further communication between the initiating thread and the worker threads. Of course, this requires a worker thread to detect whether it is the last one, but a simple atomic integer holding the number of active workers is sufficient; see the sketch below.
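A rough sketch of that pattern with POSIX threads and C11 atomics (all names are made up, error handling omitted): the initiating thread acquires the semaphore, spawns the workers, and the last worker to finish posts it.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>
#include <stdio.h>

#define NUM_WORKERS 4

static sem_t task_sem;                        /* binary semaphore guarding the task */
static atomic_int active_workers = NUM_WORKERS;

static void *worker(void *arg)
{
    long part = (long)arg;
    printf("worker %ld: processing its slice of the data\n", part);

    /* The last worker to finish releases the semaphore on behalf of the
     * initiating thread. */
    if (atomic_fetch_sub(&active_workers, 1) == 1)
        sem_post(&task_sem);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_WORKERS];

    sem_init(&task_sem, 0, 1);
    sem_wait(&task_sem);                      /* initiator acquires the semaphore */

    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);

    /* No extra completion message is needed: any thread that sem_wait()s on
     * task_sem will only proceed after the last worker has posted it. */
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(threads[i], NULL);

    sem_destroy(&task_sem);
    return 0;
}
```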
I know spin locks only work on multiprocessors. But if two threads try to acquire the same resource and one is spinning on the lock, what prevents the other one from running on the same processor? If that happens, the one spinning on the lock will prevent the one holding the resource from proceeding. In that case it becomes a deadlock. How does the OS prevent this from happening?
Some background facts first:
spin-locks (and locks generally) are not limited to multiprocessor systems. They work fine on a single processor, and even a single-threaded application can use them without any harm.
spin-locks are not only provided by the OS; they have pure user-space implementations as well. For example, tbb provides tbb::spin_mutex.
By default, nothing prevents a thread from running on any available CPU (regardless of the locks they use).
There are reentrant/recursive types of locks. This means that if a thread has acquired the lock once and tries to acquire it again without releasing it, the attempt will succeed rather than deadlock as with usual locks. But it does not mean that the same applies to different threads just because they happen to be scheduled on the same CPU. With any type of lock, if one software thread has locked a mutex, other threads have to wait.
It is possible for one thread to acquire the lock and be preempted (i.e. interrupted by the OS timer) before it releases the lock. Another thread can then be scheduled on the same CPU, and it might want to acquire the same lock. In the case of pure spin-locks, this thread will uselessly spin until it exhausts the time-slice allowed by the OS and is itself preempted. Finally, the first thread will get a chance to run and release its lock, so the other thread will be able to acquire it.
As you can see, it is not very efficient to spend that time on hopeless waiting. Thus, more sophisticated implementations, after a number of failed attempts to acquire the spinlock, call the OS for help and voluntarily give away the time-slice to other threads, which can possibly unlock the current one.
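A simplified user-space sketch of that idea (my own example, not a production lock): spin a bounded number of times, then hand the rest of the time-slice back via sched_yield().

```c
#include <stdatomic.h>
#include <sched.h>

/* A simplified "spin, then yield" lock: after SPIN_LIMIT failed attempts we
 * give the rest of our time-slice away so the thread holding the lock can
 * run and release it. Real implementations (e.g. adaptive mutexes) are more
 * elaborate. Initialise with: yielding_spinlock l = { ATOMIC_FLAG_INIT }; */
#define SPIN_LIMIT 1000

typedef struct { atomic_flag flag; } yielding_spinlock;

void ys_lock(yielding_spinlock *l)
{
    for (;;) {
        for (int i = 0; i < SPIN_LIMIT; i++)
            if (!atomic_flag_test_and_set_explicit(&l->flag, memory_order_acquire))
                return;             /* got the lock */
        sched_yield();              /* hopeless for now: let other threads run */
    }
}

void ys_unlock(yielding_spinlock *l)
{
    atomic_flag_clear_explicit(&l->flag, memory_order_release);
}
```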
In Linux, say I have code with 100 threads. 5 of those threads compete over a shared resource protected by a mutex. I know that when the critical section is actually being run, only those 5 threads are subject to having their execution stopped if they try to obtain the lock, and the other 95 threads will keep running without issues.
My question is: is there any point at which those other 95 threads' execution will be paused or affected, e.g. while the mutex/kernel/whatever is determining which threads are blocked on the mutex, which thread should get the lock, and which threads should be able to run because they're not asking for the lock, etc.?
No, other threads are not affected.
The kernel doesn't ask which threads are affected by the lock. Each thread tells the kernel when it tries to acquire the lock.
When threads do that, they go to sleep and get into a special wake-up queue associated with the lock.
Threads that don't use the lock won't get into the same queue as those that do, so their blocking behavior is unrelated.
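As a rough illustration of such a wake-up queue, here is a simplified Linux futex-based lock (my own sketch, not how glibc's pthread mutex is actually implemented): only a thread that finds the lock word already taken enters the kernel and joins the queue; an uncontended acquire never involves the kernel at all.

```c
#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static atomic_int lock_word;            /* 0 = unlocked, 1 = locked */

void lock(void)
{
    int expected = 0;
    /* Fast path: an uncontended acquire is a single atomic operation and
     * never enters the kernel, so unrelated threads are not affected. */
    while (!atomic_compare_exchange_weak(&lock_word, &expected, 1)) {
        /* Slow path: sleep on this lock's kernel wait queue until woken. */
        syscall(SYS_futex, &lock_word, FUTEX_WAIT, 1, NULL, NULL, 0);
        expected = 0;
    }
}

void unlock(void)
{
    atomic_store(&lock_word, 0);
    /* Wake one thread from the wait queue associated with this address. */
    syscall(SYS_futex, &lock_word, FUTEX_WAKE, 1, NULL, NULL, 0);
}
```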
Singularity - If a thread managed to lock a mutex, it is assured that no other thread will be able to lock the thread until the original thread releases the lock.
Non-Busy Wait - If a thread attempts to lock a thread that was locked by a second thread, the first thread will be suspended (and will not consume any CPU resources) until the lock is freed by the second thread. At this time, the first thread will wake up and continue execution, having the mutex locked by it.
From: Multi-Threaded Programming With POSIX Threads
Question: I thought threads lock the mutex variables. Threads don't lock other threads?
What do the bold statements above mean? How can one thread lock other thread?
Corrections:
If a thread managed to lock a mutex, it is assured that no other thread will be able to lock the mutex until the original thread releases the lock.
Non-Busy Wait - If a thread attempts to lock a mutex that was locked by a second thread, the first thread will be suspended (and will not consume any CPU resources) until the lock is freed by the second thread. At this time, the first thread will wake up and continue execution, having the mutex locked by it.
It's a good thing you don't take for granted whatever you read on the internet; I also give you a thumbs up for paying attention to what you read.