Mechanics of Condition.Signal() - multithreading

If I had threads like the one below:
void thread() {
    while (true) {
        lock.acquire();
        while (!condition)       // use while, not if: re-check the condition after waking
            cond.wait(lock);     // atomically releases the lock and sleeps
        // blah blah
        cond.signal();
        lock.release();
    }
}
Well, I guess my main question is whether the signalling thread continues running for a while after cond.signal() or immediately gives up the CPU. In some cases I would like it not to release the lock until the woken thread finishes execution, and in other cases it may be beneficial to release the lock immediately after signalling, without waiting for the woken thread to finish.
I understand that if any threads are waiting on the condition, they get woken up on cond.signal(). But what does "woken up" mean - put on the ready queue, or does the scheduler make sure it runs immediately?
And what about the signalling thread: does it go to sleep on the same condition upon signalling, so that some other thread has to wake it up before it releases the lock?

This is in large part dependent on your environment (OS, library, language...) and how the synchronisation primitives are implemented. Since you haven't specified any, I'll just give a general answer.
When putting a thread to sleep, most environments will choose to remove it from the scheduler's ready queue, and the thread will give up its remaining CPU time. When woken up, the thread is simply placed back into the ready queue and will resume execution the next time the scheduler selects it from the queue.
It's also possible that the thread will do some active waiting (spinning) instead of being removed from the scheduler's ready queue. In this case, the thread will resume execution right away. Note that since a thread can still run out of its CPU time slice while spinning, it might have to wait to be rescheduled before waking up. This is a useful strategy if your critical sections are very small and you don't want to pay the scheduling overhead.
A hybrid approach would be to do a small amount of active waiting before removing the thread from the scheduler's ready queue.
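To make that hybrid concrete, here is a minimal sketch in C++ (the spin limit of 1000 and all names are illustrative, not from any particular library; a real implementation would park the thread in the kernel rather than merely yield):

#include <atomic>
#include <thread>

std::atomic_flag locked = ATOMIC_FLAG_INIT;   // starts clear (unlocked)

void hybrid_lock() {
    int spins = 0;
    while (locked.test_and_set(std::memory_order_acquire)) {
        if (++spins > 1000)                // spun long enough...
            std::this_thread::yield();     // ...stop burning cycles and give up the CPU
    }
}

void hybrid_unlock() {
    locked.clear(std::memory_order_release);
}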
As for the signalling thread: unless your environment specifies otherwise (I can't think of a reason, but you never know), I wouldn't expect a call to signal() to block in a way that requires another thread to wake it. signal() might have to synchronize with other threads calling signal(), but those are implementation details and you shouldn't have to do anything about them.
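For illustration, a minimal sketch using C++'s std::condition_variable (the names here are mine, not from the question): notify_one() never blocks its caller; the woken thread simply becomes runnable and must still re-acquire the mutex before wait() returns, so the signaller keeps running either way.

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool ready = false;   // the "condition" from the question

void waiter() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, []{ return ready; });   // sleeps; re-acquires m before returning
    // ... proceed under the lock ...
}

void signaller() {
    {
        std::lock_guard<std::mutex> lk(m);
        ready = true;
    }                    // releasing before notifying lets the waiter run sooner
    cv.notify_one();     // returns immediately; this thread keeps the CPU
}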

Related

How are the epoll(), mutex and semaphore-like system calls implemented behind the scenes?

This question has confused me for a long time. I've googled a lot but still don't quite understand it. My question is this:
System calls such as epoll(), mutex and semaphore all have one thing in common: as soon as something happens (taking a mutex as an example: a thread releases the lock), another thread gets woken up (the thread waiting for the lock can be woken up).
I'm wondering how this mechanism (an event happens in one thread, then another thread is notified about it) is actually implemented behind the scenes. I can only come up with two ways:
Hardware-level interrupt: for example, as soon as another thread releases the lock, an edge-triggered interrupt fires.
Busy waiting: busy waiting at a very low level. For example, as soon as another thread releases the lock, it changes a bit from 0 to 1 so that the threads waiting for the lock can check this bit.
I'm not sure which of my guesses, if any, is correct. I guess reading the Linux source code would help, but it's rather hard for a noob like me. It would be great to get a general idea here, plus some pseudocode.
The Linux kernel has a built-in object class called a "wait queue" (other OSes have similar mechanisms). Wait queues are created for all types of "waitable" resources, so there are quite a few of them around the kernel. When a thread detects that it must wait for a resource, it joins the relevant wait queue. The process goes roughly as follows:
The thread adds its control structure to the linked list associated with the desired wait queue.
The thread calls the scheduler, which marks the calling thread as sleeping, removes it from the "ready to run" list and stashes its context away from the CPU. The scheduler is then free to select any other thread context to load onto the CPU instead.
When the resource becomes available, another thread (be it a user/kernel thread or a task scheduled by an interrupt handler - those usually piggyback on special "work queue" threads) invokes a "wake up" call on the relevant wait queue. "Wake up" means that the scheduler removes one or more thread control structures from the wait queue's linked list and adds those threads to the "ready to run" list, which enables them to be scheduled in due course.
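On Linux, the userspace half of this is visible through the futex(2) system call, which combines both of the asker's guesses: threads first test-and-set an atomic bit in userspace and only fall into the kernel's wait queue on contention. A rough, simplified sketch of a mutex built this way (not production quality; error handling omitted):

#include <atomic>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

// Thin wrapper over the raw syscall (glibc has no futex() function).
static long futex_op(std::atomic<int>* addr, int op, int val) {
    return syscall(SYS_futex, reinterpret_cast<int*>(addr), op, val,
                   nullptr, nullptr, 0);
}

struct SimpleMutex {
    std::atomic<int> state{0};   // 0 = free, 1 = locked

    void lock() {
        int expected = 0;
        // Fast path: flip the bit in userspace, no kernel involved.
        while (!state.compare_exchange_strong(expected, 1)) {
            // Contended: sleep on the kernel wait queue, but only if the
            // word is still 1 (this check is what prevents lost wakeups).
            futex_op(&state, FUTEX_WAIT, 1);
            expected = 0;        // retry the compare-and-swap after waking
        }
    }

    void unlock() {
        state.store(0);                    // flip the bit back...
        futex_op(&state, FUTEX_WAKE, 1);   // ...and wake one sleeper, if any
    }
};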
A bit more technical overview is here:
http://www.makelinux.net/ldd3/chp-6-sect-2

Why do I get a thread context switch every time I synchronize with a mutex?

I have multiple threads updating a single array in tight loops (10 threads on a dual-core processor, at roughly 100,000 updates per second). Each update to the array happens under the protection of a mutex (WaitForSingleObject / ReleaseMutex). I have noticed that no thread ever does two consecutive updates to the array, which means there must be some sort of yield related to the synchronization. This means about 100,000 context switches are happening every second, which seems sub-optimal. Why does this happen?
The problem here is that there is an ordering of all the waiting threads.
Each thread blocked in WaitForSingleObject goes into a queue and is then suspended by the scheduler so that it no longer eats up execution time. When the mutex is freed, one of the waiting threads is resumed by the scheduler. The exact order in which threads are woken from the queue is unspecified, but in many cases it will be simple first-in, first-out.
What happens now is that if the same thread releases the mutex and then does another WaitForSingleObject on the same mutex, it is re-inserted into the queue, and it is quite unlikely to be inserted at the front if other threads are already waiting. This makes sense, as allowing it to skip to the front of the queue could starve the other threads. So the scheduler will probably just suspend it and wake the thread at the front of the queue instead.
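If the goal is fewer context switches, one common mitigation, sketched below, is to amortize the handoff: do a batch of updates per acquisition instead of taking and releasing the mutex around every single update. (The batch size of 64 and the array details here are assumptions, not from the question.)

#include <windows.h>

extern HANDLE hMutex;          // created elsewhere via CreateMutex
extern int shared_array[256];  // size is an arbitrary placeholder

void worker() {
    for (;;) {
        WaitForSingleObject(hMutex, INFINITE);
        for (int i = 0; i < 64; ++i) {
            // ... perform one update to shared_array ...
        }
        ReleaseMutex(hMutex);  // one handoff per 64 updates, not per update
    }
}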
I guess this is because of the multiple processors.
When the first thread (running on the first processor) releases the mutex, the second thread (on the second processor) gets it; then, when the first thread tries to get the mutex again, it cannot. When the mutex is finally released by the second thread, it is taken by the third thread (on the first processor).

Interrupt while placing process on the waiting queue

Suppose there is a process trying to enter its critical region, but since the region is occupied by some other process, the current process has to wait. So, at the moment the process is being added to the semaphore's waiting queue, suppose an interrupt arrives (e.g. the battery is exhausted) - what will happen to that process and to the waiting queue?
I think that since the battery is exhausted, this interrupt will have the highest priority, so the context of the code that was placing the process on the waiting queue will be saved and the interrupt service routine for this interrupt will be executed.
And then control will return to the code that was placing the process on the queue.
Please give some hints/suggestions for this question.
This is very hardware/OS dependent; however, a few thoughts:
As has been mentioned in the comments, a ‘battery finished’ interrupt may be considered a special case, simply because the machine may turn off without taking any action, in which case the processes and the queue will disappear. In general, however, assuming a non-fatal interrupt and an OS that suspends/resumes correctly, I think it’s unlikely there will be any noticeable impact on the execution of either process.
In a multi-core setup, the process may not be immediately suspended. The interrupt could be handled by a different core and neither of the processes you’ve mentioned would be any the wiser.
In a pre-emptive multitasking OS, there's also no guarantee that the process adding itself to the queue would be resumed immediately after the interrupt; the scheduler could decide to activate the process currently in the critical section, or another process entirely. What happens when the process adding itself to the semaphore wait queue resumes would depend on how far through the addition it was, how the queue has been implemented and what state the semaphore was in. It may never get onto the wait queue, because it detects that the other process has already woken up and left the critical section, or it may complete adding itself to the queue and suspend as if nothing had happened…
In a single core/processor machine with a cooperative multitasking OS, I think the scenario you’ve described in your question is quite likely, with the executing process being suspended to handle the interrupt and then resumed afterwards until it finished adding itself to the queue and yielded.
It depends on the implementation, but conceptually the same operating system code should be performing both the addition of the process to the wait queue and the management of interrupts, so your process being moved to the wait queue would instead be treated as having been interrupted out of it.
For Java, see the API for Thread.interrupt()
Interrupts this thread.
Unless the current thread is interrupting itself, which is always permitted, the checkAccess method of this thread is invoked, which may cause a SecurityException to be thrown.
If this thread is blocked in an invocation of the wait(), wait(long), or wait(long, int) methods of the Object class, or of the join(), join(long), join(long, int), sleep(long), or sleep(long, int) methods of this class, then its interrupt status will be cleared and it will receive an InterruptedException.
If this thread is blocked in an I/O operation upon an interruptible channel then the channel will be closed, the thread's interrupt status will be set, and the thread will receive a ClosedByInterruptException.
If this thread is blocked in a Selector then the thread's interrupt status will be set and it will return immediately from the selection operation, possibly with a non-zero value, just as if the selector's wakeup method were invoked.
If none of the previous conditions hold then this thread's interrupt status will be set.
Interrupting a thread that is not alive need not have any effect.

Mutex lock: what does "blocking" mean?

I've been reading up on multithreading and shared resources access and one of the many (for me) new concepts is the mutex lock. What I can't seem to find out is what is actually happening to the thread that finds a "critical section" is locked. It says in many places that the thread gets "blocked", but what does that mean? Is it suspended, and will it resume when the lock is lifted? Or will it try again in the next iteration of the "run loop"?
The reason I ask is that I want system-supplied events (mouse, keyboard, etc.), which (apparently) are delivered on the main thread, to be handled in a very specific part of the run loop of my secondary thread. So whatever event is delivered, I queue it in my own data structure. Obviously, the data structure needs a mutex lock because it's being modified by both threads. The missing puzzle piece is: what happens when an event gets delivered in a function on the main thread, I want to queue it, but the queue is locked? Will the main thread be suspended, or will it just jump over the locked section and go out of scope (losing the event)?
Blocked means execution gets stuck there; generally, the thread is put to sleep by the system and yields the processor to another thread. When a thread is blocked trying to acquire a mutex, execution resumes when the mutex is released, though the thread might block again if another thread grabs the mutex before it can.
There is generally a try-lock operation that grabs the mutex if possible and, if not, returns an error. But you are eventually going to have to move the current event into that queue. Also, if you delay moving the events to the thread where they are handled, the application will become unresponsive regardless.
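As a sketch of that fallback idea with std::mutex (Event and all the names here are placeholders, not from the question): if the shared queue's lock is contended, the main thread buffers the event in a local backlog instead of blocking or dropping it, and flushes the backlog the next time it gets the lock.

#include <deque>
#include <mutex>

struct Event { /* payload */ };

std::mutex queue_mutex;
std::deque<Event> shared_queue;   // read by the worker thread
std::deque<Event> local_backlog;  // touched only by the main thread, no lock needed

void deliver(const Event& e) {
    if (queue_mutex.try_lock()) {
        for (const Event& pending : local_backlog)
            shared_queue.push_back(pending);   // flush earlier events first
        local_backlog.clear();
        shared_queue.push_back(e);
        queue_mutex.unlock();
    } else {
        local_backlog.push_back(e);            // don't block, don't lose the event
    }
}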
A queue is actually one case where you can get away with not using a mutex. For example, Mac OS X (and possibly also iOS) provides the OSAtomicEnqueue() and OSAtomicDequeue() functions (see man atomic or <libkern/OSAtomic.h>) that exploit processor-specific atomic operations to avoid using a lock.
But, why not just process the events on the main thread as part of the main run loop?
The simplest way to think of it is that the blocked thread is put in a wait ("sleeping") state until the mutex is released by the thread holding it. At that point the operating system will "wake up" one of the threads waiting on the mutex and let it acquire it and continue. It's as if the OS simply puts the blocked thread on a shelf until it has the thing it needs to continue. Until the OS takes the thread off the shelf, it's not doing anything. The exact implementation -- which thread gets to go next, whether they all get woken up or they're queued -- will depend on your OS and what language/framework you are using.
Too late to answer, but I may facilitate the understanding. I am speaking more from an implementation perspective than from theoretical texts.
The word "blocking" is a kind of technical homonym. People may use it for sleeping or for mere waiting. The term has to be understood in the context of its usage.
Blocking means waiting - Assume that on an SMP system thread B wants to acquire a spinlock held by some other thread A. One mechanism is to disable preemption and keep spinning on the processor until B gets the lock. Another, probably more efficient, mechanism is to let other threads use the processor if B does not get the lock in a few attempts: thread B is scheduled out (preemption being enabled) and the processor is given to some other thread C. In this case thread B just waits in the scheduler's queue and comes back when its turn arrives. Understand that B is not sleeping, just waiting passively rather than busy-waiting and burning processor cycles. On BSD and Solaris systems there are data structures like turnstiles to implement this situation.
Blocking means sleeping - If thread B had instead made a system call like read(), waiting for data from a network socket, it could not proceed until the data arrived. Therefore some texts casually use the term blocking, as in "... blocked for I/O" or "... in a blocking system call". Actually, thread B is sleeping. There are specific data structures known as sleep queues - much like the luxury waiting rooms at airports :-). The thread will be woken up when the OS detects the availability of data, much like the attendant of the waiting room.
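As a concrete example of the sleeping case (plain POSIX read() on a socket; the function name and descriptor are illustrative):

#include <unistd.h>

void consume(int sock_fd) {
    char buf[4096];
    // With no data available, the kernel parks this thread on a sleep
    // queue inside read(); it burns no CPU until data arrives.
    ssize_t n = read(sock_fd, buf, sizeof buf);
    // ... handle the n bytes read (n <= 0 means EOF or error) ...
}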
Blocking means just that. It is blocked. It will not proceed until able. You don't say which language you're using, but most languages/libraries have lock objects where you can "attempt" to take the lock and then carry on and do something different depending on whether you succeeded or not.
But in, for example, Java synchronized blocks, your thread will stall until it is able to acquire the monitor (mutex, lock). The java.util.concurrent.locks.Lock interface describes lock objects which have more flexibility in terms of lock acquisition.

synchronising threads with mutexes

In Qt, I have a method which contains a mutex lock and unlock. The problem is that when the mutex is unlocked, it sometimes takes a long time before the other thread gets the lock. In other words, it seems the same thread can get the lock back (the method is called in a loop) even though another thread is waiting for it. What can I do about this? One thread is a QThread and the other is the main thread.
You can have the thread that just unlocked the mutex relinquish the processor. On POSIX, you do that by calling pthread_yield(), and on Windows by calling Sleep(0).
That said, there is no guarantee that the thread waiting on the lock will be scheduled before your thread wakes up again.
It shouldn't be possible to release a lock and then get it back if some other thread is already waiting on it.
Check that you are actually releasing the lock when you think you are. Check that the waiting thread is actually waiting (and not spinning in a loop of trylock tests and sleeps - I actually did that once and was very puzzled at first :)).
Or, if the waiting thread really never gets time to even reach the locking code, try QThread::yieldCurrentThread(). This stops the current thread and gives the scheduler a chance to hand execution to somebody else. It might cause unnecessary switching, depending on the tightness of your loop.
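A minimal sketch of that unlock-then-yield pattern (the mutex and loop body are illustrative; yielding is only a hint to the scheduler, not a fairness guarantee):

#include <QMutex>
#include <QThread>

extern QMutex mutex;   // shared with the other thread

void loop_iteration() {
    mutex.lock();
    // ... touch the shared state ...
    mutex.unlock();
    QThread::yieldCurrentThread();   // give the waiting thread a chance to run
}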
If you want to make sure that one thread has priority over the others, an option is to use a QReadWriteLock. It's adapted to a typical scenario where n threads read a value in an infinite loop, with only one thread updating it. I think that's the scenario you described.
QReadWriteLock offers two ways to lock: lockForRead() and lockForWrite(). The threads depending on the value will use the former (lockForRead()), while the thread updating the value (typically via the GUI) will use the latter (lockForWrite()) and will have top priority. You won't need to sleep or yield or anything.
Example code
Let's say you have a QReadWriteLock lock; declared somewhere.
"Reader" thread
forever {
    lock.lockForRead();
    if (condition) {
        do_stuff();
    }
    lock.unlock();
}
"Writer" thread
// external input (e.g. from the user) changes the condition
lock.lockForWrite();   // blocks until the readers release the lock
update_condition();
lock.unlock();
