Interrupt while placing process on the waiting queue - semaphore

Suppose there is a process that is trying to enter the critical region, but since the region is occupied by some other process, the current process has to wait. So, at the moment the process is being added to the semaphore's waiting queue, suppose an interrupt arrives (e.g., the battery runs out). What will happen to that process and to the waiting queue?
I think that since the battery has run out, this interrupt will have the highest priority, so the context of the process that was being placed on the waiting queue would be saved and the interrupt service routine for this interrupt would be executed.
And then control would return to the code that was placing the process on the queue.
Please give some hints/suggestions for this question.

This is very hardware / OS dependent; however, a few thoughts:
As has been mentioned in the comments, a 'battery finished' interrupt may be considered a special case, simply because the machine may turn off without taking any further action, in which case the processes and the queue will disappear. In general, however, assuming a non-fatal interrupt and an OS that suspends / resumes correctly, I think it's unlikely there will be any noticeable impact on the execution of either process.
In a multi-core setup, the process may not be immediately suspended. The interrupt could be handled by a different core and neither of the processes you’ve mentioned would be any the wiser.
In a pre-emptive multitasking OS there's also no guarantee that the process adding itself to the queue would be resumed immediately after the interrupt; the scheduler could decide to activate the process currently in the critical section, or another process entirely. What happens when the process adding itself to the semaphore wait queue resumes would depend on how far through the insertion it was, how the queue is implemented, and what state the semaphore was in. It may be that it never gets onto the wait queue, because it detects that the other process has already woken up and left the critical section; or it may complete adding itself to the queue and suspend as if nothing had happened.
In a single core/processor machine with a cooperative multitasking OS, I think the scenario you've described in your question is quite likely: the executing process is suspended to handle the interrupt and then resumed afterwards, continuing until it finishes adding itself to the queue and yields.
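To make these scenarios concrete: kernels typically make the queue insertion atomic with respect to interrupts on the local CPU by performing it under a spinlock taken with interrupts disabled, so an interrupt can never observe a half-inserted waiter. A rough sketch in Linux-flavoured C, modelled loosely on the kernel's semaphore code (the waiter structure and the wake-up handshake are simplified for illustration):

void down_sketch(struct semaphore *sem)
{
    unsigned long flags;
    struct semaphore_waiter waiter;            /* simplified waiter record */

    spin_lock_irqsave(&sem->lock, flags);      /* IRQs off: insertion is atomic here */
    if (sem->count > 0) {
        sem->count--;                          /* fast path: no need to wait */
        spin_unlock_irqrestore(&sem->lock, flags);
        return;
    }
    waiter.task = current;
    list_add_tail(&waiter.list, &sem->wait_list);
    set_current_state(TASK_UNINTERRUPTIBLE);
    spin_unlock_irqrestore(&sem->lock, flags); /* IRQs back on... */
    schedule();                                /* ...any pending interrupt runs now */
}

An interrupt arriving while the lock is held is simply kept pending until interrupts are re-enabled, which is exactly the "suspended, handled, resumed" sequence described above.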

It depends on the implementation, but conceptually the same operating-system code performs both the addition of the process to the wait queue and the handling of interrupts, so a process being moved to the wait queue would instead be treated as having been interrupted from the wait queue.
For Java, see the API for Thread.interrupt()
Interrupts this thread.
Unless the current thread is interrupting itself, which is always permitted, the checkAccess method of this thread is invoked, which may cause a SecurityException to be thrown.
If this thread is blocked in an invocation of the wait(), wait(long), or wait(long, int) methods of the Object class, or of the join(), join(long), join(long, int), sleep(long), or sleep(long, int) methods of this class, then its interrupt status will be cleared and it will receive an InterruptedException.
If this thread is blocked in an I/O operation upon an interruptible channel then the channel will be closed, the thread's interrupt status will be set, and the thread will receive a ClosedByInterruptException.
If this thread is blocked in a Selector then the thread's interrupt status will be set and it will return immediately from the selection operation, possibly with a non-zero value, just as if the selector's wakeup method were invoked.
If none of the previous conditions hold then this thread's interrupt status will be set.
Interrupting a thread that is not alive need not have any effect.
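For comparison, the closest counterpart at the C/POSIX level is a signal interrupting a blocked system call, which then fails with EINTR (roughly what InterruptedException is to Java). A minimal, runnable illustration; the handler is deliberately installed without SA_RESTART so that read() is not transparently restarted:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void on_sigint(int sig) { (void)sig; }  /* its mere delivery interrupts read() */

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;                 /* note: no SA_RESTART flag */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    char buf[64];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);   /* blocks until input or Ctrl-C */
    if (n < 0 && errno == EINTR)
        puts("read() was interrupted by a signal (EINTR)");
    return 0;
}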

Related

How is a process woken up if it is sleeping in the interruptible state?

The kernel code can explicitly put a process to sleep if it's waiting for some event to occur. Now, if the task is put into the TASK_INTERRUPTIBLE state, it can be woken either by an explicit wake-up call or by receiving a signal.
Let's say another process issues a signal to a process which is in the wait queue and in the TASK_INTERRUPTIBLE state; the signal will put the process into TASK_RUNNING, and it will be handled when the process is next scheduled. Is this correct?
An explicit wake-up call by another process can also be used to wake up sleeping processes. I am wondering how another process could know when the condition became true for the sleeping process to wake up. Suppose a disk I/O is to be completed and so the process is put to sleep. How could another process know that the I/O has completed? Or is this done by kernel threads?
What am I missing?
It is up to the code that entered the interruptible state to detect the interruption and take appropriate action when it wakes. That might involve the code that is currently handling the user operation completing it with a -ERESTARTSYS error that will be intercepted and dealt with before the system call returns to user mode.
The code that has completed some I/O can just issue a "wake up" to the queue it is responsible for without caring whether there is any task on the queue to be woken up, or the exact condition the task is waiting for.
The task that is woken up needs to decide what to do, and that could include repeating the wait if the condition it is waiting for has not been satisfied.
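Concretely, this is the canonical wait-loop pattern that wait_event_interruptible() expands to in Linux (a condensed sketch; assume q is a wait_queue_head_t and condition is the predicate being waited for):

long ret = 0;
DEFINE_WAIT(wait);

for (;;) {
    prepare_to_wait(&q, &wait, TASK_INTERRUPTIBLE);
    if (condition)                     /* woken up and the event really happened */
        break;
    if (signal_pending(current)) {     /* woken up by a signal instead */
        ret = -ERESTARTSYS;            /* syscall layer restarts or returns EINTR */
        break;
    }
    schedule();                        /* give up the CPU until woken */
}
finish_wait(&q, &wait);

The wake-up side is correspondingly simple: the code that completes the I/O just sets the condition and calls wake_up_interruptible(&q), without knowing or caring whether anyone is actually on the queue.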

How does the scheduler know that a thread is blocked waiting for input?

When a thread executing user code is waiting for input, how does the scheduler know to interrupt it, or how does the thread know to call the scheduler, seeing as the average programmer of a simple single-threaded application is unlikely to insert sched_yield() everywhere? Does the compiler insert sched_yield() during optimisation, does the thread just spin until the timer interrupt set by the scheduler fires, or does the user have to call wait() or sleep() explicitly in order for a context switch to occur?
This question is especially relevant if the scheduler is not preemptive, because then the thread has to call the scheduler when it is waiting for input for throughput to be acceptable, but I'm not sure how it does this.
Be careful not to confuse preemption with the ability of a process to sleep. Processes can sleep even with a non-preempting scheduler. This is what happens when a process is waiting for I/O. The process makes a system call such as read() and the device determines no data is available. It then internally puts the process to sleep by updating a data structure used by the scheduler. The scheduler then executes other processes until an interrupt or some other event occurs that wakes the original process. The awoken process then becomes eligible again for scheduling.
On the other hand preemption is the ability of an architecture's scheduler to stop execution of a process without its cooperation. The interruption can occur anywhere in the program's instruction stream. Control returns to the scheduler which can then execute other processes and return to the interrupted (preempted) process later. Most schedulers allocate time slices where a process is allowed to run for up to a predetermined amount of time, after which it is preempted if higher-priority processes need time slices.
Unless you're writing drivers or kernel code, you don't need to worry about the underlying mechanisms too much. When writing user-space applications the key concepts are (1) that some system calls may block which means your process is put to sleep until an event occurs, and (2) on preemptible systems (all mainstream modern operating systems) your program may be preempted at any time so that other processes can run.
* Note that on some platforms, such as Linux, a thread is really just another process which shares its virtual address space with other processes. Processes and threads are therefore treated exactly the same by the scheduler.
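A minimal user-space demonstration of point (1): the child below sleeps inside read(), consuming no CPU and calling no sched_yield(); the parent's write() is the event that makes it runnable again.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return 1;

    if (fork() == 0) {                              /* child: the reader */
        char buf[16];
        ssize_t n = read(fds[0], buf, sizeof buf);  /* blocks: child is asleep */
        printf("child woke up with %zd bytes\n", n);
        return 0;
    }

    sleep(1);                          /* child stays blocked this whole time */
    write(fds[1], "hello", 5);         /* the wake-up: child becomes runnable */
    return 0;
}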
It is not clear to me whether your question is about theory or practice. In practice, in every modern operating system, I/O operations are privileged, meaning that in order for a user process or thread to access files, devices and so on, it must issue a system call.
The kernel then has the opportunity to do whatever it considers appropriate. For example, it can check whether the I/O operation would block and, if so, switch to another process (i.e. "call" the scheduler) after issuing the operation.
Note that this mechanism can work even when there is no timer interrupt handled by the kernel. In general, though, it will depend on your system. For example, in an embedded system where no OS exists (or only a minimal one), it could be entirely the responsibility of the user's code to invoke the scheduler before issuing a blocking operation.
It is the kernel that can be preemptive, not the scheduler.
First, sched_yield() and wait() are forms of voluntary preemption, where the process itself gives up the CPU even if the kernel is non-preemptive.
If the kernel has the ability to switch to another process when the time quantum has expired or a higher-priority process becomes runnable, then we are talking about involuntary preemption, i.e. a preemptive kernel, and it can happen at the different places explained below.
The difference is that with sched_yield() the process stays in the runnable TASK_RUNNING state but just goes to the end of the run queue for its static priority. The process must wait to get the CPU again.
On the other hand, wait() puts the process into a sleeping TASK_(UN)INTERRUPTIBLE state on a wait queue, calls schedule() and waits for an event to occur. When the event occurs, the process is moved to the run queue again. But that doesn't mean it will get the CPU immediately.
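In user-space C the two calls look like this; only the second one leaves the run queue (minimal demo, error handling omitted):

#include <sched.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    if (fork() == 0)       /* child exits immediately, giving the parent */
        exit(0);           /* something to wait for */

    sched_yield();         /* stays TASK_RUNNING: just goes to the back of
                              the run queue for its priority */

    int status;
    wait(&status);         /* sleeps on a wait queue until the child exits */
    return 0;
}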
Here is an explanation of when schedule() can be called after a process is woken up:
Wakeups don't really cause entry into schedule(). They add a task to the run-queue and that's it.
If the new task added to the run-queue preempts the current task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets called on the nearest possible occasion:
If the kernel is preemptible (CONFIG_PREEMPT=y):
- in syscall or exception context, at the next outmost preempt_enable() (this might be as soon as the wake_up()'s spin_unlock()!);
- in IRQ context, on return from the interrupt handler to preemptible context.
If the kernel is not preemptible (CONFIG_PREEMPT is not set), then at the next:
- cond_resched() call;
- explicit schedule() call;
- return from syscall or exception to user space;
- return from interrupt handler to user space.
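For the non-preemptible case, those cond_resched() points look like this in practice. The following is an illustrative sketch of a long-running kernel-style loop, not code from any real driver; process_item() is a hypothetical unit of work:

for (i = 0; i < nr_items; i++) {
    process_item(i);     /* hypothetical work that may take a while */
    cond_resched();      /* if a wakeup set TIF_NEED_RESCHED, schedule() runs here */
}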

Query Regarding Non-Preemptive Threads

I was reading about non-preemptive threads and I found a slide from Princeton University showing a thread lifecycle diagram (Source Link: http://www.cs.princeton.edu/courses/archive/fall11/cos318/lectures/L5_ThreadsImplementation.pdf)
From what I understood, a thread to be executed is first put into a ready queue. When it pops out of the queue, it is in the running state. If it wants to invoke another thread, it calls the yield function, which stores the current state of the thread and inserts it at the tail of the queue, and the thread at the front of the queue is executed next.
What happens if the thread is blocked (i.e. it is waiting for some resource)? I thought that with non-preemptive threads it would wait for the resource and then carry on executing.
But from the diagram it looks as though it goes into the blocked state and is then put into the ready queue? Why is that?
As said in the comments, non-preemptive means that another thread cannot interrupt (preempt) a running thread, not that the running thread won't yield when it has to wait for something.
When a thread is waiting for data from memory (for example), it is said to be in the blocked state: its context is saved and another thread takes its place on the computing resource (CPU core). When the data is available in the CPU's cache memory, the first thread is said to be ready to resume its execution (and it will, as soon as it is next to be executed and the currently executing thread yields the computing resource).
This enables overlapping data movement with thread execution, thus saving time by optimizing resource usage.
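One way to picture the Blocked-to-Ready transition in the slide is the following hypothetical cooperative scheduler; every name here is invented for illustration:

void block_on(resource_t *r)
{
    current->state = BLOCKED;           /* leaves the ready queue entirely */
    waiters_push(&r->waiters, current);
    schedule_next();                    /* voluntary yield: run another READY thread */
}

void resource_available(resource_t *r)  /* called when the awaited data arrives */
{
    thread_t *t = waiters_pop(&r->waiters);
    t->state = READY;
    ready_enqueue(t);                   /* back on the ready queue; it runs when its
                                           turn comes, not immediately */
}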

Transition from Wait queue to run queue

When a process calls wait_event_interruptible, the process goes to sleep (assuming the condition is not yet satisfied and there are no pending signals) and the scheduler moves the process from the run queue to the wait queue.
When there is a wake_up call, how exactly is the process removed from the wait queue and placed back on the run queue, and who does it?
The "wake_up call" is a system call done by another thread/process/task (some kernels put state on thread rather than process) with the thread/process/task to wake up in parameter. Because the system call is an interrupt (int $0x80 on Linux, until it was recently replaced by sysenter which is basically the same), thus entering the kernel, the scheduler will be called and the requested thred/process/task will be poped out the blocked queue and pushed into the ready queue. If this thread/process/task has the highest priority, it will eventually run when returning from the interrupt, therefore going directly from the blocked state to the running state.

Mechanics of Condition.Signal()

If I had threads as below:
void thread() {
    while (true) {
        lock.acquire();
        while (condition not true)   // a loop, not an `if`: wakeups can be spurious
            cond.wait();
        // blah blah
        cond.signal();
        lock.release();
    }
}
Well, I guess my main question is whether the signalling thread continues running for a while after cond.signal() or immediately gives up the CPU. In some cases I would like it not to release the lock before the woken-up thread finishes execution, and in other cases it may be beneficial to release the lock immediately after signalling, without waiting for the woken thread to finish.
I understand that if there are any threads waiting on the condition then they get woken up by cond.signal(). But what does "woken up" mean: is the thread put on the ready queue, or does the scheduler make sure that it runs immediately?
And what about the signalling thread: does it go to sleep on the same condition upon signalling? If so, does some other thread have to wake it up to make it release the lock?
This is in large part dependent on your environment (OS, library, language...) and how the synchronisation primitives are implemented. Since you haven't specified any I'll just give a general answer.
When putting a thread to sleep, most environments will remove it from the scheduler's ready queue, and the thread will give up its remaining CPU time. When woken up, the thread is simply placed back into the ready queue and will resume execution the next time the scheduler selects it from the queue.
It's also possible for the thread to do some active waiting (spinning) instead of being removed from the scheduler's ready queue. In this case, the thread will resume execution right away. Note that since a thread can still run out of CPU time while spinning, it might have to wait to be rescheduled before waking up. This is a useful strategy if your critical sections are very small and you don't want to pay the scheduling overheads.
A hybrid approach would be to do a small amount of active waiting before removing the thread from the scheduler's ready queue.
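A sketch of that hybrid strategy, with all names hypothetical:

void hybrid_wait(lock_t *l)
{
    for (int i = 0; i < SPIN_LIMIT; i++) {  /* phase 1: brief active wait */
        if (try_acquire(l))
            return;                         /* acquired without ever sleeping */
        cpu_pause();                        /* spin-wait hint to the CPU */
    }
    blocking_acquire(l);                    /* phase 2: leave the ready queue until
                                               the holder releases and wakes us */
}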
As for the signalling thread, unless specified explicitly by your environment (I can't think of any reasons why it would be, but you never know), I wouldn't expect a call to signal() to block in a way that requires another thread to wake it up. signal() might have to synchronize itself with other threads calling signal(), but those are implementation details and you shouldn't have to do anything about them.
