Keyboard process interaction - I/O

I was wondering what the step-by-step sequence of a keyboard interrupt is. I know that every time a key is pressed an interrupt is triggered and the interrupt is queued. If there's no higher-priority process, the CPU will be allocated to the interrupting process. My question is: where is the interrupting process located in the diagram below (before and after the interruption)? Is it in the ready queue waiting for something to happen, or somewhere else?
This question was partially answered here https://stackoverflow.com/a/719676.

Related

How does the kernel track which processes receive data from an interrupt?

In a preemptive kernel (say Linux), say process A makes a call to getc on stdin, so it's blocked waiting for a character. I feel like I have a fundamental misunderstanding of how the kernel then knows to wake process A at this point and deliver the data after it's received.
My understanding is that this process can then be put into a suspended state while the scheduler schedules other processes/threads to run, or it gets preempted. When the keypress happens, through polling/interrupts depending on the implementation, the OS runs a device driver that decodes the key that was pressed. However, it's possible (and likely) that my process A isn't currently running. At this point, I'm confused about how my process that was blocked waiting on I/O is now queued to run again, and especially how the kernel knows which process is waiting for what. It seems like the device drivers hold some form of wait queue.
Similarly, and I'm not sure if this is exactly related to the above, but if my browser window, for example, is in focus, it seems to receive key presses but not other windows. Does every window/process have the ability to "listen" for keyboard events even if they're not in focus, but just don't for user experience sake?
So I'm curious how kernels (or at least some of them) keep track of which processes are waiting on which events, and when those events come in, how they determine which processes to schedule to run?
The events that processes wait on are abstract software events, such as a particular queue becoming non-empty, rather than concrete hardware events, such as interrupt 4635 occurring.
Some configuration (perhaps guided by a hardware description like a device tree) identifies interrupt 4635 as being a signal from a given serial device at a given address. The serial device driver configures itself so it can access the device registers of this serial port, and attaches its interrupt handler to the given interrupt identifier (4635).
Once configured, when an interrupt from the serial device is raised, the lowest level of the kernel invokes this serial device's interrupt handler. In turn, when the handler sees a new character arriving, it places it in the input queue of that device. As it enqueues the character, it may notice that some process(es) are waiting for that queue to be non-empty, and cause them to be run.
That approximately describes the situation using condition variables as the signalling mechanism between interrupts and processes, as was established in UNIX-y kernels 44 years ago. Other approaches involve releasing a semaphore on each character in the queue; or replying with messages for each character. There are many forms of synchronization that can be used.
Common to all such mechanisms is that the caller chooses to suspend itself to wait for I/O to complete, and does so by associating its suspension with the instance of the object from which it is expecting input.
What happens next can vary; typically the waiting process, which is now running, reattempts to remove a character from the input queue. It is possible some other process got to it first, in which case it merely goes back to waiting for the queue to become non-empty.
So, the OS doesn't explicitly route the character from the device to the application; a series of implicit and indirect steps does.
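To make the enqueue-and-wake pattern above concrete, here is a minimal sketch in the style of a Linux character driver. Everything named my_* and read_char_from_device_register() is hypothetical; the wait-queue and kfifo primitives are real Linux ones, but initialization, locking, and error handling are omitted:

#include <linux/wait.h>
#include <linux/interrupt.h>
#include <linux/kfifo.h>
#include <linux/errno.h>

char read_char_from_device_register(void);  /* hypothetical */

/* Hypothetical per-device state: an input queue plus a wait queue of
   processes blocked waiting for that queue to become non-empty. */
struct my_dev {
    struct kfifo      fifo;   /* buffered input characters */
    wait_queue_head_t readq;  /* processes waiting for input */
};

/* Interrupt handler: runs in interrupt context, so it must not sleep. */
static irqreturn_t my_irq_handler(int irq, void *data)
{
    struct my_dev *dev = data;
    char c = read_char_from_device_register();

    kfifo_in(&dev->fifo, &c, 1);         /* enqueue the character... */
    wake_up_interruptible(&dev->readq);  /* ...and wake any waiting readers */
    return IRQ_HANDLED;
}

/* read() path: runs in process context, so it may sleep. */
static ssize_t my_read(struct my_dev *dev, char *buf)
{
    char c;

    /* Try to dequeue; if some other process got the character first, go
       back to waiting for the queue to become non-empty, exactly as the
       answer above describes. */
    while (kfifo_out(&dev->fifo, &c, 1) == 0) {
        if (wait_event_interruptible(dev->readq, !kfifo_is_empty(&dev->fifo)))
            return -ERESTARTSYS;  /* interrupted by a signal */
    }
    *buf = c;
    return 1;
}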

How is interrupt context "restored" when an interrupt handler is interrupted by another interrupt?

I read some related posts:
(1) From Robert Love: http://permalink.gmane.org/gmane.linux.kernel.kernelnewbies/1791
You cannot sleep in an interrupt handler because interrupts do not have a backing process context, and thus there is nothing to reschedule back into. In other words, interrupt handlers are not associated with a task, so there is nothing to "put to sleep" and (more importantly) "nothing to wake up". They must run atomically.
(2) From Which context are softirq and tasklet in?
If sleep is allowed, then Linux cannot schedule them, which finally causes a kernel panic with a dequeue_task error. The interrupt context does not even have a data structure describing the register info, so they can never be scheduled by Linux. If it were designed to have that structure and could be scheduled, the performance of interrupt handling would be affected.
So, in my understanding, interrupt handlers run in interrupt context and cannot sleep; that is to say, they cannot perform a context switch the way normal processes, which have a backing context, do.
But an interrupt handler can be interrupted by another interrupt. And when the second interrupt handler finishes its work, control flow jumps back to the first interrupt handler.
How is this "restoring" implemented without normal context switch? Is it like normal function calls with all the registers and other related stuff stored in a certain stack?
The short answer is that an interrupt handler, if it can be interrupted by an interrupt, is interrupted precisely the same way anything else is interrupted by an interrupt.
Say process X is running. If process X is interrupted, then the interrupt handler runs. To the extent there is a context, it's still process X, though it's now running interrupt code in the kernel (think of the state as X->interrupt if you like). If another interrupt occurs, then the interrupt is interrupted, but there is still no special process context. The state is now X->first_interrupt->second_interrupt. When the second interrupt finishes, the first interrupt will resume just as X will resume when the first interrupt finishes. Still, the only process context is process X.
You can describe these as context switches, but they aren't like process context switches. They're more analogous to entering and exiting the kernel -- the process context stays the same but the execution level and unit of code can change.
The interrupt routine will store some CPU state and registers before entering the real interrupt handler, and will restore this information before returning to the interrupted task. Normally, this kind of storing and restoring is not called a context switch, as the context of the interrupted process is not changed.
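As a rough illustration of what that saved state looks like, here is a purely conceptual C sketch; real layouts are architecture-specific (on Linux, see struct pt_regs), and all names below are invented:

/* Conceptual only: the frame a low-level interrupt entry stub pushes onto
   the current kernel stack before calling the real handler. */
struct trap_frame {
    unsigned long gp_regs[16];  /* general-purpose registers of the
                                   interrupted code */
    unsigned long ip;           /* instruction pointer to resume at */
    unsigned long flags;        /* CPU flags, incl. interrupt-enable bit */
};

void dispatch_to_handler(struct trap_frame *frame);  /* hypothetical */

/* Each interrupt, nested or not, gets its own frame on the same kernel
   stack, so unwinding works like ordinary function returns: no process
   context switch is involved. */
void interrupt_entry(struct trap_frame *frame)
{
    dispatch_to_handler(frame);
    /* The entry stub then restores the registers from *frame and executes
       a return-from-interrupt instruction, resuming whatever was running:
       process code or an outer interrupt handler. */
}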
As of 2020, interrupts (hard IRQs here) in Linux do not nest on a local CPU in general. This is mentioned at least twice by groups/maintainers actively contributing to the Linux kernel:
From NAPI updates written by Jakub Kicinski in 2020:
…Because normal interrupts don't nest in Linux, the system can't service any new interrupt while it's already processing one.
And from Bootlin in 2022:
…Interrupt handlers are run with all interrupts disabled on the local CPU…
So this question is probably less relevant nowadays, at least for the Linux kernel.

Threads: When one thread is running, can you interact with the other?

So I'm learning about threads at the moment and I'm wondering how some things are handled. For example, say I have a program where one thread listens for input and another performs some calculation on a single processor. When the calculation thread is running, what happens if the user should press a button intended for the input thread? Won't the input get ignored by the input thread until it is switched to that specific thread?
It depends a good deal on how the input mechanism is implemented. One easy-but-very-inelegant way to implement I/O is continuous polling... in that scenario, the input thread might sit in a loop, reading a hardware register over and over again, and when the value in the register changes from 0 to 1, the input thread would know that the button is pressed:
void inputThread()
{
    while (1)
    {
        if (some_register_indicates_the_button_is_pressed()) react();
    }
}
The problem with this method is that it's horribly inefficient -- the input thread is using billions of CPU cycles just checking the register over and over again. In a multithreaded system running this code, the thread scheduler would switch the CPU between the busy-waiting input thread and the calculation thread every quantum (e.g. once every 10 milliseconds), so the input thread would use half of the CPU cycles and the calculation thread would use the other half. In this system, if the input thread was running at the instant the user pressed the button, the input would be detected almost instantaneously, but if the calculation thread was running, the input wouldn't be detected until the next time the input thread got to run, so there might be as much as a 10 ms delay. (Worse, if the user released the button too soon, the input thread might never notice it was pressed at all.)
An improvement over continuous polling is scheduled polling. It works the same as above, except that instead of the input thread just polling in a loop, it polls once, then sleeps for a little while, then polls again:
void inputThread()
{
    while (1)
    {
        if (some_register_indicates_the_button_is_pressed()) react();
        usleep(30000); // sleep for 30 milliseconds (usleep takes microseconds)
    }
}
This is much less inefficient than the first case, since every time usleep() is called, the thread scheduler puts the input thread to sleep and the CPU is made immediately available for any other threads to use. usleep() also sets a hardware timer, and when that hardware timer goes off (30 milliseconds later) it raises an interrupt. The interrupt causes the CPU to leave off whatever it was doing and run the thread-scheduling code again, and the thread-scheduling code will (in most cases) realize that it's time for usleep() to return, and wake up the input thread so it can do another iteration of its loop. This still isn't perfect: the input thread is still using a small amount of CPU on an ongoing basis -- not much, but if you run many instances of this it starts to add up. Also, the problem of the thread being asleep the whole time the button is held down is still there, and potentially even more likely.
Which leads us to interrupt-driven I/O. In this model, the input thread doesn't poll at all; instead it tells the OS to notify it when the button is pressed:
void inputThread()
{
    while (1)
    {
        sleep_until_button_is_pressed();
        react();
    }
}
The OS's notification facility, in turn, has to set things up so that the OS is notified when the button is pressed, so that the OS can wake up and notify the input thread. The OS does this by telling the button's control hardware to generate an interrupt when the button is pressed; once that interrupt goes off, it works much like the timer interrupt in the previous example; the CPU runs the thread scheduler code, which sees that it's time to wake up the input thread, and lets the input thread run. This mechanism has very nice properties: (1) the input thread gets woken up ASAP when the button is pressed (there's no waiting around for the calculation thread to finish its quantum first), and (2) the input thread doesn't eat up any CPU cycles at all, except when the button is pushed. Because of these advantages, it's this sort of mechanism that is used in modern computers for any non-trivial I/O.
Note that on a modern PC or Mac, there's much more going on than just two threads and a hardware button; e.g. there are dozens of hardware devices (keyboard, mouse, video card, hard drive, network card, sound card, etc) and dozens of programs running at once, and it's the operating system's job to mediate between them all as necessary. Despite all that, the general principles are still the same; let's say that in your example the button the user clicked wasn't a physical button but an on-screen GUI button. In that case, something like the following sequence of events would occur:
1. The user's finger presses the left mouse button down.
2. The mouse's internal hardware sends a mouse-button-pressed message over the USB cable to the computer's USB controller.
3. The computer's USB controller generates an interrupt.
4. The interrupt causes the CPU to break out of the calculation thread's code and run the OS's scheduler routine.
5. The thread scheduler sees that the USB interrupt line indicates a USB event is ready, and responds by running the USB driver's interrupt handler code.
6. The USB driver's interrupt handler code reads in the event, sees that it is a mouse-button-pressed event, and passes it along to the window manager.
7. The window manager knows which window has the focus, so it knows which program to forward the mouse-button-pressed event to.
8. The window manager tells the OS to wake up the input thread associated with that window.
9. Your input thread wakes up and calls react().
If you're running on a single processor system, then yes.
Short answer: yes, threads always interact. The problems start to appear when they interact in a non-predictable way. Every thread in a process has access to the entire process memory space, so changing memory in one thread may spoil the data for another thread.
Well, there are multiple ways threads can communicate with each other. One of them is having a global variable and using it as a buffer for communication between threads.
As for the button you asked about: there must be a thread running an event loop. Within this thread, input won't be ignored, in my experience.
You can see some of my threads about this topic; in one, I was interested in how to make a three-thread application whose threads communicate through events.
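To make the global-variable idea concrete, and safe against the data races the earlier answer warns about, here is a minimal pthreads sketch; the one-slot "mailbox" design and all names are just for illustration:

#include <pthread.h>
#include <stdio.h>

/* Shared state: a one-slot "mailbox" guarded by a mutex. */
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int mailbox  = 0;
static int has_data = 0;

static void *producer(void *arg)
{
    pthread_mutex_lock(&lock);
    mailbox = 42;                /* "send" a value to the other thread */
    has_data = 1;
    pthread_cond_signal(&ready); /* wake the consumer if it's waiting */
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *consumer(void *arg)
{
    pthread_mutex_lock(&lock);
    while (!has_data)                      /* sleep until data arrives; */
        pthread_cond_wait(&ready, &lock);  /* re-check after each wakeup */
    printf("got %d\n", mailbox);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}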
The thread waiting for user input will be made ready 'immediately'. On most OSes, threads that were waiting on I/O and have become ready are given a temporary priority boost and, even on a single-core CPU, will 'immediately' preempt another thread that was running at the same priority.
So, if a single-core CPU is running a calculation and another, waiting, thread of the same priority gets input, it will probably run straightaway.

Interrupt while placing process on the waiting queue

Suppose there is a process that is trying to enter the critical region, but since it is occupied by some other process, the current process has to wait for it. So, at the time when the process is being added to the waiting queue of the semaphore, suppose an interrupt comes (e.g. battery finished); what will happen to that process and the waiting queue?
I think that since the battery has finished, this interrupt will have the highest priority, so the context of the process which was placing the process on the waiting queue would be saved and the interrupt service routine for this interrupt would be executed.
And then it will return to the process that was placing the process on the queue.
Please give some hints/suggestions for this question.
This is very hardware/OS dependent; however, here are a few thoughts:
As has been mentioned in the comments, a ‘battery finished’ interrupt may be considered as a special case, simply because the machine may turn off without taking any action, in which case the processes + queue will disappear. In general however, assuming a non-fatal interrupt and an OS that suspends / resumes correctly, I think it’s unlikely there will be any noticeable impact to the execution of either process.
In a multi-core setup, the process may not be immediately suspended. The interrupt could be handled by a different core and neither of the processes you’ve mentioned would be any the wiser.
In a pre-emptive multitasking OS there's also no guarantee that the process adding to the queue would be resumed immediately after the interrupt, the scheduler could decide to activate the process currently in the critical section or another process entirely. What would happen when the process adding itself to the semaphore wait queue resumed would depend on how far through adding it was, how the queue has been implemented and what state the semaphore was in. It may be that it never gets on to the wait queue because it detects that the other process has already woken up and left the critical section, or it may be that it completes adding itself to the queue and suspends as if nothing had happened…
In a single core/processor machine with a cooperative multitasking OS, I think the scenario you’ve described in your question is quite likely, with the executing process being suspended to handle the interrupt and then resumed afterwards until it finished adding itself to the queue and yielded.
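For reference, one common way kernels make the enqueue atomic with respect to interrupts is to disable local interrupts while manipulating the wait queue. A hedged sketch using Linux-flavored primitives (spin_lock_irqsave and friends are real; struct sem and enqueue_waiter are invented for illustration):

#include <linux/spinlock.h>
#include <linux/sched.h>
#include <linux/list.h>

/* Hypothetical semaphore; field names are invented. */
struct sem {
    spinlock_t       lock;
    int              count;
    struct list_head waiters;  /* tasks waiting for count > 0 */
};

void enqueue_waiter(struct sem *s, struct task_struct *t);  /* hypothetical */

void sem_down(struct sem *s)
{
    unsigned long flags;

    /* Take the lock AND disable interrupts on this CPU, so no interrupt
       handler can run here and observe a half-updated wait queue. */
    spin_lock_irqsave(&s->lock, flags);
    if (s->count > 0) {
        s->count--;                           /* acquired, no need to wait */
        spin_unlock_irqrestore(&s->lock, flags);
        return;
    }
    enqueue_waiter(s, current);               /* add ourselves to the queue */
    set_current_state(TASK_UNINTERRUPTIBLE);  /* mark as sleeping while the
                                                 lock still protects us... */
    spin_unlock_irqrestore(&s->lock, flags);  /* ...then re-enable IRQs */
    schedule();                               /* sleep until up() wakes us */
}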
It depends on the implementation, but conceptually the same operating-system code should be performing both the addition of the process to the wait queue and the management of the interrupts, so your process being moved to the wait queue would instead be treated as interrupted from the wait queue.
For Java, see the API for Thread.interrupt()
Interrupts this thread.
Unless the current thread is interrupting itself, which is always permitted, the checkAccess method of this thread is invoked, which may cause a SecurityException to be thrown.
If this thread is blocked in an invocation of the wait(), wait(long), or wait(long, int) methods of the Object class, or of the join(), join(long), join(long, int), sleep(long), or sleep(long, int) methods of this class, then its interrupt status will be cleared and it will receive an InterruptedException.
If this thread is blocked in an I/O operation upon an interruptible channel then the channel will be closed, the thread's interrupt status will be set, and the thread will receive a ClosedByInterruptException.
If this thread is blocked in a Selector then the thread's interrupt status will be set and it will return immediately from the selection operation, possibly with a non-zero value, just as if the selector's wakeup method were invoked.
If none of the previous conditions hold then this thread's interrupt status will be set.
Interrupting a thread that is not alive need not have any effect.

How does a process executing in kernel space come to know of the occurrence of unexpected (asynchronous) events?

I have two questions:
1) Kernel-space process: When a process is executing in kernel mode, it does not receive any signals.
Instead, a process puts itself on a wait queue when it expects the completion of some event, such as the completion of an I/O event. The process does not do anything at this stage. Once the event is triggered (does the kernel set a trap? Correct me if wrong), the process is finally woken up and resumes execution.
The above is the case for expected (synchronous) events.
But I wish to understand how a process executing in kernel mode comes to know of the occurrence of asynchronous events.
2) User-space process: Does the kernel always set a trap to notify a process executing in user space that a signal has been received (once the pending-signals mask is set)?
Please answer this question with respect to the implementation in Linux (or its flavors).
