How does interrupt polling perform context switching?

Consider a very old single-core CPU that does not support hardware interrupts, and let's say I want to write a multitasking operating system. Using a hardware timer, one can poll an IRQ line to determine whether the timer has elapsed, and if so switch threads/processes.
However, in order to poll, the kernel itself has to be executing on the CPU. On a CPU that supports hardware interrupts, an ISR is called upon an interrupt and (correct me if I'm wrong) if the interrupt comes from the context-switch timer, the appropriate ISR calls the kernel code that handles context switching.
If a CPU does not support hardware interrupts (again, correct me if I'm wrong), then the kernel has to repeatedly check the IRQ lines itself and call the appropriate handler in kernel space.
But if a user thread is currently executing on this hypothetical processor, the thread has to manually yield execution to the kernel so that the kernel can check, via the appropriate IRQ line, whether a context switch is due according to the timer. This can be done by calling an appropriate kernel function.
Is there a way to implement non-cooperative multithreading on a single-core processor that only supports software interrupts? Are my thoughts correct, or am I missing something?

Well, you are generally correct that the kernel can't do multitasking until it gains control of the CPU. That happens via an interrupt or when user code makes a system call.
The timer interrupt, in particular, is used for preemptive time slicing. It would be pretty hard to find a CPU that doesn't support a timer interrupt, short of one you had to program with punch cards or switches. Interrupts are much older than multiple cores, virtual memory, DMA, or anything fancy at all.
Some SoCs have real-time sub-components that have this sort of restriction (like the BeagleBone), and it might come up if you were implementing a small CPU in an FPGA or something.
Without interrupts, you have to wait for system calls, which basically becomes cooperative multitasking.
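To make the cooperative case concrete, here is a minimal user-space sketch (not taken from the question or the answer) that simulates the idea with ucontext: two tasks run freely, and a switch only happens when a task voluntarily calls a yield function that polls an elapsed-time condition, standing in for a polled hardware timer. The names task_yield(), SLICE_NS and worker() are invented for the illustration.

    /*
     * Minimal user-space sketch of cooperative multitasking with a polled
     * "timer": each task periodically calls task_yield(), which checks an
     * elapsed-time condition and switches contexts only then. If a task
     * never calls task_yield(), it keeps the CPU forever.
     */
    #include <stdio.h>
    #include <time.h>
    #include <ucontext.h>

    #define SLICE_NS 50000000L          /* 50 ms "time slice" */
    #define STACK_SZ (64 * 1024)

    static ucontext_t tasks[2], main_ctx;
    static int current;                  /* index of the running task */
    static struct timespec slice_start;

    static long elapsed_ns(void)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        return (now.tv_sec - slice_start.tv_sec) * 1000000000L
             + (now.tv_nsec - slice_start.tv_nsec);
    }

    /* The "kernel" entry point: it only runs because the task called it. */
    static void task_yield(void)
    {
        if (elapsed_ns() < SLICE_NS)
            return;                      /* slice not used up: keep running */

        int prev = current;
        current ^= 1;                    /* round-robin between the two tasks */
        clock_gettime(CLOCK_MONOTONIC, &slice_start);
        swapcontext(&tasks[prev], &tasks[current]);
    }

    static void worker(void)
    {
        for (long i = 0; ; i++) {
            if ((i & 0xfffff) == 0)
                printf("task %d, iteration %ld\n", current, i);
            task_yield();                /* without this call, we never switch */
        }
    }

    int main(void)
    {
        static char stack[2][STACK_SZ];

        for (int i = 0; i < 2; i++) {
            getcontext(&tasks[i]);
            tasks[i].uc_stack.ss_sp = stack[i];
            tasks[i].uc_stack.ss_size = STACK_SZ;
            tasks[i].uc_link = &main_ctx;
            makecontext(&tasks[i], worker, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &slice_start);
        swapcontext(&main_ctx, &tasks[0]);  /* start task 0; runs until interrupted */
        return 0;
    }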

Related

linux bottom-half preemption

As far as I know there are many mechanisms to implement bottom-halves in Linux:
softirq
tasklet
workqueue
threaded IRQ (request_threaded_irq())
All of these have different characteristics regarding schedulability.
What I cannot get from the literature is their preemption possibility. What kind of tasks can preempt the various different bottom-half implementations?
More specifically, I am interested in threaded IRQs and workqueues. How confident can one be that, once scheduled, a threaded IRQ or a workqueue item is not preempted before completion, i.e. runs in one shot? What types of tasks are able to preempt them?
For example, Linux Kernel Development by Robert Love states that only top halves can preempt softirqs, so I would say that softirqs complete in one shot most of the time (or, if they get preempted, it is only for a very short time).
My goal is to qualitatively assess the time between two operations in the same threaded irq or workqueue. In particular the time between i2c data read and a reading of the system clock.
Thanks.
Workqueues and threaded IRQ handlers run in process context and can be preempted. When they can be preempted actually depends on your kernel configuration (CONFIG_PREEMPT or CONFIG_PREEMPT_VOLUNTARY) and also on the real-time priorities you set on the handling thread.
You cannot assume that your workqueue or your bottom half will not get interrupted. This means that if you share resources with a top half, you have to use proper locking.
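As a concrete illustration of the preemption window being asked about, here is a hedged sketch (not from the answer above) of a threaded IRQ handler that reads an I2C device and then timestamps the read. The device, IRQ wiring and register are hypothetical placeholders. Because the thread function runs in process context, it can be preempted between the two operations; raising the IRQ thread's real-time priority narrows, but does not eliminate, that window.

    /*
     * Hedged sketch: threaded IRQ handler that samples an I2C device and
     * records the time of the read. Names and the register offset are
     * illustrative, not from any particular driver.
     */
    #include <linux/interrupt.h>
    #include <linux/i2c.h>
    #include <linux/ktime.h>

    struct my_dev {
        struct i2c_client *client;
        ktime_t last_sample_time;
    };

    /* Top half: runs in hard-IRQ context and just wakes the handler thread. */
    static irqreturn_t my_hardirq(int irq, void *dev_id)
    {
        return IRQ_WAKE_THREAD;
    }

    /* Bottom half: runs in a dedicated kernel thread (process context).
     * It may be preempted between the I2C read and ktime_get(), so the two
     * operations are close together but not atomic. */
    static irqreturn_t my_thread_fn(int irq, void *dev_id)
    {
        struct my_dev *dev = dev_id;
        int val = i2c_smbus_read_byte_data(dev->client, 0x00 /* hypothetical reg */);

        dev->last_sample_time = ktime_get();
        if (val < 0)
            return IRQ_NONE;
        return IRQ_HANDLED;
    }

    static int my_setup(struct my_dev *dev, int irq)
    {
        /* IRQF_ONESHOT keeps the IRQ line masked until the thread finishes. */
        return request_threaded_irq(irq, my_hardirq, my_thread_fn,
                                    IRQF_ONESHOT, "my_sensor", dev);
    }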

Process scheduling from the processor's point of view

I understand that scheduling is done by the kernel. Suppose a process (P1) in Linux is currently executing on the processor.
Since the current process doesn't know anything about its time slice, and the kernel is not currently executing on the processor, how does the kernel schedule the next process to execute?
Is there some kind of interrupt that tells the processor to switch to executing the kernel, or is there some other mechanism for this purpose?
In brief, it is an interrupt that gives control back to the kernel. The interrupt may occur for any number of reasons.
Most of the time the kernel gets control because of the timer interrupt, but a key-press interrupt might also wake up the kernel.
An interrupt signalling completion of I/O with a peripheral, or virtually anything else that changes the system state, may wake up the kernel.
More about interrupts:
Interrupt handling is divided into a top half and a bottom half. Bottom halves are for deferring work out of interrupt context.
Top half: runs with interrupts disabled, so it should be very fast and relinquish the CPU as soon as possible. It usually:
1) saves the interrupt state flags and disables interrupts (clears the interrupt-enable flag on the processor),
2) communicates with the hardware, stores state information, and delegates the remaining work to the bottom half,
3) restores the interrupt state flags and re-enables interrupts (sets the interrupt-enable flag on the processor).
Bottom half: handles the deferred work (delegated by the top half); it runs with interrupts enabled and hence may take a while to complete.
Two mechanisms are used to implement bottom-half processing.
1) Tasklets
2) Work queues
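A minimal sketch of the split just described, using a workqueue as the bottom half; the IRQ number and names are hypothetical and the hardware-specific steps are left as comments.

    /*
     * Hedged sketch of a top half that does the minimum in interrupt
     * context and defers the slow work to a workqueue (bottom half).
     */
    #include <linux/module.h>
    #include <linux/interrupt.h>
    #include <linux/workqueue.h>

    #define MY_IRQ 42   /* hypothetical IRQ line */

    static struct work_struct my_work;

    /* Bottom half: runs later in process context with interrupts enabled,
     * so it may sleep and may take a while. */
    static void my_work_fn(struct work_struct *work)
    {
        /* slow processing of the data captured by the top half goes here */
    }

    /* Top half: runs in interrupt context, does the minimum and returns. */
    static irqreturn_t my_isr(int irq, void *dev_id)
    {
        /* 1) acknowledge the device / grab volatile hardware state */
        /* 2) delegate the remaining work to the bottom half */
        schedule_work(&my_work);
        return IRQ_HANDLED;
    }

    static int __init my_init(void)
    {
        INIT_WORK(&my_work, my_work_fn);
        return request_irq(MY_IRQ, my_isr, 0, "my_dev", NULL);
    }

    static void __exit my_exit(void)
    {
        free_irq(MY_IRQ, NULL);
        flush_work(&my_work);
    }

    module_init(my_init);
    module_exit(my_exit);
    MODULE_LICENSE("GPL");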
If the timer is the interrupt that switches back to the kernel, is that interrupt a hardware interrupt?
The timer interrupt of interest in this discussion is the hardware timer interrupt.
Inside the kernel, the term "timer interrupt" may mean either (architecture-dependent) hardware timer interrupts or software timers.
Read this for a brief overview.
More about timers
Remember, timers are an advanced topic and difficult to comprehend fully.
Is the interrupt a hardware interrupt? If it is a hardware interrupt, what is the frequency of the timer?
Read Chapter 10. Timers and Time Management
If the interval of the timer is shorter than the time slice, will the kernel give the CPU back to the same process that was running earlier?
It depends upon many factors, for example: the scheduler being used, the load on the system, process priorities, and things like that.
The most popular scheduler, CFS, doesn't really depend upon the notion of a time slice for preemption!
The next suitable process, as picked by CFS, will get the CPU time.
The relation between timer ticks, time slices and context switching is not so straightforward.
Each process has its own (dynamically calculated) time slice. The kernel keeps track of the time slice used by the process.
On SMP, CPU-specific activities such as monitoring the execution time of the currently running process are driven by interrupts raised by the local APIC timer.
The local APIC timer sends an interrupt only to its processor.
However, the default time slice is defined in include/linux/sched/rt.h
Read this.
A few things could happen:
a. The current process (p1) finishes its time slice, and the scheduler then checks whether there is any other process that could be run. If there is no other process, the scheduler puts itself into the idle state. The scheduler will assign p1 to the CPU again if p1 is a CPU-hogging task or if p1 did not leave the CPU voluntarily.
b. Another possibility is that a higher-priority task has jumped in. On every scheduler tick, the scheduler checks whether there is any process that needs the CPU badly and should preempt the current task.
In other words, a process can leave the CPU in two ways: voluntarily or involuntarily. In the first case, the process puts itself to sleep and thereby releases the CPU (case a). In the other case, the process is preempted by a higher-priority task (case b).
(Note: This answer is based on the CFS task scheduler
of the current Linux kernel)
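As a toy model (not actual CFS code) of the bookkeeping described above, the following sketch charges the running task on every tick and raises a reschedule flag when its dynamically assigned slice is used up or a higher-priority task becomes ready. All names are invented for the illustration.

    /*
     * Toy model of per-task time-slice accounting driven by a timer tick.
     * The real kernel's logic (CFS vruntime, priorities, etc.) is far more
     * involved; this only shows the shape of the decision.
     */
    #include <stdbool.h>

    struct task {
        int  priority;        /* larger = more important, for illustration */
        long slice_left_ns;   /* dynamically assigned time slice remaining */
        bool need_resched;
    };

    #define TICK_NS 1000000L  /* 1 ms tick, i.e. HZ = 1000 */

    /* Called from the (hypothetical) timer interrupt handler. */
    void scheduler_tick_model(struct task *curr, const struct task *highest_ready)
    {
        curr->slice_left_ns -= TICK_NS;

        if (curr->slice_left_ns <= 0)
            curr->need_resched = true;      /* case (a): slice used up */

        if (highest_ready && highest_ready->priority > curr->priority)
            curr->need_resched = true;      /* case (b): preemption by a higher-priority task */
    }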

How does the OS scheduler regain control of CPU?

I recently started to learn how the CPU and the operating system works, and I am a bit confused about the operation of a single-CPU machine with an operating system that provides multitasking.
Supposing my machine has a single CPU, this would mean that, at any given time, only one process could be running.
Now, I can only assume that the scheduler used by the operating system to control the access to the precious CPU time is also a process.
Thus, in this machine, either the user process or the scheduling system process is running at any given point in time, but not both.
So here's a question:
Once the scheduler gives up control of the CPU to another process, how can it regain CPU time to run itself again to do its scheduling work? I mean, if any given process currently running does not yield the CPU, how could the scheduler itself ever run again and ensure proper multitasking?
So far, I had been thinking, well, if the user process requests an I/O operation through a system call, then in the system call we could ensure the scheduler is allocated some CPU time again. But I am not even sure if this works in this way.
On the other hand, if the user process in question were inherently CPU-bound, then, from this point of view, it could run forever, never letting other processes, not even the scheduler run again.
Supposing time-sliced scheduling, I have no idea how the scheduler could slice the time for the execution of another process when it is not even running itself.
I would really appreciate any insight or references that you can provide in this regard.
The OS sets up a hardware timer (the programmable interval timer, or PIT) that generates an interrupt every N milliseconds. That interrupt is delivered to the kernel, and user code is interrupted.
It works like any other hardware interrupt. For example, your disk will force a switch to the kernel when it has completed an I/O operation.
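For completeness, here is a hedged bare-metal x86 sketch of "the OS sets up a hardware timer": it programs the legacy PIT's channel 0 to fire IRQ0 at a chosen rate. It assumes a freestanding environment where port I/O is available; pit_init() and the constants are illustrative, and installing the IRQ0 handler in the IDT is left out.

    /*
     * Hedged bare-metal x86 sketch: program the legacy PIT (channel 0) to
     * raise a periodic interrupt. The IRQ0 handler installed elsewhere in
     * the IDT would then call into the scheduler.
     */
    #include <stdint.h>

    #define PIT_BASE_HZ 1193182u   /* PIT input clock, ~1.193182 MHz */
    #define PIT_CMD     0x43
    #define PIT_CH0     0x40

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    void pit_init(uint32_t hz)
    {
        uint16_t divisor = (uint16_t)(PIT_BASE_HZ / hz);

        outb(PIT_CMD, 0x36);                    /* channel 0, lo/hi byte, mode 3 */
        outb(PIT_CH0, divisor & 0xff);          /* low byte of divisor */
        outb(PIT_CH0, (divisor >> 8) & 0xff);   /* high byte of divisor */
    }

    /* e.g. pit_init(1000) requests an interrupt roughly every 1 ms. */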
Google "interrupts". Interrupts are at the centre of multithreaded, preemptive kernels like Linux/Windows. With no interrupts, the OS will never do anything.
While investigating/learning, try to ignore any explanations that mention "timer interrupt", "round-robin" and "time-slice", or "quantum" in the first paragraph – they are dangerously misleading, if not actually wrong.
Interrupts, in OS terms, come in two flavours:
Hardware interrupts – those initiated by an actual hardware signal from a peripheral device. These can happen at (nearly) any time and switch execution from whatever thread might be running to code in a driver.
Software interrupts – those initiated by OS calls from currently running threads.
Either interrupt may request the scheduler to make threads that were waiting ready/running or cause threads that were waiting/running to be preempted.
The most important interrupts are those hardware interrupts from peripheral drivers – those that make threads ready that were waiting on IO from disks, NIC cards, mice, keyboards, USB etc. The overriding reason for using preemptive kernels, and all the problems of locking, synchronization, signaling etc., is that such systems have very good IO performance because hardware peripherals can rapidly make threads ready/running that were waiting for data from that hardware, without any latency resulting from threads that do not yield, or waiting for a periodic timer reschedule.
The hardware timer interrupt that causes periodic scheduling runs is important because many system calls have timeouts in case, say, a response from a peripheral takes longer than it should.
On multicore systems the OS has an interprocessor driver that can cause a hardware interrupt on other cores, allowing the OS to interrupt/schedule/dispatch threads onto multiple cores.
On seriously overloaded boxes, or those running CPU-intensive apps (a small minority), the OS can use the periodic timer interrupts, and the resulting scheduling, to cycle through a set of ready threads that is larger than the number of available cores, and allow each a share of available CPU resources. On most systems this happens rarely and is of little importance.
Every time I see "quantum", "give up the remainder of their time-slice", "round-robin" and similar, I just cringe...
To complement #usr's answer, quoting from Understanding the Linux Kernel:
The schedule( ) Function
schedule( ) implements the scheduler. Its objective is to find a
process in the runqueue list and then assign the CPU to it. It is
invoked, directly or in a lazy way, by several kernel routines.
[...]
Lazy invocation
The scheduler can also be invoked in a lazy way by setting the
need_resched field of current [process] to 1. Since a check on the value of this
field is always made before resuming the execution of a User Mode
process (see the section "Returning from Interrupts and Exceptions" in
Chapter 4), schedule( ) will definitely be invoked at some close
future time.

How does a kernel return from the thread

I am doing some hardcore studying of how computers work so I can get started on my own mini "Hello World" OS.
I was looking at how kernels work, and I was wondering how the kernel makes the current thread return control to the kernel (so it can switch to another thread) even though the kernel isn't running and the thread contains no instruction to do so.
Does it use some kind of CPU interrupt that goes back to the kernel after a few nanoseconds?
Does it use some kind of CPU interrupt that goes back to the kernel after a few nanoseconds?
It is during timer interrupts and (blocking) system calls that the kernel decides whether to keep executing the currently active thread(s) or switch to another thread. The timer interrupt handler updates resource usages, such as consumed system and user time, for the currently running process, and calls the scheduler_tick() function, which decides whether a process/thread needs to be preempted.
See "Preemption and Context Switching" on page 62 of Linux Kernel Development book.
The kernel, however, must know when to call schedule(). If it called schedule() only when code explicitly did so, user-space programs could run indefinitely. Instead, the kernel provides the need_resched flag to signify whether a reschedule should be performed (see Table 4.1). This flag is set by scheduler_tick() when a process should be preempted, and by try_to_wake_up() when a process that has a higher priority than the currently running process is awakened. The kernel checks the flag, sees that it is set, and calls schedule() to switch to a new process. The flag is a message to the kernel that the scheduler should be invoked as soon as possible because another process deserves to run.
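A simplified model (not the real architecture-specific exit code) of the pattern the book describes: on the way back from an interrupt or system call, the kernel checks need_resched and, if it is set, calls schedule() before resuming the interrupted thread. Everything here besides the names need_resched and schedule() is invented for the illustration.

    /*
     * Toy model of the interrupt/syscall exit path: if the reschedule flag
     * was set (e.g. by a timer tick), switch tasks before returning.
     */
    #include <stdbool.h>
    #include <stdio.h>

    static bool need_resched;   /* set by scheduler_tick() / try_to_wake_up() */

    static void schedule(void)
    {
        /* pick the next task from the runqueue and context-switch to it */
        printf("schedule(): switching tasks\n");
    }

    static void return_to_user_mode(void)
    {
        while (need_resched) {      /* flag set => another task deserves the CPU */
            need_resched = false;
            schedule();
        }
        /* restore user registers and return to user space */
    }

    int main(void)
    {
        need_resched = true;        /* pretend scheduler_tick() set the flag */
        return_to_user_mode();
        return 0;
    }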
Does it use some kind of CPU interrupt
Yes! Modern preemptive kernels are absolutely dependent upon interrupts from hardware to deliver good I/O performance. Keyboard, mouse, disk, NIC, USB, etc. drivers are all entered from interrupts and can make threads that are waiting on them ready/running when required (e.g., when data is available).
Threads can also change state as a result of making an OS call that changes the caller's own state or that of another thread.
The interrupt from the hardware timer is one of many interrupt sources and is only special in that many system operations have timeouts that are signaled by this interrupt. Other than that, the timer interrupt just causes a reschedule which, in most cases, changes nothing re. the ready/running state of threads. If the machine is grossly CPU-overloaded to the point where there are more ready threads than there are cores, there is a side-effect of the timer interrupt that causes CPU time to be shared amongst the ready threads.
Do not fixate on the timer interrupt—the other driver interrupts are absolutely essential. It is not impossible to build a functional preemptive multithreaded kernel with no timer interrupt at all.

Does Linux drop into the kernel on all cores?

For a multi-core computer running Linux 2.6.x, what happens when a thread makes a system call? Does it drop into the kernel only on the core that the thread is running on, or does it drop into the kernel on all cores? (Sorry if this is a newbie question.)
Is this behaviour (whichever is the correct one) the same when receiving interrupts in general? If not, what are the differences?
Only the thread that makes the syscall enters the kernel. All scheduling in Linux is done at thread granularity. As for interrupts: they are routed to one core, i.e. only one processor is interrupted for each given hardware event. Interrupts can also be manually assigned to specific cores. This is done with a mask in /proc/irq/IRQ-NUMBER/smp_affinity. You can see which CPUs receive which hardware interrupts in /proc/interrupts.
Only one core handles a system call, and only one core handles an interrupt.
I don't have any references off hand for exactly how interrupts are routed - perhaps Intel's System Programming Guide would be helpful here.
But, imagine if all cores were interrupted by every system call or interrupt. Linux is designed to scale to many cores. This would kill that scalability - on a massive server every disk I/O, timer interrupt, etc., would effectively stall every single core in the system, preventing them from doing useful work.

Resources