How does a kernel come to know that the CPU is idle? - linux

I was studying Operating Systems and am stuck on a doubt: when a currently running process on the processor requests some I/O, the CPU becomes idle and the scheduler then schedules another process to execute on the CPU. How does the kernel come to know that the CPU has become idle? Is there some kind of hardware interrupt sent by the processor?

The OS 'knows' that a CPU needs to become idle when it performs a scheduling run and finds fewer ready threads than cores.
If the scheduler runs and has only two ready threads that can use the CPU, but has four actual cores available, then it will direct the two 'surplus' cores to an 'idle' thread that is a loop around a 'HLT', or similar instruction, which causes the core to stop fetching and executing instructions until an interrupt is received.
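For illustration, here is a minimal C sketch of such a per-core idle loop (not the actual Linux code; need_resched() and schedule() below are stand-ins for the kernel's real primitives):

extern int  need_resched(void);   /* stand-in: did an interrupt make a thread ready? */
extern void schedule(void);       /* stand-in: pick a ready thread and switch to it */

static void idle_loop(void)
{
    for (;;) {
        while (!need_resched())
            __asm__ volatile ("sti; hlt");  /* x86: enable interrupts, then halt until one arrives */
        schedule();                         /* an interrupt made something runnable - hand the core over */
    }
}

The point is that "the CPU is idle" is not something reported by an interrupt; it falls out of the scheduling run itself, and the HLT simply parks the core until the next interrupt arrives.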

In my opinion, the kernel is always running on the CPU, and the kernel knows which process or interrupt handler to schedule.

Related

Is the scheduler built into the kernel a program or a process?

I looked up the CPU scheduler source code built into the kernel.
https://github.com/torvalds/linux/tree/master/kernel/sched
But I have a question.
There are mixed opinions about the CPU scheduler on the Internet.
I saw one opinion that the CPU scheduler is a process.
Question: If so, when running ps -ef on Linux, the scheduler process should be visible, but it was difficult to find the PID and name of any scheduler process.
I couldn't find a PID for a CPU scheduler process on the internet either. On some systems the PID 0 swapper process is called sched, but in Linux PID 0 functions as the idle process.
I saw another opinion that the CPU scheduler is not a process:
The CPU scheduler is passive source code built into the kernel, and user processes frequently enter the kernel and run that code.
Question: How does a user process execute the kernel's scheduler source code on its own?
What if you wrote a user program that never makes a system call that would use the kernel's scheduler?
How does the user process end up running the scheduler in the kernel without such code?
You have two similar questions ("The opinion that the scheduler built into the kernel is the program and the opinion that it is the process" and "I want to know how to implement the cpu scheduling process in Linux operating system"), so I'll answer both of them here.
The answer is that it doesn't work that way at all. The scheduler is not called by user-mode processes via system calls; the scheduler isn't a system call. There are hardware timers that are programmed to raise an interrupt after some time has elapsed. Timers are accessed through registers mapped into the physical address space, often called memory-mapped I/O (MMIO). You write to an address specified by the ACPI tables (https://wiki.osdev.org/ACPI) and that lets you control chips on the CPU package or external PCI devices (PCI is everything nowadays).
When the timer reaches 0, it triggers an interrupt. Interrupts are raised by hardware (the CPU). The CPU therefore includes a special mechanism that lets the OS specify where execution will jump on an interrupt (https://wiki.osdev.org/Interrupt_Descriptor_Table). Interrupts are used by the hardware to notify the OS that an event has happened. Without interrupts, the OS would have to reserve at least one core for a special kernel process that constantly polled the registers of peripherals and other devices, which would be impractical. Also, if user-mode processes invoked the scheduler themselves via a system call, the kernel would be at the mercy of user mode: it couldn't tell when a process was finished, and processes could hog CPU time.
I didn't look at the source code, but I think the scheduler is also often called on I/O completion (also from an interrupt, though not always the timer interrupt). I am quite sure that the scheduler must not be preempted; that is, interrupts (and other things) are disabled while the schedule() function runs.
I don't think you can call the scheduler a process (not even a kernel thread). The scheduler can be called by kernel threads that are created by interrupts for bottom-half processing. In bottom-half processing, the top "half" of the interrupt handler runs fast and efficiently, while the bottom "half" is added to the queue of schedulable work and runs when the scheduler decides it should. This has the effect of creating some kernel threads. The scheduler can thus be called from kernel threads, but not always from the bottom half of an interrupt. There has to be a mechanism to call the scheduler without the scheduler having to schedule itself as a task; otherwise, the kernel would stop functioning.
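As a rough illustration of that top-half/bottom-half split (the helper names below are made up, not the kernel's real API), an I/O interrupt might look something like this:

struct io_request { int done; };

extern void acknowledge_device(void);                  /* hypothetical: silence the hardware interrupt */
extern void queue_deferred_work(struct io_request *);  /* hypothetical: queue the bottom half */
extern void wake_up_waiter(struct io_request *);       /* hypothetical: mark the blocked thread ready */

/* Top half: runs in interrupt context and must be quick. */
void disk_irq_top_half(struct io_request *req)
{
    acknowledge_device();
    queue_deferred_work(req);   /* defer the slow part */
}

/* Bottom half: runs later as schedulable work. */
void disk_bottom_half(struct io_request *req)
{
    req->done = 1;
    wake_up_waiter(req);        /* the newly-ready thread is picked up on the next scheduling pass */
}

The thread that was blocked on the I/O becomes ready as a side effect of the interrupt handling; no user-mode process ever "calls the scheduler" itself.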

At what points in a program does the system switch threads?

I know that threads cannot actually run in parallel on the same core, but on a regular desktop system there are normally hundreds or even thousands of threads, which is of course many more than today's average of 4 CPU cores. So the system actually runs some thread for X amount of time, then switches to run another thread for Y amount of time, and so on.
My question is, how does the system decide how much time to execute each thread?
I know that when a program calls sleep() on a thread for some amount of time, the operating system can use this time to execute other threads, but what happens when a program does not call sleep at all?
E.g.:
#include <stdbool.h>   /* for 'true' */
#include <stdio.h>

int main(int argc, char const *argv[])
{
    while (true)
        printf("busy");   /* spins forever, never yielding voluntarily */
    return 0;
}
When does the operating system decide to suspend this thread and execute another?
The OS keeps a container of all those threads that can use CPU execution, (usually such threads are described as being 'ready'). On most desktop systems, this is a very small fraction of the total number of threads. Most threads in such systems are waiting on either I/O, (this includes sleeping - waiting on timer I/O), or inter-thread signaling; such threads cannot use CPU execution and so the OS does not dispatch them onto cores.
A software syscall, (eg. a request to open a file, a request to sleep or wait for a signal from another thread), or a hardware interrupt from a peripheral device, (eg. a disk controller, NIC, KB, mouse), may cause the set of ready threads to change and so initiate a scheduling run.
When run, the scheduler decides which set of ready threads to assign to the available cores. The algorithm it uses is a compromise that tries to optimize overall performance by balancing the cost of expensive context switches against the need for responsive I/O. The kernel CAN stop any thread on any core and preempt it, but it would surely prefer not to :)
So:
My question is, how does the system decide how much time to execute
each thread?
Essentially, it does not. If the set of ready threads is not greater than the number of cores, there is no need to stop/control/influence a CPU-intensive loop - it can be allowed to run on forever, taking up a whole core.
Note that your example is very poor - the printf() call will request output from the OS and, if not immediately available, the OS will block your seemingly 'CPU only' thread until it is.
but what happens when a program does not call sleep at all?
It's just one more thread. If it is purely CPU-intensive, then whether it runs continually depends upon the loading on the box and the number of cores available, as already described. It can, of course, get blocked by requesting I/O or electing to wait for a signal from another thread, so removing itself from the set of ready threads.
Note that one I/O device is a hardware timer. This is very useful for timing out system calls and providing Sleep() functionality. It usually does have a side-effect on those boxes where the number of ready threads is larger than the number of cores available to run them, (ie. the box is overloaded or the task/s it runs have no limits on CPU use). It can result in sharing out the available cores around the ready threads, so giving the illusion of running more threads than it's actually physically capable of, (try not to get hung up on Sleep() and the timer interrupt - it's one of many interrupts that can change thread state).
It is this behaviour of the timer hardware, interrupt and driver that gives rise to the appalling 'quantum', 'time-sharing', 'round-robin' etc. confusion and FUD that surrounds the operation of modern preemptive kernels.
A preemptive kernel, and its drivers etc., is a state machine. Syscalls from running threads and hardware interrupts from peripheral devices go in; a set of running threads comes out.
It depends on which type of scheduling your OS is using. For example, let's take round-robin:
In order to schedule processes fairly, a round-robin scheduler generally employs time-sharing, giving each job a time slot or quantum (its allowance of CPU time), and interrupting the job if it is not completed by then. The job is resumed the next time a time slot is assigned to that process. If the process terminates or changes its state to waiting during its attributed time quantum, the scheduler selects the first process in the ready queue to execute.
There are other scheduling algorithms as well; you will find this link useful: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/5_CPU_Scheduling.html
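As a toy illustration of the round-robin idea (just a user-space simulation of the queue rotation, not real scheduler code):

#include <stdio.h>

#define NTHREADS 3

int main(void)
{
    int ready[NTHREADS] = {1, 2, 3};          /* thread IDs in the ready queue */

    for (int quantum = 0; quantum < 6; quantum++) {
        int running = ready[0];               /* head of the queue gets the CPU */
        printf("quantum %d: thread %d runs\n", quantum, running);

        /* quantum expired: rotate the queue so the next thread runs */
        for (int i = 0; i < NTHREADS - 1; i++)
            ready[i] = ready[i + 1];
        ready[NTHREADS - 1] = running;
    }
    return 0;
}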
The operating system has a component called the scheduler that decides which thread should run and for how long. There are essentially two basic kinds of schedulers: cooperative and preemptive. Cooperative scheduling requires that the threads cooperate and regularly hand control back to the operating system, for example by doing some kind of IO. Most modern operating systems use preemptive scheduling.
In preemptive scheduling the operating system gives a time slice for the thread to run. The OS does this by setting a handler for a CPU timer: the CPU regularly runs a piece of code (the scheduler) that checks if the current thread's time slice is over, and possibly decides to give the next time slice to a thread that is waiting to run. The size of the time slice and how to choose the next thread depends on the operating system and the scheduling algorithm you use. When the OS switches to a new thread it saves the state of the CPU (register contents, program counter etc) for the current thread into main memory, and restores the state of the new thread - this is called a context switch.
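A very simplified sketch of that timer-driven check (made-up structures, not any particular OS's code):

struct thread {
    int time_slice_remaining;   /* ticks left in the current slice */
    int need_resched;           /* set when the scheduler should run */
};

/* Called from the timer interrupt handler on every tick. */
void timer_tick(struct thread *current)
{
    if (current->time_slice_remaining > 0)
        current->time_slice_remaining--;

    if (current->time_slice_remaining == 0)
        current->need_resched = 1;   /* the scheduler runs on the way out of the
                                        interrupt and may context-switch */
}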
If you want to know more, the Wikipedia article on Scheduling has lots of information and pointers to related topics.

MultiCore CPUs, Multithreading and context switching?

Let's say we have a CPU with 20 cores and a process with 20 CPU-intensive threads that are independent of each other: one thread per CPU core.
I'm trying to figure out whether context switching happens in this case. I believe it happens because there are system processes in the operating system that need CPU-time too.
I understand that there are different CPU architectures and some answers may vary but can you please explain:
How context switching happens e.g. on Linux or Windows and some known CPU architectures? And what happens under the hood on modern hardware?
What if we have 10 cores and 20 threads or the other way around?
How to calculate how many threads we need if we have n CPUs?
Does the CPU cache (L1/L2) get emptied after a context switch?
Thank you
How context switching happens e.g. on Linux or Windows and some known
CPU architectures? And what happens under the hood on modern hardware?
A context-switch happens when an interrupt occurs and that interrupt, together with the kernel thread and process state data, specify a set of running threads that is different than the set running before the interrupt. Note that, in OS terms, an interrupt may be either a 'real' hardware interrupt that causes a driver to run and that driver requests a scheduling run, or a syscall from a thread that is already running. In either case, the OS scheduling state-machine decides whether to change the set of threads running on the available cores.
The kernel can change the set of running threads by stopping some threads and running others. It can stop any thread running on any core by queueing up a preemption request and generating a hardware interrupt on that core to force it to run its interprocessor driver and handle the request.
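A hedged sketch of that cross-core preemption path (the helpers are hypothetical, standing in for the kernel's per-CPU flags and IPI machinery):

extern void set_need_resched_on(int cpu);   /* hypothetical: queue the preemption request */
extern void send_reschedule_ipi(int cpu);   /* hypothetical: raise an inter-processor interrupt */

void preempt_core(int cpu)
{
    set_need_resched_on(cpu);    /* mark that core's current thread for rescheduling */
    send_reschedule_ipi(cpu);    /* force the core into the kernel so it notices */
}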
What if we have 10 cores and 20 threads?
Depends on what the threads are doing. If they are in any other state than ready/running, (eg blocked on I/O or inter-thread comms), there will be no context-switching between them because nothing is running. If they are all ready/running, 10 of them will run forever on the 10 cores until there is an interrupt. Most systems have a periodic timer interrupt that can have the effect of sharing the available cores around the threads.
or the other way around
10 threads run on 10 cores. The other 10 cores are halted. The OS may move the threads around the cores, eg. to prevent uneven heat dissipation across the die.
How to calculate how many threads we need if we have n CPUs?
App-dependent. It would be nice if all cores were always used up 100% on exactly as many ready threads as cores but, since most threads are blocked for much more time than they are running, it's difficult, except in some end-cases, (eg - your '20 CPU-intensive threads on 20 cores'), to come up with any optimal number.
Does the CPU cache (L1/L2) get emptied after a context switch?
Maybe - it depends entirely on the data usage of the threads. The caches will get reloaded on-demand, as usual. There is no 'context-switch total cache reload' but, if the threads access different, large arrays of data while running, then the (L1 at least), cache will indeed get fully reloaded during the thread run.

Linux Kernel Multicore Issue

I have some doubts regarding Linux kernel scheduling.
1) Does the Linux kernel (the scheduler, to be specific) always run on CPU 0?
2) One scenario:
A kernel thread running on CPU 0 goes to sleep with interrupts disabled.
In this case, will the scheduler run on another CPU?
If yes, how is the selection made among the remaining cores as to which one will run the scheduler? Is this decision made while disabling interrupts on CPU 0?
The scheduler is just a piece of code (in particular, the schedule() function).
Like most other parts of the kernel, it runs on whatever CPU it is called on.
The scheduler gets called when some thread wants to sleep or after an interrupt has been handled; this can happen on all CPUs.
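For example, the "thread wants to sleep" path boils down to something like this sketch (simplified, with stand-in helpers rather than the kernel's real API):

extern void mark_current_not_runnable(void);   /* stand-in: leave the ready set */
extern void schedule(void);                    /* stand-in: pick another thread to run */

void wait_for_event(void)
{
    mark_current_not_runnable();
    schedule();         /* runs right here, on this CPU; we resume after being woken */
}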
1) Does the Linux kernel (the scheduler, to be specific) always run on CPU 0?
(No, the scheduler can run on any CPU core.)
2) One scenario:
A kernel thread running on CPU 0 goes to sleep with interrupts disabled.
In this case, will the scheduler run on another CPU?
(A thread running on CPU 0 goes to sleep, which means the thread gives up the CPU voluntarily. The sleep code will call the Linux scheduler, and the scheduler will choose another thread/process to run. This has nothing to do with interrupts. Disabling interrupts (e.g. the timer interrupt) only prevents the thread from being interrupted and scheduled off the CPU against its will.)
If yes, how is the selection made among the remaining cores as to which one will run the scheduler? Is this decision made while disabling interrupts on CPU 0?
(Hope this helps!)

How does the OS scheduler regain control of CPU?

I recently started to learn how the CPU and the operating system works, and I am a bit confused about the operation of a single-CPU machine with an operating system that provides multitasking.
Supposing my machine has a single CPU, this would mean that, at any given time, only one process could be running.
Now, I can only assume that the scheduler used by the operating system to control the access to the precious CPU time is also a process.
Thus, in this machine, either the user process or the scheduling system process is running at any given point in time, but not both.
So here's a question:
Once the scheduler gives up control of the CPU to another process, how can it regain CPU time to run itself again to do its scheduling work? I mean, if any given process currently running does not yield the CPU, how could the scheduler itself ever run again and ensure proper multitasking?
So far, I had been thinking, well, if the user process requests an I/O operation through a system call, then in the system call we could ensure the scheduler is allocated some CPU time again. But I am not even sure if this works in this way.
On the other hand, if the user process in question were inherently CPU-bound, then, from this point of view, it could run forever, never letting other processes, not even the scheduler run again.
Supposing time-sliced scheduling, I have no idea how the scheduler could slice the time for the execution of another process when it is not even running?
I would really appreciate any insight or references that you can provide in this regard.
The OS sets up a hardware timer (Programmable interval timer or PIT) that generates an interrupt every N milliseconds. That interrupt is delivered to the kernel and user-code is interrupted.
It works like any other hardware interrupt. For example your disk will force a switch to the kernel when it has completed an IO.
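For example, on x86 the legacy PIT can be set to fire roughly 100 interrupts per second with something like the sketch below (the port numbers and the 1193182 Hz input clock are the standard PIT values; port_out() is a placeholder for the platform's port-output instruction, not a real library call):

extern void port_out(unsigned short port, unsigned char value);  /* placeholder for outb */

void pit_set_100hz(void)
{
    unsigned int divisor = 1193182 / 100;      /* PIT input clock / desired tick rate */

    port_out(0x43, 0x36);                      /* channel 0, lobyte/hibyte access, square-wave mode */
    port_out(0x40, divisor & 0xff);            /* low byte of the divisor */
    port_out(0x40, (divisor >> 8) & 0xff);     /* high byte of the divisor */
}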
Google "interrupts". Interrupts are at the centre of multithreading, preemptive kernels like Linux/Windows. With no interrupts, the OS will never do anything.
While investigating/learning, try to ignore any explanations that mention "timer interrupt", "round-robin" and "time-slice", or "quantum" in the first paragraph – they are dangerously misleading, if not actually wrong.
Interrupts, in OS terms, come in two flavours:
Hardware interrupts – those initiated by an actual hardware signal from a peripheral device. These can happen at (nearly) any time and switch execution from whatever thread might be running to code in a driver.
Software interrupts – those initiated by OS calls from currently running threads.
Either kind of interrupt may cause the scheduler to make threads that were waiting become ready/running, or cause threads that were running to be preempted.
The most important interrupts are the hardware interrupts from peripheral drivers – the ones that make threads ready that were waiting on I/O from disks, NICs, mice, keyboards, USB etc. The overriding reason for using preemptive kernels, and for accepting all the problems of locking, synchronization, signaling etc., is that such systems have very good I/O performance: hardware peripherals can rapidly make ready/running the threads that were waiting for their data, without any latency from threads that refuse to yield or from waiting for a periodic timer reschedule.
The hardware timer interrupt that causes periodic scheduling runs is important because many system calls have timeouts in case, say, a response from a peripheral takes longer than it should.
On multicore systems the OS has an interprocessor driver that can cause a hardware interrupt on other cores, allowing the OS to interrupt/schedule/dispatch threads onto multiple cores.
On seriously overloaded boxes, or those running CPU-intensive apps (a small minority), the OS can use the periodic timer interrupts, and the resulting scheduling, to cycle through a set of ready threads that is larger than the number of available cores, and allow each a share of available CPU resources. On most systems this happens rarely and is of little importance.
Every time I see "quantum", "give up the remainder of their time-slice", "round-robin" and similar, I just cringe...
To complement #usr's answer, quoting from Understanding the Linux Kernel:
The schedule( ) Function
schedule( ) implements the scheduler. Its objective is to find a
process in the runqueue list and then assign the CPU to it. It is
invoked, directly or in a lazy way, by several kernel routines.
[...]
Lazy invocation
The scheduler can also be invoked in a lazy way by setting the
need_resched field of current [process] to 1. Since a check on the value of this
field is always made before resuming the execution of a User Mode
process (see the section "Returning from Interrupts and Exceptions" in
Chapter 4), schedule( ) will definitely be invoked at some close
future time.
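In other words, the kernel's return-to-user-mode path contains a check roughly like this sketch (simplified; current_need_resched() and schedule() stand in for the real flag test and scheduler call):

extern int  current_need_resched(void);   /* stand-in: was need_resched set for the current task? */
extern void schedule(void);               /* stand-in: the scheduler proper */

void return_to_user_mode(void)
{
    if (current_need_resched())
        schedule();          /* the "lazy" invocation happens here */

    /* ... restore user registers and return to user mode ... */
}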
