How does the scheduler get called when a high-priority task arrives? - linux

I have read here about the situations in which the scheduler is called. But what happens when a high-priority task arrives?

High-priority tasks are scheduled more often than low-priority tasks, but a newly arrived high-priority task does not necessarily run immediately: it waits until the next scheduling point, which on a preemptive kernel may come well before the running task's quantum is over.

Priority is dynamic and is adjusted based on past CPU usage.
The longer version
In Linux, process priority is dynamic. The scheduler keeps track of what processes are doing and adjusts their priorities periodically; in this way, processes that have been denied the use of the CPU for a long time interval are boosted by dynamically increasing their priority. Correspondingly, processes running for a long time are penalized by decreasing their priority.

The scheduler maintains a set of all tasks that are ready to run in the system. In a multi-priority system, this task set usually supports the notion of priority: when a high-priority task arrives, it is put into the set sorted by priority.
There are certain points in the kernel where we check whether a better process is available to run than the currently running one. This can happen when the time slice expires, when an ISR completes, when a lock is released, etc. Look for calls to schedule() in Linux (or switch()/_switch() or something similar in other kernels); this is the routine that checks the task set and determines whether the current task has the highest priority.
If it does not, the current task is switched out and the highest-priority task is taken from the task set and scheduled to run.
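The mechanism described above can be sketched as a toy model. This is illustrative only, not kernel code: the names ReadySet and scheduling_point are made up, and a real kernel uses per-priority run queues or a tree rather than a single heap.

```python
import heapq

class ReadySet:
    """Toy priority-ordered set of runnable tasks (bigger number = higher
    priority)."""
    def __init__(self):
        self._heap = []           # max-heap via negated priority

    def add(self, prio, name):
        heapq.heappush(self._heap, (-prio, name))

    def peek(self):
        """Highest priority present in the set, or None if empty."""
        return -self._heap[0][0] if self._heap else None

    def pop(self):
        return heapq.heappop(self._heap)[1]

def scheduling_point(current_prio, current_name, ready):
    """Run at each scheduling point (time-slice expiry, ISR exit, unlock):
    switch out the current task if a better one is runnable."""
    best = ready.peek()
    if best is not None and best > current_prio:
        ready.add(current_prio, current_name)   # current goes back into the set
        return ready.pop()                      # highest-priority task runs
    return current_name

ready = ReadySet()
ready.add(5, "worker")
running = scheduling_point(10, "shell", ready)  # shell (prio 10) keeps the CPU
ready.add(20, "audio")                          # high-priority task arrives
running = scheduling_point(10, running, ready)  # next scheduling point: preempted
```

Note that the newly arrived "audio" task does not run the instant it is added; it runs at the next scheduling point, which is the point the answer above is making.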


How linux kernel scheduling works on multi core processor?

I recently started reading Robert Love's book "Linux Kernel Development 3rd edition" and dived into the scheduler part, which left me with lots of questions.
So first off, I understood there are two cases where the scheduler changes the currently running task (correct me if I'm not precise): either the task willingly requests to be re-scheduled because it blocks on some I/O or sleeps, or a timer interrupt causes the CPU to jump to scheduler code and preempt the current task if it's interruptible.
Does each core in a multicore processor get the interrupt that is related to re-scheduling? Do they each have a different timer, or say there is one interrupt that in some type of algorithm picks a specific core to handle it each time?
Assuming not only one core re-schedules each interrupt (since then I would imagine it might take a while to swap processes on all of the cores), what happens if two cores re-schedule at the same time? Because, I assume that when you run the schedule function the task-list must be locked, and then I'd imagine a few cores re-scheduling their current task simultaneously resulting in only one core actually doing scheduling work and all of the other cores waiting on the task-list lock.
Beyond the fact that the task-list lock is required to touch the actual task list (say, to change a task's state or the run-queue order): what if the core that is currently scheduling calculates which task should run next, and meanwhile another core finishes scheduling successfully, making the first core's calculation totally wrong because that successful re-scheduling just heavily changed the system state?
I understood that in linux priority is divided to "nice value" which is -20 to 19 (higher means less priority and more "nice") and real-time priority (0-99). real-time priority values matter only for a couple of scheduling policies, and each process can register to a different scheduling policy.
Do the real-time policies always beat processes that are not registered to real-time policies? Meaning, if I run a real-time process, will normal processes never get to execute? How do the "nice" values of normal processes and the real-time priority values of real-time processes work together in the scheduler algorithm?

Do two SCHED_FIFO tasks with equal priority get processing time within each period in Linux?

Do two SCHED_FIFO tasks with equal priority get processing time within each period in Linux, granted neither of the tasks finish before the period ends?
Linux documentation says SCHED_FIFO processes can get preempted only by processes with higher priority, but my understanding is that CFS operates on a higher layer, and assigns timeslots to each of the two tasks within each period.
Linux documentation says SCHED_FIFO processes can get preempted only by processes with higher priority
This is correct, in addition to this, they can also be preempted if you set RLIMIT_RTTIME (getrlimit(2)) and that limit is reached.
The only other reason why another SCHED_FIFO process (with the same priority) can be scheduled is if the first sleeps or voluntarily yields (voluntary preemption).
CFS has nothing to do with SCHED_FIFO, it only takes care of SCHED_NORMAL, SCHED_BATCH and SCHED_IDLE.
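From userspace you can inspect the SCHED_FIFO priority range with Python's os.sched_* wrappers (a sketch; actually switching a process to SCHED_FIFO needs root or CAP_SYS_NICE, so the attempt below merely tolerates the expected PermissionError):

```python
import os

# On Linux, SCHED_FIFO real-time priorities run from 1 to 99.
lo = os.sched_get_priority_min(os.SCHED_FIFO)
hi = os.sched_get_priority_max(os.SCHED_FIFO)
print(f"SCHED_FIFO priority range: {lo}..{hi}")

# Try to make this process SCHED_FIFO at priority 10; without
# root/CAP_SYS_NICE this raises PermissionError, which we swallow here.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(10))
except PermissionError:
    pass
```

Nice values play no role here: they only affect the SCHED_NORMAL/SCHED_BATCH/SCHED_IDLE tasks that CFS manages.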

Why “ps aux” in Linux does not show the process whose pid=0? [duplicate]

The idle task (a.k.a. the swapper task) is chosen to run when there are no more runnable tasks in the run queue at the point of task scheduling. But what is the use of this special task? Another question: why can't I find this thread/process (PID 0) in the "ps aux" output from userland?
The reason is historical and programmatic. The idle task is the task that runs if no other task is runnable, like you said. It has the lowest possible priority, which is why it only runs when nothing else is runnable.
Programmatic reason: this simplifies process scheduling a lot, because you don't have to care about the special case "what happens if no task is runnable?"; there is always at least one runnable task, the idle task. It also lets you account CPU time per task: without the idle task, which task would the otherwise-unused CPU time be charged to?
Historical reason: before we had CPUs that could step down or enter power-saving modes, the CPU HAD to run at full speed at all times; it executed a series of NOP instructions when no tasks were runnable. Today, scheduling the idle task usually steps the CPU down with HLT (halt) instructions, so power is saved; the idle task therefore still serves a real function.
In Windows you can see the idle task in the process list, it's the idle process.
The linux kernel maintains a waitlist of processes which are "blocked" on IO/mutexes etc. If there is no runnable process, the idle process is placed onto the run queue until it is preempted by a task coming out of the wait queue.
The reason there is an idle task is so that you can measure (approximately) how much time the kernel is wasting due to blocks on I/O, locks, etc. Additionally, it makes the kernel code that much simpler, since the idle task is context-switched like every other task, instead of being a "special case" that could make changing kernel behaviour more difficult.
There is actually one idle task per cpu, but it's not held in the main task list, instead it's in the cpu's "struct rq" runqueue struct, as a struct task_struct * .
This gets activated by the scheduler whenever there is nothing better to do (on that CPU) and executes some architecture-specific code to idle the cpu in a low power state.
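That per-CPU fallback can be modelled in a few lines (toy code; the field names loosely mirror the kernel's struct rq, but this is not kernel source):

```python
from collections import deque

class Rq:
    """Toy per-CPU runqueue: like the kernel's struct rq, it holds its
    idle task separately from the list of runnable tasks."""
    def __init__(self, cpu):
        self.cpu = cpu
        self.runnable = deque()
        self.idle = f"idle/{cpu}"   # one idle task per CPU

    def pick_next(self):
        # The idle task is chosen only when nothing else is runnable;
        # the real idle loop then puts the CPU into a low-power state.
        return self.runnable.popleft() if self.runnable else self.idle

rq = Rq(0)
rq.runnable.append("bash")
print(rq.pick_next())   # the runnable task wins
print(rq.pick_next())   # queue empty: the per-CPU idle task runs
```

Because pick_next() always has the idle task to fall back on, the "no runnable task" special case never arises, which is exactly the simplification the answer above describes.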
You can use ps -ef to list the processes that are running; on some Unix systems the first line shows PID 0, the swapper task (Linux does not expose it).

How linux process scheduler prevents starvation of a process

I have read that the linux kernel contains many scheduler classes, each having its own priority. To select a new process to run, the process scheduler iterates from the highest-priority class to the lowest-priority class. If a runnable process is found in a class, the highest-priority process from that class is selected to run.
Extract from Linux kernel development by Robert Love:
The main entry point into the process scheduler is the function schedule(), defined in kernel/sched.c. This is the function that the rest of the kernel uses to invoke the process scheduler, deciding which process to run and then running it. schedule() is generic with respect to scheduler classes. That is, it finds the highest-priority scheduler class with a runnable process and asks it what to run next. Given that, it should be no surprise that schedule() is simple. The only important part of the function (which is otherwise too uninteresting to reproduce here) is its invocation of pick_next_task(), also defined in kernel/sched.c. The pick_next_task() function goes through each scheduler class, starting with the highest priority, and selects the highest-priority process in the highest-priority class.
Let's imagine the following scenario. There are some processes waiting in lower priority classes and processes are being added to higher priority classes continuously. Won't the processes in lower priority classes starve?
The Linux kernel implements the Completely Fair Scheduler (CFS), which is based on a virtual clock.
Each scheduling entity has a sched_entity structure associated with it whose snapshot looks like
struct sched_entity {
    ...
    u64 exec_start;            /* timestamp when the entity last started running */
    u64 sum_exec_runtime;      /* total CPU time consumed so far */
    u64 vruntime;              /* runtime on the weighted virtual clock */
    u64 prev_sum_exec_runtime; /* sum_exec_runtime saved when taken off the CPU */
    ...
};
The above four attributes are used to track the runtime of a process; using them (they are updated in update_curr(), among other places), the virtual clock is implemented.
When a process is assigned to a CPU, exec_start is updated to the current time, and the CPU time it consumes is recorded in sum_exec_runtime. When the process is taken off the CPU, the sum_exec_runtime value is preserved in prev_sum_exec_runtime. sum_exec_runtime accumulates cumulatively (meaning it grows monotonically).
vruntime stores the amount of time that has elapsed on virtual clock during process execution.
How vruntime is calculated?
Ignoring all the complex calculations, the core concept of how it is calculated is :-
vruntime += delta_exec_weighted;
delta_exec_weighted = delta_exec * (NICE_0_LOAD/load.weight);
Here delta_exec is the time difference between the process being assigned to the CPU and being taken off it, whereas load.weight is the weight of the process, which depends on its priority (nice value). Usually, an increase of 1 in a process's nice value means it gets about 10 percent less CPU time, resulting in less weight.
Process with nice value 0: weight = 1024
Process re-niced to value 1: weight = 1024/1.25 ≈ 820
Points drawn from the above:
So vruntime increases when a process gets the CPU.
And vruntime increases more slowly for higher-priority processes than for lower-priority processes.
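The two formulas can be checked numerically. The sketch below uses the "10 percent per nice level" rule to approximate the weight as 1024 / 1.25**nice; the real kernel uses a precomputed prio_to_weight table, but the values are close.

```python
NICE_0_LOAD = 1024

def weight(nice):
    # Approximation of the kernel's weight table: each nice step ~ 1.25x.
    return NICE_0_LOAD / (1.25 ** nice)

def vruntime_delta(delta_exec, nice):
    # delta_exec_weighted = delta_exec * (NICE_0_LOAD / load.weight)
    return delta_exec * (NICE_0_LOAD / weight(nice))

print(round(weight(1)))            # ~820, matching the example above
print(vruntime_delta(10_000, 0))   # nice 0: vruntime advances at wall speed
print(vruntime_delta(10_000, -5))  # higher priority: vruntime advances slower
print(vruntime_delta(10_000, 19))  # lower priority: vruntime advances faster
```

At nice 0 the ratio is exactly 1, so the virtual clock tracks real time; negative nice values slow it down and positive nice values speed it up, which is what pushes low-priority tasks rightwards in the tree faster.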
The runqueue is maintained in red-black tree and each runqueue has a min_vruntime variable associated with it that holds the smallest vruntime among all the process in the run-queue. (min_vruntime can only increase, not decrease as processes will be scheduled).
The key for the node in red black tree is process->vruntime - min_vruntime
When scheduler is invoked, the kernel basically picks up the task which has the smallest key (the leftmost node) and assigns it the CPU.
Elements with smaller key will be placed more to the left, and thus be scheduled more quickly.
When a process is running, its vruntime will steadily increase, so it will finally move rightwards in the red-black tree.
Because vruntime increases more slowly for more important processes, they also move rightwards more slowly, so their chance of being scheduled is bigger than that of a less important process, just as required.
If a process sleeps, its vruntime remains unchanged. Because the per-queue min_vruntime increases in the meantime, the sleeping process is placed further to the left after waking up, because its key (mentioned above) got smaller.
Therefore there is no chance of starvation: if a lower-priority process is deprived of the CPU, its vruntime becomes the smallest, hence its key becomes the smallest, so it quickly moves to the left of the tree and is scheduled.
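The no-starvation argument can be demonstrated with a small simulation (a min-over-a-dict stands in for the red-black tree's leftmost node; real CFS is far more involved):

```python
NICE_0_LOAD = 1024

def weight(nice):
    # Same "~1.25x per nice level" approximation as in the formulas above.
    return NICE_0_LOAD / (1.25 ** nice)

# name -> [nice, vruntime]; nice 19 is the lowest priority
tasks = {"hog": [-10, 0.0], "normal": [0, 0.0], "lowly": [19, 0.0]}
ran = set()

for _ in range(50):
    # "Leftmost node": the task with the smallest vruntime runs next.
    name = min(tasks, key=lambda n: tasks[n][1])
    ran.add(name)
    nice, vr = tasks[name]
    # Run one 1000-unit slice; vruntime advances faster for higher nice.
    tasks[name][1] = vr + 1000 * (NICE_0_LOAD / weight(nice))

print(ran)   # even the nice-19 task got the CPU at some point
```

The high-priority "hog" runs most often, but "lowly" still gets scheduled whenever its vruntime is the smallest in the tree, which is exactly the starvation-avoidance property described above.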
It would indeed starve.
There are many ways of dealing with such scenario.
Aging: the longer a process has been in the system, the more its priority is increased.
Scheduling algorithms that give every process a time quantum of CPU. The time quantum varies: usually, interactive processes get a smaller quantum, as they spend more of their time doing I/O, while compute-heavy processes get a bigger one.
After a process has run through its time quantum, it is put in an expired queue until there are no active processes left in the system.
Then the expired queue becomes the active queue, and vice versa.
These are two ways of preventing starvation.
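The active/expired mechanism (used, for example, by Linux's old O(1) scheduler) can be sketched as two queues that swap roles. This is hypothetical toy code; run_epochs and quanta_needed are made-up names.

```python
from collections import deque

def run_epochs(tasks, quanta_needed, epochs=3):
    """Each runnable task gets one quantum per epoch; when the active
    queue drains, the expired queue becomes the new active queue."""
    active = deque(tasks)
    expired = deque()
    order = []
    for _ in range(epochs):
        while active:
            t = active.popleft()
            order.append(t)            # task consumes its time quantum
            quanta_needed[t] -= 1
            if quanta_needed[t] > 0:
                expired.append(t)      # not finished: wait for the next epoch
        active, expired = expired, active   # swap: expired becomes active
    return order

need = {"a": 3, "b": 1, "c": 2}
order = run_epochs(["a", "b", "c"], need)
print(order)
```

No task can run twice in one epoch before every other runnable task has run once, so nothing starves within an epoch.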

Linux CFS (Completely Fair Scheduler) latency

I am a beginner to the Linux Kernel and I am trying to learn how Linux schedules processes.
I have read some books on the Linux Kernel and gone through the links from IBM http://www.ibm.com/developerworks/linux/library/l-cfs/ and all, but I am still left with some doubts.
How does the scheduler schedule all of the tasks within the sysctl_sched_latency time?
When a process wakes up what actually is done in the place_entity function?
When a process wakes up why is the vruntime adjusted by subtracting from sched_latency? Can't that lead to processes in the run queue with large differences in the vruntime value?
First, the virtual runtime of a task:
in theory, is when the task would start its next time slice of execution on a theoretically perfect multi-threaded CPU;
in practice, is its actual runtime normalized to the total number of running tasks.
1. How does the scheduler schedule all of the tasks within the sysctl_sched_latency time?
It maintains a time-ordered red-black tree, where all the runnable tasks are sorted by their virtual runtime. Nodes on the left have run for the shortest amount of time. CFS picks the leftmost task and runs it until the task schedules or the scheduler ticks; then the CPU time it spent running is added to its virtual runtime. When it is no longer the leftmost node, the task with the shortest virtual runtime is run and the old task is preempted.
2. When a process wakes up what actually is done in the place_entity function?
Short version:
When a process wakes up the place_entity function either leaves the
task's virtual runtime as it was or increases it.
Long version:
When a process wakes up, the place_entity function does the following things:
Initialize a temporary virtual runtime to the CFS run queue's min_vruntime (the smallest virtual runtime in the queue).
As sleeps of less than a single latency don't count, initialize a threshold variable to sysctl_sched_latency. If the GENTLE_FAIR_SLEEPERS feature is enabled, halve the value of this variable.
Decrement the previously initialized temporary virtual runtime by this threshold value.
Ensure that the temporary virtual runtime is at least equal to the task's virtual runtime, by setting it to the maximum of itself and the task's virtual runtime.
Set the task's virtual runtime to the temporary runtime.
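Those steps correspond to roughly this logic (a simplified sketch of place_entity for the wakeup case; the real code in sched_fair.c handles more cases and uses nanosecond u64 arithmetic):

```python
def place_entity(cfs_min_vruntime, task_vruntime, sched_latency,
                 gentle_fair_sleepers=True):
    # 1. Start from the run queue's smallest virtual runtime.
    vruntime = cfs_min_vruntime
    # 2. Sleeps shorter than one latency don't count, so grant a credit
    #    of a (possibly halved) scheduling latency...
    thresh = sched_latency
    if gentle_fair_sleepers:
        thresh //= 2
    # 3. ...and subtract it.
    vruntime -= thresh
    # 4. Never move the task's vruntime backwards.
    return max(vruntime, task_vruntime)

# A task that slept a long time is placed just behind min_vruntime:
print(place_entity(10_000, 2_000, sched_latency=6_000))   # -> 7000
# A recently run task keeps its own (larger) vruntime:
print(place_entity(10_000, 9_500, sched_latency=6_000))   # -> 9500
```

The max() in the last step is why the function can only leave the task's vruntime unchanged or increase it, and the bounded subtraction in step 3 is why wakeups cannot create arbitrarily large vruntime gaps (Question 4).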
3. When a process wakes up why is the vruntime adjusted by subtracting from sched_latency?
The virtual runtime is decremented because sleeps of less than a single latency don't count. E.g. the task shouldn't have its position in the red-black tree changed if it has only slept for a single scheduler latency.
4. Can't that lead to processes in the run queue with large differences in the vruntime value?
I believe that the logic described in Step 3 for Question 2, prevents or at least minimises that.
References
sched Linux Kernel Source
sched_fair.c Linux Kernel Source
Notes on the CFS Scheduler Design
