Page fault in Interrupt context - linux

Can a page fault occur in an interrupt handler / atomic context?

It can, but it would be a disaster. :-)

(This is an oldish question. The existing answers contain correct facts, but are quite thin. I will attempt to answer it in a more substantial way.)
The answer to this question depends upon whether the code is in the kernel (supervisor mode), or in user mode. The reason is that the rules for memory access in these regions are usually different. Here is a brief sequence of events to illustrate the problem (assuming kernel memory could be paged out):
While a user program is executing, an interrupt occurs (e.g. key press / disk event).
CPU transitions to supervisor mode and begins executing the handler in the kernel.
The interrupt handler begins to save the CPU state (so that the user process can be correctly resumed later), but in doing so it touches some of its storage which had previously been paged out.
This triggers a page fault exception.
In order to process the page fault exception, the kernel must now save the CPU state of the code that experienced the page miss.
It may actually be able to do this if it has a preallocated pool of memory that will never be paged out, but such a pool would inevitably be limited in size.
So you see, the safest (and simplest) solution is for the kernel to ensure that memory owned by the kernel is not pageable at all. For this reason, page faults should not really occur within the kernel. They can occur, but as #adobriyan notes, that usually indicates a much bigger error than a simple need to page in some memory. (I believe this is the case in Linux. Check your specific OS to be sure whether kernel memory is non-pageable. OS architectures do differ.)
So in summary, kernel memory is usually not pageable, and since interrupts are usually handled within the kernel, page faults should not in general occur while servicing interrupts. Higher priority interrupts can still interrupt lower ones; it is just that all their resources are kept in physical memory.
The question about atomic contexts is less clear. If by that you mean atomic operations supported by the hardware, then no interrupt occurs within a partial completion of the operation. If you are instead referring to something like a critical section, then remember that critical sections only emulate atomicity. From the perspective of the hardware there is nothing special about such code except for the entry and exit code, which may use true hardware atomic operations. The code in between is normal code, and subject to being interrupted.
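To make that last point concrete, here is a minimal kernel-style sketch (the lock and counter names are invented for illustration). The spin_lock/spin_unlock pair relies on hardware atomic operations at entry and exit, but the code between them is ordinary code as far as the CPU is concerned:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(stats_lock);
static unsigned long rx_packets;	/* hypothetical shared counter */

static void account_packet(void)
{
	unsigned long flags;

	/* Entry: a true hardware atomic operation (plus local IRQ disable). */
	spin_lock_irqsave(&stats_lock, flags);

	/*
	 * Body: ordinary loads and stores; "atomicity" is only emulated by
	 * the convention that everyone takes the lock first.
	 */
	rx_packets++;

	/* Exit: atomic release, then IRQs restored. */
	spin_unlock_irqrestore(&stats_lock, flags);
}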
I hope this provides a useful response to this question, as I also wondered about this issue for a while.

Yes.
The code for the handler or critical region could span the boundary between two pages. If the second page is not available, then a page fault is necessary to bring it in.

Not sure why nobody has used the term "Double Fault":
http://en.wikipedia.org/wiki/Double_fault
But that is the term used in the Intel manual:
http://software.intel.com/en-us/articles/introduction-to-pc-architecture/
or here:
ftp://download.intel.com/design/processor/manuals/253668.pdf (look at section 6-38).
There is something called a triple fault too, which, as the name indicates, can happen when the CPU is trying to service the double fault.

I think the answer is YES.
I just checked the page fault handler code in kernel 4.15 for x86_64 platform.
Take the following as a hint: no_context() is the path that leads to the classic "kernel oops".
static noinline void
no_context(struct pt_regs *regs, unsigned long error_code,
	   unsigned long address, int signal, int si_code)
{
	/* Are we prepared to handle this kernel fault? */
	if (fixup_exception(regs, X86_TRAP_PF)) {
		/*
		 * Any interrupt that takes a fault gets the fixup. This makes
		 * the below recursive fault logic only apply to a faults from
		 * task context.
		 */
		if (in_interrupt())
			return;
	...
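The fixup_exception() call above is the exception-table mechanism that makes accessors like get_user()/copy_from_user() safe: a fault on a user address taken in kernel mode is "fixed up" into an error return rather than an oops. A hedged sketch, with a hypothetical helper of the kind a driver's ioctl path might contain:

#include <linux/uaccess.h>
#include <linux/errno.h>

/* Hypothetical helper; only get_user() is a real kernel API here. */
static long read_user_value(int __user *uptr, int *out)
{
	int val;

	/*
	 * get_user() may page-fault on the user address. If the page is
	 * merely not resident, the fault handler pages it in; if the address
	 * is bad, the exception table redirects execution so get_user()
	 * returns -EFAULT instead of oopsing.
	 */
	if (get_user(val, uptr))
		return -EFAULT;

	*out = val;
	return 0;
}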

Related

Signal vs Exceptions vs Hardware Interrupts vs Traps

I read this answer and I thought that I got a clear idea. But then this answer is confusing me again.
Can somebody please give me a clear picture of the differences between Signal, exception, hardware interrupts and traps?
Moreover, I would like to know which among these block CPU preemption of the kernel code?
Examples would be helpful.
•Interrupts are generated by hardware for events external to the processor core.
These are asynchronous in nature which means the processor is unaware of when the interrupt will be generated. These are also called hardware interrupts. Example: interrupt generated by keyboard to type a character on screen, or the timer interrupt.
•Exceptions: exceptions occur when the processor detects an error condition while executing an instruction and are classified as faults, traps, or aborts depending on the way they are reported and whether the instruction that caused the exception can be restarted without loss of program or task continuity. (Those technical terms are used on x86 at least; other architectures may use them similarly or more loosely.) Example: divide by zero, or a page fault.
•Traps: a trap is basically an instruction that causes the CPU to switch from user mode to kernel mode. Example: during a system call, a trap instruction forces the CPU into the kernel, which executes the system call code (in kernel mode) on behalf of the process.
Trap is a kind of exception.
The x86 int 0x80 "software interrupt" instruction is a trap, not like external interrupts. x86 uses a single table of handlers for both interrupts and exceptions; other ISAs may also do that.
Some people use this term more generally, as a synonym for "exception". e.g. you might say "MIPS add will trap on signed overflow, so compilers always use addu."
•Signals: signals are generated by the kernel or by a process (the kill system call). They are eventually managed by the OS kernel, which delivers them to the target thread/process. E.g. a divide-by-zero instruction would result in the kernel delivering a SIGFPE signal (arithmetic exception) to the process that ran it. (For example, the x86 #DE fault is handled by the kernel, generating a software SIGFPE for the current process.)
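A small userspace sketch of that last bullet: the CPU raises a #DE fault, the kernel handles it and delivers SIGFPE to the offending process, which can catch it with a handler (handler name and message are illustrative):

#include <signal.h>
#include <string.h>
#include <unistd.h>

static void on_sigfpe(int sig)
{
	/* Only async-signal-safe calls here. Returning would re-execute the
	 * faulting instruction, so exit instead. */
	static const char msg[] = "caught SIGFPE (divide by zero)\n";
	(void)sig;
	write(2, msg, sizeof msg - 1);
	_exit(1);
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof sa);
	sa.sa_handler = on_sigfpe;
	sigaction(SIGFPE, &sa, NULL);

	volatile int zero = 0;
	return 1 / zero;	/* CPU raises #DE -> kernel delivers SIGFPE */
}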
Related:
When an interrupt occurs, what happens to instructions in the pipeline? - #Krazy Glew's answer also defines some terminology using Intel definitions.
Why are segfaults called faults (and not aborts) if they are not recoverable? - segfaults are a special case of page faults, which are normally recoverable. And even an invalid page fault from user-space doesn't crash the kernel, so it's recoverable in that sense.

Under what circumstances does control pass from userspace to the Linux kernel space?

I'm trying to understand which events can cause a transition from userspace to the linux kernel. If it's relevant, the scope of this question can be limited to the x86/x86_64 architecture.
Here are some sources of transitions that I'm aware of:
System calls (which include accessing devices) cause a context switch from userspace to kernel space.
Interrupts will cause a context switch. As far as I know, this also includes scheduler preemptions, since a scheduler usually relies on a timer interrupt to do its work.
Signals. It seems like at least some signals are implemented using interrupts but I don't know if some are implemented differently so I'm listing them separately.
I'm asking two things here:
Am I missing any userspace->kernel path?
What are the various code paths that are involved in these context switches?
One you are missing: Exceptions
(which can be further broken down into faults, traps and aborts)
For example a page fault, breakpoint, division by zero or floating-point exception. Technically, one can view exceptions as interrupts, but not really in the way you have defined an interrupt in your question.
You can find a list of x86 exceptions at this osdev webpage.
With regard to your second question:
What are the various code paths that are involved in these context
switches?
That really depends on the architecture and OS; you will need to be more specific. For x86, when an interrupt occurs you go to the IDT entry, and for SYSENTER you go to the address specified in the MSR. What happens after that is completely up to the OS.
No one wrote a complete answer so I will try to incorporate the comments and partial answers into an answer. Feel free to comment or edit the answer to improve it.
For the purposes of this question and answer, userspace to kernel transitions mean a change in processor state that allows access to kernel code and memory. In short I will refer to these transitions as context switches.
When discussing events that can trigger userspace to kernel transitions, it is important to separate the OS constructs that we are used to (signals, system calls, scheduling) that require context switches and the way these constructs are implemented, using context switches.
In x86, there are two central ways for context switches to occur: interrupts and SYSENTER. Interrupts are a processor feature, which causes a context switch when certain events happen:
Hardware devices may request an interrupt, for example, a timer/clock can cause an interrupt when a certain amount of time has elapsed. A keyboard can interrupt when keys are pressed. It's also called a hardware interrupt.
Userspace can initiate an interrupt. For example, the old way to perform a system call in Linux on x86 was to execute INT 0x80 with arguments passed through the registers. Debugging breakpoints are also implemented using interrupts, with the debugger replacing an instruction with INT 0x3. This type of an interrupt is called a software interrupt.
The CPU itself generates interrupts in certain situations, like when memory is accessed without permissions, when a user divides by zero, or when one core must notify another core that it needs to do something. This type of interrupt is called an exception, and you can read more about them in #esm's answer.
For a broader discussion of interrupts see here: http://wiki.osdev.org/Interrupt
SYSENTER is an instruction that provides the modern path to cause a context switch for the particular case of performing a system call.
The code that handles the context switching due to interrupts or SYSENTER in Linux can be found in arch/x86/kernel/entry_{32|64}.S.
There are many situations in which a higher-level Linux construct might cause a context switch. Here are a few examples:
If a system call reaches the int 0x80 or sysenter instruction, a context switch occurs. Some system call routines can use userspace information to get the information the system call was meant to get; in that case, no context switch will occur. (A small userspace sketch follows this list.)
Many times scheduling doesn't require an interrupt: a thread will perform a system call, and the return from the syscall is delayed until it is scheduled again. For processes that are in a section where syscalls aren't performed, Linux relies on timer interrupts to gain control.
Accessing a virtual memory location that has been paged out will cause a page fault, and therefore a context switch into the kernel.
Signals are usually delivered when a process is already "switched out" (see comments by #caf on the question), but sometimes an inter-processor interrupt is used to deliver the signal between two running processes.
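Here is a minimal userspace sketch of the most common transition, a system call. On modern x86-64 glibc this ends up executing the syscall instruction (older 32-bit paths used int 0x80 or sysenter, as described above):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	/* Explicit raw system call: this line enters the kernel and returns. */
	long pid = syscall(SYS_getpid);
	printf("pid via raw syscall: %ld\n", pid);

	/* A libc wrapper performs exactly the same kind of transition inside. */
	write(1, "hello from userspace\n", 21);
	return 0;
}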

Clarification about the behaviour of request_threaded_irq

I have scoured the web, but haven't found a convincing answer to a couple of related questions I have, with regard to the "request_threaded_irq" feature.
Question1:
Firstly, I was reading this article, regarding threaded IRQ's:
http://lwn.net/Articles/302043/
and there is this one line that isn't clear to me:
"Converting an interrupt to threaded makes only sense when the handler
code takes advantage of it by integrating tasklet/softirq
functionality and simplifying the locking."
I understand that had we gone ahead with a "traditional" top-half/bottom-half approach, we would have needed either spinlocks or disabling local IRQs to protect shared data. But what I don't understand is how threaded interrupts would simplify the locking by integrating tasklet/softirq functionality.
Question2:
Secondly, what advantage (if any) does a request_threaded_irq approach have over a workqueue-based bottom-half approach? In both cases it seems as though the "work" is deferred to a dedicated thread. So, what is the difference?
Question3:
Lastly, in the following prototype:
int request_threaded_irq(unsigned int irq, irq_handler_t handler, irq_handler_t thread_fn, unsigned long irqflags, const char *devname, void *dev_id)
Is it possible that the "handler" part of the IRQ is continuously triggered by the relevant IRQ (say a UART receiving characters at a high rate), even while the "thread_fn" (writing rx'd bytes to a circular buffer) part of the interrupt handler is busy processing IRQs from previous wakeups? If so, wouldn't the handler be trying to "wake up" an already running "thread_fn"? How would the running IRQ thread_fn behave in that case?
I would really appreciate if someone can help me understand this.
Thanks,
vj
For Question 2,
An IRQ thread is set up with a higher priority on creation, unlike workqueues.
In kernel/irq/manage.c, you'll see some code like the following for creation of kernel threads for threaded IRQs:
static const struct sched_param param = {
	.sched_priority = MAX_USER_RT_PRIO/2,
};
t = kthread_create(irq_thread, new, "irq/%d-%s", irq,
		   new->name);
if (IS_ERR(t)) {
	ret = PTR_ERR(t);
	goto out_mput;
}
sched_setscheduler_nocheck(t, SCHED_FIFO, &param);
Here you can see that the scheduling policy of the kernel thread is set to a real-time one (SCHED_FIFO) and the priority of the thread is set to MAX_USER_RT_PRIO/2, which is higher than that of regular processes.
For Question 3,
The situation you described can also occur with normal interrupts. Typically in the kernel, interrupts are disabled while an ISR executes. During the execution of the ISR, characters can keep filling the device's buffer and the device can and must continue to assert an interrupt even while interrupts are disabled.
It is the job of the device to keep the IRQ line asserted until all the characters are read and any processing is complete by the ISR. It is also important that the interrupt is level-triggered or, depending on the design, latched by the interrupt controller.
Lastly, the device/peripheral should have an adequately sized FIFO so that characters delivered at a high rate are not lost by a slow ISR. The ISR should also be designed to read as many characters as possible when it executes.
Generally speaking, what I've seen is that a controller has a FIFO of a certain size X, and when the FIFO is filled to X/2, it fires an interrupt that causes the ISR to grab as much data as possible. The ISR reads as much as possible and then clears the interrupt. Meanwhile, if the FIFO is still at least X/2 full, the device keeps the interrupt line asserted, causing the ISR to execute again.
Previously, the bottom half was not a task and still could not block. The only difference was that interrupts were disabled. The tasklet or softirq allow different inter-locks between the driver's ISR thread and the user API (ioctl(), read(), and write()).
I think the work queue is nearly equivalent. However, the tasklet/ksoftirq has a high priority and is used by all ISR-based functionality on that processor. This may give better scheduling opportunities. Also, there is less for the driver to manage; everything is already built into the kernel's ISR handler code.
You must handle this. Typically ping-pong buffers can be used, or a kfifo as you suggest. The handler should be greedy and get all data from the UART before returning IRQ_WAKE_THREAD.
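A hedged sketch of that pattern (the my_uart_* device accessors and driver struct are hypothetical; request_threaded_irq(), kfifo and the IRQ return codes are real kernel APIs):

#include <linux/interrupt.h>
#include <linux/kfifo.h>

static DEFINE_KFIFO(rx_fifo, unsigned char, 1024);

/* Hard handler: runs in interrupt context, must not sleep. */
static irqreturn_t my_uart_hard_irq(int irq, void *dev_id)
{
	struct my_uart *uart = dev_id;		/* hypothetical driver struct */
	bool got_data = false;

	/* Be greedy: drain the hardware FIFO before waking the thread. */
	while (my_uart_rx_ready(uart)) {	/* hypothetical register poll */
		kfifo_put(&rx_fifo, my_uart_read_char(uart));
		got_data = true;
	}

	return got_data ? IRQ_WAKE_THREAD : IRQ_HANDLED;
}

/* Threaded handler: runs in a schedulable kernel thread and may sleep. */
static irqreturn_t my_uart_thread_fn(int irq, void *dev_id)
{
	unsigned char c;

	while (kfifo_get(&rx_fifo, &c))
		my_uart_push_to_ldisc(dev_id, c);	/* hypothetical consumer */

	return IRQ_HANDLED;
}

/* Registration, e.g. in the driver's probe():
 *	ret = request_threaded_irq(irq, my_uart_hard_irq, my_uart_thread_fn,
 *				   0, "my-uart", uart);
 */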
For Question no 3,
When a threaded IRQ is activated, the corresponding interrupt line is masked/disabled. When the threaded handler runs and completes, it re-enables the line towards the end. Hence there won't be any interrupt firing while the respective threaded handler is running.
The original work of converting "hard"/"soft" handlers to threaded handlers was done by Thomas Gleixner & team when building the PREEMPT_RT Linux (aka Linux-as-an-RTOS) project (it's not part of mainline).
To truly have Linux run as an RTOS, we cannot tolerate a situation where an interrupt handler interrupts the most critical rt (app) thread; but how can we ensure that the app thread even overrides an interrupt? By making the interrupt threaded, schedulable (SCHED_FIFO) and giving it a lower priority than the app thread (interrupt threads' rtprio defaults to 50). So an "rt" SCHED_FIFO app thread with an rtprio of 60 would be able to "preempt" (closely enough that it works) even an interrupt thread. That should answer your question 2.
With regard to question 3:
As others have said, your code must handle this situation.
Having said that, please note that a key point of using a threaded handler is that you can do work that (possibly) blocks (sleeps). If your "bottom half" work is guaranteed to be non-blocking and must be fast, please use the traditional-style 'top-half/bh' handlers.
How can we do that? Simple: don't use request_threaded_irq(); just call request_irq() - the comment in the code clearly says (with regard to the 3rd parameter):
* @thread_fn: Function called from the irq handler thread
* If NULL, no irq thread is created
Alternatively, you can pass the IRQF_NO_THREAD flag to request_irq.
(BTW, a quick check with cscope on the 3.14.23 kernel source tree shows that request_irq() is called 1502 times [giving us non-threaded interrupt handling], and request_threaded_irq() [threaded interrupts] is explicitly called 204 times).

Memory addressing in assembly / multitasking

I understand how programs in machine code can load values from memory in to registers, perform jumps, or store values in registers to memory, but I don't understand how this works for multiple processes. A process is allocated memory on the fly, so must it use relative addressing? Is this done automatically (meaning there are assembly instructions that perform relative jumps, etc.), or does the program have to "manually" add the correct offset to every memory position it addresses.
I have another question regarding multitasking that is somewhat related. How does the OS, which isn't running, stop a thread and move on to the next? Is this done with timed interrupts? If so, then how can the values in registers be preserved for a thread? Are they saved to memory before control is given to a different thread? Or, rather than timed interrupts, does the thread simply choose a good time to give up control? In the case of timed interrupts, what happens if a thread is given processor time and it doesn't need it? Does it have to waste it, can it call the interrupt manually, or does it alert the OS that it doesn't need much time?
Edit: Or are executables edited before being run to compensate for the correct offsets?
That's not how it works. All modern operating systems virtualize the available memory, giving every process the illusion that it has 2 gigabytes of memory (or more) and doesn't have to share it with anybody. The key component in a machine that does this is the MMU, nowadays built into the processor itself. Another core feature of this virtualization is that it isolates processes: one misbehaving process cannot bring another one down with it.
Yes, a clock tick interrupt is used to interrupt the currently running code. Processor state is simply saved on the stack. The operating system scheduler then checks whether any other thread is ready to run and has a high enough priority to get first in line. Some extra code ensures that everybody gets a fair share. Then it is just a matter of setting up the MMU to resume execution on the other thread. If no thread is ready to run, then the CPU is halted with the HLT instruction, to be woken again by the next clock interrupt.
This is the ten-thousand-foot view; it is well covered in any book about operating system design.
A process is allocated memory on the fly, so must it use relative addressing?
No, it can use relative or absolute addressing depending on what it is trying to address.
At least historically, the various different addressing modes were more about local versus remote memory. Relative addressing was for memory addresses close to the current address, while absolute was more expensive but could address anything. With modern virtual memory systems, these distinctions may no longer be necessary.
A process is allocated memory on the fly, so must it use relative addressing? Is this done automatically (meaning there are assembly instructions that perform relative jumps, etc.), or does the program have to "manually" add the correct offset to every memory position it addresses.
I'm not sure about this one. This is normally taken care of by the compiler. Again, modern virtual memory systems make this complexity unnecessary.
Are they saved to memory before control is given to a different thread?
Yes. Typically all of the state (registers, etc.) is stored in a process control block (PCB), a new context is loaded, the registers and other context are loaded from the new PCB, and execution begins in the new context. The PCB can be stored on the stack or in kernel memory, or it can utilize processor-specific operations to optimize this process.
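For illustration only, a much-simplified PCB of the kind described above; a real kernel (e.g. Linux's task_struct plus pt_regs) holds far more state, and these field names are invented:

#include <stdint.h>

struct pcb {
	/* Saved CPU context (x86-64 flavoured, simplified). */
	uint64_t gpr[16];	/* general-purpose registers */
	uint64_t rip;		/* saved instruction pointer */
	uint64_t rflags;	/* saved flags */
	uint64_t cr3;		/* page-table root: selects the address space */

	/* Scheduler bookkeeping. */
	int pid;
	int state;		/* e.g. RUNNING, READY, WAITING */
	struct pcb *next;	/* run-queue link */
};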
Or, rather than timed interrupts, does the thread simply choose a good time to give up control.
The thread can yield control -- put itself back at the end of the run queue. It can also wait for some IO or sleep. Thread libraries then put the thread in wait queues and switch to another context. When the IO is ready or the sleep expires, the thread is put back into the run queue. The same happens with mutex locks. It waits for the lock in a wait queue. Once the lock is available, the thread is put back into the run queue.
In the case of timed interrupts, what happens if a thread is given processor time and it doesn't need it. Does it have to waste it, can it call the interrupt manually, or does it alert the OS that it doesn't need much time?
Either the thread can run (perform CPU instructions) or it is waiting -- either on IO or a sleep. It can ask to yield but typically it is doing so by [again] sleeping or waiting on IO.
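Two common ways a userspace thread voluntarily gives up the CPU, as described above, shown as a small sketch using standard POSIX calls:

#define _POSIX_C_SOURCE 199309L
#include <sched.h>
#include <time.h>

int main(void)
{
	/* Yield: go to the back of the run queue for our priority. */
	sched_yield();

	/* Sleep: the kernel parks us on a timer; no CPU time is consumed
	 * until it wakes us up again (about 10 ms later here). */
	struct timespec ts = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };
	nanosleep(&ts, NULL);

	return 0;
}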
I probably walked into this question quite late, but then, it may be of use to some other programmers. First - the theory.
The modern-day operating system virtualizes memory, and to do so it maintains, within its system memory area, a series of page pointers. Each page is of a fixed size (usually 4K), and when any program asks for some memory, the addresses it is allocated are virtual addresses translated through these page pointers. This approximates the behaviour of the "segment" registers in the prior generation of processors.
Now when the scheduler decides to run another process, it may or may not keep the previous process in memory. If it keeps it in memory, then all the scheduler does is save the entire register snapshot (nowadays including the YMM registers - this used to be a complex issue, as there was no single instruction that saved the entire context: read up on XSAVE), which has a fixed format (available in the Intel SW manual). This is stored in the memory space of the scheduler itself, along with the information on the memory pages that were being used.
If, however, the scheduler needs to "dump" to the hard disk the context of the process that is about to go to sleep - this situation usually arises when the process that is waking up needs an extraordinary amount of memory - then the scheduler writes the memory pages out to disk blocks (the pagefile - a reserved area of the disk - also the source of the "old grandmother wisdom" that the pagefile must be equal to the size of real memory) and preserves the memory page pointer addresses as offsets in the pagefile. When the process wakes up, the scheduler reads the offset addresses from the pagefile, allocates real memory, populates the memory page pointers, and then loads the contents back from the disk blocks.
Now, to answer your specific questions:
1. Do you need to use only relative addressing, or can you use absolute?
Ans. You may use either - whatever you perceive to be absolute is also relative, as the memory page pointer relativizes that address invisibly. There is no truly absolute memory address anywhere (including the I/O device memories) except in the kernel of the operating system itself. To test this, you may disassemble any .EXE program to see that the entry point is always CALL 0010, which clearly implies that each thread gets a different "0010" to start its execution.
2. How do threads get a time slice, and what happens if one surrenders its unused slice?
Ans. Threads usually get a slice - modern systems have 20 ms as the usual standard, though this is sometimes changed in special-purpose builds for servers that do not have many hardware interrupts to deal with - in order of their position on the process queue. A thread usually surrenders its slice by calling sleep(), which is a formal (and very nice) way to surrender the remaining part of its time slice. Most libraries implementing asynchronous reads, or interrupt actions, call sleep() internally, but in many instances top-level programs also call sleep() - e.g. to create a time gap. An invocation of sleep will certainly change the process context - the CPU is not given the liberty to idle away the slice on NOPs.
The other method is to wait for an IO to complete, and this is handled differently. A program, on requesting an IO operation, will cede its time slice, and the process scheduler flags this thread as being in a "WAITING FOR IO" state - this thread will not be given a time slice until its intended IO has completed or timed out. This feature helps programmers, as they do not have to explicitly write a sleep_until_IO() kind of interface.
Trust this sets you going further in your explorations.

How processor deals with instruction upon interrupt

What will happen if the CPU receives an interrupt in the middle of a long instruction? Will the CPU execute the whole instruction or only part of it?
From a programmer's point of view, either a specific instruction is retired, and all its side effects committed to registers/memory or it isn't (and it's as if the instruction wasn't executed at all). The whole point of instruction retirement is to guarantee a coherent view of the program state at the point of external events, such as interrupts.
That's notably why instructions retire in order, so that external observers can still view the architectural state of the CPU as if it were executing instructions sequentially.
There are exceptions to this, notably the REP-string class of instructions.
I believe this is what you asked about, but if it is not, then let me ask you: how would you observe, from anywhere, that an instruction was "partially" executed?
As far as I know, it depends on the processor and the instruction. More specifically, it depends on whether and when the processor samples for pending interrupts. If the processor only looks for pending interrupts after completing the current instruction, then clearly nothing will interrupt an instruction in the middle. However, if a long instruction is executing, it might be beneficial (latency-wise) to sample for interrupts several times during the instruction's execution. There is a downside to this: in that case, you would have to restore all changes that were made to registers and flags as a result of the instruction's partial execution, because after the interrupt completes you would have to go back and reissue that instruction.
(Source: Wikipedia, http://en.wikipedia.org/wiki/Interrupt)
