Does the kernel's panic() function completely freeze every other process? - linux

I would like confirmation that the kernel's panic() function, and others like kernel_halt() and machine_halt(), once triggered, guarantee a complete freeze of the machine.
So, are all kernel and user processes frozen? Is panic() interruptible by the scheduler? Could interrupt handlers still be executed?
Use case: in case of serious error, I need to be sure that the hardware watchdog resets the machine. To this end, I need to make sure that no other thread/process is keeping the watchdog alive. I need to trigger a complete halt of the system. Currently, inside my kernel module, I simply call panic() to freeze everything.
Also, is the user-space halt command guaranteed to freeze the system?
Thanks.
edit: According to: http://linux.die.net/man/2/reboot, I think the best way is to use reboot(LINUX_REBOOT_CMD_HALT): "Control is given to the ROM monitor, if there is one"

Thank you for the comments above. After some research, I am ready to give myself a more complete answer, below:
At least for the x86 architecture, reboot(LINUX_REBOOT_CMD_HALT) is the way to go. This invokes the reboot() syscall (see: http://lxr.linux.no/linux+v3.6.6/kernel/sys.c#L433). For the LINUX_REBOOT_CMD_HALT flag (see: http://lxr.linux.no/linux+v3.6.6/kernel/sys.c#L480), the syscall calls kernel_halt() (defined here: http://lxr.linux.no/linux+v3.6.6/kernel/sys.c#L394). That function calls syscore_shutdown() to execute all the registered system core shutdown callbacks, displays the "System halted" message, dumps the kernel messages (kmsg_dump()), AND, finally, calls machine_halt(), which is a wrapper for native_machine_halt() (see: http://lxr.linux.no/linux+v3.6.6/arch/x86/kernel/reboot.c#L680). It is this function that stops the other CPUs (through machine_shutdown()) and then calls stop_this_cpu() to disable the last remaining working processor. The first thing stop_this_cpu() does is disable interrupts on the current processor, so the scheduler can no longer take control.
I am not sure why the reboot() syscall still calls do_exit(0) after calling kernel_halt(). I interpret it like this: now, with all processors marked as disabled, the reboot() syscall calls do_exit(0) and ends itself. Even if the scheduler were woken, there are no enabled processors left on which it could schedule a task, nor any interrupts: the system is halted. I am not sure about this explanation, as stop_this_cpu() does not seem to return (it enters an infinite loop). Maybe it is just a safeguard for the case where stop_this_cpu() fails (and returns): in that case, do_exit() cleanly ends the current task, and then the panic() function is called.
As for the panic() code (defined here: http://lxr.linux.no/linux+v3.6.6/kernel/panic.c#L69): the function first disables local interrupts, then stops all other processors except the current one by calling smp_send_stop(). Finally, as the sole task executing on the only processor still alive, with local interrupts disabled (so the preemptive scheduler, driven by a timer interrupt after all, has no chance to run), panic() either loops for some time or calls emergency_restart(), which is supposed to restart the processor.
If you have better insight, please contribute.

Related

Is CPU affinity enforced across system calls?

So if I set a process's CPU affinity using:
sched_setaffinity()
and then perform some other system call using that process, is that system call ALSO guaranteed to execute on the same CPU enforced by sched_setaffinity?
Essentially, I'm trying to enforce that a process, and the system calls it makes, are executed on the same core. Obviously I can use sched_setaffinity() to enforce userspace code will execute on only one CPU, but does that same system call enforce kernel-space code in that process context will execute on the same core as well?
Thanks!
Syscalls are really just your process code switching from user to kernel mode. The task that is being run does not change at all, it just temporarily enters kernel mode to execute the syscall and then returns back to user mode.
A task can be preempted by the scheduler and moved to a different CPU, and this can happen in the middle of normal user mode code or even in the middle of a syscall.
By setting the task affinity to a single CPU using sched_setaffinity(), you remove this possibility, since even if the task gets preempted, the scheduler has no choice but to keep it running on the same CPU (it may of course change the currently running task, but when your task resumes it will still be on the same CPU).
So to answer your question:
does that same system call enforce kernel-space code in that process context will execute on the same core as well?
Yes, it does.
Now, to address @Barmar's comment: even in the case of syscalls that can "sleep", the task cannot change CPU if the affinity does not allow it.
What happens when a syscall sleeps is simply that the syscall code tells the scheduler: "hey, I'm waiting for something, just run another task while I wait and wake me up later". When the syscall resumes, it checks whether the requested resource is available (it could even tell the kernel exactly when it wants to be woken up), and if not it either waits again or returns to user code saying "sorry, I got nothing, try again". The resource could of course be made available by some interrupt that causes an interrupt handler to run on a different CPU, but that's a different story, and it doesn't really matter. To put it simply: interrupt code does not run in process context, at all. As far as the task executing the syscall is concerned, the resource is just magically there when execution resumes.

What happens if the hardware timer interrupt fires while a system call is being executed?

Say we have one CPU with only one core, and there are many threads that are running on this core.
Let's say that Thread A has issued a system call, now the interrupt handler for the system call will be executed.
Now, while the system call is being executed, say that the hardware timer interrupt (the one responsible for the scheduling of threads) fires. What will happen in this case: will the CPU stop running the system call and go execute the scheduler code, or must the CPU wait for the system call to be fully executed before switching to another thread?
In Linux the answer actually depends on a kernel build-time configuration option: the preemption model. There are three choices:
If CONFIG_PREEMPT_NONE is set, the interrupt handler will set a flag indicating that the scheduler needs to run. The flag is checked on the return to user space when the system call terminates.
If CONFIG_PREEMPT_VOLUNTARY is set, the same occurs, except the flag is also checked (and the scheduler run, possibly switching tasks) at specific static points in the system call code.
If CONFIG_PREEMPT is set, the scheduler will run in most cases on the return path from the interrupt handler to the system call code, unless a preemption-disabled critical section is in force.
Unless the system call blocks interrupts, the interrupt handler will get invoked.

request_irq to be handled by a single CPU

I would like to ask if there is a way to register the interrupt handler so that only one cpu will handle this interrupt line.
The problem is that we have a function that can be called in both normal context and interrupt context. In this function we use irqs_disabled() to check the caller context. If the caller context is interrupt, we switch the processing to polling mode (continuously checking the interrupt status register). Although irqs_disabled() tells us that local interrupts on the current CPU are disabled, the interrupt handler is still called on other CPUs, and hence the interrupt status register is cleared in the interrupt handler. The polling code then reads the wrong value of the interrupt status register and does the wrong processing.
You're doing it wrong. Don't limit your interrupt to be handled by a single CPU - instead use a spin_lock_irqsave to protect the code path. This will work both on the same CPU and across CPUs.
See http://www.mjmwired.net/kernel/Documentation/spinlocks.txt for the relevant API and here is a nice article from Linux Journal that explain the usage: http://www.linuxjournal.com/article/5833
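A hedged sketch of that advice (kernel code, not buildable standalone; my_dev_lock, my_irq_handler and the process-context function are hypothetical names):

```c
/* One lock protects the status-register access from both contexts. */
static DEFINE_SPINLOCK(my_dev_lock);

static irqreturn_t my_irq_handler(int irq, void *dev)
{
    unsigned long flags;

    spin_lock_irqsave(&my_dev_lock, flags);
    /* read and clear the interrupt status register here */
    spin_unlock_irqrestore(&my_dev_lock, flags);
    return IRQ_HANDLED;
}

static void my_process_context_path(void)
{
    unsigned long flags;

    spin_lock_irqsave(&my_dev_lock, flags);
    /* same register access, now serialized against the handler,
       whether it runs on this CPU or on another one */
    spin_unlock_irqrestore(&my_dev_lock, flags);
}
```

spin_lock_irqsave disables local interrupts (preventing a deadlock against the handler on the same CPU) while the spinlock itself serializes against other CPUs.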
I've got no experience with ARM, but on x86 you can arrange for a particular interrupt to be called on only one processor via /proc/irq/<number>/smp_affinity - set from user space - replacing the number with irq you care about - and this looks as if it's essentially generic. Note that the value you set it to is a bit mask, expressed in hex, without a leading 0x. I.e. if you want cpu 0, set it to 1, for cpu 1, set it to 2, etc. Beware of a process called irqbalance, which uses this mechanism, and might well override whatever you have done.
But why are you doing this? If you want to know whether you are called from an interrupt, there's an interface available named something like in_interrupt(). I've used it to avoid trying to call blocking functions from code that might be called from interrupt context.

When does schedule() return?

In case of blocking IO, say, driver read, we call wait_event_interruptible() with some condition. When the condition is met, read will be done.
I looked into the wait_event_interruptible() function: it checks the condition and calls schedule(). schedule() looks for the next runnable process, performs a context switch, and the other process runs. Does this mean that the next instruction to be executed for the current process, when it is woken up again, will be inside schedule()?
If yes, and multiple processes voluntarily call schedule(), will all of them resume, once woken up, at an instruction well inside schedule()?
In the case of ret_from_interrupt, schedule() is called. When will it return, given that iret is executed after that?
I think the answer to the first question is yes as that's a fairly typical way of implementing context switching. That's how OS161 works, for example.
If the scheduler is called from an ISR, everything should be the same. The scheduler should change the context and return to the ISR and the ISR should then return using IRET. It will return to a different process/thread if the scheduler chooses to switch to a different one and therefore loads its context and saves the old one.
Re point 2: on the return path from an interrupt handler (ret_from_interrupt), before iret is executed, Linux may pass control to the next task to run by calling schedule(). One of the overriding considerations when writing interrupt handlers is that while they are executing, many other activities are inhibited (other, lower-priority interrupts being the prime example), so you want to get out of there as fast as possible. That is why most interrupt handlers just stash away the work to be done before returning, and said work is then handled elsewhere (today in some special kernel thread).
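The blocking-read pattern the question describes can be sketched as kernel code (not buildable standalone; my_wq, data_ready, my_read and my_producer are hypothetical names):

```c
static DECLARE_WAIT_QUEUE_HEAD(my_wq);
static int data_ready;

static ssize_t my_read(struct file *f, char __user *buf,
                       size_t len, loff_t *off)
{
    /* Sleeps inside schedule(); when woken, execution resumes in
       schedule() and the condition is re-checked before returning. */
    if (wait_event_interruptible(my_wq, data_ready))
        return -ERESTARTSYS;          /* interrupted by a signal */
    /* ... copy the data to user space here ... */
    return len;
}

/* Elsewhere (an IRQ handler or another task): */
static void my_producer(void)
{
    data_ready = 1;
    wake_up_interruptible(&my_wq);    /* make the reader runnable again */
}
```

So yes: every sleeper parked by wait_event_interruptible() resumes inside schedule(), then unwinds back through the wait loop to its caller.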

Internals of a Linux system call

What happens (in detail) when a thread makes a system call by raising interrupt 80? What work does Linux do to the thread's stack and other state? What changes are done to the processor to put it into kernel mode? After running the interrupt handler, how is control restored back to the calling process?
What if the system call can't be completed quickly: e.g. a read from disk. How does the interrupt handler relinquish control so that the processor can do other stuff while data is being loaded and how does it then obtain control again?
A crash course in kernel mode in one stack overflow answer
Good questions! (Interview questions?)
What happens (in detail) when a thread makes a system call by raising interrupt 80?
The int $0x80 operation is vaguely like a function call. The CPU "takes a trap" and restarts at a known address in kernel mode, typically with a different MMU mode as well. The kernel will save many of the registers, though it doesn't have to save the registers that a program would not expect an ordinary function call to save.
What work does Linux do to the thread's stack and other state?
Typically an OS will save registers that the ABI promises not to change during procedure calls. The stack will stay the same; the kernel will run on a per-thread kernel stack rather than the per-thread user stack. Naturally some state will change, otherwise there would be no reason to do the system call.
What changes are done to the processor to put it into kernel mode?
This is usually entirely automatic. The CPU has, generically, a software-interrupt instruction that is a bit like a functional-call operation. It will cause the switch to kernel mode under controlled conditions. Typically, the CPU will change some sort of PSW protection bit, save the old PSW and PC, start at a well-known trap vector address, and may also switch to a different memory management protection and mapping arrangement.
After running the interrupt handler, how is control restored back to the calling process?
There will be some sort of "return from interrupt" or "return from trap" instruction, typically, that will act a bit like a complicated function-return instruction. Some RISC processors did very little automatically and required specific code to do the return and some CISC processors like x86 have (never-really-used) instructions that would execute dozens of operations documented in pages of architecture-manual pseudo-code for capability adjustments.
What if the system call can't be completed quickly: e.g. a read from disk. How does the interrupt handler relinquish control so that the processor can do other stuff while data is being loaded, and how does it then obtain control again?
The kernel itself is threaded much like a threaded user program is. It just switches stacks (threads) and works on someone else's process for a while.
To answer the last part of the question - what does the kernel do if the system call needs to sleep -
After a system call, the kernel is still logically running in the context of the same task that made the system call - it's just in kernel mode rather than user mode - it is NOT a separate thread and most system calls do not invoke logic from another task/thread. What happens is that the system call calls wait_event, or wait_event_timeout or some other wait function, which adds the task to a list of tasks waiting for something, then puts the task to sleep, which changes its state, and calls schedule() to relinquish the current CPU.
After this the task cannot be run again until it gets woken up, typically by another task (kernel task, etc) or interrupt handler calling a wake* function which will wake up the task(s) sleeping waiting for that particular event, which means the scheduler will soon schedule them again.
It's worth noting that userspace tasks (i.e. threads) are only one type of task; there are a few others internal to the kernel which can do work as well: kernel threads and bottom-half handlers / tasklets / task queues etc. Work which doesn't belong to any particular userspace process (for example network handling, e.g. responding to pings) gets done in these. Of these, kernel threads are allowed to go to sleep; interrupt handlers and tasklets/softirqs are not (they must not invoke the scheduler).
http://tldp.org/LDP/khg/HyperNews/get/syscall/syscall86.html
This should help people seeking answers to what happens when the syscall instruction is executed, transferring control from user mode to kernel mode. It is based on the x86_64 architecture.
https://0xax.gitbooks.io/linux-insides/content/SysCall/syscall-2.html
