Linux Signals and Interrupt Handlers

Reading about interrupts in Linux, I understand that their handlers run to completion (let's not consider bottom halves here). So, assume that my code has a SIGINT handler registered (using the signal()/sigaction() call) with a while(1) loop in it (i.e. the handler never returns).
If I quit my program abruptly while it is running, shouldn't this scenario freeze my machine entirely? Won't my machine, with only one CPU core, go into an infinite loop?
What I mean is: since my handler never returns, won't the CPU be stuck executing the while(1) code alone? (i.e. no other process will get the chance to run, because there won't be any context switch/preemption inside the handler; or can the handler get preempted in the middle of the while(1) loop?)

You are definitely mixing up signal handlers and interrupt handlers, even though the two are handled in similar ways. Unless you are writing kernel code, you will not encounter interrupt handlers directly.
The rules of the game for signal handlers are very similar, though: you should either exit the signal handler or terminate the program (the kernel-land analogue of the latter is halting the whole system). This includes exotic ways of exiting a signal handler, such as longjmp().
From the kernel's point of view, a process stuck in a forever-loop inside a signal handler is no different from a process running the same loop in ordinary code such as main(). Entering a signal handler modifies the signal mask, but that doesn't change things radically: such a process can be stopped, traced, or killed in exactly the same manner as when it is outside a signal handler.
(None of this applies to certain special process classes with elevated privileges. For example, an X Window server can be special because it disables some kernel activity while it is driving the video adapter. But you should already know the relevant safety rules if you are writing such software.)
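To see this concretely, here is a minimal sketch (illustrative, not taken from the question's actual code): it registers a SIGINT handler containing a while(1) loop. Run it and press Ctrl-C; the process spins forever in the handler, yet the rest of the system stays responsive, and the process can still be killed from another terminal.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Handler that never returns: spins forever once SIGINT arrives. */
    static void sigint_handler(int sig)
    {
        (void)sig;
        while (1)
            ;  /* burns this process's CPU time; the scheduler still preempts it */
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_handler = sigint_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);

        printf("pid %d: press Ctrl-C here, then `kill -9 <pid>` from another shell\n",
               (int)getpid());
        for (;;)
            pause();  /* wait for signals */
    }

The handler runs in the process's own context, so it is preempted and killed like any other runaway user-space loop.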

Related

pthread_sigmask not working properly with aio callback threads

My application sometimes terminates because of SIGIO or SIGUSR1, even though I have blocked these signals.
My main thread starts by blocking SIGIO and SIGUSR1, then issues two AIO read operations. These operations use threads to deliver notification of operation status. The notify functions (invoked as detached threads) start another AIO operation (they manipulate the data that has been read and start writing it back to the file), and completion is reported by sending a signal (one operation uses SIGIO, the other uses SIGUSR1) to this process. I receive these signals synchronously by calling sigwait in the main thread (the pattern is sketched below). Unfortunately, my program sometimes crashes, killed by SIGUSR1 or SIGIO, which should be blocked by the signal mask.
One possible approach would be to set SIG_IGN handlers for them, but that doesn't solve the problem: their handlers should not be invoked at all; rather, the signals should be retrieved from the pending set by sigwait in the next iteration of the main program loop.
I have no idea which thread handles these signals in this fatal manner. Is it perhaps init that receives them? Or some shell thread? I have no idea.
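A minimal sketch of the main-thread pattern described above (the AIO setup itself is omitted; compile with -pthread):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    int main(void)
    {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGIO);
        sigaddset(&set, SIGUSR1);

        /* Block the signals before creating any threads, so every
           thread spawned later inherits this mask. */
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        /* ... issue the two aio_read() calls here ... */

        for (;;) {
            int sig;
            sigwait(&set, &sig);  /* synchronously collect pending signals */
            printf("got signal %d\n", sig);
            /* ... dispatch on SIGIO / SIGUSR1 ... */
        }
    }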
I'd hazard a guess that the signal is being received by one of your AIO callback threads, or by the very thread that generates the signal. (Prove me wrong and I'll delete this answer.)
Unfortunately, per the standard, "[t]he signal mask of [a SIGEV_THREAD] thread is implementation-defined." For example, on Linux (glibc 2.12), if I block SIGUSR1 in main and then contrive to run a SIGEV_THREAD handler from an aio_read call, the handler runs with SIGUSR1 unblocked.
This makes SIGEV_THREAD handlers unsuitable for an application that must reliably and portably handle signals.
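A rough way to observe this yourself (a sketch, not a definitive test; any readable file will do for the read target, and the sleep-based synchronisation is a deliberate shortcut; link with -lrt on older glibc):

    #include <aio.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static char buf[64];

    /* SIGEV_THREAD callback: report whether SIGUSR1 is blocked here. */
    static void on_complete(union sigval sv)
    {
        (void)sv;
        sigset_t cur;
        pthread_sigmask(SIG_BLOCK, NULL, &cur);  /* query only, don't modify */
        printf("SIGUSR1 blocked in callback: %s\n",
               sigismember(&cur, SIGUSR1) ? "yes" : "no");
    }

    int main(void)
    {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        pthread_sigmask(SIG_BLOCK, &set, NULL);  /* blocked in the main thread */

        struct aiocb req;
        memset(&req, 0, sizeof req);
        req.aio_fildes = open("/etc/hostname", O_RDONLY);  /* any readable file */
        req.aio_buf = buf;
        req.aio_nbytes = sizeof buf;
        req.aio_sigevent.sigev_notify = SIGEV_THREAD;
        req.aio_sigevent.sigev_notify_function = on_complete;

        aio_read(&req);
        sleep(1);  /* crude wait for the callback; fine for a demo */
        return 0;
    }

On a system behaving as described above, this would print "no", confirming that the callback thread does not inherit the blocked set.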

How is interrupt context "restored" when a interrupt handler is interrupted by another interrupt?

I read some related posts:
(1) From Robert Love: http://permalink.gmane.org/gmane.linux.kernel.kernelnewbies/1791
You cannot sleep in an interrupt handler because interrupts do not have a backing process context, and thus there is nothing to reschedule back into. In other words, interrupt handlers are not associated with a task, so there is nothing to "put to sleep" and (more importantly) "nothing to wake up". They must run atomically.
(2) From Which context are softirq and tasklet in?
If sleeping were allowed, Linux could not schedule them and would eventually hit a kernel panic with a dequeue_task error. The interrupt context does not even have a data structure describing the register info, so it can never be scheduled by Linux. If it were designed to have that structure and could be scheduled, the performance of interrupt handling would be affected.
So, in my understanding, interrupt handlers run in interrupt context and cannot sleep; that is, they cannot perform a context switch the way normal processes can with a backing task.
But an interrupt handler can be interrupted by another interrupt, and when the second interrupt handler finishes its work, control flow jumps back to the first interrupt handler.
How is this "restoring" implemented without a normal context switch? Is it like an ordinary function call, with all the registers and other related state saved on a stack?
The short answer is that an interrupt handler, if it can be interrupted by an interrupt, is interrupted precisely the same way anything else is interrupted by an interrupt.
Say process X is running. If process X is interrupted, then the interrupt handler runs. To the extent there is a context, it's still process X, though it's now running interrupt code in the kernel (think of the state as X->interrupt if you like). If another interrupt occurs, then the interrupt is interrupted, but there is still no special process context. The state is now X->first_interrupt->second_interrupt. When the second interrupt finishes, the first interrupt will resume just as X will resume when the first interrupt finishes. Still, the only process context is process X.
You can describe these as context switches, but they aren't like process context switches. They're more analogous to entering and exiting the kernel -- the process context stays the same but the execution level and unit of code can change.
The interrupt entry code stores some CPU state and registers before entering the real interrupt handler, and restores that information before returning to the interrupted task. Normally this kind of saving and restoring is not called a context switch, since the context of the interrupted process is not changed.
As of 2020, interrupts (hard IRQs here) in Linux generally do not nest on a local CPU. This has been stated at least twice by maintainers actively contributing to the Linux kernel:
From NAPI updates written by Jakub Kicinski in 2020:
…Because normal interrupts don't nest in Linux, the system can't service any new interrupt while it's already processing one.
And from Bootlin in 2022:
…Interrupt handlers are run with all interrupts disabled on the local CPU…
So this question is probably less relevant nowadays, at least for the Linux kernel.

When does a process handle a signal

I want to know when a Linux process handles a signal.
Assuming that the process has installed a signal handler for a signal, I want to know when the process's normal execution flow is interrupted and the signal handler called.
According to http://www.tldp.org/LDP/tlk/ipc/ipc.html, the process handles the signal when it exits from a system call. This would mean that a normal instruction like a = b + c (or its equivalent machine code) would not be interrupted by a signal.
Also, there are system calls that get interrupted (failing with EINTR, or getting restarted) upon receiving a signal. This means the signal is processed even before the system call completes, which seems to conflict with what I said in the previous paragraph.
So, I am not clear about when the signal is processed and in which process states it is handled. Is it:
Anytime control returns from kernel space to user space, or
Anytime the process is in user space, or
Anytime the process is scheduled for execution by the scheduler?
Thanks!
According to http://www.tldp.org/LDP/tlk/ipc/ipc.html, the process handles the signal when it exits from a system call. This would mean that a normal instruction like a = b + c (or its equivalent machine code) would not be interrupted by a signal.
Well, if that were the case, a CPU-intensive process would not obey the process scheduler. In fact, the scheduler can interrupt a process at any point once its time quantum has elapsed (unless it is a FIFO real-time process).
A more correct statement: one point at which a signal is delivered to the process is when control flow leaves kernel mode to resume executing user-mode code. That does not necessarily involve a system call.
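The EINTR behaviour mentioned in the question fits this model: the blocking system call is aborted inside the kernel, and the handler runs on the way back out to user mode. A minimal sketch (assuming stdin is a terminal with no input pending; the handler is installed without SA_RESTART so the call is not transparently restarted):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_alarm(int sig) { (void)sig; /* exists only to interrupt read() */ }

    int main(void)
    {
        /* sigaction without SA_RESTART: blocking calls fail with EINTR. */
        struct sigaction sa = { 0 };
        sa.sa_handler = on_alarm;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);

        alarm(2);  /* deliver SIGALRM in 2 seconds */

        char c;
        ssize_t n = read(STDIN_FILENO, &c, 1);  /* blocks waiting for input */
        if (n < 0 && errno == EINTR)
            printf("read() was interrupted by a signal: EINTR\n");
        return 0;
    }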
A lot of the semantics of signal handling are documented (for Linux, anyway; other OSes probably have something similar, though not necessarily in the same spot) in the signal(7) manual page, which, if manual pages are installed on your system, can be accessed like this:
man 7 signal
If manual pages are not installed, online copies are pretty easy to find.

Implementation of Signals under Linux and Windows?

I am not new to the use of signals in programming. I mostly work in C/C++ and Python.
But I am interested in knowing how signals are actually implemented in Linux (or Windows).
Does the OS check, after each CPU instruction, some signal descriptor table for registered signals left to process? Or is the process manager/scheduler responsible for this?
Since signals are asynchronous, is it true that a CPU instruction can be interrupted before it completes?
The OS definitely does not check after each and every instruction. No way. Too slow.
When the CPU encounters a problem (like division by 0, access to a restricted resource, or a memory location that is not backed by physical memory), it generates a special kind of interrupt called an exception (not to be confused with the high-level-language exception abstraction of C++/Java/etc.).
The OS handles these exceptions. If it is so desired, and if it is possible, it can reflect an exception back into the process from which it originated. The so-called Structured Exception Handling (SEH) in Windows is this kind of reflection. C signals should be implemented using the same mechanism.
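On Linux, for instance, a hardware divide-by-zero exception is reflected into the process as SIGFPE. A minimal sketch of observing that (illustrative; returning normally from a SIGFPE handler is undefined behaviour, so the handler exits instead):

    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void fpe_handler(int sig)
    {
        (void)sig;
        /* Only async-signal-safe calls in a handler: write() and _exit(). */
        static const char msg[] = "caught SIGFPE (hardware exception reflected as a signal)\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(EXIT_FAILURE);
    }

    int main(void)
    {
        signal(SIGFPE, fpe_handler);

        volatile int zero = 0;
        volatile int result = 1 / zero;  /* CPU raises the exception; kernel delivers SIGFPE */
        (void)result;

        return 0;  /* never reached */
    }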
On the systems I'm familiar with (and I can't see why it should be much different elsewhere), signal delivery is done when the process returns from the kernel to user mode.
Let's consider the single-CPU case first. There are three sources of signals:
the process sends a signal to itself
another process sends the signal
an interrupt handler (network, disk, USB, etc.) causes a signal to be sent
In all those cases the target process is not running in userland but in kernel mode: it got there through a system call, through a context switch (since the other process couldn't send a signal unless our target process wasn't running), or through an interrupt handler. So signal delivery is a simple matter of checking for deliverable signals just before returning from kernel mode to userland.
In the multi-CPU case, if the target process is running on another CPU, it's just a matter of sending an inter-processor interrupt to the CPU it's running on. The interrupt does nothing other than force the other CPU into kernel mode and back, so that signal processing can be done on the way back.
A process can send a signal to another process, and a process can register its own signal handler to handle a signal. SIGKILL and SIGSTOP are two signals that cannot be caught.
1) While a process is executing a signal handler, it blocks that same signal. That means that if another instance of the same signal arrives while the handler is running, the handler is not invoked again; instead, a note is made that the signal has arrived (a pending signal). Once the running handler finishes, the pending signal is handled. If you do not want the pending signal to be handled, you can ignore the signal.
The problem with this scheme: assume the following.
process A has registered a signal handler for SIGUSR1.
1) process A gets SIGUSR1 and executes signalhandler()
2) process A gets SIGUSR1
3) process A gets SIGUSR1
4) process A gets SIGUSR1
When step (2) occurs, the signal is marked as pending; it needs to be served. But when step (3) occurs, the signal is simply lost, because only one bit is available to mark each standard signal as pending. To avoid this problem, i.e. if we don't want to lose signals, we can use real-time signals, which are queued instead of coalesced (see the sketch after this answer).
2) Handlers for different signals can nest. For example:
1) the process is in the middle of the signal handler for SIGUSR1,
2) now it gets another signal, SIGUSR2,
3) it suspends the SIGUSR1 handler and runs the SIGUSR2 handler,
and once it is done with SIGUSR2, it resumes the SIGUSR1 handler.
3) IMHO, what I remember about when a process checks whether any signal has arrived is:
1) when a context switch happens (other answers here pin it down more precisely: on the return from kernel mode to user mode).
Hope this helps to some extent.
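A sketch of the coalescing behaviour from point 1, contrasted with a queued real-time signal (the signal choices and counts are illustrative):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t std_count, rt_count;

    static void on_std(int sig) { (void)sig; std_count++; }
    static void on_rt(int sig)  { (void)sig; rt_count++;  }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sigemptyset(&sa.sa_mask);
        sa.sa_handler = on_std;
        sigaction(SIGUSR1, &sa, NULL);
        sa.sa_handler = on_rt;
        sigaction(SIGRTMIN, &sa, NULL);

        /* Block both signals, send each one four times, then unblock:
           the standard signal coalesces into a single pending bit,
           while the real-time signal is queued. */
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        sigaddset(&set, SIGRTMIN);
        sigprocmask(SIG_BLOCK, &set, NULL);

        for (int i = 0; i < 4; i++) {
            kill(getpid(), SIGUSR1);
            kill(getpid(), SIGRTMIN);
        }

        sigprocmask(SIG_UNBLOCK, &set, NULL);  /* pending signals delivered here */
        printf("SIGUSR1 handler ran %d time(s), SIGRTMIN handler ran %d time(s)\n",
               (int)std_count, (int)rt_count);
        return 0;  /* typically prints 1 and 4 */
    }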

Linux/vxworks signals

I came across the following in a VxWorks manual and was wondering why this is the case.
What types of things do signals do that make them undesirable?
In applications, signals are most appropriate for error and exception handling, and not for general-purpose inter-task communication.
The main issue with signals is that signal handlers are registered on a per-process/memory-space basis (in VxWorks, the kernel represents one memory space, and each RTP is a different memory space).
This means that regardless of the thread/task context, the same signal handler gets executed (for a given process). This can cause problems with side effects if your signal handler is not well behaved.
For example, if your signal handler uses a mutex to protect a shared resource, this can cause nasty problems, or at least unexpected behavior:
Task A              Task B              Signal Handler
Take Mutex
...
Gets preempted
                    does something
                    ....
                    <SIGNAL ARRIVES> -> Take Mutex (blocks)
Resumes
....
Give Mutex
                                     -> Resumes Handler
I'm not sure the example above really conveys what I'm trying to say, so here is the same hazard in its simplest single-task form.
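The classic degenerate case is a self-deadlock: the task takes a mutex, the signal arrives, and the handler tries to take the very mutex its own task already holds. A minimal POSIX sketch (the non-async-signal-safe locking in the handler is the deliberate mistake; compile with -pthread):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void handler(int sig)
    {
        (void)sig;
        pthread_mutex_lock(&lock);    /* BUG: main may already hold it; typically deadlocks */
        /* ... touch the "protected" shared resource ... */
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        signal(SIGUSR1, handler);

        pthread_mutex_lock(&lock);    /* critical section in normal code */
        raise(SIGUSR1);               /* signal arrives while the mutex is held */
        pthread_mutex_unlock(&lock);  /* never reached: the handler blocked above */

        puts("not reached");
        return 0;
    }

This is why handlers are usually restricted to async-signal-safe operations, such as setting a flag or writing a byte to a pipe.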
Here are some other characteristics of signals:
The handler is not executed until the task/process is scheduled. Just because you sent the signal doesn't mean the handler will execute right away.
There is no guarantee as to which task/thread will execute the handler; any thread/task in the process could run it (whichever executes first). VxWorks has ways around this.
Note that the above applies only to asynchronous signals sent via a kill call.
An exception generates a synchronous signal, which WILL be handled right away in the current context.
