IRQ Handling from User Space in Linux

I'm writing a driver for a synthesized device in an FPGA. The device has several IRQs and I have requested them in my driver:
irq = platform_get_irq(pdev, 0);  /* Linux IRQ number for the first IRQ resource */
rc = request_irq(irq, &Custom_driver_handler, IRQF_TRIGGER_RISING, DRIVER_NAME, base_addr);
My problem is that I want the irq_handler to call a function in a user-space application. Is there any way to call my user-space application from the driver's irq_handler in kernel space?
I know I could set a flag in the driver and mmap its address from the user-space application to poll it, but I would like to know whether there is a faster or more correct way.
Thank you in advance

There are several ways of invoking user-space functions from the kernel, usually called upcalls: http://lkml.iu.edu/hypermail/linux/kernel/9809.3/0922.html; check also https://lwn.net/Articles/127698/ "Handling interrupts in user space" and the http://wiki.tldp.org/kernel_user_space_howto overview from 2008, in particular the part "Sending Signals from the Kernel to the User Space".
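For illustration, the "sending signals from the kernel to user space" approach can look roughly like this inside an IRQ handler (a sketch only: how the PID gets registered is not shown, the handler name is made up, and the siginfo types differ between kernel versions):

/* Sketch: user space registers its PID with the driver beforehand (e.g. via
 * a write or an ioctl, not shown); the IRQ handler then queues a signal to
 * that task.  Newer kernels use struct kernel_siginfo; older ones use
 * struct siginfo. */
#include <linux/interrupt.h>
#include <linux/sched/signal.h>
#include <linux/pid.h>
#include <linux/string.h>

static pid_t registered_pid;            /* set from user space beforehand */

static irqreturn_t custom_driver_handler(int irq, void *dev_id)
{
        struct kernel_siginfo info;
        struct task_struct *task;

        memset(&info, 0, sizeof(info));
        info.si_signo = SIGUSR1;
        info.si_code  = SI_QUEUE;

        rcu_read_lock();
        task = pid_task(find_vpid(registered_pid), PIDTYPE_PID);
        if (task)
                send_sig_info(SIGUSR1, &info, task);    /* wake the user process */
        rcu_read_unlock();

        return IRQ_HANDLED;
}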
To make writing such drivers easier, there is now the UIO framework in the kernel: https://unix.stackexchange.com/questions/136274/can-i-achieve-functionality-similar-to-interrupts-in-linux-userspace https://lwn.net/Articles/232575/ https://yurovsky.github.io/2014/10/10/linux-uio-gpio-interrupt/ https://www.osadl.org/fileadmin/dam/rtlws/12/Koch.pdf http://www.hep.by/gnu/kernel/uio-howto/
With UIO you can block or poll on a special file descriptor to wait for an interrupt (block by using the read() syscall; poll with the poll() syscall): https://lwn.net/Articles/232575/
On the user space side, the first UIO-handled device will show up as /dev/uio0 (assuming a normal udev setup). The user-space driver will open the device. Reading the device returns an int value which is the event count (number of interrupts) seen by the device; if no interrupts have come in since the last read, the operation will block until an interrupt happens (though non-blocking operation is supported in the usual way as well). The file descriptor can be passed to poll().
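A minimal user-space sketch of that pattern (assuming the device shows up as /dev/uio0):

/* Block on /dev/uio0 until an interrupt arrives; read() returns the total
 * interrupt count as a 32-bit integer. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/dev/uio0", O_RDWR);
        if (fd < 0)
                return 1;

        for (;;) {
                struct pollfd pfd = { .fd = fd, .events = POLLIN };
                uint32_t count;

                if (poll(&pfd, 1, -1) < 0)        /* wait for an interrupt */
                        break;
                if (read(fd, &count, sizeof(count)) != sizeof(count))
                        break;
                printf("interrupt #%u\n", count);

                /* If the driver implements irqcontrol (uio_pdrv_genirq
                 * convention), writing 1 re-enables the interrupt. */
                uint32_t one = 1;
                write(fd, &one, sizeof(one));
        }
        close(fd);
        return 0;
}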
include/linux/uio_driver.h has been available in the Linux kernel for many years; it is there in the 3.x and 4.x kernel series.
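On the kernel side, hooking the FPGA interrupt into UIO can be as small as filling in a struct uio_info; a rough sketch (the device name and probe wiring are assumptions, and details vary a little between kernel versions):

#include <linux/uio_driver.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>

static irqreturn_t my_uio_handler(int irq, struct uio_info *info)
{
        /* Acknowledge/mask the interrupt in the FPGA registers here if the
         * hardware needs it; UIO then bumps the event counter that user
         * space sees via read(). */
        return IRQ_HANDLED;
}

static struct uio_info my_uio_info = {
        .name      = "my_fpga_device",
        .version   = "0.1",
        .irq_flags = IRQF_TRIGGER_RISING,
        .handler   = my_uio_handler,
};

static int my_probe(struct platform_device *pdev)
{
        int irq = platform_get_irq(pdev, 0);

        if (irq < 0)
                return irq;
        my_uio_info.irq = irq;
        return uio_register_device(&pdev->dev, &my_uio_info);
}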

Using libiio trigger to propagate IRQ to userspace

I browsed most of that literature already, and the kernel side of it is very much covered, understood and assimilated.
However, my question is basically: how can I be alerted, for example through a callback function, when an interrupt occurs in kernel space while my daemon lives in userspace?
IRQ occurring -------> upper half of the interrupt handler (puts the timestamp in the buffer) ---------> lower half of the interrupt handler (reads data and adds the timestamp to the buffer) -----------> ???? userspace daemon (async callback of some description) that will write a log to the HD.
I have everything up to the daemon part written and working. I understand that through libiio you can assign a trigger to a particular channel or attribute, but that is where I get stuck.
Literature on libiio (most notably its wiki) has a chapter on triggers that is four lines long... not very helpful, to say the least.
iioinfo lists the devices present in the LSM6DSLTR (18 devices including two triggers), and I have been unable to assign the triggers to any device they are not already assigned to ("No such file or directory"), which I gather refers to the absence of the trigger directory in sysfs for that particular device.
I also gather that the sysfs directory is, of course, slaved to the kernel device tree, and that that part of the device tree is created as the driver is loaded.
So there must be a way to alter that part of the sysfs filesystem by asking the kernel driver to provision a device in this context that implements the trigger directory, so we can assign a trigger to it.
The second part will be to slave that trigger to an actual IRQ (42 in my case, as listed in /proc/interrupts) and be alerted in the userspace daemon when a threshold has been reached in terms of either acceleration or vibration...
Well, this is the nuts and bolts of what I am trying to achieve. The last part will be to write a few registers on the chip to configure how and when IRQs will occur for any given integrated device.

How can this client-server communication method save two kernel crossings?

In an OS book, when it talks about client-server communication, it says:
Client-server communication is a common pattern in many systems, and so one can ask: how can we improve its performance? One step is to recognize that both the client and the server issue a write immediately followed by a read, to wait for the other side to reply; at the cost of adding a system call, these can be combined to eliminate two kernel crossings per round trip.
I wonder how "issue a write immediately followed by a read" can save 2 kernel crossings per round trip.
A write issues a system call into the kernel, causing a kernel crossing from user mode to kernel mode. When the write finishes, the OS returns to user code, going from kernel mode back to user mode.
Then, read is called, and causes a kernel crossing from user mode to kernel mode, and then it returns to user-code, from kernel mode to user mode.
So which kernel crossings are saved? Does it mean that when the write finishes, it does not return to user code and user mode, but instead directly runs the read in kernel mode?
As far as I understand the OS book, it is describing a potential optimization: the OS could offer a syscall that does the write and the read at once, e.g. a hypothetical int write_read(int fd, char *write_buf, size_t write_len, char *read_buf, size_t *read_len). There is no such call in the Linux kernel. Counting crossings: a separate write and a separate read each cost one user-to-kernel and one kernel-to-user crossing, four per round trip; a combined call costs only two, which is where the two saved crossings come from.
Modern kernels do not use interrupts for syscalls, so the optimization would not help much. Moreover, modern applications that are performance-critical usually use some kind of asynchronous, non-blocking handling, so the proposed optimization would be useless for them anyway. A further problem with the optimization would be error reporting: if something failed, the caller could not easily recognize whether the read failed or the write failed.
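To make the counting concrete, here is the client side of one round trip annotated with the crossings involved; the combined write_read() is the hypothetical call described above, not a real Linux syscall:

/* One client round trip over a pipe/socket fd, with the crossings counted. */
#include <unistd.h>

void round_trip(int fd, char *req, size_t req_len, char *rsp, size_t rsp_len)
{
        /* Separate syscalls: 4 crossings per round trip. */
        write(fd, req, req_len);   /* crossing 1: user->kernel, 2: kernel->user */
        read(fd, rsp, rsp_len);    /* crossing 3: user->kernel, 4: kernel->user */

        /* Hypothetical combined syscall: 2 crossings per round trip,
         * i.e. two fewer than above.
         *
         * write_read(fd, req, req_len, rsp, &rsp_len);
         */
}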

How does the kernel find the right process to send a hardware interrupt to?

At first, these two things might exist:
The system has a table for responding to hardware interrupts.
A process has a table for responding to interrupts sent/set by the kernel.
If I hit a key on the keyboard, the keyboard will send an interrupt to the CPU/kernel, and the kernel will process this interrupt. But maybe the currently running process is not the foreground one in front of our eyes; it could be a daemon process or something else. So how does the kernel know which process should read/respond to our key stroke?
Thanks!
Hardware interrupts are only handled by the kernel. The device-specific event is processed, and if there is an event or data to share with user space, the driver makes that available.
In your example of a keyboard, the device driver services the interrupt, extracting any data and clearing the condition. The input event representing the data pulled from the hardware is then sent to the input subsystem. A user-space process must have the exposed input device handle open and be blocked on a read; the input subsystem within the kernel manages this.
It's very common to see the same in other drivers: expose a device handle (e.g. /dev/misc/mydevice) which responds to open/close/read/write/ioctl. When a process performs a read and there is no data, the kernel blocks the calling process, causing it to wait until there is data to satisfy the read. I recommend reading up on kernel device drivers; "Linux Device Drivers" is a great start.
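As a small illustration of the "open the device and block on read" pattern described above, a user-space reader of a Linux input device looks roughly like this (the event node number is an example):

/* Block on an input device node; read() sleeps in the kernel until the
 * keyboard driver/input subsystem has an event to deliver. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
        struct input_event ev;
        int fd = open("/dev/input/event0", O_RDONLY);   /* example node */

        if (fd < 0)
                return 1;
        while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
                if (ev.type == EV_KEY)                  /* key press/release */
                        printf("code=%u value=%d\n", (unsigned)ev.code, ev.value);
        }
        return 0;
}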

Best way to read/write to another block device from kernel mode

I'm writing a simple block device driver to overcome some limitations when porting a previously hardware-based RAID array to Linux's software RAID (mdadm).
This driver will create its own block device, but proxy read/write requests to one or more other block devices (much like mdadm already does).
What is the best way for one kernel mode driver to read and write to another kernel mode (block device) driver?
[EDIT1]:
OK, looking through the mdadm kernel module code, it looks like we need to do as the kernel does: use generic_make_request to hand bios to the other disk drivers that handle the disks in the 'volume'. This avoids any translation from user-mode block device paths (/dev/xyz) to kernel-mode device drivers and keeps the I/O completely in kernel mode.
Now... how to obtain a bio handle from the several /dev/xyz strings passed to my module....
[EDIT2]:
I was looking at this the wrong way: I need to give my driver the major/minor numbers (translate the /dev/xyz path in user mode and hand the dev_t value to the driver via ioctl; from there it can get a reference to the underlying block device).
Well on my way here, but still open to suggestions/recommendations.
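The user-mode half of that EDIT2 approach might look roughly like this (a sketch; MYRAID_IOC_ADD_DEV is a made-up ioctl used only for illustration):

/* Translate /dev/xyz to a dev_t in user space and hand it to the driver. */
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <fcntl.h>

#define MYRAID_IOC_ADD_DEV _IOW('r', 1, dev_t)   /* hypothetical ioctl */

int add_member(int ctl_fd, const char *path)
{
        struct stat st;

        if (stat(path, &st) < 0)
                return -1;
        /* st.st_rdev holds the major/minor of the block device node. */
        return ioctl(ctl_fd, MYRAID_IOC_ADD_DEV, &st.st_rdev);
}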
The answer was to modify the BIO and re-send it off as I've done in this post:
https://unix.stackexchange.com/questions/171800/hp-smartarray-raid5-recovery-on-linux/171801#171801
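In outline, the kernel-mode side of "modify the BIO and re-send it" looks something like the following (a sketch only: the block-layer API changes between versions, and this assumes roughly a 4.x kernel where blkdev_get_by_path(), bio_set_dev() and generic_make_request() are available):

#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/err.h>

static struct block_device *target_bdev;        /* lower device we proxy to */

/* Open the backing device by path ("/dev/xyz" handed in from user space). */
static int proxy_open_target(const char *path)
{
        target_bdev = blkdev_get_by_path(path, FMODE_READ | FMODE_WRITE, NULL);
        return IS_ERR(target_bdev) ? PTR_ERR(target_bdev) : 0;
}

/* Retarget an incoming bio at the lower device and resubmit it. */
static void proxy_forward_bio(struct bio *bio)
{
        bio_set_dev(bio, target_bdev);
        /* remap bio->bi_iter.bi_sector here if the proxy adds an offset */
        generic_make_request(bio);
}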

Does Windows have an interrupt context?

I have recently started reading Linux Kernel Development by Robert Love and I am Love-ing it!
Please read the excerpt below from the book to better understand my question:
A number identifies interrupts and the kernel uses this number to execute a specific interrupt handler to process and respond to the interrupt. For example, as you type, the keyboard controller issues an interrupt to let the system know that there is new data in the keyboard buffer. The kernel notes the interrupt number of the incoming interrupt and executes the correct interrupt handler. The interrupt handler processes the keyboard data and lets the keyboard controller know it is ready for more data...
Now I have dual boot on my machine, and sometimes (in fact, many times) when I type something on Windows, I find myself doing it in what I call Night crawler mode: I am typing and I don't see anything on the screen, and later, after a while, the entire text comes out in one flash; probably the buffer just spits everything out.
Now I don't see this happening on Linux. Is it because of the interrupt context present in Linux and the absence of it in Windows?
BTW, I am still not sure if there is an interrupt context in Windows; Google didn't give me any relevant results for that.
All OSes have an interrupt context; it's a feature/constraint of the CPU architecture -- basically, this is "just the way things work" with computer hardware. Different OSes (and drivers within those OSes) make different choices about what work, and how much work, to do in the interrupt before returning, though. That may be related to your Windows experience, or it may not. There is a lot of code involved in getting a key press translated into screen output, and interrupt handling is only a tiny part.
A number identifies interrupts and the kernel uses this number to execute a specific interrupt handler to process and respond to the interrupt. For example, as you type, the keyboard controller issues an interrupt to let the system know that there is new data in the keyboard buffer. The kernel notes the interrupt number of the incoming interrupt and executes the correct interrupt handler. The interrupt handler processes the keyboard data and lets the keyboard controller know it is ready for more data.
This is a pretty poor description. Things might be different now with USB keyboards, but this seems to discuss what would happen with an old PS/2 connection, where an "8042"-compatible chipset on your motherboard signals on an IRQ line to the CPU, which then executes whatever code is at the address stored in location 9 in the interrupt table (traditionally an array of pointers starting at address 0 in physical memory, though from memory you could change the address, and last time I played with this stuff PCs still had <1MB RAM and used different memory layout modes).
That dispatch process has nothing to do with the kernel... it's the way the hardware works. (The keyboard controller could be asked not to generate interrupts, allowing OS/driver software to "poll" it regularly to see if there happened to be new event data available, but it'd be pretty crazy to use that really).
Still, the code address from the interrupt table will point into the kernel or keyboard driver, and the kernel/driver code will read the keyboard event data from the keyboard controller's I/O port. For these hardware interrupt handlers, a primary goal is to get the data from the device and store it into a buffer as quickly as possible, both to ensure a prompt return from the interrupt to whatever processing was happening, and because the keyboard controller can only handle one event at a time; it needs to be read off into the buffer before the next event arrives.
It's then up to the OS/driver either to provide some kind of input-availability signal to application software, or to wait for the application software to attempt to read more keyboard input, but it can do so in a "whenever you're ready" fashion.
Either way, once an application has time to read and start responding to the input, things can happen that make it take an unexpectedly long time: it could be that the extra keystroke triggers some complex repagination algorithm that takes a long time to run, or that the keystroke results in the program executing code that has been swapped out to disk (check Wikipedia for "virtual memory"), in which case the program may only be able to continue after the hard disk has read part of it back into memory. There are thousands of such edge cases involving window movement, graphics clipping algorithms, etc. that could account for the keyboard-handling code taking a long time to complete, and if other keystrokes have happened meanwhile, they'll be read by the keyboard driver into that buffer, then only "perceived" by the application after the slow/blocking processing completes. It may well be that the processing for all the keystrokes then in the buffer completes much more quickly: for example, if part of the program was swapped in from disk, that part may already be in memory and ready to process the remaining keystrokes.
Why would Linux do better at this than Windows? Mainly because the operating system, drivers and applications tend to be "leaner and meaner": less bloated software (like C++ vs C#/.NET), less wasted memory, and so less swapping and fewer delays.