ioctl vs kernel modules in Linux

I know that kernel modules are used to write device drivers. You can also add new system calls to the Linux kernel and use them to communicate with devices.
I also read that ioctl is a system call used in Linux to implement system calls which are not available in the kernel by default.
My question is, why wouldn't you just write a new kernel module for your device instead of using ioctl? Why would ioctl be useful where kernel modules exist?

You will need to write a kernel driver in either case, but you can choose between adding a new syscall and adding an ioctl.
Let's say you want to add a feature to get the tuner settings for a video capturing device.
If you implement it as a syscall:
You can't just load a module, you need to change the kernel itself
Hundreds of drivers could each add dozens of syscalls, cluttering the syscall table with thousands of global functions that must be kept forever.
For the driver to have any reach, you will need to convince kernel maintainers that this burden is worthwhile.
You will need to upstream the definition into glibc, and people must upgrade before they can write programs for it
If you implement it as an ioctl:
You can build your module for an existing kernel and let users load it, without having to get kernel maintainers involved
All functions are simple per-driver constants in the applicable header file, where they can easily be added or removed
Everyone can start programming with it just by including the header
Since an ioctl is much easier, more flexible, and exactly meant for all these driver-specific function calls, this is generally the preferred method.
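As a hedged sketch of what that looks like in practice (the device name, ioctl magic number, and struct below are made up for illustration), the whole userspace-visible interface is just a header with an _IOR()-style constant, and any program can use it through the generic ioctl(2) syscall:

```c
/* Shared header, e.g. mytuner.h (hypothetical), included by both the
 * driver and userspace programs. */
#include <linux/ioctl.h>

struct tuner_settings {
    unsigned int frequency_khz;
    unsigned int gain;
};

#define TUNER_IOC_MAGIC        'T'
#define TUNER_IOC_GET_SETTINGS _IOR(TUNER_IOC_MAGIC, 1, struct tuner_settings)

/* Userspace side: no new syscall, no glibc change, just the header. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>

int main(void)
{
    struct tuner_settings ts;
    int fd = open("/dev/mytuner0", O_RDONLY);   /* hypothetical device node */

    if (fd >= 0 && ioctl(fd, TUNER_IOC_GET_SETTINGS, &ts) == 0)
        printf("tuner at %u kHz, gain %u\n", ts.frequency_khz, ts.gain);
    return 0;
}
```

Adding or removing such a command is a change to one header and one driver, not to the global syscall table.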

I also read that ioctl is a system call used in Linux to implement system calls which are not available in the kernel by default.
This is incorrect.
System calls are (for Linux) listed in syscalls(2); there are hundreds of them between user space and kernel land, and ioctl(2) is one of them. Read also the wikipage on ioctl, the one on the Unix philosophy, and the Linux Assembler HowTo.
In practice, ioctl is mostly used on device files, for things which are not a read(2) or a write(2) of bytes.
For example, sound is made by writing bytes to /dev/audio, but to change the volume you'll use some ioctl. See also fcntl(2), which plays a similar role.
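A concrete, existing example of such a control operation that is not a byte-stream read or write: asking the terminal driver for its window size via the TIOCGWINSZ ioctl.

```c
#include <stdio.h>
#include <sys/ioctl.h>   /* struct winsize, TIOCGWINSZ */
#include <unistd.h>      /* STDOUT_FILENO */

int main(void)
{
    struct winsize ws;

    /* A control query on the tty device, not a read or write of bytes. */
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0)
        printf("terminal is %hu rows x %hu cols\n", ws.ws_row, ws.ws_col);
    return 0;
}
```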
Input/output could also happen (somewhat indirectly) through mmap(2) and related virtual address space system calls.
For much more, read Advanced Linux Programming and Operating Systems: Three Easy Pieces. Look into Osdev for more hints about coding your own OS.
A kernel module could implement new devices, new ioctls, etc. See kernelnewbies for more. I tend to believe it might sometimes add a few new syscalls (but this was not possible in older Linux kernels such as the 3.x ones).
Linux is mostly open source. Please download the source code and look inside it. See also Linux From Scratch.
IIRC, Linux kernel 1.0 did not have any kernel modules. But that was around 1995.

Related

In OS, why loadable kernel modules (LKMs) don't need to invoke message passing in order to communicate?

My question is about a paragraph, shown below; I can't understand the bold sentence. If it doesn't need to invoke message passing, how does it complete communication between processes?
Modules
Perhaps the best current methodology for operating-system design involves
using loadable kernel modules (LKMs). Here, the kernel has a set of core
components and can link in additional services via modules, either at boot time
or during run time. This type of design is common in modern implementations
of UNIX, such as Linux, macOS, and Solaris, as well as Windows.
The idea of the design is for the kernel to provide core services, while
other services are implemented dynamically, as the kernel is running. Linking
services dynamically is preferable to adding new features directly to the kernel,
which would require recompiling the kernel every time a change was made.
Thus, for example, we might build CPU scheduling and memory management
algorithms directly into the kernel and then add support for different file
systems by way of loadable modules.
The overall result resembles a layered system in that each kernel section
has defined, protected interfaces; but it is more flexible than a layered system,
because any module can call any other module. The approach is also similar to
the microkernel approach in that the primary module has only core functions
and knowledge of how to load and communicate with other modules; but it
is more efficient, because modules do not need to invoke message passing in
order to communicate.
Linux uses loadable kernel modules, primarily for supporting device
drivers and file systems. LKMs can be “inserted” into the kernel as the system is started (or booted) or during run time, such as when a USB device is
plugged into a running machine. If the Linux kernel does not have the necessary driver, it can be dynamically loaded. LKMs can be removed from the
kernel during run time as well. For Linux, LKMs allow a dynamic and modular
kernel, while maintaining the performance benefits of a monolithic system. We
cover creating LKMs in Linux in several programming exercises at the end of
this chapter.
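For context, a minimal sketch of such a loadable module (just the init/exit boilerplate, nothing device-specific) looks like this; it can be built out of tree, inserted with insmod and removed with rmmod at run time:

```c
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    pr_info("hello module loaded\n");   /* runs when the module is inserted */
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello module unloaded\n"); /* runs when the module is removed */
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
```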
In OS, why loadable kernel modules (LKMs) don't need to invoke message passing in order to communicate?
The simple answer is that, because they're loaded into kernel space and dynamically linked, the kernel can use "mostly normal" function calls instead of anything more expensive (message passing, remote procedure calls, ...) to communicate with them.
Note: Typically (especially for *nix systems) a driver will provide a set of function pointers to the kernel (e.g. maybe one for open(), one for read(), one for ioctl(), etc.) in some kind of "device context" structure, allowing the kernel to call the driver's functions via the function pointers (e.g. result = deviceContext->open(...);).
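In Linux that "device context" structure is typically struct file_operations. Here is a minimal sketch (the mydrv_* handler names are hypothetical) of a character driver handing its function pointers to the kernel, which the kernel then calls through directly:

```c
#include <linux/fs.h>
#include <linux/module.h>

static int mydrv_open(struct inode *inode, struct file *filp)
{
    return 0;                       /* nothing to set up in this sketch */
}

static ssize_t mydrv_read(struct file *filp, char __user *buf,
                          size_t count, loff_t *ppos)
{
    return 0;                       /* pretend end-of-file */
}

/* The kernel reaches the driver through these pointers with plain
 * function calls, i.e. no message passing or isolation barrier. */
static const struct file_operations mydrv_fops = {
    .owner = THIS_MODULE,
    .open  = mydrv_open,
    .read  = mydrv_read,
};
```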
"The approach is also similar to the microkernel approach in that the primary module has only core functions and knowledge of how to load and communicate with other modules; but it is more efficient, because modules do not need to invoke message passing in order to communicate."
This paragraph has the potential to give you a false impression. For extensibility alone, modular monolithic kernels are similar to micro-kernels (and both are a lot more extensible than a "literally monolithic (one piece, like stone)" kernel). For other things (e.g. security) modular monolithic kernels are extremely dissimilar to micro-kernels.
For Linux specifically; you can think of it as almost 30 million lines (growing at a rate of over 1 million lines per year) of potential security vulnerabilities running at the highest privilege level with full access to every scrap of data, with an average of about 150 discovered critical vulnerabilities per year (and who knows how many undiscovered critical vulnerabilities).
One of the main goals of micro-kernels is to place isolation barriers between the "kernel core" and everything else; so that you might end up with several thousand lines of kernel that doesn't grow (and a significant improvement in security). It's those isolation barriers that require less efficient communication (e.g. message passing).
"...but it is more efficient, because modules do not need to invoke message passing in order to communicate."
This could be rephrased more correctly as "...but it is more efficient, because modules do not need to pass through an isolation barrier."
Note that message passing is merely one way to pass through an isolation barrier - there's shared memory, signals, pipes, sockets, remote procedure calls, etc. Nothing says a micro-kernel has to use message passing and you could design a micro-kernel that does not use message passing at all.

Linux kernel: Find all drivers reachable via syscalls

I am comparing a mainline Linux kernel source with a modified copy of the same source that has many drivers added. A little background: That modified source is an Android kernel source, it contains many drivers added by the vendor, SoC manufacturer, Google etc.
I am trying to identify all drivers added in the modified source that are reachable from userspace via any syscalls. I'm looking for some systematic or ideally automatic way to find all these to avoid the manual work.
For example, char device drivers are of interest, since I could perform some openat, read, write, ioctl and close syscalls on them if there is a corresponding device file. To find new character device drivers, I could first find all new files in the source tree and then grep them for struct file_operations. But besides char drivers, what else is there that I need to look for?
I know that the syscalls mentioned above do some kind of "forwarding" to the respective device driver associated with the file. But are there other syscalls that do this kind of forwarding? I think I would have to focus on all these syscalls, right?
Is there something I can grep for in source files that indicates that syscalls can lead there? How should I go about this to find all these drivers?
Update (narrowing down):
I am targeting specific devices (e.g. the Huawei P20 Lite), so I know the relevant architecture and hardware. But for the sake of this question, we can just assume that the hardware for whatever driver is present. It doesn't really matter in my case if I invoke a driver and it reports back that no corresponding hardware is present, as long as I can invoke the driver.
I am only looking for drivers directly reachable via syscalls. By directly reachable I mean drivers designed to have some syscall interface with userspace. Yes, syscalls not aimed at a certain driver may still indirectly trigger code in that driver, but these indirect effects can be neglected.
Maybe some background on my objective clarifies: I want to fuzz-test the found drivers using Syzkaller. For this, I would create descriptions of the syscalls usable to fuzz each driver that Syzkaller parses.
I'm pretty sure there is no way to do this programmatically. Any attempt to do so would hit up against a couple of problems:
The drivers that are called in a given case depend on the hardware. For example, on my laptop, the iwlwifi driver will be reachable via network syscalls, but on a server that driver won't be used.
Virtually any code loaded into the kernel is reachable from some syscall if the hardware is present. Drivers interact with hardware, which in turn either interacts with users, external devices, or networks, and all of these operations are reachable by syscalls. People don't write drivers that don't do anything.
Even drivers that aren't directly reachable by a system call can affect execution. For example, a driver for a true RNG would be able to affect execution by changing the behavior of the system PRNG, even if it weren't accessible by /dev/hwrng.
So for a generic kernel that can run on any hardware of a given architecture, it's going to be pretty hard to exclude any driver from consideration. If your hope is to trace the execution of the code by some programmatic means without actually executing it, then you're going to need to solve the halting problem.
Sorry for the bad news.

Linux kernel assembly and logic

My question is somewhat weird but I will do my best to explain.
Looking at the languages the Linux kernel is written in, I see C and assembly, even though I read a text that said [quote] the second iteration of Unix was written completely in C [/quote].
I thought that was misleading, but when I said that the kernel has assembly code I got two questions right off the start:
What assembly files are in the kernel and what's their use?
Assembly is architecture dependent, so how can Linux be installed on more than one CPU architecture?
And if the Linux kernel is truly written completely in C, then how can it get the GCC needed for compiling?
I did a complete find / -name *.s
and just got one assembly file (asm-offset.s) somewhere in /usr/src/linux-headers-`uname -r`/.
Somehow I don't think that is what makes GCC work, so how can Linux work without assembly? Or if it does use assembly, where is it, and how can it be stable when it depends on the architecture?
Thanks in advance
1. Why is assembly used?
Because there are certain things that can be done only in assembly, and because assembly results in faster code. For example, "you can get access to unusual programming modes of your processor (e.g. 16 bit mode to interface startup, firmware, or legacy code on Intel PCs)".
Read here for more reasons.
2. What assembly files are used?
From: https://www.kernel.org/doc/Documentation/arm/README
"The initial entry into the kernel is via head.S, which uses machine
independent code. The machine is selected by the value of 'r1' on
entry, which must be kept unique."
From https://www.ibm.com/developerworks/library/l-linuxboot/
"When the bzImage (for an i386 image) is invoked, you begin at ./arch/i386/boot/head.S in the start assembly routine (see Figure 3 for the major flow). This routine does some basic hardware setup and invokes the startup_32 routine in ./arch/i386/boot/compressed/head.S. This routine sets up a basic environment (stack, etc.) and clears the Block Started by Symbol (BSS). The kernel is then decompressed through a call to a C function called decompress_kernel (located in ./arch/i386/boot/compressed/misc.c). When the kernel is decompressed into memory, it is called. This is yet another startup_32 function, but this function is in ./arch/i386/kernel/head.S."
Apart from these assembly files, a lot of Linux kernel code uses inline assembly.
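As an illustration of that last point, here is a hedged, self-contained example (not taken from the kernel tree) of GCC extended inline assembly of the kind the kernel relies on, reading the x86 time-stamp counter:

```c
#include <stdint.h>
#include <stdio.h>

/* Read the x86 time-stamp counter with the rdtsc instruction; there is
 * no portable C equivalent, which is exactly why such helpers are
 * written in (inline) assembly. x86-only. */
static inline uint64_t read_tsc(void)
{
    uint32_t lo, hi;
    __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    printf("tsc = %llu\n", (unsigned long long)read_tsc());
    return 0;
}
```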
3. Architecture dependence?
And you are right about it being architecture dependent; that's why the Linux kernel code is ported to different architectures.
Linux porting guide
List of supported arch
Things written mainly in assembly in Linux:
Boot code: boots up the machine and sets it up in a state in which it can start executing C code (e.g: on some processors you may need to manually initialize caches and TLBs, on x86 you have to switch to protected mode, ...)
Interrupts/Exceptions/Traps entry points/returns: there you need to do very processor-specific things, e.g: saving registers and reenabling interrupts, and eventually restoring registers and properly returning to user mode. Some exceptions may be handled entirely in assembly.
Instruction emulation: some CPU models may not support certain instructions, may not support unaligned data access, or may not have an FPU. An option is using emulation when getting the corresponding exception.
VDSO: the VDSO is a virtual library that the kernel maps into userspace. It allows e.g: selecting the optimal syscall sequence for the current CPU (on x86 use sysenter/syscall instead of int 0x80 if available), and implementing certain system calls without requiring a context switch (e.g: gettimeofday()).
Atomic operations and locks: maybe in the future some of these could be written using C11 support for atomic operations.
Copying memory from/to user mode: Besides using an optimized copy, these check for out-of-bounds access.
Optimized routines: the kernel has optimized version of some routines, e.g: crypto routines, memset, clear_page, csum_copy (checksum and copy to another place IP data in one pass), ...
Support for suspend/resume and other ACPI/EFI/firmware thingies
BPF JIT: newer kernels include a JIT compiler for BPF expressions (used for example by tcpdump, seccomp mode 2, ...)
...
To support different architectures, Linux has assembly code (re-)written for each architecture it supports (and sometimes, there are several implementations of some code for different platforms using the same CPU architecture). Just look at all the subdirectories under arch/
Assembly is needed for a couple of reasons.
There are many instructions that are needed for the operation of an operating system that have no C equivalent, at least on most processors. A good example on Intel x86/64 processors is the iret instruction, which returns from hardware/software interrupts. These interrupts are key to handling hardware events (like a keyboard press) and, on older processors, system calls from programs.
A computer does not start up in a state that is immediately ready for execution of C code. For an Intel example, when execution gets to the startup routine the processor may not be in 32-bit mode (or 64-bit mode), and the stack required by C also may not be ready. There are some other features present in some processors (like paging) which need to be turned on from assembly as well.
However, most of the Linux kernel is written in C, which interfaces with some platform specific C/assembly code through standardized interfaces. By separating the parts in this way, most of the logic of the Linux kernel can be shared between platforms. The build system simply compiles the platform independent and dependent parts together for specific platforms, which results in different executable kernel files for different platforms (and kernel configurations for that matter).
Assembly code in the kernel is generally used for low-level hardware interaction that can't be done directly from C. It acts as a platform-specific foundation that's used by higher-level parts of the kernel that are written in C.
The kernel source tree contains assembly code for a variety of systems. When you compile a kernel for a particular type of system (such as an x86 PC), only the appropriate assembly code for that platform is included in the build process.
Linux is not the second version of Unix (or Unix in general). It is Unix compatible, but Unix and Linux have separate histories and, in terms of code base (of their kernels), are completely separate. Linus Torvalds' idea was to write an open source Unix.
Some of the lower level things like some of the architecture dependent parts of memory management are done in assembly. The old (but still available) Linux kernel API for x86, int 0x80, is implemented in assembly. There are probably other places in the kernel that are implemented in assembly, but I don't know any others.
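To make the int 0x80 interface concrete, here is a hedged sketch (32-bit x86 only; compile with -m32 on a 64-bit host) of invoking it directly from C. Syscall number 4 is write() on i386, and the kernel-side entry point that receives this trap is one of the places written in assembly:

```c
#include <stddef.h>

/* Raw i386 syscall: eax = syscall number, ebx/ecx/edx = arguments,
 * the result comes back in eax. Number 4 is write() on 32-bit x86. */
static long sys_write_int80(int fd, const char *buf, size_t len)
{
    long ret;
    __asm__ volatile("int $0x80"
                     : "=a"(ret)
                     : "a"(4), "b"(fd), "c"(buf), "d"(len)
                     : "memory");
    return ret;
}

int main(void)
{
    static const char msg[] = "hello via int 0x80\n";
    sys_write_int80(1, msg, sizeof(msg) - 1);
    return 0;
}
```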
When you compile the kernel, you select an architecture to target. Depending on the target, the right assembly files for that architecture are included in the build.
The reason you don't find anything is that you're searching the headers, not the sources. Download a tarball from kernel.org and search that.

Debugging kernel hang because of IOCTL calls

I am trying to make a kernel module which is working on 2.6.32 kernel to work on 3.6 kernel. We use IOCTL calls to update structures in Linux Kernel Module. These calls are working fine in 2.6.32 kernel.
When I try the same in the 3.6 kernel I am facing a kernel hang whenever ioctl calls are made from the user-space application. It's a socket-based interface, not a file-based interface, hence we use the ioctl under struct proto_ops.
How can I debug this scenario, given that there is no core dump generated? To copy data from userspace I am using copy_from_user.
Any pointers for debugging this scenario would be very helpful
ioctl() was one of the last remaining parts of the kernel to run under the Big Kernel Lock (BKL). In the past, the usage of the BKL made it possible for long-running ioctl() methods to create long latencies for unrelated processes.
What follows is an explanation of the patch that introduced unlocked_ioctl and compat_ioctl in 2.6.11. The removal of the old ioctl field happened a lot later, in 2.6.36.
Explanation: When ioctl was executed, it took the Big Kernel Lock (BKL), so nothing else could execute at the same time. This is very bad on a multiprocessor machine, so there was a big effort to get rid of the BKL. First, unlocked_ioctl was introduced. It lets each driver writer choose what lock to use instead. This can be difficult, so there was a period of transition during which old drivers still worked (using ioctl) but new drivers could use the improved interface (unlocked_ioctl). Eventually all drivers were converted and ioctl could be removed.
compat_ioctl is actually unrelated, even though it was added at the same time. Its purpose is to allow 32-bit userland programs to make ioctl calls on a 64-bit kernel. The meaning of the last argument to ioctl depends on the driver, so there is no way to do a driver-independent conversion.
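As a hedged sketch of what the file_operations side of that transition looks like (handler and command names below are hypothetical, not from the asker's driver):

```c
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/ioctl.h>
#include <linux/module.h>
#include <linux/uaccess.h>

struct mydev_config { int mode; };                         /* hypothetical */
#define MYDEV_SET_CONFIG _IOW('M', 1, struct mydev_config) /* hypothetical */

static struct mydev_config config;

/* New-style handler: the BKL is no longer taken for us, and the signature
 * differs from the old .ioctl hook (no struct inode *, returns long).
 * Take your own lock here if the handler touches shared driver state. */
static long mydev_unlocked_ioctl(struct file *filp, unsigned int cmd,
                                 unsigned long arg)
{
    switch (cmd) {
    case MYDEV_SET_CONFIG:
        if (copy_from_user(&config, (void __user *)arg, sizeof(config)))
            return -EFAULT;
        return 0;
    default:
        return -ENOTTY;
    }
}

static const struct file_operations mydev_fops = {
    .owner          = THIS_MODULE,
    .unlocked_ioctl = mydev_unlocked_ioctl,   /* replaces the old .ioctl */
};
```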
Reference: The new way of ioctl() by Jonathan Corbet

How can I get edge events via GPIO on Linux without a busy-loop?

I'm working on a system with embedded Linux (kernel 2.6.31).
It has an AT91SAM9G20 chip inside, and some of the pins are forwarded to the outside.
Now I want to use them as GPIO Inputs.
I read the gpio.txt documentation about using the GPIOs via the filesystem, and that works very well so far. I connected some switches to the GPIO pins and I can see the result in /sys/class/gpio/gpioX/value. But now I'd like to react to a change without busy-waiting in a loop (e.g. echo "Switch1 was pressed").
I guess I need interrupts here, but I couldn't find out how to use them without writing my own kernel driver. I'm relatively new to Linux and C (I normally program in Java), so I'd like to handle the interrupts via sysfs too. But my problem is that there is no "edge" file in my GPIO directory (I guess because it only exists since kernel version 2.6.33+). Is that right? Instead of "edge" I've got a uevent file in there, which is not described in gpio.txt.
In the gpio.txt documentation a standard kernel driver was mentioned: "gpio_keys". Is it possible to use this for my problem?
I guess it would be better to work with this driver than allowing a userspace program to manipulate kernel tasks.
I found a lot of code snippets for writing my own driver, but I wasn't even able to find out which of the 600 gpio.h files to include, or how to refer to the library (the cross compiler couldn't find the gpio.h file).
Sorry for the newbie questions, I hope you can give me some advice.
Thanks in advance
See this for an example of how to do that. Basically, the thing you're missing is the use of the select or poll system calls.
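On a kernel new enough to expose the per-pin "edge" file (2.6.33+, as mentioned above), the poll-based approach looks roughly like this sketch; the pin number is a placeholder, and it assumes "both" (or "rising") has already been written to /sys/class/gpio/gpioN/edge:

```c
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* "gpio68" is a placeholder; use the pin you exported. */
    int fd = open("/sys/class/gpio/gpio68/value", O_RDONLY);
    if (fd < 0)
        return 1;

    char buf[8];
    read(fd, buf, sizeof(buf));            /* clear the initial state */

    struct pollfd pfd = { .fd = fd, .events = POLLPRI | POLLERR };
    while (poll(&pfd, 1, -1) > 0) {        /* sleeps until an edge fires */
        lseek(fd, 0, SEEK_SET);
        read(fd, buf, sizeof(buf));
        printf("Switch1 was pressed (value now %c)\n", buf[0]);
    }
    close(fd);
    return 0;
}
```

No busy-loop is involved: the process sleeps in poll() and is woken by the GPIO interrupt handled in the kernel.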

Resources