kernel driver or user space driver? - linux

I would like to ask your advice on the following: I need to write drivers for an OMAP3, for accessing an external DSP through an FPGA (over the GPMC interface). The driver is required to load a file to the DSP and to read/write buffers from/to the DSP. There is already an FPGA driver in the kernel. The kernel is 2.6.32. So I thought of the following options:
writing a DSP driver in the kernel, which uses the existing FPGA driver.
writing a user-space driver which interfaces with the FPGA kernel driver.
writing a user-space driver using UIO, which will not use the kernel FPGA driver but will access the FPGA itself, as part of a single, complete user-space DSP driver.
What do you think is the preferred option?
What are the advantages of a kernel driver over a user-space driver, and vice versa?
Thanks, Ran

* User-space driver:
Easier to debug.
Loads of libraries to support you.
Allows you to hide the details of your IP if you want to (people will really hate you if you do!).
A crash won't affect the whole system.
Higher latency in handling interrupts since the kernel will have to relay the interrupt to the user space somehow.
You can't control access to your device from user space.
* Kernel-space driver:
Harder to debug.
Only linux kernel frameworks are supported.
You can always provide a binary blob to hide the details of your IP but this is annoying since it has to be generated against a specific kernel.
A crash will bring down the whole system.
Less latency in handling interrupts.
You can control access to your device from kernel space because it's a global context that all processes see.
As a kernel engineer I'm more comfortable/happy hacking code in a kernel context; that's probably why I would write the whole driver in the kernel.
However, I would say that the best thing to do is to divide the functionality of your driver into units and only put the unit in the kernel when there's a reason to do so.
For example:
If your device has a shared resource (like an MMU or a hardware FIFO) and you want multiple processes to be able to use it safely, then you probably need some buffer manager to be in the kernel, and all the processes would communicate with it through ioctl.
If your driver needs to respond to an interrupt as fast as possible (very low latency), then you will need to put the interrupt-handling part of the code in a kernel interrupt handler rather than in user space, with the kernel merely notifying user space when an interrupt occurs.
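To make the split concrete, here is a minimal sketch of the user-space side of a UIO-based driver (option 3 in the question). It assumes the FPGA's register window is exposed as UIO mapping 0 of a hypothetical /dev/uio0, and the register offsets and window size are made up; with UIO, a blocking read() returns the interrupt count and mmap() exposes the device registers, so only a tiny stub has to live in the kernel.

    /* Minimal user-space side of a UIO driver (sketch).
     * Assumes the FPGA/DSP registers are exposed as UIO map 0 of /dev/uio0;
     * the register offsets and the 0x1000 window size are made up. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REG_WINDOW_SIZE 0x1000

    int main(void)
    {
        int fd = open("/dev/uio0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* UIO map 0 is mapped at offset 0 * page size. */
        volatile uint32_t *regs = mmap(NULL, REG_WINDOW_SIZE,
                                       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        regs[0] = 0x1;   /* hypothetical "start DSP" register write */

        /* A blocking read() on a UIO device returns the interrupt count;
         * it wakes up each time the small in-kernel handler signals an IRQ. */
        uint32_t irq_count;
        if (read(fd, &irq_count, sizeof(irq_count)) == (ssize_t)sizeof(irq_count))
            printf("interrupt received, total so far: %u\n", irq_count);

        munmap((void *)regs, REG_WINDOW_SIZE);
        close(fd);
        return 0;
    }

The matching kernel part can be as small as a uio_info registration plus an interrupt handler, which keeps the bulk of the DSP logic debuggable in user space.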

Related

What is an “irqchip”?

In reference to the QEMU x86_64 machine option kernel_irqchip=on|off, the description reads:
Controls in-kernel irqchip support for the chosen accelerator when available
What is an "irqchip"?
An "irqchip" is KVM's name for what is more usually called an "interrupt controller". This is a piece of hardware which takes lots of interrupt signals (from devices like a USB controller, disk controller, PCI cards, serial port, etc) and presents them to the CPU in a way that lets the CPU control which interrupts are enabled, be notified when a new interrupt arrives, dismiss a handled interrupt, and so on.
An emulated system (VM) needs an emulated interrupt controller, in the same way that real hardware has a real hardware interrupt controller. In a KVM VM, it is possible to have this emulated device be in userspace (ie QEMU) like all the other emulated devices. But because the interrupt controller is so closely involved with the handling of simulated interrupts, having to go back and forth between the kernel and userspace frequently as the guest manipulates the interrupt controller is bad for performance. So KVM provides an emulation of an interrupt controller in the kernel (the "in-kernel irqchip") which QEMU can use instead of providing its own version in userspace. (On at least some architectures the in-kernel irqchip is also able to use hardware assists for virtualization of interrupt handling which the userspace version cannot, which further improves VM performance.)
The default QEMU setting is to use the in-kernel irqchip, and this gives the best performance. So you don't need to do anything with this command line option unless you know you have a specific reason why the in-kernel irqchip will not work for you.
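For context, the in-kernel irqchip is something the VMM explicitly asks KVM to create through the KVM ioctl interface. A hedged sketch of roughly what QEMU does under the hood (error handling omitted; on x86 this call creates the in-kernel PIC and IOAPIC):

    /* Sketch: asking KVM for an in-kernel irqchip, roughly what a VMM does. */
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

        /* With this, interrupt-controller state is emulated inside the kernel;
         * without it, the VMM has to emulate the interrupt controller itself. */
        ioctl(vm, KVM_CREATE_IRQCHIP, 0);
        return 0;
    }

Passing kernel_irqchip=off on the QEMU command line makes QEMU skip this step and fall back to its own userspace interrupt-controller model.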

What is the difference between DMA-Engine and DMA-Controller?

As mentioned above, what is the difference between a DMA engine and a DMA controller (with a focus on Linux)?
When does the Linux DMA engine come into play? Is it a special device, or is it always part of every peripheral device that supports DMA?
While browsing the Linux source, I found the driver ste_dma40.c. How does a driver use this engine?
DMA - Direct Memory Access: reading from or writing to your HW's memory without the CPU being involved (freeing it to do other work).
DMA Controller - reading and writing can't be done by magic; if the CPU doesn't do it, some other HW must. Many years ago (in the ISA/EISA era) it was common to use a shared controller on the motherboard for this. In recent years, each device tends to have its own DMA HW mechanism.
But in all cases this HW is given a source address and a destination address and moves the data, usually triggering an interrupt when done.
DMA Engine - Here I am not sure what you mean. I believe you are probably referring to the SW side that handles the DMA.
DMA is a little more complicated than usual I/O, since both the SRC and DST memory have to be physically present for the entire DMA operation. If the DST address is swapped out to disk, the HW will write to a bad address and the system will crash.
This and other aspects of DMA are handled by the driver, in the code sections you probably refer to as the "DMA Engine".
*Another interpretation of 'DMA Engine' may be the part of the firmware (or HW) that drives the DMA controller on the HW side.
According to this document, http://www.asprom.com/application/intel_3.pdf:
The 82C37 DMA controllers should not be confused with the DMA engines found in some earlier MCH (Memory Controller Hub) components. These DMA controllers are tied to the ISA/LPC bus and used mostly for transfers to/from slow devices such as floppy disk controllers.
So in that sense it is a device found on older platforms that used MCH components.
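In Linux terms, the dmaengine framework is the software "engine" that a controller driver such as ste_dma40.c registers with; a peripheral driver then acts as a dmaengine client. A hedged sketch of the client side using the slave API of a recent kernel (the channel name "rx", the FIFO address, and the buffer are placeholders):

    /* Sketch of a dmaengine client; names and addresses are placeholders. */
    #include <linux/dmaengine.h>
    #include <linux/dma-mapping.h>
    #include <linux/err.h>

    static void xfer_done(void *param)
    {
        pr_info("DMA transfer complete\n");
    }

    static int start_rx_dma(struct device *dev, dma_addr_t buf, size_t len)
    {
        struct dma_chan *chan;
        struct dma_async_tx_descriptor *desc;
        struct dma_slave_config cfg = {
            .direction      = DMA_DEV_TO_MEM,
            .src_addr       = 0x40001000,           /* placeholder device FIFO address */
            .src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
        };

        chan = dma_request_chan(dev, "rx");          /* channel named in DT/platform data */
        if (IS_ERR(chan))
            return PTR_ERR(chan);

        dmaengine_slave_config(chan, &cfg);

        desc = dmaengine_prep_slave_single(chan, buf, len, DMA_DEV_TO_MEM,
                                           DMA_PREP_INTERRUPT);
        if (!desc) {
            dma_release_channel(chan);
            return -EIO;
        }

        desc->callback = xfer_done;                  /* runs when the controller finishes */
        dmaengine_submit(desc);
        dma_async_issue_pending(chan);               /* actually kick off the transfer */
        return 0;
    }

The controller driver (ste_dma40.c and friends) sits behind these calls: it owns the DMA controller hardware, while clients only deal in channels and descriptors.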

What is the main difference between drivers and user applications?

I know that user applications can run only in user mode, for system security. In contrast, most drivers run in kernel mode in order to access I/O devices. Nevertheless, some drivers run in user mode yet are still allowed to access I/O devices. So I have the following questions: What is the main difference between drivers and user applications? Can't a user application be allowed to access I/O devices the way some drivers do?
Thanks.
First of all, an overview from this link:
Applications run in user mode, and core operating system components run in kernel mode. Many drivers run in kernel mode, but some drivers run in user mode.
When you start a user-mode application, Windows (or any OS) creates a process for the application. The process provides the application with a private virtual address space and a private handle table. Because an application's virtual address space is private, one application cannot alter data that belongs to another application.
In addition to being private, the virtual address space of a user-mode application is limited. A processor running in user mode cannot access virtual addresses that are reserved for the operating system. Limiting the virtual address space of a user-mode application prevents the application from altering, and possibly damaging, critical operating system data.
All code that runs in kernel mode shares a single virtual address space. This means that a kernel-mode driver is not isolated from other drivers and the operating system itself. If a kernel-mode driver accidentally writes to the wrong virtual address, data that belongs to the operating system or another driver could be compromised.
Also, from this link
Software drivers
Some drivers are not associated with any hardware device at all. For example, suppose you need to write a tool that has access to core operating system data structures, which can be accessed only by code running in kernel mode. You can do that by splitting the tool into two components. The first component runs in user mode and presents the user interface. The second component runs in kernel mode and has access to the core operating system data. The component that runs in user mode is called an application, and the component that runs in kernel mode is called a software driver. A software driver is not associated with a hardware device.
Also, software drivers always run in kernel mode. The main reason for writing a software driver is to gain access to protected data that is available only in kernel mode. But device drivers do not always need access to kernel-mode data and resources. So some device drivers run in user mode.
What is the main difference between drivers and user applications?
The difference is the same as that between a submarine and a ship. Drivers are hardware-dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous, time-dependent hardware interface, and hence almost all of them run in kernel mode. User applications, by contrast, are bound to run in user space, as described in the quoted paragraphs above, to keep them from damaging critical OS data.
Also, not all drivers communicate directly with a device. For a given I/O request (like reading data from a device), there are often several drivers, layered in a stack, that participate in the request. The one driver in the stack that communicates directly with the device is called the function driver; the drivers that perform auxiliary processing are called filter drivers.
Can't a user application be allowed to access I/O devices the way some drivers do?
The application calls a function implemented by the operating system, and the operating system calls a function implemented by the driver. The driver knows how to communicate with the device hardware to get the data. After the driver gets the data from the device, it returns the data to the operating system, which returns it to the application.
Applications connect to I/O devices through APIs/interfaces presented by device drivers (or rather, by the OS). The OS handles most hardware/software interaction. Hardware vendors write "plugins/modules/drivers" which allow the OS to control their specific hardware. So, using the interfaces provided by the OS, you can write your application to access I/O devices.
So you can't have a user application access the hardware directly without the help of drivers: every access goes down through the driver stack, because device drivers are written against the low-level hardware interfaces, whereas user applications are written against the high-level APIs the OS exposes.
Also, check this answer to get a better idea of driver address spaces in various OSes.
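To put the application → OS → driver path into Linux terms, here is a hedged sketch of a trivial character-device driver: the read handler below is the function the kernel calls on the driver's behalf when an application issues a read() system call. The device name /dev/demo0 and the returned string are invented for the example.

    /* Kernel side: a trivial character device (illustrative sketch). */
    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/uaccess.h>

    static ssize_t demo_read(struct file *f, char __user *buf,
                             size_t len, loff_t *off)
    {
        const char msg[] = "data fetched from the (imaginary) device\n";

        /* A real driver would talk to its hardware here; we just copy a string. */
        return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
    };

    static struct miscdevice demo_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "demo0",             /* shows up as /dev/demo0 */
        .fops  = &demo_fops,
    };

    module_misc_device(demo_dev);     /* registers on load, deregisters on unload */
    MODULE_LICENSE("GPL");

The application simply does open("/dev/demo0", O_RDONLY) and read(); the C library traps into the kernel, the VFS routes the request to demo_read(), and the data travels back the same way, which is exactly the layering described above.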
Your question lacks any concrete operating-system tags, so I'll answer generically about the subject.
Yes, drivers can be very similar to a user application in a microkernel-based operating system. In a microkernel, a driver is usually just a userspace app that asks the kernel for some generic capabilities and then functions normally in userspace.
Take for example a hypothetical implementation of a pagein/pageout driver in such an OS.
When a page fault happens in user mode, the transition to the kernel happens as usual for synchronous exceptions, but a microkernel would queue the page-in request to a userspace driver via a local message queue.
After that (still in the exception handler) the faulting process is marked as blocked and taken off the run queue.
Then the userspace driver app does the necessary page-in and notifies the microkernel that the work is done, again via a local message queue. The kernel can then reschedule the faulting process, since its memory mappings are ready.
Such a scheme incurs a performance overhead, but it has its advantages, mainly that drivers can be implemented and tested just like any other user app.
In a monolithic kernel such an arrangement is not possible, and that case is better described by the other answers.
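Purely to illustrate the flow described above, here is a hypothetical sketch of such a userspace pager's main loop. None of these names exist in a real kernel: msg_recv(), msg_send(), and pager_map_page() are stand-ins for whatever IPC and mapping primitives a given microkernel actually provides.

    /* Hypothetical userspace pager loop; all primitives below are made up. */
    #include <stdint.h>

    struct pagein_msg {
        uint64_t faulting_pid;    /* process whose page fault is being serviced */
        uint64_t fault_addr;      /* virtual address that faulted */
    };

    extern int  msg_recv(int queue, struct pagein_msg *m);       /* hypothetical IPC */
    extern int  msg_send(int queue, const struct pagein_msg *m); /* hypothetical IPC */
    extern void pager_map_page(uint64_t pid, uint64_t addr);     /* hypothetical mapper */

    void pager_main(int request_q, int completion_q)
    {
        struct pagein_msg m;

        for (;;) {
            msg_recv(request_q, &m);        /* blocks until the kernel queues a fault */
            pager_map_page(m.faulting_pid,  /* fetch the page from backing store */
                           m.fault_addr);   /* and map it into the faulting process */
            msg_send(completion_q, &m);     /* kernel can now unblock and reschedule */
        }
    }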

How can the linux kernel be forced to enumerate the PCI-e bus?

Linux kernel 2.6
I've got an FPGA, loaded over GPIO, connected to a development board running Linux.
The FPGA will transmit and receive data over the PCI Express bus. However, the bus is enumerated at boot, and as such no link is discovered (because the FPGA is not loaded at boot).
How can I force re-enumeration of the PCI-e bus in Linux?
Is there a simple command or will I have to make kernel changes?
I need the capability to hotplug pcie devices.
As root, try the following command:
echo "1" > /sys/bus/pci/rescan
See this link for more information: http://www.kernel.org/doc/Documentation/ABI/testing/sysfs-bus-pci
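If the rescan needs to be triggered programmatically once the FPGA bitstream has been loaded, the same sysfs file can be written from code; a minimal sketch (run as root):

    /* Sketch: trigger a PCI bus rescan from a program (needs root). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int pci_rescan(void)
    {
        int fd = open("/sys/bus/pci/rescan", O_WRONLY);
        if (fd < 0) {
            perror("open /sys/bus/pci/rescan");
            return -1;
        }
        if (write(fd, "1", 1) != 1) {   /* writing "1" asks the kernel to rescan */
            perror("write");
            close(fd);
            return -1;
        }
        close(fd);
        return 0;
    }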
I wonder what platform you are on. A workaround (a.k.a. hack) that works on x86 systems is to have the BIOS statically configure a PCI device at whatever bus/device/function the FPGA normally lands on; the OS will then enumerate the device and reserve the PCI space for it (even though the device isn't really there). In your device driver you will then have to do some extra things, like setting up the BARs and interrupt lines manually after the FPGA has been programmed. Of course this requires modifying the BIOS. If you are working with a BIOS vendor you can contract them to make this change for you; if not, it will be much harder... Also keep in mind that I was working on VxWorks on x86, and we had AMI make a custom BIOS for our boards.
If you don't have a BIOS, then consider programming the FPGA in the bootloader: there you already have the ability to read from disk, and adding GPIO capabilities probably isn't too difficult (assuming you are using JTAG and GPIOs?). In fact, depending on which bootloader you use, it might already be able to do GPIO.
The issue with modifying the kernel to do this is that you have to find the sweet spot where you can read the bitfile before PCI enumeration. If, for example, the disk device drivers are initialized after PCI, then you must make some radical changes to the kernel just to read the bitfile prior to PCI enumeration, which might cause other annoying problems...
One other option, which you may have already discovered and which is really only OK during development: power up the system, program the FPGA board, then do a reset without a power cycle (for example, sudo reboot now). The FPGA should keep its configuration, and Linux should enumerate it...
After turning on your computer, the BIOS enumerates the PCI bus and attempts to fulfill all I/O space and memory-mapped I/O (MMIO) requests. It sets up these BARs initially, and when the operating system loads, these BARs can be changed by the OS as it sees fit while the PCI bus driver enumerates the bus yet again. It is even possible for the superuser of the system to run the command setpci to change these BARs after the BIOS has already attempted to configure them and the OS has loaded (this may cause drivers to fail, and several other bad things, if done improperly).
I have had to do this in cases where the card in question was not assigned any resources by the BIOS since the region requested required a 64-bit address and the BIOS only operated with 32-bit address assignments. I was able to go in after-the-fact and change these addresses (originally assigned by the BIOS) to whatever addresses I saw fit, insert the kernel module, and my driver would map and use these newly-assigned addresses for the card without knowing the difference.
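For reference, the BARs discussed here can also be inspected from user space through sysfs; a small sketch that reads BAR0 straight out of a device's configuration header (the address 0000:01:00.0 is a placeholder):

    /* Sketch: read BAR0 of a PCI device from its sysfs config space. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint32_t bar0;
        int fd = open("/sys/bus/pci/devices/0000:01:00.0/config", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* BAR0 sits at offset 0x10 of the standard PCI configuration header. */
        if (pread(fd, &bar0, sizeof(bar0), 0x10) != (ssize_t)sizeof(bar0)) {
            perror("pread");
            return 1;
        }
        printf("BAR0 = 0x%08x\n", bar0);
        close(fd);
        return 0;
    }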
The problem that exists with hotplugging PCI Express cards is that the power to the slot itself cannot be turned on/off without specific hotplug controllers on the motherboard/backplane. Without these hotplug controllers to cut the slot's power, shorts between the tiny pins can occur when the card is physically inserted or removed while power is still present. Hotplug events, however, can be initiated by either end (the host or the endpoint device). That does not seem to apply here; however, if your FPGA already has a link established with the root complex, a possible solution to your problem would be to generate hotplug interrupts to cause a bus rescan in the OS.
There is a major problem, though: if your card does not actually obtain a link to the root complex, it won't be able to generate any hotplug events, which seems to be your situation. After booting, the FPGA should toggle the PRESENT line on the PCIe bus to tell the OS there is a card ready to be enumerated. Once it is detected, the OS should attempt to establish a link to the card and assign memory regions to the device. After the OS enumerates the card you'll be able to load drivers against it and see it in lspci. You stated you're using kernel 2.6, which does have support for hotplugging and dynamic resource allocation, so this method should work as long as your FPGA also supports toggling the PRESENT PCIe line.

Hardware clock signals implementation in Linux Kernel

I am looking for some pointers to understand how the Linux kernel implements the setup of the various hardware clocks. This basically relates to setting up the clocks that hardware blocks like the LCD, UART, etc. will use. For example, when Linux boots, how does it handle setting up the clocks for the UART or USB? Something like a clock manager, perhaps.
I am basically trying to implement something similar for a different OS on new hardware that I am working on. Any help would be really appreciated.
[Edit]
Thanks for the replies and the links. So here is what I have implemented up until now; this should give you an idea of where I'm headed.
I looked up the Hardware Reference Manual for the particular system I'm targeting and wrote some code to monitor/modify the signals/pins of the peripherals I am interested in, i.e. turning them ON/OFF from the command line. A collection of these clocks/signals together controls a peripheral. The HRM says, in effect, that if you want to turn on the UART then you turn on such-and-such signals/pins. And @BjoernD, yes, I am using something like mmap() to talk to the peripherals.
The meat of my question is that I want to understand the design and implementation of a Clock/Peripheral Manager which uses the utility that I have already written. This Clock/Peripheral Manager would give me control over enabling/disabling the peripherals I want. Basically, this Manager would let me change the init code that is running right now, and at run time processes could also call this Manager to turn devices ON/OFF so that power consumption is optimized. It might not make perfect sense yet; I'm still trying to wrap my head around it myself.
Now I'm sure something like this has been implemented in Linux, or for that matter in any OS, for performance reasons (nobody would want to waste power by turning on all peripherals at boot time). I want to understand its software architecture. A reference from any OS would do for now, at least to get a head start. Also, I am not writing my own OS; there is an OS in place, but I'm looking more at board-level software, a.k.a. a BSP, to work on. But thanks for the OS links anyway, they are really good. Appreciate it.
Thanks!
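For reference, the role of the Clock/Peripheral Manager described above is played in Linux by the kernel's clock API: a consumer driver looks up its clock by name and gates it on or off. A hedged sketch of the consumer side (the clock name "fck" and the 48 MHz rate are placeholders):

    /* Hedged sketch: a driver consuming a clock through the Linux clock API. */
    #include <linux/clk.h>
    #include <linux/device.h>
    #include <linux/err.h>

    static int enable_peripheral_clock(struct device *dev)
    {
        struct clk *clk = clk_get(dev, "fck");   /* look up the clock by consumer name */
        if (IS_ERR(clk))
            return PTR_ERR(clk);

        clk_set_rate(clk, 48000000);             /* optional: request a specific rate */
        return clk_prepare_enable(clk);          /* ungate the clock for this peripheral */
    }

The counterparts clk_disable_unprepare() and clk_put() gate the clock again, which is the run-time ON/OFF control the question is after.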
What you want to achieve is highly specific to a) the platform you are using and b) the device you want to use. For instance, on x86 there are 3 ways to communicate with a device:
Interrupts allow the device to signal the CPU. The OS usually provides mechanisms to register interrupt handlers - functions that are called upon occurrence of an interrupt. In Linux, see request_irq() and friends in include/linux/interrupt.h (and the sketch after this list).
Memory-mapped I/O is physical memory of the device that the platform's BIOS makes available in the same way you also access plain physical memory - simply by writing to a memory address. What exactly is behind such memory (e.g., network interface config registers or an LCD frame buffer) depends on the device and is usually specified in the device's data sheet.
I/O ports are accessed through a special address space and special instructions (INB/OUTB & co.). Other than that they work similar to I/O memory.
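A hedged sketch of what the first two mechanisms look like from a Linux driver; the IRQ number, the physical base address, the window size, and the status-register offset are all placeholders:

    /* Sketch: interrupt handler plus memory-mapped I/O in a Linux driver.
     * IRQ 42, the 0x48000000 base, the 0x1000 size, and offset 0x04 are made up. */
    #include <linux/interrupt.h>
    #include <linux/io.h>
    #include <linux/errno.h>

    #define DEMO_IRQ   42
    #define DEMO_BASE  0x48000000
    #define DEMO_SIZE  0x1000

    static void __iomem *regs;

    static irqreturn_t demo_irq(int irq, void *dev_id)
    {
        writel(1, regs + 0x04);   /* acknowledge the (hypothetical) status register */
        return IRQ_HANDLED;
    }

    static int demo_setup(void)
    {
        regs = ioremap(DEMO_BASE, DEMO_SIZE);   /* map the device's MMIO window */
        if (!regs)
            return -ENOMEM;
        /* Register the handler; it runs in interrupt context when the IRQ fires. */
        return request_irq(DEMO_IRQ, demo_irq, 0, "demo", NULL);
    }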
There's a multitude of ways to find out what resources a device provides and where the BIOS mapped them. Some platforms use ACPI tables (google for the 1,000+ page spec), PCI provides info on devices in a standardized way through the PCI config space, USB has similar ways of discovering devices attached to the bus, and some devices, e.g. UARTs, are simply specified to be available at a pre-configured I/O range that is fixed for your platform.
As a start for understanding Linux, I'd recommend "Understanding the Linux Kernel". For specifics on how Linux handles devices and what is involved in writing drivers, have a look at Linux Device Drivers. Furthermore, you will need to look at the peculiarities of your platform and of the device you want to drive.
If you want to start your own OS, a UART is certainly something that will be very helpful for printing debug output, so you might want to go for this first.
Now that I've written all this down, it seems that your actual question is: how to get started with operating-system design. This question should be highly valuable for you: What are some resources for getting started in operating system development?
The two big power users in most computers are the CPU and the disks. Both of these have capabilities for power saving in Linux. The CPU clock can be slowed down when the system is not busy, and the disk motors can be stopped when no I/O is happening. For a UART, even if you save all of the power that it uses by turning off its clock, it is still tiny compared to the others because a UART doesn't have much logic in it.
The best ways to save power are:
1) a more efficient power supply
2) replacing the rotating disk with an SSD
3) slowing down the CPU and memory bus
