PCIe interrupt handling in the Linux kernel

I am working on a PCIe Linux driver and would like to register an ISR for the device. The IRQ number assigned to the device by Linux is 16, which is shared with another device (a USB host controller), as checked with lspci -v. It is a pin-based (legacy INTx) interrupt.
Searching online, I found that almost every PCI driver example passes only IRQF_SHARED as the flag to request_irq(), and does not pass any other flag to describe behaviour such as high/low level triggering.
My question is: how does the Linux kernel determine the trigger behaviour of a shared interrupt for a PCIe device, i.e. whether it is low level or high level?

PCIe normally uses MSI, so there is no high/low level to be concerned with. Traditional PCI interrupt lines (and PCIe legacy INTx, which emulates them) are level-triggered and active low by specification, so this isn't something a driver writer gets to modify or tweak: the firmware and the platform interrupt code configure the trigger type, and the driver only passes IRQF_SHARED to request_irq().
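Below is a minimal sketch (not a complete driver) of what that looks like in practice; my_isr, my_probe and the "my_pci_dev" name are placeholders, not kernel APIs:

```c
#include <linux/interrupt.h>
#include <linux/pci.h>

static irqreturn_t my_isr(int irq, void *dev_id)
{
	struct pci_dev *pdev = dev_id;

	/*
	 * The line is shared, so a real handler first checks a
	 * device-specific interrupt status register and returns IRQ_NONE
	 * if this device did not assert the interrupt; otherwise it
	 * acknowledges the interrupt and returns IRQ_HANDLED.
	 */
	(void)pdev;
	return IRQ_HANDLED;
}

static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	/*
	 * Only IRQF_SHARED is passed: the trigger type (level/edge,
	 * high/low) of pdev->irq was already set up by the firmware and
	 * the platform interrupt code, not by the driver.
	 */
	return request_irq(pdev->irq, my_isr, IRQF_SHARED, "my_pci_dev", pdev);
}
```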

Related

Does a HDD/SSD use 1 or 2 device controllers to connect to the OS?

I am studying Operating Systems, and came across device controllers.
I gathered that a device controller is hardware whereas a device driver is software.
I also know that an HDD and an SSD both have a small PCB built into them, and I assume those PCBs are the device controllers.
Now what I want to know is if there is another device controller on the PC/motherboard side of the bus/cable connecting the HDD/SSD to the OS?
Is the configuration: OS >> Device Driver >> Bus >> Device Controller >> HDD/SSD
Or is it: OS >> Device Driver >> Device Controller >> Bus >> Device Controller >> HDD/SSD
Or is it some other configuration?
Sites I visited for answers:
Tutorialspoint
JavaPoint
Idc online
Quora
Most hard disks in desktops are SATA or NVMe. eMMC is popular for smartphones, but some might use something else. These are hardware interface standards that describe how to interact electrically with those disks: they tell you what voltage, at what frequency, and for what amount of time you need to apply a signal to a certain pin (a bus line) to make the device behave or react in a certain way.
Most computers are split across a few external chips. On a desktop these are mostly SATA, NVMe, DRAM, USB, audio output, the network card and the graphics card. Even though there are only a few chips, the CPU would be very expensive if it had to support all those hardware interface standards on the same silicon chip. Instead, the CPU implements PCI/PCIe as a general interface to interact with all those chips using memory-mapped registers. Each of these devices has a PCIe controller sitting between the device and the CPU. In the same order as above, you have AHCI, an NVMe controller, DRAM (not PCI, and handled inside the CPU), xHCI (almost everywhere) and Intel HDA (as an example). Network cards are PCIe devices and there isn't really a controller outside the card; graphics cards are also self-standing PCIe devices.
So the OS detects the registers of those devices, which are mapped into the address space. The OS writes to those locations, and that writes the registers of the devices. PCIe devices can also read/write DRAM directly, but this is managed by the CPU as part of its general implementation of the PCIe standard, most likely through bus arbitration. The CPU really doesn't care which device it is writing to: it knows there is a PCI register there, and the OS instructs it to write something, so it does. It just happens that the device implements a standard, and that the OS developers read that standard, so they write the proper values into those registers and the proper data structures into DRAM to make sure the device knows what to do.
Drivers implement the software interface standard of those controllers. The drivers are what instruct the CPU which values to write, and they build the proper data structures in DRAM for giving commands to the controllers. A user thread simply places the syscall number in a conventional register chosen by the OS developers and executes an instruction to jump into the kernel at a specific address, which the kernel chose at boot by writing a register. Once there, the kernel looks at the register holding the number and determines which driver to call based on the operation.
On Linux (and in some other systems) this is done with files. You call syscalls on files, and the OS has a driver attached to the file; these are called virtual files. A lot of transfer mechanisms resemble the read/write-a-file pattern, so Linux uses that to build a general driver model in which the kernel doesn't even need to understand the driver. The driver essentially says: create a file for me over there that isn't really on the hard disk, and if someone opens it and calls an operation on it, call this function in my driver. From there, the driver can do whatever it wants because it runs in kernel mode: it just builds the proper data structures in DRAM and writes the registers of the device it drives to make it do something.
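As a rough illustration of that "virtual file" model, here is a minimal sketch using the kernel's misc-device helper; the device name "demo" and its read handler are placeholders, and a real driver would talk to its hardware instead of returning a fixed string:

```c
#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>

static ssize_t demo_read(struct file *file, char __user *buf,
			 size_t count, loff_t *ppos)
{
	static const char msg[] = "hello from the driver\n";

	/* A real driver would read its hardware here instead. */
	return simple_read_from_buffer(buf, count, ppos, msg, sizeof(msg) - 1);
}

static const struct file_operations demo_fops = {
	.owner = THIS_MODULE,
	.read  = demo_read,
};

/* Registers /dev/demo; the file is not backed by any disk. */
static struct miscdevice demo_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name  = "demo",
	.fops  = &demo_fops,
};

module_misc_device(demo_dev);
MODULE_LICENSE("GPL");
```

Opening /dev/demo and calling read() on it ends up in demo_read(), which is exactly the open-a-file-then-call-my-function pattern described above.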

Process of device driver detection in Linux

I wanted to know how a device is detected in Linux. What exactly is the workflow of the device driver in device detection?
It is the kernel's job to detect devices, as it has the lowest-level access to the available hardware. As the kernel scans the available buses and addresses, it builds a list of vendor and device IDs.
To use PCI bus devices as an example, there is a Vendor ID and a Device ID associated with all PCI devices.
Device drivers are written in such a way as to identify to the Kernel what kinds of devices the driver is able to control. Drivers may advertise that they can handle more than one vendor and device type combination.
The kernel will allocate a driver to each device based on these IDs. A similar process is in place for USB devices. Older technologies and legacy devices (serial ports, parallel ports, PS/2 mice/keyboards) have explicitly hard-coded methods of associating particular drivers with devices.
You can use the Linux commands lsusb and lspci to see the available devices and IDs on your system.
So in direct answer to your question - the device driver usually does nothing to detect the device, at least in the first instance. Once the driver is associated with a device (by the Kernel) the driver will likely do further interrogation of the device to ensure it contains the right firmware or is the right hardware revision, etc.
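As a sketch of that ID-matching mechanism for PCI (the vendor/device pair 0x1234:0x5678 below is a placeholder, not a real device):

```c
#include <linux/module.h>
#include <linux/pci.h>

static const struct pci_device_id demo_ids[] = {
	{ PCI_DEVICE(0x1234, 0x5678) },	/* placeholder vendor:device pair */
	{ }				/* terminating entry */
};
MODULE_DEVICE_TABLE(pci, demo_ids);	/* exported so the module can be autoloaded */

static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	/* Called by the PCI core once a matching device is found; further
	 * interrogation (firmware/revision checks) would happen here. */
	return pci_enable_device(pdev);
}

static void demo_remove(struct pci_dev *pdev)
{
	pci_disable_device(pdev);
}

static struct pci_driver demo_driver = {
	.name     = "demo_pci",
	.id_table = demo_ids,
	.probe    = demo_probe,
	.remove   = demo_remove,
};
module_pci_driver(demo_driver);
MODULE_LICENSE("GPL");
```

The driver only advertises the IDs it handles; the PCI core does the matching and calls probe() when a device with those IDs is found.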

Why is there `gpio_request` instead of `request_region` in a Raspberry Pi driver?

In the book LDD3, if a driver wants to control the pins of the CPU, it should call the request_region() function to declare its usage of the ports.
When I wanted to implement a simple driver module on my Raspberry Pi, however, I found that in this example the request for the pins is done with the gpio_request() function.
Why and when do we need to use gpio_request() instead of request_region()? And what are the different purposes of these two functions?
BTW: I searched LDD3 page by page but I can't find any clues about GPIO... why is there no introduction to GPIO at all? Is it because of the 2.6 kernel version?
In the book LDD3, if a driver wants to control the pins of the CPU, it should call the request_region() function to declare its usage of the ports.
First, the word "port" is ambiguous and requires context. Port can refer to a physical connector (e.g. USB port), or a logical connection (e.g. TCP port).
Your understanding of request_region() is flawed. That routine is for management of I/O address space. Your question is tagged with raspberry-pi, which uses an ARM processor and has no I/O address space to manage. ARM processors use memory-mapped device registers. You would use request_mem_region() in a device driver for the memory addresses of that peripheral's register block.
Each GPIO is controlled by a bit position in one or more control registers. Those registers would be handled by an overall GPIO subsystem. (There's also a lower-layer (closer to the HW) pin-control driver for multiplexed pins, i.e. pins that can be assigned to a peripheral device or used as GPIO.)
The driver for the GPIO (or pin-control) subsystem should perform a request_mem_region() for the memory addresses of the SoC's GPIO control registers. A gpio_request() would be management of an individual pin that is subordinate to management of the registers.
Note that use of request_mem_region() and gpio_request() are not mutually exclusive in a device driver. For instance the driver for a USB controller would request_mem_region() the memory addresses for its control registers. It may also have to gpio_request() for pin(s) that control the power to the USB connector(s) (assuming that's how the power is controlled with logic external to the controller).
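A rough sketch of those two levels, assuming made-up register and pin numbers (the base address, block size and GPIO number below are placeholders, not values from the Raspberry Pi datasheet):

```c
#include <linux/ioport.h>
#include <linux/io.h>
#include <linux/gpio.h>

/* (1) The SoC's GPIO/pinctrl driver claims the register block itself. */
static void __iomem *gpio_regs;

static int gpio_controller_probe_sketch(void)
{
	/* placeholder base address and size */
	if (!request_mem_region(0x20200000, 0xB4, "soc-gpio"))
		return -EBUSY;
	gpio_regs = ioremap(0x20200000, 0xB4);
	return gpio_regs ? 0 : -ENOMEM;
}

/* (2) An ordinary driver claims one pin through the GPIO subsystem. */
static int consumer_probe_sketch(void)
{
	int ret = gpio_request(17, "my-led");	/* placeholder GPIO number */
	if (ret)
		return ret;
	/* the GPIO subsystem touches the control registers on our behalf */
	return gpio_direction_output(17, 1);
}
```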
why is there no introduction to GPIO at all? Is it because of the 2.6 kernel version?
Conventions for using GPIO in Linux appeared in Documentation/gpio.txt in 2007, around version 2.6.22. Generic (i.e. standardized rather than platform-specific) GPIO support, gpiolib, appeared in the kernel a little later, around version 2.6.25. Prior to that (and even after), each platform (e.g. each SoC manufacturer) had its own set of routines for accessing (and maybe managing) GPIOs.
LDD3 claims to be current as of the 2.6.10 kernel. Also that book may be x86-centric (as Linux has x86 origins), and x86 processors typically do not have GPIOs.

How does the base address register get its address?

I've finished developing a PCIe driver for an FPGA under a Linux distribution. Everything works fine, but I'm wondering where the base address register (BAR) in the PCIe endpoint of the FPGA gets its base address. When I generated the PCIe endpoint I was able to set up the length of the BAR, but nothing more.
In the PCIe driver I call the standard functions like pci_enable_device(), but I never set up a base address explicitly.
So does the operating system set up the base address during startup? Or how does it work?
As an aside, I would also like to know what initialisation the operating system generally does when a PCIe device is connected, since I can see my PCI device in lspci even when the driver is unloaded.
The address allocation for PCI devices is generally done at the BIOS level. Let us take the x86 platform as a reference. If we look closely at the system address map (there is a good figure of it in BIOS Disassembly Ninjutsu by Darmawan Salihun), there is a dedicated space set aside for mapping the PCI memory regions. The same can be verified from the output of /proc/iomem.
This implementation is platform dependent, and as the BIOS "knows" about the platform, it would set aside the addresses dedicated to the PCI slots. When a device is plugged into the slot, the BIOS interacts with the firmware on the device and actually sets up the memory regions for the device, such that the OS could make use of it.
Now coming to the driver side. In Linux, drivers follow a specific standard known as the Linux Device Model, which consists of a core layer (the PCI core), host controller drivers (PCI controllers/masters) and client drivers (the PCI devices). When a PCI device (client) is plugged into a slot, the corresponding host controller detects the attachment and informs the PCI core about it, which is why the device appears in the output of lspci.
lspci shows the devices that have been identified by the host controller, whether or not they are bound to a driver. The core then traverses the drivers registered in the system, finds a matching one, and attaches it to the device.
So the reason you are seeing the device in the output of lspci is that the host controller has identified the device and informed the PCI core; it doesn't matter whether any driver is attached to the device or not.
On most consumer-grade computers, BAR allocation seems to be done by the BIOS.
I suppose that in a hotplug-capable architecture this must be done, or at least triggered, by the OS.
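Either way, the driver never chooses the BAR address; it simply reads back whatever the firmware or OS already programmed into the device's configuration space. A small sketch of that, with placeholder names:

```c
#include <linux/pci.h>

static int fpga_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	void __iomem *regs;
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	/* Report the address and length already assigned to BAR0. */
	dev_info(&pdev->dev, "BAR0 at 0x%llx, length 0x%llx\n",
		 (unsigned long long)pci_resource_start(pdev, 0),
		 (unsigned long long)pci_resource_len(pdev, 0));

	ret = pci_request_regions(pdev, "fpga-demo");	/* claim the BARs */
	if (ret)
		return ret;

	regs = pci_iomap(pdev, 0, 0);		/* map BAR0 into kernel space */
	if (!regs)
		return -ENOMEM;

	/* ... device-specific initialisation using readl()/writel() on regs ... */
	return 0;
}
```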

Linux user space PCI driver

I'm trying to write a PCI device driver that runs in user space. Not my idea; it's what the client wants. The target is an embedded Linux board that will never have more than a single user. I'm an experienced C programmer and I know Linux, I'm just not familiar with Linux driver development.
Is this really a device driver or just a library? Do I need to use the typical calls like pci_register_driver(), etc., or can I just access the device using fopen, mmap and ioperm?
Interrupts will be done using the MSI model. Also need to handle DMA transfers. The device will be streaming lots of data to the user.
There's not much info out there on this subject, LDD3 only devotes a couple of pages to it, and there's nothing else that I could find here on SO.
Thanks in advance!
If there is no driver handling the PCI card, it would be possible to access it using ioperm (or iopl, depending on the address), but only if plain port accesses are required.
Using DMA and interrupts is definitely impossible without a kernel-mode driver.
By googling I found some text about something like a "generic kernel-mode driver" that allows writing user-mode drivers (including DMA and interrupts).
You should ask your customer which kind of kernel-mode driver for accessing PCI cards is installed on the Linux board.
There is now a proper way to write high-performance userspace PCI drivers, called vfio. There is not much documentation, but see the kernel docs at http://lxr.free-electrons.com/source/Documentation/vfio.txt and the header file /usr/include/linux/vfio.h. It has been available since Linux 3.6.
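For a flavour of the flow, here is a condensed sketch based on the pattern in Documentation/vfio.txt, with error handling omitted; it assumes the device has already been bound to the vfio-pci driver, and the IOMMU group number 26 and device address 0000:06:0d.0 are placeholders you would read from /sys/bus/pci/devices/<addr>/iommu_group:

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/26", O_RDWR);

	/* Attach the group to a container and pick an IOMMU model. */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

	/* Get a file descriptor for the actual PCI device. */
	int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");

	/* Query BAR0 and mmap it into this process for register access. */
	struct vfio_region_info reg = {
		.argsz = sizeof(reg),
		.index = VFIO_PCI_BAR0_REGION_INDEX,
	};
	ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);
	void *bar0 = mmap(NULL, reg.size, PROT_READ | PROT_WRITE,
			  MAP_SHARED, device, reg.offset);

	/* DMA mappings (VFIO_IOMMU_MAP_DMA) and MSI interrupts
	 * (VFIO_DEVICE_SET_IRQS) are configured through further ioctls. */
	(void)bar0;
	return 0;
}
```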
