Vendor information of a network interface in the kernel - Linux

I am developing a Linux netfilter kernel module and need to retrieve the vendor information of the network card, something like:
"Intel Corporation 82579LM Gigabit Network Connection"
or
"Intel Corporation Centrino Advanced-N 6205"
I have the net_device structure available. Is it possible to retrieve such a description from net_device in the kernel?

The answer is no.
This can be done from userspace only; the kernel does not keep such human-readable strings. However, you can retrieve the vendor ID and product ID of the device. For that, you need to know more about the PCI subsystem. The combination of vendor ID and product ID, sometimes together with the subvendor and subproduct IDs, determines the device identity.
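As a rough illustration, here is a minimal sketch of how you could pull those IDs out of a net_device, assuming the interface is backed by a PCI device (the function name print_nic_ids is hypothetical):

#include <linux/netdevice.h>
#include <linux/pci.h>

/* Print the PCI IDs behind a network interface, if it is PCI-backed. */
static void print_nic_ids(struct net_device *ndev)
{
    struct device *dev = ndev->dev.parent;  /* device behind the netdev */
    struct pci_dev *pdev;

    if (!dev || !dev_is_pci(dev))
        return;                             /* e.g. a virtual or USB interface */

    pdev = to_pci_dev(dev);
    pr_info("%s: vendor=0x%04x device=0x%04x subvendor=0x%04x subdevice=0x%04x\n",
            ndev->name, pdev->vendor, pdev->device,
            pdev->subsystem_vendor, pdev->subsystem_device);
}

Userspace tools such as lspci then translate these numeric IDs into the human-readable strings above using the pci.ids database.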

Related

How should an existing kernel driver be initialized as a PCI memory-mapped device?

Existing kernel drivers such as the Xilinx ones have a specific way of being registered (as a tty device) when they are mapped directly into the CPU memory map via the device tree, as done here:
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18842249/Uartlite+Driver
But in other cases there is a PCIe device (such as an FPGA containing the Xilinx UART IPs) which is connected to the CPU.
How should we make the UART get registered when using a PCIe device?
The driver I am trying to use over PCIe is the uartlite driver:
https://github.com/Xilinx/linux-xlnx/blob/master/drivers/tty/serial/uartlite.c
I think what I probably need to do is:
Write a custom PCI driver.
Prepare a platform_device struct and then call the UART probe routine from the PCI driver:
ulite_probe(struct platform_device *pdev)
I've seen a related question from others using an FPGA with multiple devices connected, but there seems to be no document or tutorial which describes how to do this.
Any comment, example or document is appreciated.
Any comment, example or document is appreciated.
So something like an ARM CPU connected to an Artix FPGA over PCIe, right?
Yes, you would need a custom PCIe driver. The PCIe configuration and data spaces would have to be mapped. Have a look at the pci_resource_{start,len} and pci_ioremap_bar functions. You can then use pci_get_device to get a pointer to the struct pci_dev and retrieve the virtual address of the PCIe configuration space. The UART driver can then use the struct device pointer, and its register map should be at some offset from the virtual address of the PCIe configuration space as per your design. You can invoke the probe call of the UARTlite IP driver from your own driver.
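As a minimal sketch of such a glue driver: it maps nothing by hand, but carves the UART Lite register window out of BAR0 and registers a platform device that the existing uartlite platform driver can bind to. The vendor/device ID pair, the BAR offset and all my_* names below are placeholders for your design, and depending on the kernel version the uartlite probe may expect additional resources (e.g. a clock):

#include <linux/err.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

#define UART_OFFSET 0x1000  /* offset of the UART Lite IP inside BAR0, design-specific */
#define UART_SIZE   0x100

static struct platform_device *uart_pdev;

static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    struct resource res[2] = {};
    int ret;

    ret = pcim_enable_device(pdev);
    if (ret)
        return ret;

    /* UART Lite register window, carved out of BAR0 */
    res[0].start = pci_resource_start(pdev, 0) + UART_OFFSET;
    res[0].end   = res[0].start + UART_SIZE - 1;
    res[0].flags = IORESOURCE_MEM;

    /* Hand the PCI interrupt line to the UART platform device */
    res[1].start = pdev->irq;
    res[1].end   = pdev->irq;
    res[1].flags = IORESOURCE_IRQ;

    /* "uartlite" matches the platform driver name in uartlite.c,
     * so the driver core ends up calling ulite_probe for us */
    uart_pdev = platform_device_register_simple("uartlite", -1, res, 2);
    return PTR_ERR_OR_ZERO(uart_pdev);
}

static void my_remove(struct pci_dev *pdev)
{
    platform_device_unregister(uart_pdev);
}

static const struct pci_device_id my_ids[] = {
    { PCI_DEVICE(0x10ee, 0x7021) }, /* Xilinx vendor ID, hypothetical device ID */
    { }
};
MODULE_DEVICE_TABLE(pci, my_ids);

static struct pci_driver my_fpga_uart = {
    .name     = "my_fpga_uart",
    .id_table = my_ids,
    .probe    = my_probe,
    .remove   = my_remove,
};
module_pci_driver(my_fpga_uart);
MODULE_LICENSE("GPL");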
"Existing kernel drivers such as xilinx have specific way to be registered (as tty device), if they are mapped directly to cpu memory map as done here with device tree". Note that this is true if we are only talking of tty devices. A GPIO peripheral IP won't be expose as tty but in /sys/class/gpio.

Process of device driver detection in linux

I wanted to know how a device is detected in Linux. What exactly is the workflow of the device driver in device detection?
It is the kernel's job to detect devices, as it has the lowest-level access to the available hardware. As the kernel scans through the available addresses, it maintains a list of vendor and device IDs.
To use PCI bus devices as an example, there is a vendor ID and a device ID associated with every PCI device.
Device drivers are written in such a way as to identify to the kernel which kinds of devices the driver is able to control. Drivers may advertise that they can handle more than one vendor and device type combination (see the sketch below).
The kernel will allocate a driver to each device based on these IDs. A similar process is in place for USB devices. Older technologies such as legacy devices (serial ports, parallel ports, PS/2 mice/keyboards) have explicitly hardcoded methods of associating particular drivers with devices.
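For example, a PCI driver advertises the vendor/device combinations it can handle in an ID table, and the PCI core matches discovered devices against it. A sketch (the Intel NIC device IDs shown are purely illustrative):

#include <linux/module.h>
#include <linux/pci.h>

static const struct pci_device_id example_ids[] = {
    { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1502) }, /* one vendor/device pair */
    { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1503) }, /* a driver may list several */
    { }                                          /* terminating entry */
};
MODULE_DEVICE_TABLE(pci, example_ids); /* exported so the module can be autoloaded on a match */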
You can use the Linux commands lsusb and lspci to see the available devices and IDs on your system.
So, in direct answer to your question: the device driver usually does nothing to detect the device, at least in the first instance. Once the driver is associated with a device (by the kernel), the driver will likely interrogate the device further to ensure it contains the right firmware, is the right hardware revision, etc.

Creating custom virtual usb device using usb-vhci

I am new to virtual USB device simulation in Linux. So far I have installed the virtual host controller interface (vhci) libraries as per this tutorial (http://sourceforge.net/p/usb-vhci/wiki/Home/) and can see a virtual USB device being created with some typical specifications that the library implements (Bus 05 in the image, with the vendor and product IDs being "dead" and "beef" respectively).
However I want the created virtual device to have the specifications of a real device I have at hand (a mouse, for example).
So how do I enumerate and initialize a virtual USB device with the same credentials as another device?
The kernel module (vhci-hcd) is only a (virtual) host controller that you can attach virtual devices to.
If you want to emulate e.g. a mouse, you should get libusb_vhci from the same source and look into the examples. These are bare-minimum starting points that do nothing except the basic USB device handling. You'll have to extend them with all the descriptors and protocol handling for a USB HID mouse or whatever you want to emulate.
http://www.usbmadesimple.co.uk/ums_5.htm should be a good starting point.
You can use lsusb, and in particular lsusb -D, to dump the descriptors of the devices you have connected.
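For orientation, the first thing the virtual device must answer with is the 18-byte device descriptor, whose layout is fixed by the USB specification. A sketch in C, where the field values would be copied from a dump of your real mouse:

#include <stdint.h>

/* Standard USB device descriptor (USB 2.0 spec, section 9.6.1) */
struct usb_device_descriptor {
    uint8_t  bLength;         /* 18 for a device descriptor */
    uint8_t  bDescriptorType; /* 0x01 = DEVICE */
    uint16_t bcdUSB;          /* e.g. 0x0200 for USB 2.0 */
    uint8_t  bDeviceClass;    /* 0x00: class defined per interface (as for HID) */
    uint8_t  bDeviceSubClass;
    uint8_t  bDeviceProtocol;
    uint8_t  bMaxPacketSize0; /* 8 is typical for a low-speed mouse */
    uint16_t idVendor;        /* copy from lsusb -D of the real device */
    uint16_t idProduct;
    uint16_t bcdDevice;
    uint8_t  iManufacturer;   /* indices of string descriptors */
    uint8_t  iProduct;
    uint8_t  iSerialNumber;
    uint8_t  bNumConfigurations;
} __attribute__((packed));

The configuration, interface, HID and endpoint descriptors that follow it are built the same way.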

How do PCI/PCIe devices init/register themselves in the Linux kernel?

When the kernel starts up, the PCI subsystem creates a pci_bus for each physical PCI bus; each pci_bus is then added to pci_root_buses (with its PCI configuration). A PCI device driver, however, registers itself with pci_register_driver, which adds the driver to pci_bus_type.
My questions:
How does pci_bus_type know the PCI configuration?
What is the relationship between pci_bus_type and pci_root_buses?
Since the question is partially incomplete, but comments are too small to give an answer, I'll try to cover both points together.
The kernel tries to abstract the physical implementation of the PCI(e) bus away from the driver developer. A PCI bus on an NVidia Tegra is different from a PCI bus on a Freescale ARM or an x86_64 machine, but it should be possible to register devices against them regardless of the real bus implementation.
The structure pci_root_buses is a list of abstract PCI buses, where the implementation could be different.
You can see this in the bus type structure, where function pointers are defined to allow each real bus to have a different implementation of how to treat a device. I think it's best if you read the PCI chapter in LDD3, with a special look at the "Boot Time" section.
Also look at "Configuration Time" to see how the kernel matches drivers to hardware. The rough idea of PCI is that the kernel can discover the bus and map memory to each physical PCI device, allowing access to the device's PCI configuration space. The driver developer registers a device class by calling pci_register_driver, thereby telling the kernel which driver functions to use for certain vendor IDs.
Looking at LDD3 again, it seems the missing mapping you might be looking for is the probe function:
int (*probe) (struct pci_dev *dev, const struct pci_device_id *id);
Pointer to the probe function in the PCI driver. This function is called by the PCI core when it has a struct pci_dev that it thinks this driver wants to control. A pointer to the struct pci_device_id that the PCI core used to make this decision is also passed to this function. If the PCI driver claims the struct pci_dev that is passed to it, it should initialize the device properly and return 0. If the driver does not want to claim the device, or an error occurs, it should return a negative error value. More details about this function follow later in this chapter.
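To make that claiming contract concrete, a minimal hypothetical probe might look like this (demo_probe is an illustrative name, not kernel API):

static int demo_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
    int ret;

    ret = pci_enable_device(dev); /* wake the device up and enable its resources */
    if (ret)
        return ret;               /* negative: the device is not claimed */

    pr_info("claimed PCI device %04x:%04x\n", dev->vendor, dev->device);
    return 0;                     /* 0: this driver now owns the device */
}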
Kernel data structures: Bus type
Further reading:
Linux Device Drivers, 3rd edition - the PCI chapter
The in-kernel documentation about PCIe

How does the base address register get its address?

I've finished developing a PCIe driver for an FPGA under a Linux distribution. Everything works fine. But I'm wondering where the base address register (BAR) in the PCIe endpoint of the FPGA gets its base address. When I generated the PCIe endpoint, I was able to set up the length of the BAR, but nothing more.
In the PCIe driver I call the standard functions such as pci_enable_device, but I do not specifically set up a base address.
So does the operating system set up the base address during startup? Or how does it work?
As an aside, I would like to know what initialization the operating system generally performs when a PCIe device is connected, since I see my PCI device in lspci even when the driver is unloaded.
Kind regards
Thomas
The address allocation for PCI devices is generally done at the BIOS level. Let us take the x86 platform as a reference. If we look closely at the system address map, it would be something like the map shown in BIOS Disassembly Ninjutsu Uncovered, by Darmawan Salihun.
On the address map there is a dedicated space in which the PCI memory regions are mapped. The same can be seen in the output of /proc/iomem.
This implementation is platform dependent, and as the BIOS "knows" about the platform, it sets aside the addresses dedicated to the PCI slots. When a device is plugged into a slot, the BIOS interacts with the firmware on the device and actually sets up the memory regions for the device, such that the OS can make use of them.
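At the configuration-space level, a BAR is just a register at a fixed offset in the device's config header: the firmware (or OS) writes the assigned base address into it, and software can read it back at any time. A minimal sketch from within a driver, assuming BAR0 is a 32-bit memory BAR:

#include <linux/pci.h>

static void show_raw_bar0(struct pci_dev *pdev)
{
    u32 bar0;

    /* PCI_BASE_ADDRESS_0 is offset 0x10 in the config header */
    pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &bar0);

    /* The low bits encode the BAR type (memory/IO, 32/64-bit,
     * prefetchable); mask them off to get the base address. */
    pr_info("raw BAR0 = 0x%08x, base = 0x%08lx\n",
            bar0, bar0 & PCI_BASE_ADDRESS_MEM_MASK);
}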
Now coming to the driver part: in Linux, drivers follow a specific standard known as the Linux Device Model, which consists of a core layer (the PCI core), host controller drivers (PCI controllers/masters) and client drivers (PCI devices). When a PCI device (client) is plugged into a slot, the corresponding host controller detects the attachment and informs the PCI core about it, and hence the device appears in the output of lspci.
lspci shows the devices which have been identified by the host controller, whether or not they are tied to a driver. The core then traverses the drivers in the system, finds a matching one, and attaches it to the device.
So the reason you see the device in the output of lspci is that the host controller has identified the device and informed the PCI core. It doesn't matter whether any driver is attached to the device or not.
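From the driver's side, the base address is therefore never assigned by you; you only read back what was already programmed. A minimal hypothetical probe (bar_probe is an illustrative name):

static int bar_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    resource_size_t start = pci_resource_start(pdev, 0); /* assigned by firmware/OS */
    resource_size_t len   = pci_resource_len(pdev, 0);   /* length you configured in the endpoint */
    void __iomem *regs;
    int ret;

    ret = pcim_enable_device(pdev);
    if (ret)
        return ret;

    regs = pcim_iomap(pdev, 0, 0); /* managed mapping of the already-assigned BAR0 */
    if (!regs)
        return -ENOMEM;

    pr_info("BAR0 at %pa, length %pa\n", &start, &len);
    return 0;
}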
On most consumer-grade computers, BAR allocation seems to be done by the BIOS.
I suppose that on a hotplug-capable architecture this must be done, or at least triggered, by the OS.
