How does the Linux Operating System understand the underlying hardware?

I want to learn how the Linux OS understands the underlying hardware. Can anyone suggest where to start? As of now I just know that the /dev sub-directory plays a vital role in this.
It holds the device special files, which act as a portal to the device driver, which in turn talks to the physical device.
I read somewhere that the udev daemon listens on a netlink socket to collect this information, and that the udev device manager detects the addition and removal of devices as they occur.
But even with these pieces, I am still not satisfied that I understand how Linux actually reads the hardware.
Please let me know where to start to understand this; I am thankful to anyone trying to help.

I think you first need to find out how memory mapping works: what an address space is and how it relates to physical memory. Then you can read about how hardware is mapped into the address space and how to access it. It is a large amount of documentation to read.
Some of this information is in the Linux Documentation Project.
Additionally, some knowledge of electronics would be helpful.
In general, Linux needs some "channel" of communication with a device. This channel may be a bus such as ISA, PCI, or USB. PCI devices, for example, are memory-mapped, and the Linux kernel communicates with them via memory accesses. So Linux first needs to see a given device in some memory area; only then can it configure the device and communicate with it.
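To make this concrete, here is a minimal sketch of what such memory-mapped access looks like inside a kernel PCI driver; the device, the choice of BAR 0, and the register offset are hypothetical, and error handling is trimmed:

    /* Minimal sketch: memory-mapped access to a PCI device's registers
     * from a kernel driver. Device, BAR number, and register offset
     * are hypothetical; error paths are trimmed for brevity. */
    #include <linux/pci.h>
    #include <linux/io.h>

    #define STATUS_REG 0x04   /* hypothetical register offset in BAR 0 */

    static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        void __iomem *regs;
        u32 status;

        if (pci_enable_device(pdev))
            return -ENODEV;

        /* Map BAR 0 into the kernel's virtual address space. */
        regs = pci_iomap(pdev, 0, 0);
        if (!regs)
            return -ENOMEM;

        /* Ordinary memory accesses (via readl/writel) now reach the device. */
        status = readl(regs + STATUS_REG);
        pr_info("demo: status register = 0x%08x\n", status);

        pci_iounmap(pdev, regs);
        return 0;
    }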
USB devices are a little more complicated because they are not memory-mapped. You must first configure the USB host controller before you can communicate with USB devices; every communication with a USB device then goes through that host.
There are also devices that are not connected via ISA, PCI, or USB. They are wired directly to the processor and visible at some fixed memory address. This design is common in embedded systems; ARM processors, for example, use this approach.
Regarding udev: it is a user-space application that listens for events from the Linux kernel and helps other applications recognize device addition and configuration.
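For illustration, here is a minimal sketch of the mechanism udev itself relies on: a small user-space program that binds to the kernel's uevent netlink socket and prints the raw messages. This is not udev's code, just the same socket interface:

    /* Minimal sketch of how udev receives kernel uevents: bind to the
     * NETLINK_KOBJECT_UEVENT socket and print the "add@/devices/..."
     * style headers the kernel broadcasts on hotplug events. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_nl addr = {
            .nl_family = AF_NETLINK,
            .nl_pid    = 0,      /* let the kernel assign an id */
            .nl_groups = 1,      /* kernel uevent multicast group */
        };
        char buf[4096];
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_KOBJECT_UEVENT);

        if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("netlink");
            return 1;
        }
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
            if (n <= 0)
                break;
            buf[n] = '\0';
            /* The message is NUL-separated; this prints its first
             * string, e.g. "add@/devices/...". */
            printf("uevent: %s\n", buf);
        }
        close(fd);
        return 0;
    }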

Related

Does a HDD/SSD use 1 or 2 device controllers to connect to the OS?

I am studying Operating Systems, and came across device controllers.
I gathered that a device controller is hardware whereas a device driver is software.
I also know that an HDD and an SSD both have a small PCB built into them, and I assume those PCBs are the device controllers.
Now what I want to know is if there is another device controller on the PC/motherboard side of the bus/cable connecting the HDD/SSD to the OS?
Is the configuration: OS >> Device Driver >> Bus >> Device Controller >> HDD/SSD
Or is it: OS >> Device Driver >> Device Controller >> Bus >> Device Controller >> HDD/SSD
Or is it some other configuration?
Sites I visited for answers:
Tutorialspoint
JavaPoint
Idc online
Quora
Most hard disks in desktops are SATA or NVMe. eMMC is popular for smartphones, though some use something else. These are hardware interface standards that describe how to interact electrically with those disks: they tell you what voltage, at what frequency, and for what amount of time you need to apply (a signal) to a certain pin (a bus line) to make the device behave or react in a certain way.
A computer's hardware is split across a few external chips. On a desktop these are mostly SATA, NVMe, DRAM, USB, audio output, the network card, and the graphics card. Even though there are only a few chips, the CPU would be very expensive if it had to support all those hardware interface standards on the same silicon die. Instead, the CPU implements PCI/PCIe as a general interface for interacting with all those chips through memory-mapped registers. Each of these devices has an external PCIe controller sitting between the device and the CPU. In the same order as above, these are AHCI (SATA), an NVMe controller, the DRAM controller (not PCI, and inside the CPU), xHCI (USB, almost everywhere), and Intel HDA (audio, as one example). Network cards are PCIe devices themselves, with no separate controller outside the card, and graphics cards are likewise self-standing PCIe devices.
So the OS detects the registers of those devices mapped into the address space. When the OS writes to those locations, it is writing the devices' registers. PCIe devices can also read/write DRAM directly, but this is managed by the CPU as part of its general implementation of the PCIe standard, most likely with some bus arbitration. The CPU really doesn't care what device it is writing to: it knows there is a PCI register there, the OS instructs it to write something, so it does. It just happens that the device implements a standard, and the OS developers have read that standard, so they write the proper values into those registers and the proper data structures into DRAM to make sure the device knows what to do.
Drivers implement the software-interface standard of those controllers. The drivers are what instruct the CPU which values to write, and they build the proper data structures in DRAM for giving commands to the controllers. A user thread simply places the syscall number in a conventional register chosen by the OS developers and executes an instruction that jumps into the kernel at a specific address, which the kernel picked at boot by writing a register. Once there, the kernel looks at the register holding the number and determines which driver to call based on the operation.
On Linux and some other systems, this is done with files. You make syscalls on files, and the OS has a driver attached to each file; these are called virtual files. A lot of transfer mechanisms resemble the read/write-a-file pattern, so Linux uses that to build a general driver model in which the kernel doesn't even need to understand the driver. The driver essentially says: create a file for me over there that isn't really on the hard disk, and if someone opens it and calls an operation on it, call this function here in my driver. From there, the driver can do whatever it wants because it runs in kernel mode: it creates the proper data structures in DRAM and writes the registers of the device it drives to make it do something.
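As a rough illustration of that "create me a file" model, here is a minimal kernel-module sketch; the /dev/demo name, the message, and the function names are all hypothetical:

    /* Minimal sketch of the virtual-file driver model described above:
     * register /dev/demo and supply the function the kernel calls when
     * user space read()s that file. All names are hypothetical. */
    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/uaccess.h>

    static ssize_t demo_read(struct file *f, char __user *buf,
                             size_t len, loff_t *off)
    {
        const char msg[] = "hello from the driver\n";

        if (*off >= sizeof(msg))
            return 0;                       /* EOF */
        if (len > sizeof(msg) - *off)
            len = sizeof(msg) - *off;
        if (copy_to_user(buf, msg + *off, len))
            return -EFAULT;
        *off += len;
        return len;
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,                 /* called on read(2) */
    };

    static struct miscdevice demo_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "demo",                    /* appears as /dev/demo */
        .fops  = &demo_fops,
    };

    static int __init demo_init(void) { return misc_register(&demo_dev); }
    static void __exit demo_exit(void) { misc_deregister(&demo_dev); }
    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");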

How to detect when a usb cable is connected/disconnected on the device side in Linux 2.6.37?

I have an embedded device that runs Linux 2.6.37.
I want my application to know when the USB is connected.
Currently I can achieve this by polling
/sys/devices/platform/musb/musb-hdrc.0/vbus.
However, this approach does not distinguish between a USB charger and a USB host.
I found this udev approach, but I don't think it's available in my version of the kernel, because I did not find any USB-related nodes in my /dev. This discussion also suggests that it might not be feasible, either.
I also found Linux hotplug and tried the netlink example, but I didn't see any output from the example when I connect/disconnect the USB cable.
What I want to do is detect the connection type on the device when USB is connected: prepare (unmount the file system) and switch to g_file_storage if the device is connected to a host, and do nothing if it is connected to a charger.
How shall I achieve this?
To achieve that, you can use the inotify(7) facility, available in all Linux kernels, to be woken as soon as a device node gets created in /sys.
To know what type of device you have, you have to read the USB info via the proper USB ioctl call (or, if you are not a kernel-interface expert, via the libusb interface) to get the device vendor, device ID, and device class fields from the device. Normally the hotplug software is informed of this class of events via a special socket. The most probable reason your device is not being initialized properly is some misconfiguration in the config files of the udev system, which normally has one entry for each possible vendor/device ID pair and uses it to load the appropriate device driver. The process continues with the device driver module dynamically creating the actual devices, which then appear in the /dev filesystem as a consequence of another kernel event delivered to udevd.
Read the appropriate documents in <linux_src>/Documentation (this directory belongs to the Linux kernel source code, so you'll probably need to install it) and the udevd(8) man pages to learn how to add a new USB device.
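Since the answer mentions libusb, a short sketch of reading the vendor, device ID, and class fields through the libusb-1.0 API might look like this (link with -lusb-1.0):

    /* Sketch of reading USB device identity with libusb-1.0, as the
     * answer suggests: list all devices and print vendor/product ID
     * and device class. */
    #include <stdio.h>
    #include <libusb-1.0/libusb.h>

    int main(void)
    {
        libusb_device **list;
        ssize_t i, n;

        if (libusb_init(NULL) != 0)
            return 1;
        n = libusb_get_device_list(NULL, &list);
        for (i = 0; i < n; i++) {
            struct libusb_device_descriptor d;
            if (libusb_get_device_descriptor(list[i], &d) == 0)
                printf("%04x:%04x class %02x\n",
                       d.idVendor, d.idProduct, d.bDeviceClass);
        }
        libusb_free_device_list(list, 1);  /* also unref the devices */
        libusb_exit(NULL);
        return 0;
    }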
On a 2.6.37 kernel, this can be done by polling
/sys/devices/platform/musb-omap2430.0/musb-hdrc.0/mode
If the handshake with the host is successful, it reads "peripheral"; if it fails, it reads "idle".
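A minimal sketch of that polling loop, assuming the sysfs path above and an arbitrary one-second interval, could look like this:

    /* Sketch of the polling approach from the answer: re-read the musb
     * "mode" attribute once a second and report whether the handshake
     * with a host succeeded. The path is the one given above. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define MODE_PATH "/sys/devices/platform/musb-omap2430.0/musb-hdrc.0/mode"

    int main(void)
    {
        char mode[32];

        for (;;) {
            FILE *f = fopen(MODE_PATH, "r");
            if (f && fgets(mode, sizeof(mode), f)) {
                if (strncmp(mode, "peripheral", 10) == 0)
                    printf("connected to a host\n");      /* handshake OK */
                else
                    printf("idle (charger or nothing)\n");
            }
            if (f)
                fclose(f);
            sleep(1);  /* sysfs attributes must be re-read, not cached */
        }
    }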

How does the base address register get its address?

I've finished developing a PCIe driver for an FPGA under a Linux distribution. Everything works fine, but I'm wondering where the base address register (BAR) in the PCIe endpoint of the FPGA gets its base address. When I generated the PCIe endpoint I was able to set the length of the BAR, but nothing more.
In the PCIe driver I call the standard functions like pci_enable_device, but I never set a base address explicitly.
So does the operating system set up the base address during startup? Or how does it work?
As an aside, I would like to know what initialization the operating system generally performs when a PCIe device is connected, since I see my PCI device in lspci even when the driver is unloaded.
Kind regards
Thomas
The address allocation for PCI devices is generally done at the BIOS level. Let us refer to the x86 platform. If we look closely at the system address map, it looks something like the figure in BIOS Disassembly Ninjutsu, by Darmawan Salihun.
In that address map there is a dedicated space for mapping PCI memory regions. You can see the same thing in the output of /proc/iomem.
This layout is platform dependent, and as the BIOS "knows" the platform, it sets aside the addresses dedicated to the PCI slots. When a device is plugged into a slot, the BIOS interacts with the firmware on the device and actually sets up the memory regions for the device, so that the OS can make use of them.
Now to the driver part. In Linux, drivers follow a specific standard known as the Linux Device Model, which consists of a core layer (the PCI core), host controller drivers (PCI controllers/masters), and client drivers (PCI devices). When a PCI device (client) is plugged into a slot, the corresponding host controller knows about the attachment and informs the PCI core about it, and hence the device appears in the output of lspci.
lspci shows the devices which are identified by the host controller, in which case, it may or may not be tied to a driver. The core further traverses the drivers in the system, finds a matching one, and attaches to this device.
So, the reason you are seeing the device in the output of lspci is because the host controller has identified the device, and has informed the PCI core. It doesn't matter even if any driver is attached to the device or not.
On most consumer-grade computers, BAR allocation seems to be done in the BIOS.
I suppose that in a hotplug-capable architecture this must be done, or at least triggered, by the OS.
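To illustrate that a driver only reads, and never chooses, the BAR address, here is a hedged probe-function sketch using the standard PCI helpers; the fpga_ names are hypothetical:

    /* Sketch of how a driver discovers the BAR address that the BIOS
     * or kernel assigned: it only reads the values; it never picks
     * them. Names are hypothetical. */
    #include <linux/pci.h>

    static int fpga_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        resource_size_t start, len;

        if (pci_enable_device(pdev))
            return -ENODEV;

        /* The address below was written into the BAR by firmware (or
         * by the kernel on rescan/hotplug) long before probe ran. */
        start = pci_resource_start(pdev, 0);
        len   = pci_resource_len(pdev, 0);
        pr_info("fpga: BAR0 at %pa, %pa bytes\n", &start, &len);
        return 0;
    }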

How does the USB storage driver work in Linux?

I am trying to find out a high-level overview of how the USB storage driver works in Linux. I'm looking for a simple article or even a picture/flowchart describing how it works.
Basically, I'm looking to get these questions answered:
When you plug the device into your computer, what happens? Is there a daemon that picks up on it, or does the event trigger an interrupt somewhere?
Does the core USB driver read information about the device before passing control over to the USB storage driver? How does it decide what type of device it is?
How does the device get mounted, and what allows it to communicate with the computer's filesystem?
When I copy a file, what does the data flow look like in the kernel?
I hope the question isn't too vague - I tried Google to no avail, so I'm wondering if anyone knows any articles or diagrams that can explain this, or perhaps if they can explain it themselves without too much effort. Thanks.
No, it is a very good question.
Block writes in Linux go through the block device layer; filesystems work on top of this block-device layer.
When this layer wants to write something out, it tells the driver of the USB master device, and this driver talks to the USB controller chip on the motherboard.
This chip is fairly simple: USB is practically a serial port with a lot of extensions, mainly targeting autoconfiguration and power management. But basically, you can write out bytes and read in bytes.
Your questions:
When you plug the device into your computer, what happens? Is there a daemon that picks up on it, or does the event trigger an interrupt somewhere?
The device (the USB slave) tells the master (on the motherboard): "I am here." The USB controller chip receives the message and passes it to the kernel, normally with an interrupt. The kernel reinitializes and rescans the USB bus, and tells udev: "here is a new 1234:5678 USB device at position 1.3.5 of the USB tree".
"How does it decide what type of device it is?"
USB devices have a vendor and a model ID, and they report them when asked. Google for "usb ids".
"How does the device get mounted, and what allows it to communicate with the computer's filesystem?"
The kernel only loads the driver and tells udev (which runs in user space): "here is a new block device with device number 22:16". From there, udev has it mounted by some user-space daemon; the details are distribution-dependent.
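As a sketch of that last step, here is roughly what a user-space listener built on libudev looks like; an automounter would do something similar before calling mount. The "block" subsystem filter and the output format are assumptions (link with -ludev):

    /* Sketch of the user-space side described above: a libudev monitor
     * that wakes up when the kernel announces a new block device. */
    #include <stdio.h>
    #include <poll.h>
    #include <libudev.h>

    int main(void)
    {
        struct udev *udev = udev_new();
        struct udev_monitor *mon =
            udev_monitor_new_from_netlink(udev, "udev");
        struct pollfd pfd;

        udev_monitor_filter_add_match_subsystem_devtype(mon, "block", NULL);
        udev_monitor_enable_receiving(mon);
        pfd.fd = udev_monitor_get_fd(mon);
        pfd.events = POLLIN;

        for (;;) {
            if (poll(&pfd, 1, -1) <= 0)   /* wait for an event */
                continue;
            struct udev_device *dev = udev_monitor_receive_device(mon);
            if (!dev)
                continue;
            const char *node = udev_device_get_devnode(dev);
            /* e.g. "add: /dev/sdb1" when a stick is plugged in */
            printf("%s: %s\n", udev_device_get_action(dev),
                   node ? node : "(no node)");
            udev_device_unref(dev);
        }
    }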

Linux user space PCI driver

I'm trying to write a PCI device driver that runs in user space. Not my idea; it's what the client wants. The target is an embedded Linux board that will never have more than a single user. I'm an experienced C programmer and know Linux, just not familiar with Linux driver development.
Is this really a device driver or just a library? Do I need to use the typical calls (pci_register_driver, etc.), or can I just access the device using fopen, then mmap and ioperm to get at it?
Interrupts will be done using the MSI model. Also need to handle DMA transfers. The device will be streaming lots of data to the user.
There's not much info out there on this subject, LDD3 only devotes a couple of pages to it, and there's nothing else that I could find here on SO.
Thanks in advance!
If there is no driver handling the PCI card, it is possible to access it using ioperm (or iopl, depending on the address), provided only port accesses are required.
Using DMA and interrupts is definitely impossible without a kernel-mode driver.
By googling I found some text about a "generic kernel-mode driver" that allows writing user-mode drivers, including DMA and interrupts (this sounds like the UIO framework).
You should ask your customer what kind of kernel-mode driver for accessing PCI cards is installed on the Linux board.
There is now a proper way to write high-performance user-space PCI drivers, called VFIO. There is not much documentation, but see the kernel docs at http://lxr.free-electrons.com/source/Documentation/vfio.txt and the header file /usr/include/linux/vfio.h. It has been available since Linux 3.6.
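For register-only access there is also a simpler route: mmap'ing the device's BAR through its sysfs resource file. A hedged sketch follows; the bus address 0000:01:00.0 is hypothetical, this usually needs root, and DMA/MSI still require vfio or a kernel driver, as noted above:

    /* Sketch of register-only user-space access through sysfs: mmap
     * the device's resource0 file (BAR 0) and read a register. Bus
     * address is hypothetical; assumes the BAR is at least one page. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define BAR0 "/sys/bus/pci/devices/0000:01:00.0/resource0"

    int main(void)
    {
        int fd = open(BAR0, O_RDWR | O_SYNC);
        volatile uint32_t *regs;

        if (fd < 0)
            return 1;
        regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED)
            return 1;
        /* Loads and stores through this pointer hit BAR 0 directly. */
        printf("register 0 = 0x%08x\n", regs[0]);
        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }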
