How does an x86-type CPU know it is communicating with the right Linux IO device?

How does an x86-type (32- or 64-bit) CPU know it is communicating with the right Linux IO device?
For example, suppose I have a SCSI storage device plugged into a PCI bus. How is the CPU able to detect this device and send it the right commands? Knowing the hardware answer will help with software programming.

Related

Does a HDD/SSD use 1 or 2 device controllers to connect to the OS?

I am studying Operating Systems, and came across device controllers.
I gathered that a device controller is hardware whereas a device driver is software.
I also know that an HDD and an SSD both have a small PCB built into them, and I assume those PCBs are the device controllers.
Now what I want to know is if there is another device controller on the PC/motherboard side of the bus/cable connecting the HDD/SSD to the OS?
Is the configuration: OS >> Device Driver >> Bus >> Device Controller >> HDD/SSD
Or is it: OS >> Device Driver >> Device Controller >> Bus >> Device Controller >> HDD/SSD
Or is it some other configuration?
Sites I visited for answers:
Tutorialspoint
JavaPoint
Idc online
Quora
Most hard disks in desktops are SATA or NVMe. eMMC is popular for smartphones, but some might use something else. These are hardware interface standards that describe how to interact electrically with those disks. They tell you what voltage, at what frequency, and for what amount of time you need to apply a signal to a certain pin (a bus line) to make the device behave or react in a certain way.
Most computers split this work across a few external chips. On a desktop, these are mostly SATA, NVMe, DRAM, USB, audio output, the network card, and the graphics card. Even though there are only a few chips, the CPU would be very expensive if it had to support all of those hardware interface standards on the same silicon die. Instead, the CPU implements PCI/PCIe as a general interface and interacts with all those chips using memory-mapped registers. Each of these devices has a PCIe controller between the device and the CPU. In the same order as above, you have AHCI, an NVMe controller, the DRAM controller (not PCI, and inside the CPU), xHCI (almost everywhere), and Intel HDA (as an example). Network cards are PCIe devices themselves, so there isn't really a controller outside the card, and graphics cards are also self-standing PCIe devices.
So the OS detects the registers of those devices that are mapped into the address space. When the OS writes to those locations, it is writing the registers of the devices. PCIe devices can also read/write DRAM directly, but this is managed by the CPU as part of its general implementation of the PCIe standard, most likely by doing some bus arbitration. The CPU really doesn't care which device it is writing to: it knows there is a PCI register there, the OS instructs it to write something, so it does. It just happens that the device is an implementation of a standard, and the OS developers read that standard, so they write the proper values into those registers and the proper data structures into DRAM to make sure the device knows what to do.
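To make the memory-mapped-register idea concrete, here is a minimal user-space sketch that maps a PCI device's first BAR through sysfs and reads a register. The address 0000:03:00.0 is a made-up example, and a real driver would do the equivalent in kernel space with ioremap(); the point is only that, once mapped, a device register access is an ordinary memory access.

    /* Minimal sketch: map the first 4 KiB of a PCI device's memory BAR
     * (exposed by the kernel as .../resource0) and read a register.
     * 0000:03:00.0 is a hypothetical device address; run as root. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/sys/bus/pci/devices/0000:03:00.0/resource0";
        int fd = open(path, O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        /* Map the BAR into our address space. */
        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        /* A plain load is now a read of the device register at offset 0. */
        printf("register 0x00 = 0x%08x\n", regs[0]);

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }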
Drivers implement the standard software interface of those controllers. The drivers are the ones instructing the CPU which values to write and building the proper data structures in DRAM to give commands to the controllers. A user thread simply places the syscall number in a conventional register determined by the OS developers and executes an instruction that jumps into the kernel at a specific address, which the kernel chose by writing a register at boot. Once there, the kernel looks at the register holding the number and determines which driver to call based on the operation.
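As an illustration of that convention on x86-64, here is a minimal sketch of issuing a raw write() system call by hand: the syscall number goes in rax (1 is write), the arguments in rdi, rsi, and rdx, and the syscall instruction jumps to the entry point the kernel installed in the LSTAR MSR at boot. Normally libc wraps this for you; the sketch just makes the mechanism visible.

    /* Raw x86-64 syscall sketch: number in rax, arguments in rdi/rsi/rdx,
     * `syscall` enters the kernel at the address it put in MSR_LSTAR. */
    #include <stddef.h>

    static long raw_write(int fd, const void *buf, size_t len)
    {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(1L),                     /* __NR_write */
                            "D"((long)fd), "S"(buf), "d"(len)
                          : "rcx", "r11", "memory");
        return ret;
    }

    int main(void)
    {
        const char msg[] = "hello from a raw syscall\n";
        raw_write(1, msg, sizeof msg - 1);               /* fd 1 is stdout */
        return 0;
    }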
On Linux and elsewhere, this is done with files. You call syscalls on files, and the OS has a driver attached to each file; these are called virtual files. A lot of transfer mechanisms resemble the read/write-a-file pattern, so Linux uses that to build a general driver model where the kernel doesn't even need to understand the driver. The driver just says: create a file for me over there that isn't really on the hard disk, and if someone opens it and calls an operation on it, call this function here in my driver. From there, the driver can do whatever it wants because it is in kernel mode. It creates the proper data structures in DRAM and writes the registers of the device it drives to make it do something.
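A minimal sketch of that "create a file for me" idea, assuming a made-up device name demo_dev: a kernel module registers a misc character device and hands the kernel a file_operations table, so reads on /dev/demo_dev land in the driver's own function. A real driver would program hardware registers there instead of returning a fixed string.

    /* Minimal misc character device: the kernel creates /dev/demo_dev and
     * calls demo_read() whenever user space reads that file. */
    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>

    static ssize_t demo_read(struct file *f, char __user *buf,
                             size_t len, loff_t *off)
    {
        static const char msg[] = "hello from the driver\n";

        /* simple_read_from_buffer handles offsets and copy_to_user. */
        return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
    };

    static struct miscdevice demo_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "demo_dev",          /* appears as /dev/demo_dev */
        .fops  = &demo_fops,
    };

    static int __init demo_init(void)
    {
        return misc_register(&demo_dev);
    }

    static void __exit demo_exit(void)
    {
        misc_deregister(&demo_dev);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");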

Some problems about PCI, IO devices, CPU, and buses

I've recently been learning to write an x86 operating system, but I've run into a few problems.
Is the PCI bus an IO device that is connected to the south bridge chip?
I know I can read and write the hard disk with in/out instructions to registers 0x1f0~0x1f7. Is the hard disk connected to the southbridge chip?
Nowadays, many IO devices seem to be PCI devices, which are inserted into slots on the PCI bus, and through the PCI configuration space the CPU can access the PCI devices through MMIO or IO ports, right? (A sketch of the port method follows below.)
Nowadays, graphics cards seem to use the PCI interface and are inserted into a PCI slot. Before the emergence of the PCI bus, was the graphics card connected to the southbridge chip?
thanks!!!🙏
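On the configuration-space point: yes, on x86 the CPU can reach PCI configuration space either through the legacy IO ports 0xCF8/0xCFC or through a memory-mapped window (ECAM). Below is a minimal user-space sketch of the legacy port method that reads the vendor and device ID of bus 0, device 0, function 0; it assumes an x86 machine, root privileges, and a platform that still exposes the legacy mechanism.

    /* Legacy PCI configuration access: write a bus/device/function/register
     * address to port 0xCF8, then read the 32-bit value from port 0xCFC. */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/io.h>

    static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev,
                                   uint8_t fn, uint8_t reg)
    {
        uint32_t addr = (1u << 31)               /* enable bit */
                      | ((uint32_t)bus << 16)
                      | ((uint32_t)dev << 11)
                      | ((uint32_t)fn  << 8)
                      | (reg & 0xFC);            /* dword-aligned offset */
        outl(addr, 0xCF8);
        return inl(0xCFC);
    }

    int main(void)
    {
        if (iopl(3) != 0) { perror("iopl"); return 1; }

        /* Register 0x00 of bus 0, device 0, function 0: vendor/device ID. */
        uint32_t id = pci_cfg_read32(0, 0, 0, 0x00);
        printf("vendor 0x%04x device 0x%04x\n", id & 0xFFFF, id >> 16);
        return 0;
    }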

How does the Linux Operating System understand the underlying hardware?

I want to learn how the Linux OS understands the underlying hardware. Can anyone suggest where to start to get this understanding? As of now, I just know that the '/dev' sub-directory plays a vital role in that.
It holds the device special files, which are like a portal to the device driver, which then takes the request to the physical device.
I read somewhere that the udev daemon listens on a netlink socket to collect this information, and that the udev device manager detects the addition and removal of devices as they occur.
But with these, I am still not satisfied with my understanding of how Linux reads the hardware.
Please let me know where to start to understand this; I am thankful to anyone trying to help.
I think you first need to find out how memory mapping works: what the address space is and how it relates to physical memory. Then you can read about how the hardware is mapped into the address space and how to access it. It is a large amount of documentation to read.
Some of that information is in the Linux Documentation Project.
Additionally, some knowledge of electronics would be helpful.
In general, to communicate with devices Linux needs some "channel" of communication. This channel may be, for example, an ISA, PCI, or USB bus. PCI devices, for instance, are memory-mapped devices, and the Linux kernel communicates with them via memory accesses. So first Linux needs to see a given device in some memory area, and then it is able to configure the device and communicate with it.
In the case of USB devices it is a little more complicated, because USB devices are not memory-mapped. You need to configure the USB host first to be able to communicate with USB devices. All communication with a USB device goes through the USB host.
There are also devices which are not connected via ISA, PCI, or USB. They are connected directly to the processor and are visible at some memory address. This approach is usually used in embedded devices; for example, ARM processors use it.
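For such directly mapped peripherals, access really is just a load or store at a known physical address. Here is a minimal user-space sketch using /dev/mem; the base address 0x10000000 is purely hypothetical, and the kernel must have /dev/mem enabled (embedded drivers would instead ioremap() the address taken from the device tree).

    /* Map one page of physical address space through /dev/mem and read a
     * (hypothetical) peripheral register. Run as root. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const off_t phys = 0x10000000;   /* made-up peripheral base address */

        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, phys);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        printf("register at offset 0 = 0x%08x\n", regs[0]);

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }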
Regarding udev: it is a user-space application that listens for events from the Linux kernel and helps other applications recognize device addition and configuration.
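Roughly what udev does under the hood is open a NETLINK_KOBJECT_UEVENT socket and parse the uevent messages the kernel broadcasts when devices are added or removed. A minimal sketch that just prints the raw events (run as root and plug in a USB device to see output):

    /* Listen on the kernel uevent netlink socket and print each message. */
    #include <linux/netlink.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_KOBJECT_UEVENT);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_nl addr;
        memset(&addr, 0, sizeof(addr));
        addr.nl_family = AF_NETLINK;
        addr.nl_groups = 1;              /* kernel uevent multicast group */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        char buf[4096];
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
            if (n <= 0)
                break;
            buf[n] = '\0';
            /* Each message is a set of NUL-separated strings; printing the
             * buffer shows the first one, e.g. "add@/devices/...". */
            printf("%s\n", buf);
        }
        close(fd);
        return 0;
    }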

Does Linux automatically bind IRQs to the NUMA nodes to which the PCIe devices are connected?

As we know, we can map the IRQs of certain devices to certain CPU cores by using IRQ affinity on Linux:
echo <8-bit-core-mask> > /proc/irq/[irq-num]/smp_affinity
http://www.alexonlinux.com/smp-affinity-and-proper-interrupt-handling-in-linux
https://community.mellanox.com/docs/DOC-2123
Also
We also know that we can map IRQs (hardware interrupts) onto certain CPU nodes (processors on the motherboard) on NUMA systems, by using: https://events.linuxfoundation.org/sites/events/files/eeus13_shelton.pdf
echo <8-bit-node-mask> > /proc/irq/[irq-num]/node
But if one PCIe device (Ethernet, GPU, ...) is connected to NUMA node 0 and another PCIe device is connected to NUMA node 1, then it would be optimal to handle interrupts on the NUMA nodes (CPUs) to which those devices are connected, to avoid high-latency communication between nodes: Is CPU access asymmetric to Network card
Does Linux automatically bind IRQs to the nodes to which the PCIe devices are connected, or does it have to be done manually?
And if we have to do this by hand, what is the best way to do it?
Particularly interested in Linux x86_64: Debian 8 (Kernel 3.16) and Red Hat Enterprise Linux 7 (Kernel 3.10), and others...
Motherboard chipsets: Intel C612 / Intel C610, and others...
Ethernet cards: Solarflare Flareon Ultra SFN7142Q Dual-Port 40GbE QSFP+ PCIe 3.0 Server I/O Adapter - Part ID: SFN7142Q
By architecture, all low IRQs are mapped to node 0.
Some of them CAN'T be remapped, such as IRQ 0 (the timer).
In any case, you need to review your system (blueprints).
If you have a high network load and are doing routing, it makes sense to pin the NIC queues, most effectively pinning the TX and RX queues to the "nearest" cores in terms of caches (a sketch of doing this by hand follows after this answer). But before suggesting that, it would be great to know your architecture.
We need to know:
1. Your system (dmidecode and lspci output, cat /proc/interrupts)
2. Your requirements (what the server is for). In other words, it would be great to understand what your server does, so just explain the flows and the architecture.
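As a sketch of the manual pinning mentioned above: read the device's NUMA node from sysfs, then write a CPU list for that node into the IRQ's smp_affinity_list. The PCI address 0000:81:00.0, IRQ 120, and the CPU ranges are made-up examples; take the real values from lspci, /proc/interrupts, and /sys/devices/system/node/node<N>/cpulist.

    /* Pin a (hypothetical) IRQ to the NUMA node of a (hypothetical) NIC. */
    #include <stdio.h>

    int main(void)
    {
        /* 1. Ask sysfs which NUMA node the device is attached to. */
        FILE *f = fopen("/sys/bus/pci/devices/0000:81:00.0/numa_node", "r");
        if (!f) { perror("numa_node"); return 1; }
        int node = -1;
        fscanf(f, "%d", &node);
        fclose(f);
        printf("device is on NUMA node %d\n", node);

        /* 2. Pin IRQ 120 to CPUs on that node; "8-15" and "0-7" are example
         *    ranges, adjust them to your topology. */
        f = fopen("/proc/irq/120/smp_affinity_list", "w");
        if (!f) { perror("smp_affinity_list"); return 1; }
        fputs(node == 1 ? "8-15" : "0-7", f);
        fclose(f);
        return 0;
    }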

How to fix PCI enumeration? How to fix where a device is mapped?

I have an embedded system with two PCI devices. I want to always map those devices to the same place. I know that the BIOS can do it, but what I want is to do it from Linux.
In the BIOS, the steps are:
https://superuser.com/questions/595672/how-is-memory-mapped-to-certain-hardware-how-is-mmio-accomplished-exactly
1. The BIOS discovers all the devices on the system.
2. Then it interrogates each device to decide whether the BIOS will set that device up and, if so, to determine how much memory address space, if any, the device needs.
3. The BIOS then assigns space to each device and programs the address decoder by writing to its BAR (base address register).
What I want is to do this when Linux initializes. I am using a PowerPC and Linux (kernel 3.XX).
Thanks!
You could ask the kernel to enumerate the bus again; check the PCIe hotplug implementation in Linux.
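From user space, the simplest way to trigger re-enumeration is through sysfs: remove the device and then ask the PCI core to rescan the bus, at which point it re-reads and re-assigns the BARs. The device address 0000:01:00.0 below is an example; note that this re-runs enumeration, but where the addresses end up still depends on how the platform (firmware or the PowerPC device tree) set up the host bridge windows. A minimal sketch:

    /* Detach one PCI device and rescan the bus via sysfs. Run as root. */
    #include <stdio.h>

    static int write_str(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        fputs(val, f);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* 0000:01:00.0 is an example address; use lspci to find yours. */
        write_str("/sys/bus/pci/devices/0000:01:00.0/remove", "1");
        write_str("/sys/bus/pci/rescan", "1");
        return 0;
    }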

Resources