I/O Bus Standards & Device Controllers

I'm really confused about bus standards like IDE, ATA, SATA, PCI, etc.
I have just read this article: https://superuser.com/questions/350582/when-a-disk-read-or-disk-write-occurs-where-does-the-data-go/350592#350592
So if we talk about a hard disk drive: the HDD logic board contains a microcontroller, buffer RAM, a motor driver, etc. This microcontroller communicates with the motor driver for reading and writing sectors on the HDD platters. As far as I know, a microcontroller is a combination of CPU, registers, I/O ports, RAM, etc. There must also be firmware inside the microcontroller.
My first question is: how is the HDD microcontroller's clock frequency determined?
And, according to the article above, why is there a term like "SATA drive"? I mean, if ATA, SATA, etc. are just bus interfaces between the CPU and device controllers, why do the words "ATA", "SATA" or "PCI" become a prefix for peripheral devices?
I really want to understand communication with peripheral devices deeply. From the article above I understood that two separate communications occur when we want to read sectors from the HDD:
the first is between the CPU and the device controller, and the second is between the device controller and the HDD. So how do these two separate communications work with each other?
Finally, if ATA or SATA are interfaces that just act as the communication gateway between the CPU/memory (DMA) and the device controller, why is this interface slower than the front-side bus (FSB)? I mean, speaking of a DMA transfer: after the disk controller reads one sector from the HDD, it must transfer that sector to memory, right? So why are these slower bus interfaces used for communication between memory and device controllers?

Related

Does a HDD/SSD use 1 or 2 device controllers to connect to the OS?

I am studying operating systems and came across device controllers.
I gathered that a device controller is hardware, whereas a device driver is software.
I also know that an HDD and an SSD both have a small PCB built into them, and I assume those PCBs are the device controllers.
Now what I want to know is whether there is another device controller on the PC/motherboard side of the bus/cable connecting the HDD/SSD to the OS.
Is the configuration: OS >> Device Driver >> Bus >> Device Controller >> HDD/SSD
Or is it: OS >> Device Driver >> Device Controller >> Bus >> Device Controller >> HDD/SSD
Or is it some other configuration?
Sites I visited for answers:
Tutorialspoint
JavaPoint
Idc online
Quora
Most hard disks in desktops are SATA or NVMe. eMMC is popular for smartphones, but some use something else. These are hardware interface standards that describe how to interact electrically with those disks: they tell you what voltage, at what frequency, and for what amount of time you need to apply (a signal) to a certain pin (a bus line) to make the device behave or react in a certain way.
Most computers are split across a few external chips. On a desktop these are mostly SATA, NVMe, DRAM, USB, audio output, the network card and the graphics card. Even though there are only a few chips, the CPU would be very expensive if it had to support all those hardware interface standards on the same silicon chip. Instead, the CPU implements PCI/PCIe as a general interface for interacting with all those chips through memory-mapped registers. Each of these devices has an external PCIe controller between the device and the CPU. In the same order as above, you have AHCI, the NVMe controller, DRAM (not PCI, and handled inside the CPU), xHCI (almost everywhere) and Intel HDA (as an example). Network cards are PCIe devices and there isn't really a controller outside the card; graphics cards are also self-standing PCIe devices.
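To make that concrete, here is a minimal sketch (in the style of a Linux kernel module) of how a driver might map such a controller's register BAR so the CPU can reach it through memory-mapped I/O. The vendor/device IDs and the status-register offset below are placeholders invented for illustration, not a real device.

    /* Sketch: mapping a PCI device's register BAR so the CPU can talk
     * to the controller through memory-mapped I/O. IDs and the 0x00
     * "status" offset are hypothetical. */
    #include <linux/module.h>
    #include <linux/pci.h>

    #define DEMO_VENDOR_ID 0x1234   /* hypothetical */
    #define DEMO_DEVICE_ID 0x5678   /* hypothetical */

    static void __iomem *regs;      /* CPU-visible view of the device registers */

    static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        int ret = pci_enable_device(pdev);
        if (ret)
            return ret;

        /* BAR 0 holds the controller's register block on many devices. */
        regs = pci_iomap(pdev, 0, 0);
        if (!regs)
            return -ENOMEM;

        /* Read a (hypothetical) status register at offset 0x00. */
        pr_info("demo: status = 0x%08x\n", ioread32(regs + 0x00));
        return 0;
    }

    static void demo_remove(struct pci_dev *pdev)
    {
        pci_iounmap(pdev, regs);
        pci_disable_device(pdev);
    }

    static const struct pci_device_id demo_ids[] = {
        { PCI_DEVICE(DEMO_VENDOR_ID, DEMO_DEVICE_ID) },
        { 0, }
    };
    MODULE_DEVICE_TABLE(pci, demo_ids);

    static struct pci_driver demo_driver = {
        .name     = "demo-pci",
        .id_table = demo_ids,
        .probe    = demo_probe,
        .remove   = demo_remove,
    };
    module_pci_driver(demo_driver);
    MODULE_LICENSE("GPL");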
So the OS detects the registers of those devices, which are mapped into the address space. When the OS writes to those locations, it writes the registers of the devices. PCIe devices can read/write DRAM directly, but this is managed by the CPU in its general implementation of the PCIe standard, most likely by doing some bus arbitration. The CPU really doesn't care which device it is writing to. It knows there is a PCI register there, the OS instructs it to write something there, so it does. It just happens that this device is an implementation of a standard, and the OS developers have read that standard, so they write the proper values into those registers and the proper data structures into DRAM to make sure the device knows what to do.
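As a hedged illustration of the "proper data structures in DRAM" part: a driver could allocate a buffer the device can reach over PCIe and fill in a descriptor there, then point a device register at its bus address. The descriptor layout below is invented for the sketch; real controllers define their own.

    /* Sketch: placing a (made-up) descriptor in DRAM where a
     * bus-mastering PCIe device can fetch it on its own. */
    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    struct demo_descriptor {               /* hypothetical layout */
        __le64 data_bus_addr;              /* where the payload lives in DRAM */
        __le32 length;
        __le32 flags;
    };

    static int demo_prepare(struct pci_dev *pdev, dma_addr_t *desc_bus)
    {
        struct demo_descriptor *desc;

        /* Memory visible to both the CPU (desc) and the device (*desc_bus). */
        desc = dma_alloc_coherent(&pdev->dev, sizeof(*desc), desc_bus, GFP_KERNEL);
        if (!desc)
            return -ENOMEM;

        desc->data_bus_addr = cpu_to_le64(0);   /* filled in once a data buffer exists */
        desc->length = cpu_to_le32(0);
        desc->flags = cpu_to_le32(0);

        /* The driver would now write *desc_bus into a device register so
         * the controller can read this descriptor by itself. */
        return 0;
    }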
Drivers implement the standard for the software interface of those controllers. The drivers are the ones instructing the CPU which values to write, and they build the proper data structures in DRAM for giving commands to the controllers. A user thread simply places the syscall number in a conventional register chosen by the OS developers and executes an instruction that jumps into the kernel at a specific address, which the kernel chooses by writing a register at boot. Once there, the kernel looks at the register holding the number and determines which driver to call based on the operation.
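From user space, that dispatch step can be made visible with the raw syscall() wrapper. The sketch below is plain Linux user-space C; write is chosen as an arbitrary example of a syscall number.

    /* Sketch: "put the syscall number in a register and trap into the
     * kernel", spelled out. syscall() loads SYS_write into the
     * architecture's syscall register and executes the trap instruction;
     * the kernel then routes the request to the VFS/driver behind fd 1. */
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello from a raw syscall\n";

        /* Equivalent to write(1, msg, sizeof msg - 1). */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }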
On Linux and elsewhere, this is done with files. You call syscalls on files, and the OS has a driver attached to the file; these are called virtual files. A lot of transfer mechanisms resemble the reading/writing-files pattern, so Linux uses that to build a general driver model where the kernel doesn't even need to understand the driver. The driver effectively says: create a file for me over there that isn't really on the hard disk, and if someone opens it and calls an operation on it, call this function here in my driver. From there, the driver can do whatever it wants because it runs in kernel mode. It just creates the proper data structures in DRAM and writes the registers of the device it drives to make it do something.
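A minimal sketch of that "create me a file and call my function" pattern, written as a Linux misc character device; the device name (/dev/demo) and its canned reply are purely illustrative.

    /* Sketch: a driver that asks the kernel for a virtual file and gets
     * its read callback invoked whenever userspace read()s that file. */
    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/uaccess.h>

    static ssize_t demo_read(struct file *f, char __user *buf,
                             size_t len, loff_t *off)
    {
        static const char reply[] = "data produced by the driver\n";

        if (*off >= sizeof(reply) - 1)
            return 0;                                 /* EOF */
        len = min_t(size_t, len, sizeof(reply) - 1 - *off);
        if (copy_to_user(buf, reply + *off, len))
            return -EFAULT;
        *off += len;
        return len;
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,          /* called when userspace read()s the file */
    };

    static struct miscdevice demo_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "demo",             /* appears as /dev/demo */
        .fops  = &demo_fops,
    };

    module_misc_device(demo_dev);
    MODULE_LICENSE("GPL");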

What is the relationship between SPI and DMA in linux?

In my opinion, SPI and DMA are both controllers.
SPI is a communication tool and DMA can transfer data without the CPU.
The system APIs such as spi_sync() or spi_async() are controlled by the CPU.
So what is the meaning of SPI with DMA? Does it mean DMA can control the SPI API without the CPU? Or does the SPI control use the CPU while the data is transferred directly by DMA?
SPI is not a tool, it is a communication protocol. Typical microcontrollers have that protocol implemented in hardware, which can be accessed by reading/writing dedicated registers in the address space of the given controller.
DMA on microcontrollers is typically designed to move the contents of registers to memory and vice versa. DMA can sometimes be configured to perform a specific number of reads/writes, or to increment or decrement the source and target memory addresses, and so on.
If you have a microcontroller that has SPI with DMA support, it typically means that you can have some data in memory that will be transferred to the SPI unit, sending multiple data bytes without intervention by the CPU core itself, or that an amount of data bytes can be read from SPI into memory automatically without tying up the CPU core.
How such DMA SPI transfers are configured is described in the data sheets of the controllers. There is a very wide range of types, so no specific information can be given here without knowing the microcontroller type.
The Linux APIs for dealing with SPI abstract away the access to DMA and SPI by using the microcontroller-specific implementations in the drivers.
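For the API side of the question, here is a hedged sketch of how a Linux SPI client driver typically hands a buffer to the SPI core with spi_sync(); whether the controller driver underneath moves the bytes by DMA or by the CPU is hidden behind this call. The payload bytes are arbitrary example data.

    /* Sketch: a synchronous SPI transfer via the Linux SPI core. */
    #include <linux/spi/spi.h>
    #include <linux/slab.h>

    static int demo_spi_send(struct spi_device *spi)
    {
        struct spi_transfer t = { 0 };
        struct spi_message m;
        u8 *tx;
        int ret;

        /* Buffers handed to the SPI core should be DMA-safe, so allocate
         * with kmalloc rather than using the stack. */
        tx = kmalloc(4, GFP_KERNEL);
        if (!tx)
            return -ENOMEM;
        tx[0] = 0xde; tx[1] = 0xad; tx[2] = 0xbe; tx[3] = 0xef;

        t.tx_buf = tx;
        t.len = 4;

        spi_message_init(&m);
        spi_message_add_tail(&t, &m);
        ret = spi_sync(spi, &m);     /* blocks until the transfer completes */

        kfree(tx);
        return ret;
    }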
It is quite unclear whether you want to use the API to access your SPI device or you want to implement a device driver to make the Linux API work on your specific controller.
It is not possible to give you a general introduction to writing a kernel driver here, or to clarify register by register from your data sheets. If you need further information, you have to make your question much more specific!

Non-Hardware Device Drivers

In the Windows Internals, 7th Edition book, the following text appears under Windows kernel architecture:
Device drivers - This includes both hardware device drivers, which translate user I/O function
calls into specific hardware device I/O requests, and non-hardware device drivers, such as file
system and network drivers.
Can anyone please elaborate on hardware device drivers and non-hardware device drivers?
Assume you have multiple layers - e.g. when a process makes a file I/O request it goes to a virtual file system layer, which may send a request to a file system layer, which may send requests to a software RAID layer, which may send requests to a USB mass storage device driver, which may send a request to a USB controller driver.
You can split these layers into 2 main categories:
a) "device drivers", where there's an actual device. For these, the relationships between device drivers tends to mirror the hierarchical relationships between hardware devices (e.g. "PCI bus with controllers plugged in, with various devices plugged into those controllers, with various peripherals plugged into those devices" may become a tree of "parent device driver communicating with none or more child device drivers that are...").
b) "things that do not drive a device, and therefore are not technically device drivers". For the file IO example above, this is the VFS, file systems and software RAID layer. For networking it'll be code to handle a TCP/IP stack (and figure out routing, etc - which network card should send a packet based on the destination IP address). For user input (keyboard, etc) it could be things like Input Method Editors. For sound it can be code to determine how loud the sound should be on which speakers (on which sound card/s) based on a 2D position.
For most operating systems, device drivers need to be treated as "special" because they need to use interfaces (and possibly direct hardware access) that normal software/processes can't use. For example, in monolithic kernels they might be treated as kernel extensions and (dynamically) linked directly into the kernel.
However, "things that do not drive a device, and therefore are not technically device drivers" end up needing similar special support (e.g. the ability to use the same or similar interfaces that normal software/processes can't use but device drivers can, the ability to be linked into a monolithic kernel, etc.). For an OS designer, the difference between device drivers and "things that aren't technically device drivers but need the same access" is relatively insignificant (compared to normal software/processes, which don't have/need special access); so it's tempting to use the same word to describe both - e.g. call them all "kernel modules" (regardless of whether they're device drivers or not), or call them all "device drivers" (regardless of whether they're technically device drivers or not).
Note that there are a few things that confuse this even more:
a) There's actually a third category - "virtual devices". In some cases software is trying to emulate a real device (e.g. RAM disks that use software/RAM to emulate a hard drive; printers that use a PDF file format converter to "print" to a file, etc). For these cases, emulation/virtualization necessitates implementation as a device driver (but there's technically no device being driven).
b) To make terminology seem more consistent; some operating systems are biased towards defining interfaces as "virtual devices". If you try hard enough you can pretend anything is some kind of abstract virtual device ("It's not a compression/decompression library, it's a virtual compression/decompression device", "It's not a database management engine, it's a virtual relational data storage device", ...).
c) Some operating systems also try to pretend that everything is a file (e.g. Unix - https://en.wikipedia.org/wiki/Everything_is_a_file ). In this case you might have a directory of "device drivers pretending to be files" (e.g. /dev) and end up with "things that are not device drivers that are pretending to be device drivers that are pretending to be files" slapped into the same directory.
Your question is unclear. If you are asking for an example of a non-hardware device driver, one example would be the random number generator device. On Linux, for example, the "/dev/random" device provides a software implementation of a random number generator, so systems without the necessary hardware can still have this functionality.
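For illustration, reading that software-backed device from user space looks exactly like reading any other device file; the short sketch below just pulls a few bytes from /dev/random and prints them in hex.

    /* Sketch: consuming a non-hardware device (the kernel's RNG) through
     * the ordinary file interface. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char buf[8];
        int fd = open("/dev/random", O_RDONLY);

        if (fd < 0 || read(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            perror("reading /dev/random");
            return 1;
        }
        close(fd);

        for (size_t i = 0; i < sizeof buf; i++)
            printf("%02x", buf[i]);
        putchar('\n');
        return 0;
    }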

How to make PCI device initiate a DMA operation?

I need to find a way to trigger DMA operations easily on command to facilitate hardware debugging. Is it possible to initiate a DMA read on an existing PCI device (e.g. a sound card or network card) in my Linux system by writing directly to its registers? Or do I have to write a custom driver and load it with insmod?
There is no standard way to start a DMA operation. Generally, you need to prepare a DMA buffer on the host and set up the DMA registers on your device: load the DMA address(es), size(s), etc.
However, there is no single standard for DMA registers on PCI devices.
You need to find the specification document for your PCI device. In that spec, look for the DMA chapter (DMA is also called PCI "master access", as opposed to "target access").
You will find there:
- Whether scatter-gather or only contiguous DMA is supported.
- How to set up the DMA registers; one of them is usually called the DMA CSR, the "DMA command/status register".
- Whether the device supports a more complicated DMA layout (one or many ring buffers, chains of DMA descriptors, etc.).
The good thing is that many PCI devices support a zero-length DMA, which does not perform any memory access but just triggers a "DMA complete" interrupt. That can be a very convenient place to start.
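Putting the above together, here is a hedged sketch of what such a driver-side sequence might look like. The register offsets, the GO bit and the buffer size are all hypothetical; the real layout has to come from the device's data sheet.

    /* Sketch: allocate a host buffer, tell the device where it is, and
     * set a "start" bit in a (made-up) DMA command/status register. */
    #include <linux/pci.h>
    #include <linux/dma-mapping.h>
    #include <linux/io.h>

    #define DMA_ADDR_LO 0x00   /* hypothetical register offsets */
    #define DMA_ADDR_HI 0x04
    #define DMA_LEN     0x08
    #define DMA_CSR     0x0c
    #define DMA_CSR_GO  BIT(0)

    static int demo_start_dma(struct pci_dev *pdev, void __iomem *regs)
    {
        dma_addr_t bus_addr;
        void *buf;

        pci_set_master(pdev);    /* allow the device to initiate transfers */

        /* Buffer the device will write into; freed later with dma_free_coherent(). */
        buf = dma_alloc_coherent(&pdev->dev, 4096, &bus_addr, GFP_KERNEL);
        if (!buf)
            return -ENOMEM;

        iowrite32(lower_32_bits(bus_addr), regs + DMA_ADDR_LO);
        iowrite32(upper_32_bits(bus_addr), regs + DMA_ADDR_HI);
        iowrite32(4096, regs + DMA_LEN);  /* a length of 0 would give the
                                             "DMA complete interrupt only"
                                             behaviour mentioned above */
        iowrite32(DMA_CSR_GO, regs + DMA_CSR);
        return 0;
    }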

Best way to transfer video data to a device over PCI in linux

I need to transfer video data to and from an FPGA device over PCI in a Linux environment. I'm using a third-party PCI master core on the FPGA. So far, I've implemented a simple DMA controller on the FPGA to transfer data from the FPGA to the CPU, using consecutive PCI write bursts.
Next, I need to transfer video data from the CPU to the FPGA. What is the best way to go about this?
Should I implement a module on the FPGA that performs a whole bunch of burst reads over PCI? Or is there a way to get the CPU to efficiently write data into the FPGA's memory using PCI write bursts?
My bandwidth requirements are around 30 MB/s in both directions.
Thanks.
You could do posted writes from the CPU, like video card drivers do, but you'll need some driver magic such as setting up MTRRs (which means you might have some architectural dependency). If you want to be safe, DMA reads mastered by the FPGA are the better way to go. 30 MB/s isn't much.
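A sketch of that posted-write idea as it might look inside a Linux kernel driver. The BAR number and the per-frame map/unmap are assumptions made for brevity; ioremap_wc() gives a write-combining mapping, which achieves roughly what manual MTRR setup does.

    /* Sketch: map the FPGA's memory BAR as write-combining and stream a
     * frame into it with CPU stores (posted writes, merged into bursts). */
    #include <linux/pci.h>
    #include <linux/io.h>

    static int demo_push_frame(struct pci_dev *pdev, const void *frame, size_t len)
    {
        void __iomem *win;

        /* BAR 2 assumed to expose the FPGA's frame buffer. */
        win = ioremap_wc(pci_resource_start(pdev, 2), pci_resource_len(pdev, 2));
        if (!win)
            return -ENOMEM;

        memcpy_toio(win, frame, len);   /* CPU streams the frame into FPGA memory */
        wmb();                          /* ensure the posted writes are ordered */

        iounmap(win);
        return 0;
    }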
It sounds to me like the FPGA should master both reads and writes. Otherwise you would hog the host CPU. That's a classic task for a DMA engine (and you cannot guarantee that one exists on every host).
