Ensure all data is flushed from user space to eMMC memory - linux

I am working on an embedded Linux system where software must be replaced in a power-fail-safe manner: once we signal that the procedure is complete, a power failure should not negatively affect the system.
The documentation for the sync syscall mentions that it flushes only the kernel buffers; internal device buffers (such as the eMMC's own cache) may still not be fully flushed.
I was looking through the /dev/mmc directories and found a file called "removable", which should give behavior similar to actually removable devices (where power is disconnected abruptly on removal).
Do the Linux eMMC drivers have something more dedicated to fully flushing the eMMC card / preparing it for power loss?
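For context, the best candidate I have found so far is a sketch along these lines (assuming that fsync() on the whole-device node reaches blkdev_issue_flush() in the kernel, which the MMC block layer should turn into a cache-flush command for eMMC parts with an internal cache):

    #include <fcntl.h>
    #include <unistd.h>

    /* Sketch: flush everything for an eMMC device node, e.g. "/dev/mmcblk0".
     * sync() pushes dirty kernel buffers out to the device; fsync() on the
     * block device node should additionally ask the device itself to commit
     * its internal cache to the non-volatile array. */
    int flush_emmc(const char *devnode)
    {
        int fd, ret;

        sync();                       /* kernel buffers -> device */

        fd = open(devnode, O_RDONLY);
        if (fd < 0)
            return -1;
        ret = fsync(fd);              /* device cache -> flash array */
        close(fd);
        return ret;
    }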

Related

Why does a filesystem have its own block size, instead of using the hard disk's block size?

In short, how does a file system communicate with a block device?
I don't know much about block sizes. I think the filesystem block size for ext4 (Linux) is 4 KB, which is logical considering the modern processor's page size (4 KB). I also gather that modern SSDs erase internally in blocks of roughly 256 KB to 4 MB depending on the disk, even though the minimal addressable unit they expose is a 512-byte or 4 KB logical sector. This is probably due to several factors (memory throughput, cost vs. performance, etc.).
In short, how does a file system communicate with a block device?
The filesystem doesn't communicate with block devices; the OS does. On x86, the OS programs the registers of the PCI SATA host controller, whose DMA interface is defined by the AHCI specification (https://www.intel.com/content/www/us/en/io/serial-ata/serial-ata-ahci-spec-rev1-3-1.html). The OS triggers DMA read/write cycles to/from RAM in multiples of the device's logical sector size (the 256 KB-4 MB figures are the SSD's internal erase blocks, not the host transfer unit). It holds filesystem structures that it loads at boot, so it already knows where the different files are (it has a cache). It will load the part of the filesystem it needs, read/write the file, then write the modifications back to disk in large batches.
Also, the AHCI controller raises MSI interrupts on command completion. Basically, the OS triggers DMA read/write cycles on behalf of user-mode processes; it puts those processes on a wait-for-I/O queue and, on interrupt, puts them back on the run queue.
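To see the two "block sizes" side by side, here is a small sketch (the mount point and device path are examples) that asks the filesystem for its block size via statvfs() and the underlying device for its logical and physical sector sizes via the BLKSSZGET/BLKPBSZGET ioctls:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/statvfs.h>
    #include <linux/fs.h>              /* BLKSSZGET, BLKPBSZGET */

    int main(void)
    {
        struct statvfs vfs;
        int logical = 0;
        unsigned int physical = 0;
        int fd = open("/dev/sda", O_RDONLY);   /* example device */

        if (statvfs("/", &vfs) == 0)           /* example mount point */
            printf("filesystem block size: %lu\n", vfs.f_bsize);

        if (fd >= 0) {
            ioctl(fd, BLKSSZGET, &logical);    /* what the host addresses */
            ioctl(fd, BLKPBSZGET, &physical);  /* what the medium uses */
            printf("device logical/physical sector: %d/%u\n",
                   logical, physical);
            close(fd);
        }
        return 0;
    }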

How to get the usage records of USB devices on Linux

I want to write a Linux inspection tool to check the USB device usage records on a given machine. Parsing dmesg can recover the USB usage records from system startup to the present, provided dmesg -c has not been used to clear the log. So the point of the question is whether there is a place on a Linux system that records all USB usage in the system, just as Windows writes this to the registry.
Linux doesn't natively provide this functionality. It isn't seen as an intrinsically important feature to have, and as mentioned, it can be done easily with a udev rule for those who want it. It's generally assumed that anyone with physical access to the machine can read any unencrypted data on it and execute arbitrary code on it if it's running, so logging USB devices isn't really an effective security measure.
If you want to see the recent history, you can check the kernel log (often /var/log/kern.log) to read the recent and older entries that the kernel has output when a USB device has been inserted. Do note that these are rotated periodically, so they won't provide the entire history of the system.
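As a sketch of the udev approach mentioned above (the rule file path and log tag are just examples), a single rule is enough to append a syslog line on every USB device insertion:

    # /etc/udev/rules.d/90-usb-log.rules  (example path)
    ACTION=="add", SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", \
        RUN+="/usr/bin/logger -t usb-audit add $env{ID_VENDOR_ID}:$env{ID_MODEL_ID} serial=$env{ID_SERIAL_SHORT}"

After udevadm control --reload, every insertion lands in syslog, which you can then rotate and retain on whatever schedule you need.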

What is the difference between DMA-Engine and DMA-Controller?

As the title says, what is the difference between a DMA engine and a DMA controller (with a focus on Linux)?
When does the Linux DMA engine come into play? Is it a special device, or is it part of every peripheral device that supports DMA?
While browsing the Linux source, I found the driver ste_dma40.c. How does a driver use this engine?
DMA - Direct Memory Access: your driver reading or writing from/to your HW's memory without the CPU being involved (freeing it to do other work).
DMA Controller - reading and writing can't be done by magic; if the CPU doesn't do it, we need other HW to do it. Many years ago (in the ISA/EISA era) it was common to use a shared controller on the motherboard for this. In recent years, each HW block has its own DMA mechanism.
But in all cases this specific HW gets the source address and the destination address and moves the data, usually triggering an interrupt when done.
DMA Engine - Here I am not sure what you mean; I believe you are probably referring to the SW side that handles the DMA.
DMA is a little more complicated than usual I/O, since all SRC and DST memory has to be physically present at all times during the DMA operation. If the DST address is swapped out to disk, the HW will write to a bad address and the system will crash.
This and other aspects of DMA are handled by the driver, in code sections you probably refer to as the "DMA engine".
*Another interpretation of 'DMA engine' may be the part of the firmware (or HW) that drives the DMA controller on the device side.
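In the Linux-specific sense of the term, ste_dma40.c is one provider behind the kernel's dmaengine framework, and a peripheral driver consumes a channel through the generic client API. A rough sketch of that flow (error handling trimmed; chan is assumed to have been obtained at probe time, e.g. via dma_request_chan()):

    #include <linux/dmaengine.h>
    #include <linux/dma-mapping.h>

    /* Sketch of a dmaengine client: push len bytes at buf to a device. */
    static int sketch_dma_to_device(struct device *dev, struct dma_chan *chan,
                                    void *buf, size_t len)
    {
        struct dma_async_tx_descriptor *desc;
        dma_addr_t dma_addr;

        /* Map (pin) the buffer so it stays physically resident for the
         * whole transfer - the "must be present" requirement above. */
        dma_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, dma_addr))
            return -ENOMEM;

        desc = dmaengine_prep_slave_single(chan, dma_addr, len,
                                           DMA_MEM_TO_DEV,
                                           DMA_PREP_INTERRUPT);
        if (!desc) {
            dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);
            return -EIO;
        }

        dmaengine_submit(desc);          /* queue the descriptor...      */
        dma_async_issue_pending(chan);   /* ...and kick off the hardware */

        /* Completion is normally signalled through desc->callback, where
         * the buffer must be unmapped once the transfer is done. */
        return 0;
    }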
According to this document, http://www.asprom.com/application/intel_3.pdf:
The 82C37 DMA controllers should not be confused with the DMA engines found in some earlier MCH (Memory Controller Hub) components. These DMA controllers are tied to the ISA/LPC bus and used mostly for transfers to/from slow devices such as floppy disk controllers.
So it seems it is a device found on older platforms that used MCH components.

What kind of API does a SATA hard drive expose?

I understand that the Linux kernel uses a driver to communicate with the hard disk device, and that there is firmware code on the device to service the driver's requests. My questions are:
what kind of functionality (i.e. API) does the firmware expose? For example, does it only expose an address space that the kernel manages, or is there some code in the Linux kernel that deals with some of the physics related to the hard drive (i.e. data layout on track/sector/platter, etc.)?
Does the kernel schedule the disk's head movement, or is it the firmware?
Is there a standard spec for the apis exposed by hard disk devices?
I understand that the Linux kernel uses a driver to communicate with the hard disk device
That's true for all peripherals.
there is firmware code on the device to service the driver's requests
Modern HDDs (since the advent of IDE) have an integrated disk controller.
"Firmware" by itself isn't going to do anything, and is an ambiguous description. I.E. what is executing this "firmware"?
what kind of functionality (i.e. API) does the firmware expose? For example, does it only expose an address space that the kernel manages, or is there some code in the Linux kernel that deals with some of the physics related to the hard drive (i.e. data layout on track/sector/platter, etc.)?
SATA drives use the ATA command set (ATA/ATAPI); the Packet Interface part, ATAPI, is mostly used by devices such as optical drives.
The old SMD and ST506 drive interfaces used cylinder, head, and sector (aka CHS) addressing. Disk controllers for such drives typically kept a similar interface on the host side, so the operating system was obligated to be aware of the drive (physical) geometry. OSes would try to optimize performance by aligning partitions to cylinders, and minimize seek/access time by ordering requests by cylinder address.
Although the disk controller typically required CHS addressing, the higher layers of an OS would use a sequential logical sector address. Conversion from a logical sector address to a cylinder, head, and sector address is straightforward as long as the drive geometry is known.
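That conversion is just the classic formula (CHS sector numbers start at 1), for example:

    /* CHS -> LBA for a drive with known geometry:
     *   lba = (c * heads_per_cyl + h) * sectors_per_track + (s - 1) */
    unsigned long chs_to_lba(unsigned int c, unsigned int h, unsigned int s,
                             unsigned int heads_per_cyl,
                             unsigned int sectors_per_track)
    {
        return ((unsigned long)c * heads_per_cyl + h) * sectors_per_track
               + (s - 1);
    }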
The SCSI and IDE (ATA) interfaces for the host side of the disk controller offered logical block addressing (block = sector) rather than CHS addressing. The OS no longer had to be aware of the physical geometry of the drive, and the disk controller was able to use the abstraction of logical addressing to implement a more consistent areal density per sector using zone-bit recording.
So the OS should only issue a read or write block operation with a logical block address, and not be too concerned with the drive's geometry.
For example, low-level format is no longer possible through the ATA interface, and the drive's geometry is variable (and unknown to the host) due to zone-bit recording. Bad sector management is typically under sole control of the integrated controller.
However you can probably still find some remnants of CHS optimization in various OSes (e.g. drive partitions aligned to a "cylinder").
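Those remnants aside, from the host's point of view the everyday "API" reduces to plain reads and writes at byte offsets that the kernel converts into LBA commands; a user-space sketch (the device path and 512-byte sector size are examples):

    #include <fcntl.h>
    #include <unistd.h>

    /* Sketch: read one 512-byte logical block by LBA from a raw disk.
     * The kernel's block layer turns the byte offset into an LBA read. */
    ssize_t read_lba(const char *devnode, unsigned long lba, void *buf)
    {
        int fd = open(devnode, O_RDONLY);   /* e.g. "/dev/sda" */
        ssize_t n;

        if (fd < 0)
            return -1;
        n = pread(fd, buf, 512, (off_t)lba * 512);
        close(fd);
        return n;
    }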
Does the kernel schedule the disk's head movement, or is it the firmware?
It's possible with a seek operation, but more likely the OS uses R/W operations with auto-seek or LBA R/W operations.
However with LBA and modern HDDs that have sizeable cache and zone-bit recording, such seek operations are not needed and can be counterproductive.
Ultimately the disk controller performs the actual seek.
Is there a standard spec for the apis exposed by hard disk devices?
ATA/ATAPI is a published specification (although it seems to have been in a "working draft" state for 20 years).
See http://www.t13.org/Documents/UploadedDocuments/docs2013/d2161r5-ATAATAPI_Command_Set_-_3.pdf
ABSTRACT
This standard specifies the AT Attachment command set used to communicate between host systems and storage devices. This provides a common command set for systems manufacturers, system integrators, software suppliers, and suppliers of storage devices. The AT Attachment command set includes the PACKET feature set implemented by devices commonly known as ATAPI devices. This standard maintains a high degree of compatibility with the ATA/ATAPI Command Set - 2 (ACS-2).
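To poke at that command set from user space, one way is the kernel's legacy ATA pass-through ioctl; a sketch issuing IDENTIFY DEVICE (opcode 0xEC, the same pattern hdparm uses; typically needs root):

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/hdreg.h>   /* HDIO_DRIVE_CMD */

    int main(void)
    {
        /* 4 command bytes followed by one 512-byte sector of reply data */
        unsigned char args[4 + 512];
        char model[41];
        int fd = open("/dev/sda", O_RDONLY);   /* example device */
        int i;

        if (fd < 0)
            return 1;

        memset(args, 0, sizeof(args));
        args[0] = 0xEC;   /* ATA IDENTIFY DEVICE */
        args[3] = 1;      /* expect one sector back */

        if (ioctl(fd, HDIO_DRIVE_CMD, args) == 0) {
            /* Words 27-46 of the reply hold the model string; ATA strings
             * are byte-swapped within each 16-bit word. */
            for (i = 0; i < 40; i += 2) {
                model[i]     = args[4 + 27 * 2 + i + 1];
                model[i + 1] = args[4 + 27 * 2 + i];
            }
            model[40] = '\0';
            printf("model: %s\n", model);
        }
        close(fd);
        return 0;
    }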

kernel driver or user space driver?

I would like to ask your advice on the following: I need to write drivers for an OMAP3, for accessing an external DSP through an FPGA (over the GPMC interface). The driver is required to load a file to the DSP, and to read/write buffers from the DSP. There is already an FPGA driver in the kernel. The kernel is 2.6.32. So I thought of the following options:
writing the DSP driver in the kernel, using the existing FPGA driver.
writing a user-space driver which interfaces with the FPGA kernel driver.
writing a user-space driver using UIO, which will not use the kernel FPGA driver but will do the FPGA access itself, as part of a single, complete user-space DSP driver.
What do you think is the preferred option?
What are the advantages of a kernel driver over a user-space driver, and vice versa?
Thanks, Ran
* User-space driver:
Easier to debug.
Loads of libraries to support you.
Allows you to hide the details of your IP if you want to (people will really hate you if you do!).
A crash won't affect the whole system.
Higher latency in handling interrupts since the kernel will have to relay the interrupt to the user space somehow.
You can't control access to your device from user space.
* Kernel-space driver:
Harder to debug.
Only Linux kernel frameworks are supported.
You can always provide a binary blob to hide the details of your IP, but this is annoying since it has to be generated against a specific kernel.
A crash will bring down the whole system.
Less latency in handling interrupts.
You can control access to your device from kernel space because it's a global context that all processes see.
As a kernel engineer I'm more comfortable/happy hacking code in a kernel context; that's probably why I would write the whole driver in the kernel.
However, I would say that the best thing to do is to divide the functionality of your driver into units and only put the unit in the kernel when there's a reason to do so.
For example:
If your device has a shared resource ( like an MMU, hardware FIFO ) and you want multiple processes to be able to use it safely, then probably you need some buffer manager to be in the kernel and all the processes would be communicating with it through ioctl.
If your driver needs to respond to an interrupt as fast as possible ( very low-latency ), then you will need to put the part of the code that handles interrupts in the kernel interrupt handler rather than putting it in user space and notifying user space when an interrupt occurs.
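To illustrate option 3 from the question: the user-space half of a UIO driver is quite small. A sketch of its core loop (the device node, map size, and register offset are examples; the real values come from /sys/class/uio/uio0/):

    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        uint32_t irq_count;
        volatile uint32_t *regs;
        int fd = open("/dev/uio0", O_RDWR);    /* example UIO node */

        if (fd < 0)
            return 1;

        /* Map the device's register window; the real size is published in
         * /sys/class/uio/uio0/maps/map0/size. */
        regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED)
            return 1;

        for (;;) {
            /* A blocking read returns the interrupt count once the small
             * in-kernel UIO stub has serviced an interrupt. */
            if (read(fd, &irq_count, sizeof(irq_count)) != sizeof(irq_count))
                break;

            /* Handle the event in user space, e.g. drain a DSP buffer
             * through the mapped registers (offset 0 is an example). */
            (void)regs[0];
        }
        return 0;
    }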
