difference between device file and device driver - linux

I am currently reading the Linux Module Programming Guide and I have stumbled onto two terms that have confused me a bit - device files and device drivers. Upon googling these terms I have come across the following:
A device driver is a piece of software that operates or controls a particular type of device.
A device file is an interface for a device driver that appears in a file system as if it were an ordinary file. In Unix-like operating systems, these are usually found under the /dev directory and are also called device nodes.
What I would like to know is -
1) Are device files an interface between user space programs and the device driver?
2) Does the program access the driver in the kernel via the appropriate device special file?
e.g., when using, say, the spidev char device file, does that allow my userspace program to interact with spi.c, omap2_mcspi.c, etc. using simple read, write and ioctl calls?

One of the primary abstractions in Unix is the file (source):
Programs, services, texts, images, and so forth, are all files. Input and output devices, and generally all devices, are considered to be files, according to the system.
This lets users treat a variety of entities with a uniform set of operations, even though the implementation of those operations may be wildly different.
As you were getting at with your question, device files are the user-facing side of the abstraction. This is what the user sees: a file that they can write to, read from, open, close, etc. The device drivers are the implementation of those operations.
So the user makes a call to a file operation such as write, and the kernel then uses the device driver to carry out the operation.
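As a concrete illustration of that flow, here is a minimal userspace sketch (hypothetical: /dev/ttyUSB0 is just an example node; any device file is used the same way). Each call below is an ordinary file syscall that the kernel routes to the corresponding operation in the driver behind the node:

/* Userspace side of the abstraction: a device file is driven with ordinary
 * file syscalls. /dev/ttyUSB0 is only an example node. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    ssize_t n;

    int fd = open("/dev/ttyUSB0", O_RDWR);  /* kernel calls the driver's open */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    if (write(fd, "hello\n", 6) < 0)        /* routed to the driver's write */
        perror("write");

    n = read(fd, buf, sizeof buf);          /* routed to the driver's read */
    if (n > 0)
        printf("read %zd bytes\n", n);

    close(fd);                              /* routed to the driver's release */
    return 0;
}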

A device file like /dev/spidevX.Y is a software abstraction of an SPI device which exposes the Linux low-level SPI API to userspace through syscalls (known in the Linux driver world as "file operations"):
That is read(), write(), ioctl()...
spidev.c is a special kind of driver which is registered for generic SPI client (chip) devices, and its main goal is to export the kernel's low-level SPI API to userspace.
There is a whole Linux SPI layer in between, defined in spi.c.
The device driver representing the real hardware SPI controller is where callbacks (hooks) are implemented and registered with the kernel as part of the spi_master (spi_controller) structure.
Here is a callback initialization for SPI message transfer:
master->transfer_one_message = atmel_spi_transfer_one_message;
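To tie this back to the original question: a userspace program drives that whole stack (spidev.c, the SPI core in spi.c, and the controller driver such as omap2_mcspi.c) purely through file operations on the spidev node. A rough sketch, assuming a device at /dev/spidev0.0 and made-up mode, speed and command bytes (SPI_IOC_WR_MODE, SPI_IOC_WR_MAX_SPEED_HZ and SPI_IOC_MESSAGE are the spidev ioctls declared in linux/spi/spidev.h):

/* Userspace sketch of one full-duplex transfer through /dev/spidev0.0.
 * The device path, SPI mode, speed and command bytes are example values. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/spi/spidev.h>

int main(void)
{
    uint8_t tx[2] = { 0x9f, 0x00 };       /* made-up command bytes */
    uint8_t rx[2] = { 0 };
    uint8_t mode = SPI_MODE_0;
    uint32_t speed = 500000;
    struct spi_ioc_transfer xfer;

    int fd = open("/dev/spidev0.0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ioctl(fd, SPI_IOC_WR_MODE, &mode);           /* handled by spidev.c */
    ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

    memset(&xfer, 0, sizeof xfer);
    xfer.tx_buf = (unsigned long)tx;
    xfer.rx_buf = (unsigned long)rx;
    xfer.len = sizeof tx;

    /* spidev.c builds a spi_message, the SPI core (spi.c) queues it, and the
     * controller driver (e.g. omap2_mcspi.c) shifts the bits on the wire. */
    if (ioctl(fd, SPI_IOC_MESSAGE(1), &xfer) < 0)
        perror("SPI_IOC_MESSAGE");

    close(fd);
    return 0;
}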

Everything in Linux is a file.
A device driver is software used by the operating system to communicate with a device.
A device driver makes use of device files.

Related

Does a HDD/SSD use 1 or 2 device controllers to connect to the OS?

I am studying operating systems, and came across device controllers.
I gathered that a device controller is hardware whereas a device driver is software.
I also know that an HDD and an SSD both have a small PCB built into them, and I assume those PCBs are the device controllers.
Now what I want to know is if there is another device controller on the PC/motherboard side of the bus/cable connecting the HDD/SSD to the OS?
Is the configuration: OS >> Device Driver >> Bus >> Device Controller >> HDD/SSD
Or is it: OS >> Device Driver >> Device Controller >> Bus >> Device Controller >> HDD/SSD
Or is it some other configuration?
Sites I visited for answers:
Tutorialspoint
JavaPoint
Idc online
Quora
Most hard disks on desktop are SATA or NVMe. eMMC is popular for smartphones, but some might use something else. These are hardware interface standards that describe the way to interact electrically with those disks. The standard tells you what voltage, at what frequency, and for what amount of time you need to apply (a signal) to a certain pin (a bus line) to make the device behave or react in a certain way.
Most computers are split across a few external chips. On desktop, it is mostly SATA, NVMe, DRAM, USB, audio output, the network card and the graphics card. Even though there are only a few chips, the CPU would be very expensive if it had to support all those hardware interface standards on the same silicon die. Instead, the CPU implements PCI/PCI-e as a general interface to interact with all those chips using memory-mapped registers. Each of these devices has an external PCI-e controller between the device and the CPU. In the same order as above, you have AHCI, an NVMe controller, DRAM (not PCI, and handled in the CPU), xHCI (almost everywhere) and Intel HDA (as an example). Network cards are PCI-e devices and there isn't really a controller outside the card. Graphics cards are also self-standing PCI-e devices.
So the OS detects the registers of those devices that are mapped into the address space. When the OS writes at those locations, it is writing the registers of the devices. PCI-e devices can read/write DRAM directly, but this is managed by the CPU in its general implementation of the PCI-e standard, most likely by doing some bus arbitration. The CPU really doesn't care what device it is writing to. It knows that there is a PCI register there and the OS instructs it to write something there, so it does. It just happens that this device is an implementation of a standard, and that the OS developers read the standard, so they write the proper values into those registers and the proper data structures into DRAM to make sure the device knows what to do.
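As a hedged sketch of what "writing those registers" looks like from kernel code (the physical address and register offset below are invented; a real driver would get them from a PCI BAR or the device tree):

/* Toy kernel module showing a memory-mapped register write. The physical
 * address and register offset are invented; do not load this against real
 * hardware. */
#include <linux/module.h>
#include <linux/io.h>

#define HYPO_REG_BASE  0xfe000000UL     /* invented physical base address */
#define HYPO_REG_CTRL  0x04             /* invented control-register offset */

static int __init hypo_mmio_init(void)
{
        void __iomem *regs = ioremap(HYPO_REG_BASE, 0x1000);

        if (!regs)
                return -ENOMEM;

        /* To the CPU this is just a store to an address; the device on the
         * other side of the bus interprets the value as a command. */
        writel(0x1, regs + HYPO_REG_CTRL);

        iounmap(regs);
        return 0;
}

static void __exit hypo_mmio_exit(void)
{
}

module_init(hypo_mmio_init);
module_exit(hypo_mmio_exit);
MODULE_LICENSE("GPL");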
Drivers implement the software interface standard of those controllers. The drivers are the ones instructing the CPU on which values to write and writing the proper data structures in DRAM to give commands to the controllers. A user thread simply places the syscall number in a conventional register determined by the OS developers and executes an instruction to jump into the kernel at a specific address, which the kernel chooses by writing a register at boot. Once there, the kernel looks at the register for the number and determines which driver to call based on the operation.
On Linux and some other systems, this is done with files. You call syscalls on files and the OS has a driver attached to the file. These are called virtual files. A lot of transfer mechanisms are similar to the read/write-a-file pattern, so Linux uses that to build a general driver model where the kernel doesn't even need to understand the driver. The driver essentially says "create a file for me that isn't really on the hard disk, and if someone opens it and calls an operation on it, call this function in my driver". From there, the driver can do whatever it wants because it is in kernel mode. It just creates the proper data structures in DRAM and writes the registers of the device it drives to make it do something.
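A minimal sketch of that driver model, using a character (misc) device for simplicity and entirely invented names: the driver hands the kernel a file_operations table, the kernel exposes /dev/hypo, and a read() on that file lands in the driver's callback:

/* Minimal sketch of the "virtual file" driver pattern; all hypo_* names
 * are invented. */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>

static ssize_t hypo_read(struct file *file, char __user *buf,
                         size_t count, loff_t *ppos)
{
        static const char msg[] = "hello from the driver\n";

        /* Copy kernel data into the user buffer, honouring the file offset. */
        return simple_read_from_buffer(buf, count, ppos, msg, sizeof(msg) - 1);
}

static const struct file_operations hypo_fops = {
        .owner = THIS_MODULE,
        .read  = hypo_read,            /* runs when userspace read()s the file */
};

static struct miscdevice hypo_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "hypo",               /* appears as /dev/hypo; no disk behind it */
        .fops  = &hypo_fops,
};

static int __init hypo_init(void)
{
        return misc_register(&hypo_dev);
}

static void __exit hypo_exit(void)
{
        misc_deregister(&hypo_dev);
}

module_init(hypo_init);
module_exit(hypo_exit);
MODULE_LICENSE("GPL");

From userspace this node behaves like any other file: cat /dev/hypo ends up in hypo_read() inside the module.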

Non-Hardware Device Drivers

In the book Windows Internals, 7th Edition, the following text is mentioned under Windows Kernel Architecture:
Device drivers - This includes both hardware device drivers, which translate user I/O function calls into specific hardware device I/O requests, and non-hardware device drivers, such as file system and network drivers.
Can anyone please elaborate on hardware device drivers and non-hardware device drivers?
Assume you have multiple layers - e.g. when a process makes a file IO request it goes to a virtual file system layer, which may send a request to a file system layer, which may send request/s to a software RAID layer, which may send requests to a USB mass storage device driver, which may send a request to a USB controller driver.
You can split these layers into 2 main categories:
a) "device drivers", where there's an actual device. For these, the relationships between device drivers tends to mirror the hierarchical relationships between hardware devices (e.g. "PCI bus with controllers plugged in, with various devices plugged into those controllers, with various peripherals plugged into those devices" may become a tree of "parent device driver communicating with none or more child device drivers that are...").
b) "things that do not drive a device, and therefore are not technically device drivers". For the file IO example above, this is the VFS, file systems and software RAID layer. For networking it'll be code to handle a TCP/IP stack (and figure out routing, etc - which network card should send a packet based on the destination IP address). For user input (keyboard, etc) it could be things like Input Method Editors. For sound it can be code to determine how loud the sound should be on which speakers (on which sound card/s) based on a 2D position.
For most operating systems, device drivers need to be treated as "special" because they need to use interfaces (and possibly direct hardware access) that normal software/processes can't use. For example, in monolithic kernels they might be treated as a kernel extension and (dynamically) linked directly into the kernel.
However, "things that do not drive a device, and therefore are not technically device drivers" end up needing similar special support (e.g. the ability to use the same or similar interfaces that normal software/processes can't use but device drivers can, the ability to be linked into a monolithic kernel, etc.). For an OS designer, the difference between device drivers and "things that aren't technically device drivers but need the same access" is relatively insignificant (compared to normal software/processes, which don't have or need special access), so it's tempting to use the same word to describe both - e.g. call them all "kernel modules" (regardless of whether they're device drivers or not), or call them all "device drivers" (regardless of whether they're technically device drivers or not).
Note that there are a few things that confuse this even more:
a) There's actually a third category - "virtual devices". In some cases software is trying to emulate a real device (e.g. RAM disks that use software/RAM to emulate a hard drive; printers that use a PDF file format converter to "print" to a file, etc). For these cases, emulation/virtualization necessitates implementation as a device driver (but there's technically no device being driven).
b) To make terminology seem more consistent; some operating systems are biased towards defining interfaces as "virtual devices". If you try hard enough you can pretend anything is some kind of abstract virtual device ("It's not a compression/decompression library, it's a virtual compression/decompression device", "It's not a database management engine, it's a virtual relational data storage device", ...).
c) Some operating systems also try to pretend that everything is a file (e.g. Unix - https://en.wikipedia.org/wiki/Everything_is_a_file ). In this case you might have a directory of "device drivers pretending to be files" (e.g. /dev) and end up with "things that are not device drivers that are pretending to be device drivers that are pretending to be files" slapped into the same directory.
Your question is unclear. If you are asking for an example of a non-hardware device driver, one would be the random number generator device. On Linux, for example, the "/dev/random" device provides a software implementation of a random number generator, so systems without the necessary hardware can still have this functionality.

Own RS232 device as linux filesystem device

Is there a way to expose my own RS232 AVR device as a Linux filesystem device, e.g. /dev/avr_device? Must the program be written as a kernel-space module, or can it be done in user space? Is this possible to do with libfuse? Or should I maybe use FIFO pipes as the communication channel with the device?
To be able to mount a device on which you have installed a Linux filesystem, you need that device to be a block device, but a serial tty device is a char device, which is incompatible with that.
To solve that problem in the classical view of the system, you need to develop a block device driver that attaches to that char device (the serial port) and uses it to drive the block device emulation protocol. This means converting the block number and block data into packets to be sent over the serial line to a receiver at the other side, and implementing the block device details of being some kind of storage device. This can be done with some effort... the question is whether simulating any kind of storage over a slow serial line will be of any interest.
The advantage of this approach is that you only have to simulate a block device, and you will be able to create any local filesystem available for Linux.
At a higher level, you can implement a filesystem type, which is a higher-level abstraction (FUSE allows this), but this makes it a more difficult problem, as you have to implement every filesystem primitive (and believe me, there are far more primitives needed to emulate a filesystem than a block device), mapping every remote primitive onto a set of local primitives (this can be infeasible for a single programmer).
This second approach completely fixes the functionality of the filesystem, and restricts the set of operations you can do on files to the primitives you implement. It is far more difficult and normally lacks uniformity with the rest of the system, so I would not recommend this approach.
The second approach has only one advantage: as the filesystem uses high-level primitives, these can be encoded more compactly into network messages and transmitted more efficiently over the line, giving more speed on a slow connection. But that comes at the cost of having to implement all the filesystem functionality, and of losing uniformity in the use of this kind of filesystem (you have to implement user access, security, caching of requests, etc.).
With the first approach, you only have to implement 4 or 5 primitives, and you get all the functionality of any filesystem that can be installed on a block device.

How should I structure a linux driver that uses several chips in one device?

I have a hardware device that consists of 3 separate chips on an I2C bus. I would like to group them together and expose them to userland as one logical device. The user would see the logical device represented by a single directory somewhere in /sys as well as the nodes you'd expect from the I2C chips under /sys/class/i2c-adapter/i2c-?/*.
One of the chips is an MCP23017 which as far as I can tell already has a driver (drivers/gpio/gpio-mcp23s08.c) and I'd like to reuse it. Another of the chips is a PCA9685 and I would like to contribute a driver for this chip that uses the PWM system in include/linux/pwm.h. The third chip is an MCU running custom firmware.
How should I structure the set of drivers? One idea I had is to register a platform driver to present the logical device, and use I2C drivers from within that. Is this a good way to go? Are there better ways?
The logical device is a motor driver board and IR receiver. I have a simple diagram of its structure.
I'm looking to create two interfaces. The first is similar to /sys/class/gpio, where motors can be 'exported' and then accessed via reading and writing attributes. This would be useful for shell-script access and quick debugging of the mechanical parts of the system attached to the motors. The second is a character device node in /dev where data can be read or written in binary format, more useful for application control.
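Something like the following is what I have in mind for the first interface - just a rough sketch of the device-attribute pattern with invented names, not real MCP23017/PCA9685 code:

/* Sketch only: a platform driver exposing one read/write sysfs attribute.
 * All names are invented; a real driver would talk to the I2C chips from
 * the show/store callbacks instead of a plain integer. */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/device.h>
#include <linux/kernel.h>

static int hypo_speed;          /* stands in for the real motor state */

static ssize_t speed_show(struct device *dev, struct device_attribute *attr,
                          char *buf)
{
        return sprintf(buf, "%d\n", hypo_speed);
}

static ssize_t speed_store(struct device *dev, struct device_attribute *attr,
                           const char *buf, size_t count)
{
        int ret = kstrtoint(buf, 0, &hypo_speed);

        /* A real driver would forward the new value to the PCA9685 here. */
        return ret ? ret : count;
}

static DEVICE_ATTR_RW(speed);

static int hypo_probe(struct platform_device *pdev)
{
        /* Creates a 'speed' file under this device's sysfs directory. */
        return device_create_file(&pdev->dev, &dev_attr_speed);
}

static int hypo_remove(struct platform_device *pdev)
{
        device_remove_file(&pdev->dev, &dev_attr_speed);
        return 0;
}

static struct platform_driver hypo_driver = {
        .driver = { .name = "hypo-motor-board" },
        .probe  = hypo_probe,
        .remove = hypo_remove,
};
module_platform_driver(hypo_driver);
MODULE_LICENSE("GPL");

With something like this, echo 50 > /sys/devices/.../speed would end up in speed_store(), where the driver would then talk to the chips over I2C.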
This seems like an unusual design; are you sure you have access to the I2C bus of all the chips?
I would expect you to be able to talk only to the MCU, with the MCU managing the other devices.
Otherwise, why is the MCU there?
However, I cannot see your diagram; perhaps the link is wrong.

How do user-space programs interact with a USB 802.11 Wi-Fi driver on Linux?

Allow me to qualify this question by stating that I am new to driver development. I am trying to understand the driver source code for a USB Wi-Fi card that uses a RealTek 8187L chip. Based on a good answer to my previous question, I established that the relevant driver source code that I needed to inspect is in drivers/net/wireless/rtl818x/rtl8187/dev.c (inside the Linux kernel source).
Doing some reading, it seems as though a USB driver instantiates a usb_driver struct that it registers with the kernel, which describes (among other things) the devices the driver supports (.id_table), the function to execute when a supported device is connected (.probe) and, optionally, a set of file operations (.fops) for interaction with user space. The usb_driver struct associated with the 8187L driver does not include a .fops:
static struct usb_driver rtl8187_driver = {
        .name = KBUILD_MODNAME,
        .id_table = rtl8187_table,
        .probe = rtl8187_probe,
        .disconnect = __devexit_p(rtl8187_disconnect),
        .disable_hub_initiated_lpm = 1,
};

module_usb_driver(rtl8187_driver);
Hence, I am curious as to how user-space programs are interacting with this driver to send and receive data.
On an old Linux Journal post (2001), there is the following excerpt:
The fops and minor variables are optional. Most USB drivers hook into another kernel
subsystem, such as the SCSI, network or TTY subsystem. These types of drivers register
themselves with the other kernel subsystem, and any user-space interactions are provided
through that interface. But for drivers that do not have a matching kernel subsystem,
such as MP3 players or scanners, a method of interacting with user space is needed. The
USB subsystem provides a way to register a minor device number and a set of
file_operations [fops] function pointers that enable this user-space interaction.
So it sounds like the 8187L driver is probably one that "hooks into another kernel subsystem". So I suppose my question is, for such a driver, which doesn't supply a .fops function pointer, how is the interaction taking place with this other kernel subsystem? Ultimately I want to be able to find the point(s) in the driver code that programs are actually interacting with to send and receive data so I can continue to analyse how the code works.
The driver for an individual wireless chipset is at a very low level. There are many layers between it and userspace. The next layer above rtl8187 is mac80211. In drivers/net/wireless/rtl818x/rtl8187/dev.c observe the call to ieee80211_register_hw. That registration call provides the link between the mac80211 layer and the rtl8187 device.
Next look at the implementation of ieee80211_register_hw, found in net/mac80211/main.c. That calls ieee80211_if_add, found in net/mac80211/iface.c, which calls register_netdevice. And that puts the device in the kernel's main list of network interfaces, making it available for things like ifconfig and route.
Most userspace programs don't interact directly with a network interface, they just send packets to an IP address and the kernel uses the routing table to choose an outgoing interface.
The RTL8187 driver registers itself with the IEEE 802.11 wireless networking subsystem, by calling ieee80211_alloc_hw() and ieee80211_register_hw() in its probe function.
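In rough outline, that registration pattern looks like the sketch below - heavily simplified, with invented hypo_* names and IDs, and not the actual rtl8187 code (the real version lives in rtl8187_probe() in dev.c). The driver fills an ieee80211_ops table with its callbacks and hands it to mac80211, which is what eventually creates the wlanX network interface that userspace sees:

/* Heavily simplified sketch of the mac80211 registration pattern; this is
 * not the actual rtl8187 code, and every hypo_* name and ID is invented. */
#include <linux/module.h>
#include <linux/usb.h>
#include <net/mac80211.h>

struct hypo_priv {
        struct usb_device *udev;                /* per-device state */
};

static const struct ieee80211_ops hypo_ops = {
        /* .tx, .start, .stop, .config, ... go here: the callbacks that
         * mac80211 invokes when userspace brings the interface up or the
         * network stack hands the driver a frame to transmit. */
};

static int hypo_probe(struct usb_interface *intf,
                      const struct usb_device_id *id)
{
        struct ieee80211_hw *hw;
        int err;

        /* Allocate an ieee80211_hw with room for our private data and tell
         * mac80211 which callbacks this driver provides. */
        hw = ieee80211_alloc_hw(sizeof(struct hypo_priv), &hypo_ops);
        if (!hw)
                return -ENOMEM;

        SET_IEEE80211_DEV(hw, &intf->dev);      /* tie hw to the USB device */
        usb_set_intfdata(intf, hw);

        /* ... chip-specific setup (EEPROM, channels, MAC address) ... */

        /* This is what ultimately creates the wlanX network interface that
         * userspace sees (via register_netdevice inside mac80211). */
        err = ieee80211_register_hw(hw);
        if (err) {
                ieee80211_free_hw(hw);
                return err;
        }
        return 0;
}

static void hypo_disconnect(struct usb_interface *intf)
{
        struct ieee80211_hw *hw = usb_get_intfdata(intf);

        ieee80211_unregister_hw(hw);
        ieee80211_free_hw(hw);
}

static const struct usb_device_id hypo_table[] = {
        { USB_DEVICE(0x1234, 0x5678) },         /* invented vendor/product IDs */
        { }
};
MODULE_DEVICE_TABLE(usb, hypo_table);

static struct usb_driver hypo_wifi_driver = {
        .name       = "hypo_wifi",
        .id_table   = hypo_table,
        .probe      = hypo_probe,
        .disconnect = hypo_disconnect,
};
module_usb_driver(hypo_wifi_driver);
MODULE_LICENSE("GPL");

Note that, just like the real rtl8187_driver shown in the question, there is no .fops here: userspace never opens this driver directly. It reaches it through the networking subsystem (sockets, ifconfig/ip, wireless tools), which calls into mac80211, which in turn calls the driver's ieee80211_ops callbacks.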
