I am a Linux newbie trying to understand the Linux Device Model. I have been going through the Linux 3.1.6 code base, particularly the driver part, and found that
some of the drivers (for example the i2c bus device: linux-3.1.6/drivers/i2c/i2c-dev.c) were using *register_chrdev()* and
a few others (for example the PCI bus: linux-3.1.6/drivers/pci/bus.c) were using *device_register()*.
My question is: when should one use register_chrdev() (yes, I know it's for a character device, but why not use device_register()?) and when device_register()?
Does that depend on where the driver developer wants his device/driver to be listed, like devfs vs. sysfs? Or on the interface exposed to user space to access the device?
One function registers a character device association (hooking up major:minor numbers to your file operations); the other just creates an abstract device object, so to speak. The two are complementary. The device object is used for the generation of an event so that udev can, if there is also a cdev association registered, create a node in /dev. (Compare with, for example, drivers/char/misc.c.)
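To make the interplay concrete, here is a minimal sketch of how the two sides are typically combined in one driver (names like "mydrv" are made up and error handling is trimmed; cdev_init()/cdev_add() are the finer-grained equivalent of register_chrdev()):

    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/cdev.h>
    #include <linux/device.h>

    static dev_t mydrv_devt;
    static struct cdev mydrv_cdev;
    static struct class *mydrv_class;

    static const struct file_operations mydrv_fops = {
        .owner = THIS_MODULE,
        /* .open, .read, .write, .release, ... */
    };

    static int __init mydrv_init(void)
    {
        /* 1. Reserve a major:minor and hook the file operations to it. */
        alloc_chrdev_region(&mydrv_devt, 0, 1, "mydrv");
        cdev_init(&mydrv_cdev, &mydrv_fops);
        cdev_add(&mydrv_cdev, mydrv_devt, 1);

        /* 2. Create the device object; this is what fires the uevent
         *    that lets udev create /dev/mydrv for the numbers above. */
        mydrv_class = class_create(THIS_MODULE, "mydrv");
        device_create(mydrv_class, NULL, mydrv_devt, NULL, "mydrv");
        return 0;
    }

    static void __exit mydrv_exit(void)
    {
        device_destroy(mydrv_class, mydrv_devt);
        class_destroy(mydrv_class);
        cdev_del(&mydrv_cdev);
        unregister_chrdev_region(mydrv_devt, 1);
    }

    module_init(mydrv_init);
    module_exit(mydrv_exit);
    MODULE_LICENSE("GPL");

device_create() ends up calling device_register() internally, which is why most character drivers never call device_register() by hand.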
When you register a device as a character device specifically, the following happens:
A major number is assigned accordingly. If your device falls under a functionality whose registration is based on a character device (like tty, input, etc.), it will get the corresponding major number. That's why it is said not to assign a major number statically if you are not sure.
And
There are certain file operations which correspond to operations that could be performed on char devices only.
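For illustration, the bare register_chrdev() path looks roughly like this (names are made up; passing 0 as the major asks the kernel to pick one dynamically, which is the reason for the advice above):

    #include <linux/module.h>
    #include <linux/fs.h>

    static const struct file_operations mydrv_fops = {
        .owner = THIS_MODULE,
        /* .open, .read, .write, .unlocked_ioctl, ... : char-device operations */
    };

    static int major;

    static int __init mydrv_init(void)
    {
        /* 0 => let the kernel assign a free major number dynamically */
        major = register_chrdev(0, "mydrv", &mydrv_fops);
        return (major < 0) ? major : 0;
    }

    static void __exit mydrv_exit(void)
    {
        unregister_chrdev(major, "mydrv");
    }

    module_init(mydrv_init);
    module_exit(mydrv_exit);
    MODULE_LICENSE("GPL");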
Do ask if you have any further questions.
I have a PCIe model written in SystemVerilog, although I think this question is language agnostic. The model performs PCIe configuration reads and writes and memory reads and writes perfectly in simulation. However, what I need to do is "discover" my PCIe device and configure my config-space registers in simulation. Is there a boilerplate chunk of pseudo code that represents the Linux PCIe enumeration process that I can just add my own model's transaction functions to, so that I can get a "bus walk", followed by BAR programming, SR-IOV enable if discovered, and MSI-X configuration? It seems like this would be a common exercise for a PCIe device, so maybe there is a model.
It isn't terribly difficult to do. Basically you loop through the config space, checking each possible device on the first root bus 0. When a device is found, you allocate a memory space for it based on its requested size and program the BARs accordingly. If you find any bridges, you also configure and enable them; the basic bridge registers for this are standard. This includes assigning the upstream and downstream bus numbers, which then allows you to enumerate the new downstream bus, and so on.
I had to do this once to access a PCI I/O card on a system that had no OS or other software environment. It wasn't too bad and that was across two bridges from two vendors, as well as the I/O card registers and the CPU bus root bridge setup. This was PCI, not PCIe, but it would be very much the same. You could even do it with completely hard-coded numbers if the hardware never changed, but in my case there were a couple variants so I actually had to do some simple enumeration to find the device numbers dynamically. One gotcha is that you may have to delay a bit, or retry, to give all the devices time to come online before you try to access them.
In doing that I found this book to be invaluable: PCI System Architecture (4th Edition). I notice there is also a version for PCIe: PCI Express System Architecture (1st Edition). I would definitely get one of those if you haven't already. These books contain detailed algorithms and explanations about how to do all of this. At the time I didn't really use or refer to any code to speak of, but...
The best code resource I have found is U-Boot. It operates at a similarly low level, is totally self-contained, and is still fairly small and as simple as possible. For example, the enumeration appears to start with the function pci_init(), which calls a board-specific pci_xxx_init(). This then sets up the root bridge and calls pci_hose_scan_bus() in drivers/pci/pci.c to do the real work. Also check out the routines in drivers/pci/pci_auto.c, as well as the rest of the folder.
For your task you probably only need a very small subset and could just hack out parts of these files into a simple driver. Basically a for() loop and some pci_read/write_config() calls with logic to recognize your device and bridge IDs.
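A rough sketch of that loop might look as follows. cfg_read32()/cfg_write32() stand in for whatever config-access routines your model already provides, only single-function devices and 32-bit memory BARs are handled, and the address window is arbitrary:

    #include <stdint.h>

    /* Assumed to be provided by your simulation model: config-space
     * read/write for bus/device/function at a register offset. */
    extern uint32_t cfg_read32(int bus, int dev, int fn, int off);
    extern void     cfg_write32(int bus, int dev, int fn, int off, uint32_t val);

    static uint32_t next_mem_addr = 0x80000000u;   /* example MMIO window */

    void scan_bus(int bus)
    {
        for (int dev = 0; dev < 32; dev++) {
            uint32_t id = cfg_read32(bus, dev, 0, 0x00);    /* vendor/device ID */
            if ((id & 0xffff) == 0xffff)
                continue;                                   /* no device here */

            uint32_t hdr = (cfg_read32(bus, dev, 0, 0x0c) >> 16) & 0x7f;
            if (hdr == 1) {
                /* Bridge: program primary/secondary/subordinate bus numbers,
                 * then recurse into the secondary bus -- omitted here. */
                continue;
            }

            /* Size and program the six 32-bit memory BARs (offsets 0x10..0x24). */
            for (int off = 0x10; off <= 0x24; off += 4) {
                cfg_write32(bus, dev, 0, off, 0xffffffffu);
                uint32_t bar = cfg_read32(bus, dev, 0, off);
                if (bar == 0 || (bar & 0x1))                /* unimplemented or I/O BAR */
                    continue;
                uint32_t size = ~(bar & 0xfffffff0u) + 1;   /* decode requested size */
                next_mem_addr = (next_mem_addr + size - 1) & ~(size - 1);
                cfg_write32(bus, dev, 0, off, next_mem_addr);
                next_mem_addr += size;
            }

            /* Enable memory decoding and bus mastering. */
            uint32_t cmd = cfg_read32(bus, dev, 0, 0x04);
            cfg_write32(bus, dev, 0, 0x04, cmd | 0x6);
        }
    }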
In most example projects for embedded systems there is a system file in which we can find structures for the different peripherals as well as the memory mapping of the peripheral registers; in addition, there is also a module per peripheral that contains basic functions to manipulate the peripheral, like periph_enable, periph_write, periph_read. This is the architecture I have in mind when I tackle a new project.
I actually started to work with a BF609, but now with embedded Linux on it. My task consists of writing a communication driver to talk to another device via UART. As usual, I tried to look for the files I used to use, but in vain; I can't find the mapping of the different peripherals.
I started to read this book. I understand that the kernel sees each device like a file and that a driver is mainly the implementation of the open, close, read and write functions on that file, but I still don't understand how these functions communicate with the peripheral registers.
My questions:
1) How do device drivers recognise the mapping of the peripherals? Is there something I missed? Is there any example that explains how to implement simple read and write functions, via UART for example?
2) Where can I find the mapping of the peripherals in the buildroot directory?
Thanks in advance
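For what it's worth, the piece that usually answers question 1) is that a Linux driver does not use a fixed register-map header at all: the physical base address comes from the device tree (or a board file) as a resource, the driver maps it in probe(), and its operations then touch the mapped registers. A minimal sketch, with invented register offsets and names (a real UART driver would normally sit on top of the serial core):

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/slab.h>
    #include <linux/err.h>
    #include <linux/io.h>

    #define MYUART_TX   0x00    /* made-up register offsets */
    #define MYUART_STAT 0x04

    struct myuart {
        void __iomem *regs;     /* kernel virtual address of the register block */
    };

    static int myuart_probe(struct platform_device *pdev)
    {
        struct myuart *priv;
        struct resource *res;

        priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
            return -ENOMEM;

        /* The physical base/size comes from the device tree, not from
         * a hard-coded header as on bare metal. */
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        priv->regs = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(priv->regs))
            return PTR_ERR(priv->regs);

        platform_set_drvdata(pdev, priv);

        /* Registers are then accessed through the mapping: */
        writel('A', priv->regs + MYUART_TX);
        (void)readl(priv->regs + MYUART_STAT);
        return 0;
    }

    static struct platform_driver myuart_driver = {
        .probe  = myuart_probe,
        .driver = { .name = "myuart" },   /* matched against the device node */
    };
    module_platform_driver(myuart_driver);
    MODULE_LICENSE("GPL");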
Currently I am writing a driver module which offers some entries in sysfs. I read a lot through the driver source tree and the internet. I found two approaches where sysfs_create_group() is called:
a) Most commonly: in the probe() function of the driver, as advised here:
How to attach file operations to sysfs attribute in platform driver?
A random example I looked at:
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/rtc/rtc-ds1307.c#n1580
b) In the driver struct.
http://kroah.com/log/blog/2013/06/26/how-to-create-a-sysfs-file-correctly/
I know Greg KH is a very well known developer, so I tried to follow his advice. In the bla_show()/bla_store() functions I tried to get my driver's private data, but my printk()s showed very different addresses from the ones I printed in the probe() function. My private data is (null), which is of course wrong.
When I use approach a) it works as expected, but according to Greg KH that is wrong too, even though I see it a lot in the stable tree in different drivers. Greg writes that user space has already been notified that there is a new device, but the LDD3 book states that the probe function is there to determine whether the device is present.
To sum my question up:
Why does user space get notified about the device even when the kernel doesn't yet know whether it can handle it?
Where is the right place to call sysfs_create_group()? Is it a) or b)?
LDD3: https://static.lwn.net/images/pdf/LDD3/ch14.pdf
PDF page 24
probe is a function called to query the existence of a specific device
(and whether this driver can work with it), remove is called when the
device is removed from the system, and shutdown is called at shutdown
time to quiesce the device.
I am more confused than before .....
Best Regards
Georg
A device driver is a program that controls a particular type of device that is attached to your computer.
Platform devices are inherently not discoverable, i.e. the hardware cannot say "Hey! I'm present!" to the software. So for this kind of device we need drivers known as platform drivers. These drivers provide probe() and remove() methods.
    struct platform_driver {
        int (*probe)(struct platform_device *);
        int (*remove)(struct platform_device *);
        /* ... */
        struct device_driver driver;   /* embeds common fields such as .name and .owner */
    };
probe() should in general verify that the specified device hardware actually exists. First we register our driver; once the bus core finds a matching device (matching is done by name), it calls the driver's probe().
Answer: when your device is available, you need a sysfs entry for communication with user space, so conceptually you should create your sysfs entry in probe().
You can call sysfs_notify() on your attribute and it will cause your user-space code to wake up. It triggers once the sysfs entry is available to user space, which avoids a blocking call; if the kernel has not created the sysfs entry, user space is not notified.
sysfs is a virtual file system provided by the Linux kernel that exports information about various kernel subsystems, hardware devices, and associated device drivers from the kernel's device model to user space through virtual files. When your device is available, you need this entry to export your information.
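To make that concrete for approach a), here is a hedged sketch (the attribute, the private-data struct and the driver name are invented): the group is created in probe() after platform_set_drvdata(), and the show() callback recovers the private data with dev_get_drvdata(), which is the step that was producing (null) above.

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/slab.h>
    #include <linux/sysfs.h>

    struct mydrv_data {
        int value;                      /* example private state */
    };

    static ssize_t value_show(struct device *dev,
                              struct device_attribute *attr, char *buf)
    {
        struct mydrv_data *data = dev_get_drvdata(dev);  /* set in probe() */

        return sprintf(buf, "%d\n", data->value);
    }
    static DEVICE_ATTR(value, 0444, value_show, NULL);

    static struct attribute *mydrv_attrs[] = {
        &dev_attr_value.attr,
        NULL,
    };
    static const struct attribute_group mydrv_attr_group = {
        .attrs = mydrv_attrs,
    };

    static int mydrv_probe(struct platform_device *pdev)
    {
        struct mydrv_data *data;

        data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
        if (!data)
            return -ENOMEM;

        /* Store the private data *before* the attributes become visible,
         * otherwise show()/store() may run while drvdata is still NULL. */
        platform_set_drvdata(pdev, data);

        return sysfs_create_group(&pdev->dev.kobj, &mydrv_attr_group);
    }

    static int mydrv_remove(struct platform_device *pdev)
    {
        sysfs_remove_group(&pdev->dev.kobj, &mydrv_attr_group);
        return 0;
    }

    static struct platform_driver mydrv_driver = {
        .probe  = mydrv_probe,
        .remove = mydrv_remove,
        .driver = { .name = "mydrv" },
    };
    module_platform_driver(mydrv_driver);
    MODULE_LICENSE("GPL");

The race Greg describes can be avoided by handing the attribute group to the driver core instead (the .groups pointers on the device, class or device type), so the files exist before the uevent goes out; exactly which field is available depends on the kernel version.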
I have booted Ubuntu on a ZedBoard. I want to transfer data between the FPGA and Linux. For example, I want to write or read a register from Linux. What is the best way of doing it? I don't have any idea yet.
Thanks.
First of all, you need to say specifically what you want to do. For example, if you want to access the I/O signals on the FPGA, you need to first add the GPIO module to your system, then synthesize and implement it.
Then you use the Linux GPIO driver to access the port, as explained on this page:
Linux GPIO Driver
The GPIO driver fits in the Linux GPIO framework which is not a char
mode driver. Yet it does provide access to the GPIO by user space
through the sysfs filesystem. This allows each GPIO signal to be read
and written in similar manner to a char mode device. The interface is
somewhat documented in the kernel tree at Documentation/gpio.txt. The
following text is aimed to augment, not replace the existing
documentation.
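For example, once a GPIO has been exported through that sysfs interface, plain user-space C can read it. The GPIO number below (960) is only an example; the actual number depends on the base assigned to the controller on your system:

    #include <stdio.h>

    #define GPIO "960"   /* board-specific GPIO number, example only */

    int main(void)
    {
        char path[64], value[4];
        FILE *f;

        /* Ask the kernel to expose the GPIO in sysfs. */
        f = fopen("/sys/class/gpio/export", "w");
        if (f) { fputs(GPIO, f); fclose(f); }

        /* Configure it as an input. */
        snprintf(path, sizeof(path), "/sys/class/gpio/gpio%s/direction", GPIO);
        f = fopen(path, "w");
        if (f) { fputs("in", f); fclose(f); }

        /* Read the current level ("0" or "1"). */
        snprintf(path, sizeof(path), "/sys/class/gpio/gpio%s/value", GPIO);
        f = fopen(path, "r");
        if (f && fgets(value, sizeof(value), f))
            printf("gpio%s = %s", GPIO, value);
        if (f)
            fclose(f);

        return 0;
    }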
For other, more complex interfaces you need to create your own driver or use one of the drivers that are available and modify it to fit your needs.
I have been trying to understand the mfd framework in the Linux kernel in order to write my drivers, but there seems to be hardly any documentation, and the mfd core itself doesn't seem to have many helpful comments. So I am trying to understand what the mfd_cell structure describes; that seems to be the basis here. What I'm particularly interested in finding out is whether this is used as a general abstraction for 'x' number of sub-devices, or whether it is intended/useful for a full hierarchy of sub-devices.
An MFD is a device that contains several sub-devices. For instance, in embedded systems a PMIC usually contains a battery manager, a charger and sometimes devices with unrelated functions like a USB PHY, an Audio codec, a Real-Time Clock, ...
A cell is meant to describe a sub-device. The mfd subsystem will use the information registered in that structure to create a platform device for each sub-device, along with the platform_data for the sub-device.
You can specify more advanced things like the resources used by this device and suspend-resume operations (to be called from the driver for the sub-device).
The new platform devices that are created will have the cell structure as their platform data and can access the real platform data through cell->platform_data.
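As a concrete (entirely made-up) example, a PMIC driver might describe and register its sub-devices roughly like this. The cell names, resources and platform data are hypothetical, the parent is shown as a platform driver only for brevity (a real PMIC is often an I2C or SPI client), and the mfd_add_devices() signature has varied a little between kernel versions:

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/mfd/core.h>

    /* Hypothetical platform data handed to the charger sub-driver. */
    static struct my_charger_pdata {
        int max_current_ma;
    } charger_pdata = { .max_current_ma = 500 };

    /* Example IRQ resource forwarded to the charger cell. */
    static struct resource charger_resources[] = {
        { .start = 1, .end = 1, .flags = IORESOURCE_IRQ },
    };

    static const struct mfd_cell my_pmic_cells[] = {
        {
            .name = "my-pmic-rtc",          /* becomes its own platform device */
        },
        {
            .name          = "my-pmic-charger",
            .platform_data = &charger_pdata,
            .pdata_size    = sizeof(charger_pdata),
            .resources     = charger_resources,
            .num_resources = ARRAY_SIZE(charger_resources),
        },
    };

    static int my_pmic_probe(struct platform_device *pdev)
    {
        /* One platform device per cell is created under &pdev->dev; the
         * trailing arguments (memory base, IRQ base, IRQ domain) are not
         * needed in this simple case. */
        return mfd_add_devices(&pdev->dev, -1, my_pmic_cells,
                               ARRAY_SIZE(my_pmic_cells), NULL, 0, NULL);
    }

    static int my_pmic_remove(struct platform_device *pdev)
    {
        mfd_remove_devices(&pdev->dev);
        return 0;
    }

    static struct platform_driver my_pmic_driver = {
        .probe  = my_pmic_probe,
        .remove = my_pmic_remove,
        .driver = { .name = "my-pmic" },
    };
    module_platform_driver(my_pmic_driver);
    MODULE_LICENSE("GPL");

The sub-drivers themselves are then ordinary platform drivers that bind by name to "my-pmic-rtc" and "my-pmic-charger".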