How are platform drivers managed in Linux?

Hello all,
I am trying to learn embedded Linux.
Please provide clarity on the following:
Since the DTB provides board- and SoC-related information, is it sufficient to ensure that the correct drivers will be enabled?
How does the correct driver get selected, and does that happen at build time or at kernel runtime based on the DTB? (See my sketch below.)
How are drivers for platform devices (UART, SPI, I2C, etc.) handled in Linux? Since different SoCs have different implementations of these peripherals (registers and bit fields will differ), how can a single driver handle more than one SoC?
Is a well-defined DTB describing the board, plus the kernel, sufficient to get Linux up and running on any platform?
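To make my question concrete, here is my rough (possibly wrong) sketch of how I understand a platform driver advertises which device-tree nodes it can handle; the compatible string "vendor,my-uart" and the my_uart_* names are made up:

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/* Table of device-tree "compatible" strings this driver claims. */
static const struct of_device_id my_uart_of_match[] = {
    { .compatible = "vendor,my-uart" },   /* must match the DTS node */
    { }
};
MODULE_DEVICE_TABLE(of, my_uart_of_match);

/* Called at runtime when a DT node with a matching compatible string
 * is turned into a platform device; SoC-specific register layouts are
 * handled here (or selected via per-compatible .data). */
static int my_uart_probe(struct platform_device *pdev)
{
    dev_info(&pdev->dev, "probed from device tree\n");
    return 0;
}

static struct platform_driver my_uart_driver = {
    .probe = my_uart_probe,
    .driver = {
        .name = "my-uart",
        .of_match_table = my_uart_of_match,
    },
};
module_platform_driver(my_uart_driver);

MODULE_LICENSE("GPL");

Is my understanding right that this matching happens at kernel runtime, while whether the driver code exists at all is a build-time (Kconfig) decision?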
Thanks,
Suraj

Related

How to debug a graphics driver in Linux?

I am new to Linux kernel debugging. I have a Radeon graphics card and I am doing some graphics driver development for my embedded system. Before writing a custom driver for another Radeon card, I want to understand how the graphics driver behaves in Linux. I have read a bit about DRM, GEM/TTM, KMS, and framebuffers, but I want to see them actually at work on a Linux system. I have an Ubuntu system with kernel 3.10.x.
I want to debug the driver and observe the following. Please help me figure out how to do it:
How to access the framebuffer and see what is currently being drawn (this is more of a curiosity; see the sketch below).
How TTM and GART tables are maintained and how to interpret them (any link would also help), as well as KMS, since my display is DVI-D.
What role DMA plays here. Can I begin without DMA (over PCI for the time being, etc.)?
The minimal device driver requirements to draw some raw pixels on the screen.
Unlike Linux, my embedded system does not treat devices as files, so I need to understand these mechanisms and reinterpret them for my system.
The plan is to build Mesa on top of it. I am just at the beginning stage, so any help would be appreciated.
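For the framebuffer part, my current starting point is a minimal user-space sketch, assuming the fbdev (or fbdev-emulation) device /dev/fb0 is present; it just prints the current mode and scribbles on the top rows:

#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;

    if (fd < 0 || ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
        perror("/dev/fb0");
        return 1;
    }

    /* Report the current mode. */
    printf("%ux%u, %u bpp, line length %u bytes\n",
           var.xres, var.yres, var.bits_per_pixel, fix.line_length);

    /* Map the framebuffer and paint the top 100 scanlines white. */
    size_t len = (size_t)fix.line_length * var.yres;
    unsigned char *fb = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(fb, 0xff, (size_t)fix.line_length * 100);

    munmap(fb, len);
    close(fd);
    return 0;
}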
Thanks

What is an SoC (system on chip)? Does the Renesas V850 have a system on it?

I have experience writing a C program and burning the program into a chip using an IDE provided by the chip manufacturer.
I also heard that there is a concept called SoC, which means an operating system, like Linux, is running on a chip. In this case, I can run my program on the chip just like on a Linux PC.
I don't really know the differences between these two kinds of chips. Are they the same? Can I install Linux on every chip?
I also have to use a chip called the Renesas V850 in my work. Which kind of chip is the V850?
SoC is just a marketing term for 'more than a processor on a chip'. It doesn't imply Linux or any operating system.
Years ago, each part of a system was on its own chip: processor, serial port, memory, ADC, DAC, etc. You had a PCB and a schematic that tied them all together.
Over time, more and more got integrated into the processor, particularly for application-specific processors and microcontrollers. Today, pretty much only big-iron processors like Intel and AMD flagship processors are stand-alone, and even then there are some x86 chips produced that are 'SoCs' (like the AMD Geode line, if that's still around). Everything else has USB ports, serial ports, ADCs, DACs, even wireless radios integrated into the same die.
As for 'what is a Renesas V850?': you'd do better to google that and read the product documentation. It isn't an ARM or MIPS core, and it doesn't appear to be supported by the mainline Linux kernel, only μClinux.
The Renesas V850 Wikipedia page states that Linux kernel support for the V850 has been absent since version 2.6.27 (which was released in 2008).
Typically, you need to know which family your chip belongs to and read more about it on the Renesas website. They provide all the documentation you may need. There is also a section with application notes and sample code that may help.

Linux - NIC flags configuration

Context
Debian 64-bit, kernel 3.18.x.
I am literally struggling to understand how a network driver is initialized.
I mean, how do I choose which flags to set? I have been digging through the kernel for days now to train myself. The card setup is the only point I'm missing.
I'll take the Intel 82574 as an example. I downloaded the card's datasheet and found a lot of information, but not a clue about how to set up the hardware.
Question
Where do I start to learn which flags to set? The datasheet didn't help me (I am not very experienced but willing to learn).
Please give me a starting point, a tip, or anything that helps me understand what is going on in the already-written open-source driver.
How does a developer know how to initialize his NIC? (Yes, I am reinventing the wheel just long enough to understand.)
You'll need to read the source code of the kernel module that handles your specific NIC.
EDIT: Of course, to develop such a module, you'd usually just use a register map as specified in a datasheet or application note; often, manufacturers develop their Linux drivers themselves, so the driver developers might even be the same people who developed the chipset (because it's really handy to have a platform to test against -- it's impossible to test hardware without having something like a driver, so you might as well write a proper driver).
Furthermore, devices often come with code examples -- no one is going to build a device based on an IC that he has not seen in action.
If you've got access to neither proper documentation nor source, you can only reverse engineer - and that's an incredibly large field.
Using your example of the Intel 82574 network adapter: Intel provides a zip file of the source code used to build the Linux driver. The driver is like all drivers in that it hooks into the OS networking API.
The Linux networking API is documented on the linux.org site and discussed on popular Linux sites like LWN.net. Below is a link to the Linux Device Drivers (LDD3) chapter on network drivers, which covers the networking API called NAPI.
https://static.lwn.net/images/pdf/LDD3/ch17.pdf
You'll notice in the Intel igb driver source code that the NAPI net_device data structure is one of the first things that is set up. It registers the driver with the OS. This way, the OS knows which igb functions to call when loading/unloading the driver, or when it needs to send/receive data.
The igb functions read/modify/write the necessary bits in the 82574's memory-mapped registers that control and monitor the device. The device registers are all documented in the 82574 datasheet available on Intel's site. And this is usually the case for almost any networking company like Broadcom/Chelsio/Mellanox/Marvell.
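To make that hook-up step concrete, here is a heavily stripped-down sketch (not the actual igb code; the my_nic_* names are placeholders) of how a NIC driver registers its net_device and NAPI poll function, which is where its hardware-specific register accesses get plugged into the networking core:

#include <linux/etherdevice.h>
#include <linux/module.h>
#include <linux/netdevice.h>

struct my_nic_priv {
    struct napi_struct napi;
    void __iomem *regs;             /* memory-mapped device registers */
};

/* Polled by the networking core after an RX interrupt: read descriptors,
 * hand packets up the stack, then re-enable interrupts. */
static int my_nic_poll(struct napi_struct *napi, int budget)
{
    napi_complete(napi);
    return 0;
}

/* Called by the core to transmit: fill a TX descriptor and write the
 * doorbell/tail register documented in the datasheet. */
static netdev_tx_t my_nic_xmit(struct sk_buff *skb, struct net_device *dev)
{
    dev_kfree_skb(skb);
    return NETDEV_TX_OK;
}

static const struct net_device_ops my_nic_ops = {
    .ndo_start_xmit = my_nic_xmit,
    /* .ndo_open, .ndo_stop, .ndo_set_mac_address, ... */
};

/* Typically done from the PCI probe routine. */
static int my_nic_register(void)
{
    struct net_device *dev = alloc_etherdev(sizeof(struct my_nic_priv));
    struct my_nic_priv *priv;

    if (!dev)
        return -ENOMEM;
    priv = netdev_priv(dev);
    dev->netdev_ops = &my_nic_ops;
    /* the weight argument was dropped from netif_napi_add() in newer kernels */
    netif_napi_add(dev, &priv->napi, my_nic_poll, 64);
    return register_netdev(dev);
}

The flags you asked about are then just bits in those memory-mapped registers, written with readl()/writel() on priv->regs following the initialization sequence described in the datasheet.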
Hope that helps a little more.

Questions about embedded Linux device drivers from a Linux newbie

I have been studying Linux drivers recently.
According to the articles I have read, device driver modules are likely to be loaded automatically on demand by the kernel. I am therefore wondering how the kernel figures out which module to load for a specific device (sound card, I2C/SPI device, etc.). I also cannot quite picture how the kernel detects each hardware device at boot time.
Answers relevant to embedded Linux are preferred; PC Linux answers are also welcome!
Thanks!
I think you are mixing up two different things: hardware detection and on-demand module loading.
In some cases, the kernel explicitly issues a module request. However, in most cases, the kernel itself does not do any "on-demand loading".
But wait, you must be mistaken: if I plug in my shiny new webcam, isn't the module automagically loaded?
Yes it is, but not by the kernel. All the kernel does is call a userspace program with a so-called "hotplug event" or "uevent" as its arguments. On a Linux PC, this userspace program is usually udev, but on an embedded system you can use, for example, mdev. You can find a more detailed explanation here and here.
Regarding the second part of your question, the kernel does hardware discovery only if the hardware is discoverable. Examples of discoverable buses are USB and PCI; examples of non-discoverable hardware buses are SPI and I2C.
In the latter case, the presence of a particular device on a given bus is either encoded directly in the kernel or given to it by the bootloader. Google "device tree" for an example of the latter.
To sum things up: hardware detection is done by the kernel, and module loading is done by userspace, with information provided by the kernel.
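As a concrete illustration of the glue between those two halves, here is a hedged sketch of how a driver advertises the hardware it supports so that udev/mdev can map a uevent's MODALIAS to the right module; the vendor/product IDs and my_webcam_* names are made up:

#include <linux/module.h>
#include <linux/usb.h>

static const struct usb_device_id my_webcam_ids[] = {
    { USB_DEVICE(0x1234, 0x5678) },   /* hypothetical webcam vendor:product */
    { }                               /* terminating entry */
};
MODULE_DEVICE_TABLE(usb, my_webcam_ids);  /* exported as module aliases */

static int my_webcam_probe(struct usb_interface *intf,
                           const struct usb_device_id *id)
{
    dev_info(&intf->dev, "webcam bound\n");
    return 0;
}

static void my_webcam_disconnect(struct usb_interface *intf)
{
    dev_info(&intf->dev, "webcam unbound\n");
}

static struct usb_driver my_webcam_driver = {
    .name       = "my_webcam",
    .id_table   = my_webcam_ids,
    .probe      = my_webcam_probe,
    .disconnect = my_webcam_disconnect,
};
module_usb_driver(my_webcam_driver);

MODULE_LICENSE("GPL");

You can see the aliases generated from the table with modinfo, which is what modprobe matches the uevent's MODALIAS against.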

Is there a way to ask the Linux Kernel to re-run its PCI initialization code?

I'm looking for either a kernel mode call that I can make from a driver, a userland utility, or a system call that will ask the Kernel to look at the PCI bus and either completely re-run its initialization, or initialize a specific device. Specifically, I need the Kernel to recognize a device that was added to the bus after boot and then configure its address space, interrupt, and other configuration parameters, and finally enable the device so that I can load the driver for it (unless this all happens as part of the driver load).
I'm stuck on the 2.4.x series Kernel for this, and am currently working with 2.4.20, but will be moving to 2.4.37 if it matters. The distro is a stripped down Red Hat 7.3 running in a ram disk, but I can add in whatever tools are needed to get this working (as long as they play nice with 2.4 series).
If some background would help clarify what I'm trying to do: From a cold boot, once in Linux I use GPIO to program an FPGA. Part of the FPGA, once programmed, implements a simple PCI device. Currently, after programming the FPGA, I reboot the system and Linux recognizes the device after coming up and loads the driver for it.
Instead of needing that reboot, I'd like to simply ask the Kernel to do whatever it does during boot up to find PCI devices (I have the Kernel configured to find PCI devices on its own, instead of asking the BIOS for that information, so the BIOS won't need to know about this device (I hope)).
I believe that Linux is capable of seeing the device after it is programmed but before a reboot, because scanpci will show the device after I program it, as will lspci -H 1. I just need a way to get it into /proc/pci, configured and enabled.
The command below will rescan the complete root bus:
echo "1" > /sys/class/pci_bus/0000\:00/rescan
You could speed up the reboot with kexec, if you don't figure out how to get the PCI scan redone. You could ask this on the LKML, if you haven't already.
Unloading/reloading the module doesn't help, does it?
http://www.linuxjournal.com/article/5633 suggests you should be able to do it with 2.4 kernels using pcihpfs.
If that isn't working, maybe the driver doesn't support hotplug?
It would probably crash the system if you reconfigured the addresses of other PCI devices while they are in use.
A better way would be to just configure the new card. If your kernel has support for CardBus devices, it already knows how to configure a newly inserted PCI device (which is what CardBus is). You just need to figure out how to get the kernel to do it...
It should be possible for a kernel module to do this. Even if you can't use the built-in hotplug code, you should be able to set the PCI resources using calls to pci_bus_write_config_dword() and friends. There is probably some IRQ routing to set up as well.
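To illustrate what "pci_bus_write_config_dword() and friends" amounts to, here is a hedged sketch of programming a BAR, an interrupt line, and the command register from a kernel module; the address and IRQ are placeholders, the exact helper names differ slightly between 2.4 and later kernels, and on a modern kernel you would normally let the PCI core assign resources instead:

#include <linux/pci.h>

/* Sketch only: blindly writing a BAR like this is only safe if the chosen
 * MMIO window does not collide with resources already in use. */
static int configure_new_device(struct pci_dev *dev)
{
    u16 cmd;

    /* Program BAR0 with a free MMIO window (placeholder address). */
    pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, 0xE0000000);

    /* Tell the device (and the driver) which IRQ line it uses (placeholder). */
    pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 10);

    /* Enable memory decoding and bus mastering. */
    pci_read_config_word(dev, PCI_COMMAND, &cmd);
    cmd |= PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER;
    pci_write_config_word(dev, PCI_COMMAND, cmd);

    return 0;
}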
