I have little background in how this hardware actually works, but now I'm required to learn how to write a Linux frame buffer driver for Android devices.
I'm confused by the Linux graphics stack. From what I understand, on a desktop computer the compositing window manager interacts with DRM, which then sends data to the specific video card driver. On the other hand, there is some kind of controller retrieving data from the GPU's memory through DMA and sending it to the monitor, as suggested by the answer here.
Also, from the diagram on page 29 of this book, I figured that a frame buffer driver sits on top of the actual graphics devices, so it must need to interact with a specific video card driver, for example an nVidia driver.
But when I google how to write a frame buffer driver for an embedded device, the results make it look as if the driver is directly responsible for talking to the LCD, so it seems to sit even below a video card driver.
So is a frame buffer driver actually a video card driver?
A framebuffer driver provides an interface for
Modesetting
Memory access to the video buffer
Basic 2D acceleration operations (e.g. for scrolling)
To provide this interface, the framebuffer driver generally talks to the hardware directly.
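For a feel of what that interface looks like from user space, here is a minimal sketch of the fbdev API (error handling trimmed, and it assumes the current mode is 32 bpp):

    /* Query the video mode through the fbdev ioctls, mmap the buffer,
     * and write a single grey pixel at (10, 10). */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;

        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);  /* mode: resolution, depth */
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo);  /* fixed info: stride etc. */

        size_t size = (size_t)finfo.line_length * vinfo.yres;
        uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);

        size_t off = 10 * finfo.line_length
                   + 10 * (vinfo.bits_per_pixel / 8);
        *(uint32_t *)(fb + off) = 0x00808080;    /* grey, assuming 32 bpp */

        munmap(fb, size);
        close(fd);
        return 0;
    }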
For example, the vesafb framebuffer driver will use the VESA standard interface to talk to the video hardware. However, this standard is limited, so there isn't really much hardware acceleration going on and drawing is slow.
Another example is the intelfb framebuffer driver. It talks to certain Intel hardware using a proprietary interface that exposes more acceleration facilities, so it is faster.
Nowadays, KMS drivers are used instead for most hardware. They expose both a framebuffer and access to other GPU functionality (e.g. OpenGL) through DRM.
Your confusion seems to arise from the fact that the framebuffer driver and the X11 GPU driver are in fact competing! This is why, on a KMS system, switching between graphical and text consoles is instant, whereas on a non-KMS system it is slow, because both the fb driver and the X11 driver have to re-initialize the video hardware on every console switch.
Find more information in the comprehensive talk Linux Graphics Demystified by Martin Fiedler:
http://keyj.emphy.de/files/linuxgraphics_en.pdf
I am studying operating systems and came across device controllers.
I gathered that a device controller is hardware whereas a device driver is software.
I also know that HDDs and SSDs both have a small PCB built into them, and I assume those PCBs are the device controllers.
Now what I want to know is: is there another device controller on the PC/motherboard side of the bus/cable connecting the HDD/SSD to the rest of the system?
Is the configuration: OS >> Device Driver >> Bus >> Device Controller >> HDD/SSD
Or is it: OS >> Device Driver >> Device Controller >> Bus >> Device Controller >> HDD/SSD
Or is it some other configuration?
Sites I visited for answers:
Tutorialspoint
JavaPoint
Idc online
Quora
Most hard disks on desktops are SATA or NVMe. eMMC is popular in smartphones, though some use something else. These are hardware interface standards that describe how to interact electrically with those disks: what voltage, at what frequency, and for what amount of time you need to apply a signal to a certain pin (a bus line) to make the device behave or react in a certain way.
Most computers are split across a few external chips. On a desktop, these are mostly SATA, NVMe, DRAM, USB, audio output, the network card, and the graphics card. Even though there are only a few chips, the CPU would be very expensive if it had to support all those hardware interface standards on the same silicon die. Instead, the CPU implements PCI/PCIe as a general interface for interacting with all those chips through memory-mapped registers. Each of these devices has a PCIe controller sitting between the device and the CPU. In the same order as above, these are AHCI, an NVMe controller, the DRAM controller (not PCI, and inside the CPU), xHCI (almost everywhere), and Intel HDA (as one example). Network cards are PCIe devices themselves, with no separate controller outside the card, and graphics cards are likewise self-standing PCIe devices.
So the OS detects the registers of those devices, which are mapped into the address space. The OS writes to those locations, and the writes land in the devices' registers. PCIe devices can also read/write DRAM directly, but this is managed by the CPU in its general implementation of the PCIe standard, most likely through some bus arbitration. The CPU really doesn't care what device it is writing to: it knows there is a PCI register there, the OS instructs it to write something, so it does. It just happens that the device implements a standard, and that the OS developers read that standard, so they write the proper values into those registers and the proper data structures into DRAM to make sure the device knows what to do.
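In kernel code, "writing the registers of the devices" boils down to something like the following sketch; the BAR number and the 0x10 offset are made up for illustration:

    /* Sketch: map a PCI device's BAR 0 and poke a memory-mapped register.
     * The register offset and value are hypothetical. */
    #include <linux/io.h>
    #include <linux/pci.h>

    static int demo_probe(struct pci_dev *pdev,
                          const struct pci_device_id *id)
    {
        void __iomem *regs;
        int err = pci_enable_device(pdev);
        if (err)
            return err;

        regs = pci_iomap(pdev, 0, 0);     /* BAR 0 into kernel space */
        if (!regs)
            return -ENOMEM;

        writel(0x1, regs + 0x10);         /* hypothetical "go" register */
        pci_iounmap(pdev, regs);
        return 0;
    }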
Drivers implement the software interface standard of those controllers. The drivers are the ones instructing the CPU which values to write, and they build the proper data structures in DRAM for giving commands to the controllers. A user thread simply places the syscall number in a conventional register determined by the OS developers and executes an instruction that jumps into the kernel at a specific address, which the kernel chose by writing a register at boot. Once there, the kernel looks at the register holding the number and determines which driver to call based on the operation.
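From the user side, that dispatch is normally hidden behind libc, but you can make it explicit; this sketch issues the same write() the C library would, with the syscall number spelled out:

    /* The write() syscall with its number made explicit: the number and
     * arguments go into registers, then the CPU traps into the kernel,
     * which dispatches to the driver behind file descriptor 1. */
    #define _GNU_SOURCE
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hi\n";
        syscall(SYS_write, 1, msg, sizeof(msg) - 1);
        return 0;
    }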
On Linux and some other systems, this is done with files. You call syscalls on files, and the OS has a driver attached to each file; these are called virtual files. A lot of transfer mechanisms resemble the read/write-a-file pattern, so Linux uses that to build a general driver model where the kernel doesn't even need to understand the driver. The driver effectively says: create a file for me over there that isn't really on the hard disk, and if someone opens it and calls an operation on it, call this function here in my driver. From there, the driver can do whatever it wants because it runs in kernel mode. It just creates the proper data structures in DRAM and writes the registers of the device it drives to make it do something.
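As a sketch of that "virtual file" model (the demo_* names are illustrative, not a real driver), a misc character device registers a file in /dev and hands the kernel a table of callbacks:

    /* A misc char device: the kernel creates /dev/demo and routes
     * read() on it to the function below. */
    #include <linux/fs.h>
    #include <linux/miscdevice.h>
    #include <linux/module.h>

    static ssize_t demo_read(struct file *f, char __user *buf,
                             size_t len, loff_t *off)
    {
        static const char msg[] = "hello from the driver\n";
        return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
    };

    static struct miscdevice demo_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "demo",              /* shows up as /dev/demo */
        .fops  = &demo_fops,
    };

    module_misc_device(demo_dev);     /* register on load, remove on unload */
    MODULE_LICENSE("GPL");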
Is there any way, in any Linux-based OS, to intercept the instructions meant for the GPU and forward them to an external device connected via USB 3.0 (and, obviously, get the pixel output back)?
In Windows there is no native support for this kind of thing. Although many external graphics cards exist that connect to a PCI slot, I didn't come across any USB ones.
Is there any way to accomplish this with native support of any of the Linux OSes?
EDIT: Judging from the comments, I was misunderstood. I want to program an external graphics card; I'm just looking for a way to get the GPU instructions to my device and get the pixel array back.
Currently, I am developing my own video frame buffer driver with the help of the Linux PCIe and virtual frame buffer drivers.
My custom driver works fine at 720x480p video resolution but gets slow at 720p.
I have simply mapped the frame buffer memory directly onto the DDR2 memory behind the PCIe interface on the FPGA system.
The DMA controller logic is implemented on the FPGA side.
How can I implement DMA read/write operations in my own frame buffer driver to solve this slow frame rate issue?
Please let me know if anyone has an idea about this.
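One common pattern (a sketch only; the register offsets below are hypothetical and depend entirely on your FPGA design) is to back the framebuffer with a coherent DMA buffer in host memory and point the FPGA's DMA engine at it, so the device pulls frames itself instead of the CPU pushing every pixel over PCIe:

    /* Sketch: allocate a coherent buffer as the framebuffer backing
     * store and tell a hypothetical FPGA DMA engine where it lives. */
    #include <linux/dma-mapping.h>
    #include <linux/io.h>
    #include <linux/kernel.h>
    #include <linux/pci.h>

    #define DMA_REG_SRC_LO  0x00     /* hypothetical register layout */
    #define DMA_REG_SRC_HI  0x04
    #define DMA_REG_LENGTH  0x08
    #define DMA_REG_START   0x0c

    static int setup_fb_dma(struct pci_dev *pdev, void __iomem *regs,
                            size_t fb_size, void **fb_virt)
    {
        dma_addr_t fb_dma;

        /* The CPU writes pixels here; the device reads the same pages. */
        *fb_virt = dma_alloc_coherent(&pdev->dev, fb_size, &fb_dma,
                                      GFP_KERNEL);
        if (!*fb_virt)
            return -ENOMEM;

        writel(lower_32_bits(fb_dma), regs + DMA_REG_SRC_LO);
        writel(upper_32_bits(fb_dma), regs + DMA_REG_SRC_HI);
        writel((u32)fb_size, regs + DMA_REG_LENGTH);
        writel(1, regs + DMA_REG_START);   /* kick off readout */
        return 0;
    }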
In Linux, the HAL provides hardware abstraction, and device drivers also provide hardware abstraction. Can you please clarify the difference between the two?
The device driver communicates with a specific device at a specific buffer and control-flag block location. A hardware abstraction layer abstracts away the details of how specific devices work. For example, the driver for a USB mouse is very different from the driver for a PS/2 mouse, but at the HAL layer they are both mice and can be treated interchangeably.
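To make that layering concrete, here is a toy sketch (not any real kernel API; all the names are made up) of a HAL-style uniform interface sitting on top of two device-specific drivers:

    /* Toy illustration: one HAL-level "mouse" interface, two
     * device-specific drivers behind it. */
    #include <stdio.h>

    struct mouse_ops {
        void (*read_position)(int *x, int *y);
    };

    /* Each "driver" would speak its own bus protocol internally. */
    static void usb_read_position(int *x, int *y) { *x = 1; *y = 2; }
    static void ps2_read_position(int *x, int *y) { *x = 3; *y = 4; }

    static const struct mouse_ops usb_mouse = { usb_read_position };
    static const struct mouse_ops ps2_mouse = { ps2_read_position };

    /* Application code sees only "a mouse", never the bus behind it. */
    static void app_poll(const struct mouse_ops *mouse)
    {
        int x, y;
        mouse->read_position(&x, &y);
        printf("cursor at %d,%d\n", x, y);
    }

    int main(void)
    {
        app_poll(&usb_mouse);
        app_poll(&ps2_mouse);
        return 0;
    }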
I would say that the HAL provides hardware abstraction using device drivers. From a certain point of view, no device can work without a driver. The HAL goes one step further, offering a uniform (or "easier") API to the application.
You can bypass the HAL and talk directly to the device driver, but you cannot bypass the device driver and talk directly to the hardware (this last sentence is more or less valid in general, depending on the OS and environment).
The main difference is what they provide abstraction for. The HAL abstracts processors; device drivers abstract different devices. So in a sense, the HAL is the "device driver" of the processor, or of the motherboard in PCs.
Back in the day, every programmer who wrote an app also wrote drivers for the various hardware they wanted to support. So if you had an idea for an app that needed network capabilities, you also needed to know how to program drivers for the network card. Then came the HAL.
So instead of having your software and OS reach out to the hardware directly, there is now a layer in between called the HAL. The HAL lies underneath, or within, the operating system layer.
Now nobody is allowed to access the hardware except through the hardware abstraction layer (HAL). Only the HAL is allowed to touch the hardware.
Now it's something standard: all devs have to do is make their game/app work with the HAL.
Now we have the drivers. The drivers tell the HAL how to access the actual hardware.
So whoever makes a sound card just makes a driver that tells the HAL how to access that sound card.
So overall, our software interacts with the HAL, and the HAL uses drivers to interact with the hardware. Through the drivers, we tell the HAL how to access that sound card, network card, etc.
I'm trying to develop a "virtual" video driver based on the ViVi example project. It's virtual in that it doesn't interact with any camera: it gets a video stream from one user program (C++) and acts as a video driver for another user program (Flash), which displays the stream.
So, if I have a /dev/video0, one program needs to write frames to it while another reads them. Is that possible?
I need this because Flash doesn't recognize this camera, so I'm using a virtual driver as a bridge between my grabber (which uses the real driver) and Flash.
Yes.
More generally, a device driver can allow as many simultaneous opens as it wants. Take a look at Linux Device Drivers for more info. You can use filp->private_data to store data relevant to the specific open instance.
For even more flexibility, a device driver isn't even limited to having a single device node in /dev.
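As a sketch of what per-open state looks like (the names here are illustrative), each open() can allocate its own bookkeeping and hang it off filp->private_data:

    /* Per-open state via filp->private_data: every open() on the
     * device node gets its own struct. */
    #include <linux/fs.h>
    #include <linux/slab.h>

    struct session {
        int frames_read;          /* per-open bookkeeping */
    };

    static int demo_open(struct inode *inode, struct file *filp)
    {
        struct session *s = kzalloc(sizeof(*s), GFP_KERNEL);
        if (!s)
            return -ENOMEM;
        filp->private_data = s;   /* visible to later calls on this fd */
        return 0;
    }

    static int demo_release(struct inode *inode, struct file *filp)
    {
        kfree(filp->private_data);
        return 0;
    }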
There used to be the vloopback driver, which did exactly what you want to do. However, it wasn't part of the standard kernel. Some time ago, I wrote a library (dv4linux) that intercepted libc reads/writes on /dev/video to achieve something similar. The current version has serious issues with newer Firefox's malloc handling, though, and berlios.de may go out of service soon.
Can a driver be used by two programs:
It usually can, but it is driver-dependent. When it comes to data capture, you often have one process that gets all the data, while other processes have only limited access to the driver's functionality. So the API is fine with multiple processes opening a driver, but in the end it all depends on the driver.
Can the VIVI driver be used as a bridge driver:
No. It is a video capture emulation driver; there is no "video output" or "video sink" capability in it. You will have to understand why Flash doesn't work with your real driver but does work with a virtual driver. strace is your friend.