While building the Linux kernel from source, I noticed that it also builds some drivers (e.g. drivers/gpu/drm/i915, nouveau, etc.).
On the other hand, on my system I also have the xserver-xorg-video-intel package installed (Ubuntu). So the question is: how does the xserver-xorg-video-intel driver relate to drivers/gpu/drm/i915 from the kernel? Are they two separate things with different purposes (e.g. the second is for X11 only)?
The Linux graphics stack is a wide and complex ecosystem.
You have a general overview here:
or a more complete and technical one from Stephane Marchesin, who is one of the nouveau hackers.
Basically, graphics toolkits (Qt, GTK, EFL, etc.) talk to Xorg. Xorg uses libdrm to interact with the kernel DRM infrastructure, which sits on top of and abstracts the video card drivers (nouveau, i915, ...).
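As a rough illustration of that bottom layer, here is a minimal sketch in C (assuming libdrm and its headers are installed; the device path and build command are the usual ones, not something stated in the answer) that asks the kernel which DRM driver sits behind /dev/dri/card0:

    /* Minimal sketch: ask the kernel DRM core which driver backs /dev/dri/card0.
     * Assumed build: cc drmname.c $(pkg-config --cflags --libs libdrm) */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);   /* character device exposed by the kernel DRM core */
        if (fd < 0) {
            perror("open /dev/dri/card0");
            return 1;
        }
        drmVersionPtr v = drmGetVersion(fd);       /* libdrm wrapper around a DRM ioctl */
        if (v) {
            printf("kernel driver: %s (%d.%d.%d)\n",
                   v->name, v->version_major, v->version_minor, v->version_patchlevel);
            drmFreeVersion(v);
        }
        close(fd);
        return 0;
    }

On Intel hardware this would typically report i915, i.e. the kernel driver the question asks about, while xserver-xorg-video-intel is the user-space driver that Xorg loads on top of it.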
I'm studying operating systems. I read that Windows has lots of system calls for managing windows and GUI components. I have also read that you can change the GUI manager of your Linux operating system. So does Linux have system calls for GUI management? How does the GUI work in Linux?
I'll take x86 as an example as I am more aware of x86 stuff than ARM stuff. Also, I may get some information wrong as I've been doing some research on this question while answering. Feel free to correct me if I am wrong.
System booting
Some time ago, Linux used to boot with a legacy bootloader (GRUB legacy). The GRUB bootloader would be loaded by the BIOS at 0x7c00 in RAM and would then read the kernel from the hard disk. It followed the multiboot specification, which describes the state the computer needs to be in before jumping to the kernel's entry point. The kernel would then launch a first process (init) of which every other process would be a child.
Today, most Linux distributions boot with UEFI (with legacy booting still available as an option). A UEFI application is placed on the EFI System Partition (ESP) of a GPT-partitioned disk. This EFI application is launched and then follows the Linux Boot Protocol to launch Linux. The init process has also been replaced by systemd, so Linux launches systemd as the first process on the computer. As stated in the manpage for systemd:
systemd is usually not invoked directly by the user, but is
installed as the /sbin/init symlink and started during early
boot.
The process that is started is thus /sbin/init, but it is a symlink to systemd. The systemd process then reads several configuration files on the hard disk called units. Some of these units are targets, which are simply units that group together several other units to read. At first, systemd reads default.target, which specifies several other units. Some of these other units start processes, among which is the display manager (fancy terminology for the login prompt). Currently, Ubuntu starts the GNOME Display Manager (GDM) as the first display program (the gdm.service unit). This program starts the X server before presenting the user login screen (https://en.wikipedia.org/wiki/X_display_manager).
When the display manager runs on the user's computer, it starts the X server before presenting the user the login screen, optionally repeating when the user logs out.
Once you are logged in, GDM starts several other binaries responsible for letting you interact with the system (the actual desktop, a binary to gather input for this desktop, etc.). All of these components depend on the X server to work properly.
The DRM
The X server is a user program which makes extensive use of the Direct Rendering Manager (DRM) of the Linux kernel. The DRM is a system call interface which is used to interact with graphics cards. When the DRM detects a graphics card, it exposes a file like /dev/dri/card0 which is a character device (http://manpages.ubuntu.com/manpages/bionic/man7/drm.7.html).
In earlier days, the kernel framework was solely used to provide raw hardware access to
privileged user-space processes which implement all the hardware abstraction layers. But
more and more tasks were moved into the kernel. All these interfaces are based on ioctl(2)
commands on the DRM character device. The libdrm library provides wrappers for these
system-calls and many helpers to simplify the API.
When a GPU is detected, the DRM system loads a driver for the detected hardware type. Each
connected GPU is then presented to user-space via a character-device that is usually
available as /dev/dri/card0 and can be accessed with open(2) and close(2). However, it
still depends on the graphics driver which interfaces are available on these devices. If an
interface is not available, the syscalls will fail with EINVAL.
The ioctl call allows any number of operations on the /dev/dri/card0 file, since it is a general-purpose call that takes a request argument (simply an unsigned long) and a variable number of further arguments (see https://man7.org/linux/man-pages/man2/ioctl.2.html).
The ioctl call thus allows hardware vendors (like NVIDIA, AMD, etc.) to provide drivers for their cards, with ioctl serving as the common interface between user mode and kernel mode.
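As a sketch of what this looks like without the libdrm wrappers (assuming the DRM UAPI header drm/drm.h is installed, which normally comes with libdrm or the kernel headers), a single ioctl request on the character device can query a driver capability:

    /* Minimal sketch: a raw ioctl on the DRM character device, no libdrm involved. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <drm/drm.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        struct drm_get_cap cap = { .capability = DRM_CAP_DUMB_BUFFER };
        /* The request argument (DRM_IOCTL_GET_CAP) selects the operation; as the
         * man page quoted above says, a driver that does not implement a given
         * interface will make the call fail with EINVAL. */
        if (ioctl(fd, DRM_IOCTL_GET_CAP, &cap) == 0)
            printf("dumb buffers supported: %llu\n", (unsigned long long)cap.value);
        else
            perror("DRM_IOCTL_GET_CAP");
        close(fd);
        return 0;
    }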
OpenGL
Several 3D rendering APIs are available (OpenGL, Direct3D). OpenGL is mostly a set of C headers and a convention. The convention says what a certain call should do; it is up to the hardware vendor to implement the convention for their own card. Mesa3D is an attempt to create an open-source implementation of OpenGL for certain graphics cards. It works quite well for integrated Intel HD Graphics (since the documentation is open) and AMD (since they cooperated and offered some insight into the workings of their cards), but less well for NVIDIA (the reverse-engineered Nouveau driver is often incomplete or slow).
When you program some OpenGL, you include the OpenGL headers and link against libraries provided by hardware vendors, which contain the definitions of the functions declared in the headers. These definitions make use of the DRM and cooperate with the X server to show content on the screen.
System calls (provided by the kernel) are often buried (in some cases deliberately undocumented and proprietary) and should not be used directly. Almost everything you see is actually ordinary functions in dynamically linked/shared libraries. This allows the kernel's system calls to be changed radically without breaking everything (because everything depends only on the shared libraries), and it reduces the functionality needed in the kernel itself.
For example, most of the "system calls for managing windows and GUI components" you think Windows has could (internally, inside the relevant DLL) just end up using a single "send_message()" system call (to tell a different process, the GUI, that you want to create a window, change its position, ...).
For Linux it's roughly similar. The kernel's system calls (which actually are documented, for no sane reason - it goes against the spirit of the SYS-V specs and means badly written "Linux executables" aren't compatible with other Unix clones like FreeBSD, Solaris or OSX) exist for things like low-level memory management, raw file IO and sockets; but (like on Windows) the kernel's system calls are buried under layers of shared libraries, and those shared libraries (e.g. Xlib, GLib, KWindowSystem, Qt, ...) just use "something" (file IO, pipes, sockets, ...) provided by the kernel to talk to another process (display server, GUI, ...).
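To make that concrete, here is a minimal sketch using Xlib (assuming an X server is running and libX11 is installed; build with something like cc win.c -lX11): "creating a window" is an ordinary library call that ends up as a protocol request written to a socket, not a dedicated GUI system call:

    /* Minimal sketch: the "GUI calls" are library functions that send protocol
     * requests to the display server over a socket; no special GUI syscall exists. */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);      /* connect to the X server (a socket under the hood) */
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }
        Window w = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                       0, 0, 200, 100, 1,
                                       BlackPixel(dpy, DefaultScreen(dpy)),
                                       WhitePixel(dpy, DefaultScreen(dpy)));
        XMapWindow(dpy, w);                     /* another protocol request, not a syscall */
        XFlush(dpy);                            /* the buffered requests are written to the socket here */
        /* ... an event loop would normally go here ... */
        XCloseDisplay(dpy);
        return 0;
    }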
Linux and Windows fall into separate categories: Linux is just a kernel, i.e. the piece under the hood that gives us the basic functionality we expect in order to run programs, like threads, memory and process management, etc. Windows is a full operating system, including the user-facing components and numerous system libraries. A more apt comparison would be between a specific Linux distro and Windows.
On that note, distros, as independent operating systems, obviously can have different implementations of any OS component. Some distros, like Arch, don't come with a GUI by default at all. That said, essentially the entire Linux ecosystem uses Xorg and/or Wayland; I would recommend looking into the implementation details of those two.
A Linux GUI has quite a few differences compared to a Windows GUI. For example, the GUI is not considered part of the operating system, but rather an external component; that means there are no GUI syscalls (the GUI is not embedded in the OS at all). After all, as the previous answer says, Linux is a kernel, which means it only provides something really basic (execution of programs, memory/thread management, process management, but not much more). Whatever comes next (a GUI, for example) is an added feature installed through packages.
This allows, for example, installing a GUI on top of a minimal installation of any Linux distro (CentOS, for example), and that GUI can be the one you want (Gnome, KDE...).
I may have a couple of incorrect assumptions about the Linux system, and for that I apologize.
I have been educating myself on the Android and Linux systems for a while now, and I started looking into installing a custom boot loader and Linux system onto an older Samsung tablet. Immediately upon looking into the feasibility of this, most of the answers I could find said it wasn't possible, because you would need the drivers used by the Android kernel to communicate with the OEM hardware in whatever Linux kernel you are installing.
I have one of these tablets rooted and I believe I may have found the drivers I need (not sure on that yet), and so I guess my question is, is it possible to take the drivers and put them into a Linux kernel within a distribution install image and install Linux on the device (using also a custom boot loader)?
I presume that if no one has done this before there is a pretty good reason why, but I am basically looking to use Linux on my old tablet without resources being taken by Android; and personally, in my opinion, if I don't need Android and can install Linux straight onto the machine, then why keep it?
In the long run I am looking into LFS to create a custom distribution that can be installed on these tablets, but the most important question to me right now is: if I do create this distribution, can I get the drivers that the hardware needs (and even then, will my kernel be able to use them)?
I also understand that some of these drivers may be proprietary drivers provided by the manufacturer, but I am not looking to profit off of this but instead research the feasibility of a better personal on-the-go computing setup.
I may be terribly wrong on how I may have described some things, so here are some of my assumptions:
The .ko files in the Android /lib/modules/ directory are the static kernel drivers I am looking for on that device.
The drivers aren't written specifically for the Android system, but for all Linux variants, and would be compatible with another distribution.
If the drivers were written for the Android system, then one would be able to edit or modify those drivers to work with a different distribution.
One could "put the drivers into an installation image", or if not, then one would have to compile the kernel from source with those static drivers.
TL;DR: if this all just amounts to rambling, here are my specific questions:
Is it possible to copy the static kernel drivers of a rooted Android device to something like the SDcard?
Is it then possible to "put" or "compile" those same static drivers into a Linux distribution before installing it onto said tablet using something like Odin, or the like?
I want to start some Linux development for my research: writing a few simple scheduling algorithms and testing them. I have a few questions:
1) How do you develop for the Linux kernel? An IDE? How do you import the kernel files and see how they are related or connected?
2) Once you write your code, how do you simulate/debug it? I mean, one can't just build the kernel for 20-30 minutes, make a new image and change boot.ini each time. That is a lengthy process, plus you can't simulate or debug, just observe whether it works or not.
3) Is there a guide for getting started with Linux kernel development? I find the lack of documentation surprising.
I am developing for ARM-based boards
Excuse my ignorance.
Thanks
How do you develop for the Linux kernel?
There are many components in the Linux kernel. Typically, the kernel is divided into core and driver parts.
The core includes scheduling, the MMU, memory management, process management, etc.
The drivers include file systems, networking, peripheral device drivers, USB, etc.
An IDE is not a must for developing kernel code; for kernel veterans, Vim or nano is also fine. The development environment is up to you. If you are new to the kernel code and want to build up a view of the function relationships, some tools can be helpful:
Source Insight (Commercial)
vim + ctags (http://vim.wikia.com/wiki/Single_tags_file_for_a_source_tree)
How to debug it?
There are many Linux flavors/distributions. You can use a software emulator or hardware boards to debug the kernel. Android is based on Linux, and there are many mobile phones and development boards that support Android. (iOS, by contrast, is derived from Darwin/BSD rather than Linux and has its own debugging methods.)
Where to find the kernel documents?
For the kernel part, there are many README articles in the kernel source tree, e.g. http://lxr.free-electrons.com/source/Documentation/debugging-via-ohci1394.txt
printk is powerful enough for newbies.
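For instance, a minimal out-of-tree module sketch (module and file names here are made up; it assumes the usual obj-m Makefile against your kernel's build directory) that only logs with printk:

    /* hello.c - minimal "hello world" module using printk for debug output. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: module loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");

The messages show up in the kernel log (dmesg), which is usually the quickest feedback loop when you start hacking on kernel code.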
For the ARM part, there are many articles on infocenter.arm.com
Debugging Linux kernels using DS-5
http://infocenter.arm.com/help/topic/com.arm.doc.den0024a/ch18s03s03.html?resultof=%22%6b%65%72%6e%65%6c%22%20
I have experience writing a C program and burning the program into a chip using an IDE provided by the chip manufacturer.
I also heard that there is a concept called SoC, which means an operating system, like Linux, is running on a chip. In this case, I can run my program on the chip just like on a Linux PC.
I don't really know the differences between these two kinds of chips. Are they the same? Can I install Linux on every chip?
And I have to use a chip called Renesas V850 in my work. Which kind of chip is this V850?
SoC is just a marketing term for 'more than a processor on a chip'. It doesn't imply Linux or any operating system.
Years ago, each part of a system was on its own chip: processor, serial port, memory, ADC, DAC, etc. You had a PCB and a schematic that tied them all together.
Over time, more and more got integrated into the processor, particularly for application-specific processors and microcontrollers. Today, pretty much only big-iron processors like Intel and AMD flagship parts are stand-alone, and even then there are some x86 chips produced that are 'SoCs' (like the AMD Geode line, if that's still around). Everything else has USB ports, serial ports, ADCs, DACs, even wireless radios integrated into the same die.
As for 'what is a Renesas V850?', you'd do better to google that and read the product documentation. It isn't an ARM or MIPS core, and it doesn't appear to support the mainline Linux kernel, only μClinux.
The Renesas V850 Wikipedia page states that Linux kernel support for the V850 has been absent since version 2.6.27 (which was released in 2008).
Typically, you need to know what group your chip belongs to and read more about it on the Renesas website. They provide all the documentation you may need. There is also a section with application notes and sample code that may help.
I have a fairly large PCIe driver written on/for Linux, and now I need to port it to FreeBSD. I don't yet know the BSD version, but I think at this point it's irrelevant, as I'd like to understand in general what major items will have to be modified during the porting effort.
The good thing is that the driver is partitioned into an OS-independent "library" layer (OSI) and an OS-dependent one, so it already has a "framework" permitting it to be ported to other OSes, and I hope most of the effort will be focused on the OSI side. So far I see the following big chunks of work:
1) init code, i.e. the OS-specific code that "plugs" the driver into the system (similar to what init_module, cleanup_module do in Linux)
2) code registering the driver in the PCI core subsystem of the kernel
3) character driver registration code
4) DMA operations
What else should I be paying attention to? This driver is for a device doing hardware encryption, so it is an offload device (ingress packets from the NIC enter the system normally and are then diverted to the device).
If there are useful web links to descriptions of BSD driver development/porting (similar to LDD), I'd happily accept them :)
In 2011, Jeff Roberson (and later Mellanox) added some shims to ease porting Linux drivers, which let most of the code be used as-is, when he ported the Linux InfiniBand drivers to FreeBSD. So, if I were a newcomer from the Linux driver development world, I'd start by looking at:
https://svnweb.freebsd.org/base/head/sys/ofed/include/linux/
There you will find implementations of many of the required Linux driver APIs in terms of their native FreeBSD counterparts.
There is another quickstart document by John-Mark, here, helpful for those who are already familiar with driver writing.
If you would prefer starting from the beginning, I think the FreeBSD Architecture Handbook would be a useful starting point.
Additionally, there is a book by Kirk McKusick, Robert Watson and George Neville-Neil, titled "The Design and Implementation of the FreeBSD Operating System"; the latest version at this time is the 2nd edition, and chapter 8 covers device drivers in detail.
Most device drivers are merely wrappers around hardware operations to fit OS interfaces, so a well-layered driver should be relatively easy to port nowadays.
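For the OS-specific "plug into the system" part listed in the question (the FreeBSD counterpart of init_module/cleanup_module), the entry point is a module event handler. A minimal sketch (names are illustrative, not from any real driver):

    /* Minimal FreeBSD kernel module skeleton: load/unload handling only. */
    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/kernel.h>
    #include <sys/module.h>
    #include <sys/systm.h>

    static int
    example_modevent(module_t mod, int event, void *arg)
    {
        switch (event) {
        case MOD_LOAD:                  /* roughly Linux's init_module() */
            printf("example: loaded\n");
            return (0);
        case MOD_UNLOAD:                /* roughly Linux's cleanup_module() */
            printf("example: unloaded\n");
            return (0);
        default:
            return (EOPNOTSUPP);
        }
    }

    static moduledata_t example_mod = {
        "example",                      /* module name */
        example_modevent,               /* event handler */
        NULL                            /* extra data */
    };

    DECLARE_MODULE(example, example_mod, SI_SUB_DRIVERS, SI_ORDER_MIDDLE);

PCI attachment and character-device creation would then build on top of this, via the newbus probe/attach machinery (DRIVER_MODULE) and make_dev(9) respectively, which correspond to items 2) and 3) of your list.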
If you have questions, or you are a hardware vendor, you can also join the various FreeBSD mailing lists (freebsd-drivers#, etc.).