QThread::start(priority) vs Linux

I am using some QThread-based worker threads in KDE Neon 18.04 (based on Ubuntu 18.04, kernel 4.15.0-46-generic). The worker threads interfered with my desktop applications, so I decided to reduce their priority.
The Qt documentation of QThread::start(priority) says:
The effect of the priority parameter is dependent on the operating
system's scheduling policy. In particular, the priority will be
ignored on systems that do not support thread priorities (such as on
Linux, see http://linux.die.net/man/2/sched_setscheduler for more
details).
After reading the above documentation I expected priorities would have no effect on my Linux system. Still I gave it a try. And guess what - it worked perfectly.
So, why does the Qt documentation state there would be no thread priorities on Linux? And why does it work anyway?

Depending on which flavour of Linux/Unix/*nix you use, the scheduler may or may not support it. As far as I'm aware, the majority of Qt's priority levels are supported on most Linux systems now, but not all of them. I suspect the documentation says it's unsupported so that it doesn't need to list every combination of OS variant and scheduler variant that does support priority levels, and which levels are supported.
You can verify that the thread was created with the correct priority by using htop or top and processing the output with awk: https://unix.stackexchange.com/questions/19301/what-is-a-command-to-find-priority-of-process-in-linux
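For reference, here is a minimal sketch of what this looks like. The Worker class and its run() body are hypothetical; start() and QThread::LowestPriority are the standard QThread API:

    #include <QCoreApplication>
    #include <QThread>
    #include <QDebug>

    // Hypothetical worker; run() is the code that executes in the new thread.
    class Worker : public QThread {
    protected:
        void run() override {
            qDebug() << "worker running, priority =" << int(priority());
            // ... heavy background work here ...
        }
    };

    int main(int argc, char *argv[]) {
        QCoreApplication app(argc, argv);
        Worker worker;
        worker.start(QThread::LowestPriority); // request reduced priority
        worker.wait();                         // block until run() returns
        return 0;
    }

While run() executes, the top/htop recipe from the linked question can be used to inspect the priority the thread actually got.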

Linux's system calls for GUI?

I'm studying Operating Systems. I read that Windows has lots of system calls for managing windows and GUI components. I have also read that you can change the GUI manager of your Linux operating system. So does Linux have system calls for GUI management? How does the GUI work in Linux?
I'll take x86 as an example as I am more aware of x86 stuff than ARM stuff. Also, I may get some information wrong as I've been doing some research on this question while answering. Feel free to correct me if I am wrong.
System booting
Some time ago, Linux used to boot with a legacy bootloader (GRUB legacy version). The GRUB bootloader would be started by the BIOS at 0x7c00 in RAM and would then read the kernel from the hard disk. It followed the multiboot specification, which defines the state the computer needs to be in before jumping to the kernel's entry point. The kernel would then launch a first process (init) of which every other process would be a child.
Today, most Linux distributions boot with UEFI (with the option of legacy booting also available). A UEFI app is placed on the boot partition, which is partitioned as a GPT ESP (EFI System Partition). This EFI app is launched and then follows the Linux Boot Protocol to launch Linux. The init process has also been replaced by systemd, so Linux launches systemd as the first process on the computer. Actually, as stated in the manpage for systemd:
systemd is usually not invoked directly by the user, but is
installed as the /sbin/init symlink and started during early
boot.
The process that is started is thus /sbin/init, but it is a symlink to systemd. The systemd process then reads several configuration files on the hard disk called units. Many of these units are targets, which are simply units that specify several other units to read. At first systemd reads default.target, which specifies several other units. Some of these units start processes, among which is the display manager (fancy terminology for the login prompt). Recent Ubuntu releases start the GNOME Display Manager (GDM) as the first displaying program (the gdm.service unit). This program starts the X server before presenting the user login screen (https://en.wikipedia.org/wiki/X_display_manager).
When the display manager runs on the user's computer, it starts the X server before presenting the user the login screen, optionally repeating when the user logs out.
Once you are logged in, GDM starts several other binaries responsible for letting you interact with the system (the actual desktop, a binary to gather input for this desktop, etc.). All of these components depend on the X server to work properly.
The DRM
The X server is a user program which makes extensive use of the Direct Rendering Manager (DRM) of the Linux kernel. The DRM is a system call interface which is used to interact with graphics cards. When the DRM detects a graphics card, it exposes a file like /dev/dri/card0 which is a character device (http://manpages.ubuntu.com/manpages/bionic/man7/drm.7.html).
In earlier days, the kernel framework was solely used to provide raw hardware access to privileged user-space processes which implement all the hardware abstraction layers. But more and more tasks were moved into the kernel. All these interfaces are based on ioctl(2) commands on the DRM character device. The libdrm library provides wrappers for these system-calls and many helpers to simplify the API.
When a GPU is detected, the DRM system loads a driver for the detected hardware type. Each connected GPU is then presented to user-space via a character-device that is usually available as /dev/dri/card0 and can be accessed with open(2) and close(2). However, it still depends on the graphics driver which interfaces are available on these devices. If an interface is not available, the syscalls will fail with EINVAL.
The ioctl call allows any number of operations on the /dev/dri/card0 file, since it is a general-purpose call which takes a request argument (simply an unsigned long) and a variable number of arguments (see https://man7.org/linux/man-pages/man2/ioctl.2.html).
The ioctl call thus allows hardware vendors (like NVIDIA, AMD, etc.) to provide drivers for their cards, with ioctl serving as the general interface between user mode and kernel mode.
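As a concrete illustration, here is a minimal sketch that opens the DRM character device and queries the driver name via the DRM_IOCTL_VERSION request. It assumes the libdrm development headers are installed (the include path varies by distribution) and that your user may read /dev/dri/card0:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <cstdio>
    #include <drm/drm.h>   // may be <libdrm/drm.h> depending on the distribution

    int main() {
        // The character device exposed by the DRM when a GPU is detected.
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

        char name[64] = {0};
        drm_version_t version{};
        version.name = name;                 // buffer for the driver name
        version.name_len = sizeof(name) - 1;

        // One general-purpose entry point, specialized by the request argument.
        if (ioctl(fd, DRM_IOCTL_VERSION, &version) == 0)
            std::printf("driver: %s (%d.%d.%d)\n", name,
                        version.version_major, version.version_minor,
                        version.version_patchlevel);
        else
            perror("DRM_IOCTL_VERSION");

        close(fd);
        return 0;
    }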
OpenGL
Several 3D rendering APIs are available (OpenGL, Direct3D). OpenGL is mostly a set of C headers and a convention. The convention says what a certain call should do; it is up to the hardware vendor to implement the convention for their own card. Mesa3D has been an attempt to create an open-source implementation of OpenGL for certain graphics cards. It worked quite well for integrated Intel HD Graphics (since the documentation is open) and AMD (since they cooperated and offered some insight into the workings of their cards), but not for NVIDIA (the Nouveau driver is mostly not working, or slow).
When you program with OpenGL, you include the OpenGL headers and link against libraries, provided by the hardware vendors, that contain the definitions of the functions in the headers. These definitions make use of the DRM and cooperate with the X server to show content on the screen.
System calls (provided by the kernel) are often buried (in some cases deliberately undocumented and proprietary) and should not be used directly. Almost everything you see is actually normal functions in dynamically linked/shared libraries. This allows the kernel's system calls to be radically changed without breaking everything (because everything depends only on the shared libraries), and it reduces the functionality needed in the kernel itself.
For an example: most of the "system calls for managing windows and GUI components" you think Windows has could (internally, inside the relevant DLL) just end up using a single "send_message()" system call (to tell a different process, the GUI, that you want to create a window, or change its position, or ...).
For Linux it's roughly similar. The kernel's system calls exist to provide things like low-level memory management, raw file IO, and sockets (and they actually are documented, for no sane reason: it goes against the spirit of the SYS-V specs and means badly written "Linux executables" aren't compatible with other Unix clones like FreeBSD, Solaris, or OSX). But, like on Windows, the kernel's system calls are buried under layers of shared libraries, and those shared libraries (e.g. Xlib, GLib, KWindowSystem, Qt, ...) just use "something" (file IO, pipes, sockets, ...) provided by the kernel to talk to another process (the display server, GUI, ...).
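To make that layering concrete, here is a small Linux-specific sketch (using only calls that exist on glibc) writing the same text at three levels: buffered C library, the thin libc wrapper, and the raw system call both ultimately sit on:

    #include <cstdio>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main() {
        // High-level: buffered stdio from the C library (a shared-library call).
        std::printf("via printf\n");
        std::fflush(stdout);

        // Lower level: libc's thin wrapper around the write(2) system call.
        write(STDOUT_FILENO, "via write(2)\n", 13);

        // Lowest level: invoking the kernel's syscall number directly.
        // (Linux-specific; this is exactly the detail libraries normally hide.)
        syscall(SYS_write, STDOUT_FILENO, "via syscall(2)\n", 15);
        return 0;
    }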
Linux and Windows fall under separate categories; Linux is just a kernel, i.e. the piece under the hood that gives us the basic functionality we expect in order to run programs: threads, memory and process management, etc. Windows is a full operating system, including the user-facing components and numerous system libraries. A more apt comparison would be between a specific Linux distro and Windows.
On that note, distros, as independent operating systems, obviously can have different implementations of any OS component. Some distros, like Arch, don't come with a GUI by default at all. That said, essentially the entire Linux ecosystem uses Xorg and/or Wayland; I would recommend looking into the implementation details of those two.
A Linux GUI has quite a few differences compared to a Windows GUI. For example, the GUI is not considered to be a part of the operating system, but rather an external addition; that means no dedicated syscalls (it is not embedded in the OS whatsoever). After all, as the previous answer says, Linux is a kernel, which means it only provides something really basic (execution of programs, memory/thread management, process management, but not really much more). Whatever comes next (a GUI, for example) is an added feature, installed via packages.
This allows, for example, installing a GUI on top of a minimal installation of any Linux distro (CentOS, for example), and that GUI can be the one you want (GNOME, KDE...).

What Operating System services are necessary to support kernel-level threads?

I am studying Solaris and Linux and am viewing Kernel Level Threads (KLTs) as the fundamental entity that can be scheduled and dispatched by the OS. I know that a multi-threaded OS must store thread execution context and provide mechanisms to schedule and dispatch KLTs, and that kernel level threads handle interrupts, system calls, and provide an interface to the CPU as a resource at the user-kernel interface. I am not clear on what services are necessary to support KLTs in a multi-threaded OS.
I cannot determine whether there is a core kernel process that is necessary to support all KLTs, or whether KLTs run interdependently as the base level of computing. I would like to understand what minimum set of operations (resource allocation, scheduling) is necessary to support an OS with KLTs.
I have looked at Tanenbaum's discussion of threads in his distributed systems book, at Understanding the Linux Kernel, and at Multithreading the SunOS Kernel, but I cannot find an answer to my question.
I believe that answering the question -- What Operating System services are necessary to support kernel-level threads? -- will help me understand how KLTs are implemented.

Linux and RTOS using SoC (ARM, Xilinx)

I am facing a design issue. I have a board with a Xilinx Zynq SoC including a dual-core ARM Cortex-A9, and I need to develop a real-time control application (with hard deadlines on response time) alongside an application doing heavy processing (image processing, etc.), with some basic communication between them. Most importantly, I need to be able to control the Linux part (at least to somehow suspend or pause it, and in the best case to shut it down and then run it again). So I was wondering how to combine all this.
One of the options could be RTLinux, which, at least according to the descriptions I found, offers the possibility to run a real-time kernel with the Linux kernel next to it as a thread; but it seems that it is now proprietary, owned by Wind River.
Then I stumbled upon MicroBlaze, where it would be possible to create a soft processor in the programmable logic, but I am not sure whether I can run an RTOS and Linux side by side there (e.g., Linux on the ARM cores and an RTOS on the MicroBlaze)?
There are two things that seem to be known as RTLinux. The one you mention, a Wind River revival of the MERT system, is a proprietary product of that company. The other, usually written "RT Linux", is a real-time patch to the mainline kernel which provides deterministic scheduling and fine-grained kernel preemption.
I think it is the latter one that you want. Ten seconds of googling indicates that there is a Kconfig target for this SoC, so all the pieces you need should be there.
Do remember there is more to a real time system than just the ability to be real time; the subsystems also have to be well behaved.
Given your description, you have (at least) the following design options:
1. Dual-kernel approach: this means patching the Linux kernel with a (quite invasive) patch that runs a tiny real-time kernel alongside the standard kernel. This approach allows reaching good real-time performance (even on the order of microseconds) at the cost of complexity. It was implemented by the RTLinux project (acquired and then discontinued by Wind River), then by RTAI (mostly focusing on x86) and Xenomai. If you go down this path, check whether Xenomai supports your specific SoC; then patch, configure and rebuild the kernel; and finally write the real-time code following Xenomai's API.
2. Improving the responsiveness of the standard Linux kernel: this is what the PREEMPT_RT project aims at. The real-time performance is lower than with the previous approach, but you don't have to write real-time-specific code. With this approach, you can patch and build the kernel, then see if the real-time performance is sufficient for your needs (see the sketch after this list).
3. Synthesizing a MicroBlaze soft-core on the FPGA, then running Linux on the ARM cores and the real-time code (either bare-metal or with an RTOS) on the MicroBlaze.
Unfortunately, your specific SoC does not support ARM's virtualization extensions. Otherwise there would be the additional option of a multi-OS approach: running the Linux OS on one ARM core and the real-time code (either bare-metal or with an RTOS like ERIKA Enterprise) on the other ARM core, through a hypervisor like Jailhouse or Xen.
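For option 2, here is a minimal sketch of how a periodic real-time task is typically written with the standard Linux API, assuming a PREEMPT_RT (or at least PREEMPT) kernel and sufficient privileges; the priority value 80 and the 1 ms period are arbitrary illustrations:

    #include <sched.h>
    #include <sys/mman.h>
    #include <ctime>
    #include <cstdio>

    int main() {
        // Lock all pages into RAM so the real-time loop never page-faults.
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) perror("mlockall");

        // Request FIFO real-time scheduling (needs root or CAP_SYS_NICE).
        sched_param sp{};
        sp.sched_priority = 80;  // arbitrary; query the valid range with
                                 // sched_get_priority_min/max(SCHED_FIFO)
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) perror("sched_setscheduler");

        // Periodic loop with absolute deadlines to avoid drift.
        timespec next{};
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < 1000; ++i) {
            next.tv_nsec += 1000000;  // 1 ms period
            if (next.tv_nsec >= 1000000000) { next.tv_nsec -= 1000000000; ++next.tv_sec; }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
            // ... time-critical control work goes here ...
        }
        return 0;
    }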

Multithreaded applications on different CPUs

Suppose, for example, there is an embedded application which runs on a single-core CPU, and that application is then ported to a multi-core CPU. Would that app run on a single core or on multiple cores?
To be more specific, I am interested in ARM CPUs (but not only) and in toolchain specifics, e.g. the standard C/C++ libraries.
The intention of this question is this: is it the CPU's responsibility to "decide" to execute on multiple cores, or is it that of the compiler toolchain, the developer, and the standard platform-specific libraries? And again, I am also interested in the tendencies of other systems out there.
There are plenty of applications and operating systems (for example Linux) that run on different CPUs with the same architecture, so does that mean that they are compiled differently?
Generally speaking, single-threaded code will always run on one core. To take advantage of multiple cores you need to have either multiple processes, multiple threads, or both.
There's nothing your compiler can do to help you here. This is an architectural consideration.
If you have multiple threads, for example, most multi-core systems will run them on whatever cores are available if the operating system you're running is properly compiled to support that. Running an OS that's been compiled single-core only will obviously limit your options here.
A single-threaded program will run in one thread. It is theoretically possible for the thread to be scheduled to move to a different core, but the scheduler cannot turn a single thread into multiple threads or give you any parallel processing.
EDIT
I misunderstood your question. If there are multiple threads in the application, and that application is binary compatible with the new multicore CPU, the threads will indeed be scheduled to run on different CPUs, if the OS scheduler deems it appropriate.
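To illustrate the point of both answers, here is a small sketch using standard C++11 threads (the function names are illustrative): the work is explicitly split into threads, which is the only form in which an SMP scheduler can spread it across cores:

    #include <thread>
    #include <vector>
    #include <cstdio>

    void work(int id) {
        // Each thread is an independently schedulable entity; on an SMP kernel
        // the scheduler is free to place it on any available core.
        std::printf("thread %d running\n", id);
    }

    int main() {
        unsigned n = std::thread::hardware_concurrency();  // hint: number of cores
        if (n == 0) n = 2;  // the hint may be unavailable; fall back to a guess
        std::printf("hardware_concurrency = %u\n", n);

        std::vector<std::thread> pool;
        for (unsigned i = 0; i < n; ++i) pool.emplace_back(work, int(i));
        for (auto& t : pool) t.join();
        return 0;
    }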
Well, it all depends on whether the software wants to utilize the other cores or not (if present). Let's take the example of Linux on ARM's Cortex-A53.
Initially, a vendor-provided bootloader runs: the FSBL (first-stage bootloader). It then passes control to ARM Trusted Firmware (ATF). ATF then runs U-Boot. All of these run on a single core. Then U-Boot loads the Linux kernel and passes control to it. Linux initializes some things and looks at some options, first the smp or nosmp flags in the bootargs. If smp, it gets the number of CPUs assigned to it from the DTB, then uses SMC calls to ATF to start the other cores, and then assigns work to those cores to provide a true multiprocessing environment. This is normally called load balancing, and in Linux it is mostly done in the fair.c file.

Modify Timer Interrupt in Linux

At college I'm studying Operating Systems, and as the first part of the project we have to modify the timer interrupt to execute our own code, possibly with threads. I think Linux presents fewer restrictions on accessing the interrupt vector than Windows does, doesn't it?
Can you give me more details on whether it's better to use Windows or Linux (like Ubuntu) to do this?
Thanks.
I would use Linux, because I think you might fail your assignment if you use Windows. The reason being that the commonly accessible timers (i.e. non-driver stuff) under Windows are not really interrupts; they're messages posted to your thread's message queue.
Whereas under Linux, signal/sigaction in combination with timer_create will send a signal, which really does count as an "interrupt".
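As a rough sketch of that approach, using only standard POSIX calls (on older glibc you may need to link with -lrt; the 100 ms period is just an illustration):

    #include <csignal>
    #include <cstdio>
    #include <ctime>
    #include <unistd.h>

    static volatile sig_atomic_t ticks = 0;

    // Async-signal-safe handler: only touches a sig_atomic_t counter.
    void handler(int) { ++ticks; }

    int main() {
        // Install the handler before arming the timer.
        struct sigaction sa{};
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGRTMIN, &sa, nullptr);

        // Ask the kernel to deliver SIGRTMIN on each timer expiry.
        sigevent sev{};
        sev.sigev_notify = SIGEV_SIGNAL;
        sev.sigev_signo = SIGRTMIN;

        timer_t timerid;
        if (timer_create(CLOCK_MONOTONIC, &sev, &timerid) != 0) {
            perror("timer_create");
            return 1;
        }

        itimerspec its{};
        its.it_value.tv_nsec = 100000000;    // first expiry after 100 ms
        its.it_interval.tv_nsec = 100000000; // then every 100 ms
        timer_settime(timerid, 0, &its, nullptr);

        while (ticks < 10) pause();  // wait for ten "timer interrupts"
        std::printf("got %d timer signals\n", (int)ticks);
        timer_delete(timerid);
        return 0;
    }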
