Efficiency of monolithic kernels - Linux

I know that a monolithic kernel runs all services within itself. I searched the internet to find out why every article says it's efficient, but I couldn't find the reason. Why does it tend to be more efficient than other kernel types?

Before we start, let us summarize what a kernel is.
In computing, a kernel, a fundamental part of modern operating systems (OSs), is a program that allocates all of your system's resources and manages input/output (I/O) requests, acting as an interface between the software and the hardware.
The kernel performs its tasks in kernel space, whereas every other task is done in user space. This separation ensures that kernel data and user data do not interfere with each other, hence maintaining stability.
Kernel types:
A monolithic kernel has all of its components running in kernel space. This approach was designed to provide robust performance.
A microkernel has only its crucial parts running in kernel space, while the rest runs in user space. It was designed to be more modular and robust, making maintenance easier.
A hybrid kernel attempts to combine aspects of both monolithic and microkernels. It has a structure similar to that of a microkernel, but implemented in the manner of a monolithic kernel.
Now coming to the main topic: why is a monolithic kernel more efficient?
Because of their design, monolithic kernels have better performance: all services run in the same address space, so components communicate through direct function calls rather than message passing between separate processes, and fewer context switches are needed. That is what gives them rich and more powerful hardware access. Nowadays they also consist of dynamically loaded and unloaded modules, which provide the benefits of modularity.
Hope you liked my answer and found it useful
=)

Microkernels offer the bare essentials to get a system operating. Microkernel systems have small kernelspaces and large userspaces.
Monolithic kernels, however, contain much more. Monolithic systems have large kernelspaces. For instance, one difference is the placement of device drivers. Monolithic kernels contain drivers (modules) and place them in kernelspace, while microkernels lack drivers. In such systems, the device drivers are provided in another way and placed in the userspace. This means microkernel systems still have drivers, but they are not part of the kernel; the drivers exist in another part of the operating system.
Also, in the modern-day approach to monolithic architecture, the kernel consists of different modules which can be dynamically loaded and unloaded. This modular approach allows easy extension of the OS's capabilities. With this approach, kernel maintainability becomes much easier, since only the concerned module needs to be unloaded and reloaded whenever there is a change or bug fix in a particular module. So there is no need to bring down and recompile the whole kernel for the smallest change. A minimal module is sketched below.
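As an illustration, here is a minimal sketch of a loadable Linux kernel module (the names hello_init and hello_exit are arbitrary). Once built against the kernel headers, it can be inserted with insmod and removed with rmmod, without rebooting or recompiling the kernel:

    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal example of a dynamically loadable module");

    static int __init hello_init(void)
    {
        pr_info("hello: module loaded\n");
        return 0;   /* 0 = success; the module is now part of the kernel */
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

Note that once loaded, this code runs in kernel space with full privileges, which is exactly the monolithic trade-off discussed above.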
So, with these modern-day changes, monolithic kernels have overcome their earlier limitations and keep evolving. Hence, everybody seems to praise them for their improved efficiency!


Difference between user-space driver and kernel driver

I have been reading "Linux Device Drivers" by Jonathan Corbet. I have a few questions:
What are the main differences between a user-space driver and a kernel driver?
What are the limitations of both of them?
Why are user-space drivers commonly used and preferred over kernel drivers nowadays?
What are the main differences between a user-space driver and a kernel driver?
User space drivers run in user space. Kernel drivers run in kernel space.
What are the limitations of both of them?
The kernel driver can do anything the kernel can, so you could say it has no limitations. But kernel drivers are much harder to "prove correct" and debug. It's all too easy to introduce race conditions, or use a kernel function in the wrong context or with the wrong locking. Things will appear to work for a while, but cause problems (including crashing the whole system) down the road. Drivers must also be wary when reading all input (both from the device and from userspace), because invalid data can sometimes cause crashes.
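To make the "wrong context or wrong locking" point concrete, here is a hedged sketch, not taken from any real driver (my_irq, dev_lock, and dev_count are made-up names), of the kind of locking discipline kernel code must get right:

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(dev_lock);
    static unsigned long dev_count;

    /* Interrupt context: interrupts are already off on this CPU,
       so the plain lock is enough here. */
    static irqreturn_t my_irq(int irq, void *data)
    {
        spin_lock(&dev_lock);
        dev_count++;
        spin_unlock(&dev_lock);
        return IRQ_HANDLED;
    }

    /* Process context: must disable interrupts while holding the lock,
       otherwise my_irq() can fire on this CPU and deadlock on dev_lock. */
    static unsigned long read_count(void)
    {
        unsigned long flags, n;

        spin_lock_irqsave(&dev_lock, flags);
        n = dev_count;
        spin_unlock_irqrestore(&dev_lock, flags);
        return n;
    }

Use the plain spin_lock() in read_count() instead and everything will appear to work, right up until an interrupt arrives while the lock is held.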
A user-space driver usually needs a small shim in the kernel to do its bidding. Usually, that 'shim' provides a simpler API. For example, the FUSE layer lets people write file systems in any language. They can be mounted, read/written, then unmounted. The shim must also protect the kernel against all invalid input.
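As an illustration of the shim idea, here is a hedged sketch of a tiny read-only FUSE filesystem in C, using the libfuse 2.x API (names like hello_ops are arbitrary). The FUSE kernel module is the shim; everything below runs as an ordinary user process:

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/stat.h>

    static const char *greeting = "hello from userspace\n";

    /* Report a root directory containing a single file, /hello. */
    static int hello_getattr(const char *path, struct stat *st)
    {
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
        } else if (strcmp(path, "/hello") == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = (off_t)strlen(greeting);
        } else {
            return -ENOENT;
        }
        return 0;
    }

    static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                             off_t off, struct fuse_file_info *fi)
    {
        if (strcmp(path, "/") != 0)
            return -ENOENT;
        filler(buf, ".", NULL, 0);
        filler(buf, "..", NULL, 0);
        filler(buf, "hello", NULL, 0);
        return 0;
    }

    static int hello_read(const char *path, char *buf, size_t size, off_t off,
                          struct fuse_file_info *fi)
    {
        size_t len = strlen(greeting);
        if (strcmp(path, "/hello") != 0)
            return -ENOENT;
        if ((size_t)off >= len)
            return 0;
        if (off + size > len)
            size = len - (size_t)off;
        memcpy(buf, greeting + off, size);
        return (int)size;
    }

    static struct fuse_operations hello_ops = {
        .getattr = hello_getattr,
        .readdir = hello_readdir,
        .read    = hello_read,
    };

    int main(int argc, char *argv[])
    {
        /* fuse_main() mounts the filesystem at the path given in argv. */
        return fuse_main(argc, argv, &hello_ops, NULL);
    }

Compiled with something like gcc hellofs.c $(pkg-config --cflags --libs fuse) -o hellofs, it can be mounted on an empty directory and unmounted with fusermount -u. A bug here kills only this process, never the kernel.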
User-space drivers have lots of limitations. For example, the kernel reserves some memory for use during emergencies, but that is not available to user-space. Under memory pressure, the kernel will kill user-space programs, but never kill kernel threads. User-space programs may be swapped out, which could lead to your device being unavailable for several seconds. (Kernel code cannot be swapped out.) Running code in user-space requires several context switches. These waste a "lot" of CPU time. If your device is a 300 baud modem, nobody will notice. But if it's a gigabit Ethernet card, and every packet has to go through your userspace driver before it gets to the real user, the system will have major bottlenecks.
User-space programs are also "harder" to use because you have to install that user-space software, which often has many library dependencies. Kernel modules "just work".
Why are user-space drivers commonly used and preferred over kernel drivers nowadays?
The question is "Does this complexity really need to be in the kernel?"
I used to work for a company that made USB dongles that talked a particular protocol. We could have written a full kernel driver, but instead just wrote our program on top of libUSB.
The advantages: the program was portable between Linux, Mac, and Windows. No worrying about our code vs. the GPL.
The disadvantages: if the device needed to send data to the PC and get a response quickly, there was no guarantee that would happen. For example, if we needed a real-time control loop on the PC, it would be harder to have bounded response times. (Maybe not entirely impossible on Linux.)
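For a flavor of what that approach looks like, here is a hedged sketch using the libusb-1.0 API (the vendor/product IDs and the endpoint address are hypothetical placeholders):

    #include <libusb-1.0/libusb.h>
    #include <stdio.h>

    int main(void)
    {
        libusb_context *ctx = NULL;
        libusb_device_handle *h;
        unsigned char buf[64];
        int transferred;

        if (libusb_init(&ctx) != 0)
            return 1;

        /* 0x1234:0x5678 is a made-up ID; use your device's real one. */
        h = libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
        if (!h) {
            libusb_exit(ctx);
            return 1;
        }

        libusb_claim_interface(h, 0);

        /* Bulk read from IN endpoint 0x81 with a 1-second timeout. */
        if (libusb_bulk_transfer(h, 0x81, buf, sizeof(buf),
                                 &transferred, 1000) == 0)
            printf("read %d bytes\n", transferred);

        libusb_release_interface(h, 0);
        libusb_close(h);
        libusb_exit(ctx);
        return 0;
    }

The same code compiles on Linux, macOS, and Windows, which is exactly the portability advantage described above.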
If there is a way to do it in userspace, I would try that first. Only if there are significant performance bottlenecks, or significant complexity in keeping it in userspace, would you move it. Even then, consider the "shim" approach, and/or the "emulator" approach (where your kernel module makes your device look like a serial port or a block device).
On the other hand, if there are already several kernel modules similar to what you want, then start there.

Direct CPU threads or OpenCL

I have searched the various questions (and the web) but did not find any satisfactory answer.
I am curious about whether to use threads to directly load the cores of the CPU or to use an OpenCL implementation. Is OpenCL just there to make multi-processors/cores more portable, meaning the code can be ported to either GPU or CPU, or is OpenCL faster and more efficient? I am aware that GPUs have more processing units, but that is not the question. Should it be direct multithreading in code, or OpenCL?
Sorry, I have another question...
If the IGP shares PCI lanes with the discrete graphics card and its drivers cannot be loaded under Windows 7, I have to assume that it will not be available, even if you only want to use the processing cores of the integrated GPU. Is this correct, or is there a way to access the IGP without drivers?
EDIT: As @Yann Vernier points out in the comment section, I haven't been strict enough with the terms I used. So in this post I use the term thread as a synonym of work-item. I'm not referring to CPU threads.
I can't really compare OCL with any other technology that allows using the different cores of a CPU, as I have only used OCL so far.
However, I might bring some input about OCL, especially since I don't really agree with ScottD.
First of all, even though an OCL kernel developed to run on a GPU will also run on a CPU, that doesn't mean it'll be efficient. The reason is simply that OCL doesn't work the same way on a CPU as on a GPU. To get a good understanding of how it differs, see chapter 6 of "Heterogeneous Computing with OpenCL". To summarize: while the GPU will launch a bunch of threads within a given workgroup at the same time, the CPU will execute the threads of a workgroup one after another on a single core. See also section 3.4 of the standard on the two different types of programming models supported by OCL. This can explain why an OCL kernel may be less efficient on a CPU than "classic" code: because it was designed for a GPU. Whether a developer targets the CPU or the GPU is not a matter of "serious work" but simply depends on which programming model best suits your needs. Also, the fact that OCL supports the CPU as well is nice, since code can degrade gracefully on a computer not equipped with a proper GPU (though it must be hard to find such a computer). The small kernel sketched below shows what a work-item (the "thread" above) actually is.
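As a point of reference, here is a minimal OpenCL kernel, a plain vector add (the name vec_add is arbitrary). On a GPU, many work-items of a workgroup execute this body at the same time; a CPU device typically steps through the work-items of a workgroup serially on each core:

    // Each work-item computes exactly one output element.
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *out)
    {
        size_t i = get_global_id(0);   /* this work-item's global index */
        out[i] = a[i] + b[i];
    }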
Regarding the AMD platform, I've noticed some problems with the CPU as well, on a laptop with an ATI card. I observed low performance on some of my code, and crashes as well. But the reason was that the processor was an Intel one. The AMD platform will declare that a CPU device is available even if it is an Intel CPU, but it won't be able to use it as efficiently as it should. When I ran the exact same code targeting the CPU after installing (and using) the Intel platform, all the issues were gone. That's another possible reason for poor performance.
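A quick way to check which platform claims your CPU is to enumerate all platforms and devices; here is a hedged sketch with the standard OpenCL host API (error handling mostly omitted):

    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_platform_id plats[8];
        cl_device_id devs[8];
        cl_uint nplat = 0, ndev = 0;
        char name[256];

        clGetPlatformIDs(8, plats, &nplat);
        for (cl_uint i = 0; i < nplat; i++) {
            clGetPlatformInfo(plats[i], CL_PLATFORM_NAME,
                              sizeof(name), name, NULL);
            printf("platform: %s\n", name);

            if (clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_ALL,
                               8, devs, &ndev) != CL_SUCCESS)
                continue;
            for (cl_uint j = 0; j < ndev; j++) {
                clGetDeviceInfo(devs[j], CL_DEVICE_NAME,
                                sizeof(name), name, NULL);
                printf("  device: %s\n", name);
            }
        }
        return 0;
    }

If your Intel CPU shows up only under the AMD platform, installing the Intel OpenCL runtime and selecting its platform is the fix described above.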
Regarding the iGPU: it does not share PCIe lanes, it is on the CPU die (at least for Intel), and yes, you need the driver to use it. I assume that you tried to install the driver and got a message like "your computer does not meet the minimum requirements..." or something similar. I guess it depends on the computer, but in my case I have a desktop equipped with an NVIDIA card and an i7 CPU (which has an HD 4000 GPU). In order to use the iGPU, I first had to enable it in the BIOS, which allowed me to install the driver. Of course, only one of the two GPUs is used by the display at a time (depending on the BIOS setting), but I can access both with OCL.
In recent experiments using the Intel OpenCL tools, we found that OpenCL performance was very similar to CUDA and intrinsics-based AVX code under gcc and icc -- way better than in earlier experiments (some years ago) where we saw OpenCL perform worse.

Hard real time in user space with preempt_rt patch

From: https://rt.wiki.kernel.org/articles/f/r/e/Frequently_Asked_Questions_7407.html
Real-time only has impact on the kernel; Userspace does not notice the difference except for better real time behavior.
Does it mean that if we write the applications in user space, they won't get the hard real time effect?
It depends on what you mean by "real-time effect". Usually you want guaranteed timing behavior in a real-time system. You won't get that. However, your application will run more "smoothly" and will be more responsive. For many best-effort systems, that will be sufficient.
No, that's not what it meant.
It means that with PREEMPT_RT you get lower maximum latency in user space without needing to adapt your code or use additional libraries/tools. In practice: PREEMPT_RT doesn't require user-level applications to use specific APIs.
The APIs within the kernel code, instead, are significantly changed (e.g., by changing any spinlock to a mutex, etc.)
By the way, keep in mind that PREEMPT_RT reduces the maximum latency experienced by a task, but the system throughput will be lower (i.e., more context switches) and the average latency likely increased.
I believe that question can be best answered in context -- asking if there were any APIs introduced by that specific patchset that application authors can use -- and none are added by this patchset. You won't need to recompile your application and there is no benefit to recompiling. You also won't be locked into any specific API.
If you have a well-written userspace application that relies on being able to run as soon as possible when hardware conditions dictate it should respond, then yes, these patches can help. But you can still write poor applications that prevent good real-time behavior and the patchset cannot help you.
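To make "well-written" concrete, a latency-sensitive application typically uses only standard POSIX calls; here is a hedged sketch (the priority 80 is an arbitrary example value):

    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        struct sched_param sp;

        /* Lock current and future pages into RAM so page faults
           cannot add latency at the worst moment. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        /* Ask for a real-time FIFO priority (requires privileges). */
        memset(&sp, 0, sizeof(sp));
        sp.sched_priority = 80;
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        /* ... the time-critical loop would go here ... */
        return 0;
    }

None of this is specific to PREEMPT_RT: the same code runs on a stock kernel; it just gets much better worst-case latencies on a PREEMPT_RT one.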
It means that the real-time patch modifies some code in the kernel, and the effect of this modification is that we get a fine-grained preemptible kernel.
All programs in user space will benefit from the real-time preemptive kernel without any modification; not even a recompile is needed!
The PREEMPT_RT patch's goal is to convert Linux into a hard real-time system, and it's really good for most tasks. But in safety-critical systems such as military and aerospace, Linux has nothing to offer, and we should use other RTOSes like VxWorks, QNX, and INTEGRITY!

Why doesn't Linux use the hardware context switch via the TSS?

I read the following statement:
The x86 architecture includes a specific segment type called the Task State Segment (TSS), to store hardware contexts. Although Linux doesn't use hardware context switches, it is nonetheless forced to set up a TSS for each distinct CPU in the system.
I am wondering:
Why doesn't Linux use the hardware support for context switch?
Isn't the hardware approach much faster than the software approach?
Is there any OS which does take advantage of the hardware context switch? Does windows use it?
At last and as always, thanks for your patience and reply.
-----------Added--------------
http://wiki.osdev.org/Context_Switching got some explanation.
People as confused as me could take a look at it. 8^)
The x86 TSS is very slow for hardware multitasking and offers almost no benefits when compared to software task switching. (In fact, I think doing it manually beats the TSS a lot of times.)
The TSS is also known for being annoying and tedious to work with, and it is not portable, even to x86-64. Linux aims to work on multiple architectures, so they probably opted to use software task switching because it can be written in a machine-independent way. Also, software task switching provides a lot more power over what can be done and is generally easier to set up than the TSS.
I believe Windows 3.1 used the TSS, but at least the NT >5 kernel does not. I do not know of any Unix-like OS that uses the TSS.
Do note that the TSS is mandatory. What OSs do, though, is create a single TSS entry (per processor), and every time they need to switch tasks, they just change out this single TSS. Also, the only fields of the TSS used by software task switching are ESP0 and SS0, which are used to get to ring 0 from ring 3 code on interrupts. Without a TSS, there would be no known ring 0 stack, which would of course lead to a GPF and eventually a triple fault.
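In code, the idea looks roughly like this hedged osdev-style sketch for 32-bit x86 (the struct layout follows the hardware-defined 104-byte TSS; the helper name is made up):

    #include <stdint.h>

    /* Hardware-defined 32-bit TSS; software task switching only ever
       updates esp0/ss0, the rest stays untouched. */
    struct tss_entry {
        uint32_t prev_task;
        uint32_t esp0;        /* kernel stack pointer used on ring 3 -> 0 */
        uint32_t ss0;         /* kernel stack segment */
        uint8_t  unused[92];  /* remaining fields, ignored here */
    } __attribute__((packed));

    static struct tss_entry tss;   /* one per CPU, referenced by the GDT */

    /* Called on every software task switch: point the TSS at the new
       task's kernel stack so the next interrupt has a valid ring 0 stack. */
    void tss_set_kernel_stack(uint32_t esp0)
    {
        tss.esp0 = esp0;
    }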
Linux used to use HW-based switching, in the pre-1.3 timeframe iirc. I believe sw-based context switching turned out to be faster, and it is more flexible.
Another reason may have been minimizing arch-specific code. The first port of Linux to a non-x86 architecture was Alpha. Alpha didn't have TSS, so more code could be shared if all archs used SW switching. (Just a guess.) Unfortunately the kernel changelogs for the 1.2-1.3 kernel period are not well-preserved, so I can't be more specific.
Linux doesn't use a segmented memory model, so this segmentation specific feature isn't used.
x86 CPUs have many different kinds of hardware support for context switching, so the distinction isn't hardware vs software, but more how does an OS use the various hardware features available. It isn't necessary to use them all.
Linux is so efficiency-focused that you can bet someone has profiled every option that is possible, and that the options currently used are the best available compromise.

When should I write a Linux kernel module?

Some people want to move code from user space to kernel space in Linux for some reason. A lot of times the reason seems to be that the code should have particularly high priority or simply "kernel space is faster".
This seems strange to me. When should I consider writing a kernel module? Is there a set of criteria?
How can I argue for keeping code in user space that (I believe) belongs there?
Rule of thumb: try your absolute best to keep your code in user space. If you don't think you can, spend as much time researching alternatives to kernel code as you would writing the code (i.e., a long time), and then try again to implement it in user space. If you still can't, research more to ensure you're making the right choice, then very cautiously move into the kernel. As others have said, there are very few circumstances that dictate writing kernel modules, and debugging kernel code can be quite hellish, so steer clear at all costs.
As far as concrete conditions you should check for when considering writing kernel-mode code, here are a few:
Does it need access to extremely low-level resources, such as interrupts?
Is your code defining a new interface/driver for hardware that cannot be built on top of currently exported functionality?
Does your code require access to data structures or primitives that are not exported out of kernel space?
Are you writing something that will be primarily used by other kernel subsystems, such as a scheduler or VM system (even here it isn't entirely necessary that the subsystem be kernel-mode: Mach has strong support for user-mode virtual memory pagers, so it can definitely be done)?
There are very limited reasons to put stuff into the kernel. If you're writing device drivers, that's OK. Any standard application: never.
The drawbacks are huge. Debugging gets harder, errors become more frequent and harder to find. You might compromise security and stability. You might have to adapt to kernel changes more frequently. It becomes impossible to port to other UNIX OSs.
The closest I've ever come to the kernel was a custom filesystem (with MySQL in the background), and even for that we used FUSE (where the U stands for userspace).
I'm not sure the question is the right way around. There should be a good reason to move things to kernel space. If there aren't any reasons, don't do it.
For one thing, debugging is made harder, and the effect of bugs is far worse (a crash/panic instead of a simple coredump).
Basically, I agree with rpj. Code has to be in user-space, unless it's REALLY necessary.
But, to emphasize your question, which condition?
Some people claim that drivers have to be in the kernel, which is not true. Some drivers are not timing-sensitive; in fact, lots of drivers are like that.
For example: framers, RTC timers, I2C devices, etc. Those drivers can easily be moved to user space. There are even some file systems that are written in user space.
You should move to kernel space where the overhead (e.g., of user/kernel mode transitions) becomes unacceptable for your code to work properly.
But there are lots of ways to deal with this. For example, /dev/mem provides a good way to access your physical memory, just like you would do from kernel space.
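As a hedged sketch of that technique (the physical address 0xfe200000 is a made-up placeholder; the real one depends on your hardware, and the kernel may restrict /dev/mem access):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        off_t phys = 0xfe200000;   /* hypothetical register block */
        size_t len = 4096;

        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        /* Map one page of physical address space into this process. */
        volatile uint32_t *regs = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, phys);
        if (regs == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        printf("reg[0] = 0x%08x\n", (unsigned)regs[0]);  /* read a register */

        munmap((void *)regs, len);
        close(fd);
        return 0;
    }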
When people talk about going to RTOS, I'm usually skeptical.
These days, the processor is so powerful, that most of the time, the real-time aspect becomes negligible.
But even if, let's say, you're dealing with SONET and you need to do a protection switch within 50 ms (actually even less, since the 50 ms constraint applies to the whole ring), you can still do the switching very fast, IF your hardware supports it.
Lots of framers these days can give you hardware support that reduces the number of writes that you need to do. Your job is basically to respond to the interrupt as quickly as possible. And Linux is not bad at all: the interrupt latency I got was less than 1 ms, even with tons of other interrupts running (e.g., IDE, Ethernet, etc.).
And if that's still not enough, then maybe your hardware design is wrong. Some things are better left to hardware. And when I say hardware, I mean ASIC, FPGA, network processor, or other advanced logic.
Code running in the kernel accesses memory, peripherals, and system functions in ways that are different from userspace code, and thus has the ability to be more efficient. Not to mention the reduced security restrictions for kernel code. However, all this usually comes at a cost, such as increasing the possibility of opening the kernel up to security threats, locking up the OS, complicating debugging, and so forth.
If your people want really high priority, determinism, low latency, etc., the right way to go is to use some real-time version of Linux (or another OS).
Also look at the preemptible kernel options, etc. Exactly what you should do depends on the requirements, but putting the code in kernel modules is not likely the right solution unless you are interfacing with some hardware directly.
Another reason not to move code into kernel space is that when you distribute it in production or commercial situations, you will have to publish that code due to the GPL, a situation that many software companies don't want to be in. :)
As a general rule: think about what you want to do, and if it is something you would see in an operating-system development book or class, then it has a good chance of belonging in the kernel. If not, keep it out of the kernel. If you have a very good reason to break that rule, be sure you have enough knowledge to know it by yourself, or work with someone who has that knowledge.
Yes, that might sound harsh, but it is exactly what I mean: if you don't know, then be almost sure the answer is no, don't do it in the kernel. Moving your development into kernel space opens a giant can of worms that you must be sure you can handle.
If you just need lower latency, higher throughput, etc., it is probably cheaper to buy a faster computer than to develop kernel code.
Kernel modules may be faster (due to fewer context switches, less system call overhead, and fewer interruptions), and they certainly run at very high priority. If you want to export a small amount of fairly simple code into kernel space, this might be OK. That is, if a small piece of code is found to be crucial to performance, and is the sort of code that would benefit from being placed in kernel mode, then it may be justified to place it there.
But moving large parts of your program into kernel space should be avoided unless all other options are completely exhausted. Aside from the difficulty of doing so, the performance benefit is not likely to be very large.
If you're asking such a question, then you shouldn't go to the kernel layer. Basically, just wondering means you don't need to. The cost of a context switch is so negligible these days that it doesn't matter anyway.
