Kernel space and user space process identification - Linux

How does the Linux scheduler identify which process is from kernel space and which is from user space?
Suppose I install an application on Linux and start it. Simultaneously there are other kernel-space processes in the ready queue. Now how can the Linux scheduler tell which queued process is from kernel space and which is from user space?

I'm not an expert, but I started reading the kernel source two days ago, and when it comes to processes you can almost always find all the data you need in one struct. The struct is called task_struct and is defined in the include/linux/sched.h file of the kernel source tree.
You can look it up here: https://github.com/torvalds/linux/blob/master/include/linux/sched.h#L1274
From what I understand, although I might be wrong, the kernel has no idea if the process/thread that it schedules is a user process or a kernel thread.
According to Robert Love's book on the Linux Kernel, the main difference from the system's point of view between a user process and a kernel thread is that kernel threads do not have an address space. Their mm pointer in their task_struct is NULL.
So from the above, I assume that if you really want to know whether a task is a kernel thread you could check this structure.
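If you want to see what that check might look like in code, here is a minimal kernel-side sketch (the helper name is my own, not a kernel API); it tests the mm pointer described above, plus the PF_KTHREAD flag that the kernel sets on its own threads:

    /* Sketch: telling a kernel thread apart from a user task via its
     * task_struct. is_kernel_thread() is a hypothetical helper, not a
     * kernel API. */
    #include <linux/sched.h>

    static bool is_kernel_thread(const struct task_struct *tsk)
    {
        /* Kernel threads have no user address space, so tsk->mm is NULL;
         * the kernel also marks them with the PF_KTHREAD flag. */
        return tsk->mm == NULL || (tsk->flags & PF_KTHREAD);
    }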

Related

Is user code in the end executed in kernel mode?

From what I learned from Operating System Concepts and online searching:
all user threads are ultimately mapped to kernel threads in order to be scheduled onto physical CPUs
kernel threads can only be executed in kernel mode
the two arguments above lead to the conclusion:
all user code is executed in kernel mode
Is this right?
I have read the whole book and searched many articles, and the question still stands.
Wikipedia says this about LWPs:
Kernel threads
Kernel threads are handled entirely by the kernel. They need not be associated with a process; a kernel can create them whenever it needs to perform a particular task. Kernel threads cannot execute in user mode. LWPs (in systems where they are a separate layer) bind to kernel threads and provide a user-level context. This includes a link to the shared resources of the process to which the LWP belongs. When a LWP is suspended, it needs to store its user-level registers until it resumes, and the underlying kernel thread must also store its own kernel-level registers.
Also, what does it mean when it talks about user-level registers and kernel-level registers?
After digging and digging, I have the following tentative conclusion, but I am not sure. I hope the question can be further answered and clarified:
"Kernel thread", depending on the context of the discussion, has two meanings:
when talking about user/kernel threading, a kernel thread means a kernel task that executes entirely in kernel mode and runs only kernel code, like ksoftirqd, which handles the bottom halves of interrupts
when talking about threading models, namely how user code is mapped onto schedulable entities in the kernel, a kernel thread means a task that is schedulable by the kernel
Further, about threading models and lightweight processes in Linux:
In the old days the operating system did not know about threads; it only knew about processes (tasks), and threads were implemented by thread libraries entirely on the user side. This has an inherent problem: if one user thread blocks, for example on I/O, all the user threads block, because there is only one schedulable task in the kernel for the whole process. From the kernel's perspective, the entire process is blocked. To solve this problem, the lightweight process (LWP), also called a virtual processor (VP), was invented.
An LWP is an intermediate data structure between a user thread and a kernel thread (the second meaning above). An LWP binds a user thread to a kernel thread (task), where previously the kernel thread was bound to a whole user process. Simply put: previously a user process occupied one kernel thread (task); now, with LWPs, a single user thread can occupy an individual kernel thread (task) without sharing it with other user threads. (I think) this is why it is called a lightweight process. The advantage of this model is obvious: if one of the user threads blocks, the other user threads can continue to be executed on other kernel threads (tasks).
A kernel thread (task) actually knows nothing about the user process. It is just a task, a schedulable entity created, managed, and destroyed entirely by the kernel itself. But an LWP belongs to a specific process and knows about the other LWPs that belong to the same one. An LWP is like a bridge between the user process and the kernel thread (task).
When a kernel thread (task) that is bound to an LWP is scheduled by the kernel, the user-level registers (pointed to by the LWP) are loaded into the CPU; the kernel thread (task) also has its own registers, and they are loaded into the CPU as well. From the CPU's point of view, an LWP is a kernel thread (task). It does not care whether it executes kernel code or user code.
User/kernel mode and user/kernel thread are independent concepts. In Linux, a user thread created with pthreads is essentially a kernel thread, and that thread can execute in either user mode or kernel mode, depending on whether it is executing user code or kernel code.
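To make that last point concrete, here is a small sketch of my own (assuming Linux and gcc with -pthread): each pthread shows up as a separate kernel task under /proc/self/task, which is exactly the "a pthread is essentially a kernel thread" point.

    /* Sketch: each pthread appears as its own kernel task under
     * /proc/self/task. Compile with: gcc demo.c -pthread */
    #include <dirent.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *sleeper(void *arg)
    {
        sleep(2);                 /* keep the task alive while main looks */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, sleeper, NULL);

        /* One directory entry per kernel-schedulable task in this process. */
        DIR *d = opendir("/proc/self/task");
        struct dirent *e;
        while ((e = readdir(d)) != NULL)
            if (e->d_name[0] != '.')
                printf("kernel task (tid): %s\n", e->d_name);
        closedir(d);

        pthread_join(t, NULL);
        return 0;
    }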
All user threads are ultimately mapped to kernel threads.
That is not a useful way to think about threads. In most operating systems, a program can ask the OS to create a new thread and the program can provide a pointer to a function for the new thread to call. There's no "mapping" that happens there.* The new thread runs in exactly the same way as the program's original (a.k.a., "main") thread. It runs application code in user mode except, occasionally, when it makes a system call, and then for the duration of the system call it runs kernel code in kernel mode.
Many programming languages come with an OS-independent library that provides some kind of a Thread object. The thread object is not the same thing as the actual thread. It's more of a handle that the application uses to control the OS thread. If you like, you can say that those thread objects are "mapped" to OS threads, but that's still somewhat abusing the notion of what a "mapping" is.
kernel threads can only be executed in kernel mode
If you aren't writing OS code, it's best to avoid saying "kernel thread" altogether. In the Linux OS in particular, "kernel thread" means something, and it has nothing whatever to do with application code. Linux kernel threads are threads that are created by the OS for the OS, and they never run "user" (i.e., application) code.
It's possible for an application program to create and schedule its own threads, completely unknown to the OS. Some people call those "user threads." Some used to call them "green threads." Back in the old days, before any OS had thread support, we just called them "threads." Doing threads that way is a lot of work, for little reward. (Can't schedule them preemptively.) Outside of the realm of tiny, embedded, real-time systems, almost nobody bothers to do it anymore.
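For illustration, here is a minimal sketch of such application-scheduled threads using the POSIX ucontext API (my own example, not taken from any particular green-thread library). The kernel sees only one thread, and control changes hands only at the explicit swapcontext() calls:

    /* Sketch of a "green thread": a user-level context the kernel never
     * sees as a separate thread; switches happen only at explicit
     * swapcontext() calls. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, green_ctx;
    static char green_stack[64 * 1024];

    static void green_fn(void)
    {
        printf("green thread: step 1\n");
        swapcontext(&green_ctx, &main_ctx);  /* cooperative yield */
        printf("green thread: step 2\n");
    }                                        /* returning resumes uc_link */

    int main(void)
    {
        getcontext(&green_ctx);
        green_ctx.uc_stack.ss_sp = green_stack;
        green_ctx.uc_stack.ss_size = sizeof green_stack;
        green_ctx.uc_link = &main_ctx;       /* where to go when green_fn returns */
        makecontext(&green_ctx, green_fn, 0);

        swapcontext(&main_ctx, &green_ctx);  /* "schedule" the green thread */
        printf("main: green thread yielded\n");
        swapcontext(&main_ctx, &green_ctx);  /* resume it until it finishes */
        printf("main: green thread finished\n");
        return 0;
    }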
* But wait! Things will get more complicated in the near future when Java's Project Loom hits the mainstream. Threads traditionally are expensive. In particular, each thread must have its own contiguous call stack (usually a chunk of at least a few megabytes) allocated to it. The goal of Project Loom is to make threads as cheap as any other object.
The way they intend to make threads "cheap" is to "virtualize" them, and to break up their call stacks into linked lists of reclaimable heap objects. Under Project Loom, a limited number of real OS threads that are scheduled by the OS scheduler will, in turn, schedule and execute the code of a multitude of "virtual" application threads, and so there really will be something going on that feels a bit like "mapping."
I won't be at all surprised if the same idea spreads to other languages.
There are two different meanings of kernel threads. When threading people talk about "kernel threads" they mean "threads the kernel knows about" i.e. "threads that are controlled by the kernel". When kernel people talk about "kernel threads" they mean "threads that run in kernel mode".
"Threads the kernel knows about" are contrasted to "user threads" which are hidden from the kernel and controlled by the program itself.
No, not all threads controlled by the kernel run in kernel mode. The kernel controls the scheduling of threads that run in kernel mode, and also threads that run in user mode.
The quote about LWPs is talking about systems where the scheduler thinks that all threads are kernel-mode threads. To run a user-mode thread (which they call an LWP because, since all threads are kernel-mode threads, it's not really a thread), the thread has to call a function like RunLWP(pointer_to_lwp);.
I don't know which system is like this. Linux is not like this; Windows is not like this. This is a weird, overly complicated design which is why it's not normally used.
The "registers" are where the CPU remembers what it is currently doing. The most important one is the "instruction pointer" register (some CPUs call it something different) which remembers which instruction is next. If you remember all of the register values, and then come back later and set them to the same values, the CPU will carry on like nothing happened. That's why threading works - the thread can't tell that it's been interrupted, because all of the registers have the same values as if it wasn't interrupted. Here's a list of registers on x86-class CPUs. You don't need to know them for this question - it just might be interesting.
When an interrupt happens, depending on the CPU type, the CPU will save the instruction pointer and maybe one or two other registers. The interrupt handler has to save the rest (or be careful not to change them). Here about halfway down you can see how an x86-class CPU switches from user-space to an interrupt handler when an interrupt occurs.
So this RunLWP function would save the current registers (from the kernel) and set them according to the last time the LWP stopped running. Then the LWP runs. Then when some interrupt happens, the interrupt handler would save the current registers (from user-space) and set them according to the saved kernel registers, so the kernel code after RunLWP runs. Probably. Again, I don't know any actual system like this, but it seems like the logical way to do things. The reason it should return back to the kernel code instead of the user code is so that the kernel code can decide whether it wants to keep running the LWP or not.
I don't know why they would say the interrupt handler would save both the kernel-space and user-space registers. Current CPUs generally only have one set of registers which software has to swap out when it wants to make the CPU change what it is doing. RunLWP would have to save the kernel registers and load the user ones, then the interrupt handler would have to save the user registers and load the interrupt handler ones. It could be that the CPUs which these systems were designed for did have two sets of registers.

Is a coroutine a kind of thread that is managed by the user-program itself (rather than managed by the kernel)?

In my opinion,
The kernel is an alias for a running program whose program text is in the kernel area and which can access all memory spaces;
A process is an alias for a running program that has an independent memory space in the user memory area. Which process gets use of the CPU is managed entirely by the kernel;
A thread is an alias for a running program whose program text is in the memory space of a process and which fully shares that memory space with the other threads of the same process. Which thread gets use of the CPU is managed entirely by the kernel;
A coroutine is an alias for a running program whose program text is in the memory space of a process. It is a user thread whose use the process itself (not the kernel) decides; the kernel is only responsible for allocating CPU resources to the process.
Since the process itself cannot schedule the way the kernel does, coroutines can only be concurrent but not parallel.
Am I correct in saying the above?
process is an alias for a running program...
The modern way to think of a process is to think of it as a container for a collection of threads and the resources that those threads need to execute.
Every process (except for "zombie" processes that can exist in some systems) must have at least one thread. It also has a virtual address space, open file handles, and maybe sockets and other resources that are shared by the threads.
Thread is an alias for a running program...
The problem with saying that is, "running program" sounds too much like "process," and a thread is most definitely not a process. (E.g., a thread can only exist in a process.)
A computer scientist might tell you that a thread is one particular execution of the application's code. I like to think of a thread as an independent agent who executes the code.
coroutine...is a user thread...
I'm going to mostly leave that one alone. "Coroutine" seems to mean something different from the highly formalized, and not particularly useful, coroutines that I learned about more than forty years ago. What people call "coroutines" today seem to have something in common with what I call "green threads," but there are details of how and when and why they are used that I don't yet understand.
Green threads (a.k.a. "user mode threads") simply are threads that the kernel doesn't know about. They are pretty much just like the threads that the kernel does know about except, the kernel scheduler never preempts them because, Duh! it doesn't know about them. Context switches between green threads can only happen at specific points where the application allows it (e.g., by calling a yield() function or by calling some library function that is a documented yield point).
kernel is an alias for a running program...
The kernel also is most definitely not a process.
I don't know every detail about every operating system, but the bulk of kernel code does not run independently of the applications that the kernel serves. It only runs when an application thread enters kernel mode by making a system call. The thread that runs the kernel code still belongs to the application process, but the code that determines the thread's behavior at that point is written or chosen by the kernel developers, not by the application developers.

Is there a separate kernel level thread for handling system calls by user processes?

I understand that user level threads are implemented in user space and kernel level threads in kernel space. I have also read that user level threads are mapped onto kernel level threads to actually run the user level threads.
What exactly is meant by "implemented"? Does this mean the thread control blocks are defined in user and kernel space respectively?
What happens when a system call is made? Which kernel thread (or user thread, I don't know) does this system call run on? And does each kernel-level thread have its own stack?
I have an understanding that threads are just parts of a process. When we deal with kernel threads, what is the corresponding process here? And what are kernel processes? Can you give examples?
I have referred to other answers as well, but haven't found a satisfying one.
It depends on the implementation of the OS.
But usually, like in Linux, the system call is executed on the thread that called it. And each thread has a user stack and a kernel stack.
See How does a system call work and How is the system call in Linux implemented? for more details. And I hope this link can clear up your question about "kernel threads".
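As a small illustration of the point above (my own sketch, assuming Linux and gcc with -pthread): each thread that invokes SYS_gettid enters kernel mode itself, runs the syscall on its own kernel stack, and gets back its own thread ID.

    /* Sketch: a system call executes on the thread that makes it, in that
     * thread's own kernel context. Compile with: gcc demo.c -pthread */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static void *worker(void *arg)
    {
        /* Entering the kernel here does not hand work to some dedicated
         * "syscall thread": this same thread switches to kernel mode,
         * runs the syscall, and returns with its own ID. */
        printf("worker: SYS_gettid says %ld\n", (long)syscall(SYS_gettid));
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("main:   SYS_gettid says %ld\n", (long)syscall(SYS_gettid));
        return 0;
    }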

Execution of user-level threads on kernel threads - many to one [duplicate]

So two questions here really. First, (and yes, I have searched this already, but wanted clarification), what is the difference between a user thread and a kernel thread? Is it simply that one is generated by a user program and the other by an OS, with the latter having access to privileged instructions? Are they conceptually the same or are there actual differences in the threads themselves?
Second, and the real problem of my question is: the book I am using says that "a relationship must exist between user threads and kernel threads," going on to list the different models of such a relationship. But the book fails to clearly explain why a user thread must always be mapped to a specific kernel thread. Why is this?
A kernel thread is a thread object maintained by the operating system. It is an actual thread that is capable of being scheduled and executed by the processor. Typically, kernel threads are heavyweight objects with permissions settings, priorities, etc. The kernel thread scheduler is in charge of scheduling kernel threads.
User programs can make their own thread schedulers too. They can make their own "threads" and simulate context-switches to switch between them. However, these threads aren't kernel threads. Each user thread can't actually run on its own, and the only way for a user thread to run is if a kernel thread is actually told to execute the code contained in a user thread. That said, user threads have major advantages over kernel threads. They can be a lot more lightweight, since they don't necessarily need to have their own priorities, can be managed by a single process (which might have better info about what threads need to run when), and don't create lots of kernel objects for purposes of security and locking.
The reason that user threads have to be associated with kernel threads is that by itself a user thread is just a bunch of data in a user program. Kernel threads are the real threads in the system, so for a user thread to make progress the user program has to have its scheduler take a user thread and then run it on a kernel thread. The mapping between user threads and kernel threads doesn't have to be one-to-one (1 : 1); you can have multiple user threads share the same kernel thread (only one of those user threads runs at a time), and you can have a single user thread which is rotated across different kernel threads in a 1 : n mapping.
I think a real-world example will clear up the confusion, so let's see how things are done in Linux.
First of all, Linux doesn't differentiate between processes and threads; the entity that can be scheduled is called a task in Linux and is represented by a task_struct. So whenever you execute a fork() system call, a new task_struct is created, which holds the data (or pointers to it) associated with the new task.
So in the Linux world, a kernel thread means a task_struct object.
The scheduler only knows about these entities, which can be assigned to different CPUs (logical or physical). In other words, if you want the Linux scheduler to schedule your process, you must create a task_struct.
A user thread is something that is supported and managed outside of the kernel by some execution environment (EE from now on), such as the JVM. These EEs provide you with functions to create new threads.
But why a user thread must always be mapped to a specific kernel thread.
Let's say you created some threads using your EE. Eventually they must be executed by the CPU, and from the explanation above we know that each thread must have a task_struct in order to be assigned to some CPU. That is why the mapping must exist. It is the duty of your EE to create the task_structs.
If your EE uses the many-to-one model, it will create only one task_struct for all the threads and schedule all of them onto that one task_struct. Think of it as one CPU (the task_struct) and many processes (the threads created in the EE); your operating system (the EE) multiplexes those processes onto that single CPU.
If it uses the one-to-one model, there will be one task_struct for every thread created in the EE. So when you create a new thread in your EE, a corresponding task_struct gets created in the kernel.
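To see roughly what "a corresponding task_struct gets created" means at the system-call level, here is a sketch of my own using the raw clone() wrapper, which is approximately what pthread_create() does underneath (the flag set is simplified): every such clone() call asks the kernel for a new schedulable task sharing the caller's address space.

    /* Sketch: asking the kernel directly for a new schedulable task that
     * shares the caller's address space; roughly what pthread_create()
     * does underneath (flags simplified). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    static int thread_fn(void *arg)
    {
        printf("running on a brand-new task_struct\n");
        return 0;
    }

    int main(void)
    {
        const size_t stack_size = 64 * 1024;
        char *stack = malloc(stack_size);

        /* CLONE_VM shares the address space; the stack grows downward,
         * so pass the high end of the allocation. */
        pid_t tid = clone(thread_fn, stack + stack_size,
                          CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD,
                          NULL);
        waitpid(tid, NULL, 0);
        free(stack);
        return 0;
    }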
Windows does things differently (there, processes and threads are distinct), but the general idea stays the same: the kernel thread is the entity that the CPU scheduler considers for assignment, hence user threads must be mapped to corresponding kernel threads (if you want the CPU to execute them).

Linux Kernel Threads - scheduler

Is the Linux kernel scheduler part of the init process? My understanding is that it is one of the kernel threads managed internally, not visible to the user via either top or ps. Please correct my understanding.
Is it possible to view the standard kernel threads through any kernel debugger, to see how those threads occupy CPU time?
Kernel threads can be seen through "top" and "ps" and can be distinguished by having zero VM size (they have no userspace, so no userspace memory map).
These are created by kernel_thread (or its friends). Some facilities create one thread per CPU and tie it to that CPU, so you see entries like aio/0 and aio/1 in the ps listing.
Also some work is done through the several deferred execution mechanisms and gets attributed to other tasks, typically something called "events/0" (one per CPU). Time spent "really" in interrupts isn't counted anywhere (it just runs at the expense of whatever task happened to be on that CPU at the time).
1) Is the Linux kernel scheduler part of the init process?
-> No. The scheduler is a subsystem, not a process; the init process is just a process (albeit a special one) and is itself scheduled by the scheduler.
2) My understanding is that it is one of the kernel threads managed internally, not visible to the user via either top or ps. Please correct my understanding.
-> Kernel threads are managed internally and are typically not shown to the user.
3) Is it possible to view the standard kernel threads through any kernel debugger, to see how those threads occupy CPU time?
-> Yes!
Use ps aux; a kernel thread's name is shown surrounded by square brackets, e.g. [kthreadd].
Kernel threads are created by the kthread_create function, and the creation is ultimately handled by kthreadd, i.e. the PID=2 thread in the kernel.
All kernel threads are forked/copied/cloned by kthreadd (PID=2), not by init (PID=1).
The source code is here: https://elixir.bootlin.com/linux/latest/source/kernel/kthread.c
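For completeness, here is a minimal sketch of how kernel code starts such a thread through that API (an illustrative out-of-tree module of my own; worker_fn and the name demo-kthread are made up):

    /* Sketch: a minimal out-of-tree module that starts one kernel thread
     * via kthread_run(); the thread shows up in ps as [demo-kthread]. */
    #include <linux/module.h>
    #include <linux/kthread.h>
    #include <linux/delay.h>
    #include <linux/sched.h>
    #include <linux/err.h>

    static struct task_struct *worker;

    static int worker_fn(void *data)
    {
        /* Kernel threads loop until someone calls kthread_stop(). */
        while (!kthread_should_stop()) {
            pr_info("demo-kthread: in kernel mode, current->mm is %s\n",
                    current->mm ? "set" : "NULL"); /* NULL: no user address space */
            msleep(1000);
        }
        return 0;
    }

    static int __init demo_init(void)
    {
        /* kthread_run = kthread_create + wake_up_process; the new task is
         * a child of kthreadd (PID 2), as described above. */
        worker = kthread_run(worker_fn, NULL, "demo-kthread");
        return IS_ERR(worker) ? PTR_ERR(worker) : 0;
    }

    static void __exit demo_exit(void)
    {
        kthread_stop(worker);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");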
