Linux thread implementation

I was reading about the thread implementation in Linux. It seems that Linux in fact models threads as "special" processes, created with the clone() system call and passed the appropriate flags so that the address space is shared.
Multiple threads spawned this way each have a separate pid (process id, effectively a logical thread id) but the same tgid (thread group id, the logical process id). How is thread spawning quicker than process spawning, given that threads are modelled as processes?
How do operations like TLB flushes work? Is a context switch between (logical) threads still cheap?
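To give a concrete picture of "passing appropriate flags to clone()", here is a minimal, illustrative sketch in C that creates a second task sharing the caller's address space, roughly in the spirit of what pthread_create() does. It is not the real NPTL code: a real thread library also passes flags such as CLONE_SETTLS and CLONE_CHILD_CLEARTID, sets up per-thread TLS, a guard page, and so on, all of which this sketch omits, and the sleep() is a crude stand-in for proper synchronization. File and function names are arbitrary. Because almost everything is shared rather than copied, creating such a task is cheaper than a fork().

    /* Minimal sketch (not the real NPTL code): create a second task with clone()
     * that shares the caller's address space. Compile with: gcc clone_demo.c */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static int shared_counter = 0;              /* CLONE_VM: both tasks see this variable */

    static int thread_fn(void *arg)
    {
        shared_counter = 42;                    /* writes the caller's memory, not a copy */
        printf("child:  pid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));
        return 0;
    }

    int main(void)
    {
        const size_t stack_size = 1024 * 1024;
        char *stack = malloc(stack_size);       /* the new task needs its own stack */
        int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | CLONE_THREAD;

        /* the stack grows downwards on most architectures, so pass its top */
        if (clone(thread_fn, stack + stack_size, flags, NULL) == -1) {
            perror("clone");
            return 1;
        }
        sleep(1);                               /* crude; a real library would use a futex */
        printf("parent: pid=%d tid=%ld counter=%d\n",
               getpid(), (long)syscall(SYS_gettid), shared_counter);
        return 0;
    }

When run, both lines typically print the same pid but different tids, and the parent observes counter=42 because the write happened in the shared address space.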

Related

How does the kernel distinguish between a thread and a process?

In Linux, threads are called lightweight processes. Whether process or thread, both are implemented by the task_struct data structure.
1> So, in that sense, how does the kernel distinguish between a thread and a process?
2> When context switching happens, how do threads get less overhead? Prior to this thread, another thread from another process may have been running, so the kernel should have to load all resources even if resources are shared between the threads of a process.
How does the kernel distinguish between a thread and a process?
From http://www.kernel.org/doc/ols/2002/ols2002-pages-330-337.pdf and from Linux - Threads and Process
This was addressed during the 2.4 development cycle with the
addition of a concept called a ’thread group’. There is a linked
list of all tasks that are part of the thread group, and there is an
ID that represents the group, called the tgid. This ID is actually the
pid of the first task in the group (pid is the task ID assigned with a
Linux task), similar to the way sessions and process groups work. This
feature is enabled via a flag to clone().
and
In the kernel, each thread has its own ID, called a PID (although it would possibly make more sense to call this a TID, or thread ID), and it also has a TGID (thread group ID), which is the PID of the thread that started the whole process.
Simplistically, when a new process is created, it appears as a thread
where both the PID and TGID are the same (new) number.
When a thread starts another thread, that started thread gets its own
PID (so the scheduler can schedule it independently) but it inherits
the TGID from the original thread.
So a main thread is a thread whose PID and TGID are equal, and this PID is the process's PID. A thread other than the main thread has a different PID but the same TGID.
Inside the kernel, every process and every thread has a unique id (even threads of the same process), stored in the pid field. Threads of the same process also share a common id, stored in the tgid field, which is what getpid() returns to user space. This lets the kernel treat them as distinct entities that are each schedulable in their own right.
When a thread is preempted by another thread of the same process, the various segments such as .text, .bss and .data, the file descriptors, etc. are shared, which allows a faster context switch than when different processes, or threads of different processes, are context switched.
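A small sketch, assuming Linux and glibc, that makes the PID/TGID split visible from user space: getpid() returns the shared thread group id, while the gettid syscall returns the per-task id the kernel actually schedules. The file name and thread label are arbitrary.

    /* Sketch: print the shared TGID (getpid) and the per-thread TID (gettid)
     * from two pthreads. Compile with: gcc -pthread tid_demo.c */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static void *report(void *name)
    {
        printf("%s: getpid()=%d gettid()=%ld\n",
               (const char *)name, getpid(), (long)syscall(SYS_gettid));
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        report("main  ");
        pthread_create(&t, NULL, report, "worker");
        pthread_join(t, NULL);
        return 0;
    }

Both lines print the same getpid() value but different gettid() values, matching the pid/tgid description above.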
It seems you have mixed some concepts together: being implemented by the same data structure does not mean they run in the same way.
You can read
what-is-the-difference-between-a-process-and-a-thread to clarify your understanding of processes and threads first.

How does Linux clean up threads when a process exits, if they're really just processes under the hood?

My understanding is that threads and processes are really the same entity on Linux, the difference being in what memory is shared between them. I'm finding that it's...difficult to ensure that child processes are properly cleaned up without explicit communication between the parent and child. I'd like to be able to run sub-processes with a similar mental model as threads, in that they're cleaned up automatically when the parent exits, but with the memory safety that processes provide. How does Linux manage to clean up threads automatically, and can that same mechanism be used for child processes?
After reading the Linux source, I think I have the answer. Tasks are differentiated by their task ID and thread group ID. getpid() actually returns the thread group ID of the task, which is the same for all tasks in the group. This lets the kernel have a single notion of schedulable task which can be used to implement threading.
Since glibc 2.3, exit() actually invokes the exit_group syscall rather than just the exit syscall. This syscall kills all the tasks in a thread group rather than just the calling task. It does this by sending a SIGKILL to every task with the same thread group ID.
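A short sketch, assuming Linux and glibc, illustrating the difference this makes: because exit() maps to exit_group, the whole thread group is torn down and the worker never finishes its sleep. Names are arbitrary.

    /* Sketch: exit() (exit_group underneath) terminates every task in the group,
     * so the worker's sleep never completes. Compile with: gcc -pthread exit_demo.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        sleep(60);                          /* never completes: main exits the group */
        printf("worker woke up\n");         /* never printed */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        sleep(1);
        exit(0);                            /* exit_group: kills the worker too */
    }

Replacing exit(0) with pthread_exit(NULL) would terminate only the main thread, and the process would live on until the worker returned a minute later.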

Threads: Why must all user threads be mapped to a kernel thread?

So two questions here really. First, (and yes, I have searched this already, but wanted clarification), what is the difference between a user thread and a kernel thread? Is it simply that one is generated by a user program and the other by an OS, with the latter having access to privileged instructions? Are they conceptually the same or are there actual differences in the threads themselves?
Second, and the real problem of my question is: the book I am using says that "a relationship must exist between user threads and kernel threads," going on to list the different models of such a relationship. But the book fails to clearly explain why a user thread must always be mapped to a specific kernel thread. Why is this?
A kernel thread is a thread object maintained by the operating system. It is an actual thread that is capable of being scheduled and executed by the processor. Typically, kernel threads are heavyweight objects with permissions settings, priorities, etc. The kernel thread scheduler is in charge of scheduling kernel threads.
User programs can make their own thread schedulers too. They can make their own "threads" and simulate context-switches to switch between them. However, these threads aren't kernel threads. Each user thread can't actually run on its own, and the only way for a user thread to run is if a kernel thread is actually told to execute the code contained in a user thread. That said, user threads have major advantages over kernel threads. They can be a lot more lightweight, since they don't necessarily need to have their own priorities, can be managed by a single process (which might have better info about what threads need to run when), and don't create lots of kernel objects for purposes of security and locking.
The reason that user threads have to be associated with kernel threads is that by itself a user thread is just a bunch of data in a user program. Kernel threads are the real threads in the system, so for a user thread to make progress the user program has to have its scheduler take a user thread and then run it on a kernel thread. The mapping between user threads and kernel threads doesn't have to be one-to-one (1 : 1); you can have multiple user threads share the same kernel thread (only one of those user threads runs at a time), and you can have a single user thread which is rotated across different kernel threads in a 1 : n mapping.
I think a real-world example will clear up the confusion, so let's see how things are done in Linux.
First of all, Linux doesn't differentiate between processes and threads: the entity that can be scheduled is called a task in Linux and is represented by task_struct. So whenever you execute a fork() system call, a new task_struct is created which holds the data (or pointers to it) associated with the new task.
So, in the Linux world, a kernel thread means a task_struct object.
Because the scheduler only knows about these entities, which can be assigned to different CPUs (logical or physical). In other words, if you want the Linux scheduler to schedule your process, you must create a task_struct.
A user thread is something that is supported and managed outside of the kernel by some execution environment (EE from now on), such as the JVM. These EEs provide you with functions to create new threads.
But why must a user thread always be mapped to a specific kernel thread?
Let's say you created some threads using your EE. Eventually they must be executed by the CPU, and from the explanation above we know that a thread must have a task_struct in order to be assigned to some CPU. That is why the mapping must exist. It's the duty of your EE to create the task_structs.
If your EE uses the many-to-one model, then it will create only one task_struct for all the threads and schedule all of them onto that single task_struct. Think of it as having one CPU (the task_struct) and many processes (the threads created in the EE); your "operating system" (the EE) multiplexes those processes onto that single CPU (a sketch of this idea follows after this answer).
If it uses the one-to-one model, then there will be one task_struct for every thread created in the EE. So when you create a new thread in your EE, a corresponding task_struct gets created in the kernel.
Windows does things differently (there, process and thread are distinct objects), but the general idea stays the same: the kernel thread is the entity the CPU scheduler considers for assignment, hence user threads must be mapped to corresponding kernel threads if you want the CPU to execute them.
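To make the many-to-one idea concrete, here is a minimal sketch of two user-level "threads" multiplexed on a single kernel task using the POSIX ucontext API (obsolescent, but still provided by glibc). No extra task_struct is ever created; the "context switches" happen entirely in user space. The fiber names and stack sizes are arbitrary.

    /* Sketch: two user-level fibers sharing one kernel task via swapcontext().
     * Compile with: gcc fibers_demo.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, a_ctx, b_ctx;

    static void fiber_a(void)
    {
        printf("A: first slice\n");
        swapcontext(&a_ctx, &b_ctx);             /* user-space "context switch" to B */
        printf("A: second slice\n");
        swapcontext(&a_ctx, &main_ctx);
    }

    static void fiber_b(void)
    {
        printf("B: first slice\n");
        swapcontext(&b_ctx, &a_ctx);             /* back to A, still on the same task */
    }

    static void make_fiber(ucontext_t *ctx, ucontext_t *link, void (*fn)(void))
    {
        getcontext(ctx);
        ctx->uc_stack.ss_sp = malloc(64 * 1024); /* each fiber gets its own stack */
        ctx->uc_stack.ss_size = 64 * 1024;
        ctx->uc_link = link;                     /* where to continue if fn returns */
        makecontext(ctx, fn, 0);
    }

    int main(void)
    {
        make_fiber(&a_ctx, &main_ctx, fiber_a);
        make_fiber(&b_ctx, &a_ctx, fiber_b);
        swapcontext(&main_ctx, &a_ctx);          /* hand control to the "scheduler" */
        printf("back in main\n");
        return 0;
    }

The kernel sees exactly one schedulable task here, which is also the weakness of the model: if that task blocks in a system call, every fiber is stuck.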

Threads: some questions

I have a couple of questions on threads. Could you please clarify?
Suppose a process has one or multiple threads. If the process is preempted/suspended, do the threads also get preempted or do they continue to run?
When the suspended process is rescheduled, do the process's threads also get scheduled? If the process has multiple threads, which threads will be rescheduled, and on what basis?
If a thread in the process is running and receives a signal (say Ctrl-C) whose default action is to terminate a process, does only the running thread terminate or does the parent process also terminate? What happens to the threads if the running process terminates because of some signal?
If a thread does a fork() followed by an exec(), does the exec'd program overlay the address space of the parent process or of the running thread? If it overlays the parent process, what happens to the threads, their data, the locks they are holding, and how do they get scheduled once the exec'd process terminates?
Suppose a process has multiple threads; how do the threads get scheduled? If one of the threads blocks on some I/O, how do the other threads get scheduled? Are the threads scheduled only while the parent process is running?
While a thread is running, what does the kernel's current variable point to (the parent process's task_struct or the thread's task_struct)?
If a process with a thread is running, when the thread starts does the parent process get preempted, and how does each thread get scheduled?
If a process running on a CPU creates multiple threads, do the threads created by the parent process get scheduled on another CPU on a multiprocessor system?
First, I should clear up some terminology that you appear to be confused about. In POSIX, a "process" is a single address space plus at least one thread of control, identified by a process ID (PID). A thread is an individually-scheduled execution context within a process.
All processes start life with just one thread, and all processes have at least one thread. Now, onto the questions:
Suppose a process has one or multiple threads. If the process is preempted/suspended, do the threads also get preempted or do they continue to run?
Threads are scheduled independently. If a thread blocks on a function like connect(), then other threads within the process can still be scheduled.
It is also possible to request that every thread in a process be suspended, for example by sending SIGSTOP to the process.
When the suspended process is rescheduled, do the process's threads also get scheduled? If the process has multiple threads, which threads will be rescheduled, and on what basis?
This only makes sense in the context that an explicit request was made to stop the entire process. If you send the process SIGCONT to restart the process, then any of the threads which are not blocked can run. If more threads are runnable than there are processors available to run them, then it is unspecified which one(s) run first.
If a thread in the process is running and receives a signal (say Ctrl-C) whose default action is to terminate a process, does only the running thread terminate or does the parent process also terminate? What happens to the threads if the running process terminates because of some signal?
If a thread receives a signal like SIGINT or SIGSEGV whose action is to terminate the process, then the entire process is terminated. This means that every thread in the process is unceremoniously killed.
If a thread does a fork() followed by an exec(), does the exec'd program overlay the address space of the parent process or of the running thread? If it overlays the parent process, what happens to the threads, their data, the locks they are holding, and how do they get scheduled once the exec'd process terminates?
The fork() call creates a new process by duplicating the address space of the original process, and duplicating just the single thread that called fork() within that new address space.
If that thread in the new process calls execve(), it will replace the new, duplicated address space with the exec'd program. The original process, and all its threads, continue running normally.
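A small sketch, assuming Linux and glibc, of the behaviour described above: after fork() in a two-thread process, the child contains only the thread that called fork(), while the parent's worker thread keeps running. File and symbol names are arbitrary.

    /* Sketch: fork() duplicates the address space but only the calling thread.
     * Compile with: gcc -pthread fork_demo.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static void *worker(void *arg)
    {
        sleep(2);                                    /* exists only in the parent */
        printf("worker still alive in pid %d\n", getpid());
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);

        pid_t child = fork();
        if (child == 0) {
            /* One thread of execution here; strictly, only async-signal-safe
             * calls are guaranteed before an execve() in this situation. */
            printf("child %d has a single thread\n", getpid());
            _exit(0);
        }
        waitpid(child, NULL, 0);
        pthread_join(t, NULL);                       /* parent's worker is unaffected */
        return 0;
    }

The "worker still alive" line is printed once, from the parent's PID; the child never prints it because the worker thread was not duplicated.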
Suppose a process has multiple threads; how do the threads get scheduled? If one of the threads blocks on some I/O, how do the other threads get scheduled? Are the threads scheduled only while the parent process is running?
The threads are scheduled independently. Any of the threads that are not blocked can run.
While a thread is running, what does the kernel's current variable point to (the parent process's task_struct or the thread's task_struct)?
Each thread has its own task_struct within the kernel. What userspace calls a "thread" is called a "process" in kernel space. Thus current always points at the task_struct corresponding to the currently executing thread (in the userspace sense of the word).
If the process with [a second] thread is running, when the thread starts does the parent process get preempted, and how does each thread get scheduled?
Presumably you mean "the process's main thread" rather than "parent process" here. As before, the threads are scheduled independently. It's unspecified whether one runs before the other - and if you have multiple CPUs, both might run simultaneously.
If a process running on a CPU creates multiple threads, do the threads created by the parent process get scheduled on another CPU on a multiprocessor system?
That's really up to the kernel, but the threads are certainly allowed to execute on other CPUs.
Depends. If a thread is preempted because the OS scheduler decides to give CPU time to some other thread, then the other threads in the process will continue running. If the process is suspended (i.e. it gets the SIGSTOP signal) then AFAIK all the threads will be suspended.
When a suspended process is woken up, all the threads are marked as waiting or blocked (if they are waiting e.g. on a mutex). Then the scheduler at some point runs them. There is no guarantee about any specific order in which the threads run after waking up the process.
The process will terminate, and with it the threads as well.
When you fork you get a new address space, so there is no "overlay". Note that fork() and the exec() family affect the entire process, not only the thread from which they were called. When you call fork() in a multi-threaded process, the child gets a copy of that process, but with only the calling thread. Then, if you call exec() in one or both of the processes (presumably only in the child, but that's up to you), the process which calls exec() (and with it, all its threads) is replaced by the exec()'ed program.
The thread scheduling order is decided by the OS scheduler, there is no guarantee given about any particular order.
From the kernel perspective a process is an address space with one or more threads (and some other gunk). There is no concept of threads that somehow exist without a process.
There is no such thing as a process without a single thread. A "plain process" is just a process with a single thread.
Probably yes. This is determined by the OS scheduler. Note that there are APIs and tools (e.g. numactl) that one can use to force some thread(s) to run on a specific CPU core.
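As an illustration of such an API, here is a minimal sketch, assuming Linux/glibc (whose non-portable pthread_setaffinity_np and sched_getcpu are used), that pins the calling thread to CPU 0; the same effect can be had from the command line with numactl or taskset.

    /* Sketch: restrict the calling thread to CPU 0 with pthread_setaffinity_np.
     * Compile with: gcc -pthread affinity_demo.c */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                        /* allow only CPU 0 */

        int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (err != 0) {
            fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
            return 1;
        }
        printf("now restricted to CPU 0; currently on CPU %d\n", sched_getcpu());
        return 0;
    }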
Assuming your questions are about POSIX threads, then
1a. A process that's preempted by the O/S will have all its threads preempted.
1b. The O/S will suspend all the threads of a process that is sent a SIGSTOP.
2. The O/S will resume all threads of a suspended process that is sent a SIGCONT.
3. By default, a SIGINT will terminate all the threads in a process.
4. If a thread calls fork(), the child contains only a copy of the calling thread (the whole address space is duplicated, but not the other threads). If the child then calls one of the exec() functions, that remaining thread is replaced by the exec()'d program.
5. POSIX allows for user-selection of the thread scheduling policy.
6. I don't understand the question.
7. I don't understand the question.
8. How threads are mapped to CPUs is implementation-dependent. Many implementations will try to distribute threads amongst the available CPUs to improve performance.
The Linux kernel doesn't distinguish between threads and processes. As far as the kernel is concerned, a thread is simply another process which happens to share its address space with other processes. (You would call the set of "processes" (i.e. threads) which share a single address space a "process".)
So POSIX threads are scheduled exactly as full-blown processes would be. There is no difference in scheduling whether you have one process with five threads, or five separate processes.
There are kernel calls that provide fine grained control over what is shared between processes. The POSIX threads API wraps over them.

Resources