How does the Linux kernel decide the next thread ID? - linux

I have a question regarding Linux kernel scheduling.
We know that Linux usually keeps track of the last PID it assigned and hands out the next one when a new process starts. So if we kill a process and start a new one, the new process does not get the old PID back; the IDs keep increasing until they hit a limit and wrap around.
But my question is how Linux decides thread IDs (TIDs).
Say processes A and B are running. Process A crashes while process B keeps spawning new threads. Will process B reuse an old TID that belonged to process A, or will process B also be handed the next largest ID as its TID? Which case is more common? Is this documented anywhere?
Thanks.

The kernel sets a maximum number of process/thread IDs and simply recycles identifiers once the tasks that held them have been reaped. So if process B spawns enough threads, it will eventually be handed thread IDs that used to belong to process A, assuming process A has been properly destroyed.
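To see this from userspace, here is a small sketch (assuming Linux with procfs mounted) that reads /proc/sys/kernel/pid_max and forks a handful of short-lived children; the printed PIDs normally keep climbing even though the earlier children are already gone, which is the allocation behaviour described above:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* The upper bound for PIDs/TIDs is visible (and tunable) in procfs. */
        FILE *f = fopen("/proc/sys/kernel/pid_max", "r");
        long pid_max = 0;
        if (f) {
            if (fscanf(f, "%ld", &pid_max) == 1)
                printf("pid_max = %ld\n", pid_max);
            fclose(f);
        }

        /* Each short-lived child normally gets the next free id, so the
         * printed PIDs keep increasing even though earlier children exited. */
        for (int i = 0; i < 5; i++) {
            pid_t child = fork();
            if (child == 0)
                _exit(0);
            printf("child %d got pid %ld\n", i, (long)child);
            waitpid(child, NULL, 0);
        }
        return 0;
    }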
Edit: Here are some links that can provide you with more specific answers
"Difference between pid and tid"
https://stackoverflow.com/a/8787888/5768168
"What is the value range of thread and process id?"
"Linux PID recycling"
https://stackoverflow.com/a/11323428/5768168
"Process identifier"
https://en.wikipedia.org/wiki/Process_identifier#Unix-like
"The Linux kernel: Processes"
https://www.win.tue.nl/~aeb/linux/lk/lk-10.html

It sounds like you need to create your threads with the PTHREAD_CREATE_JOINABLE attribute passed to pthread_create(), then have one reaper thread in your process dedicated to using pthread_join() or pthread_tryjoin_np() to wait for terminated threads. Rather than having an outside process trying to sort it out, have your process record the PID/TID pair after pthread_create() succeeds, and have the reaper thread remove the pair when it detects that the thread has terminated.
I typically combined that with a main thread that did nothing but spawn the thread-creation and reaper threads, then wait for a termination signal and terminate the thread-creator and reaper. The thread-creator stops immediately when signaled; the reaper stops when no more unterminated threads are running; the main thread terminates when both the thread-creator and reaper threads can be pthread_join()'d. Since the main thread is so simple it's unlikely to crash, which means most crashes in work threads simply deliver them to the reaper. If you want absolute certainty, your outside process should be the one to start your main process; then it can use wait() or its siblings to monitor whether the main process has terminated (normally or by crashing).
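Here is a minimal sketch of that reaper pattern under the default (joinable) thread attributes; the worker(), reaper() and table[] names are illustrative only, and the TID is fetched with syscall(SYS_gettid) since older glibc versions have no gettid() wrapper:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define NWORKERS 4

    struct record {                 /* per-thread bookkeeping; the process PID is just getpid() */
        pthread_t thread;
        pid_t     tid;              /* kernel thread id, recorded by the worker  */
        int       state;            /* 0 = running, 1 = finished, 2 = reaped     */
    };

    static struct record table[NWORKERS];
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;

    static void *worker(void *arg)
    {
        struct record *r = arg;

        pthread_mutex_lock(&lock);
        r->tid = (pid_t)syscall(SYS_gettid);    /* record this thread's TID */
        pthread_mutex_unlock(&lock);

        /* ... real work would happen here ... */

        pthread_mutex_lock(&lock);
        r->state = 1;                           /* tell the reaper we are done */
        pthread_cond_signal(&done);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void *reaper(void *arg)
    {
        (void)arg;
        int reaped = 0;
        pthread_mutex_lock(&lock);
        while (reaped < NWORKERS) {
            int progress = 0;
            for (int i = 0; i < NWORKERS; i++) {
                if (table[i].state == 1) {
                    table[i].state = 2;
                    pthread_mutex_unlock(&lock);
                    pthread_join(table[i].thread, NULL);   /* reap the thread */
                    printf("reaped worker with tid %ld\n", (long)table[i].tid);
                    pthread_mutex_lock(&lock);
                    reaped++;
                    progress = 1;
                }
            }
            if (!progress)
                pthread_cond_wait(&done, &lock);   /* sleep until a worker ends */
        }
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < NWORKERS; i++)
            pthread_create(&table[i].thread, NULL, worker, &table[i]);

        pthread_t r;
        pthread_create(&r, NULL, reaper, NULL);   /* the dedicated reaper thread */
        pthread_join(r, NULL);
        return 0;
    }

The design point is that each worker marks itself finished under a mutex and signals a condition variable, so the reaper only ever calls pthread_join() on threads that are already known to have terminated.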

Related

How are threads and processes implemented in Windows CE?

I read that Threads are the primary unit of execution on Windows CE. What does this mean exactly? How are threads implemented and how are processes implemented?
In CE a process defines an isolated address space and lives as long as its main thread is running. Typically the main thread is the main() entry point of a classic C program, and the process is terminated as soon as you return from that function (ok, there are some initialization and destruction steps... but let's keep this simple). When the main thread terminates, all the memory and resources allocated by all the threads of the process are released.
Threads are the execution units. You need to have at least the main thread, but you can create additional ones. Those are terminated when the main thread terminates.
The only difference between main thread and all the other threads is that main determines the "lifetime" of your process.
The scheduler cares about threads. When a thread's time quantum expires or a higher-priority thread needs to run, the scheduler switches to the new thread, and if it belongs to a different process than the one currently running, it reconfigures the virtual address space to match.
If one process has a single thread and another has 99, all with the same priority and all keeping the CPU busy for their whole quantum, the first process will use 1% of the CPU and the second 99%.
Of course, being a hard real-time OS, CE also has priority management, but this, again, applies to threads and not processes.

how kernel distinguishes between thread and process

In Linux, threads are called lightweight processes. Whether process or thread, they are represented by the task_struct data structure.
1> So, in that sense, how does the kernel distinguish between a thread and a process?
2> When a context switch happens, how do threads incur less overhead? Before this thread, a thread from a different process may have been running, so the kernel would still have to load all the resources even though resources are shared between the threads of a process.
How does the kernel distinguish between a thread and a process?
From http://www.kernel.org/doc/ols/2002/ols2002-pages-330-337.pdf and from Linux - Threads and Process
This was addressed during the 2.4 development cycle with the addition of a concept called a 'thread group'. There is a linked list of all tasks that are part of the thread group, and there is an ID that represents the group, called the tgid. This ID is actually the pid of the first task in the group (pid is the task ID assigned with a Linux task), similar to the way sessions and process groups work. This feature is enabled via a flag to clone().
and
In the kernel, each thread has its own ID, called a PID (although it would possibly make more sense to call this a TID, or thread ID), and they also have a TGID (thread group ID) which is the PID of the thread that started the whole process.
Simplistically, when a new process is created, it appears as a thread where both the PID and TGID are the same (new) number.
When a thread starts another thread, that started thread gets its own PID (so the scheduler can schedule it independently) but it inherits the TGID from the original thread.
So the main thread is a thread whose PID and TGID are equal, and this PID is the process's PID. A non-main thread has a different PID but the same TGID.
Inside the kernel, every process and every thread has a unique ID (even threads of the same process), stored in the pid field. Threads of the same process also share a common ID, stored in the tgid field, which is what getpid() returns to userspace. This is what allows the kernel to treat them as distinct entities that are schedulable in their own right.
When a thread is preempted by another thread of the same process, the various segments such as .text, .bss and .data, the file descriptors, etc. are shared, which allows a faster context switch than when different processes (or threads of different processes) are switched.
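A small sketch of that pid/tgid split (syscall(SYS_gettid) is used because glibc only gained a gettid() wrapper in 2.30): getpid() prints the same thread-group ID in every thread, while the per-task ID differs, and in the main thread the two are equal:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void *show_ids(void *name)
    {
        /* getpid() returns the shared tgid; the per-task id needs gettid. */
        printf("%s: getpid()=%ld gettid()=%ld\n",
               (const char *)name, (long)getpid(), (long)syscall(SYS_gettid));
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        show_ids("main thread");                  /* here pid == tid */
        pthread_create(&t, NULL, show_ids, "second thread");
        pthread_join(t, NULL);                    /* same getpid(), new gettid() */
        return 0;
    }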
It seems you mixed some concepts together; being implemented with the same data structure does not mean they behave the same way.
You can read
what-is-the-difference-between-a-process-and-a-thread first to clarify your understanding of processes and threads.

Understanding process pool : how does a process pool use wait() to reap child process?

If a process pool is created with 10 processes
but my program only uses 4 of them,
that means there are 6 idle processes.
To use a process pool,
the pseudo-code is generally like:
    pool = create_process_pool(M)
    for i in 1:N:
        pool.run(task i)
    pool.wait()
    pool.close()
how does pool.wait() decide when it can return?
there are some cases:
1 If M > N, for example M = 10 and N = 6, then there are 4 idle processes. The 6 used processes can inform pool.wait() when they finish running and exit, but the 4 idle processes never ran, so how can they inform pool.wait() that they are finished?
2 If M < N, when a process finishes a task it may be reused for another task. So how can this process know that it will have no more tasks, and so inform pool.wait()?
can anyone explain a bit how process pool works in this regard?
thanks!
You could implement a process pool (e.g. in C++) with
some Process class (in particular, knowing the pid of each fork-ed process). It would have some empty instance (whose pid would be 0).
some global array of Process-es
a Command class representing a command to be started (when possible) in the process pool.
a std::deque<Command> of commands; when possible, a Command would fire up some Process
an event loop taking account of SIGCHLD; when a SIGCHLD occurs, you would waitpid with WNOHANG and get the pid of the ended Process so find the actual Process instance and do whatever is needed ; that event loop would probably pop Command-s to run (so would start non-idle Process-es), manage pipes, etc...
Then idle processes would just be represented by a Process slot with a zero pid; no need to fork them explicitly. So they won't be Unix processes... just some internal representation in the process pool software.
My point is that a process pool mechanism doesn't (necessarily) have to start idle processes (with the fork system call). It could maintain a pool of process descriptors and mark the descriptors of idle slots specially. Such a process descriptor could actually be a pid_t, with empty slots holding (pid_t)0, which is never the PID of any real Unix process. So there is no need to create processes in advance (only lazily, as necessary), and hence no need for idle processes.
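A rough sketch of that lazy approach, assuming POSIX signals and fork(); the slot[] array and reap_finished() helper are made-up names, not part of any pool library:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define POOL_SIZE 10

    static pid_t slot[POOL_SIZE];            /* 0 means the slot is idle */

    static void on_sigchld(int sig)          /* handler only needed so that   */
    {                                        /* SIGCHLD interrupts sigsuspend */
        (void)sig;
    }

    static void reap_finished(void)
    {
        pid_t pid;
        int status;
        /* WNOHANG: collect every already-exited child without blocking. */
        while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
            for (int i = 0; i < POOL_SIZE; i++)
                if (slot[i] == pid)
                    slot[i] = 0;             /* mark the slot idle again */
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = on_sigchld;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGCHLD, &sa, NULL);

        /* Block SIGCHLD except while waiting, to avoid the classic race. */
        sigset_t block, waitmask;
        sigemptyset(&block);
        sigaddset(&block, SIGCHLD);
        sigprocmask(SIG_BLOCK, &block, &waitmask);

        /* Run 4 "tasks": only 4 of the 10 slots ever become real processes. */
        for (int i = 0; i < 4; i++) {
            pid_t pid = fork();
            if (pid == 0) { sleep(1); _exit(0); }   /* stand-in for a command */
            slot[i] = pid;
        }

        /* This is roughly what pool.wait() boils down to: loop until no slot is busy. */
        for (;;) {
            reap_finished();
            int busy = 0;
            for (int i = 0; i < POOL_SIZE; i++)
                busy += (slot[i] != 0);
            if (!busy)
                break;
            sigsuspend(&waitmask);   /* atomically unblock SIGCHLD and sleep */
        }
        printf("all tasks reaped; the idle slots never existed as processes\n");
        return 0;
    }

Note that the six "idle processes" in this design are never created at all, so nothing ever has to inform the wait loop about them.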
I strongly suggest taking a few hours to read Advanced Linux Programming. It will teach you better than I could in a few minutes.
As an example, look at the Unix (or GNU) batch (and at) commands. They do not use any idle processes, and they do manage a pool of process queues. They are free software, so you can study (and improve) their source code.

Terminating a process -- Transition from allproc list to zombieproc list

How does a process get terminated? Let's say a process has three threads A, B & C. Now we send a SIGKILL signal to the process. All is fine so far. Now, each process has an exit status field in its structure! So, when a process is sent a kill signal, my understanding is that it is sent to all the threads. A thread gets killed when it traps into the kernel; if it is already in the kernel, it quits when it returns from the kernel. If a thread is sleeping, it exits when it wakes up. Is my understanding right, or am I misunderstanding/missing something?
If my understanding is correct, when is a process put onto the zombie list? When all the threads have exited, or as soon as it receives a kill signal?
Let's say a process has three threads A, B & C.
Ok. I assume modern Linux, with a kernel supporting threads (Linux 2.6 + glibc > 2.3).
Then the process (or thread group) consists of 3 threads (or rather, there are 3 threads with different tids and the same tgid = PID).
Now we send a SIGKILL signal to the process.
So, you use a tgid (PID) here. Ok.
Now, each process has an exit status field in its structure!
Wwwhat? Yes, but killing and exiting from a thread group use special code to deliver the right exit code to the waiter. For killing, the exit status is taken from the signal; for exiting (the exit_group syscall) it is the argument of the syscall.
So, when a process is sent a kill signal,... it is sent to all the threads.
No.
Basically there can be two kinds of signals:
process-wide - it will be delivered to ANY thread in the process
thread-directed (can't name it correctly) - which is delivered by tid to one particular thread and not another.
So, SIGKILL is process-wide; it will kill the entire process. It is delivered to some (any one) thread.
When the kernel delivers this signal, it calls the do_group_exit() function ( http://lxr.linux.no/linux+v2.6.28/kernel/exit.c#L1156 called from http://lxr.linux.no/linux+v2.6.28/kernel/signal.c#L1870 ) to kill all threads in the thread group (in the process).
There is a zap_other_threads() function that iterates over all threads and kills them (by resending a thread-directed SIGKILL to each) http://lxr.linux.no/linux+v2.6.28/kernel/signal.c#L966
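From userspace, the same distinction is visible with kill() (process-directed, any thread may receive it) versus pthread_kill() (thread-directed). A small sketch, using SIGUSR1 since SIGKILL can neither be caught nor waited for:

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        sigset_t *set = arg;
        int sig;
        sigwait(set, &sig);              /* wait specifically for SIGUSR1 */
        printf("worker thread got signal %d\n", sig);
        return NULL;
    }

    int main(void)
    {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        /* Block SIGUSR1 so it stays pending until sigwait() accepts it;
         * the worker inherits this signal mask. */
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        pthread_t t;
        pthread_create(&t, NULL, worker, &set);

        pthread_kill(t, SIGUSR1);        /* thread-directed: queued for that thread only */
        pthread_join(t, NULL);

        /* kill(getpid(), SIGTERM) would be the process-directed counterpart:
         * delivered to any one thread, and its default action ends them all. */
        return 0;
    }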
when is a process put into zombie list?
After the do_exit() kernel function is called. It has a tsk->state = TASK_DEAD; line at the end.
when all the threads exited or as soon as it receives a kill signal?
The moment when the task's state is set to TASK_DEAD comes after it receives SIGKILL. By this moment the signal has already been redelivered to all threads of the process. I can't find the actual exit time of the threads, but all of them have a pending fatal-signal flag, so they will be killed at the next reschedule.
UPDATE: all threads of the process must be killed (must receive the KILL signal and do their cleanup), as they have accounting information that gets accumulated in the first thread (here "first" means not the first-started thread, but the thread that got the original process-wide SIGKILL, or the thread that called the exit_group syscall). The first thread must wait for all the other threads, and it will change its status only after that.
In FreeBSD, a zombie process cannot execute any code. Therefore, everything that needs the moribund process to do something is performed before that point. If you see a process in this state in ps(1) (usually only if it gets stuck), it has a usual state such as D, S, R or I, with E (trying to exit) appended to it.
A signal is delivered to one thread (either a particular thread or any thread, depending on how the signal was generated). The act of terminating the process (default action of various signals) has a process-global effect. One of the things that happens is that the thread that was chosen to deliver the signal (or that called _exit(2)) requests all other threads to exit.
A thread does not have an exit status at the kernel level; the value available via pthread_join() is a userland feature.

Threads: some questions

I have couple of questions on threads. Could you please clarify.
Suppose a process has one or more threads. If the process is preempted/suspended, do the threads also get preempted, or do they continue to run?
When the suspended process is rescheduled, do the process's threads also get scheduled? If the process has multiple threads, which threads will be rescheduled and on what basis?
If a thread in the process is running and receives a signal (say Ctrl-C) whose default action is to terminate the process, does only the running thread terminate or does the whole process terminate as well? What happens to the threads if the running process terminates because of some signal?
If a thread does fork followed by exec, does the exec'ed program overlay the address space of the parent process or of the running thread? If it overlays the parent process, what happens to the threads, their data and the locks they are holding, and how do they get scheduled once the exec'ed process terminates?
Suppose a process has multiple threads; how do the threads get scheduled? If one of the threads blocks on some I/O, how do the other threads get scheduled? Are the threads only scheduled while the parent process is running?
While a thread is running, what does the kernel's current variable point to (the parent process's task_struct or the thread's task_struct)?
If the process with the thread is running, does the parent process get preempted when the thread starts, and how does each thread get scheduled?
If a process running on a CPU creates multiple threads, can the threads created by the parent process be scheduled on another CPU on a multiprocessor system?
Thanks,
Ganesh
First, I should clear up some terminology that you appear to be confused about. In POSIX, a "process" is a single address space plus at least one thread of control, identified by a process ID (PID). A thread is an individually-scheduled execution context within a process.
All processes start life with just one thread, and all processes have at least one thread. Now, onto the questions:
Suppose a process has one or more threads. If the process is preempted/suspended, do the threads also get preempted, or do they continue to run?
Threads are scheduled independently. If a thread blocks on a function like connect(), then other threads within the process can still be scheduled.
It is also possible to request that every thread in a process be suspended, for example by sending SIGSTOP to the process.
When the suspended process is rescheduled, do the process's threads also get scheduled? If the process has multiple threads, which threads will be rescheduled and on what basis?
This only makes sense in the context that an explicit request was made to stop the entire process. If you send the process SIGCONT to restart the process, then any of the threads which are not blocked can run. If more threads are runnable than there are processors available to run them, then it is unspecified which one(s) run first.
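A tiny sketch of that stop/continue behaviour from outside (the child here has only its main thread, but SIGSTOP and SIGCONT act on the whole thread group either way):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t child = fork();
        if (child == 0) {
            for (int i = 0; ; i++) {          /* child: tick once a second */
                printf("tick %d\n", i);
                sleep(1);
            }
        }

        sleep(3);
        kill(child, SIGSTOP);                 /* suspends every thread in the child */
        puts("child stopped for 3 seconds");
        sleep(3);
        kill(child, SIGCONT);                 /* resumes every thread in the child */
        sleep(3);

        kill(child, SIGTERM);                 /* clean up the demo child */
        waitpid(child, NULL, 0);
        return 0;
    }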
If a thread in the process is running and receives a signal (say Ctrl-C) whose default action is to terminate the process, does only the running thread terminate or does the whole process terminate as well? What happens to the threads if the running process terminates because of some signal?
If a thread receives a signal like SIGINT or SIGSEGV whose action is to terminate the process, then the entire process is terminated. This means that every thread in the process is unceremoniously killed.
If the thread does fork followed by exec, does the exece'd program overlays the address space of parent process or the running thread? If it overlays the parent process what happens to threads, their data, locks they are holding and how they get scheduled once the exec'd process terminates.
The fork() call creates a new process by duplicating the address space of the original process, and duplicating just the single thread that called fork() within that new address space.
If that thread in the new process calls execve(), it will replace the new, duplicated address space with the exec'd program. The original process, and all its threads, continue running normally.
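A small sketch of that behaviour (assuming /bin/echo exists, as a stand-in for any program): the parent's extra thread keeps running while the single-threaded child replaces itself with exec:

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void *background(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 3; i++) {          /* keeps running in the parent only */
            printf("parent background thread still alive\n");
            sleep(1);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, background, NULL);

        pid_t pid = fork();                    /* child: a copy of the address space, */
        if (pid == 0) {                        /* but containing only this one thread */
            char *argv[] = { "echo", "child: after exec there is one thread", NULL };
            execv("/bin/echo", argv);          /* replaces the duplicated image */
            _exit(127);                        /* only reached if exec failed   */
        }

        waitpid(pid, NULL, 0);
        pthread_join(t, NULL);                 /* the parent's threads are untouched */
        return 0;
    }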
Suppose a process has multiple threads; how do the threads get scheduled? If one of the threads blocks on some I/O, how do the other threads get scheduled? Are the threads only scheduled while the parent process is running?
The threads are scheduled independently. Any of the threads that are not blocked can run.
While a thread is running, what does the kernel's current variable point to (the parent process's task_struct or the thread's task_struct)?
Each thread has its own task_struct within the kernel. What userspace calls a "thread" is called a "process" in kernel space. Thus current always points at the task_struct corresponding to the currently executing thread (in the userspace sense of the word).
If the process with [a second] thread is running, when the thread starts does the parent process gets preempted and how each threads gets scheduled?
Presumably you mean "the process's main thread" rather than "parent process" here. As before, the threads are scheduled independently. It's unspecified whether one runs before the other - and if you have multiple CPUs, both might run simultaneously.
If a process running on a CPU creates multiple threads, can the threads created by the parent process be scheduled on another CPU on a multiprocessor system?
That's really up to the kernel, but the threads are certainly allowed to execute on other CPUs.
Depends. If a thread is preempted because the OS scheduler decides to give CPU time to some other thread, then the other threads in the process will continue running. If the process is suspended (i.e. it gets the SIGSTOP signal) then AFAIK all the threads will be suspended.
When a suspended process is woken up, all the threads are marked as waiting or blocked (if they are waiting, e.g. on a mutex). Then the scheduler at some point runs them. There is no guarantee about any specific order in which the threads run after the process is woken up.
The process will terminate, and with it the threads as well.
When you fork you get a new address space, so there is no "overlay". Note that fork() and the exec() family affect the entire process, not only the thread from which they were called. When you call fork() in a multi-threaded process, the child gets a copy of that process, but with only the calling thread. Then if you call exec() in one or both of the processes (presumably only in the child process, but that's up to you), the process which calls exec() (and with it, all its threads) is replaced by the exec()'ed program.
The thread scheduling order is decided by the OS scheduler, there is no guarantee given about any particular order.
From the kernel perspective a process is an address space with one or more threads (and some other gunk). There is no concept of threads that somehow exist without a process.
There is no such thing as a process without at least one thread. A "plain process" is just a process with a single thread.
Probably yes. This is determined by the OS scheduler. Note that there are APIs and tools (numactl) that one can use to force some thread(s) to run on a specific CPU core.
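As an example of such an API, here is a sketch using the GNU-specific pthread_attr_setaffinity_np() to start a thread already pinned to CPU 0 (numactl does the same kind of thing from the shell):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        (void)arg;
        printf("worker running on CPU %d\n", sched_getcpu());
        return NULL;
    }

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                          /* restrict the thread to CPU 0 */

        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof set, &set);

        pthread_t t;
        pthread_create(&t, &attr, worker, NULL);   /* the thread starts already pinned */
        pthread_join(t, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }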
Assuming your questions are about POSIX threads, then
1a. A process that's preempted by the O/S will have all its threads preempted.
1b. The O/S will suspend all the threads of a process that is sent a SIGSTOP.
The O/S will resume all threads of a suspended process that is sent a SIGCONT.
By default, a SIGINT will terminate all the threads in a process.
If a thread calls fork(), only the calling thread is duplicated in the new process. If that process then calls one of the exec() functions, its image is replaced by the exec()'ed program.
POSIX allows for user-selection of the thread scheduling algorithm.
I don't understand the question.
I don't understand the question.
How threads are mapped to CPU-s is implementation-dependent. Many implementations will try to distribute threads amongst the available CPU-s to improve performance.
The Linux kernel doesn't distinguish between threads and processes. As far as the kernel is concerned, a thread is simply another process which happens to share an address space with other processes. (You would call the set of "processes" (i.e. threads) which share a single address space a "process".)
So POSIX threads are scheduled exactly as full-blown processes would be. There is no difference in scheduling whether you have one process with five threads, or five separate processes.
There are kernel calls that provide fine-grained control over what is shared between processes. The POSIX threads API is built on top of them.
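A sketch of that lower-level interface, using glibc's clone() wrapper with a hand-picked set of sharing flags. This is only illustrative (pthread_create() passes additional flags such as CLONE_THREAD and manages stacks and TLS itself), but it shows that "thread vs. process" is really a question of which resources the new task shares:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int shared_counter;              /* visible to the clone because of CLONE_VM */

    static int child_fn(void *arg)
    {
        (void)arg;
        shared_counter = 42;                /* writes straight into the caller's memory */
        return 0;
    }

    int main(void)
    {
        const size_t stack_size = 1024 * 1024;
        char *stack = malloc(stack_size);
        if (!stack)
            return 1;

        /* Share the address space, filesystem info, file descriptors and signal
         * handlers with the caller. Real thread libraries pass more flags
         * (e.g. CLONE_THREAD, CLONE_SETTLS) and manage TLS and thread ids. */
        int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
        /* The stack grows down on x86 and most other architectures,
         * so pass the top of the allocated region. */
        pid_t tid = clone(child_fn, stack + stack_size, flags, NULL);
        if (tid == -1) {
            perror("clone");
            return 1;
        }

        waitpid(tid, NULL, 0);              /* reap the clone like a child process */
        printf("shared_counter = %d\n", shared_counter);   /* prints 42 */
        free(stack);
        return 0;
    }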
