What is the difference between exit() and exit_group()? Should any process that has multiple threads use exit_group instead of exit?
To answer the question of why I ask: we have a process with around forty threads. When a thread locks up, we print the backtrace of the locked-up thread, automatically exit the process, and then restart it. We wanted to know whether calling exit in this case is any different from calling exit_group.
From the docs: "This system call is equivalent to exit(2) except that it terminates not only the calling thread, but all threads in the calling process's thread group." However, what is the difference between exiting the process and exiting all the threads? Isn't exiting the process the same as exiting all of its threads?
All the thread libraries I know of (e.g. recent glibc or musl-libc) use the low-level clone(2) system call for their thread implementations (and some C libraries even use clone to fork a process).
clone is a difficult Linux syscall. Unless you are a thread library implementor, you should not use it directly, but only through library functions such as pthread_create(3); see also futex(7), which is used by the pthread_mutex* functions.
The clone syscall is used to create tasks: either threads (sharing address space in a multi-threaded process) or processes.
The exit_group syscall is related to exiting these tasks.
In short, you'll never use exit_group or clone directly. Your libc does that for you. So don't worry about exit_group or _Exit; use only the standard library function exit(3), which notably runs handlers registered with atexit(3) and on_exit(3) and flushes <stdio.h> buffers. In the rare cases where you don't want that to happen, use _exit(2) (but you probably don't need that).
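As a minimal sketch of the difference (the handler name goodbye is made up for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void goodbye(void) {
        fputs("atexit handler ran\n", stderr);
    }

    int main(void) {
        atexit(goodbye);
        printf("buffered output");  /* no newline: sits in the stdio buffer */
        exit(0);   /* runs goodbye() and flushes, so both messages appear */
        /* _exit(0) here instead would skip the handler and drop the buffer */
    }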
Of course, if you are reimplementing your own libc from scratch, you need to care about exit_group and clone; otherwise you don't.
If you care about gory implementation details, dive into the source code of your libc. Details may be libc-version-, kernel-version-, and compiler-specific!
pthread_detach marks a thread so that when it terminates, its resources are automatically released without requiring the parent thread to call pthread_join. How can it do this? From the perspective of Linux in particular, there are two resources in particular I am curious about:
As an implementation detail, I would expect that if a wait system call is not performed on the terminated thread, then the thread would become a zombie. I assume that the pthread library's solution to this problem does not involve SIGCHLD, because (I think) it still works regardless of what action the program has specified to occur when SIGCHLD is received.
Threads are created using the clone system call. The caller must allocate memory to serve as the child thread's stack area before calling clone. Elsewhere on Stack Overflow, it was recommended that the caller use mmap to allocate the stack for the child. How can the stack be unmapped after the thread exits?
It seems to me that pthread_detach must somehow provide solutions to both of these problems; otherwise, a program that spawns and detaches many threads would eventually lose the ability to spawn new threads, even though the detached threads may have terminated already.
The pthreads library (on Linux, NPTL) provides a wrapper around lower-level primitives such as clone(2). When a thread is created with pthread_create, the function passed to clone is a wrapper function. That function allocates the stack and stores that information plus any other metadata into a structure, then calls the user-provided start function. When the user-provided start function returns, cleanup happens. Finally, an internal function called __exit_thread is called to make a system call to exit the thread.
When such a thread is detached, it still returns from the user-provided start function and runs the cleanup code as before, except that the stack and metadata are freed as part of this cleanup, since nobody is waiting for the thread to complete. (For a joinable thread, that freeing would normally be handled by pthread_join.)
If a thread is killed or exits without having run, then the cleanup is handled by the next pthread_create call, which will call any cleanup handlers yet to be run.
The reason that no SIGCHLD is sent to the parent, and that wait(2) is not required, is that the CLONE_THREAD flag to clone(2) is used. The manual page says the following about this flag:
A new thread created with CLONE_THREAD has the same parent process as the process that made the clone call (i.e., like CLONE_PARENT), so that calls to getppid(2) return the same value for all of the threads in a thread group. When a CLONE_THREAD thread terminates, the thread that created it is not sent a SIGCHLD (or other termination) signal; nor can the status of such a thread be obtained using wait(2). (The thread is said to be detached.)
As you noted, this is required for the expected POSIX semantics to occur.
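For completeness, a minimal sketch of the usage pattern being described (the sleep is just a crude way to keep the process alive for the demo; a real program would coordinate differently):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        (void)arg;
        puts("worker done");
        return NULL;   /* stack and metadata are reclaimed by the library */
    }

    int main(void) {
        pthread_t t;
        if (pthread_create(&t, NULL, worker, NULL) != 0)
            return 1;
        pthread_detach(t);   /* nobody will ever pthread_join this thread */
        sleep(1);            /* give the detached thread time to finish */
        return 0;
    }

Compile with -pthread. Alternatively, the thread can be created detached from the start by setting PTHREAD_CREATE_DETACHED with pthread_attr_setdetachstate.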
With user-level threads there are N user-level threads running on top of a single kernel thread. This is in contrast to pthreads where only one user thread runs on a kernel thread.
The N user-level threads are preemptively scheduled on the single kernel thread. But what are the details of how that is done?
I heard something suggesting that the threading library arranges for the kernel to send a signal, and that this is the mechanism used to yank execution away from an individual user-level thread and into a signal handler, which can then do the preemptive scheduling.
But what are the details of how state such as registers and thread structs are saved and/or mutated to make this all work? Is there maybe a very simple implementation of user-level threads that is useful for learning the details?
To get the details right, use the source! But this is what I remember from when I read it...
There are two ways user-level threads can be scheduled: voluntarily and preemptively.
Voluntary scheduling: threads must call a function periodically to pass the use of the CPU to another thread. This function is called yield() or schedule() or something like that.
Preemptive scheduling: the library forcefully takes the CPU away from one thread and hands it to another. This is usually done with timer signals, such as SIGALRM (see man ualarm for the details).
As for how to do the actual switch: if your OS is friendly and provides the necessary functions, it is easy. On Linux you have the makecontext() / swapcontext() functions, which make swapping from one task to another easy. Again, see the man pages for details.
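A minimal cooperative-switch sketch using those functions (names like task_stack are made up; this runs on Linux):

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];   /* stack for the user-level thread */

    static void task(void) {
        puts("in user-level thread");
        /* returning resumes uc_link, i.e. main_ctx */
    }

    int main(void) {
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;        /* where to go when task() returns */
        makecontext(&task_ctx, task, 0);

        puts("switching to task");
        swapcontext(&main_ctx, &task_ctx);   /* save main's context, run task */
        puts("back in main");
        return 0;
    }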
Unfortunately, these functions were removed from POSIX, so other UNIX systems may not have them. If that's the case, there are other tricks. The most popular is to call sigaltstack() to set up an alternate stack for handling signals, then kill() yourself to get onto the alternate stack, and longjmp() from the signal handler to the actual user-mode thread you want to run. Clever, huh?
As a side note, on Windows user-mode threads are called fibers and are also fully supported (see the docs for CreateFiber()).
The last resort is assembler, which can be made to work almost everywhere, but is totally system-specific. The steps to create a UMT would be:
Allocate a stack.
Allocate and initialize a UMT context: a struct to hold the value of the relevant CPU registers.
And to switch from one UMT to another:
Save the current context.
Switch the stack.
Restore the next context in the CPU and jump to the next instruction.
These steps are relatively easy to do in assembler, but quite impossible in plain C without support from any of the tricks cited above.
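To make the assembler route concrete, here is a sketch for x86-64 System V Linux only (umt_switch and umt_entry are invented names, and the six callee-saved registers plus the alignment padding are specific to this ABI):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct umt_context { void *rsp; };  /* only the stack pointer is kept here;
                                           everything else lives on the UMT stack */

    /* Save the callee-saved registers on the current stack, swap stack
       pointers, restore the target's registers, and "return" into it. */
    __asm__(
        ".globl umt_switch\n"
        "umt_switch:\n"
        "    pushq %rbp\n pushq %rbx\n pushq %r12\n"
        "    pushq %r13\n pushq %r14\n pushq %r15\n"
        "    movq %rsp, (%rdi)\n"   /* save our stack into *from */
        "    movq (%rsi), %rsp\n"   /* adopt the target's stack  */
        "    popq %r15\n popq %r14\n popq %r13\n"
        "    popq %r12\n popq %rbx\n popq %rbp\n"
        "    ret\n");
    void umt_switch(struct umt_context *from, struct umt_context *to);

    static struct umt_context main_ctx, umt_ctx;

    static void umt_entry(void) {
        puts("hello from the UMT");
        umt_switch(&umt_ctx, &main_ctx);   /* yield back, never to return */
    }

    int main(void) {
        /* Prime a fresh stack so the first switch "returns" into umt_entry. */
        size_t size = 64 * 1024;
        char *stack = malloc(size);
        if (!stack) return 1;
        uintptr_t *top = (uintptr_t *)(stack + size);
        *--top = 0;                         /* padding for 16-byte alignment  */
        *--top = (uintptr_t)umt_entry;      /* ret target of the first switch */
        for (int i = 0; i < 6; i++)
            *--top = 0;                     /* initial rbp, rbx, r12-r15      */
        umt_ctx.rsp = top;

        umt_switch(&main_ctx, &umt_ctx);
        puts("back in main");
        return 0;
    }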
I have been trying to understand the system calls, and want to understand how set_tid_address works. Basically, from what I have read, it returns the PID of the program or process being executed.
I have tested this with ls; however, with some commands like uptime, top, etc., I don't see set_tid_address being used. Why is that?
The clone() syscall can take a CLONE_CHILD_CLEARTID flag, which means that when the child thread exits, the value at child_tidptr (another clone() argument) gets cleared and a wake-up is signaled on the associated futex. This is used to implement pthread_join() (the joining thread waits on the futex).
set_tid_address() makes it possible to pthread_join() the initial thread. More information is in the following LKML threads:
[patch] threading fix, tid-2.5.47-A3
[patch] user-vm-unlock-2.5.31-A2
As to why some programs call set_tid_address() and others don't, the answer is easy. Programs linked (directly or indirectly) to libpthread call set_tid_address. ls is linked to librt, which is linked to libpthread, so it runs the initialization for NPTL.
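You can watch this yourself: under strace, a call like the sketch below shows up as set_tid_address, and it returns the caller's thread ID. (Normally only the C library's startup code does this; the direct syscall() here is purely for illustration.)

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        static int tid_slot;  /* kernel clears this and wakes the futex on exit */
        long tid = syscall(SYS_set_tid_address, &tid_slot);
        printf("thread id: %ld\n", tid);   /* equals the PID for the main thread */
        return 0;
    }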
According to the Linux Programmer's Manual, set_tid_address is used to:
set pointer to thread ID
It returns the caller's thread ID (for the main thread, this equals the PID). Unfortunately, the manual is vague as to when you would actually want to use this system call.
In any case, why do you think that these commands are using set_tid_address?
I'm here to ask about the difference between a process and a thread in Linux. I know that a thread in Linux is just a "task", which shares with the parent process the things they need to have in common (the address space and other important information). I also know that both are created by calling the same function (clone()), but there's still something I'm missing: what really happens when a thread exits? What function is called inside the Linux kernel?
I know that when a process exits it calls the do_exit function, but there, or somewhere else, there should be a way to tell whether it is just a thread exiting or a whole process. Can you explain this to me or point me to a textbook? I tried 'Understanding the Linux Kernel' but was not satisfied with it.
I'm asking because I need to add fields to the task_struct struct, and I need to distinguish how to manage that information for a process versus its children.
Thank you.
The exit() syscall exits a single thread, and the exit_group() syscall exits the entire POSIX process ("thread group").
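A hedged sketch of that difference (compile with -pthread; the raw syscall() is deliberate, since libc's exit() would call exit_group for you):

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        (void)arg;
        sleep(1);
        puts("worker still alive");   /* printed only in the SYS_exit case */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);

        syscall(SYS_exit, 0);   /* only the main thread dies; the worker runs on */
        /* syscall(SYS_exit_group, 0) here would kill the worker too */
    }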
The main difference between processes and threads is that processes run in their own virtual memory space, separate from every other process. That means two processes cannot access each other's data. The only way for two processes to interact is through the operating system somehow (shared memory sections, semaphores, sockets, etc.).
Threads on the other hand all exist within their creating process. That means threads have access to all the same data (variables, pointers, handles, etc.) that any other thread in the same process has. That is the main difference.
There are some implications of this. For instance, when the process terminates for some reason, all its threads go with it. It is also a lot easier to get multi-processing errors like torn data with threads, simply because nothing forces you to use the OS synchronization functions that you really ought to be using.
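A small sketch that makes the difference visible (compile with -pthread; assumes Linux/POSIX):

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int counter = 0;

    static void *thread_fn(void *arg) {
        (void)arg;
        counter = 42;              /* same address space: main sees the change */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, thread_fn, NULL);
        pthread_join(t, NULL);
        printf("after thread: %d\n", counter);   /* 42 */

        counter = 0;
        if (fork() == 0) {
            counter = 42;          /* modifies the child's private copy only */
            _exit(0);
        }
        wait(NULL);
        printf("after fork: %d\n", counter);     /* still 0 in the parent */
        return 0;
    }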
I'm monitoring a process with strace/ltrace in the hope of finding and intercepting a call that checks, and potentially activates, some kind of globally shared lock.
While I've dealt with and read about several forms of interprocess locking on Linux before, I'm drawing a blank on what calls to look for.
Currently my only suspect is futex() which comes up very early on in the process' execution.
Update0
There is some confusion about what I'm after. I'm monitoring an existing process for calls to persistent interprocess memory or the equivalent. I'd like to know what system and library calls to look for. I have no intention of calling these myself, so naturally futex() will come up; I'm sure many libraries implement their locking calls in terms of it, etc.
Update1
I'd like a list of function names, or a link to documentation, for what I should monitor at the ltrace and strace levels (specifying which). Any other good advice on how to track down and locate the global lock in question would be great.
If you can start the monitored process under valgrind, then there are two projects:
http://code.google.com/p/data-race-test/wiki/ThreadSanitizer
and Helgrind
http://valgrind.org/docs/manual/hg-manual.html
Helgrind is aware of all the pthread abstractions and tracks their effects as accurately as it can. On x86 and amd64 platforms, it understands and partially handles implicit locking arising from the use of the LOCK instruction prefix.
So these tools can detect even atomic memory accesses, and they will check pthread usage.
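For example, you would typically launch the target as valgrind --tool=helgrind ./your_program (your_program being a placeholder) and read the race and lock-order reports it prints as the process runs.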
flock is another good one
There are many system calls that can be used for locking: flock, fcntl, and even creat (lock files).
When you are using pthreads/sem_* locks, they may be executed in user space, so you'll never see them in strace: futex is called only for pending operations, i.e. when you actually need to wait.
Some operations can be done in user space only, like spinlocks; you'll never see them unless they do some timed waits for backoff, in which case you may see only things like nanosleep while one lock waits for another.
So there is no "generic" way to trace them.
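That said, file-based locks do show up. A minimal sketch of an fcntl advisory lock that strace would reveal (the lock-file path is made up):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/tmp/example.lock", O_RDWR | O_CREAT, 0644);
        if (fd < 0) return 1;

        struct flock fl;
        memset(&fl, 0, sizeof fl);
        fl.l_type = F_WRLCK;         /* exclusive write lock            */
        fl.l_whence = SEEK_SET;      /* l_start = l_len = 0: whole file */

        if (fcntl(fd, F_SETLKW, &fl) == 0)   /* blocks until available */
            puts("lock acquired");

        /* ... critical section ... */
        close(fd);                   /* closing the fd drops the lock */
        return 0;
    }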
On systems with glibc ~>= 2.5 (glibc + NPTL) you can use:
process-shared POSIX unnamed semaphores (last parameter to sem_init); see the sketch after this list
POSIX mutexes (with PTHREAD_PROCESS_SHARED passed to pthread_mutexattr_setpshared)
POSIX named semaphores (obtained from sem_open/sem_unlink)
System V (SysV) semaphores: semget, semop
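A minimal sketch of the first option, a process-shared unnamed semaphore placed in anonymous shared memory (compile with -pthread; error handling mostly omitted):

    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* the semaphore must live in memory visible to both processes */
        sem_t *sem = mmap(NULL, sizeof(sem_t), PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (sem == MAP_FAILED) return 1;

        sem_init(sem, 1 /* pshared */, 0);   /* 1 = shared between processes */

        if (fork() == 0) {
            puts("child: posting");
            sem_post(sem);
            _exit(0);
        }
        sem_wait(sem);          /* parent blocks until the child posts */
        puts("parent: got it");
        wait(NULL);
        sem_destroy(sem);
        return 0;
    }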
On older systems with glibc 2.2 or 2.3 with LinuxThreads, or on embedded systems with uClibc, you can use ONLY System V (SysV) semaphores for interprocess communication.
upd1: any IPC and sockets must be checked as well.