Kernel thread exit in Linux - multithreading

I'm here to ask about the difference between a process and a thread in Linux. I know that a thread in Linux is just a "task", which shares with its parent process the things they need to have in common (the address space and other important information). I also know that both are created by calling the same function (clone()), but there's still something I'm missing: what really happens when a thread exits? What function is called inside the Linux kernel?
I know that when a process exits it calls the do_exit() function, but here or somewhere else there should be a way to tell whether it is just a thread exiting or a whole process. Can you explain this to me or point me to a textbook? I tried 'Understanding the Linux Kernel' but I was not satisfied with it.
I'm asking because I need to add fields to the task_struct structure, but I need to distinguish how that information is managed for a process and its children.
Thank you.

The exit() syscall exits a single thread, and the exit_group() syscall exits the entire POSIX process ("thread group"). Inside the kernel, exit() ends up in do_exit(), while exit_group() goes through do_group_exit(), which terminates every thread in the group.
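For what it's worth, here is a minimal userspace sketch of the distinction (note that glibc's exit() actually invokes exit_group, and that calling the raw exit syscall bypasses the pthread library's normal cleanup, so this is only for illustration):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Terminates only the calling thread, roughly what pthread_exit() boils down to. */
static void *worker(void *arg)
{
    (void)arg;
    printf("worker: exiting just this thread\n");
    syscall(SYS_exit, 0);          /* kernel side: do_exit() for this task only */
    return NULL;                   /* never reached */
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    printf("main: still alive, now ending the whole thread group\n");
    syscall(SYS_exit_group, 0);    /* kernel side: do_group_exit() -> all threads */
}
```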

The main difference between processes and threads is that processes run in their own virtual memory space, apart from every other process. That means two processes cannot access each other's data. The only way for two processes to interact is through the operating system somehow (shared memory sections, semaphores, sockets, etc.).
Threads on the other hand all exist within their creating process. That means threads have access to all the same data (variables, pointers, handles, etc.) that any other thread in the same process has. That is the main difference.
There are some implications of this. For instance, when the process terminates for some reason, all its threads go with it. It is also a lot easier to get multi-processing errors like torn data with threads, just because nothing forces you to use the OS synchronization functions that you really ought to be using.
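As a minimal sketch of that kind of shared-state hazard (the counter name and iteration counts are arbitrary): two threads incrementing the same variable need a mutex, while two separate processes simply could not reach each other's counter at all.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                        /* shared by every thread in the process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);              /* without this, updates can be lost */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);         /* 2000000 only because of the mutex */
    return 0;
}
```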

Related

Is a coroutine a kind of thread that is managed by the user-program itself (rather than managed by the kernel)?

In my opinion,
Kernel is an alias for a running program whose program text is in the kernel area and which can access all memory spaces;
Process is an alias for a running program whose program has an independent memory space in the user memory area. Which process gets use of the CPU is completely managed by the kernel;
Thread is an alias for a running program whose program text is in the memory space of a process and which completely shares that memory space with the other threads of the same process. Which thread gets use of the CPU is completely managed by the kernel;
Coroutine is an alias for a running program whose program text is in the memory space of a process. It is a user-level thread whose scheduling the process decides for itself (not the kernel); the kernel is only responsible for allocating CPU resources to the process.
Since the process itself has no scheduling authority the way the kernel does, coroutines can only be concurrent, not parallel.
Am I correct in saying the above?
process is an alias for a running program...
The modern way to think of a process is to think of it as a container for a collection of threads and the resources that those threads need to execute.
Every process (except for "zombie" processes that can exist in some systems) must have at least one thread. It also has a virtual address space, open file handles, and maybe sockets and other resources that are shared by the threads.
Thread is an alias for a running program...
The problem with saying that is, "running program" sounds too much like "process," and a thread is most definitely not a process. (E.g., a thread can only exist in a process.)
A computer scientist might tell you that a thread is one particular execution of the application's code. I like to think of a thread as an independent agent who executes the code.
coroutine...is a user thread...
I'm going to mostly leave that one alone. "Coroutine" seems to mean something different from the highly formalized, and not particularly useful, coroutines that I learned about more than forty years ago. What people call "coroutines" today seem to have something in common with what I call "green threads," but there are details of how and when and why they are used that I don't yet understand.
Green threads (a.k.a. "user-mode threads") simply are threads that the kernel doesn't know about. They are pretty much just like the threads that the kernel does know about except that the kernel scheduler never preempts them because, duh! it doesn't know about them. Context switches between green threads can only happen at specific points where the application allows it (e.g., by calling a yield() function or by calling some library function that is a documented yield point).
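To make the "yield point" idea concrete, here is a minimal green-thread sketch using the (somewhat dated, but illustrative) ucontext API; the task names, stack sizes, and iteration counts are arbitrary. The kernel sees a single thread the whole time, and switching happens only at the explicit swapcontext() calls.

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, a_ctx, b_ctx;

static void task_a(void)
{
    for (int i = 0; i < 3; i++) {
        printf("A step %d\n", i);
        swapcontext(&a_ctx, &b_ctx);   /* cooperative yield to B */
    }
}

static void task_b(void)
{
    for (int i = 0; i < 3; i++) {
        printf("B step %d\n", i);
        swapcontext(&b_ctx, &a_ctx);   /* cooperative yield to A */
    }
}

int main(void)
{
    static char stack_a[64 * 1024], stack_b[64 * 1024];

    getcontext(&a_ctx);
    a_ctx.uc_stack.ss_sp = stack_a;
    a_ctx.uc_stack.ss_size = sizeof stack_a;
    a_ctx.uc_link = &main_ctx;             /* come back here when A finishes */
    makecontext(&a_ctx, task_a, 0);

    getcontext(&b_ctx);
    b_ctx.uc_stack.ss_sp = stack_b;
    b_ctx.uc_stack.ss_size = sizeof stack_b;
    b_ctx.uc_link = &main_ctx;
    makecontext(&b_ctx, task_b, 0);

    swapcontext(&main_ctx, &a_ctx);        /* start with A; output interleaves A and B */
    return 0;
}
```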
kernel is an alias for a running program...
The kernel also is most definitely not a process.
I don't know every detail about every operating system, but the bulk of kernel code does not run independently of the applications that the kernel serves. It only runs when an application thread enters kernel mode by making a system call. The thread that runs the kernel code still belongs to the application process, but the code that determines the thread's behavior at that point is written or chosen by the kernel developers, not by the application developers.

Multi-threaded fork()

In a multi-threaded application, if a thread calls fork(), it will copy the state of only that thread. So the child process created would be a single-thread process. If some other thread were to hold a lock required by the thread which called the fork(), that lock would never be released in the child process. This is a problem.
To counter this, we could modify fork() in two ways. Either we copy all the threads instead of only that single one, or we make sure that any lock held by the (other) non-copied threads is released. What would the modified fork() system call look like in each of these cases? And which of the two would be better, or what are the advantages and disadvantages of each option?
This is a thorny question.
POSIX has pthread_atfork() to work through the mess of mixing forks and thread creation. The NOTES section of that man page discusses mutexes etc. However, it acknowledges that getting it right is hard.
The function isn't so much an alternative to fork() as it is a way to explain to the pthread library how your program needs to be prepared for the use of fork().
In general, not launching a thread from the child of fork(), but either exiting that child or calling exec() as soon as possible, will minimize problems.
This post has a good discussion of pthread_atfork().
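Here is a minimal sketch of how pthread_atfork() is typically used to keep one particular mutex consistent across fork() (the mutex name and handlers are arbitrary; doing this correctly for every lock in a real program is the hard part the man page warns about):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

/* Runs in the parent just before fork(): take the lock so it is in a known state. */
static void prepare(void) { pthread_mutex_lock(&big_lock); }

/* Run after fork(), in the parent and in the child respectively: release it again. */
static void in_parent(void) { pthread_mutex_unlock(&big_lock); }
static void in_child(void)  { pthread_mutex_unlock(&big_lock); }

int main(void)
{
    pthread_atfork(prepare, in_parent, in_child);

    pid_t pid = fork();
    if (pid == 0) {
        /* The child can now take big_lock safely, even if some other thread
           in the parent would otherwise have held it at the moment of the fork. */
        pthread_mutex_lock(&big_lock);
        printf("child: lock acquired\n");
        pthread_mutex_unlock(&big_lock);
        _exit(0);
    }
    pthread_mutex_lock(&big_lock);
    printf("parent: lock acquired\n");
    pthread_mutex_unlock(&big_lock);
    return 0;
}
```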
...Or we can make sure that any lock held by the (other) non-copied threads will be released.
That's going to be harder than you realize because a program can implement "locks" entirely in user-mode code, in which case, the OS would have no knowledge of them.
Even if you were careful only to use locks that were known to the OS you still have a more general problem: Creating a new process with just the one thread would effectively be no different from creating a new process with all of the threads and then immediately killing all but one of them.
Read about why we don't kill threads. In a nutshell: locks aren't the only state that needs to be cleaned up. Any of the threads that existed in the parent but not in the child could, at the moment of the fork call, have been in the middle of making a mess that needs to be cleaned up. If that thread doesn't exist in the child, then you've lost the knowledge of what needs to be cleaned up.
we can copy all the threads instead of only that single one...
That also is a potential problem. The one thread that calls fork() would know when and why fork() was called, and it would be prepared for the fork call. None of the other threads would have any warning. And, if any of those threads is interacting with something outside of the process (e.g., talking to a remote service), then where you previously had one client talking to the service, you suddenly have two clients talking to the same service, and they both think they are the only one. That's not going to end well.
Don't call fork() from multi-threaded programs.
In one project I worked on, we had a big multi-threaded program that needed to spawn other processes. The way we did it: we had it spawn a simple, single-threaded "helper" program before it created any new threads. Then, whenever it needed to spawn another process, it sent a message to the helper, and the helper did it.
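A rough sketch of that pattern, under deliberately simplified assumptions (the protocol here is just one shell command per write over a socketpair; a real implementation would need message framing, error handling, and a way to report exit status back):

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork the single-threaded helper *before* the main program creates any
   threads; afterwards the main program only writes command strings to the fd. */
static int spawn_helper(void)
{
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    if (fork() == 0) {                     /* helper process: still single-threaded */
        close(sv[0]);
        char buf[4096];
        ssize_t n;
        while ((n = read(sv[1], buf, sizeof buf - 1)) > 0) {
            buf[n] = '\0';
            if (fork() == 0) {             /* safe: there are no threads in the helper */
                execl("/bin/sh", "sh", "-c", buf, (char *)NULL);
                _exit(127);
            }
            wait(NULL);
        }
        _exit(0);
    }
    close(sv[1]);
    return sv[0];                          /* the main program keeps this end */
}

int main(void)
{
    int fd = spawn_helper();
    /* ... later, possibly from a heavily multi-threaded program ... */
    const char *cmd = "echo spawned by the helper";
    write(fd, cmd, strlen(cmd));
    close(fd);
    return 0;
}
```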

Forking vs Threading

I have used threading before in my applications and know its concepts well, but recently in my operating systems lecture I came across fork(), which is something similar to threading.
I searched for the difference between them and came to know that:
Fork is nothing but a new process that looks exactly like the old or the parent process, but it is still a different process with a different process ID and its own memory.
Threads are light-weight processes which have less overhead.
But there are still some questions in my mind.
When should you prefer fork() over threading and vice versa?
If I want to call an external application as a child, should I use fork() or threads to do it?
While doing a Google search I found people saying it is a bad thing to call fork() inside a thread. Why do people want to call fork() inside a thread when they do similar things?
Is it true that fork() cannot take advantage of a multiprocessor system because the parent and child process don't run simultaneously?
The main difference between the forking and threading approaches is one of operating system architecture. Back in the days when Unix was designed, forking was an easy, simple mechanism that answered the mainframe and server type requirements best, and as such it was popularized on Unix systems. When Microsoft re-architected the NT kernel from scratch, it focused more on the threading model. As such, there is still a notable difference today, with Unix systems being efficient at forking and Windows more efficient with threads. You can most notably see this in Apache, which uses the prefork strategy on Unix and thread pooling on Windows.
Specifically to your questions:
When should you prefer fork() over threading and vice-verse?
Prefer fork() on a Unix system when you're doing a far more complex task than just instantiating a worker, or when you want the implicit security sandboxing of separate processes.
If I want to call an external application as a child, then should I use fork() or threads to do it?
If the child will do an identical task to the parent, with identical code, use fork. For smaller subtasks use threads. For separate external processes use neither, just call them with the proper API calls.
While doing google search I found people saying it is bad thing to call a fork() inside a thread. why do people want to call a fork() inside a thread when they do similar things?
Not entirely sure but I think it's computationally rather expensive to duplicate a process and a lot of subthreads.
Is it True that fork() cannot take advantage of multiprocessor system because parent and child process don't run simultaneously?
This is false; fork() creates a new process, which then takes advantage of all the features available to processes in the OS task scheduler.
A forked process is called a heavy-weight process, whereas a threaded process is called a light-weight process.
The following are the differences between them:
A forked process is considered a child process, whereas a threaded process is called a sibling.
A forked process shares no resources such as code, data, or stack with the parent process, whereas threads share code and data with their siblings but each has its own stack.
Process switching requires the help of the OS, whereas thread switching (for user-level threads) does not.
Creating multiple processes is a resource-intensive task, whereas creating multiple threads is less resource intensive.
Each process runs independently, whereas one thread can read/write another thread's data.
Thread and process lecture
fork() spawns a new copy of the process, as you've noted. What isn't mentioned above is the exec() call which often follows. This replaces the existing process with a new process (a new executable) and as such, fork()/exec() is the standard means of spawning a new process from an old one.
e.g. that's how your shell will invoke a process from the command line. You specify your process (ls, say) and the shell forks and then execs ls.
Note that this operates at a very different level from threading. Threading runs multiple lines of execution intra-process. Forking is a means of creating new processes.
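For reference, a minimal sketch of that fork()/exec() pattern (running ls here is just an example, and error handling is pared down):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: replace this process image with ls */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");           /* only reached if exec failed */
        _exit(127);
    }
    /* parent: wait for the child, the way a shell waits for a foreground job */
    int status;
    waitpid(pid, &status, 0);
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```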
As #2431234123412341234123 said, on Linux, thanks to COW, processes are not much heavier than threads, and it boils down to their usage. COW (copy-on-write) means that a memory page of the forked process gets copied only when the forked process makes changes to it; otherwise the OS keeps redirecting it to the pages of the parent process.
From a programming use case, let us say that in heap memory you have a big data structure, a 2D array[2000000][100] (200 MB), and the page size of the kernel is around 4 KB. When the process is forked, no new memory for this array will be allocated. If one particular row (100 bytes) is changed (in either the parent or the child), only the corresponding page (4 KB, or 8 KB if it straddles two pages) will be copied and updated for the forked process.
Other portions of memory work in forked processes the same as in threads (the code is the same; registers and the call stack are separate).
On Windows, as #Niels Keurentjes said, threads might be better from a performance point of view, but on Linux it is more a question of the use case.

When are clone() and fork() better than pthreads?

I am a beginner in this area.
I have studied fork(), vfork(), clone() and pthreads.
I have noticed that pthread_create() creates a thread, which has less overhead than creating a new process with fork(). Additionally, the thread shares file descriptors, memory, etc. with the parent process.
But when are fork() and clone() better than pthreads? Can you please explain it to me by giving a real-world example?
Thanks in advance.
clone(2) is a Linux-specific syscall mostly used to implement threads (in particular, it is used for pthread_create). With various arguments, clone can also have fork(2)-like behavior. Very few people use clone directly; using the pthread library is more portable. You probably need to call the clone(2) syscall directly only if you are implementing your own thread library - a competitor to POSIX threads - and this is very tricky (in particular because locking may require using the futex(2) syscall in machine-tuned, assembly-coded routines; see futex(7)). You don't want to use clone or futex directly because pthreads are much simpler to use.
(The other pthread functions require some book-keeping to be done internally in libpthread.so after a clone during a pthread_create)
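For illustration, a minimal sketch of the glibc clone() wrapper creating a task that shares the parent's address space (the stack size and flags here are deliberately minimal; a real thread library passes many more flags, such as CLONE_THREAD, CLONE_SIGHAND and CLONE_FILES, and does careful TLS and stack setup):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_value = 0;     /* visible to the child because of CLONE_VM */

static int child_fn(void *arg)
{
    (void)arg;
    shared_value = 42;           /* writes directly into the parent's address space */
    return 0;
}

int main(void)
{
    const size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);

    /* The stack grows downwards on most architectures, so pass its top. */
    pid_t pid = clone(child_fn, stack + stack_size, CLONE_VM | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);
    printf("shared_value = %d\n", shared_value);   /* prints 42 */
    free(stack);
    return 0;
}
```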
As Jonathon answered, processes have their own address space and file descriptor set. And a process can execute a new executable program with the execve syscall, which basically reinitializes the address space, the stack and the registers for starting a new program (but the file descriptors may be kept, unless the close-on-exec flag is used, e.g. through O_CLOEXEC for open).
On Unix-like systems, all processes (except the very first process, usually init, of pid 1) are created by fork (or variants like vfork; you could, but don't want to, use clone in such a way that it behaves like fork).
(Technically, on Linux, there are a few weird exceptions which you can ignore, notably kernel processes or threads and some rare kernel-initiated starting of processes like /sbin/hotplug ....)
The fork and execve syscalls are central to Unix process creation (with waitpid and related syscalls).
A multi-threaded process has several threads (usually created by pthread_create) all sharing the same address space and file descriptors. You use threads when you want to work in parallel on the same data within the same address space, but then you should care about synchronization and locking. Read a pthread tutorial for more.
I suggest you read a good Unix programming book like Advanced Unix Programming and/or the (freely available) Advanced Linux Programming.
The strength and weakness of fork (and company) is that they create a new process that's a clone of the existing process.
This is a weakness because, as you pointed out, creating a new process has a fair amount of overhead. It also means communication between the processes has to be done via some "approved" channel (pipes, sockets, files, shared-memory region, etc.)
This is a strength because it provides (much) greater isolation between the parent and the child. If, for example, a child process crashes, you can kill it and start another fairly easily. By contrast, if a child thread dies, killing it is problematic at best -- it's impossible to be certain what resources that thread held exclusively, so you can't clean up after it. Likewise, since all the threads in a process share a common address space, one thread that ran into a problem could overwrite data being used by all the other threads, so just killing that one thread wouldn't necessarily be enough to clean up the mess.
In other words, using threads is a little bit of a gamble. As long as your code is all clean, you can gain some efficiency by using multiple threads in a single process. Using multiple processes adds a bit of overhead, but can make your code quite a bit more robust, because it limits the damage a single problem can cause, and makes it much easier to shut down and replace a process if it does run into a major problem.
As far as concrete examples go, Apache might be a pretty good one. It will use multiple threads per process, but to limit the damage in case of problems (among other things), it limits the number of threads per process, and can/will spawn several separate processes running concurrently as well. On a decent server you might have, for example, 8 processes with 8 threads each. The large number of threads helps it service a large number of clients in a mostly I/O bound task, and breaking it up into processes means if a problem does arise, it doesn't suddenly become completely un-responsive, and can shut down and restart a process without losing a lot.
These are totally different things. fork() creates a new process. pthread_create() creates a new thread, which runs under the context of the same process.
Thread share the same virtual address space, memory (for good or for bad), set of open file descriptors, among other things.
Processes are (essentially) totally separate from each other and cannot modify each other.
You should read this question:
What is the difference between a process and a thread?
As for an example, if I am your shell (e.g. bash), when you enter a command like ls, I am going to fork() a new process, and then exec() the ls executable. (And then I wait() on the child process, but that's getting out of scope.) This happens in an entirely different address space, and if ls blows up, I don't care, because I am still executing in my own process.
On the other hand, say I am a math program, and I have been asked to multiply two 100x100 matrices. We know that matrix multiplication is an embarrassingly parallel problem. So, I have the matrices in memory. I spawn N threads, which each operate on the same source matrices, putting their results in the appropriate location in the result matrix. Remember, these operate in the context of the same process, so I need to make sure they are not stamping on each other's data. If N is 8 and I have an eight-core CPU, I can effectively calculate each part of the matrix simultaneously.
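A minimal sketch of that row-partitioned approach (the 100x100 size and the 4 threads are arbitrary; each thread writes only its own rows of the result, so no locking is needed):

```c
#include <pthread.h>
#include <stdio.h>

#define N 100
#define NTHREADS 4

static double a[N][N], b[N][N], c[N][N];    /* shared by all threads in the process */

struct slice { int first_row, last_row; };

static void *multiply_rows(void *arg)
{
    struct slice *s = arg;
    for (int i = s->first_row; i < s->last_row; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;                   /* each thread owns its rows of c */
        }
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = i + j;
            b[i][j] = (i == j);              /* identity matrix, so c should equal a */
        }

    pthread_t tids[NTHREADS];
    struct slice slices[NTHREADS];
    int rows_per_thread = N / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        slices[t].first_row = t * rows_per_thread;
        slices[t].last_row  = (t == NTHREADS - 1) ? N : (t + 1) * rows_per_thread;
        pthread_create(&tids[t], NULL, multiply_rows, &slices[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tids[t], NULL);

    printf("c[1][2] = %g (expected %g)\n", c[1][2], a[1][2]);
    return 0;
}
```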
The process creation mechanism on Unix using fork() (and family) is very efficient.
Moreover, some Unix systems do not support kernel-level threads, i.e. a thread is not an entity recognized by the kernel. On such a system a thread cannot benefit from CPU scheduling at the kernel level; the pthread library does the scheduling itself, inside the process rather than in the kernel.
Also, on such systems pthreads are implemented only as light-weight processes (e.g. using vfork()).
So on such systems using threads has little point except portability.
As per my understanding, Sun Solaris and Windows have kernel-level threads. (Note that modern Linux does as well: NPTL pthreads are built on clone(), so each thread is a kernel-scheduled task.)
With processes, pipes and Unix domain sockets are very efficient IPC without synchronization issues.
I hope this clarifies why and when threads should be used in practice.

Thread and Process

What is the best definition of a thread and what is a process?
If I call a function, how do I know whether a thread or a process is calling it (or am I not understanding this)? This is on a multi-core system (quad-core).
From http://wiki.answers.com/Q/What_is_the_difference_between_a_computer_process_and_thread:
A single process can have multiple threads that share global data and address space with other threads running in the same process, and therefore can operate on the same data set easily. Processes do not share address space and a different mechanism must be used if they are to share data.
If we consider running a word processing program to be a process, then the auto-save and spell check features that occur in the background are different threads of that process which are all operating on the same data set (your document).
One thing to add is how a multi-core processor handles this. Think of a thread as the sequential execution of your code.
A core in a CPU can only execute one thread at a time. So if this thread is blocked because the program is waiting for an I/O operation to finish, the process is blocked (very simplified example: Word not responding). Multi-threading allows us to execute multiple code paths at the same time. "Same time" is a bit of a lie, since only one thread can actually execute at a time in a core, but the CPU gives some small chunk of time to each thread, so it appears as if all these threads are executing at the same time. A good example here is the spell checker in Word.
If you have multiple cores, the only difference is that in an N-core CPU you can have N threads executing at the same time. To simplify a lot, it doesn't matter which process the threads belong to. To simplify even further, you'd expect an N-times performance increase. :-D
In every modern OS I know of, everything runs in a thread, which runs in a process.
The OS can keep track of multiple processes, and each process can host an arbitrary number of threads. So all code is executed within a thread and within a process (since the thread runs in a process).
The main distinction between the two is that each process has its own virtual address space. Separate processes do not have access to each others' data, file handles or anything else, and are essentially not aware that other processes exist.
On the other hand, every thread in a process shares the same address space, and all threads can therefore inspect or modify each other's data, call the same functions and everything else.
It is often (but not always) the case that one program consists of one process and a number of threads.
A process is composed of one or more threads (one by default for most environments). A process can create additional threads though.
Like the previous answer says, each process has its own memory space (each can have a pointer to 0x12345, with that memory location having different values for each process), while all the threads of a process actually point to the exact same memory location, since they're all in the same memory space.
When calling a function, it's almost always called on the same thread that the caller is running on. In Objective-C, there are exceptions (performSelectorOnMainThread), and there might be for other languages as well, but that sort of functionality is necessary only in special cases.
From a user's point of view, the main distinction is that threads share memory with each other, while processes do not. That means you can easily share data between threads, while processes require some kind of OS call to do so.
Some call this a benefit of threads, but sharing data between multiple threads of control is fraught with danger, so it can be argued that processes lead to more reliable code.
There's a lot more to it, particularly if you are an OS person.

Resources