Mutex/semaphore: thread vs process

I've worked almost entirely with VxWorks and am thus accustomed to only dealing with threads. I'm now working in a Linux environment where both threads and processes exist, and would like to understand: do mutexes and semaphores work across processes in addition to threads?
If they work across processes, I'd like to ask further: how does one process "know" that another process is using the same mutex/semaphore? To ask by example, if process A declares global mutex gMutex, and process B also declares global mutex gMutex, don't those two mutexes exist distinct from each other, within each process' respective memory space? I.e. how could a mutex be considered "shared" between two processes any more than a global struct Foo in each process?
I wonder, too, whether the term "thread safety" has thrown me off into thinking that mutexes and semaphores apply only to threads and not to processes.
I've tried to do a little bit of reading on the matter, but I get the impression that many people inappropriately use "process" and "thread" interchangeably, so I'm hoping for a targeted answer by asking this question specifically.

Related

How are threads made?

I was reading up on threads, and as I understand it, they are a set of values for an execution context. From what I understand, a thread consists of values (registers, PC, stack, etc.) that allow a CPU to continue running a set of instructions.
However, my question is: how are these threads made? I hear some of my professors throw around the word thread as a way to break up a process into multiple (mostly) independent parts of code (i.e. multithreading). How does this work? Is there another section of memory that stores specifically what a thread can run, as well as its state?
First of all, you have to understand that operating systems vary greatly both in their general workings and in their implementations of seemingly identical features.
So don't go into these kinds of questions assuming that because one operating system does something a certain way, other operating systems do it in a similar manner.
Now to your question
how are these threads made?
I will answer it using Linux as an example. When creating a new process, Linux lets you specify which data structures (file descriptors, I/O context, etc.) the new process will share with its parent. You can do this using the clone system call.
You can see in the documentation of clone that it takes flags specifying these sharing properties.
Now you can call a task_struct a thread if it shares all sharable data structures with its parent (because that is consistent with the conventional definition of a thread), and if it shares none of them you would call it a process.
But as far as the Linux kernel is concerned, there is no separate notion of a thread or a process; all you have is a task_struct, which may share certain resources with its parent.
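As a rough illustration of those sharing flags, here is a minimal sketch of using clone directly. The flag combination and stack size are illustrative only, and it deliberately omits things a real thread library needs (CLONE_THREAD, TLS setup, futex-based joining):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int shared_counter = 0;   /* visible to the child because of CLONE_VM */

    static int child_fn(void *arg)
    {
        shared_counter = 42;         /* writes directly into the parent's address space */
        return 0;
    }

    int main(void)
    {
        const size_t stack_size = 1024 * 1024;
        char *stack = malloc(stack_size);
        if (!stack) { perror("malloc"); return 1; }

        /* CLONE_VM | CLONE_FILES | CLONE_FS | CLONE_SIGHAND make the new
         * task_struct share these resources with its parent, i.e. behave
         * like a thread.  Dropping the flags would make it behave like fork(). */
        pid_t tid = clone(child_fn, stack + stack_size,
                          CLONE_VM | CLONE_FILES | CLONE_FS | CLONE_SIGHAND | SIGCHLD,
                          NULL);
        if (tid == -1) { perror("clone"); return 1; }

        waitpid(tid, NULL, 0);                             /* reap the child task */
        printf("shared_counter = %d\n", shared_counter);   /* prints 42 */
        free(stack);
        return 0;
    }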

Why are named semaphores usable by threads in any processes that know their names?

From APUE
POSIX semaphores are available in two flavors: named and unnamed. They differ in how they are created and destroyed, but otherwise work the same. Unnamed semaphores exist in memory only and require that processes have access to the memory to be able to use the semaphores. This means they can be used only by threads in the same process or threads in different processes that have mapped the same memory extent into their address spaces. Named semaphores, in contrast, are accessed by name and can be used by threads in any processes that know their names.
Unnamed semaphores "can be used only by threads in the same process or threads in different processes that have mapped the same memory extent into their address spaces", because "Unnamed semaphores exist in memory only".
What is the reason that named semaphores are usable by threads in any processes that know their names?
Thanks.
From the man page of sem_overview:
On Linux, named semaphores are created in a virtual file system, normally mounted under /dev/shm, with names of the form sem.somename
So those are accessible to "threads in any processes" in much the same way as normal files are.
The pthread library may then map those files into memory.
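To make that concrete, here is a minimal sketch of a named semaphore; the name "/demo" is made up, and you link with -pthread. Any process that calls sem_open with the same name gets a handle to the same underlying object:

    #include <fcntl.h>          /* O_CREAT */
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Every process that uses the same name gets the same kernel-backed
         * semaphore (on Linux it shows up as /dev/shm/sem.demo). */
        sem_t *sem = sem_open("/demo", O_CREAT, 0644, 1);
        if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

        sem_wait(sem);                                     /* enter the critical section */
        printf("pid %d holds the semaphore\n", getpid());
        sem_post(sem);                                     /* leave it */

        sem_close(sem);
        /* sem_unlink("/demo"); */  /* remove the name once no process needs it anymore */
        return 0;
    }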
You're thinking about this backwards. The question is: "if I need to synchronize use of a shared resource between two unrelated processes, how do I do it?" And the answer is "you can give a semaphore a name, and then it's not restricted to use in processes which share memory."
Why is that even useful? Well, the use cases may not be common -- perhaps you've never run into one -- but they certainly exist. There are lots of resources which are shared between unrelated processes: databases, configuration files, serial ports, printer queues, and many more. You can mediate between shared uses of these resources with lock files, but it's clunky and you end up reinventing the wheel on every project. Semaphores, on the other hand, are easy to use and have well-defined documented semantics.
However, most uses of semaphores are indeed between related processes which share memory. And you wouldn't want to unnecessarily pay the overhead for maintaining a name in a filesystem.
So we end up with two kinds of semaphores: cheap low-overhead ones which serve the most frequent use cases, and heavier higher-overhead ones which can be used more generally. The nice thing is that the semantics and API are very similar, so you don't need to learn a whole new set of concepts when you start using named semaphores.
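For contrast, here is a sketch of the cheap, unnamed flavor used between related processes that share memory: the semaphore lives in an anonymous MAP_SHARED mapping set up before fork(). The parent/child hand-off shown is just an illustrative use:

    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Place the unnamed semaphore in memory that both parent and child will map. */
        sem_t *sem = mmap(NULL, sizeof *sem, PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (sem == MAP_FAILED) { perror("mmap"); return 1; }

        sem_init(sem, 1 /* pshared: shared between processes */, 0);

        if (fork() == 0) {                  /* child */
            printf("child: doing some work\n");
            sem_post(sem);                  /* signal the parent */
            _exit(0);
        }

        sem_wait(sem);                      /* parent blocks until the child posts */
        printf("parent: child has finished\n");
        wait(NULL);
        sem_destroy(sem);
        return 0;
    }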

How is a process and a thread the same thing in Linux?

I have read that a process and a thread are the same thing in Linux, for example in this question it says:
There is absolutely no difference between a thread and a process on
Linux.
But I don't understand how a process and a thread can mean the same thing. I mean, a thread is what gets executed by the CPU, and a process is simply an "enclosure" for the threads which allows the threads to have shared memory. The usual diagram of a process containing several threads shows exactly that relationship.
So clearly a process and a thread do not mean the same thing!
Linux didn't use to have special support for (POSIX) threads; it simply treated them as processes that shared their address space, as well as a few other resources (file descriptors, signal actions, ...), with other "processes".
That implementation, while elegant, made certain things that POSIX requires of threads difficult, so Linux did end up gaining special support for threads, and your premise is no longer true.
Nevertheless, processes and threads both remain represented as tasks within the kernel, but the kernel now supports grouping those tasks into thread groups and has APIs for working with them (tgkill, tkill, exit_group, ...).
You can google LinuxThreads and NPTL threads to learn more about the topic.
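A small experiment makes the task/thread-group relationship visible: both threads below report the same getpid() (the thread group id) but different kernel task ids. SYS_gettid is used via syscall() so the sketch also works on older glibc; newer glibc has a gettid() wrapper as well:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void *report(void *arg)
    {
        /* getpid() returns the thread group id; SYS_gettid returns this task's own id. */
        printf("%s: pid=%d tid=%ld\n", (const char *)arg,
               getpid(), syscall(SYS_gettid));
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        report((void *)"main thread");
        pthread_create(&t, NULL, report, (void *)"second thread");
        pthread_join(t, NULL);
        /* Both lines show the same pid but different tids:
         * each thread is its own task_struct inside one thread group. */
        return 0;
    }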

Linux/POSIX: Why doesn't fork() fork *all* threads

It is well-known that the default way to create a new process under POSIX is to use fork() (under Linux this internally maps to clone(...)).
What I want to know is the following: it is well-known that when one calls fork(), "The child process is created with a single thread--the one that called fork()" (cf. https://linux.die.net/man/2/fork). This can of course cause problems if, for example, some other thread currently holds a lock. To me, not also forking all the threads that exist in the process intuitively feels like a "leaky abstraction".
So I would like to know: What is the reason why only the thread calling fork() will exist in the child process instead of all threads of the process? Is there a good technical reason for this?
I know that on Multithreaded fork there is a related question, but the answers given there don't answer mine.
Of these two possibilities:
only the thread calling fork() continues running in the child process
Downside: if another thread was holding on to an internal resource such as a lock, it will not be released.
after fork(), all threads are duplicated into the child process
Downside: threads that were interacting with external resources continue running in parallel. If a thread was appending data to a file, that append now happens twice.
Both are bad, but the first choice only deadlocks the new child process, while the second choice results in corruption outside of the process. This could be described as "bad".
POSIX did standardize pthread_atfork to try to allow automatic cleanup in the first case, but it cannot possibly work.
tl;dr Don't use both threads and forks. Use posix_spawn if you have to.
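For completeness, a minimal sketch of the posix_spawn route: the child program is created and exec'ed in one step, so the multithreaded parent never runs its own code in a half-copied child. The "ls -l" command here is just an arbitrary example:

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *argv[] = { "ls", "-l", NULL };   /* illustrative command */

        /* posix_spawnp creates the child and execs "ls" in one step, so the
         * parent's other threads, locks, and heap state never matter in the child. */
        int err = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
        if (err != 0) { fprintf(stderr, "posix_spawnp failed: %d\n", err); return 1; }

        waitpid(pid, NULL, 0);                 /* wait for the spawned program to finish */
        return 0;
    }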

When is clone() and fork better than pthreads?

I am a beginner in this area.
I have studied fork(), vfork(), clone() and pthreads.
I have noticed that pthread_create() will create a thread, which involves less overhead than creating a new process with fork(). Additionally, the thread will share file descriptors, memory, etc. with the parent process.
But when are fork() and clone() better than pthreads? Can you please explain it to me by giving a real-world example?
Thanks in Advance.
clone(2) is a Linux-specific syscall mostly used to implement threads (in particular, it is used for pthread_create). With various arguments, clone can also have fork(2)-like behavior. Very few people use clone directly; using the pthread library is more portable. You probably need to call the clone(2) syscall directly only if you are implementing your own thread library - a competitor to POSIX threads - and this is very tricky (in particular because locking may require using the futex(2) syscall in machine-tuned, assembly-coded routines; see futex(7)). You don't want to use clone or futex directly because pthreads are much simpler to use.
(The other pthread functions require some book-keeping to be done internally in libpthread.so after the clone made during pthread_create.)
As Jonathon answered, processes have their own address space and file descriptor set. A process can execute a new program with the execve syscall, which basically reinitializes the address space, the stack, and the registers for starting a new program (but the file descriptors may be kept open, unless the close-on-exec flag is set, e.g. through O_CLOEXEC for open).
On Unix-like systems, all processes (except the very first process, usually init, of pid 1) are created by fork (or variants like vfork; you could, but don't want to, use clone in such a way that it behaves like fork).
(Technically, on Linux, there are a few weird exceptions which you can ignore, notably kernel processes or threads and some rare kernel-initiated starting of processes like /sbin/hotplug ....)
The fork and execve syscalls are central to Unix process creation (with waitpid and related syscalls).
A multi-threaded process has several threads (usually created by pthread_create), all sharing the same address space and file descriptors. You use threads when you want to work in parallel on the same data within the same address space, but then you have to take care of synchronization and locking. Read a pthread tutorial for more.
I suggest you read a good Unix programming book such as Advanced Unix Programming and/or the (freely available) Advanced Linux Programming.
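To illustrate the synchronization point above, here is a minimal pthread sketch in which two threads increment a shared counter under a mutex; the counter and iteration count are arbitrary:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);   /* serialize access to the shared counter */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);   /* 200000: the mutex prevents lost updates */
        return 0;
    }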
The strength and weakness of fork (and company) is that they create a new process that's a clone of the existing process.
This is a weakness because, as you pointed out, creating a new process has a fair amount of overhead. It also means communication between the processes has to be done via some "approved" channel (pipes, sockets, files, shared-memory region, etc.)
This is a strength because it provides (much) greater isolation between the parent and the child. If, for example, a child process crashes, you can kill it and start another fairly easily. By contrast, if a child thread dies, killing it is problematic at best -- it's impossible to be certain what resources that thread held exclusively, so you can't clean up after it. Likewise, since all the threads in a process share a common address space, one thread that ran into a problem could overwrite data being used by all the other threads, so just killing that one thread wouldn't necessarily be enough to clean up the mess.
In other words, using threads is a little bit of a gamble. As long as your code is all clean, you can gain some efficiency by using multiple threads in a single process. Using multiple processes adds a bit of overhead, but can make your code quite a bit more robust, because it limits the damage a single problem can cause, and makes it much easier to shut down and replace a process if it does run into a major problem.
As far as concrete examples go, Apache might be a pretty good one. It will use multiple threads per process, but to limit the damage in case of problems (among other things), it limits the number of threads per process, and can/will spawn several separate processes running concurrently as well. On a decent server you might have, for example, 8 processes with 8 threads each. The large number of threads helps it service a large number of clients in a mostly I/O-bound task, and breaking it up into processes means that if a problem does arise, it doesn't suddenly become completely unresponsive, and it can shut down and restart a process without losing much.
These are totally different things. fork() creates a new process. pthread_create() creates a new thread, which runs under the context of the same process.
Threads share the same virtual address space, memory (for good or for bad), and set of open file descriptors, among other things.
Processes are (essentially) totally separate from each other and cannot modify each other.
You should read this question:
What is the difference between a process and a thread?
As for an example, if I am your shell (e.g. bash), when you enter a command like ls, I am going to fork() a new process, and then exec() the ls executable. (And then I wait() on the child process, but that's getting out of scope.) This happens in an entirely different address space, and if ls blows up, I don't care, because I am still executing in my own process.
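That shell behaviour boils down to the classic fork/exec/wait pattern; here is a minimal sketch of it, running "ls -l" as the example command:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();               /* duplicate this process */
        if (pid == 0) {
            /* child: replace our image with the ls program */
            execlp("ls", "ls", "-l", (char *)NULL);
            _exit(127);                   /* only reached if exec fails */
        }
        /* parent (the "shell"): wait for the child to finish */
        int status;
        waitpid(pid, &status, 0);
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }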
On the other hand, say I am a math program, and I have been asked to multiply two 100x100 matrices. We know that matrix multiplication is an embarrassingly parallel problem. So, I have the matrices in memory. I spawn N threads, which each operate on the same source matrices, putting their results in the appropriate location in the result matrix. Remember, these operate in the context of the same process, so I need to make sure they are not stamping on each other's data. If N is 8 and I have an eight-core CPU, I can effectively calculate each part of the matrix simultaneously.
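A sketch of that idea follows; the matrix size, thread count, and empty input matrices are all just placeholders. Each thread computes its own band of rows of the result, so no locking is needed because the outputs never overlap:

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4           /* illustrative number of worker threads */
    #define SIZE     100         /* matrix dimension */

    static double A[SIZE][SIZE], B[SIZE][SIZE], C[SIZE][SIZE];

    struct band { int first_row, last_row; };

    static void *multiply_band(void *arg)
    {
        struct band *b = arg;
        /* Each thread writes only its own rows of C, so no mutex is needed. */
        for (int i = b->first_row; i < b->last_row; i++)
            for (int j = 0; j < SIZE; j++) {
                double sum = 0.0;
                for (int k = 0; k < SIZE; k++)
                    sum += A[i][k] * B[k][j];
                C[i][j] = sum;
            }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        struct band bands[NTHREADS];

        for (int t = 0; t < NTHREADS; t++) {
            bands[t].first_row = t * SIZE / NTHREADS;
            bands[t].last_row  = (t + 1) * SIZE / NTHREADS;
            pthread_create(&tid[t], NULL, multiply_band, &bands[t]);
        }
        for (int t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);

        printf("C[0][0] = %f\n", C[0][0]);   /* zero here, since A and B were never filled in */
        return 0;
    }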
The process creation mechanism on Unix using fork() (and family) is very efficient.
Moreover, some Unix systems historically did not support kernel-level threads, i.e. a thread was not an entity recognized by the kernel. On such a system a thread cannot benefit from CPU scheduling at the kernel level; the scheduling is done by the pthread library in user space, within the process itself.
Also, on such systems pthreads were implemented using vfork() and as lightweight processes only.
So on such a system, using threading has no point except portability.
As per my understanding, Sun Solaris and Windows have kernel-level threads; Linux originally lacked proper POSIX thread support (the old LinuxThreads implementation was built on processes), although modern Linux with NPTL does schedule threads in the kernel.
With processes, pipes and Unix domain sockets are very efficient IPC mechanisms without synchronization issues.
I hope this clarifies why and when threads should be used in practice.

Resources