Do child processes copy entire arrays? - linux

I'm writing a basic UNIX program that involves processes sending messages to each other. My idea to synchronize the processes is to simply have an array of flags to indicate whether or not a process has reached a certain point in the code.
For example, I want all the processes to wait until they've all been created. I also want them to wait until they've all finished sending messages to each other before they begin reading their pipes.
I'm aware that a process performs a copy-on-write operation when it writes to a previously defined variable.
What I'm wondering is, if I make an array of flags, will the pointer to that array be copied, or will the entire array be copied (thus making my idea useless)?
I'd also like any tips on inter-process communication and process synchronization.
EDIT: The processes are writing to each other process's pipe. Each process will send the following information:
typedef struct MessageCDT {
    pid_t destination;
    pid_t source;
    int num;
} Message;
So, just the source of the message and some random number. Then each process will print out the message to stdout: something along the lines of "process 20 received 5724244 from process 3".

Unix processes have independent address spaces. This means that the memory in one is totally separate from the memory in another. When you call fork(), you get a new copy of the process. Immediately on return from fork(), the only thing different between the two processes is fork()'s return value. All of the data in the two processes is the same, but it is a copy. Updates to memory in one cannot be seen by the other, unless you take steps to share the memory.
There are many choices for interprocess communication (IPC) in Unix, including shared memory, semaphores, pipes (named and unnamed), sockets, message queues and signals. If you Google these things you will find lots to read.
In your particular case, trying to make several processes wait until they all reach a certain point, I might use a semaphore or shared memory, depending on whether there is some master process that started them all or not.
If there is a master process that launches the others, then the master could set up the semaphore with a count equal to the number of processes to synchronize and then launch them. Each child could then decrement the semaphore value and wait for the semaphore value to reach zero.
If there is no master process, then I might create a shared memory segment that contains a count of processes and a flag for each process. But when you have two or more processes using shared memory, then you also need some kind of locking mechanism (probably a semaphore again) to ensure that two processes do not try to update the shared memory simultaneously.
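For the master-process variant, here is a minimal sketch of one way it might look, using System V semaphores (whose semop() call can block until the count reaches zero); the number of children is just a placeholder:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/wait.h>
#include <unistd.h>

union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main(void) {
    int nprocs = 4;   /* placeholder: number of children to synchronize */
    int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    union semun arg = { .val = nprocs };
    semctl(semid, 0, SETVAL, arg);               /* start the count at N */

    for (int i = 0; i < nprocs; i++) {
        if (fork() == 0) {
            struct sembuf dec  = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };
            struct sembuf zero = { .sem_num = 0, .sem_op =  0, .sem_flg = 0 };
            semop(semid, &dec, 1);               /* "I have arrived": decrement */
            semop(semid, &zero, 1);              /* block until everyone has arrived */
            /* ... all children continue from here together ... */
            _exit(0);
        }
    }
    while (wait(NULL) > 0)
        ;
    semctl(semid, 0, IPC_RMID);                  /* remove the semaphore */
    return 0;
}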
Keep in mind that reading a pipe that nobody is writing to will block the reader until data appears. I don't know what your processes do, but perhaps that is synchronization enough? One other thing to consider: if you have multiple processes writing to a given pipe, their data may become interleaved if the writes are larger than PIPE_BUF. The value and location of this macro are system dependent.
-Kevin

The entire array of flags will seem to be copied. It will not actually be copied until one process or another writes to it of course. But that's an implementation detail and transparent to the individual processes. As far as each process is concerned, they each get a copy of the array.
There are ways to make this not happen. You can use mmap with the MAP_SHARED option for the memory used for your flags. Then each sub-process will share the same region of memory. There's also Posix shared memory (which I, BTW, think is an awful hack). To find out about Posix shared memory, look at the shm_overview(7) man page.
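For example, a minimal sketch of the MAP_SHARED approach (assuming Linux/glibc, where MAP_ANONYMOUS is available; the array size and process count are arbitrary):

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC 4   /* placeholder number of child processes */

int main(void) {
    /* The mapping is created before fork(), so every child shares it. */
    int *flags = mmap(NULL, NPROC * sizeof(int), PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (flags == MAP_FAILED) { perror("mmap"); return 1; }

    for (int i = 0; i < NPROC; i++) {
        if (fork() == 0) {
            flags[i] = 1;     /* child i sets its flag; the parent sees the update */
            _exit(0);
        }
    }
    while (wait(NULL) > 0)
        ;
    for (int i = 0; i < NPROC; i++)
        printf("flags[%d] = %d\n", i, flags[i]);
    return 0;
}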
But using memory in this way isn't really a good idea. On multi-core systems it's not always the case that when one process (or thread) writes to an area of shared memory, all the other processes see the written value right away. Frequently the value will hang out for a while in the L2 cache and not be immediately flushed.
If you want to communicate using shared memory, you will have to use mutexes or the C++11 atomic operations to ensure that writes are properly seen by the other processes.
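For instance, here is a minimal sketch of one way to do that in C: a pthread mutex marked PTHREAD_PROCESS_SHARED, placed inside the MAP_SHARED region (link with -lpthread; this is an alternative to the C++11 atomics mentioned above, not the only option):

#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared { pthread_mutex_t lock; int counter; };

int main(void) {
    struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (s == MAP_FAILED) { perror("mmap"); return 1; }

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->lock, &attr);     /* the mutex itself lives in shared memory */

    if (fork() == 0) {
        pthread_mutex_lock(&s->lock);
        s->counter++;                        /* the update is visible to the parent */
        pthread_mutex_unlock(&s->lock);
        _exit(0);
    }
    wait(NULL);
    printf("counter = %d\n", s->counter);    /* prints 1 */
    return 0;
}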

Related

Time waste of execv() and fork()

I am currently learning about fork() and execv() and I had a question regarding the efficiency of the combination.
I was shown the following standard code:
pid = fork();
if (pid < 0) {
    // handle fork error
}
else if (pid == 0) {
    execv("son_prog", argv_son);
}
// do father code
I know that fork() clones the entire process (copying the entire heap, etc.) and that execv() replaces the current address space with that of the new program. With this in mind, doesn't that make it very inefficient to use this combination? We are copying the entire address space of a process and then immediately overwriting it.
So my question:
What is the advantage that is achieved by using this combo (instead of some other solution) that makes people still use this, even though we have waste?
You have to create a new process somehow. There are very few ways for a userspace program to accomplish that. POSIX used to have vfork() alongside fork(), and some systems may have their own mechanisms, such as Linux-specific clone(), but since 2008, POSIX specifies only fork() and the posix_spawn() family. The fork + exec route is more traditional, is well understood, and has few drawbacks (see below). The posix_spawn family is designed as a special-purpose substitute for use in contexts that present difficulties for fork(); you can find details in the "Rationale" section of its specification.
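For reference, a minimal sketch of the posix_spawn() route (the program and arguments here are just placeholders):

#include <spawn.h>
#include <stdio.h>
#include <sys/wait.h>

extern char **environ;

int main(void) {
    pid_t pid;
    char *child_argv[] = { "echo", "hello from posix_spawn", NULL };
    int rc = posix_spawn(&pid, "/bin/echo", NULL, NULL, child_argv, environ);
    if (rc != 0) {
        fprintf(stderr, "posix_spawn failed: %d\n", rc);
        return 1;
    }
    waitpid(pid, NULL, 0);   /* the child is reaped like any fork()ed child */
    return 0;
}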
This excerpt from the Linux man page for vfork() may be illuminating:
Under Linux, fork(2) is implemented using copy-on-write pages, so the only penalty incurred by fork(2) is the time and memory required to duplicate the parent’s page tables, and to create a unique task structure for the child. However, in the bad old days a fork(2) would require making a complete copy of the caller’s data space, often needlessly, since usually immediately afterwards an exec(3) is done. Thus, for greater efficiency, BSD introduced the vfork() system call, which did not fully copy the address space of the parent process, but borrowed the parent’s memory and thread of control until a call to execve(2) or an exit occurred. The parent process was suspended while the child was using its resources. The use of vfork() was tricky: for example, not modifying data in the parent process depended on knowing which variables are held in a register.
(Emphasis added)
Thus, your concern about waste is not well-founded for modern systems (not limited to Linux), but it was indeed an issue historically, and there were indeed mechanisms designed to avoid it. These days, most of those mechanisms are obsolete.
Another answer states:
However, in the bad old days a fork(2) would require making a complete copy of the caller’s data space, often needlessly, since usually immediately afterwards an exec(3) is done.
Obviously, one person's bad old days are a lot younger than others remember.
The original UNIX systems did not have the memory for running multiple processes, and they did not have an MMU for keeping several processes in physical memory, ready to run, at the same logical address space: they swapped out to disk any process that wasn't currently running.
The fork system call was almost entirely the same as swapping out the current process to disk, except for the return value and for not replacing the remaining in-memory copy by swapping in another process. Since you had to swap out the parent process anyway in order to run the child, fork+exec was not incurring any overhead.
It's true that there was a period of time when fork+exec was awkward: when there were MMUs that provided a mapping between logical and physical address space, but page faults did not retain enough information for copy-on-write and a number of other virtual-memory/demand-paging schemes to be feasible.
This situation was painful enough, and not just for UNIX, that hardware page-fault handling was adapted to become "replayable" pretty fast.
Not any longer. There's something called COW (Copy On Write): only when one of the two processes (parent/child) tries to write to a shared page is that page actually copied.
In the past:
The fork() system call copied the address space of the calling process (the parent) to create a new process (the child).
The copying of the parent's address space into the child was the most expensive part of the fork() operation.
Now:
A call to fork() is frequently followed almost immediately by a call to exec() in the child process, which replaces the child's memory with a new program. This is what the shell typically does, for example. In this case, the time spent copying the parent's address space is largely wasted, because the child process will use very little of its memory before calling exec().
For this reason, later versions of Unix took advantage of virtual memory hardware to allow the parent and child to share the memory mapped into their respective address spaces until one of the processes actually modifies it. This technique is known as copy-on-write. To do this, on fork() the kernel would copy the address space mappings from the parent to the child instead of the contents of the mapped pages, and at the same time mark the now-shared pages read-only. When one of the two processes tries to write to one of these shared pages, the process takes a page fault. At this point, the Unix kernel realizes that the page was really a "virtual" or "copy-on-write" copy, and so it makes a new, private, writable copy of the page for the faulting process. In this way, the contents of individual pages aren't actually copied until they are actually written to. This optimization makes a fork() followed by an exec() in the child much cheaper: the child will probably only need to copy one page (the current page of its stack) before it calls exec().
It turns out all those COW page faults are not at all cheap when the process has a few gigabytes of writable RAM. They're all gonna fault once even if the child has long since called exec(). Because the child of fork() is no longer allowed to allocate memory even for the single threaded case (you can thank Apple for that one), arranging to call vfork()/exec() instead is hardly more difficult now.
The real advantage to the vfork()/exec() model is you can set the child up with an arbitrary current directory, arbitrary environment variables, and arbitrary fs handles (not just stdin/stdout/stderr), an arbitrary signal mask, and some arbitrary shared memory (using the shared memory syscalls) without having a twenty-argument CreateProcess() API that gets a few more arguments every few years.
It turned out the "oops, I leaked handles being opened by another thread" gaffe from the early days of threading was fixable in userspace without process-wide locking, thanks to /proc. The same would not be fixable in the giant CreateProcess() model without a new OS version, and convincing everybody to call the new API.
So there you have it. An accident of design ended up far better than the directly designed solution.
It's not that expensive (relative to spawning a process directly), especially with copy-on-write forks like you find in Linux, and it's kind of elegant for:
when you really just want to fork off a clone of the current process (I find this to be very useful for testing)
when you need to do something just before loading the new executable
(redirect file descriptors, play with signal masks/dispositions, uids, etc.)
POSIX now has posix_spawn that effectively allows you to combine fork-and-exec (possibly more efficiently than fork+exec; if it is more efficient, it'll usually be implemented through some cheaper but less robust fork (clone/vfork) followed by exec), but the way it achieves #2 is through a ton of relatively messy options, which can never be as complete, powerful, and clean as just allowing you to run arbitrary code just before the new process image is loaded.
A process created by exec() et al. will inherit its file handles from the parent process (including stdin, stdout, stderr). If the parent changes these after calling fork() but before calling exec(), then it can control the child's standard streams.
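A minimal sketch of that pattern (the output file name and the ls command are just placeholders):

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        dup2(fd, STDOUT_FILENO);   /* the child's stdout now points at the file */
        close(fd);
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(127);                /* only reached if exec fails */
    }
    waitpid(pid, NULL, 0);         /* "out.txt" now holds the listing */
    return 0;
}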

Semaphores & threads - what is the point?

I've been reading about semaphores and came across this article:
www.csc.villanova.edu/~mdamian/threads/posixsem.html
So, this page states that if there are two threads accessing the same data, things can get ugly. The solution is to allow only one thread to access the data at the same time.
This is clear and I understand the solution, only why would anyone need threads to do this? What is the point? If the threads are blocked so that only one can execute, why use them at all? There is no advantage. (or maybe this is a just a dumb example; in such a case please point me to a sensible one)
Thanks in advance.
Consider this:
#include <semaphore.h>

sem_t g_shared_variable_mutex;   /* initialized once, e.g. sem_init(&g_shared_variable_mutex, 0, 1) */
int g_shared_variable;

void update_shared_variable() {
    sem_wait( &g_shared_variable_mutex );
    g_shared_variable++;
    sem_post( &g_shared_variable_mutex );
}

void thread1() {
    do_thing_1a();
    do_thing_1b();
    do_thing_1c();
    update_shared_variable(); // may block
}

void thread2() {
    do_thing_2a();
    do_thing_2b();
    do_thing_2c();
    update_shared_variable(); // may block
}
Note that all of the do_thing_xx functions still happen simultaneously. The semaphore only comes into play when the threads need to modify some shared (global) state or use some shared resource. So a thread will only block if another thread is trying to access the shared thing at the same time.
Now, if the only thing your threads are doing is working with one single shared variable/resource, then you are correct - there is no point in having threads at all (it would actually be less efficient than just one thread, due to context switching.)
When you are using multithreading, not every piece of code that runs will be blocking. For example, if you had a queue, and two threads are reading from that queue, you would make sure that no two threads read from the queue at the same time, so that part would be blocking, but that's the part that will probably take the least time. Once you have retrieved the item to process from the queue, all the rest of the code can run asynchronously.
The idea behind threads is to allow simultaneous processing. A shared resource must be governed to avoid things like deadlocks or starvation. If something can take a while to process, then why not create multiple instances of that processing to allow it to finish faster? The bottleneck is just what you mentioned: when a process has to wait for I/O.
When the time spent blocked waiting for the shared resource is small compared to the processing time, that is when you want to use multiple threads.
This is of course a SSCCE (Short, Self Contained, Correct Example)
Let's say you have 2 worker threads that do a lot of work and write the result to a file.
You only need to lock access to the file (the shared resource).
The problem with trivial examples....
If the problem you're trying to solve can be broken down into pieces that can be executed in parallel then threads are a good thing.
A slightly less trivial example - imagine a for loop where the data being processed in each iteration is different every time. In that circumstance you could execute each iteration of the for loop simultaneously in separate threads. And indeed some compilers, like Intel's, will convert suitable for loops to threads automatically for you. In those particular circumstances no semaphores are needed because of the iterations' data independence.
But say you were wanting to process a stream of data, and that processing had two distinct steps, A and B. The threadless approach would involve reading in some data, then doing A, then B, and then outputting the data before reading more input. Or you could have one thread reading and doing A, and another thread doing B and the output. So how do you get the interim result from the first thread to the second?
One way would be to have a memory buffer to contain the interim result. The first thread could write the interim result to a memory buffer and the second could read from it. But with two threads operating independently there's no way for the first thread to know if it's safe to overwrite that buffer, and there's no way for the second to know when to read from it.
That's where you can use semaphores to synchronise the action of the two threads. The first thread takes a semaphore that I'll call empty, fills the buffer, and then posts a semaphore called filled. Meanwhile the second thread will take the filled semaphore, read the buffer, and then post empty. So long as filled is initialised to 0 and empty is initialised to 1 it will work. The second thread will process the data only after the first has written it, and the first won't write it until the second has finished with it.
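What that scheme might look like in code - a minimal sketch with POSIX semaphores and two pthreads, where the "do A" / "do B" work and the loop count are just placeholders:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static int buffer;            /* the interim result */
static sem_t empty, filled;   /* empty starts at 1, filled starts at 0 */

static void *stage_a(void *arg) {
    for (int i = 0; i < 5; i++) {
        sem_wait(&empty);         /* wait until the buffer is free */
        buffer = i * i;           /* "do A": produce the interim result */
        sem_post(&filled);        /* tell stage B there is data */
    }
    return NULL;
}

static void *stage_b(void *arg) {
    for (int i = 0; i < 5; i++) {
        sem_wait(&filled);        /* wait until stage A has written */
        printf("B got %d\n", buffer);   /* "do B" and output */
        sem_post(&empty);         /* the buffer may be overwritten again */
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, 1);
    sem_init(&filled, 0, 0);
    pthread_t a, b;
    pthread_create(&a, NULL, stage_a, NULL);
    pthread_create(&b, NULL, stage_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}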
It's only worth it of course if the amount of time each thread spends processing data outweighs the amount of time spent waiting for semaphores. This limits the extent to which splitting code up into threads yields a benefit. Going beyond that tends to mean that the overall execution is effectively serial.
You can do multithreaded programming without semaphores at all. There's the Actor model or Communicating Sequential Processes (the one I favour). It's well worth looking up JCSP on Wikipedia.
In these programming styles data is shared between threads by sending it down communication channels. So instead of using semaphores to grant another thread access to data, it would be sent a copy of that data down something a bit like a network socket, or a pipe. The advantage of CSP (which limits that communication channel to send-finishes-only-if-receiver-has-read) is that it stops you falling into the many, many pitfalls that plague multithreaded programs. It sounds inefficient (copying data is inefficient), but actually it's not so bad with Intel's QPI architecture or AMD's HyperTransport. And it means that the 'channel' really could be a network connection; scalability built in by design.

fork and IPC mechanism

I'm writing a mono-thread memory heavy proof of concept application.
This application doesn't manipulate much data per se; it will mainly load GBs of data and then do some data analysis on it.
I don't want to manage concurrency via MT implementation, don't want to have to implement locks (i.e. mutexes, spinlocks, ...) so I've decided this time around to use the dear old fork().
On Linux, where memory is CoW, I should be able to efficiently analyse same datasets without having to copy them explicitly and with simple parallel mono-thread logic (again, this is a proof of concept).
Now that I spawn child processes, with fork() it is very easy to set up input parameters for a sub-task (a sub-process in this case), but then I have to get the results back to the main process. And sometimes these results are tens of GB large. All the IPC mechanisms I have in mind are:
PIPEs/Sockets (and then epoll equivalent to wait for results in a mono-thread fashion)
Hybrid PIPEs/Shared Memory (epoll equivalent to wait for results with reference to Shared Memory, then copy data from Shared Memory into parent process, destroy Shared Memory)
What else could I use? Apart from the obvious "go multi-thread", I really would like to leverage the CoW and single-thread multi-process architecture for this proof of concept. Any ideas?
Thanks
After some experimenting the conclusion I got to is the following:
When a child process has to communicate with the parent, before spawning that child process I create a segment of shared memory (e.g. 16 MB)
If coordination is needed, a semaphore is created in the shared memory segment
Then upon forking, I use pipe2 with non-blocking descriptors so the child can notify the parent when some data is available
The pipe fd is then used with epoll
epoll is used as Level Triggered so I can interleave requests if the child processes are really fast in sending data
The segment of shared memory is used to communicate data directly if the structures are POD, or with simple template<...> binary read/write functions if they are not (a sketch of the notification part follows below)
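A minimal sketch of the notification part, assuming Linux (pipe2, epoll); real code would need error handling and an actual protocol inside the shared segment:

#define _GNU_SOURCE            /* for pipe2() */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    int fds[2];
    pipe2(fds, O_NONBLOCK);                /* fds[0] = read end, fds[1] = write end */

    if (fork() == 0) {                     /* child: produce a result, then notify */
        strcpy(shm, "result from child");
        char byte = 1;
        write(fds[1], &byte, 1);
        _exit(0);
    }

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[0] };
    epoll_ctl(ep, EPOLL_CTL_ADD, fds[0], &ev);

    struct epoll_event out;
    epoll_wait(ep, &out, 1, -1);           /* level-triggered wait for the notification */
    char byte;
    read(out.data.fd, &byte, 1);
    printf("parent sees: %s\n", shm);      /* the payload comes from shared memory */
    wait(NULL);
    return 0;
}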
I believe this is a good solution.
Cheers
You could also use a regular file.
The parent process could wait for the child process to exit (after the child has analysed the data in memory and written its result to the file), and once it does, the parent can read the data from the file. As you mentioned, input parameters are not a problem: you could just pass the file name to write to as one of the input parameters. This way, no locking is required, except for the wait() on the exit status of the child process.
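A minimal sketch of that approach (the file name and the "result" are placeholders):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *path = "result.txt";     /* placeholder: would come from an input parameter */
    if (fork() == 0) {
        FILE *f = fopen(path, "w");
        fprintf(f, "42\n");              /* the child's (placeholder) analysis result */
        fclose(f);
        _exit(0);
    }
    wait(NULL);                          /* the only synchronization needed */
    FILE *f = fopen(path, "r");
    int result;
    fscanf(f, "%d", &result);
    fclose(f);
    printf("child produced %d\n", result);
    return 0;
}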
If each of your child processes returns tens of GB of data, regular files are a much better fit, as you will have enough time to process each child process's result. But is this 10 GB of data shared across the child processes? If that were the case you would have preferred to use locks, so I assume it isn't.

When is clone() and fork better than pthreads?

I am beginner in this area.
I have studied fork(), vfork(), clone() and pthreads.
I have noticed that pthread_create() will create a thread, which has less overhead than creating a new process with fork(). Additionally the thread will share file descriptors, memory, etc. with the parent process.
But when are fork() and clone() better than pthreads? Can you please explain it to me by giving a real-world example?
Thanks in Advance.
clone(2) is a Linux-specific syscall mostly used to implement threads (in particular, it is used for pthread_create). With various arguments, clone can also have fork(2)-like behavior. Very few people use clone directly; using the pthread library is more portable. You probably need to call the clone(2) syscall directly only if you are implementing your own thread library - a competitor to POSIX threads - and this is very tricky (in particular because locking may require using the futex(2) syscall in machine-tuned assembly-coded routines, see futex(7)). You don't want to use clone or futex directly because pthreads are much simpler to use.
(The other pthread functions require some book-keeping to be done internally in libpthread.so after a clone during a pthread_create)
As Jonathon answered, processes have their own address space and file descriptor set. And a process can execute a new executable program with the execve syscall, which basically initializes the address space, the stack and the registers for starting a new program (but the file descriptors may be kept, unless they use the close-on-exec flag, e.g. through O_CLOEXEC for open).
On Unix-like systems, all processes (except the very first process, usually init, of pid 1) are created by fork (or variants like vfork; you could, but don't want to, use clone in such a way that it behaves like fork).
(technically, on Linux, there are a few weird exceptions which you can ignore, notably kernel processes or threads and some rare kernel-initiated starting of processes like /sbin/hotplug ....)
The fork and execve syscalls are central to Unix process creation (with waitpid and related syscalls).
A multi-threaded process has several threads (usually created by pthread_create) all sharing the same address space and file descriptors. You use threads when you want to work in parallel on the same data within the same address space, but then you should care about synchronization and locking. Read a pthread tutorial for more.
I suggest you read a good Unix programming book like Advanced Unix Programming and/or the (freely available) Advanced Linux Programming.
The strength and weakness of fork (and company) is that they create a new process that's a clone of the existing process.
This is a weakness because, as you pointed out, creating a new process has a fair amount of overhead. It also means communication between the processes has to be done via some "approved" channel (pipes, sockets, files, shared-memory region, etc.)
This is a strength because it provides (much) greater isolation between the parent and the child. If, for example, a child process crashes, you can kill it and start another fairly easily. By contrast, if a child thread dies, killing it is problematic at best -- it's impossible to be certain what resources that thread held exclusively, so you can't clean up after it. Likewise, since all the threads in a process share a common address space, one thread that ran into a problem could overwrite data being used by all the other threads, so just killing that one thread wouldn't necessarily be enough to clean up the mess.
In other words, using threads is a little bit of a gamble. As long as your code is all clean, you can gain some efficiency by using multiple threads in a single process. Using multiple processes adds a bit of overhead, but can make your code quite a bit more robust, because it limits the damage a single problem can cause, and makes it much easier to shut down and replace a process if it does run into a major problem.
As far as concrete examples go, Apache might be a pretty good one. It will use multiple threads per process, but to limit the damage in case of problems (among other things), it limits the number of threads per process, and can/will spawn several separate processes running concurrently as well. On a decent server you might have, for example, 8 processes with 8 threads each. The large number of threads helps it service a large number of clients in a mostly I/O-bound task, and breaking it up into processes means if a problem does arise, it doesn't suddenly become completely unresponsive, and it can shut down and restart a process without losing a lot.
These are totally different things. fork() creates a new process. pthread_create() creates a new thread, which runs under the context of the same process.
Threads share the same virtual address space, memory (for good or for bad), set of open file descriptors, among other things.
Processes are (essentially) totally separate from each other and cannot modify each other.
You should read this question:
What is the difference between a process and a thread?
As for an example, if I am your shell (e.g. bash), when you enter a command like ls, I am going to fork() a new process, and then exec() the ls executable. (And then I wait() on the child process, but that's getting out of scope.) This happens in an entirely different address space, and if ls blows up, I don't care, because I am still executing in my own process.
On the other hand, say I am a math program, and I have been asked to multiply two 100x100 matrices. We know that matrix multiplication is an Embarrassingly Parallel problem. So, I have the matrices in memory. I spawn N threads, each of which operates on the same source matrices, putting its results in the appropriate location in the result matrix. Remember, these operate in the context of the same process, so I need to make sure they are not stamping on each other's data. If N is 8 and I have an eight-core CPU, I can effectively calculate each part of the matrix simultaneously.
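A minimal sketch of that idea with pthreads (the dimension and thread count are just placeholders; each thread writes only its own rows of the result, so no locking is needed):

#include <pthread.h>
#include <stdio.h>

#define DIM 100
#define NTHREADS 4

static double A[DIM][DIM], B[DIM][DIM], C[DIM][DIM];

static void *worker(void *arg) {
    long id = (long)arg;
    int first = id * DIM / NTHREADS, last = (id + 1) * DIM / NTHREADS;
    for (int i = first; i < last; i++)      /* this thread owns rows [first, last) */
        for (int j = 0; j < DIM; j++) {
            double sum = 0.0;
            for (int k = 0; k < DIM; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);
    printf("C[0][0] = %f\n", C[0][0]);
    return 0;
}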
The process creation mechanism on Unix using fork() (and family) is very efficient.
Moreover, most Unix systems do not support kernel-level threads, i.e. a thread is not an entity recognized by the kernel. Hence a thread on such a system cannot benefit from CPU scheduling at the kernel level; the pthread library does that scheduling itself, within the process, not in the kernel.
Also, on such systems pthreads are implemented using vfork() and as lightweight processes only.
So using threading has no point except portability on such a system.
As per my understanding, Sun Solaris and Windows have kernel-level threads and the Linux family doesn't support kernel threads.
With processes, pipes and Unix domain sockets are very efficient IPC without synchronization issues.
I hope this clears up why and when threads should be used practically.

Thread and Process

What is the best definition of a thread and what is a process?
If I call a function, how do I know that a thread is calling it or a process (or am I not understanding it??!). This is in a multi-core system (quadcore).
From http://wiki.answers.com/Q/What_is_the_difference_between_a_computer_process_and_thread:
A single process can have multiple threads that share global data and address space with other threads running in the same process, and therefore can operate on the same data set easily. Processes do not share address space and a different mechanism must be used if they are to share data.
If we consider running a word processing program to be a process, then the auto-save and spell check features that occur in the background are different threads of that process which are all operating on the same data set (your document).
One thing to add is how a multi-core processor handles this. Think of a thread as the sequential execution of your code.
A core in a CPU can only execute one thread at a time. So if this thread is blocked because the program is waiting for an I/O operation to finish, the process is blocked (very simplified example: Word not responding). Multi-threading allows us to execute multiple code paths at the same time. "Same time" is a bit of a lie, since only one thread can actually execute at a time in a core, but the CPU gives some small chunk of time to each thread, so it appears as if all these threads are executing at the same time. A good example here is the spell checker in Word.
If you have multiple cores, the only difference is that in an N-core CPU you can have N threads executing at the same time. To simplify a lot, it doesn't matter what process the threads belong to. To simplify even further, you'd expect an N times performance increase. :-D
In every modern OS I know of, everything runs in a thread, which runs in a process.
The OS can keep track of multiple processes, and each process can host an arbitrary number of threads. So all code is executed within a thread and within a process (since the thread runs in a process).
The main distinction between the two is that each process has its own virtual address space. Separate processes do not have access to each others' data, file handles or anything else, and are essentially not aware that other processes exist.
On the other hand, every thread in a process shares the same address space, and all threads can therefore inspect or modify each others' data, call the same functions and everything else.
It is often (but not always) the case that one program consists of one process and a number of threads.
A process is composed of one or more threads (one by default for most environments). A process can create additional threads though.
Like the previous answer says, each Process has its own memory space (each can have a pointer to 0x12345, with that memory location having different values for each process), while all the Threads of a process would actually point to the exact same memory location, since they're all in the same memory space.
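A minimal sketch illustrating the process side of that point: after fork() the child writes to its own copy of the variable, so the parent's value is unchanged (with threads, both would see the same value).

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 5;

int main(void) {
    if (fork() == 0) {
        value = 99;                              /* modifies only the child's copy */
        printf("child:  value = %d\n", value);   /* 99 */
        _exit(0);
    }
    wait(NULL);
    printf("parent: value = %d\n", value);       /* still 5 */
    return 0;
}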
When calling a function, it's almost always called on the same thread that the caller is running on. In Objective-C, there are exceptions (performSelectorOnMainThread), and there might be for other languages as well, but that sort of functionality is necessary only in special cases.
From a user's point of view, the main distinction is that threads share memory with each other, while processes do not. That means you can easily share data between threads, while processes require some kind of OS call to do so.
Some call this a benefit of threads, but sharing data between multiple threads of control is fraught with danger, so it can be argued that processes lead to more reliable code.
There's a lot more to it, particularly if you are an OS person.
