Cause a Linux poll event on a shared memory file - linux

Two Linux processes open and mmap the same /dev/shm/ shared memory file and use it as common memory. Question: what is the simplest and best way for one process to "wake up" the other process to notify that it should look in the memory?
For example, can one process cause a poll() event for the other process's file descriptor?
The solution doesn't need to be portable but I would like it to be simple.

That's why POSIX has condition variables.
Define a POSIX condition variable and its associated mutex in the shared memory region, and initialize both with the PTHREAD_PROCESS_SHARED attribute so they work across processes.
Then have one process wait on the condition variable and the other signal it when it wants the waiting process to look at the memory.
There's a lot of material on the web on condition variables.
Here is one pretty good short one: https://computing.llnl.gov/tutorials/pthreads/#ConditionVariables
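A minimal sketch of that setup; the shared-memory name /demo_shm and the struct layout are made up for illustration, fork() stands in for the two unrelated processes of the question, and most error handling is omitted:

    /*
     * Process-shared condition variable in a shared memory file.
     * Compile with: cc demo.c -lpthread -lrt
     */
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct shared_area {
        pthread_mutex_t mutex;
        pthread_cond_t  cond;
        int             data_ready;    /* predicate protected by the mutex */
    };

    int main(void)
    {
        /* Create and map the shared memory file (appears under /dev/shm). */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(struct shared_area));
        struct shared_area *a = mmap(NULL, sizeof *a, PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, 0);

        /* The PROCESS_SHARED attribute is what makes this work across processes. */
        pthread_mutexattr_t ma;
        pthread_condattr_t  ca;
        pthread_mutexattr_init(&ma);
        pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
        pthread_condattr_init(&ca);
        pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&a->mutex, &ma);
        pthread_cond_init(&a->cond, &ca);
        a->data_ready = 0;

        if (fork() == 0) {                     /* waiting process */
            pthread_mutex_lock(&a->mutex);
            while (!a->data_ready)             /* guards against spurious wakeups */
                pthread_cond_wait(&a->cond, &a->mutex);
            pthread_mutex_unlock(&a->mutex);
            printf("woken up: go look at the shared memory\n");
            return 0;
        }

        sleep(1);                              /* notifying process */
        pthread_mutex_lock(&a->mutex);
        a->data_ready = 1;                     /* update the shared data first ... */
        pthread_cond_signal(&a->cond);         /* ... then wake the other side */
        pthread_mutex_unlock(&a->mutex);

        wait(NULL);
        shm_unlink("/demo_shm");
        return 0;
    }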

You may also consider using a semaphore (a POSIX named semaphore) to solve this.
One simple example using shared memory (System V in the example, but you can use POSIX shared memory too) together with a POSIX semaphore is in this link:
How can 2 processes talk to each other without pipe()?
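For completeness, a minimal sketch of the named-semaphore approach; the semaphore name /demo_sem is made up, and in real code the two sides would be two separate programs opening the same name rather than a fork():

    /* Compile with: cc demo.c -lpthread -lrt */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Both processes open the same named semaphore; initial count 0. */
        sem_t *sem = sem_open("/demo_sem", O_CREAT, 0600, 0);
        if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

        if (fork() == 0) {                 /* "reader" process */
            sem_wait(sem);                 /* sleeps until the writer posts */
            printf("woken up: go look at the shared memory\n");
            return 0;
        }

        /* "writer" process: update the shared memory here, then wake the reader. */
        sleep(1);
        sem_post(sem);

        wait(NULL);
        sem_unlink("/demo_sem");           /* remove the name when done */
        return 0;
    }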

Related

How to avoid shared memory leaks

I'm using shared memory between 2 processes on SUSE Linux and I'm wondering how I can avoid shared memory leaks in case one or both processes crash. Does a leak occur in this case? If yes, how can I avoid it?
You could allocate space for two counters in the shared memory region: one for each process. Every few seconds, each process increments its counter, and checks that the other counter has been incremented as well. That makes it easy for these two processes, or an external watchdog, to tear down the shared memory if somebody crashes or exits.
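A rough sketch of that heartbeat idea, assuming the counters live somewhere in the existing mmap()ed region (the struct and function names here are only illustrative):

    #include <stdatomic.h>

    struct heartbeat {
        atomic_ulong counter[2];     /* one slot per process */
    };

    /* Each process calls this every few seconds with its own index (0 or 1). */
    static int check_peer(struct heartbeat *hb, int my_idx)
    {
        static unsigned long last_seen;                  /* static only for brevity */
        atomic_fetch_add(&hb->counter[my_idx], 1);       /* "I'm still alive" */

        unsigned long peer = atomic_load(&hb->counter[1 - my_idx]);
        int peer_alive = (peer != last_seen);            /* did the peer make progress? */
        last_seen = peer;
        return peer_alive;   /* if 0 for several rounds, tear down the shm file */
    }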
If the subprocess is a simple fork() from the parent process, then mmap() with MAP_SHARED should work.
If the subprocess does an exec() to start a different executable, you can often pass file descriptors from shm_open() or a similar non-portable system call (see Is there anything like shm_open() without filename?). On many operating systems, including Linux, you can shm_unlink() the name right after shm_open() so the object doesn't leak when your processes die, and use fcntl() to clear the close-on-exec flag on the shm file descriptor so that your child process can inherit it across exec. This is not well defined in the POSIX standard but it appears to be very portable in practice.
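A sketch of that unlink-immediately-and-inherit-the-fd approach; the name /demo_shm and the child program path are placeholders, and error handling is omitted:

    /* Compile with: cc parent.c -lrt */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = shm_open("/demo_shm", O_CREAT | O_EXCL | O_RDWR, 0600);
        shm_unlink("/demo_shm");              /* no name left to leak on a crash */
        ftruncate(fd, 4096);

        /* shm_open() sets FD_CLOEXEC by default; clear it so the fd survives exec. */
        int flags = fcntl(fd, F_GETFD);
        fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC);

        if (fork() == 0) {
            char buf[16];
            snprintf(buf, sizeof buf, "%d", fd);   /* pass the fd number to the child */
            execl("./child", "child", buf, (char *)NULL);
            _exit(127);
        }

        /* The parent mmap()s fd as usual; the child does the same after
         * converting its argv[1] back to an int with strtol(). */
        return 0;
    }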
If you need to use a filename instead of just a file descriptor number to pass the shared memory object to an unrelated process, then you have to figure out some way to shm_unlink() the file yourself when it's no longer needed; see John Zwinck's answer for one method.

When is clone() and fork better than pthreads?

I am a beginner in this area.
I have studied fork(), vfork(), clone() and pthreads.
I have noticed that pthread_create() will create a thread, which has less overhead than creating a new process with fork(). Additionally the thread will share file descriptors, memory, etc. with the parent process.
But when is fork() and clone() better than pthreads? Can you please explain it to me by giving real world example?
Thanks in advance.
clone(2) is a Linux-specific syscall mostly used to implement threads (in particular, it is used for pthread_create). With various arguments, clone can also have fork(2)-like behavior. Very few people use clone directly; using the pthread library is more portable. You probably need to call the clone(2) syscall directly only if you are implementing your own thread library - a competitor to POSIX threads - and this is very tricky (in particular because locking may require using the futex(2) syscall in machine-tuned assembly-coded routines, see futex(7)). You don't want to use clone or futex directly because pthreads are much simpler to use.
(The other pthread functions require some book-keeping to be done internally in libpthread.so after a clone during a pthread_create)
As Jonathon answered, processes have their own address space and file descriptor set. And a process can execute a new executable program with the execve syscall, which basically initializes the address space, the stack and the registers for starting a new program (but the file descriptors may be kept, unless the close-on-exec flag is set, e.g. through O_CLOEXEC for open).
On Unix-like systems, all processes (except the very first process, usually init, of pid 1) are created by fork (or variants like vfork; you could, but don't want to, use clone in such a way that it behaves like fork).
(Technically, on Linux, there are a few weird exceptions which you can ignore, notably kernel processes or threads and some rare kernel-initiated starting of processes like /sbin/hotplug ...)
The fork and execve syscalls are central to Unix process creation (with waitpid and related syscalls).
A multi-threaded process has several threads (usually created by pthread_create) all sharing the same address space and file descriptors. You use threads when you want to work in parallel on the same data within the same address space, but then you should care about synchronization and locking. Read a pthread tutorial for more.
I suggest you read a good Unix programming book like Advanced Unix Programming and/or the (freely available) Advanced Linux Programming.
The strength and weakness of fork (and company) is that they create a new process that's a clone of the existing process.
This is a weakness because, as you pointed out, creating a new process has a fair amount of overhead. It also means communication between the processes has to be done via some "approved" channel (pipes, sockets, files, shared-memory region, etc.)
This is a strength because it provides (much) greater isolation between the parent and the child. If, for example, a child process crashes, you can kill it and start another fairly easily. By contrast, if a child thread dies, killing it is problematic at best -- it's impossible to be certain what resources that thread held exclusively, so you can't clean up after it. Likewise, since all the threads in a process share a common address space, one thread that ran into a problem could overwrite data being used by all the other threads, so just killing that one thread wouldn't necessarily be enough to clean up the mess.
In other words, using threads is a little bit of a gamble. As long as your code is all clean, you can gain some efficiency by using multiple threads in a single process. Using multiple processes adds a bit of overhead, but can make your code quite a bit more robust, because it limits the damage a single problem can cause, and makes it much easier to shut down and replace a process if it does run into a major problem.
As far as concrete examples go, Apache might be a pretty good one. It will use multiple threads per process, but to limit the damage in case of problems (among other things), it limits the number of threads per process, and can/will spawn several separate processes running concurrently as well. On a decent server you might have, for example, 8 processes with 8 threads each. The large number of threads helps it service a large number of clients in a mostly I/O-bound task, and breaking it up into processes means that if a problem does arise, the server doesn't suddenly become completely unresponsive, and it can shut down and restart a process without losing much.
These are totally different things. fork() creates a new process. pthread_create() creates a new thread, which runs under the context of the same process.
Threads share the same virtual address space, memory (for good or for bad), set of open file descriptors, among other things.
Processes are (essentially) totally separate from each other and cannot modify each other.
You should read this question:
What is the difference between a process and a thread?
As for an example, if I am your shell (e.g. bash), when you enter a command like ls, I am going to fork() a new process, and then exec() the ls executable. (And then I wait() on the child process, but that's getting out of scope.) This happens in an entirely different address space, and if ls blows up, I don't care, because I am still executing in my own process.
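A rough sketch of that fork/exec/wait sequence in C (paths and error handling kept minimal for illustration):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: replace this process image with /bin/ls. */
            execl("/bin/ls", "ls", (char *)NULL);
            perror("execl");              /* only reached if exec failed */
            _exit(127);
        }
        /* Parent (the "shell"): keep running and wait for the child. */
        int status;
        waitpid(pid, &status, 0);
        printf("ls exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }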
On the other hand, say I am a math program, and I have been asked to multiply two 100x100 matrices. We know that matrix multiplication is an embarrassingly parallel problem. So, I have the matrices in memory. I spawn N threads, which each operate on the same source matrices, putting their results in the appropriate location in the result matrix. Remember, these operate in the context of the same process, so I need to make sure they are not stamping on each other's data. If N is 8 and I have an eight-core CPU, I can effectively calculate each part of the matrix simultaneously.
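A sketch of that idea with pthreads, where each thread fills its own band of rows so the writes need no locking (sizes and thread count are illustrative):

    /* Compile with: cc matmul.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    #define DIM 100
    #define NTHREADS 8

    static double a[DIM][DIM], b[DIM][DIM], c[DIM][DIM];  /* shared by all threads */

    static void *worker(void *arg)
    {
        long id = (long)arg;
        /* Each thread owns rows [lo, hi): no two threads write the same cell. */
        int lo = id * DIM / NTHREADS, hi = (id + 1) * DIM / NTHREADS;
        for (int i = lo; i < hi; i++)
            for (int j = 0; j < DIM; j++) {
                double sum = 0.0;
                for (int k = 0; k < DIM; k++)
                    sum += a[i][k] * b[k][j];
                c[i][j] = sum;
            }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, worker, (void *)t);
        for (int t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);
        printf("c[0][0] = %f\n", c[0][0]);
        return 0;
    }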
The process creation mechanism on Unix using fork() (and family) is very efficient.
Moreover, most Unix systems do not support kernel-level threads, i.e. a thread is not an entity recognized by the kernel. Hence threads on such systems cannot benefit from CPU scheduling at the kernel level; the pthread library does the scheduling itself, in user space rather than in the kernel.
Also, on such systems pthreads are implemented with vfork(), as lightweight processes only.
So using threads has no point except portability on such systems.
As per my understanding, Sun Solaris and Windows have kernel-level threads, and the Linux family doesn't support kernel threads.
With processes, pipes and Unix domain sockets are very efficient IPC without synchronization issues.
I hope this clarifies why and when threads should be used in practice.

Thread control block in linux

What is the structure used for saving thread state like PC, SP and registers during thread context switch in linux? The equivalent of TCB in freebsd. If possible please point to the source file here.
Note that PCB itself is not enough, as we have PC, SP etc. per thread not per process.
It's actually task_struct. In Linux, a task can be a thread, a process, or something in between. A thread is just the name you give to a task that shares most things (VMAs, file descriptors, etc.) with other tasks.
This is much in line with the idea that a thread is just a particular kind of process, and can be handled via the same functions, etc. Plan 9's rfork() and Linux's clone() allow you to create a process with a customizable level of sharing, so you end up using the same machinery to create processes and threads.
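For orientation, a heavily simplified, illustrative excerpt of the fields involved (not compilable on its own); the real definition lives in include/linux/sched.h and the exact fields vary by kernel version and architecture:

    struct task_struct {
        /* ... */
        pid_t pid;                    /* unique id of this task (thread) */
        pid_t tgid;                   /* thread-group id == the "process" PID */
        struct mm_struct *mm;         /* address space, shared between threads */
        void *stack;                  /* this task's kernel stack */
        /* ... */
        struct thread_struct thread;  /* arch-specific CPU state saved across a
                                         context switch (SP etc.); the user-mode
                                         register set lives in struct pt_regs on
                                         the kernel stack */
    };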
Perhaps you want setcontext and friends (but your code won't be very portable, and it is tricky to get right)?
Or are you talking from inside the kernel? Then perhaps task_struct could be what you're looking for?

Write to file from many threads

I have a problem.
My program creates a number of processes in the system (Windows).
They all must write some data to ONE file on disk.
So, I need to synchronize them... but I don't know how.
More exactly: my program calls SetWindowsHook and injects one DLL into many processes, and they all need to write some data to one file.
The synchronisation object that works across processes is a mutex.
Windows has a lock for each file, so if one process is writing to a file Windows won't let another write. A mutex is what you want; protect the code where you are writing to the file with one.
Single System: As David mentioned, you can use a mutex to accomplish this task. In Windows, this is done by using named mutexes and (if you want) named semaphores.
The function CreateMutex can be used to both create the mutex (by the first process) and open it by the other processes. Supply the same lpName value in all processes. Use WaitForSingleObject to gain ownership of the mutex. And use ReleaseMutex to give up ownership.
An example of creating a named mutex can be found here.
If you use a named semaphore, you can accomplish the same thing by giving the semaphore an initial count of 1. Use CreateSemaphore to create and open it, WaitForSingleObject to gain ownership (same as with a mutex) and ReleaseSemaphore to give up ownership.
Multiple Systems: The above approach assumes that the processes are all running on the same system. If the processes are running on different systems, then you may need to use the idea mentioned by DVD. You can lock portions of a file and use that as the synchronization method. For example, you could, by convention, lock 1 byte at some offset (it can even be past the end of the file) as a type of semaphore. Using this mechanism, though, may mean you need to implement some kind of efficient wait depending on the functions you use. If you use CreateFile and LockFileEx, you can have the function do a blocked wait by not specifying LOCKFILE_FAIL_IMMEDIATELY in the call.
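A minimal sketch of the single-system named-mutex approach in C/Win32; the mutex name and log path below are placeholders:

    #include <windows.h>
    #include <stdio.h>

    void write_log_line(const char *line)
    {
        /* Every process creates/opens the same named mutex. */
        HANDLE mtx = CreateMutexA(NULL, FALSE, "MyLogFileMutex");
        if (mtx == NULL) return;

        WaitForSingleObject(mtx, INFINITE);       /* enter the critical section */

        FILE *f = fopen("C:\\temp\\shared.log", "a");
        if (f) {
            fputs(line, f);
            fclose(f);                            /* flush before releasing */
        }

        ReleaseMutex(mtx);                        /* let the next process in */
        CloseHandle(mtx);
    }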
The answer to your problem is to implement Thread synchronization.
If you are using C#, you can put a lock{} statement over your file writing code.
For other languages, you must use a Monitor or Mutex class to synchronize.
Use Stream.Synchronized()
See http://msdn.microsoft.com/en-us/library/system.io.stream.synchronized.aspx. This method only works for C# though
I recently had to do almost this exact thing (for logging to a single file from a dll injected into multiple processes).
Use _fsopen() to open the file in each process and then use a mutex for synchronization to ensure that only one process at a time is ever writing to the file.

Kernel thread exit in linux

I'm here to ask you the difference between a process and a thread in Linux. I know that a thread in Linux is just a "task", which shares with the parent process the things they need to have in common (the address space and other important information). I also know that the two are created by calling the same function (clone()), but there's still something I'm missing: what really happens when a thread exits? What function is called inside the Linux kernel?
I know that when a process exits it calls the do_exit function, but here or somewhere else there should be a way to tell whether it is just a thread exiting or a whole process. Can you explain this to me or point me to some textbook? I tried 'Understanding the Linux Kernel' but I was not satisfied with it.
I'm asking because I need to add things to the task_struct structure, but I need to discriminate how to manage that information for a process and its children.
Thank you.
The exit() syscall exits a single thread, and the exit_group() syscall exits the entire POSIX process ("thread group").
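A small Linux-specific sketch to observe the difference; calling the raw syscalls like this bypasses glibc's normal thread cleanup and is for illustration only:

    /* Compile with: cc exitdemo.c -lpthread */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        (void)arg;
        syscall(SYS_exit, 0);      /* terminates only this thread ... */
        return NULL;               /* ... never reached */
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        sleep(1);
        /* Still alive here, because only the worker thread exited. */
        printf("main thread still running\n");
        syscall(SYS_exit_group, 0);    /* now the whole thread group exits */
    }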
The main difference between processes and threads is that processes run in their own virtual memory space, apart from every other process. That means two processes cannot access each other's data. The only way for two processes to interact is through the operating system somehow (shared memory sections, semaphores, sockets, etc.).
Threads on the other hand all exist within their creating process. That means threads have access to all the same data (variables, pointers, handles, etc.) that any other thread in the same process has. That is the main difference.
There are some implications of this. For instance, when the process terminates for some reason, all its threads go with it. It is also a lot easier to get multi-processing errors like torn data with threads, just because nothing is forcing you to use the OS synchronization functions that you really ought to be using.