Linux fork, execve, no wait: zombies?

In Linux & C, will not waiting (waitpid) for a fork-execve launched process create zombies?
What is the correct way to launch a new program (many times) without waiting and without resource leaks?
It would also be launched from a 2nd worker thread.
Can the first program terminate first cleanly if launched programs have not completed?
Additional: In my case I have several threads that can fork-execve processes at ANY TIME and THE SAME TIME -
1) Some I need to wait on for completion, and I want to report any error codes with waitpid
2) Some I do not want to block the thread on, but I would still like to report errors
3) Some I don't want to wait for and don't care about the outcome; they could keep running after my program terminates
For #2, do I have to create an additional thread to do the waitpid?
For #3, should I do a fork-fork-execve, and would exiting the first fork cause the second process to get cleaned up (no zombie) separately via init?
Additional: I've read briefly (not sure I understand it all) about using nohup, double fork, setpgid(0,0), and signal(SIGCHLD, SIG_IGN).
Doesn't a global signal(SIGCHLD, SIG_IGN) have too many side effects, like being inherited (or maybe not) and preventing you from monitoring other processes you do want to wait for?
Wouldn't relying on init for cleanup leak resources while the program continues to run (weeks, in my case)?

In Linux & C, will not waiting (waitpid) for a fork-execve launched process create zombies?
Yes, they become zombies after death.
What is the correct way to launch a new program (many times) without waiting and without resource leaks? It would also be launched from a 2nd worker thread.
Set the SIGCHLD disposition to SIG_IGN.
Can the first program terminate first cleanly if launched programs have not completed?
Yes, orphaned processes will be adopted by init.
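For illustration, a minimal C sketch of that approach; the sigaction/SA_NOCLDWAIT form below is the explicit POSIX spelling of the same "never wait" request (a plain signal(SIGCHLD, SIG_IGN) before forking has the same effect on Linux):

#include <signal.h>
#include <unistd.h>

int main(void)
{
    /* Ask the kernel to reap children automatically: no zombies,
       but also no exit status for us to collect. */
    struct sigaction sa;
    sa.sa_handler = SIG_IGN;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_NOCLDWAIT;
    sigaction(SIGCHLD, &sa, NULL);

    pid_t pid = fork();
    if (pid == 0) {
        execlp("sleep", "sleep", "5", (char *)NULL);
        _exit(127);               /* reached only if execlp failed */
    }
    return 0;                     /* parent may exit first; child goes to init */
}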

I ended up keeping an array of just the fork-exec'd pids I did not wait for (the other fork-exec'd pids do get waited on) and periodically scanned the list using
waitpid(pids[xx], &status, WNOHANG) > 0
(with WNOHANG, waitpid returns the child's pid once it has exited, 0 while it is still running, and -1 on error), which gives me a chance to report the outcome and avoid zombies.
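In code, that scan looks roughly like this (pids[], npids, and the error report are my names and choices, not from the original program):

#include <stdio.h>
#include <sys/wait.h>

/* pids[] holds the children we chose not to block on; 0 marks a free slot */
void scan_pids(pid_t pids[], int npids)
{
    for (int i = 0; i < npids; i++) {
        if (pids[i] == 0)
            continue;
        int status;
        pid_t r = waitpid(pids[i], &status, WNOHANG);
        if (r == pids[i]) {           /* child exited: it is reaped now */
            if (WIFEXITED(status) && WEXITSTATUS(status) != 0)
                fprintf(stderr, "pid %d exited with %d\n",
                        (int)r, WEXITSTATUS(status));
            pids[i] = 0;              /* free the slot */
        }
        /* r == 0: still running; r == -1: gone or never ours */
    }
}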
I avoided using global things like signal handlers that might affect other code elsewhere.
It seemed a bit messy.
I suppose that fork-fork-exec would be an alternative, letting the first fork asynchronously monitor the other program's completion, but then the first fork itself needs cleanup.
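For reference, a sketch of that fork-fork-exec pattern in C (spawn_detached is my name): the intermediate child exits immediately, so the grandchild is adopted and later reaped by init, and the only waitpid needed is the instant one on the intermediate child:

#include <sys/wait.h>
#include <unistd.h>

void spawn_detached(char *const argv[])
{
    pid_t mid = fork();
    if (mid == 0) {                   /* intermediate child */
        if (fork() == 0) {            /* grandchild: the real program */
            execvp(argv[0], argv);
            _exit(127);               /* exec failed */
        }
        _exit(0);                     /* exit at once; the grandchild is
                                         orphaned and adopted by init */
    }
    waitpid(mid, NULL, 0);            /* returns almost immediately; no zombie */
}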
In Windows, you just keep a handle to the process open if you want to check status without worry of pid reuse, or close the handle if you don't care what the other process does.
(In Linux, there seems to be no way for multiple threads or processes to safely monitor the status of the same process; only the parent process/thread can. But that's not my issue here.)

Related

Understanding process pools: how does a process pool use wait() to reap child processes?

If a process pool is created with 10 processes but my program only uses 4 of them, that means there are 6 idle processes.
To use a process pool, the pseudocode is generally like:
pool = create_process_pool(M)
for i in 1:N:
    pool.run(task i)
pool.wait()
pool.close()
How does the pool decide when pool.wait() can return?
There are some cases:
1) If M > N, for example M=10 and N=6, there are 4 idle processes. The 6 used processes can inform pool.wait() when they finish running and exit, but the 4 idle processes never ran, so how can they inform pool.wait() that they are finished?
2) If M < N, a process that finishes a task and exits may be reused for another task. So how can this process know that it will get no more tasks, so that it can inform pool.wait()?
Can anyone explain a bit how a process pool works in this regard?
Thanks!
You could implement a process pool (e.g. in C++) with:
some Process class (in particular, knowing the pid of each fork-ed process); it would have some empty instance, whose pid would be 0;
some global array of Process-es;
a Command class representing a command to be started (when possible) in the process pool;
a std::deque<Command> of commands; when possible, a Command would fire some Process;
an event loop taking account of SIGCHLD: when a SIGCHLD occurs, you would waitpid with WNOHANG, get the pid of the ended Process, find the actual Process instance, and do whatever is needed; that event loop would probably also pop Command-s to run (and so start non-idle Process-es), manage pipes, etc.
Then idle processes would just be represented by a Process slot with a zero pid; there is no need to fork them explicitly, so they won't be Unix processes at all, just some internal representation in the process-pool software.
My point is that a process pool mechanism doesn't (necessarily) have to start idle processes (with the fork system call). It could maintain a pool of process descriptors and mark the idle slots specially. Such a descriptor could actually be a pid_t, with empty slots holding (pid_t)0, which is never the pid of any real Unix process. So there is no need to create processes in advance (only lazily, as necessary), and hence no need for idle processes.
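As a rough C sketch of that descriptor-pool idea (names like slot, start_task and reap are mine; the SIGCHLD wiring of the event loop is omitted):

#include <sys/wait.h>
#include <unistd.h>

#define M 10
static pid_t slot[M];                /* 0 = idle slot: no Unix process exists */

/* start a task in the first idle slot, if any */
int start_task(char *const argv[])
{
    for (int i = 0; i < M; i++) {
        if (slot[i] == 0) {
            pid_t pid = fork();
            if (pid == 0) { execvp(argv[0], argv); _exit(127); }
            slot[i] = pid;
            return i;
        }
    }
    return -1;                       /* pool full: queue the command instead */
}

/* called from the event loop when a SIGCHLD has been seen */
void reap(void)
{
    pid_t pid;
    while ((pid = waitpid(-1, NULL, WNOHANG)) > 0)
        for (int i = 0; i < M; i++)
            if (slot[i] == pid)
                slot[i] = 0;         /* the slot is idle again */
}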
I strongly suggest taking some hours to read Advanced Linux Programming; it will teach you better than I could in a few minutes.
As an example, look at the Unix (or GNU) batch (and at) commands. They do not use any idle processes, yet they do manage a pool of process queues. They are free software, so you can study (and improve) their source code.

Quitting the main loop while a thread may still be running

Hi all, I have a problem that has been bothering me.
Sometimes when I exit my program, some threads are still running; on Linux this causes a crash after I quit the main loop. Is there a method that can kill all threads when I quit the main loop?
It would help a lot if you specified your programming language and threading library of choice.
The usual way to control this type of situation (that is for a parent thread to wait until children complete before terminating) is to call a function supplied by the library, usually named join or wait.
pthread supplies you with pthread_join, for example.
If you're spawning processes via fork, you should use wait or waitpid in the parent to halt until the child completes - try man waitpid or take a look at this.
This way you can inform your children via the usual means that you are about to exit, wait until they wrap up and terminate, and then cleanly exit the main loop.
Does this help? This is the least brutal way of synchronizing termination; if you want to actively kill the child threads there are alternatives, of course (like pthread_kill for pthreads).
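A minimal pthreads sketch of that pattern in C (the done flag and worker function are illustrative, not from any library):

#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

static atomic_int done;              /* set by main to ask the worker to stop */

static void *worker(void *arg)
{
    (void)arg;
    while (!atomic_load(&done))
        usleep(1000);                /* ... one unit of work ... */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    /* ... main loop runs here ... */
    atomic_store(&done, 1);          /* ask the worker to wrap up */
    pthread_join(tid, NULL);         /* block until it really has finished */
    return 0;                        /* only now is it safe to exit */
}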
If you are using Java, try jconsole (the Java Monitoring & Management Console; shipped with jdk6u23 in my case). It can show you the name of the thread that is not being killed, and you can then join on that thread so it completes.
But it can also be a program issue: in my case I had a java.util.Timer thread hanging ([Timer-0]), and calling timer.cancel() closed that timer.

Should I be worried about the order in which processes in a process group receive signals?

I want to terminate a process group by sending SIGTERM to processes within it. This can be accomplished via the kill command, but the manuals I found provide few details about how exactly it works:
int kill(pid_t pid, int sig);
...
If pid is less than -1, then sig is sent to every process in
the process group whose ID is -pid.
However, in which order will the signal be sent to the processes that form the group? Imagine the following situation: a pipe is set up between master and slave processes in the group. If, while kill(-pid) is being processed, the slave is killed but the master is not yet, the master might report this as an internal failure (upon receiving notification that the child is dead). However, I want all processes to understand that such a termination was caused by something external to their process group.
How can I avoid this confusion? Should I be doing something more than a mere kill(-pid, SIGTERM)? Or is it resolved by underlying properties of the OS that I'm not aware of?
Note that I can't modify the code of the processes in the group!
Try doing it as a three-step process:
kill(-pid, SIGSTOP);
kill(-pid, SIGTERM);
kill(-pid, SIGCONT);
The first SIGSTOP should put all the processes into a stopped state. They cannot catch this signal, so this should stop the entire process group.
The SIGTERM will be queued for each process, but I don't believe it will be delivered while the processes are stopped (this is from memory, and I can't currently find a reference, but I believe it is true).
The SIGCONT will start the processes again, allowing the SIGTERM to be delivered. If the slave gets the SIGCONT first, the master may still be stopped so it will not notice the slave going away. When the master gets the SIGCONT, it will be followed by the SIGTERM, terminating it.
I don't know if this will actually work, and it may be implementation dependent on when all the signals are actually delivered (including the SIGCHLD to the master process), but it may be worth a try.
My understanding is that you cannot rely on any specific order of signal delivery.
You could avoid the issue if you send the TERM signal to the master process only, and then have the master kill its children.
Even if all the various varieties of UNIX would promise to deliver the signals in a particular order, the scheduler might still decide to run the critical child process code before the parent code.
Even your STOP/TERM/CONT sequence will be vulnerable to this.
I'm afraid you may need something more complicated. Perhaps the child process could catch the SIGTERM and then loop until its parent exits before exiting itself? Be sure to add a timeout if you do this.
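A sketch of that suggestion, assuming you could modify the child (which the question says you cannot, so take it as illustration only): the child remembers its parent's pid and, after SIGTERM, waits with a timeout until it has been reparented.

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static volatile sig_atomic_t got_term;
static void on_term(int sig) { (void)sig; got_term = 1; }

int main(void)
{
    pid_t parent = getppid();          /* remember who our parent is */
    signal(SIGTERM, on_term);

    while (!got_term)
        pause();                       /* ... normal work would go here ... */

    /* once the parent dies we are reparented, so getppid() changes */
    for (int i = 0; i < 100 && getppid() == parent; i++)
        usleep(100 * 1000);            /* time out after roughly 10 seconds */
    exit(0);
}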
Untested: use shared memory to hold some kind of "we're dying" semaphore, which can be checked before I/O errors are treated as real errors. mmap() it with MAP_ANONYMOUS|MAP_SHARED and make sure it survives your way of fork()ing processes.
Oh, and be sure to use the volatile keyword, or your semaphore may be optimized away.
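A sketch of that idea, using a plain shared flag rather than a real semaphore (the function names are mine):

#include <signal.h>
#include <sys/mman.h>
#include <unistd.h>

static volatile int *dying;          /* one int on a shared anonymous page */

void setup_before_forking(void)
{
    /* MAP_SHARED|MAP_ANONYMOUS memory survives fork(), so the parent and
       all children see the same flag */
    dying = mmap(NULL, sizeof *dying, PROT_READ | PROT_WRITE,
                 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *dying = 0;
}

void terminate_group(pid_t pgid)
{
    *dying = 1;                      /* announce intentional shutdown first */
    kill(-pgid, SIGTERM);            /* then signal the whole group */
}

/* in a child, when a pipe read/write fails: */
int failure_was_expected(void)
{
    return *dying;                   /* nonzero: teardown, not an internal bug */
}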

What happens when a process is forked?

I've read about fork and, from what I understand, the process is cloned, but which process? The script itself, or the process that launched the script?
For example:
I'm running rTorrent on my machine, and when a torrent completes I have a script run against it. This script fetches data from the web, so it takes a few seconds to complete. During this time, my rTorrent process is frozen. So I made the script fork using the following:
my $pid = fork();
if ($pid == 0) { blah blah blah; exit 0; }
If I run this script from the CLI, it comes back to the shell within a second while it runs in the background, exactly as I intended. However, when I run it from rTorrent, it seems to be even slower than before. So what exactly was forked? Did the rtorrent process clone itself and my script ran in that, or did my script clone itself? I hope this makes sense.
The fork() function returns TWICE! Once in the parent process, and once in the child process. In general, both processes are IDENTICAL in every way, as if EACH one had just returned from fork(). The only difference is that in one, the return value from fork() is 0, and in the other it is non-zero (the PID of the child process).
So whatever process was running your Perl script (if it is an embedded Perl interpreter inside rTorrent then rTorrent would be the process) would be duplicated at exactly the point that the fork() happened.
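A tiny C illustration of the two returns:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();            /* one call ... */
    if (pid == 0)
        printf("child:  fork() returned 0\n");
    else
        printf("parent: fork() returned %d (the child's pid)\n", (int)pid);
    return 0;                      /* ... two processes reach this line */
}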
I believe I found the problem by looking through rTorrent's source. For some processes, it will read all of the output sent to stdout before continuing. If this is happening to your process, rTorrent will block until that stdout is closed. Because you're forking, your child process shares the same stdout as the parent. Your parent process will exit, but the pipe remains open (because your child process is still running). If you did an strace of rTorrent, I'd bet it would be blocked on this read() call while executing your command.
Try closing/redirecting stdout in your perl script before the fork().
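In C terms the fix amounts to pointing the child's stdout somewhere else right after the fork, so the parent's pipe end is no longer held open; the Perl equivalent is reopening STDOUT. A sketch:

#include <fcntl.h>
#include <unistd.h>

/* call in the child right after fork(): once stdout no longer refers to
   the parent's pipe, rTorrent's read() can reach end-of-file */
void detach_stdout(void)
{
    int devnull = open("/dev/null", O_WRONLY);
    if (devnull >= 0) {
        dup2(devnull, STDOUT_FILENO);
        close(devnull);
    }
}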
The entire process containing the interpreter forks. Fortunately memory is copy-on-write so it doesn't need to copy all the process memory in order to fork. However, things such as file descriptors remain open. This allows child processes to handle them, but may cause issues if they aren't closed appropriately. In general, fork() should not be used in an embedded interpreter except under extreme duress.
To answer the nominal question, since you commented that the accepted answer fails to do so, fork affects the process in which it is called. In your example of rTorrent spawning a Perl process which then calls fork, it is the Perl process which is duplicated, since it was the Perl process which called fork.
In the general case, there is no way for a process to fork any process other than itself. If it were possible to tell another arbitrary process to go fork itself, that would open up no end of security and performance issues.
My advice would be "don't do that".
If the Perl interpreter is embedded within the rtorrent process, you've almost certainly forked an entire rtorrent process, the effects of which are probably ill-defined at best. It's generally a bad idea to play with process-level stuff in an embedded interpreter regardless of language.
There's an excellent chance that some sort of lock is not being properly released, or that threads within the processes are proceeding in unintended and possibly competing ways.
When we create a process using fork, the child process gets a copy of the parent's address space, so the child can use that address space too. It can also access the files opened by the parent. The parent has control over the child, and to get the complete status of the child we can use wait.

Starting and stopping a forked process

Is it possible for a parent process to start and stop a child (forked) process in Unix?
I want to implement a task scheduler (see here) which is able to run multiple processes at the same time which I believe requires either separate processes or threads.
How can I stop the execution of a child process and resume it after a given amount of time?
(If this is only possible with threads, how are threads implemented?)
You could write a simple scheduler imitation using signals.
If you have the permissions, the stop signal (SIGSTOP) stops the execution of a process, and the continue signal (SIGCONT) resumes it.
With signals you would not have any fine-grained control over the "scheduling", but I guess an OS-grade scheduler is not the purpose of this exercise anyway.
Check the kill(2) and signal(7) manual pages.
There are also many guides to using Unix signals on the web.
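A bare-bones C sketch of that signal-driven start/stop (the child command and the timings are placeholders):

#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        execlp("sleep", "sleep", "30", (char *)NULL);  /* stand-in task */
        _exit(127);
    }
    sleep(1);
    kill(pid, SIGSTOP);            /* pause the child */
    sleep(2);                      /* ... something else gets the CPU ... */
    kill(pid, SIGCONT);            /* resume it */
    kill(pid, SIGTERM);            /* and finally end the demo */
    waitpid(pid, NULL, 0);
    return 0;
}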
You can use signals, but in the usual UNIX world it's probably easier to use semaphores. Once you set the semaphore to not let the other process proceed, the scheduler will swap it out in the normal course of things; when you clear the semaphore, it will become ready to run again.
You can do the exact same thing with threads of course; the only dramatic difference is you save a heavyweight context switch.
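A sketch of the semaphore variant in C, using a process-shared POSIX semaphore created before the fork (link with -pthread; error checking omitted):

#include <semaphore.h>
#include <signal.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* a process-shared semaphore in anonymous shared memory, created
       before fork() so parent and child use the same object */
    sem_t *gate = mmap(NULL, sizeof(sem_t), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    sem_init(gate, 1 /* pshared */, 0 /* start closed */);

    pid_t pid = fork();
    if (pid == 0) {
        for (;;) {
            sem_wait(gate);        /* parked here until the parent posts */
            /* ... one unit of work ... */
        }
    }
    sem_post(gate);                /* let the child run one unit */
    sleep(1);                      /* withholding further posts parks it */
    kill(pid, SIGTERM);            /* clean up the demo child */
    waitpid(pid, NULL, 0);
    return 0;
}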
Just a side note: if you are using signal(), the behavior may differ across Unixes. If you are using Linux, check the "Portability" section of the signal manpage, and see the sigaction manpage; sigaction is the preferred interface.
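For reference, the sigaction form, which has well-defined semantics across Unixes (the handler and flags here are just an example):

#include <signal.h>

static void on_child(int sig) { (void)sig; /* e.g. set a flag for the loop */ }

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = on_child;
    sigemptyset(&sa.sa_mask);      /* block nothing extra during the handler */
    sa.sa_flags = SA_RESTART;      /* restart interrupted syscalls, BSD-style */
    sigaction(SIGCHLD, &sa, NULL);
    /* ... */
    return 0;
}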
