I have a scenario in which, after the fork, the child calls execle(), a Linux system call, to execute a small shell script.
The parent then does only a wait(). So my question is: does the parent's wait() still wait for the child after the child has called execle()?
Thanks
Smita
I'm not too sure what you're asking, but if the parent is in a wait() system call, it will wait there until any child exits. There are other things, like signals, that can take it out of the wait too.
You do have to be careful in the child process that you don't accidentally fall through into the parent code on error.
This (a child process doing some execve after its parent fork-ed, and the parent wait-ing or waitpid-ing for it) is a very common scenario; most shells work this way. You could e.g. strace -f an interactive bash shell to learn more, or study the source code of a simple shell like sash.
Notice that after a fork(2) syscall, the parent and the child processes may run simultaneously (i.e. at the same time, especially on multi-core machines).
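For reference, here is a minimal C sketch of the pattern described above, assuming the child runs a shell script via execle() and the parent just waits; the script path and the environment passed to execle() are placeholders, not anything from the original question.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *const envp[] = { "PATH=/bin:/usr/bin", NULL };   /* placeholder environment */
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* child: replace this process image with the shell script */
        execle("/bin/sh", "sh", "/path/to/script.sh", (char *)NULL, envp);
        perror("execle");          /* reached only if execle fails */
        _exit(127);                /* avoid falling through into the parent code */
    }
    int status;
    waitpid(pid, &status, 0);      /* parent blocks here until the child exits */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}

The wait()/waitpid() in the parent does not care that the child has replaced itself with another program; it returns when that process, whatever it is now running, terminates.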
My understanding is that subshells fork a child process off of the parent process and that any commands in the parentheses are executed using execve. The parent process waits for the child process to finish executing. Am I missing anything here?
The shell may fork to create the subshell, and then, for each external command in (), fork again before calling execve; see the sketch below.
If the commands in () are internal (built-in) commands, it does not need to fork and execve for those.
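As a rough illustration of that structure, here is a hedged C sketch of what a shell does, in outline, for something like (ls /tmp; date) when both commands are external; the command names are just examples, and real shells add built-in handling, redirections, job control, and much more.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void run_external(char *const argv[]) {
    pid_t pid = fork();                 /* one fork per external command */
    if (pid == 0) {
        execvp(argv[0], argv);          /* the child becomes the command */
        perror("execvp");
        _exit(127);
    }
    waitpid(pid, NULL, 0);              /* the subshell waits for the command */
}

int main(void) {
    pid_t sub = fork();                 /* first fork: the subshell itself */
    if (sub == 0) {
        char *ls[]   = { "ls", "/tmp", NULL };
        char *date[] = { "date", NULL };
        run_external(ls);
        run_external(date);
        _exit(0);                       /* subshell finished */
    }
    waitpid(sub, NULL, 0);              /* parent shell waits for the subshell */
    return 0;
}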
I'm working with parallel processing and rather than dealing with cvars and locks I've found it's much easier to run a few commands in a shell script in sequence to avoid race conditions in one place. The new problem is that one of these commands calls another program, which the OS has decided to put into a new process. I need to kill this process from the parent program, but the parent program only knows the pid of the parent (shell script), so this process keeps executing on its own.
Is there a way in bash to set a subprocess to die when the parent dies? I've tried to figure out how to execute it as a daemon because I read daemons exit when the parent dies, but it's tricky and I can't quite get it right. Thanks!
Found the problem, and this fixed it (except for some pesky messages that somehow cannot be redirected to /dev/null):
# on INT/TERM/EXIT: remove the SIGTERM trap, then signal the whole process group (-$$) so children die with the script
trap "trap - SIGTERM && kill -- -$$" SIGINT SIGTERM EXIT
Does the Linux scheduler prefer to run the child process over the parent process after fork()?
Usually, the forked process will execute an exec of some kind, so it is better to let the child process run before the parent process (to avoid unnecessary copy-on-write).
I assume that the child will execute exec as its first operation after it is created.
Is my assumption (that the scheduler will prefer the child process) correct? If not, why? If yes, are there more reasons to run the child first?
To quote The Linux Programming Interface (pg. 525) for a general answer:
After a fork(), it is indeterminate which process - the parent or the child - next has access to the CPU. (On a multiprocessor system, they may both simultaneously get access to a CPU.)
The book goes on about the differences in kernel versions and also mentions CFS / Linux 2.6.32:
[...] since Linux 2.6.32, it is once more the parent that is, by default, run first after a fork(). This default can be changed by assigning a nonzero value to the Linux-specific /proc/sys/kernel/sched_child_runs_first file.
This behaviour is still present with CFS, although there are some concerns about the future of this feature. Looking at the CFS implementation, it seems to schedule the parent before the child.
The way to go for you would be to set /proc/sys/kernel/sched_child_runs_first to a non-zero value.
Edit: This answer analyzes the default behaviour and compares it to sched_child_runs_first.
For the case where the child calls exec at the first opportunity, you can use vfork instead of fork. vfork suspends the parent until the child calls _exit or one of the exec* functions. However, once it calls exec, the child will be suspended if code has to be loaded from disk. In this case, the parent has a good chance to continue before the child does.
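A minimal sketch of that vfork pattern (the command being run is just an example); note that after vfork the child must only call an exec function or _exit, since it borrows the parent's address space until then:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = vfork();               /* parent is suspended until the child execs or _exits */
    if (pid < 0) {
        perror("vfork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        execl("/bin/ls", "ls", "/tmp", (char *)NULL);
        _exit(127);                    /* must be _exit, not exit, after a vfork */
    }
    int status;
    waitpid(pid, &status, 0);
    return 0;
}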
The theory says that if wait is not called, the parent won't get information about the terminated child, and the child becomes a zombie. But when we create a process, zombies are not created even if we don't call wait. My question is whether wait is called automatically.
In many languages, spawning a subprocess will call wait() for you. For example, in Ruby or Perl, you often shell out like this:
#!/usr/bin/ruby
system("ls /tmp")
`ls /tmp`
This is doing a bunch of magic for you, including calling wait(). In fact, Ruby must wait for the process to exit anyway to collect the output before the program can continue.
You can easily create zombies like this:
#!/usr/bin/ruby
if fork
  sleep 1000 # Parent ignoring the child
else
  exec "ls /tmp" # short-lived child
end
When we manually fork/exec, there is no magic calling wait() for us, and a zombie will be created. But when the parent exits, the zombie child will get re-parented to init, which will always call wait() to clean up zombies.
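If you do want to reap children yourself rather than rely on init, one common approach (a sketch, not the only way) is to install a SIGCHLD handler that loops over waitpid() with WNOHANG:

#include <signal.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static void reap_children(int sig) {
    (void)sig;
    while (waitpid(-1, NULL, WNOHANG) > 0)   /* reap every child that has exited */
        ;
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = reap_children;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;                /* restart interrupted system calls */
    sigaction(SIGCHLD, &sa, NULL);

    if (fork() == 0) {
        execlp("ls", "ls", "/tmp", (char *)NULL);   /* short-lived child */
        _exit(127);
    }
    sleep(5);   /* parent keeps going; the handler reaps the child, so no zombie lingers */
    return 0;
}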
I've read about fork and from what I understand, the process is cloned but which process? The script itself or the process that launched the script?
For example:
I'm running rTorrent on my machine and when a torrent completes, I have a script run against it. This script fetches data from the web, so it takes a few seconds to complete. During this time, my rTorrent process is frozen. So I made the script fork using the following:
my $pid = fork();
if ($pid == 0) { blah blah blah; exit 0; }
If I run this script from the CLI, it comes back to the shell within a second while it runs in the background, exactly as I intended. However, when I run it from rTorrent, it seems to be even slower than before. So what exactly was forked? Did the rtorrent process clone itself and my script ran in that, or did my script clone itself? I hope this makes sense.
The fork() function returns TWICE! Once in the parent process, and once in the child process. In general, both processes are IDENTICAL in every way, as if EACH one had just returned from fork(). The only difference is that in one, the return value from fork() is 0, and in the other it is non-zero (the PID of the child process).
So whatever process was running your Perl script (if it is an embedded Perl interpreter inside rTorrent then rTorrent would be the process) would be duplicated at exactly the point that the fork() happened.
I believe I found the problem by looking through rTorrent's source. For some processes, it will read all of the output sent to stdout before continuing. If this is happening to your process, rTorrent will block until that stdout is closed. Because you're forking, your child process shares the same stdout as the parent. Your parent process will exit, but the pipe remains open (because your child process is still running). If you did an strace of rTorrent, I'd bet it would be blocked on this read() call while executing your command.
Try closing/redirecting stdout in your Perl script before the fork().
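The same idea sketched in C, since the mechanics are the same in any language: the child re-points its stdout at /dev/null right after the fork, so the pipe that rTorrent is reading from is released as soon as the parent exits (the long-running work is just a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* Child: detach stdout from whatever pipe the caller handed us,
           so the caller sees EOF once the parent has exited. */
        int fd = open("/dev/null", O_WRONLY);
        if (fd >= 0) {
            dup2(fd, STDOUT_FILENO);
            close(fd);
        }
        sleep(30);          /* placeholder for the slow web fetch */
        _exit(0);
    }
    /* Parent: return to the caller (rTorrent) right away. */
    return 0;
}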
The entire process containing the interpreter forks. Fortunately memory is copy-on-write so it doesn't need to copy all the process memory in order to fork. However, things such as file descriptors remain open. This allows child processes to handle them, but may cause issues if they aren't closed appropriately. In general, fork() should not be used in an embedded interpreter except under extreme duress.
To answer the nominal question, since you commented that the accepted answer fails to do so, fork affects the process in which it is called. In your example of rTorrent spawning a Perl process which then calls fork, it is the Perl process which is duplicated, since it was the Perl process which called fork.
In the general case, there is no way for a process to fork any process other than itself. If it were possible to tell another arbitrary process to go fork itself, that would open up no end of security and performance issues.
My advice would be "don't do that".
If the Perl interpreter is embedded within the rtorrent process, you've almost certainly forked an entire rtorrent process, the effects of which are probably ill-defined at best. It's generally a bad idea to play with process-level stuff in an embedded interpreter regardless of language.
There's an excellent chance that some sort of lock is not being properly released, or that threads within the processes are proceeding in unintended and possibly competing ways.
When we create a process using fork, the child gets a copy of the parent's address space, so the child can use that address space. It can also access the files that the parent has opened. We have control over the child, and to get the complete status of the child we can use wait.
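To tie those points together, here is a small hedged sketch (the file name is just an example): the child writes through a descriptor that the parent opened, and the parent collects the child's exit status with wait():

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd = open("/tmp/example.log", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        exit(EXIT_FAILURE);
    }
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: the descriptor opened by the parent is inherited and usable. */
        write(fd, "written by the child\n", 21);
        _exit(42);
    }
    int status;
    wait(&status);                       /* parent collects the child's exit status */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    close(fd);
    return 0;
}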