EDIT: If managing child processes for a shell script is really purely a matter of "opinion"… no wonder there are so many terrible shell scripts. Thanks for continuing that.
I'm having trouble understanding how SIGTERM is conventionally handled in relation to child processes on Linux.
I am writing a command line utility in Bash.
It looks like this:
command1
command2
command3
Very simple, right?
However, if my program is sent a SIGTERM signal, the Bash script itself will exit, but the current child process (e.g. command2) will keep running.
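This is easy to demonstrate (demo.sh is a hypothetical name for a script containing nothing but a single sleep 600):
./demo.sh &    # run the script in the background
kill $!        # send SIGTERM to the script; the script exits...
pgrep sleep    # ...but its sleep child is still running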
But with some more code, I can write my program like this:
trap 'jobs -p | xargs -r kill' TERM
command1 &
wait
command2 &
wait
command3 &
wait
That will propagate SIGTERM to the currently running child process. I haven't often seen Bash scripts written like that, but that's what it would take.
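For what it's worth, the repetition can be factored into a small helper without changing the behavior (a sketch):
trap 'jobs -p | xargs -r kill' TERM
run() { "$@" & wait "$!"; }
run command1
run command2
run command3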
Should I:
Write my program in the second style each time I create a child process?
Or expect users to launch my program in a process group if they want to send SIGTERM?
What are the best practices/conventions for process-management responsibilities with respect to SIGTERM and children?
tl;dr
The first way.
If a process starts a child process and simply waits for it to finish (as in the example), nothing special is necessary.
If a process starts a child process and may prematurely terminate it, it should start that child in a new process group and send signals to the group. (A sketch follows.)
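A minimal shell sketch of that second case, assuming the util-linux setsid command is available (some_command is illustrative):
# Start the child as the leader of a new session (and thus a new
# process group). In a non-interactive script, $! is then both the
# child's PID and its PGID.
setsid some_command &
pid=$!

# ... later, signal the entire group rather than just the leader:
kill -TERM -- "-$pid"
Caveat: some setsid implementations fork an intermediate process when the caller is already a group leader, in which case $! is not the command's PID; treat this as a sketch, not a drop-in.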
Details
Oddly, given how often this applies (practically every shell script), I can't find a good answer about convention/best practice.
Some deduction:
Creating and signaling process groups is very common. In particular, interactive shells do this. So (unless it takes extra steps to prevent it) a process's children can receive SIGINT at any time, under entirely normal circumstances.
In the interest of supporting as few paradigms as possible, it seems to make sense to rely on that always.
That means the first style is okay, and the burden of process management is placed on processes that deliberately terminate their children during regular operation (which is relatively less common).
See also "Case study: timeout" below for further evidence.
How to do it
While the question was asked from the perspective of a vanilla callee program, this answer prompts a follow-up: how does one start a process in a new process group (in the non-vanilla case where one wishes to prematurely interrupt that process)?
This is easy in some languages and difficult in others. I've created a utility run-pgrp to assist in the latter case.
#!/usr/bin/env python3
# Run the command in a new process group, and forward signals.
import os
import signal
import sys

pid = os.fork()
if not pid:
    # Child: move into a new process group, then exec the command.
    os.setpgid(0, 0)
    os.execvp(sys.argv[1], sys.argv[1:])

def receiveSignal(sig, frame):
    # Forward the signal to the child's whole process group.
    os.killpg(pid, sig)

signal.signal(signal.SIGINT, receiveSignal)
signal.signal(signal.SIGTERM, receiveSignal)

_, status = os.waitpid(pid, 0)
# Translate the raw wait status into a conventional exit code.
if os.WIFEXITED(status):
    sys.exit(os.WEXITSTATUS(status))
sys.exit(128 + os.WTERMSIG(status))
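For instance, a shell caller that wants to terminate the child early could use it like this (a hypothetical usage sketch):
# Run a small pipeline in its own process group via run-pgrp.
run-pgrp sh -c 'sleep 600 | cat' &
pid=$!

sleep 5
kill -TERM "$pid"    # run-pgrp forwards the signal to the child's group
wait "$pid"          # both sleep and cat are gone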
The caller can use that to wrap a process that it may prematurely terminate.
Node.js example:
const childProcess = require("child_process");

(async () => {
  // Spawn through run-pgrp so the whole tree lives in one process group.
  const child = childProcess.spawn(
    "run-pgrp",
    ["bash", "-c", "echo start; sleep 600; echo done"],
    { stdio: "inherit" }
  );
  /* leaves an orphaned process:
  const child = childProcess.spawn(
    "bash",
    ["-c", "echo start; sleep 600; echo done"],
    { stdio: "inherit" }
  );
  */
  await new Promise(res => setTimeout(res, /* 1s */ 1000));
  child.kill(); // SIGTERM; run-pgrp forwards it to the child's group
  if (child.exitCode == null) {
    await new Promise(res => child.on("exit", res));
  }
})();
At the end of this program, the sleep process is terminated. If the command is invoked directly, without run-pgrp, the sleep process continues to run.
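One way to see the difference is to check for the orphan after the script exits (illustrative):
pgrep -a sleep    # after the direct variant, the 600-second sleep is still listed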
Case study: timeout
The GNU timeout utility is a program that may terminate its child process.
Notably, it runs the child in a new process group. This supports the conclusion that potential interruptions should be preceded by creating a new process group.
Interestingly, however, timeout puts itself into the new process group as well, to avoid complexities around forwarding signals; this causes some strange behavior. https://unix.stackexchange.com/a/57692/56781
For example, in an interactive shell, run
bash -c "echo start; timeout 600 sleep 600; echo done"
Try to interrupt this (Ctrl+C). It doesn't respond, because timeout never gets the signal!
In contrast, my run-pgrp utility keeps itself in the original process group and forwards SIGINT/SIGTERM to the child group.
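To see the grouping concretely, run that bash -c line in one terminal and inspect it from another (the column choice is illustrative):
ps -o pid,pgid,sid,comm -u "$USER" | grep -E 'timeout|sleep'
# timeout and sleep share a PGID that differs from the outer bash's,
# which is why the keyboard-generated SIGINT never reaches them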
Related
I have a shell script which spawns a child process in the background and waits on it (via the wait command). It also catches SIGTERM and passes it to the child. But whenever I send a SIGTERM to the parent process, it comes out of the wait even though the child is still running (the child catches SIGTERM). Is it possible to truly wait on a child from inside a shell script until the child dies?
This is explicit behavior that common idioms depend on. Observe the following difference:
# this waits 10 seconds, and doesn't handle signal handlers until later
sleep 10
# this returns immediately when a signal is received
sleep 10 & wait $!
You're perfectly able to check whether background tasks remain and wait again:
sleep 10 & pid=$!
while kill -0 "$pid" 2>/dev/null; do wait "$pid"; done
For a full discussion of signal handling, including the behavior described here, see SignalTrap.
I have a child process spawned using child_process.fork and would like to terminate it. The problem is that the child process does some lengthy CPU-bound calculation and I don't have control over it. That is, the CPU-bound code fragment cannot be restructured to make use of process.nextTick or polling.
A very simplified example:
parent.js
var cp = require('child_process');
var child = cp.fork('child.js');
child.js
...
while(true){} // lengthy computation which I cannot modify
...
Is it possible to terminate it? Preferably in a way that allows catching the exit event in the child in order to do some cleanups?
Sending SIGTERM/SIGKILL/etc. using child.kill() doesn't seem to work on Windows. I assume that even if it works on other OSes, it wouldn't kill the process anyway, since the child can't process events while doing the computation.
I've done this the messy way: by taking the PID of the process and killing it at the OS level.
Not sure how to do it on Windows, but on Linux/Mac I've done:
var cp = require('child_process'),
    badJob = cp.fork('badFile.js');

cp.execSync('kill -9 ' + badJob.pid);
Signal 9 (SIGKILL) is handled at the kernel level, so the state of the process is irrelevant.
Edit: On Windows you can use taskkill instead of kill, e.g.:
cp.execSync('taskkill /f /pid ' + badJob.pid);
I have a shell script background process that runs "nohupped". This process is supposed to receive signals in a trap, but when playing around with some code, I noticed that some signals are ignored if the interval between them is too small. The execution of the trap function takes too much time, and therefore the subsequent signal goes unserved. Unfortunately, the trap command doesn't provide any kind of signal queue, so I'm asking: what is the best way to solve this problem?
A simple example:
function receive_signal()
{
    local TIMESTAMP=`date '+%Y%m%d%H%M%S'`
    echo "some text" > $TIMESTAMP
}

trap receive_signal USR1

while :
do
    sleep 5
done
The easiest change, without redesigning your approach, is to use realtime signals, which queue.
This is not portable. Realtime signals themselves are an optional extension, and shell and utility support for them are not required by the extension in any case. However, it so happens that the relevant GNU utilities on Linux — bash(1) and kill(1) — do support realtime signals in a commonsense way. So, you can say:
trap sahandler RTMIN+1
and, elsewhere:
$ kill -s RTMIN+1 $pid_of_my_process
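Putting the two halves together, a runnable sketch (the handler name sahandler comes from above; everything else is illustrative):
#!/usr/bin/env bash
# Receiver: count realtime signals as they are handled.
count=0
sahandler() { count=$((count + 1)); echo "handled signal #$count"; }
trap sahandler RTMIN+1

echo "receiver pid: $$"
while :; do sleep 1; done

# Sender, from another shell ($pid is the receiver's PID):
#   for i in 1 2 3; do kill -s RTMIN+1 "$pid"; done
# All three deliveries should eventually be handled, because realtime
# signals queue rather than coalesce.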
Did you consider multiple one-line trap statements, one for each signal you want to block or process?
trap dosomething 15
trap segfault SEGV
Also, you want to have the least possible code in a signal handler, for the very reason you just encountered.
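A sketch of that advice: the trap body only records the event, and the main loop does the actual work.
got_usr1=0
trap 'got_usr1=1' USR1    # handler does nothing but set a flag

while :; do
    if [ "$got_usr1" -eq 1 ]; then
        got_usr1=0
        handle_usr1_event    # hypothetical function doing the real work
    fi
    sleep 1
done
Note that plain (non-realtime) signals still coalesce under this pattern; it shrinks the window but doesn't eliminate it, which is why the other answer reaches for realtime signals.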
Edit: for bash, you can code your own error handling / signal handling in C (or anything else with modern signal semantics) via dynamically loadable builtins:
http://cfajohnson.com/shell/articles/dynamically-loadable/
I have one simple question about signals on Linux systems. As I understand it, every process has its PID and PGID. When I create a process, it gets its own unique PID; if I then fork a new process with the fork() function, I get a child process with a different PID but the same PGID.
Now, the code:
#include <stdio.h>
#include <unistd.h>

int main()
{
    int i = 3;
    int j;
    for (j = 0; j < i; ++j)
    {
        if (fork() == 0)
        {
            while (1)
            {
            }
        }
    }
    printf("created\n");
    while (1)
    {
    }
    return 0;
}
when I compile this program and run it with the command
./foo
and wait a second so it creates its children, then press Ctrl-C and run ps aux, I can see that the parent and the children are gone. But if I do
./foo
wait for the forking to complete, and in another terminal do
kill -INT <pid_of_foo>
and then run ps aux, I can see that the parent is gone but the children are still alive and eating my CPU.
I am not sure, but it seems that Ctrl-C sends the signal to every process in some process group, while the kill -SIGNAL pid command sends the signal to the process with PID=pid, not PGID=pid.
Am I on the right track? If so, why does the key combination kill by PGID and not by PID?
Signal delivery, process groups, and sessions
Yes, you are on the right track.
Modern Unix variants since the BSD releases implement sessions and process groups.
You can look at sessions as groups of process groups. The idea was that everything resulting from a single login on a tty or pseudo-tty line is part of a session, and things relating to a single shell pipeline or other logical grouping of processes would be organized into a single process group.
This makes moving "jobs" between the foreground and background, and delivering signals, more convenient. The shell user mostly doesn't need to worry about individual processes and can Ctrl-C a group of related commands in an intuitive manner.
Signals generated by the keyboard are sent to the foreground process group of the session. The kill command you are using delivers signals to individual processes. If you want to duplicate the ^C delivery mechanism, you can use kill 0: that sends the signal to every member of the caller's own process group, and when executed from a script it may do what you want.
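A sketch of kill 0 (run it as a script, not pasted into an interactive shell, because the caller's own group receives the signal):
#!/usr/bin/env bash
sleep 100 &
sleep 200 &

# Deliver SIGTERM to every process in our own process group:
# both sleeps and this script itself.
kill -TERM 0
Exactly which processes share that group depends on how the script was launched; an interactive shell with job control gives the script a fresh group of its own.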
Note: I edited your question to change GPID to PGID.
I'm trying to write a Perl script that captures system log output while a loop runs a system command at intervals. I want the script to do the equivalent of something I often do on the (Unix) command line: taking Java thread dumps by tailing /var/log/jbossas/default/console.log into a new file in the background, while running kill -QUIT [PID] an arbitrary number of times at intervals in the foreground. I don't need to examine or process the log output while it's being tailed; I just want it to go to a new file while my loop runs. Once the loop exits, the background task should exit too.
# basic loop
# $process is the target PID, given as an argument
my $duration  = 6;
my $dumps     = 0;
my $frequency = 30;    # seconds between dumps

until ($dumps == $duration) {
    system "kill -QUIT $process";
    $dumps++;
    print STDOUT "$dumps of $duration thread dumps sent to log.\n";
    print STDOUT "sleeping for $frequency seconds\n";
    sleep $frequency;
}
Somehow I need to wrap this loop in another loop that will know when this one exits and then stop the background log-tailing task. I realize this should be trivial in Perl, but I'm not sure how to proceed, and the other questions and examples I've found don't do quite what I'm after. It seems that using Perl's system blocks me from proceeding to the inner loop, while exec forks off the tail job, so I'm not sure how I'd stop it after my inner loop runs. I'd strongly prefer to use only core Perl modules, not File::Tail or any additional CPAN modules.
Thanks in advance for any feedback, and feel free to mock my Perlessness. I've looked for similar questions answered here, but if I've missed one that seems to address my problem, I'd appreciate your linking me to it.
This is probably best suited to an event loop. Read up on the answer to Making a Perl daemon that runs 24/7 and reads from named pipes; that will give you an intro to reading a filehandle in an event loop. Open a pipe to the tail output, print it off to the file, run the kill on a timer event, and once the timer events are done, just signal an exit.
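For comparison, the plain-shell version of the pattern the question describes looks like this (a sketch; the log path comes from the question, the other names are illustrative):
# Capture the console log in the background while triggering dumps.
tail -f /var/log/jbossas/default/console.log > thread-dumps.log &
tail_pid=$!

for i in 1 2 3 4 5 6; do
    kill -QUIT "$process"    # $process: the target JVM's PID
    sleep 30
done

kill "$tail_pid"    # stop the background tail once the loop is done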