How to Kill Current Command When Bash Script is Killed - linux

I currently have a script that looks like this.
# code
mplayer "$vid"
# more code
The problem is that if this script is killed, the mplayer process lives on. I'm wondering how I could make it so that killing the script would kill mplayer as well.
I can't use exec because I need to run commands after mplayer.
exec mplayer "$vid"
The only possible solution I can think of is to spawn it in the background and wait until it finishes manually. That way I can get its PID and kill it when the script gets killed, but that's not exactly elegant. I was wondering what the "proper" or best way of doing this is.

I was able to test the prctl idea I posted about in a comment and it seems to work. You will need to compile this:
#include "sys/prctl.h"
#include "stdlib.h"
#include "string.h"
#include "unistd.h"
int main(int argc, char ** argv){
prctl(PR_SET_PDEATHSIG, atoi(argv[1]),0,0,0);
char * argv0 = strdup(argv[2]);
char * slashptr = strrchr(argv0, '/');
if(slashptr){
argv0 = slashptr + 1;
}
return execvp(argv0, &(argv[2]));
}
Let's say you have compiled the above to an executable named "prun" and it is in your path. Let's say your script is called "foo.sh" and it is also in your path. Make a wrapper script that calls
prun 15 foo.sh
foo.sh should get SIGTERM when the wrapper script is terminated for any reason, even SIGKILL.
Note: this is a Linux-only solution, and the C source code presented does no detailed checking of its arguments.
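For illustration, a minimal wrapper sketch (the file name wrapper.sh is hypothetical; prun and foo.sh are assumed to be in your PATH, and 15 is SIGTERM's number on Linux):
#!/bin/bash
# wrapper.sh -- hypothetical wrapper; prun and foo.sh are assumed to be in PATH.
# foo.sh runs as a child of this wrapper. If the wrapper dies for any reason
# (even SIGKILL), the kernel delivers signal 15 (SIGTERM) to foo.sh via
# PR_SET_PDEATHSIG.
prun 15 foo.sh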

Thanks to Mux for the lead. It appears that there is no way to do this in bash except for manually catching signals. Here is a final working (overly commented) version.
trap : SIGTERM SIGINT   # Trap these two (killing) signals. These will cause wait
                        # to return a value greater than 128 immediately after being received.
mplayer "$vid" &        # Start mplayer in the background (its PID gets put in `$!`).
pid=$!
wait $pid               # Wait for mplayer to finish.
[ $? -gt 128 ] && { kill $pid; exit 128; }   # If a signal was received,
                                             # kill mplayer and exit.
References:
- traps: http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_12_02.html

(Updated) I think I understand what you are looking for now:
You can accomplish this by spawning a new terminal to run your script:
gnome-terminal -x /path_to_dir_of_your_script/your_script_name
(or use xterm -e or konsole -e instead of gnome-terminal -x, depending on what system you are on)
So now whenever your script ends or exits (I assume you have exit 0 or exit 1 in certain parts of the script), the newly spawned terminal will also exit since the script is finished - this will in turn also kill any applications spawned under that new terminal.
For example, I just tested the above command with this script:
#!/bin/bash
gedit &
pid=$!
echo "$pid"
sleep 5
exit 0
As you can see, there are no explicit calls to kill the new gedit process, but the application (gedit) closes as soon as the script exits anyway.
(Previous answer: alternatively, if you were simply asking about how to kill a process) Here's a short example of how you can accomplish that with kill.
#!/bin/bash
gedit &
pid=$!
echo "$pid"
sleep 5
kill -s SIGKILL $pid
Unless I misunderstood your question, you can get the PID of the spawned process right away instead of waiting until it finishes.

Well, you can simply kill the process group instead; this way the whole process tree will be killed. First find out the group ID:
ps x -o "%p %r %c" | grep <name>
And then use kill like so:
kill -TERM -<gid>
Note the dash before the process group id. Or a one-liner:
kill -TERM -$(pgrep <name>)
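If you already have a single PID in hand, you can also look the group up with ps (a sketch; ps -o pgid= is standard, the PID value is hypothetical):
pid=12345                                  # hypothetical PID of any process in the tree
pgid=$(ps -o pgid= -p "$pid" | tr -d ' ')  # its process group ID
kill -TERM -- "-$pgid"                     # the negative PGID targets the whole group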

Perhaps use command substitution to run mplayer "$vid" in a subshell:
$(mplayer "$vid")
I tested it this way:
test.sh:
#!/bin/sh
vid="..."
$(mplayer "$vid")
% test.sh
In a separate terminal:
% pkill test.sh
In the original terminal, mplayer stops, printing to stderr:
Terminated
MPlayer interrupted by signal 13 in module: av_sync

Related

Parallel run and wait for processes from subshell

Hi all. I'm trying to make something like a parallel tool for the shell, simply because the functionality of parallel is not enough for my task. The reason is that I need to run different versions of a compiler.
Imagine that I need to compile 12 programs with different compilers, but I can run only 4 of them simultaneously (otherwise the PC runs out of memory and crashes :). I also want to be able to observe what's going on with each compile, therefore I execute every compile in a new window.
Just to make it easier, here I'll replace the compiler with a small script that waits and returns its process ID, sleep.sh:
#!/bin/bash
sleep 30
echo $$
So the main script, parallel_run.sh, should look like:
#!/bin/bash
for i in {0..11}; do
    xfce4-terminal -H -e "./sleep.sh" &
    pids[$i]=$!
    pstree -p $pids
    if (( $i % 4 == 0 ))
    then
        for pid in ${pids[*]}; do
            wait $pid
        done
    fi
done
The problem is that with $! I get the PID of xfce4-terminal and not of the program it executes. So if I look at the pstree output of the 1st iteration, I can see from the main script:
xfce4-terminal(31666)----{xfce4-terminal}(31668)
|--{xfce4-terminal}(31669)
and sleep.sh says that it had pid = 30876 at that time. Thus wait doesn't work at all in this case.
Q: How do I get the right PID of the compiler that runs in the subshell?
Maybe there is another way to solve a task like this?
It seems there is no way to trace the PID from parent to child if you invoke the process in a new xfce4-terminal, as the terminal process dies right after it executes the given command. So I came to a solution which is not perfect, but acceptable in my situation: I run the compiler processes in the background and redirect output to a .log file. Then I run tail on these logfiles, and when the compilers from the current batch are done I kill all tails belonging to the current $USER, then run the next batch.
#!/bin/bash
for i in {1..8}; do
    ./sleep.sh > ./process_$i.log &
    prcid=$!
    xfce4-terminal -e "tail -f ./process_$i.log" &
    pids[$i]=$prcid
    if (( $i % 4 == 0 ))
    then
        for pid in ${pids[*]}; do
            wait $pid
        done
        killall -u $USER tail
    fi
done
Hopefully there will be no other tails running at that time :)
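As an aside, a different way to cap concurrency (a sketch, assuming bash 4.3+ for wait -n; untested with xfce4-terminal) is to wait for any one job to finish instead of waiting for a whole batch:
#!/bin/bash
# sketch: keep at most 4 background jobs running at once
max=4
for i in {1..8}; do
    ./sleep.sh > "./process_$i.log" &
    while (( $(jobs -rp | wc -l) >= max )); do
        wait -n   # block until any one background job finishes
    done
done
wait   # reap the remaining jobs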

Don't show the output of kill command in a Linux bash script [duplicate]

How can you suppress the Terminated message that comes up after you kill a process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command ($! expands to the PID of the most recent background job).
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
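One common way to wire this up is an EXIT trap, so every remaining background job is reaped quietly when the script ends (a sketch; the function name cleanup is arbitrary):
#!/bin/bash
# sketch: silence "Terminated" messages for all background jobs on exit
cleanup() {
    local pids
    pids=$(jobs -rp)
    if [ -n "$pids" ]; then
        kill $pids 2>/dev/null
        wait $pids 2>/dev/null
    fi
}
trap cleanup EXIT
sleep 100 &
sleep 100 &
# ... rest of the script ...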
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect so standard error is pointing to /dev/null but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first one is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script, if the script itself alters file descriptors.
EDIT:
For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
    trap 'exit 0' TERM   ## here is the key
    while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1   # wait until the trap is set up
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid   ## no need to redirect stdin/stderr
sleep 1   # wait until the kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in parentheses) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the PID of your background process back to the current shell if you want to check whether it has terminated, or evaluate the return code.
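If you do need that PID back, one way is to echo $! out of the subshell (a sketch; redirecting sleep's stdout keeps the command substitution from blocking until the background job exits):
# sketch: recover the background PID from the job-control-free subshell
pid=$( set +m; sleep 30 >/dev/null 2>&1 & echo $! )
# ... later ...
kill "$pid" 2>/dev/null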
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
Had success with adding 'jobs 2>&1 >/dev/null' to the script, not certain if it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
    echo Random is $line the last jobid is $(jobs -lp)
    jobs 2>&1 >/dev/null
    sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in a sh -c '...' construct
sh -c '
    sleep 30 &
    pid="${!}"
    sleep 5
    kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:
function killCmd() {
    kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal, e.g.:
{ kill -9 $PID; } 2>/dev/null

How to get the right PID of a group of background commands and kill it?

Ok, just like in this thread, How to get PID of background process?, I know how to get the PID of a background process. However, what I need to do contains more than one operation.
{
    sleep 300;
    echo "Still running after 5 min, killing process manually.";
    COMMAND COMMAND COMMAND
    echo "Shutdown complete"
}&
PID_CHECK_STOP=$!
some stuff...
kill -9 $PID_CHECK_STOP
But it doesn't work. It seems I get either a bad PID or I just can't kill it. I tried to run ps | grep sleep and the PID it gives is always right next to the one I get in PID_CHECK_STOP. Is there a way to make it work? Can I wrap those commands another way so I can kill them all when I need to?
Thx guys!
kill -9 kills the process before it can do anything else, including signalling its children to exit. Use a gentler signal (kill by itself, which sends a TERM, should be sufficient). You do need to have the process signal its children to exit (if any) explicitly, though, via a trap command.
I'm assuming sleep is a placeholder for the real command. sleep is tricky, however, as it ignores any signals until it returns (i.e., it is non-interruptible). To make your example work, put sleep itself in the background and immediately wait on it. When you kill the "outer" background process, it will interrupt the wait call, which will allow sleep to be killed as well.
{
    trap 'kill $(jobs -p)' EXIT
    sleep 300 & wait
    echo "Still running after 5 min, killing process manually.";
    COMMAND COMMAND COMMAND
    echo "Shutdown complete"
}&
PID_CHECK_STOP=$!
some stuff...
kill $PID_CHECK_STOP
UPDATE: COMMAND COMMAND COMMAND includes a command that runs via sudo. To kill that process, kill must also be run via sudo. Keep in mind that doing so will run the external kill program, not the shell built-in (there is little difference between the two; the built-in exists to allow you to kill a process when your process quota has been reached).
You can have another script containing those commands and kill that script. If you are dynamically generating code for the block, just write out a script, execute it and kill when you are done.
The { ... } block, run in the background with &, starts a new subshell, and $! gives you its PID afterwards. sleep and the other commands within the block get their own, separate PIDs.
To illustrate, look for your process in ps afux | less - the parent shell process (above the sleep) has the PID you were just given.
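To see this yourself, a quick demo (a sketch; assumes procps ps for the --ppid option):
{ sleep 300; echo done; } &
echo "subshell PID: $!"
ps -o pid,ppid,cmd --ppid $!   # the sleep shows up as a child of the subshell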

Sleep in a while loop gets its own pid

I have a bash script that does some parallel processing in a loop. I don't want the parallel process to spike the CPU, so I use a sleep command. Here's a simplified version.
(while true;do sleep 99999;done)&
So I execute the above line from a bash prompt and get something like:
[1] 12345
Where [1] is the job number and 12345 is the process ID (pid) of the while loop. I do a kill 12345 and get:
[1]+ Terminated ( while true; do
sleep 99999;
done )
It looks like the entire script was terminated. However, I do a ps aux|grep sleep and find the sleep command is still going strong but with its own pid! I can kill the sleep and everything seems fine. However, if I were to kill the sleep first, the while loop starts a new sleep pid. This is such a surprise to me since the sleep is not parallel to the while loop. The loop itself is a single path of execution.
So I have two questions:
Why did the sleep command get its own process ID?
How do I easily kill the while loop and the sleep?
Sleep gets its own PID because it is a separate process, running and just waiting. Try which sleep to see where its binary lives.
You can use ps -uf to see the process tree on your system. From there you can determine what the PPID (parent PID) of the shell (the one running the loop) of the sleep is.
Because "sleep" is a process, not a build-in function or similar
You could do the following:
(while true;do sleep 99999;done)&
whilepid=$!
kill -- -$whilepid
The above code kills the process group, because the PID is specified as a negative number (e.g. -123 instead of 123). In addition, it uses the variable $!, which stores the PID of the most recently executed process.
Note:
When you execute any process in the background in interactive mode (i.e. using the command line prompt), it creates a new process group, which is what is happening to you. That way, it's relatively easy to "kill 'em all", because you just have to kill the whole process group. However, when the same is done within a script, no new group is created, because all new processes belong to the script's PID, even if they are executed in the background (job control is disabled by default). To enable job control in a script, you just have to put the following at the beginning of the script:
#!/bin/bash
set -m
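A minimal sketch of that (with set -m, each background job becomes its own process group, so the negative-PID kill works inside a script too):
#!/bin/bash
set -m                   # enable job control: background jobs get their own process group
(while true; do sleep 99999; done) &
loop_pid=$!
# ... later ...
kill -- -"$loop_pid"     # negative PID kills the loop and its current sleep in one go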
Have you tried doing kill %1, where 1 is the number you get after launching the command in background?
I did it right now after launching (while true;do sleep 99999;done)& and it correctly terminated it.
"ps --ppid" selects all processes with the specified parent pid, eg:
$ (while true;do sleep 99999;done)&
[1] 12345
$ ppid=12345 ; kill -9 $ppid $(ps --ppid $ppid -o pid --no-heading)
You can kill the process group.
To find the process group of your process run:
ps --no-headers -o "%r" -p 15864
Then kill the process group using:
kill -- -[PGID]
You can do it all in one command. Let's try it out:
$ (while true;do sleep 99999;done)&
[1] 16151
$ kill -- -$(ps --no-headers -o "%r" -p 16151)
[1]+ Terminated ( while true; do
sleep 99999;
done )
To kill the while loop and the sleep using $! you can also use a trap signal handler inside the subshell.
(trap 'kill ${!}; exit' TERM; while true; do sleep 99999 & wait ${!}; done)&
kill -TERM ${!}

Kill bash script foreground children when a signal comes

I am wrapping a fastcgi app in a bash script like this:
#!/bin/bash
# stuff
./fastcgi_bin
# stuff
As bash only executes traps for signals when the foreground command ends, I can't just kill -TERM the script's PID, because the fastcgi app will be kept alive.
I've tried sending the binary to the background:
#!/bin/bash
# stuff
./fastcgi_bin &
PID=$!
trap "kill $PID" TERM
# stuff
But if I do it like this, apparently the stdin and stdout aren't properly redirected, because it does not connect with lighttpd's mod_fastcgi; the foreground version does work.
EDIT: I've been looking at the problem and this happens because bash redirects /dev/null to stdin when a program is launched in the background, so any way of avoiding this should solve my problem as well.
Any hint on how to solve this?
There are some options that come to my mind:
When a process is launched from a shell script, both belong to the same process group. Killing the parent process leaves the children alive, so the whole process group should be killed. This can be achieved by passing the negated PGID (Process Group ID) to kill, which is the same as the parent's PID, e.g.: kill -TERM -$PARENT_PID
Do not execute the binary as a child, but replace the script process with exec. You lose the ability to execute stuff afterwards, though, because exec completely replaces the parent process.
Do not kill the shell script process, but the FastCGI binary. Then, in the script, examine the return code and act accordingly, e.g.: ./fastcgi_bin || exit -1 (see the sketch after this list).
Depending on how mod_fastcgi handles worker processes, only the second option might be viable.
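A sketch of that third option (what "act accordingly" looks like here is an assumption):
#!/bin/bash
# stuff
./fastcgi_bin      # runs in the foreground; to stop it, kill the binary itself
status=$?
if [ $status -ne 0 ]; then
    # a status above 128 usually means the binary died from signal (status - 128)
    exit $status
fi
# stuff that should only run after a clean exit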
I have no idea if this is an option for you or not, but since you have a bounty I am assuming you might go for ideas that are outside the box.
Could you rewrite the bash script in Perl? Perl has several methods of managing child processes. You can read perldoc perlipc and more specifics in the core modules IPC::Open2 and IPC::Open3.
I don't know how this will interface with lighttpd etc or if there is more functionality in this approach, but at least it gives you some more flexibility and some more to read in your hunt.
I'm not sure I fully get your point, but here's what I tried and the process seems to be able to manage the trap (call it trap.sh):
#!/bin/bash
trap "echo trap activated" TERM INT
echo begin
time sleep 60
echo end
Start it:
./trap.sh &
And play with it (only one of those commands at once):
kill -9 %1
kill -15 %1
Or start in foreground:
./trap.sh
And interrupt with control-C.
Seems to work for me.
What exactly does not work for you?
I wrote this script just minutes ago to kill a bash script and all of its children...
#!/bin/bash
# This script will kill all the child process ids for a given pid
# based on http://www.unix.com/unix-dummies-questions-answers/5245-script-kill-all-child-process-given-pid.html
ppid=$1
if [ -z "$ppid" ] ; then
    echo "This script kills the process identified by pid, and all of its kids";
    echo "Usage: $0 pid";
    exit;
fi
for i in `ps j | awk '$3 == '$ppid' { print $2 }'`
do
    $0 $i        # recurse into each child first
    kill -9 $i
done
Make sure the script is executable, or you will get an error on the $0 $i
You can override the implicit </dev/null for a background process by redirecting stdin yourself, for example:
sh -c 'exec 3<&0; { read x; echo "[$x]"; } <&3 3<&- & exec 3<&-; wait'
Try keeping the original stdin using ./fastcgi_bin 0<&0 &:
#!/bin/bash
# stuff
./fastcgi_bin 0<&0 &
PID=$!
trap "kill $PID" TERM
# stuff
# test
#sh -c 'sleep 10 & lsof -p ${!}'
#sh -c 'sleep 10 0<&0 & lsof -p ${!}'
You can do that with a coprocess.
Edit: well, coprocesses are background processes that can have stdin and stdout open (because bash prepares fifos for them). But you still need to read/write to those fifos, and the only useful primitive for that is bash's read (possibly with a timeout or a file descriptor); nothing robust enough for a cgi. So on second thought, my advice would be not to do this thing in bash. Doing the extra work in the fastcgi, or in an http wrapper like WSGI, would be more convenient.
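For reference, the basic coproc plumbing looks like this (a minimal sketch, bash 4+; it only demonstrates the piped stdin/stdout, not a real FastCGI exchange):
#!/bin/bash
coproc CAT { cat; }            # background process; bash wires pipes to its stdin/stdout
echo "hello" >&"${CAT[1]}"     # write to the coprocess's stdin
read -r line <&"${CAT[0]}"     # read from its stdout
echo "got: $line"
kill "$CAT_PID"                # the coprocess PID is stored in CAT_PID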
