Don't show the output of kill command in a Linux bash script [duplicate] - linux

How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?

In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. (Learn more about $! here.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
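Putting it together, here is a minimal self-contained sketch (the sleep is just a placeholder for a real background command):
#!/bin/bash
sleep 60 &            # some background job; $! records its pid
kill $!               # send the default SIGTERM
wait $! 2>/dev/null   # reap it; the shell reports the job's death during wait, so this hides the message
echo "killed quietly"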
I was led here from bash: silently kill background function process.

The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies to background jobs, and only in interactive shells, not scripts.
See notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make the redirection temporary by running the script in a subshell, which leaves the original environment alone:
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error by redirecting a new file descriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first approach is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script, if the script itself alters file descriptors.
EDIT:
For a more appropriate answer, see the one given by Mark Edgar.

Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798

Maybe detach the process from the current shell process by calling disown?
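A minimal sketch of that approach (the sleep is just a placeholder):
sleep 100 &
disown $!   # remove the job from the shell's job table
kill $!     # the shell no longer tracks the job, so no message is printed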

The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
trap 'exit 0' TERM ## here is the key
while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # give the trap time to be installed
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # give the kill time to take effect
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'

Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in parentheses) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back into the current shell if you want to check whether it has terminated or evaluate its return code.
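One way to get that pid back out of the subshell is command substitution (a sketch; the redirections keep the substitution from hanging while the background command holds its stdout open):
pid=$( set +m; sleep 30 >/dev/null 2>&1 & echo $! )
kill "$pid"   # still no job-end message, and the pid is available for checks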

This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only be killed silently by sending SIGINT (what ctrl-c sends) instead of the default SIGTERM.

disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt

I had success with adding 'jobs 2>&1 >/dev/null' to the script; I'm not certain it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
echo Random is $line the last jobid is $(jobs -lp)
jobs 2>&1 >/dev/null
sleep 3
done

Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'

I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:
function killCmd() {
kill $1
}
killCmd $somePID &

Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal.
e.g.:
{ kill -9 $PID; } 2>/dev/null

Related

suspend a shell command without pid

I need something like $command & stop. This should execute a command and suspend it. The application later resumes the command for complete results.
I understand that a job can be suspended with a stop signal to the corresponding pid.
$ kill -SIGSTOP 12753
When we execute a command, we barely know its pid, and an extra command is needed to fetch the pid and act on it. I want to avoid that extra command and the time interval it takes.
Basically, the application is for measuring network performance: it triggers all the commands and puts them in halt mode, and the halted commands are resumed as per the kind of traffic needed.
The process ID of the most recently started background command is available in the shell parameter $!:
$ command & kill -SIGSTOP $!
(Check the documentation for your shell's implementation of kill for the correct format.)
Try killall with the --signal option where you can specify the name of the process.
linux:~ # killall
Usage: killall [OPTION]... [--] NAME...
killall -l, --list
killall -V, --version
-e,--exact require exact match for very long names
-I,--ignore-case case insensitive process name match
-g,--process-group kill process group instead of process
-i,--interactive ask for confirmation before killing
-l,--list list all known signal names
-q,--quiet don't print complaints
-r,--regexp interpret NAME as an extended regular expression
-s,--signal SIGNAL send this signal instead of SIGTERM
-u,--user USER kill only process(es) running as USER
-v,--verbose report if the signal was successfully sent
-V,--version display version information
-w,--wait wait for processes to die
Verified by starting md5sum in a shell session:
linux$ md5sum
and in another session, ran:
killall -s SIGSTOP md5sum
yielding the following in the md5sum session:
[1]+ Stopped md5sum
Kindly confirm: do you want to halt your command, or run it in the background (append '&' to your command)?
If your application is expected to start the halted command later, then why don't you start the command (to be halted) in that application itself?
This helps:
sleep 5 & kill -SIGSTOP $!
In the above, sleep (a demo command) is executed in the background for 5 seconds.
Then kill -SIGSTOP stops it, using the PID obtained from $!.
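To resume the stopped command later, send SIGCONT to the same pid:
kill -SIGCONT $!   # resume the stopped sleep
wait $!            # then wait for it to finish normally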
Demo and kludge using timeout (for some reason timeout interprets a '0s' duration as "run forever"), to stop yes before it outputs anything:
# run 'yes' command, let it print 5 numbered lines, but stop it immediately
timeout -s SIGSTOP .000000001s yes | head -n 5 | cat -n
Output (to STDERR):
[1]+ Stopped timeout -s SIGSTOP .000000001s yes | head -n 5 | cat -n
Now restart it:
fg > /dev/null
Output:
1 y
2 y
3 y
4 y
5 y
A technique for users stuck with coreutils v8.12 or earlier (pre-2011), in which timeout lacks sub-second intervals; it requires waiting a second.
Wrap the command string in a shell invocation, preceded by a 1s wait -- so timeout waits 1 second, and simultaneously, so does the command string. Total wait time 1 second:
timeout -s SIGSTOP 1s sh -c "sleep 1s; yes | head -n 5 | cat -n"
Output is the same as before, fg is the same too.
A finesse: if waiting even 1 second before the command runs is too much, it can be run in the background like so:
timeout -s SIGSTOP 1s sh -c "sleep 1s; yes | head -n 5 | cat -n" &
Output (process number will vary):
[1] 14601
Then after a second, the output will be the same as the previous two timeout examples.
Assuming you are always using the same command: launch it in one terminal, then open a new terminal and find the command name in the ps output:
ps -ely
After retrieving the command name:
command & kill -SIGSTOP $(pidof command_name)
pidof needs the exact command name to be able to find the pid.
then to resume it:
kill -SIGCONT $(pidof command_name)
If the command name is not constant but there is a pattern, you can create a script like this (call it pof.sh):
ps -ely | grep $1 | tr -s ' ' | cut -d" " -f3
command & kill -SIGSTOP $(bash pof.sh pattern)
One drawback of this script is that if many lines match the pattern, it will return all of their pids. If this is a problem, you can put the output in an array and go on from there, as sketched below.
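A sketch of that array variant (mapfile is a bash 4 builtin; pof.sh is the helper script above):
mapfile -t pids < <(bash pof.sh pattern)   # one pid per array element
kill -SIGSTOP "${pids[@]}"                 # stop every matching process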

Bash trap not killing children, causes unexpected ctrl-c behavior

edit
For future readers. The root of this problem really came down to running the function in an interactive shell vs. putting it in a separate script.
Also, there are many things that could be improved in the code I originally posted. Please see comments for things that could/should have been done better.
/edit
I have a bash function intended to rerun a process in the background when files in a directory change (think like Grunt, but for general purposes). The script functions as desired while running:
The subprocess is correctly started (including any children)
On file change, the sub is killed (including children) and started again
However, on exit (ctrl-c) none of the processes are killed. Additionally, pressing ctrl-c a second time will kill the current terminal session. I'm assuming this is a problem with my trap, but have been unable to identify a reason for the issue.
Here is the code of rerun.sh
#!/bin/bash
# rerun.sh
_kill_children() {
isTop=$1
curPid=$2
# Get pids of children
children=`ps -o pid --no-headers --ppid ${curPid}`
for child in $children
do
# Call this function to get grandchildren as well
_kill_children 0 $child
done
# Parent calls this with 1, all other with 0 so only children are killed
if [[ $isTop -eq 0 ]]; then
kill -9 $curPid 2> /dev/null
fi
}
rerun() {
trap " _kill_children 1 $$; exit 0" SIGINT SIGTERM
FORMAT=$(echo -e "\033[1;33m%w%f\033[0m written")
#Command that should be repeatedly run is passed as args
args=$@
$args &
#When a file changes in the directory, rerun the process
while inotifywait -qre close_write --format "$FORMAT" .
do
#Kill current bg proc and it's children
_kill_children 1 $$
$args & #Rerun the proc
done
}
#This is sourced in my bash profile so I can run it any time
To test this, create a pair of executable files parent.sh and child.sh as follows:
#!/bin/bash
#parent.sh
./child.sh
#!/bin/bash
#child.sh
sleep 86400
Then source the rerun.sh file and run rerun ./parent.sh. In another terminal window I watch "ps -ef | grep pts/4" to see all processes for the rerun (in this example on pts/4). Touching a file in the directory triggers a restart of parent.sh and children. [ctrl-c] exits, but leaves the pids running. [ctrl-c] again kills bash and all other processes on pts/4.
Desired behavior: on [ctrl-c], kill children and exit to shell normally. Help?
--
Code sources:
Inotify idea from: https://exyr.org/2011/inotify-run/
Kill children from: http://riccomini.name/posts/linux/2012-09-25-kill-subprocesses-linux-bash/
This isn't a good practice to follow in the first place. Track your children explicitly:
children=( )
foo & children+=( "$!" )
...then, you can kill or wait for them explicitly, referring to "${children[@]}" for the list. If you want to get grandchildren as well, this is a good use case for fuser -k and a lockfile:
lockfile_name="$(mktemp /tmp/lockfile.XXXXXX)" # change appropriately
trap 'rm -f "$lockfile_name"' 0
exec 3>"$lockfile_name" # open lockfile on FD 3
kill_children() {
# close our own handle on the lockfile
exec 3>&-
# kill everything that still has it open (our children and their children)
fuser -k "$lockfile_name" >/dev/null
# ...then open it again.
exec 3>"$lockfile_name"
}
rerun() {
trap 'kill_children; exit 0' SIGINT SIGTERM
printf -v format '%b' "\033[1;33m%w%f\033[0m written"
"$#" &
#When a file changes in the directory, rerun the process
while inotifywait -qre close_write --format "$format" .; do
kill_children
"$#" &
done
}

How to Kill Current Command When Bash Script is Killed

I currently have a script that looks like this.
# code
mplayer "$vid"
# more code
The problem is that if this script is killed, the mplayer process lives on. I'm wondering how I could make it so that killing the script would kill mplayer as well.
I can't use exec because I need to run commands after mplayer.
exec mplayer "$vid"
The only possible solution I can think of is to spawn it in the background and wait until it finishes manually. That way I can get its PID and kill it when the script gets killed, which is not exactly elegant. I was wondering what the "proper" or best way of doing this is.
I was able to test the prctl idea I posted about in a comment and it seems to work. You will need to compile this:
#include "sys/prctl.h"
#include "stdlib.h"
#include "string.h"
#include "unistd.h"
int main(int argc, char ** argv){
prctl(PR_SET_PDEATHSIG, atoi(argv[1]),0,0,0);
char * argv0 = strdup(argv[2]);
char * slashptr = strrchr(argv0, '/');
if(slashptr){
argv0 = slashptr + 1;
}
return execvp(argv0, &(argv[2]));
}
Let's say you have compiled the above to an executable named "prun" and it is in your path. Let's say your script is called "foo.sh" and it is also in your path. Make a wrapper script that calls
prun 15 foo.sh
foo.sh should get SIGTERM when the wrapper script is terminated for any reason, even SIGKILL.
Note: this is a Linux-only solution, and the C source code presented does no detailed checking of its arguments.
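For completeness, the wrapper script itself can be as small as this (a sketch; prun and foo.sh are the names assumed above):
#!/bin/bash
# foo.sh receives signal 15 (SIGTERM) when this wrapper dies, even if the wrapper is killed with SIGKILL
prun 15 foo.sh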
Thanks to Mux for the lead. It appears that there is no way to do this in bash except for manually catching signals. Here is a final working (overly commented) version.
trap : SIGTERM SIGINT # Trap these two (killing) signals. These will cause wait
# to return a value greater than 128 immediately after received.
mplayer "$vid" & # Start in background (PID gets put in `$!`)
pid=$!
wait $pid # Wait for mplayer to finish.
[ $? -gt 128 ] && { kill $pid ; exit 128; } ; # If a signal was received
# kill mplayer and exit.
References:
- traps: http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_12_02.html
(Updated) I think I understand what you are looking for now:
You can accomplish this by spawning a new terminal to run your script:
gnome-terminal -x /path_to_dir_of_your_script/your_script_name
(or use xterm -e or konsole -e instead of gnome-terminal -x, depending on what system you are on)
So now whenever your script ends / exits (I assume you have exit 0 or exit 1 in certain parts of the script), the newly spawned terminal will also exit since the script is finished - this will in turn also kill any applications spawned under that new terminal.
For example, I just tested the above command with this script:
#!/bin/bash
gedit &
pid=$!
echo "$pid"
sleep 5
exit 0
As you can see, there are no explicit calls to kill the new gedit process, but the application (gedit) closes as soon as the script exits anyway.
(Previous answer: alternatively, if you were simply asking about how to kill a process) Here's a short example of how you can accomplish that with kill.
#!/bin/bash
gedit &
pid=$!
echo "$pid"
sleep 5
kill -s SIGKILL $pid
Unless I misunderstood your question, you can get the PID of the spawned process right away instead of waiting until it finishes.
Well, you can simply kill the process group instead; this way the whole process tree will be killed. First find out the group id:
ps x -o "%p %r %c" | grep <name>
And then use kill like so:
kill -TERM -<gid>
Note the dash before the process group id. Or a one-liner:
kill -TERM -$(pgrep <name>)
Perhaps use command substitution to run mplayer "$vid" in a subshell:
$(mplayer "$vid")
I tested it this way:
test.sh:
#!/bin/sh
vid="..."
$(mplayer "$vid")
% test.sh
In a separate terminal:
% pkill test.sh
In the original terminal, mplayer stops, printing to stderr
Terminated
MPlayer interrupted by signal 13 in module: av_sync

Bash not trapping interrupts during rsync/subshell exec statements

Context:
I have a bash script that contains a subshell and a trap for the EXIT pseudosignal, and it's not properly trapping interrupts during an rsync. Here's an example:
#!/bin/bash
logfile=/path/to/file;
directory1=/path/to/dir
directory2=/path/to/dir
cleanup () {
echo "Cleaning up!"
#do stuff
trap - EXIT
}
trap '{
(cleanup;) | 2>&1 tee -a $logfile
}' EXIT
(
#main script logic, including the following lines:
(exec sleep 10;);
(exec rsync --progress -av --delete $directory1 /var/tmp/$directory2;);
) | 2>&1 tee -a $logfile
trap - EXIT #just in case cleanup isn't called for some reason
The idea of the script is this: most of the important logic runs in a subshell which is piped through tee and to a logfile, so I don't have to tee every single line of the main logic to get it all logged. Whenever the subshell ends, or the script is stopped for any reason (the EXIT pseudosignal should capture all of these cases), the trap will intercept it and run the cleanup() function, and then remove the trap. The rsync and sleep commands (the sleep is just an example) are run through exec to prevent the creation of zombie processes if I kill the parent script while they're running, and each potentially-long-running command is wrapped in its own subshell so that when exec finishes, it won't terminate the whole script.
The problem:
If I interrupt the script (via kill or CTRL+C) during the exec/subshell wrapped sleep command, the trap works properly, and I see "Cleaning up!" echoed and logged. If I interrupt the script during the rsync command, I see rsync end, and write rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(544) [sender=3.0.6] to the screen, and then the script just dies; no cleanup, no trapping. Why doesn't an interrupting/killing of rsync trigger the trap?
I've tried using the --no-detach switch with rsync, but it didn't change anything.
I have bash 4.1.2, rsync 3.0.6, centOS 6.2.
How about just having all the output from point X be redirected to tee without having to repeat it everywhere and mess with all the sub-shells and execs ... (hope I didn't miss something)
#!/bin/bash
logfile=/path/to/file;
directory1=/path/to/dir
directory2=/path/to/dir
exec > >(exec tee -a $logfile) 2>&1
cleanup () {
echo "Cleaning up!"
#do stuff
trap - EXIT
}
trap cleanup EXIT
sleep 10
rsync --progress -av --delete $directory1 /var/tmp/$directory2
In addition to set -e, I think you want set -E:
If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a sub‐shell environment. The ERR trap is normally not inherited in such cases.
Alternatively, instead of wrapping your commands in subshells, use curly braces, which will still give you the ability to redirect command output but will execute the commands in the current shell.
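For example, the main logic from the question could be grouped like this (a sketch reusing the question's own commands and variables):
{
sleep 10
rsync --progress -av --delete $directory1 /var/tmp/$directory2
} 2>&1 | tee -a $logfile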
The interrupt will be properly caught if you add INT to the trap:
trap '{
(cleanup;) | 2>&1 tee -a $logfile
}' EXIT INT
Bash is trapping interrupts correctly. However, this does not answer the question of why the script's exit trap fires when sleep is interrupted but not when rsync is; it just makes the script work as it is supposed to. Hope this helps.
Your shell might be configured to exit on error:
bash # enter subshell
set -e
trap "echo woah" EXIT
sleep 4
If you interrupt sleep (^C) then the subshell will exit due to set -e and print woah in the process.
Also, slightly unrelated: your trap - EXIT is in a subshell (explicitly), so it won't have an effect after the cleanup function returns
It's pretty clear from experimentation that rsync behaves like other tools such as ping and does not inherit signals from the calling Bash parent.
So you have to get a little creative with this and do something like the following:
$ cat rsync.bash
#!/bin/sh
set -m
trap '' SIGINT SIGTERM EXIT
rsync -avz LargeTestFile.500M root@host.mydom.com:/tmp/. &
wait
echo FIN
Now when I run it:
$ ./rsync.bash
X11 forwarding request failed
building file list ... done
LargeTestFile.500M
^C^C^C^C^C^C^C^C^C^C
sent 509984 bytes received 42 bytes 92732.00 bytes/sec
total size is 524288000 speedup is 1027.96
FIN
And we can see the file did fully transfer:
$ ll -h | grep Large
-rw-------. 1 501 games 500M Jul 9 21:44 LargeTestFile.500M
How it works
The trick here is that set -m turns on job control inside the script (scripts normally run with it off), which puts the backgrounded rsync in its own process group, so the terminal's interrupt doesn't reach it. We then background the rsync and run a wait command, which waits on the last-run command, rsync, until it's complete.
We then guard the entire script with the trap '' SIGINT SIGTERM EXIT.
References
https://access.redhat.com/solutions/360713
https://access.redhat.com/solutions/1539283

Kill bash script foreground children when a signal comes

I am wrapping a fastcgi app in a bash script like this:
#!/bin/bash
# stuff
./fastcgi_bin
# stuff
As bash only executes traps for signals once the foreground command ends, I can't just kill -TERM scriptpid, because the fastcgi app will be kept alive.
I've tried sending the binary to the background:
#!/bin/bash
# stuff
./fastcgi_bin &
PID=$!
trap "kill $PID" TERM
# stuff
But if I do it like this, apparently stdin and stdout aren't properly redirected, because it does not connect with lighttpd's mod_fastcgi; the foreground version does work.
EDIT: I've been looking at the problem and this happens because bash redirects /dev/null to stdin when a program is launched in the background, so any way of avoiding this should solve my problem as well.
Any hint on how to solve this?
There are some options that come to my mind:
When a process is launched from a shell script, both belong to the same process group. Killing the parent process leaves the children alive, so the whole process group should be killed. This can be achieved by passing the negated PGID (Process Group ID) to kill, which is the same as the parent's PID, e.g.: kill -TERM -$PARENT_PID (see the sketch after these options).
Do not execute the binary as a child, but replace the script process with it using exec. You lose the ability to execute stuff afterwards though, because exec completely replaces the parent process.
Do not kill the shell script process, but the FastCGI binary. Then, in the script, examine the return code and act accordingly, e.g.: ./fastcgi_bin || exit -1
Depending on how mod_fastcgi handles worker processes, only the second option might be viable.
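A sketch of the first option applied inside the wrapper itself (an assumption: the script runs as its own process-group leader, e.g. it was spawned directly by the web server, so $$ doubles as the PGID):
#!/bin/bash
# reset the trap before signalling, so the group-wide TERM doesn't re-enter it
trap 'trap - TERM; kill -TERM -$$' TERM
./fastcgi_bin &
wait $!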
I have no idea if this is an option for you or not, but since you have a bounty I am assuming you might go for ideas that are outside the box.
Could you rewrite the bash script in Perl? Perl has several methods of managing child processes. You can read perldoc perlipc and more specifics in the core modules IPC::Open2 and IPC::Open3.
I don't know how this will interface with lighttpd etc or if there is more functionality in this approach, but at least it gives you some more flexibility and some more to read in your hunt.
I'm not sure I fully get your point, but here's what I tried and the process seems to be able to manage the trap (call it trap.sh):
#!/bin/bash
trap "echo trap activated" TERM INT
echo begin
time sleep 60
echo end
Start it:
./trap.sh &
And play with it (only one of those commands at once):
kill -9 %1
kill -15 %1
Or start in foreground:
./trap.sh
And interrupt with control-C.
Seems to work for me.
What exactly does not work for you?
I wrote this script just minutes ago to kill a bash script and all of its children...
#!/bin/bash
# This script will kill all the child process id for a given pid
# based on http://www.unix.com/unix-dummies-questions-answers/5245-script-kill-all-child-process-given-pid.html
ppid=$1
if [ -z "$ppid" ] ; then
echo "This script kills the process identified by pid, and all of its kids";
echo "Usage: $0 pid";
exit;
fi
for i in `ps j | awk '$3 == '$ppid' { print $2 }'`
do
$0 $i
kill -9 $i
done
Make sure the script is executable, or you will get an error on the $0 $i
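Example invocation (killtree.sh is a hypothetical name for the script above, and 12345 a placeholder pid):
chmod +x killtree.sh   # the script re-invokes itself via $0, so it must be executable
./killtree.sh 12345    # recursively kill -9 everything spawned under pid 12345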
You can override the implicit </dev/null for a background process by redirecting stdin yourself, for example:
sh -c 'exec 3<&0; { read x; echo "[$x]"; } <&3 3<&- & exec 3<&-; wait'
Try keeping the original stdin using ./fastcgi_bin 0<&0 &:
#!/bin/bash
# stuff
./fastcgi_bin 0<&0 &
PID=$!
trap "kill $PID" TERM
# stuff
# test
#sh -c 'sleep 10 & lsof -p ${!}'
#sh -c 'sleep 10 0<&0 & lsof -p ${!}'
You can do that with a coprocess.
Edit: well, coprocesses are background processes that can have stdin and stdout open (because bash prepares fifos for them). But you still need to read/write to those fifos, and the only useful primitive for that is bash's read (possibly with a timeout or a file descriptor); nothing robust enough for a cgi. So on second thought, my advice would be not to do this thing in bash. Doing the extra work in the fastcgi, or in an http wrapper like WSGI, would be more convenient.
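For reference, a minimal coprocess sketch (coproc is a bash 4 builtin; cat stands in for a real child process):
coproc { cat; }                 # bash wires up fifos; the fds land in ${COPROC[0]} and ${COPROC[1]}
echo hello >&"${COPROC[1]}"     # write to the coprocess's stdin
read -r line <&"${COPROC[0]}"   # read back what it echoed
echo "got: $line"
kill "$COPROC_PID"              # tear the coprocess down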
