I'd like to block CTRL-C but it doesn't work as expected.
I was following the answer described [here](https://stackoverflow.com/a/37148777/12512199) but without success.
I must be missing something but can't figure out what. It's as if CTRL-C is intercepted but still propagated:
First I ran the following script and hit CTRL-C; the message was displayed, but the script exited anyway.
echo "
#!/bin/bash
trap 'echo "Ctrl + C happened"' SIGINT
sleep infinity
" > test.sh
chmod +x test.sh
./test.sh
Then I checked whether it would behave differently as PID 1 in a container:
echo "
#!/bin/bash
trap 'echo "Ctrl + C happened"' SIGINT
sleep infinity
" > test.sh
chmod +x test.sh
docker rm -f conttest
docker container create --name conttest -it --entrypoint="bash" ubuntu:20.04 -x -c /test.sh
docker cp test.sh conttest:/test.sh
docker container start --attach -i conttest
But no, it's the same behavior.
I ran those tests on Ubuntu 20.04.
I read https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#trap but still haven't found any clue.
Any idea?
Control+C, or any other key combination mapped to intr in the output of stty -a, sends SIGINT to all processes in the foreground process group. The shell receives it, but so does sleep infinity, which dies, and the shell then exits because it has nothing left to do. If you want your script to run continuously and do something on SIGINT, you have to use an endless loop:
#!/bin/bash
trap 'echo "Ctrl + C happened"' SIGINT
while true
do
sleep infinity
done
If you only want to ignore SIGINT:
#!/bin/bash
trap '' SIGINT
sleep infinity
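To check which key is mapped to intr on your terminal (almost always ^C), you can filter the stty output; a quick check, assuming GNU grep on Linux:
stty -a | grep -o 'intr = [^;]*'
# typical output: intr = ^C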
I found this article, which explains how to redirect the output of a bash script to syslog. This is exactly what I needed, but there is a problem.
#!/bin/bash
exec 1> >(logger -s -t "$(basename "$0")") 2>&1
# Don't ignore any error; return when the first error occurs.
set -e
# a list of command(s) that can fail:
chown -R user1:user1 /tmp/myappData/*
chown -R user1:user1 /tmp/myappTmp/*
chown -R user1:user1 /tmp/myappLog/*
#...
exit 0
When I execute the above script and an error occurs, I see that sometimes the prompt doesn't return after the script finishes. I can't figure out why this is happening. The prompt doesn't return unless I hit Enter.
I am concerned that if an app uses this script, it may not get the proper exit code back.
If I comment out "set -e", then the prompt always returns properly after the script has executed.
So my question is: what is the proper way to set up a script so that it exits on error and logs the corresponding message to syslog?
Thank you for your help and suggestions!
The problem here is that the logger pipeline is still running after your script exits, so some of the last content to be logged prints after the parent shell has emitted its prompt. If you scroll up, you'll find the prompt hidden somewhere in that earlier output.
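You can reproduce the race without logger at all; a minimal sketch, with sleep standing in for a process substitution that is still flushing:
#!/bin/bash
# stdout is handed to a process substitution that outlives the script
exec > >(sleep 1; cat)
echo "this prints after the shell prompt has already returned"
Run it, and the text appears about a second after your prompt comes back.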
If you have a very new bash (4.4 or later), you can collect the PID of the process substitution and wait for it later.
exec {orig_out}>&1 {orig_err}>&2 1> >(logger -s -t "${0##*/}") 2>&1; logger_pid=$!
[[ $logger_pid ]] || { echo "ERROR: Needs a newer bash" >&2; exit 1; }
cleanup() {
  # restore the original stdout/stderr, then wait for logger to flush and exit
  exec >&$orig_out 2>&$orig_err
  wait "$logger_pid"
}
trap cleanup EXIT
With an older bash, you can consider other tricks. For example, on Linux, you can use the flock command to grab exclusive access to a lockfile before exiting, after ensuring that the lock is held for as long as the logger is running:
log_lock=$(mktemp "${TMPDIR:-/tmp}/logger.XXXXXX")
exec > >(flock -x "$log_lock" logger -s -t "${0##*/}") 2>&1
cleanup() {
  # point stdout/stderr back at the terminal, or /dev/null if there is none
  exec >/dev/tty 2>&1 || exec >/dev/null 2>&1
  # taking the lock blocks until logger has exited and released it
  flock -x "$log_lock" true
}
trap cleanup EXIT
I am trying to use daemon on Ubuntu, but I am not sure how to use it even after reading the man page.
I have the following testing script foo.sh
#!/bin/bash
while true; do
echo 'hi' >> ~/hihihi
sleep 10
done
Then I tried this command but nothing happened:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- foo.sh
The file hihihi was not updated, and I found this in the error log (~/daemon.err):
20161221 12:12:36 foo: client (pid 176193) exited with 1 status
How could I use the daemon command properly?
AFAIK, most daemon or daemonize programs change the current directory to / (the filesystem root) as part of the daemonization process. That means you must give the full path of the command:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- /path/to/foo.sh
If it still does not work, you can try specifying a shell explicitly:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- /bin/bash -c /path/to/foo.sh
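If your daemon is raf's daemon(1), the implementation packaged by Ubuntu, you can also use its named-instance options to check whether the client actually started and to stop it again (worth verifying against your man page):
# exit status 0 if the named daemon is running
daemon --name="foo" --running && echo "foo is running"
# stop the named daemon
daemon --name="foo" --stop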
It is not necessary to use the daemon command in bash. You can daemonize your script manually. For example:
#!/bin/bash
# First, detach from the terminal: redirect stdin, stdout and stderr to /dev/null
exec </dev/null >/dev/null 2>&1
# Fork and go to background
(
  while true; do
    echo 'hi' >> ~/hihihi
    sleep 10
  done
) &
# Parent process finishes, but the child keeps working
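Alternatively, if setsid from util-linux is available, you get the same detachment in one line; a minimal sketch, assuming /path/to/foo.sh is your script:
# run foo.sh in its own session, detached from the terminal; survives logout
setsid /path/to/foo.sh </dev/null >/dev/null 2>&1 &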
The following ssh command does not return to the terminal. It hangs even though the execution has completed. The execution hangs after the echo hi command.
ssh user@testserver "echo hello;source .profile;source .bash_profile;/apps/myapp/deploytools/ciInstallAndRun.sh; echo hi"
Output
hello
<output from remote script>
hi
ciInstallAndRun.sh:
echo 'starting'
cd /apps/myapp/current
./tctl kill
cd /apps/myapp
mv myapp_v1.0 "myapp_v1.0_`date '+%Y%m%d%H%M'`"
unzip -o /apps/myapp/myappdist-bin.zip
java -classpath .:/apps/myapp/deploytools/cleanup.jar se.telenor.project.cleanup.Cleanup /apps/myapp myapp_v1.0_ 3
cd /apps/myapp/myapp_v1.0
echo 'Done with deploy'
chmod -R 775 *
echo 'Done'
./tctl start test
Source OS: Red Hat
Destination OS: Solaris 10 8/07
Any idea how to fix this?
Your installation script has spawned a child process.
Add a ps -f or ptree $$ command before echo hi. You'll see a child process or multiple child processes spawned by your install script.
To stop the SSH command from hanging, you need to detach such child process(es) from your terminal's input/output. You can redirect your script's output to a file - both stdout and stderr with > /some/output/file 2>&1, and also redirect its input with < /dev/null.
Or you can use the nohup command.
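For example, the diagnostic step suggested above would look like this (ps -f works on both Linux and Solaris; ptree is Solaris-specific):
ssh user@testserver "echo hello;source .profile;source .bash_profile;/apps/myapp/deploytools/ciInstallAndRun.sh; ps -f; echo hi"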
You haven't provided an MCVE, as others have noted, but this is likely the problem command in your install script, since your question implies that you see the expected output from the install script itself:
./tctl start test
You would probably do better to replace it with something like:
./tctl start test </dev/null >/some/log/file/path.log 2>&1
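Or, using the nohup approach mentioned in the other answer (the log path is illustrative):
nohup ./tctl start test </dev/null >/some/log/file/path.log 2>&1 &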
I'm trying to run a script in a tmux environment on another computer using ssh, but the ssh connection won't terminate until the script has finished. Let me explain this in detail:
This is test_ssh.sh:
#!/bin/bash
name="computername"
ssh $name /bin/bash <<\EOF
cd /scratch
mkdir test
cd test
cp /home/user/test_tmux3.sh .
tmux -c ./test_tmux3.sh &
echo 1 # at this point it waits until test_tmux3.sh is finished, instead of exiting :(
EOF
This is test_tmux3.sh (as a test to see if anything happens):
#!/bin/bash
mkdir 0min
sleep 60
mkdir 1min
sleep 60
mkdir 2min
At the end I would like to loop over multiple computers ($name) to start a script on each of them. The problem I am having right now is that test_ssh.sh waits after the echo 1 and only exits once tmux -c test_tmux3.sh & is finished (after 2 minutes). If I manually hit Ctrl-C, test_ssh.sh stops and tmux -c test_tmux3.sh & continues running on the computer $name (which is what I want). How can I automate that last step and get ssh to exit on its own?
Start the command in a detached tmux session.
#!/bin/bash
name="computername"
ssh $name /bin/bash <<\EOF
mkdir /scratch/test
cd /scratch/test
cp /home/user/test_tmux3.sh .
tmux new-session -d ./test_tmux3.sh
echo 1
EOF
Now, the tmux command will exit as soon as the new session is created and the script is started in that session.
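You can check from a later ssh login that the detached session is still alive, and attach to it if you want to watch the script (the session name may differ; tmux numbers them from 0 by default):
tmux ls           # list sessions; the new one shows up as something like 0: 1 windows ...
tmux attach -t 0  # attach to it; detach again with Ctrl-b d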
Have you tried using the nohup command to tell the process to keep running after you exit?
#!/bin/bash
name="computername"
ssh $name /bin/bash <<\EOF
cd /scratch
mkdir test
cd test
cp /home/user/test_tmux3.sh .
nohup tmux -c ./test_tmux3.sh &
echo 1
EOF
Context:
I have a bash script that contains a subshell and a trap for the EXIT pseudosignal, and it's not properly trapping interrupts during an rsync. Here's an example:
#!/bin/bash
logfile=/path/to/file;
directory1=/path/to/dir
directory2=/path/to/dir
cleanup () {
echo "Cleaning up!"
#do stuff
trap - EXIT
}
trap '{
(cleanup;) | 2>&1 tee -a $logfile
}' EXIT
(
#main script logic, including the following lines:
(exec sleep 10;);
(exec rsync --progress -av --delete $directory1 /var/tmp/$directory2;);
) | 2>&1 tee -a $logfile
trap - EXIT #just in case cleanup isn't called for some reason
The idea of the script is this: most of the important logic runs in a subshell which is piped through tee and to a logfile, so I don't have to tee every single line of the main logic to get it all logged. Whenever the subshell ends, or the script is stopped for any reason (the EXIT pseudosignal should capture all of these cases), the trap will intercept it and run the cleanup() function, and then remove the trap. The rsync and sleep commands (the sleep is just an example) are run through exec to prevent the creation of zombie processes if I kill the parent script while they're running, and each potentially-long-running command is wrapped in its own subshell so that when exec finishes, it won't terminate the whole script.
The problem:
If I interrupt the script (via kill or CTRL+C) during the exec/subshell wrapped sleep command, the trap works properly, and I see "Cleaning up!" echoed and logged. If I interrupt the script during the rsync command, I see rsync end, and write rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(544) [sender=3.0.6] to the screen, and then the script just dies; no cleanup, no trapping. Why doesn't an interrupting/killing of rsync trigger the trap?
I've tried using the --no-detach switch with rsync, but it didn't change anything.
I have bash 4.1.2, rsync 3.0.6, centOS 6.2.
How about just having all the output from a given point onward redirected to tee, without having to repeat it everywhere and mess with all the subshells and execs... (hope I didn't miss something)
#!/bin/bash
logfile=/path/to/file;
directory1=/path/to/dir
directory2=/path/to/dir
exec > >(exec tee -a $logfile) 2>&1
cleanup () {
echo "Cleaning up!"
#do stuff
trap - EXIT
}
trap cleanup EXIT
sleep 10
rsync --progress -av --delete $directory1 /var/tmp/$directory2
In addition to set -e, I think you want set -E:
If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases.
Alternatively, instead of wrapping your commands in subshells, use curly braces, which will still give you the ability to redirect command output but will execute the commands in the current shell, as shown in the sketch below.
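For example, a brace group lets you apply one redirection to several commands while they, and any traps they set, still run in the current shell (note the required space after { and the newline or semicolon before }). A sketch using the variables from your script; it redirects straight to the file, because piping the group through tee would put it back in a subshell:
{
    sleep 10
    rsync --progress -av --delete "$directory1" "/var/tmp/$directory2"
} >>"$logfile" 2>&1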
The interrupt will be properly caught if you add INT to the trap:
trap '{
(cleanup;) | 2>&1 tee -a $logfile
}' EXIT INT
Bash is trapping interrupts correctly. This does not answer why the script's EXIT trap fires when sleep is interrupted but not when rsync is, but it does make the script work as it is supposed to. Hope this helps.
Your shell might be configured to exit on error:
bash # enter subshell
set -e
trap "echo woah" EXIT
sleep 4
If you interrupt sleep (^C) then the subshell will exit due to set -e and print woah in the process.
Also, slightly unrelated: your trap - EXIT runs inside a subshell (the handler explicitly wraps cleanup in one), so it won't have any effect after the cleanup function returns.
It's pretty clear from experimentation that rsync behaves like other tools such as ping and does not inherit signals from the calling Bash parent.
So you have to get a little creative with this and do something like the following:
$ cat rsync.bash
#!/bin/sh
set -m
trap '' SIGINT SIGTERM EXIT
rsync -avz LargeTestFile.500M root@host.mydom.com:/tmp/. &
wait
echo FIN
Now when I run it:
$ ./rsync.bash
X11 forwarding request failed
building file list ... done
LargeTestFile.500M
^C^C^C^C^C^C^C^C^C^C
sent 509984 bytes received 42 bytes 92732.00 bytes/sec
total size is 524288000 speedup is 1027.96
FIN
And we can see the file did fully transfer:
$ ll -h | grep Large
-rw-------. 1 501 games 500M Jul 9 21:44 LargeTestFile.500M
How it works
The trick here is that we tell Bash, via set -m, to enable job control for the script, which puts background jobs into their own process groups so the terminal's interrupt no longer reaches them. We then background the rsync and run a wait command, which waits on the last-run command, rsync, until it completes.
We then guard the entire script with the trap '' SIGINT SIGTERM EXIT.
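The same pattern works with any long-running command; a minimal sketch with sleep standing in for rsync:
#!/bin/bash
set -m                  # enable job control: background jobs get their own process group
trap '' SIGINT SIGTERM  # the script itself ignores interrupts
sleep 30 &              # Ctrl-C at the terminal no longer reaches this job
wait                    # block until the background job completes
echo FIN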
References
https://access.redhat.com/solutions/360713
https://access.redhat.com/solutions/1539283