Prompt not printed after redirecting bash script output to syslog - linux

I found an article that explains how to redirect the output of a bash script to syslog. This is exactly what I needed, but there is a problem.
#!/bin/bash
# Don't ignore any error; return as soon as the first error occurs.
exec 1> >(logger -s -t $(basename $0)) 2>&1
set -e
# a list of command(s) that can fail:
chown -R user1:user1 /tmp/myappData/*
chown -R user1:user1 /tmp/myappTmp/*
chown -R user1:user1 /tmp/myappLog/*
#...
exit 0
When I execute the above script and an error occurs, I see that sometimes the prompt doesn't return after the script has finished. I can't figure out why this is happening. The prompt doesn't come back unless I hit Enter.
I am concerned that if an app uses this script, it may not get the proper exit code back.
If I comment out "set -e", then the prompt always returns properly after the script has executed.
So my question is: what is the proper way to set up a script so that it exits on error and logs the corresponding message to syslog?
Thank you for your help and suggestions!

The problem here is that the logger pipeline is still running after your script exits, so some of the last content to be logged prints after the parent shell has emitted its prompt. If you scroll up, you'll find the prompt hidden somewhere in that earlier output.
If you have a very, very new bash, you can collect the PID of the process substitution, and wait for it later.
exec {orig_out}>&1 {orig_err}>&2 1> >(logger -s -t "${0##*/}") 2>&1; logger_pid=$!
[[ $logger_pid ]] || { echo "ERROR: Needs a newer bash" >&2; exit 1; }
cleanup() {
exec >&$orig_out 2>&$orig_err
wait "$logger_pid"
}
trap cleanup EXIT
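Putting the two pieces together with the script from the question, a minimal sketch could look like this (it assumes a bash new enough to set $! after a process substitution, as described above):
#!/bin/bash
set -e
# Save the original stdout/stderr, then send everything to syslog via logger.
exec {orig_out}>&1 {orig_err}>&2 1> >(logger -s -t "${0##*/}") 2>&1
logger_pid=$!
[[ $logger_pid ]] || { echo "ERROR: needs a newer bash" >&2; exit 1; }
cleanup() {
    # Restore the original descriptors so logger sees EOF, then wait for it,
    # so the last lines are logged before the script returns to the prompt.
    exec >&"$orig_out" 2>&"$orig_err"
    wait "$logger_pid"
}
trap cleanup EXIT
# Commands that can fail (from the question):
chown -R user1:user1 /tmp/myappData/*
chown -R user1:user1 /tmp/myappTmp/*
chown -R user1:user1 /tmp/myappLog/*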
With an older bash, you can consider other tricks. For example, on Linux, you can use the flock command to try to grab exclusive access to a lockfile before exiting, after ensuring that the lock is held for as long as the logger is running:
log_lock=$(mktemp "${TMPDIR:-/tmp}/logger.XXXXXX")
exec > >(flock -x "$log_lock" logger -s -t "${0##*/}") 2>&1
cleanup() {
exec >/dev/tty 2>&1 || exec >/dev/null 2>&1
flock -x "$log_lock" true
}
trap cleanup EXIT

Related

Is there a way in the shell script that if [ <script exits with non-0 value> ] then; do <something>

In the shell script, I want to do the following:
if the shell script fails (exits with a non-zero value), then before exiting the process, do something.
How could I insert such an if statement block into my shell script?
Is that feasible?
For example,
set -e
echo $password > confidential.txt
rm <file-that-does-not-exist>
rm confidential.txt
I want to make sure that confidential.txt is removed in any case.
Use the trap command:
trap 'if [ $? -ne 0 ]; then echo failed; fi' EXIT
The EXIT trap is run when the script exits, and $? contains the status of the last command before it exited.
Note that a shell script's exit status is the status of the last command that it executed. So in your script, it will be the status of
rm confidential.txt
not the error from
rm filethatdoesnotexist
Unless you use set -e in the script, which makes it exit as soon as any command gets an error.
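For example, combining set -e with an EXIT trap (a sketch based on the script in the question; the non-existent file name is a placeholder):
#!/bin/bash
set -e
# The trap runs on any exit, successful or not, so the secret never lingers.
trap 'rm -f confidential.txt' EXIT
echo "$password" > confidential.txt
rm file-that-does-not-exist   # fails; set -e exits here, but the trap still removes the file
rm confidential.txt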
Use trap with the EXIT pseudo signal:
remove_secret () {
rm -f /path/to/confidential.txt
}
trap remove_secret EXIT
You probably don't want the file to remain even if the script exits with 0, and the EXIT trap runs regardless of the exit code.
Note that without set -e, rm on a non-existent file doesn't stop the script.
Assuming you're on Linux (or another operating system with /proc/*/fd), you have an even better option: Delete confidential.txt before putting the password into it at all.
That can look something like the following:
exec 3<>confidential.txt
rm -f -- confidential.txt
printf '%s\n' "$password" >&3
...and then, to read from that deleted file:
cat "/proc/$$/fd/3" ## where $$ is the PID of the shell that ran the exec command above
Because the file is already deleted, it's guaranteed to be eligible for garbage collection by your filesystem the moment your script (or the last program it started inheriting its file descriptors) exits or is killed, even if it's killed in a way that doesn't permit traps or signal processing to take place.
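Put together as a single runnable sketch (the password value here is just a placeholder for illustration):
#!/bin/bash
password='example-secret'        # placeholder value
exec 3<>confidential.txt         # create the file and open it read/write on FD 3
rm -f -- confidential.txt        # unlink it immediately; the open FD keeps the data alive
printf '%s\n' "$password" >&3    # write the secret into the already-deleted file
# Anything that needs the secret can read it back through /proc:
cat "/proc/$$/fd/3"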

how to daemonize a script

I am trying to use daemon on Ubuntu, but I am not sure how to use it even after reading the man page.
I have the following testing script foo.sh
#!/bin/bash
while true; do
echo 'hi' >> ~/hihihi
sleep 10
done
Then I tried this command but nothing happened:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- foo.sh
The file hihihi was not updated, and I found this in the errlog:
20161221 12:12:36 foo: client (pid 176193) exited with 1 status
How could I use the daemon command properly?
AFAIK, most daemon or daemonize programs change the current directory to / (the root directory) as part of the daemonization process. That means that you must give the full path of the command:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- /path/to/foo.sh
If that still does not work, you could try specifying a shell:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- /bin/bash -c /path/to/foo.sh
It is not necessary to use the daemon command. You can daemonize your script manually. For example:
#!/bin/bash
# First, redirect stdout and stderr to /dev/null
exec >/dev/null
exec 2>/dev/null
# Fork and go to background
(
while true; do
echo 'hi' >> ~/hihihi
sleep 10
done
)&
# Parent process finished but child still working
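A quick way to check that the manual approach works (assuming the script above is saved as foo.sh and made executable):
chmod +x foo.sh
./foo.sh            # returns immediately; the backgrounded loop keeps running
tail -f ~/hihihi    # a new 'hi' line should appear every 10 seconds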

ssh does not return even after execution

The following ssh command does not return to the terminal. It hangs even though the execution has completed. The hang happens after the echo hi command.
ssh user@testserver "echo hello;source .profile;source .bash_profile;/apps/myapp/deploytools/ciInstallAndRun.sh; echo hi"
Output
hello
<output from remote script>
hi
ciInstallAndRun.sh:
echo 'starting'
cd /apps/myapp/current
./tctl kill
cd /apps/myapp
mv myapp_v1.0 "myapp_v1.0_`date '+%Y%m%d%H%M'`"
unzip -o /apps/myapp/myappdist-bin.zip
java -classpath .:/apps/myapp/deploytools/cleanup.jar se.telenor.project.cleanup.Cleanup /apps/myapp myapp_v1.0_ 3
cd /apps/myapp/myapp_v1.0
echo 'Done with deploy'
chmod -R 775 *
echo 'Done'
./tctl start test
Source OS: Red Hat
Dest OS: Solaris 10 8/07
Any idea how to fix this?
Your installation script has spawned a child process.
Add a ps -f or ptree $$ command before echo hi. You'll see a child process or multiple child processes spawned by your install script.
To stop the SSH command from hanging, you need to detach such child process(es) from your terminal's input/output. You can redirect your script's output to a file - both stdout and stderr with > /some/output/file 2>&1 - and also redirect its input with < /dev/null.
Or you can use the nohup command.
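For example (a sketch; the log path is just a placeholder):
# inside ciInstallAndRun.sh, detach the long-running child from the terminal:
nohup ./tctl start test > /tmp/tctl.log 2>&1 < /dev/null &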
You haven't provided an MCVE, as others have noted, but this is likely the problem command in your install script, since your question implies that you see the expected output from your install script:
./tctl start test
You probably would do better to replace it with something like:
./tctl start test </dev/null >/some/log/file/path.log 2>&1

Redirect stdout using exec in subscript

I have one main script which is started as a service.
I can't modify this main script, because it is often updated.
This main script starts a program which echoes all of its log output to stdout.
So I can't see any log from this program.
But at the beginning this main script calls a hook script, which I can modify.
If I redirect stdout to a file in this hook script, it works for that script, but not for the main script.
Is it possible to change the stdout for the whole process?
main (enigma2.sh):
# hook to execute scripts always before enigma2 start
if [ -x enigma2_pre_start.sh ]; then
enigma2_pre_start.sh
fi
...
#this logs to stdout
/usr/bin/enigma2
...
hook (enigma2_pre_start.sh):
exec > /tmp/`date +"%s"`.log
exec 2> /tmp/`date +"%s"`_error.log
Edit:
Is it possible to attach a tee (or similar) to the main process after it has started?
I know the main script is only run once, so I can get the process id with ps.
You have to source enigma2_pre_start.sh instead of executing it, so that the exec commands run in the same process whose file descriptors you want to change.
if [ -x enigma2_pre_start.sh ]; then
. enigma2_pre_start.sh
fi
Otherwise, you are redirecting the standard output and error of the process which executes the hook script, which exits as soon as the script completes.
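A small self-contained sketch of the difference (not the enigma2 script itself; here hook.sh contains only the exec line):
#!/bin/bash
# hook.sh contains a single line:  exec > /tmp/hook.log 2>&1
./hook.sh                      # executed: the redirection only affects the child shell
echo "still printed to the terminal"
. ./hook.sh                    # sourced: the exec runs in this shell
echo "this line ends up in /tmp/hook.log"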
Solution based on the comment from "Zaboj Campula":
if /bin/grep -q "^[[:space:]]/usr/bin/enigma2_pre_start.sh$" /var/bin/enigma2.sh
then
echo Unpatched > /tmp/enigma.sh
/bin/sed -e 's/^\t\(\/usr\/bin\/enigma2_pre_start.sh\)$/\t\. \1/g' /var/bin/enigma2.sh -i
pid=`/bin/ps -o ppid= $$` && /bin/kill $pid
fi

Bash not trapping interrupts during rsync/subshell exec statements

Context:
I have a bash script that contains a subshell and a trap for the EXIT pseudosignal, and it's not properly trapping interrupts during an rsync. Here's an example:
#!/bin/bash
logfile=/path/to/file;
directory1=/path/to/dir
directory2=/path/to/dir
cleanup () {
echo "Cleaning up!"
#do stuff
trap - EXIT
}
trap '{
(cleanup;) | 2>&1 tee -a $logfile
}' EXIT
(
#main script logic, including the following lines:
(exec sleep 10;);
(exec rsync --progress -av --delete $directory1 /var/tmp/$directory2;);
) | 2>&1 tee -a $logfile
trap - EXIT #just in case cleanup isn't called for some reason
The idea of the script is this: most of the important logic runs in a subshell which is piped through tee and to a logfile, so I don't have to tee every single line of the main logic to get it all logged. Whenever the subshell ends, or the script is stopped for any reason (the EXIT pseudosignal should capture all of these cases), the trap will intercept it and run the cleanup() function, and then remove the trap. The rsync and sleep commands (the sleep is just an example) are run through exec to prevent the creation of zombie processes if I kill the parent script while they're running, and each potentially-long-running command is wrapped in its own subshell so that when exec finishes, it won't terminate the whole script.
The problem:
If I interrupt the script (via kill or CTRL+C) during the exec/subshell-wrapped sleep command, the trap works properly, and I see "Cleaning up!" echoed and logged. If I interrupt the script during the rsync command, I see rsync end and write rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(544) [sender=3.0.6] to the screen, and then the script just dies; no cleanup, no trapping. Why doesn't interrupting/killing rsync trigger the trap?
I've tried using the --no-detach switch with rsync, but it didn't change anything.
I have bash 4.1.2, rsync 3.0.6, centOS 6.2.
How about just redirecting all the output from point X on to tee, without having to repeat it everywhere and mess with all the subshells and execs... (hope I didn't miss something)
#!/bin/bash
logfile=/path/to/file;
directory1=/path/to/dir
directory2=/path/to/dir
exec > >(exec tee -a $logfile) 2>&1
cleanup () {
echo "Cleaning up!"
#do stuff
trap - EXIT
}
trap cleanup EXIT
sleep 10
rsync --progress -av --delete $directory1 /var/tmp/$directory2
In addition to set -e, I think you want set -E:
If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases.
Alternatively, instead of wrapping your commands in subshells, use curly braces, which will still give you the ability to redirect command output but will execute the commands in the current shell.
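For example, the grouped commands from the question could use braces instead of a subshell (a sketch using the variables from the question; the process substitution keeps the tee logging while the commands run in the current shell):
{
    sleep 10
    rsync --progress -av --delete "$directory1" "/var/tmp/$directory2"
} > >(tee -a "$logfile") 2>&1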
The interrupt will be properly caught if you add INT to the trap:
trap '{
(cleanup;) | 2>&1 tee -a $logfile
}' EXIT INT
Bash is trapping interrupts correctly. However, this does not answer the question of why the script runs the EXIT trap when sleep is interrupted but not when rsync is; it just makes the script work as it is supposed to. Hope this helps.
Your shell might be configured to exit on error:
bash # enter subshell
set -e
trap "echo woah" EXIT
sleep 4
If you interrupt sleep (^C) then the subshell will exit due to set -e and print woah in the process.
Also, slightly unrelated: your trap - EXIT is in a subshell (explicitly), so it won't have an effect after the cleanup function returns.
It's pretty clear from experimentation that rsync behaves like other tools such as ping and does not inherit signals from the calling Bash parent.
So you have to get a little creative with this and do something like the following:
$ cat rsync.bash
#!/bin/sh
set -m
trap '' SIGINT SIGTERM EXIT
rsync -avz LargeTestFile.500M root@host.mydom.com:/tmp/. &
wait
echo FIN
Now when I run it:
$ ./rsync.bash
X11 forwarding request failed
building file list ... done
LargeTestFile.500M
^C^C^C^C^C^C^C^C^C^C
sent 509984 bytes received 42 bytes 92732.00 bytes/sec
total size is 524288000 speedup is 1027.96
FIN
And we can see the file did fully transfer:
$ ll -h | grep Large
-rw-------. 1 501 games 500M Jul 9 21:44 LargeTestFile.500M
How it works
The trick here is that we're telling Bash via set -m to enable job control, so the backgrounded rsync runs in its own process group and doesn't receive the terminal's interrupt. We then background the rsync and run a wait command, which waits for that last background job, rsync, until it's complete.
We then guard the entire script with the trap '' SIGINT SIGTERM EXIT.
References
https://access.redhat.com/solutions/360713
https://access.redhat.com/solutions/1539283
