Force `tee` to run for every command in a shell script? - linux

I would like to have a script wherein all commands are tee'd to a log file.
Right now I am running every command in the script like this:
<command> | tee -a $LOGFILE
Is there a way to force every command in a shell script to pipe to tee?
I cannot force users to add appropriate teeing when running the script, and want to ensure it logs properly even if the calling user doesn't add a logging call of their own.

You can do a wrapper inside your script:
#!/bin/bash
{
echo 'hello'
some_more_commands
echo 'goodbye'
} | tee -a /path/to/logfile
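If you also want standard error in the log, you can redirect it into the pipe as well; the same wrapper with one change:
#!/bin/bash
{
echo 'hello'
some_more_commands
echo 'goodbye'
} 2>&1 | tee -a /path/to/logfile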
Edit:
Here's another way:
#!/bin/bash
exec > >(tee -a /path/to/logfile)
echo 'hello'
some_more_commands
echo 'goodbye'

Why not expose a wrapper that's simply:
/path/to/yourOriginalScript.sh | tee -a $LOGFILE
Your users should not execute (nor even know about) yourOriginalScript.sh.
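For example, the public-facing wrapper could be as small as this (forwarding the arguments with "$@" is an addition to the one-liner above, and the wrapper's name is hypothetical):
#!/bin/bash
# wrapper.sh: the only entry point users see; forwards all arguments to the real script
/path/to/yourOriginalScript.sh "$@" | tee -a "$LOGFILE"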

Assuming that your script doesn't take a --tee argument, you can do this (if you do use that argument, just replace --tee below with an argument you don't use):
#!/bin/bash
if [ -z "$1" ] || [ "$1" != --tee ]; then
$0 --tee "$@" | tee "$LOGFILE"
exit $?
else
shift
fi
# rest of script follows
This just has the script re-run itself, using the special argument --tee to prevent infinite recursion, piping its output into tee.
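One caveat: exit $? here returns tee's exit status, since tee is the last command in the pipeline. If callers need the script's own exit code, bash's pipefail option is one way to propagate it; a sketch of the modified branch:
set -o pipefail    # the pipeline now fails if any command in it fails
$0 --tee "$@" | tee "$LOGFILE"
exit $?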

Another approach would be to create a runner script, "run_it", through which all users invoke their own scripts:
run_it my_script
All the magic would be done inside it; for example, it could look like this:
LOG_DIR=/var/log/
"$@" | tee -a $LOG_DIR/
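Fleshed out, run_it might look like the following; deriving the log file name from the wrapped script's name is an assumption, so pick whatever convention suits you:
#!/bin/bash
# run_it: run the given command, teeing its output to a per-script log
LOG_DIR=/var/log
"$@" 2>&1 | tee -a "$LOG_DIR/$(basename "$1").log"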

Related

How can I pass argument to a Bash script for 'tee'?

I want to create a log output file based on the arguments I pass. I tried the below, which didn't work.
#!/bin/bash
echo "hello" | tee -a log_$1.log
I want log_test.log to be created, but instead log_.log is created when I run:
./script test
Using ${1} resolves the issue; the braces make the variable's boundary explicit.
#!/bin/bash
echo "hello" | tee -a log_${1}.log

Command to redirect output to console and to a file at the same time works fine in bash, but how do I make it work in Korn shell (ksh)?

All my scripts run on Korn shell, so I can't change them to bash just to make this particular command work:
exec > >(tee -a $LOGFILE) 2>&1
In the code below I use the variable logfile; lowercase names are better practice for your own shell variables.
You can try something like:
touch "${logfile}"
tail -f "${logfile}" &                   # follow the log in the background so output still reaches the console
tailpid=$!
trap 'kill -9 ${tailpid}' EXIT INT TERM  # clean up the background tail when the script exits
exec 1>"${logfile}" 2>&1                 # from here on, stdout and stderr both go to the log
A not too unreasonable technique is to re-exec the shell with output to tee. That is, at the top of the script, do something like:
#!/bin/sh
test -z "$REXEC" && { REXEC=1 exec "$0" "$@" | tee -a "$LOGFILE"; exit; }

Check if script was started by another script [duplicate]

Let's assume I have 3 shell scripts:
script_1.sh
#!/bin/bash
./script_3.sh
script_2.sh
#!/bin/bash
./script_3.sh
The problem is that in script_3.sh I want to know the name of the caller script, so that I can respond differently to each caller I support.
Please don't assume I'm asking about $0, because $0 will echo script_3 every time, no matter who the caller is.
Here is an example input with the expected output:
./script_1.sh should echo script_1
./script_2.sh should echo script_2
./script_3.sh should echo user_name or root or anything else that distinguishes the three cases
Is that possible? And if possible, how can it be done?
This is going to be added to a modified rm script, so that when I call rm directly it does something, but when git or any other CLI tool uses rm, it is not affected by the modification.
Based on #user3100381's answer, here's a much simpler command to get the same thing which I believe should be fairly portable:
PARENT_COMMAND=$(ps -o comm= $PPID)
Replace comm= with args= to get the full command line (command + arguments). The = alone is used to suppress the headers.
See: http://pubs.opengroup.org/onlinepubs/009604499/utilities/ps.html
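Applied to the question, script_3.sh could branch on that value; a minimal sketch (script names as in the question; note that comm is truncated to 15 characters on Linux):
#!/bin/bash
PARENT_COMMAND=$(ps -o comm= $PPID)
case "$PARENT_COMMAND" in
script_1.sh) echo script_1 ;;
script_2.sh) echo script_2 ;;
*) echo "$PARENT_COMMAND" ;;  # e.g. bash when run from an interactive shell
esac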
In case you are sourcing the script instead of calling/executing it, no new process is forked, so the solutions using ps won't work reliably.
Use the bash built-in caller in that case.
$ cat h.sh
#! /bin/bash
function warn_me() {
echo "$#"
caller
}
$
$ cat g.sh
#!/bin/bash
source h.sh
warn_me "Error: You did not do something"
$
$ . g.sh
Error: You did not do something
g.sh
$
The $PPID variable holds the parent process ID. So you could parse the output from ps to get the command.
#!/bin/bash
PARENT_COMMAND=$(ps $PPID | tail -n 1 | awk '{print $5}')
Based on #J.L.'s answer, with more in-depth explanation; this works on Linux:
cat /proc/$PPID/comm
gives you the name of the parent PID's command.
If you prefer the command with all its options, then:
cat /proc/$PPID/cmdline
Explanation:
$PPID is defined by the shell; it's the PID of the parent process.
In /proc/ (on Linux), there is a directory for the PID of each process. If you cat /proc/$PPID/comm, you echo the command name of that PID.
Check man proc.
A couple of useful files are kept in /proc/$PPID:
/proc/*some_process_id*/exe: a symlink to the last executed command under *some_process_id*
/proc/*some_process_id*/cmdline: a file containing the last executed command under *some_process_id*, with null-byte-separated arguments
So, a slight simplification:
sed 's/\x0/ /g' "/proc/$PPID/cmdline"
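The same substitution with tr, if you find it more readable:
tr '\000' ' ' < "/proc/$PPID/cmdline"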
If you have /proc:
PARENT_COMMAND=$(cat /proc/$PPID/comm)
Declare this:
PARENT_NAME=$(ps -ocomm --no-header $PPID)
Thus you'll get a nice variable $PARENT_NAME that holds the parent's name.
You can simply use the command below to avoid calling cut/awk/sed:
ps --no-headers -o command $PPID
If you only want the command name and none of its arguments, you can use:
ps --no-headers -o command $PPID | cut -d' ' -f1
You could pass in a variable to script_3.sh to determine how to respond...
script_1.sh
#!/bin/bash
./script_3.sh script1
script_2.sh
#!/bin/bash
./script_3.sh script2
script_3.sh
#!/bin/bash
if [ "$1" = 'script1' ]; then
echo "we were called from script1!"
elif [ "$1" = 'script2' ]; then
echo "we were called from script2!"
fi
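Assuming all three scripts are executable and in the current directory, you'd then get:
$ ./script_1.sh
we were called from script1!
$ ./script_2.sh
we were called from script2!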

zsh script [process complete] not returning back to shell

I wrote a zsh function to help me do some grepping at my job.
function rgrep (){
if [ -n "$1" ] && [ -n "$2" ]
then
exec grep -rnw $1 -r $2
elif [ -n "$1" ]
then
exec grep -rnw $1 -r "./"
else
echo "please enter one or two args"
fi
}
It works great; however, when grep finishes executing I don't get thrown back into the shell. It just hangs at [process complete]. Any ideas?
I have the function in my .zshrc
In addition to getting rid of the unnecessary exec, you can remove the if statement as well.
function rgrep (){
grep -rwn "${1:?please enter one or two args}" -r "${2:-./}"
}
If $1 is not set (or null valued), an error will be raised and the given message displayed. If $2 is not set, a default value of ./ will be used in its place.
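Usage then looks like this (the exact wording of the abort message varies slightly by shell):
% rgrep             # no args: aborts with "please enter one or two args"
% rgrep TODO        # searches ./ recursively
% rgrep TODO src/   # searches only src/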
Do not use exec, as it replaces the existing shell.
exec [-cl] [-a name] [command [arguments]]
If command is supplied, it replaces the shell without creating a new process. If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command. This is what the login program does. The -c option causes command to be executed with an empty environment. If -a is supplied, the shell passes name as the zeroth argument to command. If no command is specified, redirections may be used to affect the current shell environment. If there are no redirection errors, the return status is zero; otherwise the return status is non-zero.
Try this instead:
rgrep ()
{
if [ -n "$1" ] && [ -n "$2" ]
then
grep -rnw "$1" -r "$2"
elif [ -n "$1" ]
then
grep -rnw "$1" -r "./"
else
echo "please enter one or two args"
fi
}
As a completely different approach, I like to build command shortcuts like this as minimal shell scripts, rather than functions (or aliases):
% echo 'grep -rwn "$@"' >rgrep
% chmod +x rgrep
% ./rgrep
Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.
%
(This relies on a traditional behavior of Unix: executable text files without #! lines are considered shell scripts and are executed by /bin/sh. If that doesn't work on your system, or you need to run specifically under zsh, use an appropriate #! line.)
One of the main benefits of this approach is that shell scripts in a directory in your PATH are full citizens of the environment, not local to the current shell like functions and aliases. This means they can be used in situations where only executable files are viable commands, such as xargs, sudo, or remote invocation via ssh.
This doesn't provide the ability to give default arguments (or not easily, anyway), but IMAO the benefits outweigh the drawbacks. (And in the specific case of defaulting grep to search PWD recursively, the real solution is to install ack.)
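For instance, because rgrep is a real executable it can be driven by xargs, which cannot invoke shell functions (the directory names here are hypothetical):
% printf '%s\n' src tests | xargs ./rgrep TODO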

Possible to get all outputs to stdout in a script?

I would like to log all error messages that the commands in a Bash script contains.
The problem is that if I have to add
E=$( ... 2>&1 ); echo $E >> $LOG
to all commands, then the script will become quite hard to read.
Question
Is it somehow possible to get a global variable, so all STDERR becomes STDOUT?
Just start your script with this:
exec 2>&1
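A tiny demonstration, with a path chosen to force an error:
#!/bin/bash
exec 2>&1           # from here on, stderr goes wherever stdout currently points
ls /nonexistent     # this error message now travels on stdout
echo "normal output"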
You can do things like:
#!/bin/sh
test -z "$DOLOGGING" && { DOLOGGING=no exec $0 "${@}" 2>&1 | tee log-file; exit; }
...
to duplicate all output/errors to log-file. Although it seems I misread the question: you may just want to add exec 2>&1 >/dev/null to the top of your script to print all errors to stdout and discard all normal output.
