Command to redirect output to console and to a file at the same time works fine in bash, but how do I make it work in Korn shell (ksh) on Linux?

The command below, which redirects output to the console and to a file at the same time, works fine in bash, but how do I make it work in Korn shell (ksh)?
All my scripts run on Korn shell, so I can't change them to bash just for this one command to work.
exec > >(tee -a $LOGFILE) 2>&1

In the code below I use the variable logfile; lowercase names are preferable for ordinary shell variables.
You can try something like
touch "${logfile}"
tail -f "${logfile}"&
tailpid=$!
trap 'kill -9 ${tailpid}' EXIT INT TERM
exec 1>"${logfile}" 2>&1
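Putting the pieces together, a minimal self-contained sketch of this technique (the log path is an example) might look like:

```shell
#!/bin/ksh
# Sketch: mirror all script output to the terminal and to a log file,
# without bash-style process substitution (works in ksh and POSIX sh).
logfile=/tmp/myscript.log

: > "$logfile"            # create/truncate the log file
tail -f "$logfile" &      # stream the log back to the terminal
tailpid=$!
trap 'kill "$tailpid"' EXIT INT TERM
exec 1>"$logfile" 2>&1    # from here on, everything goes to the log

echo "this line reaches both the terminal and $logfile"
```

The backgrounded tail -f echoes the log back to the terminal, and the trap cleans it up when the script exits.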

A not too unreasonable technique is to re-exec the shell with output to tee. That is, at the top of the script, do something like:
#!/bin/sh
test -z "$REXEC" && { REXEC=1 exec "$0" "$@" | tee -a "$LOGFILE"; exit; }
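A runnable sketch of this pattern (the log path is an example; REXEC is just a guard variable to stop the recursion):

```shell
#!/bin/sh
# Sketch: on the first invocation, re-exec this script with its output
# piped into tee; the REXEC environment variable stops the recursion.
LOGFILE=/tmp/rexec-demo.log
test -z "$REXEC" && { REXEC=1 exec "$0" "$@" | tee -a "$LOGFILE"; exit; }

echo "script body runs once; output goes to the terminal and $LOGFILE"
```

The exec inside the braces runs in a pipeline, so it replaces a subshell whose output feeds tee; the exported REXEC=1 keeps the second invocation from re-exec'ing again.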

nohup append the executed command at the top of the output file

Let's say that we invoke the nohup in the following way:
nohup foo.py -n 20 2>&1 &
This writes the output to nohup.out.
How could we get the whole command, nohup foo.py -n 20 2>&1 &, written at the top of nohup.out (or any other specified output file), followed by the regular output of the executed command?
The reason is purely for debugging: thousands of commands like this will be executed, and quite often some of them will crash for various reasons. The file then serves as a basic report, with the executed command at the top followed by its output.
A straightforward alternative would be something like:
myNohup() {
(
set +m # disable job control
[[ -t 0 ]] && exec </dev/null # redirect stdin away from tty
[[ -t 1 ]] && exec >nohup.out # redirect stdout away from tty
[[ -t 2 ]] && exec 2>&1 # redirect stderr away from tty
set -x # enable trace logging of all commands run
"$@" # run our arguments as a command
) & disown -h "$!" # do not forward any HUP signal to the child process
}
To test this, we can define a command:
waitAndWrite() { sleep 5; echo "finished"; }
...and run:
myNohup waitAndWrite
This returns immediately and, after five seconds, leaves the following in nohup.out:
+ waitAndWrite
+ sleep 5
+ echo finished
finished
If you only want to write the exact command run, without the side effects of xtrace, replace the set -x with (assuming bash 5.0 or newer) printf '%s\n' "${*@Q}".
For older versions of bash, you might instead consider printf '%q ' "$@"; printf '\n'.
This does differ a little from what the question proposes:
Redirections and other shell directives are not logged by set -x. When you run nohup foo 2>&1 &, the 2>&1 is not passed as an argument to nohup; instead, it's something the shell does before nohup is started. Similarly, the & is not an argument but an instruction to the shell not to wait() for the subprocess to finish before going on to future commands.
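If the xtrace output is more than you need, a simpler hedged sketch is to write the command text into the file yourself before launching it (the file name and command here are placeholders):

```shell
#!/bin/sh
# Sketch: put the command line at the top of the output file, then
# append the command's own output below it.
out=report.out
cmd='echo hello world'          # placeholder for the real command

printf '%s\n' "$cmd" > "$out"   # line 1: the command as text
nohup $cmd >> "$out" 2>&1 &     # the rest: the command's output
wait                            # only for this demo; drop it in real use
```

This deliberately keeps the command in a plain string (and relies on word splitting), so it only suits simple commands without quoting; for anything more complex, the set -x approach above is more robust.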

Wrapper script that writes logs of arguments, stdin and stdout

I want to create a wrapper script which writes logs of arguments, stdin and stdout.
I have written the following script wrapper.sh, which works almost fine.
#! /bin/bash
wrapped_command="/path/to/command" # Set the path to the command which we want to wrap
log_dir="/tmp/stdio-log"
mkdir -p "$log_dir"
args_logfile=$log_dir/args
stdin_logfile=$log_dir/stdin
stdout_logfile=$log_dir/stdout
stderr_logfile=$log_dir/stderr
echo "$@" > "$args_logfile"
tee -a "$stdin_logfile" |
"$wrapped_command" "$#" > >(tee -a "$stdout_logfile") 2> >(tee -a "$stderr_logfile" >&2)
I expect that ./wrapper.sh arg1 arg2 gives the same result as /path/to/command arg1 arg2 with logs in /tmp/stdio-log/.
But it gives a slightly different result in Example 2 below.
Example 1: a command that accepts standard inputs
#! /bin/bash
while read line
do
echo "input: $line"
done
The above wrapper script works as expected with this example.
Example 2: a command that does not accept standard inputs
#! /bin/bash
echo "Example command"
With this example, I got the following different behavior:
/path/to/command exits immediately.
./wrapper.sh does not exit immediately. I must type <Return> once to finish wrapper.sh.
Question
How can I fix the wrapper script (or rewrite with different methods) so that it works as expected with both examples simultaneously?
The tee -a "$stdin_logfile" invoked in the foreground waits for input.
How can I fix the wrapper script …?
It works as you expect if stdin is handled just like stdout with Process Substitution.
"$wrapped_command" "$#" < <(tee -a "$stdin_logfile")\
> >(tee -a "$stdout_logfile") 2> >(tee -a "$stderr_logfile" >&2)

Linux: start a script after another has finished

I read the answer for this issue from this link
on Stackoverflow.com, but I am new to writing shell scripts and did something wrong. Here are my scripts:
testscript:
#!/bin/csh -f
pid=$(ps -opid= -C csh testscript1)
while [ -d /proc/$pid ] ; do
sleep 1
done && csh testscript2
exit
testscript1:
#!/bin/csh -f
/usr/bin/firefox
exit
testscript2:
#!/bin/csh -f
echo Done
exit
The purpose is for testscript to call testscript1 first; once testscript1 has finished (which means the Firefox instance started in testscript1 has been closed), testscript will call testscript2. However, I got this result after running testscript:
$ csh testscript
Illegal variable name.
Please help me with this issue. Thanks ahead.
I believe this line is not CSH:
pid=$(ps -opid= -C csh testscript1)
In general in csh you define variables like this:
set pid=...
I am not sure what the $() syntax is; perhaps backticks would work as a replacement:
set pid=`ps -opid= -C csh testscript1`
Perhaps you didn't notice that the scripts you found were written for bash, not csh, but
you're trying to process them with the csh interpreter.
It looks like you've misunderstood what the original code was trying to do -- it was
intended to monitor an already-existing process, by looking up its process id using the process name.
You seem to be trying to start the first process from inside the ps command. But
in that case, there's no need for you to do anything so complicated -- all you need
is:
#!/bin/csh
csh testscript1
csh testscript2
Unless you go out of your way to run one of the scripts in the background,
the second script will not run until the first script is finished.
Although this has nothing to do with your problem, csh is more oriented toward
interactive use; for script writing, it's considered a poor choice, so you might be
better off learning bash instead.
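In bash, sequential execution works the same way; a minimal sketch of the equivalent:

```shell
#!/bin/bash
# Sketch: the second script starts only after the first has finished;
# with &&, it also only starts if the first exited successfully.
./testscript1 && ./testscript2
```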
Try the script below: it checks for testscript1's pid, and if the process is not found it executes testscript2.
sp=$(ps -ef | grep testscript1 | grep -v grep | awk '{print $2}')
/bin/ls -l /proc/ | grep "$sp" > /dev/null 2>&1 && sleep 0 || /bin/csh testscript2
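A less fragile variant of the same idea, assuming pgrep is available, avoids the ps | grep | grep -v grep chain entirely:

```shell
#!/bin/sh
# Sketch: run testscript2 only if no testscript1 process is found.
# pgrep -f matches the pattern against the full command line.
if ! pgrep -f testscript1 >/dev/null 2>&1; then
    csh testscript2
fi
```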

Possible to get all outputs to stdout in a script?

I would like to log all error messages that the commands in a Bash script contains.
The problem is that if I have to add
E=$( ... 2>&1 ); echo $E >> $LOG
to all commands, then the script will become quite hard to read.
Question
Is it somehow possible to get a global variable, so all STDERR becomes STDOUT?
Just start your script with this:
exec 2>&1
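A quick sketch of what that does (the path below is a deliberately nonexistent example):

```shell
#!/bin/sh
# Sketch: after exec 2>&1, stderr shares stdout's destination, so
# redirecting this script's stdout also captures error messages.
exec 2>&1
echo "normal output"
ls /no-such-path-example    # this error message now goes to stdout
```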
You can do things like:
#!/bin/sh
test -z "$DOLOGGING" && { DOLOGGING=no exec "$0" "$@" 2>&1 | tee log-file; exit; }
...
to duplicate all output/errors to log-file. Although it seems I misread the question: you just want to add exec 2>&1 >/dev/null to the top of your script to print all errors to stdout and discard all regular output.

Force `tee` to run for every command in a shell script?

I would like to have a script wherein all commands are tee'd to a log file.
Right now I am running every command in the script like this:
<command> | tee -a $LOGFILE
Is there a way to force every command in a shell script to pipe to tee?
I cannot force users to add appropriate teeing when running the script, and want to ensure it logs properly even if the calling user doesn't add a logging call of their own.
You can do a wrapper inside your script:
#!/bin/bash
{
echo 'hello'
some_more_commands
echo 'goodbye'
} | tee -a /path/to/logfile
Edit:
Here's another way:
#!/bin/bash
exec > >(tee -a /path/to/logfile)
echo 'hello'
some_more_commands
echo 'goodbye'
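To capture stderr in the log as well, the same line extends with a 2>&1 placed after the process substitution (a sketch; the log path is an example):

```shell
#!/bin/bash
# Sketch: route both stdout and stderr of every following command
# through tee. The order matters: first point stdout at the process
# substitution, then point stderr at (the new) stdout.
exec > >(tee -a /tmp/example.log) 2>&1
echo 'hello'
echo 'an error message' >&2
```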
Why not expose a wrapper that's simply:
/path/to/yourOriginalScript.sh | tee -a $LOGFILE
Your users should not execute (nor even know about) yourOriginalScript.sh.
Assuming that your script doesn't take a --tee argument, you can do this (if you do use that argument, just replace --tee below with an argument you don't use):
#!/bin/bash
if [ -z "$1" ] || [ "$1" != --tee ]; then
"$0" --tee "$@" | tee "$LOGFILE"
exit $?
else
shift
fi
# rest of script follows
This just has the script re-run itself, using the special argument --tee to prevent infinite recursion, piping its output into tee.
One approach would be to create a runner script, "run_it", through which all users invoke their own scripts:
run_it my_script
All the magic is done inside it; for example, it could look like this:
LOG_DIR=/var/log/
"$@" | tee -a $LOG_DIR/
