How to suppress command output safely? - linux

Usually on unix systems you can suppress command output by redirecting STDOUT and/or STDERR to a file or /dev/null. But what if you need to pass content to a piped command via STDIN in a bash script?
The example below should make clear what is meant. It's just an example, though - I'm not looking for a solution to this specific command but to that kind of situation in general. Sadly, there are numerous situations where you want to suppress output in a script but need to pass content via STDIN, because the command has no switch to submit the information in another way.
My "problem" is that I wrote a function to execute commands with proper error handling, and in it I would like to redirect all output produced by the executed commands to a log file.
Example problem:
[18:25:35] [V] root@vbox:~# echo 'test' | read -p 'Test Output' TMP &>/dev/null
[18:25:36] [V] root@vbox:~# echo $TMP
[18:25:36] [V] root@vbox:~#
Any ideas on how to solve my problem?

What user000001 is saying is that all commands in a bash pipeline are executed in subshells. So, when the subshell handling the read command exits, the $TMP variable disappears too. You have to account for this and either:
avoid subshells (examples in user000001's comment; see also the sketch after this list)
do all your work with variables in the same subshell
echo test | { read value; echo subshell $value; }; echo parent $value
use a different shell
$ ksh -c 'echo test | { read value; echo subshell $value; }; echo parent $value'
subshell test
parent test
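For the original problem (pass the data on STDIN, keep the variable, and still discard unwanted output), a here-string or process substitution keeps read in the current shell; a rough sketch, assuming bash:
# read runs in the current shell, so TMP survives; any stderr noise is discarded.
read -p 'Test Output' TMP <<< 'test' 2>/dev/null
echo "$TMP"                  # prints: test
# The same idea with process substitution, when the input comes from a command:
read TMP < <(echo 'test')
echo "$TMP"                  # prints: test
# bash 4.2+ scripts can also keep the last element of a pipeline in the current shell:
#   shopt -s lastpipe
#   echo 'test' | read TMP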

Related

How to capture error messages from a program that fails only outside the terminal?

On a Linux server, I have a script here that will work fine when I start it from the terminal, but fail when started and then detached by another process. So there is probably a difference in the script's environment to fix.
The trouble is, the other process integrating that script does not provide access to its error messages when the script fails. What is an easy (and ideally generic) way to see the output of such a script when it's failing?
Let's assume I have no easy way to change the code of the process calling this script. The failure happens right at the start of the script's run, so there is not enough time to manually attach to it with strace to see its output.
(The specifics should not matter, but for what it's worth: the failing script is the backup script of Discourse, a widespread open source forum software. Discourse and this script are written in Ruby.)
The idea is to substitute the original script with a wrapper that calls the original script and saves its stdout and stderr to files. The wrapper may look like this:
#!/bin/bash
exec /path/to/original/script "$@" 1> >(tee /tmp/out.log) 2> >(tee /tmp/err.log >&2)
1> >(tee /tmp/out.log) redirects stdout to the input of tee /tmp/out.log running in a subshell; tee passes it through to stdout but saves a copy to the file.
2> >(tee /tmp/err.log >&2) redirects stderr to the input of tee /tmp/err.log >&2 running in a subshell; tee writes it back to stderr but saves a copy to the file.
If the script is invoked multiple times, you may want to append stdout and stderr to the log files; use tee -a in that case.
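For instance, the appending variant of the wrapper's exec line would be:
exec /path/to/original/script "$@" 1> >(tee -a /tmp/out.log) 2> >(tee -a /tmp/err.log >&2)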
The remaining problem is how to force the caller to execute the wrapper script instead of the original one.
If the caller invokes the script by name, so that it is looked up via PATH, you can put the wrapper script in a separate directory and provide a modified PATH to the caller. For example, if the script name is script, put the wrapper at /some/dir/script and run the caller as
$ PATH="/some/dir:$PATH" caller
The /path/to/original/script inside the wrapper must then be an absolute path.
If the caller invokes the script by a specific path, you can instead rename the original script, e.g. to original-script, and name the wrapper script. In this case the wrapper should call /path/to/original/original-script.
Another problem may arise if the script behaves differently depending on the name it is called by. In that case exec -a ... may be needed.
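A sketch of such a wrapper for the renamed-script layout above, using exec -a to keep the original name in argv[0] (paths are placeholders):
#!/bin/bash
# Delegate to the renamed original while presenting the original name to it,
# and still capture stdout/stderr as before.
exec -a script /path/to/original/original-script "$@" \
    1> >(tee -a /tmp/out.log) \
    2> >(tee -a /tmp/err.log >&2)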
You can use a bash script that (1) does "busy waiting" until it sees the targeted process, and then (2) immediately attaches to it with strace and prints its output to the terminal.
#!/bin/sh
# Adapt to a regex that matches only your target process' full command.
name_pattern="bin/ruby.*spawn_backup_restore.rb"
# Wait for a process to start, based on its name, and capture its PID.
# Inspiration and details: https://unix.stackexchange.com/a/410075
pid=
while [ -z "$pid" ]; do
    pid="$(pgrep --full "$name_pattern" | head -n 1)"
    # Set delay for next check to 1ms to try capturing all output.
    # Remove completely if this is not enough to capture from the start.
    sleep 0.001
done
echo "target process has started, pid is $pid"
# Print all stdout and stderr output of the process we found.
# Source and explanations: https://unix.stackexchange.com/a/58601
strace -p "$pid" -s 9999 -e write

How to redirect the output of fuser to a file? [duplicate]

Is it possible to redirect all of the output of a Bourne shell script to somewhere, but with shell commands inside the script itself?
Redirecting the output of a single command is easy, but I want something more like this:
#!/bin/sh
if [ ! -t 0 ]; then
# redirect all of my output to a file here
fi
# rest of script...
Meaning: if the script is run non-interactively (for example, cron), save off the output of everything to a file. If run interactively from a shell, let the output go to stdout as usual.
I want to do this for a script normally run by the FreeBSD periodic utility. It's part of the daily run, which I don't normally care to see every day in email, so I don't have it sent. However, if something inside this one particular script fails, that's important to me and I'd like to be able to capture and email the output of this one part of the daily jobs.
Update: Joshua's answer is spot-on, but I also wanted to save and restore stdout and stderr around the entire script, which is done like this:
# save stdout and stderr to file
# descriptors 3 and 4,
# then redirect them to "foo"
exec 3>&1 4>&2 >foo 2>&1
# ...
# restore stdout and stderr
exec 1>&3 2>&4
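While these redirections are in effect, descriptor 3 still refers to the original stdout, so individual messages can bypass the log if needed, e.g.:
echo "this line goes into foo"
echo "this one still reaches the original stdout" >&3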
Addressing the question as updated.
#...part of script without redirection...
{
#...part of script with redirection...
} > file1 2>file2 # ...and others as appropriate...
#...residue of script without redirection...
The braces '{ ... }' provide a unit of I/O redirection. The braces must appear where a command could appear - simplistically, at the start of a line or after a semi-colon. (Yes, that can be made more precise; if you want to quibble, let me know.)
You are right that you can preserve the original stdout and stderr with the redirections you showed, but it is usually simpler for the people who have to maintain the script later to understand what's going on if you scope the redirected code as shown above.
The relevant sections of the Bash manual are Grouping Commands and I/O Redirection. The relevant sections of the POSIX shell specification are Compound Commands and I/O Redirection. Bash has some extra notations, but is otherwise similar to the POSIX shell specification.
Typically we would place one of these at or near the top of the script. Scripts that parse their command lines would do the redirection after parsing.
Send stdout to a file
exec > file
with stderr
exec > file
exec 2>&1
append both stdout and stderr to file
exec >> file
exec 2>&1
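And for the case mentioned above where the script parses its command line first, a sketch of "redirect after parsing" (the -l option and its handling are made up for illustration):
#!/bin/sh
logfile=
while getopts l: opt; do
    case $opt in
        l) logfile=$OPTARG ;;
    esac
done
shift $((OPTIND - 1))
# Only once the options are known do we switch the whole script's output.
if [ -n "$logfile" ]; then
    exec >>"$logfile" 2>&1
fi
echo "from here on, output goes to the log if -l was given"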
As Jonathan Leffler mentioned in his comment:
exec has two separate jobs. The first is to replace the currently executing shell (script) with a new program. The other is to change the I/O redirections in the current shell; that mode is selected by giving exec no command argument.
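A minimal illustration of the two modes (file names are placeholders):
exec > out.log 2>&1       # no command word: only the current shell's redirections change
exec sort -u < data.txt   # with a command: the shell process is replaced by sort;
                          # nothing after this line would run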
You can make the whole script a function like this:
main_function() {
    do_things_here
}
then at the end of the script have this:
if [ -z "$TERM" ]; then
    # not run via a terminal: log everything into a log file
    main_function >> /var/log/my_uber_script.log 2>&1
else
    # run via a terminal: only output to the screen
    main_function
fi
Alternatively, you may log everything into logfile each run and still output it to stdout by simply doing:
# log everything, but also output to stdout
main_function 2>&1 | tee -a /var/log/my_uber_script.log
For saving the original stdout and stderr you can use:
exec [fd number]<&1
exec [fd number]<&2
For example, the following code will print "walla1" and "walla2" to the log file (a.txt), "walla3" to stdout, "walla4" to stderr.
#!/bin/bash
exec 5<&1
exec 6<&2
exec 1> ~/a.txt 2>&1
echo "walla1"
echo "walla2" >&2
echo "walla3" >&5
echo "walla4" >&6
[ -t 0 ] || exec >> test.log
(-t 0 tests whether stdin is a terminal; if it is not, everything written to stdout from here on is appended to test.log.)
I finally figured out how to do it. I wanted to not just save the output to a file but also, find out if the bash script ran successfully or not!
I've wrapped the bash commands inside a function and then called the function main_function with its output tee'd to a file. Afterwards, I've checked the exit status using if [ $? -eq 0 ].
#!/bin/bash
main_function() {
    python command.py
}
main_function > >(tee -a "/var/www/logs/output.txt") 2>&1
if [ $? -eq 0 ]
then
    echo 'Success!'
else
    echo 'Failure!'
fi

shell prompt seemingly does not reappear after running a script that uses exec with tee to send stdout output to both the terminal and a file

I have a shell script which writes all output to a logfile and to the terminal. That part works fine, but when I execute the script, a new shell prompt only appears if I press enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt; it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way I see to solve this is to put your whole script inside a brace-grouped list:
{
your-code-here
} | tee logfile
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]|4.[0-3].*) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2       # make backups of the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$!  # track the PID of tee after starting it
cleanup() {                                  # define a function we'll call during shutdown
    retval=$?
    exec >&$orig_stdout                      # copy the original stdout back to FD 1, overwriting the pipe to tee
    exec 2>&$orig_stderr                     # if something pointed stderr at tee as well, fix that too
    wait "$tee_pid"                          # now, wait until tee exits
    exit "$retval"                           # and complete the exit with the original exit status
}
trap cleanup EXIT                            # arrange for the function above to run at exit
echo "Writing something to stdout here"

Need explanations for Linux bash builtin exec command behavior

From Bash Reference Manual I get the following about exec bash builtin command:
If command is supplied, it replaces the shell without creating a new process.
Now I have the following bash script:
#!/bin/bash
exec ls;
echo 123;
exit 0
This executed, I got this:
cleanup.sh ex1.bash file.bash file.bash~ output.log
(files from the current directory)
Now, if I have this script:
#!/bin/bash
exec ls | cat
echo 123
exit 0
I get the following output:
cleanup.sh
ex1.bash
file.bash
file.bash~
output.log
123
My question is:
If exec replaces the shell without creating a new process, why is echo 123 printed when | cat is added, but not without it? I would be happy if someone could explain the logic of this behavior.
Thanks.
EDIT:
After @torek's response, I get an even harder-to-explain behavior:
1. exec ls>out creates the out file and puts the result of the ls command in it;
2. exec ls>out1 ls>out2 creates only the files, but does not put any result inside them. If the command works as suggested, I think command number 2 should have the same result as command number 1 (even more, I think it should not have created the out2 file at all).
In this particular case, you have the exec in a pipeline. In order to execute a series of pipeline commands, the shell must initially fork, making a sub-shell. (Specifically it has to create the pipe, then fork, so that everything run "on the left" of the pipe can have its output sent to whatever is "on the right" of the pipe.)
To see that this is in fact what is happening, compare:
{ ls; echo this too; } | cat
with:
{ exec ls; echo this too; } | cat
The former runs ls without leaving the sub-shell, so the sub-shell is still around to run the echo. The latter runs ls by leaving the sub-shell, which is therefore no longer there to do the echo, and this too is not printed.
(The use of curly-braces { cmd1; cmd2; } normally suppresses the sub-shell fork action that you get with parentheses (cmd1; cmd2), but in the case of a pipe, the fork is "forced", as it were.)
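A quick way to observe that forced fork, using $BASHPID (which, unlike $$, changes in a subshell):
echo "top level: $$ $BASHPID"
{ echo "left of pipe: $$ $BASHPID"; } | cat   # $$ is unchanged, $BASHPID differs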
Redirection of the current shell happens only if there is "nothing to run", as it were, after the word exec. Thus, e.g., exec >stdout 4<input 5>>append modifies the current shell, but exec foo >stdout 4<input 5>>append tries to exec command foo. [Note: this is not strictly accurate; see addendum.]
Interestingly, in an interactive shell, after exec foo >output fails because there is no command foo, the shell sticks around, but stdout remains redirected to file output. (You can recover with exec >/dev/tty. In a script, the failure to exec foo terminates the script.)
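An interactive transcript of that recovery, roughly (the file name output is arbitrary):
$ exec foo >output
bash: exec: foo: not found
$ echo where did this go      # lands in the file "output", invisible here
$ exec >/dev/tty
$ echo back on the terminal
back on the terminal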
With a tip of the hat to @Pumbaa80, here's something even more illustrative:
#! /bin/bash
shopt -s execfail
exec ls | cat -E
echo this goes to stdout
echo this goes to stderr 1>&2
(note: cat -E is simplified down from my usual cat -vET, which is my handy go-to for "let me see non-printing characters in a recognizable way"). When this script is run, the output from ls has cat -E applied (on Linux this makes end-of-line visible as a $ sign), but the output sent to stdout and stderr (on the remaining two lines) is not redirected. Change the | cat -E to > out and, after the script runs, observe the contents of file out: the final two echos are not in there.
Now change the ls to foo (or some other command that will not be found) and run the script again. This time the output is:
$ ./demo.sh
./demo.sh: line 3: exec: foo: not found
this goes to stderr
and the file out now has the contents produced by the first echo line.
This makes what exec "really does" as obvious as possible (but no more obvious, as Albert Einstein did not put it :-) ).
Normally, when the shell goes to execute a "simple command" (see the manual page for the precise definition, but this specifically excludes the commands in a "pipeline"), it prepares any I/O redirection operations specified with <, >, and so on by opening the files needed. Then the shell invokes fork (or some equivalent but more-efficient variant like vfork or clone depending on underlying OS, configuration, etc), and, in the child process, rearranges the open file descriptors (using dup2 calls or equivalent) to achieve the desired final arrangements: > out moves the open descriptor to fd 1—stdout—while 6> out moves the open descriptor to fd 6.
If you specify the exec keyword, though, the shell suppresses the fork step. It does all the file opening and file-descriptor-rearranging as usual, but this time, it affects any and all subsequent commands. Finally, having done all the redirections, the shell attempts to execve() (in the system-call sense) the command, if there is one. If there is no command, or if the execve() call fails and the shell is supposed to continue running (is interactive or you have set execfail), the shell soldiers on. If the execve() succeeds, the shell no longer exists, having been replaced by the new command. If execfail is unset and the shell is not interactive, the shell exits.
(There's also the added complication of the command_not_found_handle shell function: bash's exec seems to suppress running it, based on test results. The exec keyword in general makes the shell not look at its own functions, i.e., if you have a shell function f, running f as a simple command runs the shell function, as does (f) which runs it in a sub-shell, but running (exec f) skips over it.)
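A small sketch of that last point (the function name f is arbitrary):
f() { echo "shell function f"; }
f           # prints: shell function f
( f )       # also prints it, from a subshell
( exec f )  # bash: exec: f: not found -- exec only looks for an external executable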
As for why ls>out1 ls>out2 creates two files (with or without an exec), this is simple enough: the shell opens each redirection, and then uses dup2 to move the file descriptors. If you have two ordinary > redirects, the shell opens both, moves the first one to fd 1 (stdout), then moves the second one to fd 1 (stdout again), closing the first in the process. Finally, it runs ls ls, because that's what's left after removing the >out1 >out2. As long as there is no file named ls, the ls command complains to stderr, and writes nothing to stdout.
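A minimal way to see both effects side by side (this assumes no file or directory named ls exists in the current directory):
rm -f out1 out2
ls >out1 >out2     # one command word (ls): out1 is created empty, the listing lands in out2
ls>out1 ls>out2    # the command left over is `ls ls`: both files are created empty,
                   # and ls's complaint about the missing file goes to stderr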

Linux commands (cp, rm) are not executed in a Perl script. Some work, but no error is returned

I'm having a very strange problem.
I run a Perl script which executes Linux commands. They are executed like this:
my $err = `cp -r $HTML /tssobe/www/tstweb/$subpath/$HTMLDIR1`;
myLog("$err");
And $err is empty, which means the command didn't return an error. (Right?)
I tried to execute the Linux command with exec "" or system(), but with no success.
I tried to change the path. Same.
Also, I tried to run only the cp command in a new Perl script, and it works.
But not in my full Perl script.
In this Perl script, some commands work and some do not.
The script was working yesterday; not anymore this morning. No changes have been made in the meantime.
I tried a lot of things, and I would be glad if anybody has an idea.
EDIT:
The server had a lot of unterminated processes. Cleaning those up solved the problem.
So the problem was related to another application, but I'll improve the logging thanks to your comments.
Small problem: you are NOT capturing STDERR, so you won't see the error (you are also not checking $? return code).
You should do
my $err = `cp -r $HTML /tssobe/www/tstweb/$subpath/$HTMLDIR1 2>&1`;
to redirect STDERR to STDOUT, or use one of the modules for running commands.
Large problem:
You should not run system commands from Perl when Perl-native modules exist for the task; in this case, the File::Copy::Recursive module.
You can also roll your own directory copy using File::Copy.
Are you using backticks? Add -v to the cp command to see something on STDOUT, redirect STDERR to STDOUT, and check the command's exit code rather than the error message on STDERR.
What about printing out the command output right after the execution?
my $err = `cp -rv $HTML /tssobe/www/tstweb/$subpath/$HTMLDIR1 2>&1`;
my $exitcode = $? >> 8;
warn "Output: $err\nexitcode: $exitcode\n";
It would be better to use qx. Check this: http://www.perlmonks.org/?node_id=454715
You may also want to quote arguments that could contain shell special characters, including spaces. Since the shell does word splitting on the string given to it, if $HTML contains a space, cp would get more arguments than you expect. Perl has a very simple mechanism for that: \Q...\E. Here is how you use it:
my $err = `cp -r \Q$HTML\E \Q/tssobe/www/tstweb/$subpath/$HTMLDIR1\E 2>&1`;
Anything other than alphanumeric characters will be backslash-escaped before being passed to the shell, so you provide exactly two arguments to cp regardless of what is in those variables.
