How to redirect all stderr in bash? - linux

I'm looking for a way to redirect all the stderr streams in interactive bash (ideally to its calling parent process).
I don't want to redirect the stderr stream of each individual command, which I could do by appending 2> a_file to each command.
By default, these stderr streams end up on the stdout of an interactive bash. I would like to get them on the stderr of this interactive bash process, to keep my stdout from being polluted by error messages and to be able to treat them separately.
Any ideas?
I still haven't found an answer... But maybe it's actually a tty parameter. Does anybody know something about the tty/interactive shell responsibility for handling stderr?

Use the exec builtin in bash:
exec 2> /tmp/myfile
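If you want to put stderr back later, save it on a spare file descriptor first. A minimal sketch (descriptor 3 and the file path are arbitrary choices):
exec 3>&2             # save the original stderr on fd 3
exec 2> /tmp/myfile   # from now on, stderr goes to the file
ls nosuchfile         # this error message lands in /tmp/myfile
exec 2>&3 3>&-        # restore stderr and close the spare descriptor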

You could launch a new bash process redirecting the stderr of that process:
$ bash -i 2> stderr.log
$

I find a good way is to surround the commands with parentheses, '()' (which launches a sub-shell), or curly braces, '{}' (no sub-shell, so faster):
{
cmd1
cmd2
...
cmdN
} 2> error.log
Of course, this can be done on 1 line:
{ cmd1; cmd2; ... cmdN; } 2> error.log
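The same grouping also lets you split the streams; for instance, to collect stdout and stderr in separate files (the file names are just examples):
{ cmd1; cmd2; } > output.log 2> error.log
Note that with curly braces the space after '{' and the semicolon (or newline) before '}' are required.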

Try your commands in double quotes, like so:
ssh remotehost "command" 2>~/stderr
Tested on my local system using a nonexistent file on the remote host.
$ ssh remotehost "tail x;head x" 2>~/stderr
$ cat stderr
tail: cannot open `x' for reading: No such file or directory
head: cannot open `x' for reading: No such file or directory

I don't see your problem; it works as designed:
$ ssh remotehost 'ls nosuchfile; ls /etc/passwd' >/tmp/stdout 2>/tmp/stderr
$ cat /tmp/stdout
/etc/passwd
$ cat /tmp/stderr
nosuchfile not found

Two things:
Using 2>&1 in a remote ssh command results in the error ending up inside the local tarfile, resulting in a 'broken' backup.
If you want to apply a redirection on the remote side of the ssh, remember to escape the redirection operator.
My suggestion would be to redirect stderr on the remote side to a file and pick it up later, in case of an error.
example:
ssh -t remotehost tar -cf - /mnt/backup 2\>backup.err > localbackup.tar
EXITSTATUS=$?
if [ "$EXITSTATUS" != "0" ]; then
  echo Error occurred!
  ssh remotehost cat backup.err > localbackup.errors
  cat localbackup.errors
  ssh remotehost rm backup.err
else
  echo Backup completed successfully!
  ssh remotehost rm backup.err
fi

Have you tried ssh -t to create a pseudo-TTY at the remote end?

Related

How to redirect standard error to a file

In Linux, if I want to redirect standard error to a file, I can do this:
$ ls -l /bin/usr 2> ls-error.txt
But when I try:
$ foo=
$ echo ${foo:?"parameter is empty"} 2> ls-error.txt
The result in terminal is:
bash: foo: parameter is empty
It doesn't work!
Can somebody explain why?
I thought ${parameter:?word} would send the value of word to standard error.
echo ${foo:?"parameter is empty"} 2>ls-error.txt redirects the stderr of echo, but the error message is produced by the shell while expanding ${foo:?"parameter is empty"}.
You can get the result you want by redirecting a block (or a subshell) instead so that the shell's stderr is included in the redirection:
{ echo "${foo:?"parameter is empty"}"; } 2>ls-error.txt
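To check it, assuming the current directory is writable (the exact wording of the message can vary between bash versions):
$ foo=
$ { echo "${foo:?"parameter is empty"}"; } 2>ls-error.txt
$ cat ls-error.txt
bash: foo: parameter is empty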
Try this command:
($echo ${foo:?"parameter is empty"}) 2> ls-error.txt
In case you would like to redirect both standard and error output, AND still see these messages when executing your command, you can use the tee command:
$ echo ${foo:?"parameter is empty"} |& tee -a ls-error.txt

Get output on screen from linux command

I tried to capture a command's output, but I couldn't get it. If I run it manually, it returns the normal result on screen.
[root@mymachine ~]# dnsdist -e "topQueries()"
1 Rest 0 100.0%
I tried dnsdist -e 'topQueries()' 2>&1 1> /tmp/abc. But /tmp/abc was still empty.
I don't think the screen command can resolve it.
Does anyone have any recommendations?
Additional information:
1 Rest 0 100.0%
is the result from topQueries(). '-e' means execute a command in dnsdist. It works like accessing mysql and then running a SQL statement.
As rkosegi commented, order is important. When you run:
examplecommand 2>&1 1> /tmp/abc
It takes the stderr output and sends it to where stdout is currently going and then sends stdout to the file.
You may want, instead:
examplecommand 1> /tmp/abc 2>&1
which sends stdout to the file, then sends stderr to where stdout is currently going, namely the file.
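A quick way to see the difference is with a command that fails (the output shown is from GNU ls; exact message wording varies):
$ ls nosuchfile 2>&1 1> /tmp/abc    # stderr still reaches the terminal
ls: cannot access 'nosuchfile': No such file or directory
$ ls nosuchfile 1> /tmp/abc 2>&1    # both streams end up in the file
$ cat /tmp/abc
ls: cannot access 'nosuchfile': No such file or directory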
To have stderr and stdout both on the screen and in a file /tmp/abc, you can use tee:
$ cat a.sh
echo "stderr" >&2
echo "stdout"
$ ./a.sh 2>&1 | tee /tmp/abc
stderr
stdout
$ cat /tmp/abc
stderr
stdout

How can I retain stderr from nohup?

Nohup redirects stderr to stdout if it points to a terminal, but I want to keep the stderr output on the terminal.
Is there a way to accomplish that? Is there an alternative?
I don't know if I understood correctly or not.
You mean that you don't want to see the errors in the terminal?
If yes:
If you want to save the errors in a file:
nohup command 2> file.txt
If you don't need the errors:
nohup command 2> /dev/null
2 means the error output (stderr) of the command.
2> file.txt means write that error output to file.txt.
Just redirect it somewhere else, so it's not the terminal:
nohup bash -c 'echo OUT ; echo ERR >& 2' 2> err
You can redirect the stderr back to stdout instead of to a file to keep the output in the terminal, but it doesn't make much sense: nohup is for situations where the terminal might get lost, in which case you'll lose the stderr.
nohup bash -c 'echo OUT ; echo ERR >& 2' 2> >(cat)
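If the terminal is guaranteed to stay around, another option (assuming your system provides /dev/tty) is to point stderr explicitly back at the controlling terminal:
nohup bash -c 'echo OUT ; echo ERR >& 2' > out.log 2> /dev/tty
The same caveat applies: once the terminal goes away, writes to /dev/tty fail.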

How to redirect stdout/stderr when /dev/null is not writable for normal users

How to disable stdout or stderr in bash scripts temporarily?
Of course the most common way is to redirect stdout or stderr to /dev/null.
But on some systems /dev/null may be unwritable for normal users.
I am writing some scripts that aim to be portable, so I would prefer not to rely on /dev/null.
Some blogs/posts say that >&- can close stdout, but when I tried echo 123 >&- in a bash terminal, it just failed with the message "bash: echo: write error: Bad file descriptor"
Surely I can do it by redirecting stdout or stderr to a tmp file like this:
some_command > /tmp/null
But what I want is a more "elegant" way
I think perhaps I can achieve this by using pipe like this:
some_command | :
But in this way, it may "pollute" the exit code of the original command.
Here is a possible way to do what you want:
( my_cmd 3>&1 1>&2 2>&3- ) | :
This temporarily sends stdout to a new file descriptor, 3, and redirects stderr to stdout, so that the stderr pipes into the command (in this case, :). Then the new file descriptor is routed back out to stdout. This avoids piping the stdout of my_cmd into :. The - closes file descriptor 3 after it's used.
To check the exit status of my_cmd after the above, examine ${PIPESTATUS[0]}. $PIPESTATUS is a bash array variable that holds the exit status of each command in the most recent pipeline.
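A minimal illustration, using false as the failing command (false writes nothing, so there is no broken-pipe complication):
$ false | :
$ echo "${PIPESTATUS[@]}"
1 0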
I think the really correct answer is to investigate why /dev/null isn't world writable. Having it not so is an off-standard system configuration and may cause system problems. The above work-around is a little messy by comparison.
Based on what I wrote earlier and @nos's comment above, here's an example:
(assuming you have no file called 'zzz' in your current directory, and that '.' is readable)
#!/bin/bash
set -o pipefail
ls . 2>&1 |:
echo $?
ls zzz 2>&1 |:
echo $?
The pipelines succeed and fail silently and maintain the exit code. Note that you can probably still make a pipeline example where this would not produce the desired results. I haven't come up with one in my head yet, but that doesn't mean it's not out there. The best answer, as many have noted already, is to fix the system so that /dev/null is world writable.
EDIT: Changed /bin/sh to /bin/bash, although this probably isn't necessary. But since I haven't tested this against a true Bourne Shell, I decided to err on the side of caution.
EDIT: Another script, showing several different redirections, and using the |& shortcut for 2>&1 |. If you run this, you'll notice that some of the ls failures return a 141 exit status rather than the expected 2. This is a broken pipe exit status, but still represents a failure.
#!/bin/bash
set -o pipefail
# start with commands that should succeed
# redirect everything to ':'
echo "ls . |& :"
ls . |& :
echo $?
# redirect only stdout to ':'
echo "ls . | :"
ls . | :
echo $?
# redirect only stderr to ':'
echo "((ls . 1>&3) |& : ) 3>&1"
((ls . 1>&3) |& : ) 3>&1
echo $?
# now move to failures
# redirect everything to ':'
echo "ls zzz |& :"
ls zzz |& :
echo $?
# redirect only stdout to ':'
echo "ls zzz |:"
ls zzz |:
echo $?
# redirect only stderr to ':'
echo "((ls zzz 1>&3) |& : ) 3>&1"
((ls zzz 1>&3) |& : ) 3>&1
echo $?
I use two subshells when I'm attempting to destroy stdout but keep stderr. You could do it without the outer one. In fact, that might be better. Instead of getting a broken pipe error, you get a 1 exit status.

Shell script to compare remote directories

I have a shell script that I am using to compare directory contents. The script has to ssh to different servers to get a directory listing. When I run the script below, I get the /tmp directory listing of the server I am logged into, not that of the servers I am trying to ssh to. Could you please tell me what I am doing wrong?
The config file used in the script is as follows (called config.txt):
server1,server2,/tmp
The script is as follows
#!/bin/sh
CONFIGFILE="config.txt"
IFS=","
while read a b c
do
SERVER1=$a
SERVER2=$b
COMPDIR=$c
`ssh user@$SERVER1 'ls -l $COMPDIR'`| sed -n '1!p' >> server1.txt
`ssh user@$SERVER2 'ls -l $COMPDIR'`| sed -n '1!p' >> server2.txt
done < $CONFIGFILE
When I look at the outputs of server1.txt and server2.txt, they are both exactly the same: the contents of /tmp on the server the script is running on (not server1 or 2). Doing the ssh + directory listing on the command line works just fine. I am also getting the error "Pseudo-terminal will not be allocated because stdin is not a terminal". Adding -t -t to the ssh command isn't helping either.
Thank you
I have the back ticks in order to execute the command.
Backticks are not needed to execute a command - they are used to expand the standard output of the command into the command line. Certainly you don't want the output of your ssh commands to be interpreted as commands. Thus, it should work fine without the backticks:
ssh user@$SERVER1 "ls -l $COMPDIR" | sed -n '1!p' >>server1.txt
ssh user@$SERVER2 "ls -l $COMPDIR" | sed -n '1!p' >>server2.txt
(provided that double quotes are used, so that $COMPDIR is expanded locally).
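You can see the quoting difference without any remote host; here bash -c stands in for the remote shell (COMPDIR is not exported, so the inner shell doesn't see it):
COMPDIR=/tmp
bash -c "ls -l $COMPDIR"   # double quotes: the local shell expands $COMPDIR, so /tmp is listed
bash -c 'ls -l $COMPDIR'   # single quotes: the inner shell expands it, finds it unset, and lists the current directory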
First you need to generate keys so you can log in to the remote host without a password:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub remote-host
Then try to ssh without a password:
ssh remote-host
Then invoke it in your script, but first do a sanity check:
var1=$(ssh remote-host echo connected) || { echo "Cannot connect to remote host" >&2; exit 1; }
