Bash: multiple redirection - linux

Early in a script, I see this:
exec 3>&2
And later:
{ $app $conf_file &>$app_log_file & } 1>&3 2>&1
My understanding of this looks something like this:
Create fd 3
Redirect fd 3 output to stderr
(Upon app execution) redirect stdout to fd 3, then redirect stderr to stdout
Isn't that some kind of circular madness? 3>stderr>stdout>3>etc?
I'm especially concerned as to the intention/implications of this line because I'd like to start running some apps using this script with valgrind. I'd like to see valgrind's output interspersed with the app's log statements, so I'm hoping that the default output of stderr is captured by the confusing line above. However, in some of the crashes that have led me to wanting to use valgrind, I've seen glibc errors outputted straight to the terminal, rather than captured in the app's log file.
So, the question(s): What does that execution line do, exactly? Does it capture stderr? If so, why do I see glibc output on the command line when an app crashes? If not, how should I change it to accomplish this goal?

You misread the 3>&2 syntax. It means open fd 3 and make it a duplicate of fd 2. See Duplicating File Descriptors.
In the same way, 2>&1 does not mean "make fd 2 point to the location of fd 1"; it means re-open fd 2 as a duplicate of fd 1 (mostly the same net effect, but different semantics).
Also remember that redirections are applied in order, as they appear, and that there are no "pointers" here. So 2>&1 1>/dev/null does not redirect standard error to /dev/null; it leaves standard error attached to wherever standard output had been attached (probably the terminal).
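That order sensitivity is easy to demonstrate; a quick sketch using sh -c (here /dev/null simply swallows whatever reaches fd 1):

```shell
# 2>&1 runs first: fd 2 becomes a duplicate of fd 1 (typically the
# terminal). Then 1>/dev/null re-points only fd 1, so "err" still
# reaches the original stdout while "out" is discarded.
sh -c 'echo out; echo err >&2' 2>&1 1>/dev/null

# Reversed order: fd 1 goes to /dev/null first, then fd 2 duplicates
# it, so BOTH streams are discarded.
sh -c 'echo out; echo err >&2' 1>/dev/null 2>&1
```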
So the code in question does this:
Open fd 3 as a duplicate of fd 2
Re-open fd 1 as a duplicate of fd 3
Re-open fd 2 as a duplicate of fd 1
Effectively those lines send everything to standard error (or wherever fd 2 was attached when the initial exec line ran). If the redirections had been 2>&1 1>&3 then they would have swapped locations. I wonder if that was the original intention of that line since, as written, it is fairly pointless.
Not to mention that, with the redirection inside the brace list, the redirections outside the brace list are fairly useless.
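If swapping the two streams really was the original intention, the usual idiom routes one of them through a spare descriptor; a sketch:

```shell
# Swap stdout and stderr for a command:
#   3>&1  save the current stdout in fd 3
#   1>&2  re-open fd 1 as a duplicate of fd 2
#   2>&3  re-open fd 2 as a duplicate of the saved fd 3
#   3>&-  close the spare descriptor
sh -c 'echo out; echo err >&2' 3>&1 1>&2 2>&3 3>&-
```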

Ok, well let's see what happens in practice:
peter@tesla:/tmp/test$ bash -c 'exec 3>&2; { sleep 60m &>logfile & } 1>&3 2>&1' > stdout 2>stderr
peter@tesla:/tmp/test$ psg sleep
peter 22147 0.0 0.0 7232 836 pts/14 S 15:51 0:00 sleep 60m
peter@tesla:/tmp/test$ ll /proc/22147/fd
total 0
lr-x------ 1 peter peter 64 Jul 8 15:51 0 -> /dev/null
l-wx------ 1 peter peter 64 Jul 8 15:51 1 -> /tmp/test/logfile
l-wx------ 1 peter peter 64 Jul 8 15:51 2 -> /tmp/test/logfile
l-wx------ 1 peter peter 64 Jul 8 15:51 3 -> /tmp/test/stderr
I'm not sure exactly why the author of your script ended up with that line of code. Presumably it made sense to them when they wrote it. The redirections outside the curly braces happen before the redirections inside, so they're both overridden by the &>logfile. Even errors from bash, like command not found, would end up in the logfile.
You say you see glibc messages on your terminal when the app crashes. I think your app must be using fd 3 after it starts. i.e., it was written to be started from a script that opened fd 3 for it, or else it opens /dev/tty or something.
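Another plausible culprit, assuming the stray messages are glibc fatal errors (e.g. malloc corruption reports): glibc writes those to /dev/tty by default, bypassing any fd redirection, unless the LIBC_FATAL_STDERR_ environment variable is set. A hedged sketch of the valgrind plan, where $app, $conf_file and $app_log_file stand in for the script's own variables:

```shell
# Hypothetical stand-ins for the script's variables:
app=/bin/echo
conf_file=hello
app_log_file=$(mktemp)

# Ask glibc to send fatal errors to stderr instead of /dev/tty:
export LIBC_FATAL_STDERR_=1

if command -v valgrind >/dev/null; then
    # --log-fd=2 puts valgrind's report on stderr, so &> collects
    # valgrind output and the app's own stdout/stderr in one file.
    valgrind --log-fd=2 "$app" "$conf_file" &>"$app_log_file"
else
    "$app" "$conf_file" &>"$app_log_file"
fi
```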
BTW, psg is a function I define in my .bashrc:
psg(){ ps aux | grep "${@:-$USER}" | grep -v grep; }
recently updated to:
psg(){ local pids=$(pgrep -f "${@:--u$USER}"); [[ $pids ]] && ps u -p $pids; }
psgw(){ local pids=$(pgrep -f "${@:--u$USER}"); [[ $pids ]] && ps uww -p $pids; }

You need a context first, as in @Peter Cordes's example. He provided the context by setting >stdout and 2>stderr first.
I have modified his example a bit.
$ bash -c 'exec 3>&2; { sleep 60m & } 1>&3 2>&1' >stdout 2>stderr
$ ps aux | grep sleep
logan 272163 0.0 0.0 8084 580 pts/2 S 19:22 0:00 sleep 60m
logan 272165 0.0 0.0 8912 712 pts/2 S+ 19:23 0:00 grep --color=auto sleep
$ ll /proc/272163/fd
total 0
dr-x------ 2 logan logan 0 Aug 25 19:23 ./
dr-xr-xr-x 9 logan logan 0 Aug 25 19:23 ../
lr-x------ 1 logan logan 64 Aug 25 19:23 0 -> /dev/null
l-wx------ 1 logan logan 64 Aug 25 19:23 1 -> /tmp/tmp.Vld71a451u/stderr
l-wx------ 1 logan logan 64 Aug 25 19:23 2 -> /tmp/tmp.Vld71a451u/stderr
l-wx------ 1 logan logan 64 Aug 25 19:23 3 -> /tmp/tmp.Vld71a451u/stderr
First, exec 3>&2 points fd 3 at the file named stderr. Then 1>&3 points fd 1 at that same file, and lastly 2>&1 points fd 2 at it too! (Don't confuse fd 2, the standard error stream, with stderr here, which is just the arbitrary name of a regular file.)
The reason fd0 is set to /dev/null, I'm guessing, is because the command is run in a non-interactive shell.
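That guess can be confirmed: shells without job control (i.e. non-interactive ones, as with bash -c) redirect a background command's standard input from /dev/null. A quick check:

```shell
# A backgrounded job in a non-interactive shell gets /dev/null as
# stdin, matching the fd 0 entry in the listing above:
bash -c 'sleep 1 & readlink /proc/$!/fd/0'
# → /dev/null
```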

Related

Check if an already running process has its stdin redirected on Linux?

Given the PID of a process, is there a way to find out if this process has the stdin or stdout redirected?
I have one application that reads from the stdin. For convenience, I usually launch this application with stdin redirected from file, like this:
app < input1.txt
The problem is that sometimes I launch the application and forget what was the input file that I used. Is there a way to find out which file has been used to redirect the stdin?
Using ps aux | grep PID allows me to see the command line used, but it does not give me any information about stdin or stdout.
I have also tried to look in top, as well as in /proc/PID/* but haven't found anything.
I am using CentOS 7, if that helps.
You should just be able to look at /proc/<PID>/fd for this information. For example, if I redirect stdin for a command from a file:
sleep inf < somefile.txt
Then I will find, in the corresponding /proc directory:
$ ls -l /proc/12345/fd
lr-x------. 1 lars lars 64 Nov 4 21:43 0 -> /home/lars/somefile.txt
lrwx------. 1 lars lars 64 Nov 4 21:43 1 -> /dev/pts/5
lrwx------. 1 lars lars 64 Nov 4 21:43 2 -> /dev/pts/5
The same thing works when redirecting stdout to a file. If I run:
sleep inf > somefile.txt
Then I see:
$ ls -l /proc/23456/fd
lrwx------. 1 lars lars 64 Nov 4 21:45 0 -> /dev/pts/5
l-wx------. 1 lars lars 64 Nov 4 21:45 1 -> /home/lars/somefile.txt
lrwx------. 1 lars lars 64 Nov 4 21:45 2 -> /dev/pts/5
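To get just the answer rather than the whole listing, readlink on a single fd entry works; a sketch, with the PID taken from $!:

```shell
# Start a process with stdin redirected from a file, then ask the
# kernel where its fd 0 points:
tmpfile=$(mktemp)
sleep 5 < "$tmpfile" &
pid=$!
readlink /proc/$pid/fd/0    # prints the path of $tmpfile
kill $pid
```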

Is there a real file for stream i/o in linux?

I am confused about the concept of I/O streams in Linux.
There are 3 standard streams: standard input, standard output, and standard error.
Is there a real file, in RAM or on the hard disk, for stdin, stdout, and stderr?
For example: does the kernel write all keyboard input to a stdin file, which bash (for example) then reads?
And if that is true, does that mean any software can read this file at any time?
Every process has (at least initially) the standard stdin/stdout/stderr file handles opened for it. Every process also has a representation in /proc, which is a virtual file system created by the kernel to access all kinds of stuff about the processes. So...
marc@panic:~$ ps
PID TTY TIME CMD
4367 pts/0 00:00:00 bash <--- my bash process
4394 pts/0 00:00:00 ps
marc@panic:~$ cd /proc/4367/fd <--- my bash process's /proc file descriptors
marc@panic:/proc/4367/fd$ ls -l
total 0
lrwx------ 1 marc marc 64 Nov 17 11:17 0 -> /dev/pts/0
lrwx------ 1 marc marc 64 Nov 17 11:17 1 -> /dev/pts/0
lrwx------ 1 marc marc 64 Nov 17 11:17 2 -> /dev/pts/0
lrwx------ 1 marc marc 64 Nov 17 11:18 255 -> /dev/pts/0
Files 0, 1, and 2 correspond to stdin, stdout, and stderr, and they're simply symlinks to the particular pseudo-terminal my login session is using.
I wouldn't call these real files, but:
You can use /dev/stdout etc on Linux.
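For instance, a program that insists on being given a file name can still be pointed at the calling process's standard output; a small sketch:

```shell
# /dev/stdout resolves to /proc/self/fd/1, i.e. whatever the
# writing process currently has as fd 1:
echo hello > /dev/stdout    # equivalent to a plain: echo hello
```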

bash 4.3.42: is /dev/fd/1 incorrect after redirecting or closing stdout?

I redirect (or close) stdout but /dev/fd (and /proc/self/fd) still shows stdout going to the tty:
% exec 1>log
% ls -l /dev/fd/ >/dev/stderr
and get this
total 0
lrwx------ 1 guest guest 64 Sep 22 15:31 0 -> /dev/pts/1
l-wx------ 1 guest guest 64 Sep 22 15:31 1 -> /dev/pts/1
lrwx------ 1 guest guest 64 Sep 22 15:31 2 -> /dev/pts/1
lr-x------ 1 guest guest 64 Sep 22 15:31 3 -> /proc/14374/fd/
(ls -l /proc/self/fd/ prints the same).
The command
% date
does not print date on screen but
% cat log > /dev/stderr
Tue Sep 22 15:59:48 PDT 2015
shows that the output of date command has been written to 'log'
I can close fd 1 in a c program (or via exec 1>&- ) and /dev/fd/1 still shows it pointing to my tty. Anyone have an explanation for this behavior?
Fedora fc22 4.1.6-201 & Archlinux version??? on my Raspberry PI
You closed file descriptor 1 of the shell, as you expected. However, when you checked to see what file descriptor 1 was, you used:
ls /dev/fd > /dev/stderr
But what’s that > do? For that single command, it reopens file descriptor 1, pointing to the file /dev/stderr. Hence, since /dev/stderr pointed to your pseudoterminal, ls’s file descriptor 1 will also point to your pseudoterminal, and /dev/fd reflects that. If you wanted to print out the file descriptors of the shell’s process, rather than ls’s process, you’d need to specifically say that:
ls -l /proc/$$/fd > /dev/stderr
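The distinction is worth spelling out: $$ is expanded by the shell before the child runs, while /proc/self is resolved by whichever process opens it. A sketch:

```shell
# $$ expands to the shell's PID first, so this lists the *shell's*
# descriptors even though ls does the reading:
ls -l /proc/$$/fd

# /proc/self, by contrast, is resolved by ls itself, so this lists
# ls's own descriptors (including any redirections applied to ls):
ls -l /proc/self/fd
```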

How to reroute stdout, stderr back to /dev/tty

I just ssh-ed to some remote server and found that the stdout and stderr of all commands/processes I try to run in bash are redirected somewhere.
So, I have the following questions.
How to detect:
1) Which file are stdout and stderr being rerouted to in Linux?
and
2) How do I reroute stdout and stderr back to /dev/tty by default?
Thank you in advance.
A command that should do literally what you asked for in (2) is
exec >/dev/tty 2>&1
But I suspect that your analysis of the problem is incorrect. It would be useful to see the output of ssh -v ... (where ... is whatever arguments you typed in your original ssh command).
The command:
ls -l /proc/$$/fd/{1,2}
will show you which files are open as stdout (file descriptor 1) and stderr (file descriptor 2).
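A related idiom, if you want a redirection to be temporary: save the original descriptor in a spare fd before pointing stdout elsewhere, then restore it. A sketch:

```shell
log=$(mktemp)
exec 3>&1            # save the current stdout in fd 3
exec 1>"$log"        # from here on, stdout goes to the log
echo "goes to the log"
exec 1>&3 3>&-       # restore stdout and close the spare fd
echo "back on the original stdout"
```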
An answer to your first question could be found in /proc/self/fd. It contains symlinks to the files (or other things, pipes, sockets, etc) that your bash instance is connected to.
root@mammon:~# ls -l /proc/self/fd
total 0
lrwx------ 1 root root 64 May 21 02:18 0 -> /dev/pts/3
lrwx------ 1 root root 64 May 21 02:18 1 -> /dev/pts/3
lrwx------ 1 root root 64 May 21 02:18 2 -> /dev/pts/3
lr-x------ 1 root root 64 May 21 02:18 3 -> /proc/15529/fd/
root@mammon:~# ls -l /proc/self/fd < /dev/null
total 0
lr-x------ 1 root root 64 May 21 02:18 0 -> /dev/null
lrwx------ 1 root root 64 May 21 02:18 1 -> /dev/pts/3
lrwx------ 1 root root 64 May 21 02:18 2 -> /dev/pts/3
lr-x------ 1 root root 64 May 21 02:18 3 -> /proc/15536/fd/
root@mammon:~# ls -l /proc/self/fd | cat
total 0
lrwx------ 1 root root 64 May 21 02:18 0 -> /dev/pts/3
l-wx------ 1 root root 64 May 21 02:18 1 -> pipe:[497711]
lrwx------ 1 root root 64 May 21 02:18 2 -> /dev/pts/3
lr-x------ 1 root root 64 May 21 02:18 3 -> /proc/15537/fd/
root@mammon:~#
In the first example, you can see the first 3 file descriptors (which are the standard output, input, and error, respectively) all point to my pseudo-terminal /dev/pts/3. In the second example I've redirected the input to /dev/null, so the standard input file descriptor points to /dev/null. And in the final example I've sent the output of ls to cat through a pipe, and the standard input file descriptor reflects this. As far as I know there is no way to find which process has the other end of the pipe. In all examples there is the fourth file descriptor that represents the handle that ls has for reading /proc/self/fd. In this case it says /proc/15537 because /proc/self is in fact a symlink to /proc/pid where pid is the PID of the process accessing /proc/self.
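In fact the peer of a pipe can usually be located: both ends show the same pipe:[inode] target, so scanning every process's fd symlinks for that inode finds the other side (given permission to read those /proc entries). A sketch, using the inode from the listing above:

```shell
# Both ends of a pipe share one inode, e.g. pipe:[497711] above.
# Scan all processes' fds for the same target to find the peer:
inode='pipe:[497711]'
for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "$inode" ]; then
        echo "$fd"
    fi
done
```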
It can only happen if your login shell was started with a pipe to the tee command, with another console as a parameter.
Let me explain.
Say you are logged in on /dev/tty1 and someone else is logged in on /dev/tty2. If you start your shell (bash) with the following command, all of its STDOUT/STDERR will be rerouted/copied to the other console (/dev/tty2 in this case).
bash 2>&1 | tee /dev/tty2
So, someone sitting at /dev/tty2 will see all of your activity.
If someone's login shell is /bin/bash 2>&1 | tee /dev/tty2 instead of /bin/bash, it'll happen every time they log in. But I am not sure a login shell can be set that way.
If someone reroutes all the output of your shell this way, you can check for it by looking for a tee running in the background:
ps ax | grep tee
This will output something like
tee /dev/tty2

Redirect STDERR / STDOUT of a process AFTER it's been started, using command line?

In the shell you can do redirection, > <, etc., but how about AFTER a program is started?
Here's how I came to ask this question: a program running in the background of my terminal keeps outputting annoying text. It's an important process, so I have to open another shell to avoid the text. I'd like to be able to >/dev/null it, or apply some other redirection, so I can keep working in the same shell.
Short of closing and reopening your tty (i.e. logging off and back on, which may also terminate some of your background processes in the process) you only have one choice left:
attach to the process in question using gdb, and run:
p dup2(open("/dev/null", 0), 1)
p dup2(open("/dev/null", 0), 2)
detach
quit
e.g.:
$ tail -f /var/log/lastlog &
[1] 5636
$ ls -l /proc/5636/fd
total 0
lrwx------ 1 myuser myuser 64 Feb 27 07:36 0 -> /dev/pts/0
lrwx------ 1 myuser myuser 64 Feb 27 07:36 1 -> /dev/pts/0
lrwx------ 1 myuser myuser 64 Feb 27 07:36 2 -> /dev/pts/0
lr-x------ 1 myuser myuser 64 Feb 27 07:36 3 -> /var/log/lastlog
$ gdb -p 5636
GNU gdb 6.8-debian
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Attaching to process 5636
Reading symbols from /usr/bin/tail...(no debugging symbols found)...done.
Reading symbols from /lib/librt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/librt.so.1
Reading symbols from /lib/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/libpthread.so.0...(no debugging symbols found)...done.
[Thread debugging using libthread_db enabled]
[New Thread 0x7f3c8f5a66e0 (LWP 5636)]
Loaded symbols for /lib/libpthread.so.0
Reading symbols from /lib/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
(no debugging symbols found)
0x00007f3c8eec7b50 in nanosleep () from /lib/libc.so.6
(gdb) p dup2(open("/dev/null",0),1)
[Switching to Thread 0x7f3c8f5a66e0 (LWP 5636)]
$1 = 1
(gdb) p dup2(open("/dev/null",0),2)
$2 = 2
(gdb) detach
Detaching from program: /usr/bin/tail, process 5636
(gdb) quit
$ ls -l /proc/5636/fd
total 0
lrwx------ 1 myuser myuser 64 Feb 27 07:36 0 -> /dev/pts/0
lrwx------ 1 myuser myuser 64 Feb 27 07:36 1 -> /dev/null
lrwx------ 1 myuser myuser 64 Feb 27 07:36 2 -> /dev/null
lr-x------ 1 myuser myuser 64 Feb 27 07:36 3 -> /var/log/lastlog
lr-x------ 1 myuser myuser 64 Feb 27 07:36 4 -> /dev/null
lr-x------ 1 myuser myuser 64 Feb 27 07:36 5 -> /dev/null
You may also consider:
using screen; screen provides several virtual TTYs you can switch between without having to open new SSH/telnet/etc, sessions
using nohup; this allows you to close and reopen your session without losing any background processes in the... process.
This will do:
strace -ewrite -p $PID
It's not that clean (it shows lines like write(#, <text you want to see>)), but it works!
You might also dislike the fact that arguments are abbreviated. To control that use the -s parameter that sets the maximum length of strings displayed.
It catches all streams, so you might want to filter that somehow:
strace -ewrite -p $PID 2>&1 | grep "write(1"
shows only descriptor 1 calls. 2>&1 is to redirect STDERR to STDOUT, as strace writes to STDERR by default.
Redirect output from a running process to another terminal, file, or screen:
tty
ls -l /proc/20818/fd
gdb -p 20818
Inside gdb:
p close(1)
p open("/dev/pts/4", 1)
p close(2)
p open("/tmp/myerrlog", 1)
q
Detach a running process from the bash terminal and keep it alive:
[Ctrl+z]
bg %1 && disown %1
[Ctrl+d]
Explanation:
20818 - just an example of running process PID
p - print result of gdb command
close(1) - close standard output
/dev/pts/4 - terminal to write to
close(2) - close error output
/tmp/myerrlog - file to write to
q - quit gdb
bg %1 - run stopped job 1 on background
disown %1 - detach job 1 from terminal
[Ctrl+z] - stop the running process
[Ctrl+d] - exit terminal
riffing off vladr's (and others') excellent research:
create the following two files in the same directory, something in your path, say $HOME/bin:
silence.gdb, containing (from vladr's answer):
p dup2(open("/dev/null",0),1)
p dup2(open("/dev/null",0),2)
detach
quit
and silence, containing:
#!/bin/sh
if [ "$0" -a "$1" ]; then
    gdb -p "$1" -x "$0.gdb"
else
    echo "Must specify PID of process to silence" >&2
fi
chmod +x ~/bin/silence # make the script executable
Now, next time you forget to redirect firefox, for example, and your terminal starts getting cluttered with the inevitable "(firefox-bin:5117): Gdk-WARNING **: XID collision, trouble ahead" messages:
ps # look for process xulrunner-stub (in this case we saw the PID in the error above)
silence 5117 # run the script, using PID we found
You could also redirect gdb's output to /dev/null if you don't want to see it.
Not a direct answer to your question, but it's a technique I've been finding useful over the last few days: Run the initial command using 'screen', and then detach.
This is a bash script based on the previous answers, which redirects the log file of a running process while it stays open; it is used as a postrotate script in a logrotate setup.
#!/bin/bash
pid=$(cat /var/run/app/app.pid)
logFile="/var/log/app.log"
reloadLog()
{
    if [ "$pid" = "" ]; then
        echo "invalid PID"
    else
        gdb -p $pid >/dev/null 2>&1 <<LOADLOG
set scheduler-locking on
p close(1)
p open("$logFile", 1)
p close(2)
p open("$logFile", 1)
q
LOADLOG
        LOG_FILE=$(ls /proc/${pid}/fd -l | fgrep " 1 -> " | awk '{print $11}')
        echo "log file set to $LOG_FILE"
    fi
}
reloadLog
Updated: for gdb v7.11 and later, set scheduler-locking on (or one of the other options mentioned here) is required, because after attaching, gdb does not stop all running threads, and you may not be able to close/open your log file while it is in use.
Dupx is a simple *nix utility to redirect standard output/input/error of an already running process.
https://www.isi.edu/~yuri/dupx/
You can use reredirect (https://github.com/jerome-pouiller/reredirect/).
Type
reredirect -m FILE PID
and outputs (standard and error) will be written in FILE.
The reredirect README also explains how to restore the original state of the process, how to redirect to another command, and how to redirect only stdout or stderr.
reredirect also provides a script called relink that redirects to the current terminal:
relink PID
relink PID | grep useful_content
(reredirect seems to have the same features as Dupx, described in another answer, but it does not depend on gdb.)
