How to reroute stdout, stderr back to /dev/tty - linux

I just SSH-ed into a remote server and found that the stdout and stderr of all commands/processes I try to run in bash are redirected somewhere.
So I have the following questions.
How to detect:
1) Which file are stdout and stderr being rerouted to in Linux?
and
2) How do I reroute stdout and stderr back to /dev/tty by default?
Thank you in advance.

A command that should do literally what you asked for in (2) is
exec >/dev/tty 2>&1
But I suspect that your analysis of the problem is incorrect. It would be useful to see the output of ssh -v ... (where ... is whatever arguments you typed in your original ssh command).
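A quick sanity check after running that exec line (a minimal sketch using only the shell's test builtin) is to ask whether fd 1 is attached to a terminal again:
[ -t 1 ] && echo "stdout is a terminal" || echo "stdout is still redirected"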

The command:
ls -l /proc/$$/fd/{1,2}
will show you which files are open as stdout (file descriptor 1) and stderr (file descriptor 2).
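An equivalent check, if you prefer to see the targets printed directly (a sketch; readlink from coreutils is assumed to be available):
readlink /proc/$$/fd/1 /proc/$$/fd/2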

An answer to your first question can be found in /proc/self/fd. It contains symlinks to the files (or other things: pipes, sockets, etc.) that your bash instance is connected to.
root@mammon:~# ls -l /proc/self/fd
total 0
lrwx------ 1 root root 64 May 21 02:18 0 -> /dev/pts/3
lrwx------ 1 root root 64 May 21 02:18 1 -> /dev/pts/3
lrwx------ 1 root root 64 May 21 02:18 2 -> /dev/pts/3
lr-x------ 1 root root 64 May 21 02:18 3 -> /proc/15529/fd/
root@mammon:~# ls -l /proc/self/fd < /dev/null
total 0
lr-x------ 1 root root 64 May 21 02:18 0 -> /dev/null
lrwx------ 1 root root 64 May 21 02:18 1 -> /dev/pts/3
lrwx------ 1 root root 64 May 21 02:18 2 -> /dev/pts/3
lr-x------ 1 root root 64 May 21 02:18 3 -> /proc/15536/fd/
root@mammon:~# ls -l /proc/self/fd | cat
total 0
lrwx------ 1 root root 64 May 21 02:18 0 -> /dev/pts/3
l-wx------ 1 root root 64 May 21 02:18 1 -> pipe:[497711]
lrwx------ 1 root root 64 May 21 02:18 2 -> /dev/pts/3
lr-x------ 1 root root 64 May 21 02:18 3 -> /proc/15537/fd/
root@mammon:~#
In the first example, you can see the first 3 file descriptors (which are the standard input, output, and error, respectively) all point to my pseudo-terminal /dev/pts/3. In the second example I've redirected the input from /dev/null, so the standard input file descriptor points to /dev/null. And in the final example I've sent the output of ls to cat through a pipe, and the standard output file descriptor reflects this. As far as I know there is no way to find out which process has the other end of the pipe. In all examples there is a fourth file descriptor that represents the handle ls has for reading /proc/self/fd. In this case it says /proc/15537 because /proc/self is in fact a symlink to /proc/<pid>, where <pid> is the PID of the process accessing /proc/self.
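Putting the two original questions together, here is a minimal sketch (assuming bash and a mounted /proc) that reports where stdout currently goes and, if it is not a terminal, reattaches both streams to the tty:
# report the current target of fd 1 and restore it if it has been redirected
if [ ! -t 1 ]; then
    echo "stdout currently goes to: $(readlink /proc/$$/fd/1)" >&2
    exec >/dev/tty 2>&1
fi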

It can only be done if your login shell is started with a pipe to the tee command, with another console as a parameter.
Let me explain.
Suppose you are logged in on /dev/tty1 and someone else is logged in on /dev/tty2. If you start your shell (bash) with the following command, all of its STDOUT/STDERR will be rerouted/copied to the other console (/dev/tty2 in this case).
bash 2>&1 | tee /dev/tty2
So someone sitting at /dev/tty2 will see all of your activity.
If someone's login shell is /bin/bash 2>&1 | tee /dev/tty2 instead of /bin/bash, it will happen every time they log in. But I am not sure a login shell can be set that way.
If someone reroutes all the output of your shell this way, you can detect it simply by checking whether any tee is running in the background:
ps ax | grep tee
This will output something like
tee /dev/tty2
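Another way to check, without grepping the process list (a sketch, assuming /proc is mounted): if your shell has been piped through tee, its own fd 1 will be an anonymous pipe rather than a tty.
case "$(readlink /proc/$$/fd/1)" in
    pipe:*) echo "stdout of this shell is a pipe (possibly tee)" ;;
    *)      echo "stdout of this shell is $(readlink /proc/$$/fd/1)" ;;
esac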

Related

Check if an already running process has stdin redirected on Linux?

Given the PID of a process, is there a way to find out if this process has the stdin or stdout redirected?
I have one application that reads from the stdin. For convenience, I usually launch this application with stdin redirected from file, like this:
app < input1.txt
The problem is that sometimes I launch the application and forget which input file I used. Is there a way to find out which file has been used to redirect stdin?
Using ps -aux | grep PID allows me to see the command line used, but it does not give me any information about stdin or stdout.
I have also tried looking in top, as well as in /proc/PID/*, but haven't found anything.
I am using CentOS 7, if that helps.
You should just be able to look at /proc/<PID>/fd for this information. For example, if I redirect stdin for a command from a file:
sleep inf < somefile.txt
Then I will find, in the corresponding /proc directory:
$ ls -l /proc/12345/fd
lr-x------. 1 lars lars 64 Nov 4 21:43 0 -> /home/lars/somefile.txt
lrwx------. 1 lars lars 64 Nov 4 21:43 1 -> /dev/pts/5
lrwx------. 1 lars lars 64 Nov 4 21:43 2 -> /dev/pts/5
The same thing works when redirecting stdout to a file. If I run:
sleep inf > somefile.txt
Then I see:
$ ls -l /proc/23456/fd
lrwx------. 1 lars lars 64 Nov 4 21:45 0 -> /dev/pts/5
l-wx------. 1 lars lars 64 Nov 4 21:45 1 -> /home/lars/somefile.txt
lrwx------. 1 lars lars 64 Nov 4 21:45 2 -> /dev/pts/5
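If you only care about stdin, a one-liner sketch (using the PID from the first example above as a placeholder) is enough:
# 12345 is the example PID; substitute the PID you are interested in
# prints the file stdin was redirected from, e.g. /home/lars/somefile.txt
readlink /proc/12345/fd/0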

Is there a real file for stream i/o in linux?

I'm confused about the concept of I/O streams in Linux.
There are 3 types of streams: standard input, standard output and standard error.
Is there a real file in RAM or on the hard disk for stdin, stdout and stderr?
For example, does the kernel write all keyboard input to a stdin file, which bash (for example) then reads?
And if this is true, does that mean any software can read this file at any time?
Every process has (at least initially) the standard stdin/stdout/stderr file handles opened for it. Every process also has a representation in /proc, which is a virtual file system created by the kernel to access all kinds of stuff about the processes. So...
marc@panic:~$ ps
PID TTY TIME CMD
4367 pts/0 00:00:00 bash <--- my bash process
4394 pts/0 00:00:00 ps
marc@panic:~$ cd /proc/4367/fd <--- my bash process's /proc file descriptors
marc@panic:/proc/4367/fd$ ls -l
total 0
lrwx------ 1 marc marc 64 Nov 17 11:17 0 -> /dev/pts/0
lrwx------ 1 marc marc 64 Nov 17 11:17 1 -> /dev/pts/0
lrwx------ 1 marc marc 64 Nov 17 11:17 2 -> /dev/pts/0
lrwx------ 1 marc marc 64 Nov 17 11:18 255 -> /dev/pts/0
files 0, 1, 2 correspond to stdin, stdout, stderr, and they're simply symlinks to the particular pseudo terminal my login session is using.
I wouldn't call these real files, but you can use /dev/stdout etc. on Linux.
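For example, a small sketch of how those /dev aliases behave (somefile.txt is just a placeholder name here): /dev/stdin always refers to whatever the process's fd 0 currently is, not to a fixed file on disk.
echo hello | cat /dev/stdin      # reads "hello" from the pipe
cat /dev/stdin < somefile.txt    # reads from somefile.txt (any existing file) instead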

bash 4.3.42: is /dev/fd/1 incorrect after redirecting or closing stdout?

I redirect (or close) stdout but /dev/fd (and /proc/self/fd) still shows stdout going to the tty:
% exec 1>log
% ls -l /dev/fd/ >/dev/stderr
and get this
total 0
lrwx------ 1 guest guest 64 Sep 22 15:31 0 -> /dev/pts/1
l-wx------ 1 guest guest 64 Sep 22 15:31 1 -> /dev/pts/1
lrwx------ 1 guest guest 64 Sep 22 15:31 2 -> /dev/pts/1
lr-x------ 1 guest guest 64 Sep 22 15:31 3 -> /proc/14374/fd/
(ls -l /proc/self/fd/ prints the same).
The command
% date
does not print the date on the screen, but
% cat log > /dev/stderr
Tue Sep 22 15:59:48 PDT 2015
shows that the output of the date command has been written to 'log'.
I can close fd 1 in a C program (or via exec 1>&-) and /dev/fd/1 still shows it pointing to my tty. Does anyone have an explanation for this behavior?
Fedora fc22 4.1.6-201 & Archlinux (version ???) on my Raspberry Pi
You closed file descriptor 1 of the shell, as you expected. However, when you checked to see what file descriptor 1 was, you used:
ls /dev/fd > /dev/stderr
But what’s that > do? For that single command, it reopens file descriptor 1, pointing to the file /dev/stderr. Hence, since /dev/stderr pointed to your pseudoterminal, ls’s file descriptor 1 will also point to your pseudoterminal, and /dev/fd reflects that. If you wanted to print out the file descriptors of the shell’s process, rather than ls’s process, you’d need to specifically say that:
ls -l /proc/$$/fd > /dev/stderr
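To make the difference concrete, here is a short sketch (both listings are sent to stderr so they do not disturb the shell's redirected fd 1):
ls -l /proc/self/fd >&2   # the fds of the ls process itself, including its reopened fd 1
ls -l /proc/$$/fd   >&2   # the fds of the interactive shell, which is what you wanted to inspect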

Bash: multiple redirection

Early in a script, I see this:
exec 3>&2
And later:
{ $app $conf_file &>$app_log_file & } 1>&3 2>&1
My understanding of this looks something like this:
Create fd 3
Redirect fd 3 output to stderr
(Upon app execution) redirect stdout to fd 3, then redirect stderr to stdout
Isn't that some kind of circular madness? 3>stderr>stdout>3>etc?
I'm especially concerned as to the intention/implications of this line because I'd like to start running some apps using this script with valgrind. I'd like to see valgrind's output interspersed with the app's log statements, so I'm hoping that the default output of stderr is captured by the confusing line above. However, in some of the crashes that have led me to wanting to use valgrind, I've seen glibc errors outputted straight to the terminal, rather than captured in the app's log file.
So, the question(s): What does that execution line do, exactly? Does it capture stderr? If so, why do I see glibc output on the command line when an app crashes? If not, how should I change it to accomplish this goal?
You misread the 3>&2 syntax. It means open fd 3 and make it a duplicate of fd 2. See Duplicating File Descriptors.
In the same way, 2>&1 does not mean "make fd 2 point to the location of fd 1"; it means re-open fd 2 as a duplicate of fd 1 (mostly the same net effect, but different semantics).
Also remember that all redirections occur as they happen and that there are no "pointers" here. So 2>&1 1>/dev/null does not redirect standard error to /dev/null; it leaves standard error attached to wherever standard output had been attached (probably the terminal).
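A quick way to see that ordering in action (a sketch; ls /nonexistent just serves as a command that writes to standard error):
ls /nonexistent 2>&1 1>/dev/null   # the error message still reaches the terminal
ls /nonexistent 1>/dev/null 2>&1   # the error message follows fd 1 into /dev/null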
So the code in question does this:
Open fd 3 as a duplicate of fd 2
Re-open fd 1 as a duplicate of fd 3
Re-open fd 2 as a duplicate of fd 1
Effectively those lines send everything to standard error (or rather, to wherever fd 2 was attached when the initial exec line ran). If the redirections had been 2>&1 1>&3 then they would have swapped locations. I wonder if that was the original intention of that line since, as written, it is fairly pointless.
Not to mention that, with the redirection inside the brace list, the redirections on the outside of the brace list are fairly useless.
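For what the question is actually after (valgrind output interspersed with the app's log), a sketch along these lines, reusing the variable names from the question, should do, since valgrind writes to standard error by default:
# $app, $conf_file, $app_log_file are the variables from the question;
# both the app's output and valgrind's messages end up in the log file
valgrind "$app" "$conf_file" >"$app_log_file" 2>&1 &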
Ok, well let's see what happens in practice:
peter@tesla:/tmp/test$ bash -c 'exec 3>&2; { sleep 60m &>logfile & } 1>&3 2>&1' > stdout 2>stderr
peter@tesla:/tmp/test$ psg sleep
peter 22147 0.0 0.0 7232 836 pts/14 S 15:51 0:00 sleep 60m
peter@tesla:/tmp/test$ ll /proc/22147/fd
total 0
lr-x------ 1 peter peter 64 Jul 8 15:51 0 -> /dev/null
l-wx------ 1 peter peter 64 Jul 8 15:51 1 -> /tmp/test/logfile
l-wx------ 1 peter peter 64 Jul 8 15:51 2 -> /tmp/test/logfile
l-wx------ 1 peter peter 64 Jul 8 15:51 3 -> /tmp/test/stderr
I'm not sure exactly why the author of your script ended up with that line of code. Presumably it made sense to them when they wrote it. The redirections outside the curly braces happen before the redirections inside, so they're both overridden by the &>logfile. Even errors from bash, like command not found, would end up in the logfile.
You say you see glibc messages on your terminal when the app crashes. I think your app must be using fd 3 after it starts. i.e., it was written to be started from a script that opened fd 3 for it, or else it opens /dev/tty or something.
BTW, psg is a function I define in my .bashrc:
psg(){ ps aux | grep "${@:-$USER}" | grep -v grep; }
recently updated to:
psg(){ local pids=$(pgrep -f "${@:--u$USER}"); [[ $pids ]] && ps u -p $pids; }
psgw(){ local pids=$(pgrep -f "${@:--u$USER}"); [[ $pids ]] && ps uww -p $pids; }
You need a context first, as in @Peter Cordes's example. He provided the context by setting >stdout and 2>stderr first.
I have modified his example a bit.
$ bash -c 'exec 3>&2; { sleep 60m & } 1>&3 2>&1' >stdout 2>stderr
$ ps aux | grep sleep
logan 272163 0.0 0.0 8084 580 pts/2 S 19:22 0:00 sleep 60m
logan 272165 0.0 0.0 8912 712 pts/2 S+ 19:23 0:00 grep --color=auto sleep
$ ll /proc/272163/fd
total 0
dr-x------ 2 logan logan 0 Aug 25 19:23 ./
dr-xr-xr-x 9 logan logan 0 Aug 25 19:23 ../
lr-x------ 1 logan logan 64 Aug 25 19:23 0 -> /dev/null
l-wx------ 1 logan logan 64 Aug 25 19:23 1 -> /tmp/tmp.Vld71a451u/stderr
l-wx------ 1 logan logan 64 Aug 25 19:23 2 -> /tmp/tmp.Vld71a451u/stderr
l-wx------ 1 logan logan 64 Aug 25 19:23 3 -> /tmp/tmp.Vld71a451u/stderr
First, exec 3>&2 sets fd 3 to point to the stderr file. Then 1>&3 sets fd 1 to point to the stderr file as well. Lastly, 2>&1 sets fd 2 to point to the stderr file too! (Don't get confused between stderr the descriptor, fd 2, and stderr as just a file name in this case.)
The reason fd0 is set to /dev/null, I'm guessing, is because the command is run in a non-interactive shell.

How to see which files are in use in Linux

I have a question: how can I see which files are in use in Linux? To be honest, this OS is not a normal version of Linux; it is very stripped down, so for example there is no command like "lsof". I found the command "strace", but that is not what I am looking for. I heard that I can list these files by hacking the kernel?
I want to see which files are in use because there is little free space on this machine, and I want to delete files that are not in use while the program is running.
Sorry for my weak English.
You can inspect the files opened by each process by walking the /proc virtual filesystem.
Every running process has an entry in /proc/PID. There's a directory in each process directory called 'fd' that represents the process's currently open file descriptors. These appear as links to the actual resources.
e.g. on a VM I have running
root@wvm:/proc/1213/fd# pidof dhclient
1213
root@wvm:/proc/1213/fd# cd /proc/1213/fd
root@wvm:/proc/1213/fd# ls -l
total 0
lrwx------ 1 root root 64 Apr 8 09:11 0 -> /dev/null
lrwx------ 1 root root 64 Apr 8 09:11 1 -> /dev/null
lrwx------ 1 root root 64 Apr 8 09:11 2 -> /dev/null
lrwx------ 1 root root 64 Apr 8 09:11 20 -> socket:[4687]
lrwx------ 1 root root 64 Apr 8 09:11 21 -> socket:[4688]
l-wx------ 1 root root 64 Apr 8 09:11 3 -> /var/lib/dhcp/dhclient.eth0.leases
lrwx------ 1 root root 64 Apr 8 09:11 4 -> socket:[4694]
lrwx------ 1 root root 64 Apr 8 09:11 5 -> socket:[4698]
root@wvm:/proc/1213/fd#
Looking at the kernel process information for 'dhclient', I find its pid, and then look in the fd subdirectory for that process id. It has a small set of open descriptors: stdin, stdout and stderr (0, 1, 2) have all been attached to /dev/null, there are four sockets open, but file descriptor 3 is attached to a data file, /var/lib/dhcp/dhclient.eth0.leases.
So you could duplicate the functionality of lsof using shell tools to walk /proc and filter out the file names from these links.
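As a starting point, here is a minimal lsof-like sketch using only a shell loop and readlink (assuming readlink is available on the stripped-down system):
# list regular files (absolute paths) held open by every readable process
for fd in /proc/[0-9]*/fd/*; do
    target=$(readlink "$fd" 2>/dev/null) || continue
    case $target in
        /*) printf '%s\t%s\n' "${fd%/fd/*}" "$target" ;;   # skip pipe:[...] and socket:[...]
    esac
done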
Are you able to use the "top" command? If so, it will show you the processes using the most resources on Linux. You can then run
ps -ef|grep <process_no>
to get the details of a process. You can then either stop that process or kill it using
kill -9 <process no>
Use the lsof command to list all open files by process.
