Reading the value of `stdout` and `stderr` - linux

I am trying to read the values of stdout and stderr using the following commands:
cat /dev/stderr
cat /dev/stdout
But the command just keeps running.

Use a FIFO Instead
Technically, /dev/stdout and /dev/stderr are really file descriptors, not FIFOs or named pipes. On my system, they're actually just symlinks to /dev/fd/1 and /dev/fd/2. Those descriptors are typically linked to your TTY or PTY. So, you can't really read from them the way you're trying to do.
What you probably want is the mkfifo utility. For example, to write to standard error, and then read it from another command or script:
# Create a named pipe.
$ mkfifo error
# See what a named pipe looks like in the filesystem.
$ ls -l error
prw-r--r-- 1 user staff 0 May 13 01:47 error|
# In a subshell: redirect stderr to the error FIFO, then point
# stdout at stderr so echo's output lands in the FIFO. Background
# the subshell to avoid blocking.
# Then read from the FIFO; cat exits at end-of-file once the writer is done.
$ ( echo foo 2> error >&2 & ); cat error
foo
As a more verbose but less contorted example, consider this:
$ ruby -e 'STDERR.puts "Some error."' 2> error & cat error
[1] 32458
Some error.
[1]+ Done ruby -e 'STDERR.puts "Some error."' 2> error
In this example, Ruby uses standard error to write a string to the error FIFO we created earlier. The write happens in the background, but blocks until the FIFO is emptied by the cat command. Once the FIFO is emptied, the background job completes.
The FIFO is just a special type of file, so you can remove it when you're done with rm error.

I don't really know what you mean by reading the value of stderr or stdout, but I can tell you why that cat /dev/stderr would keep running: it's waiting for data to read from the fd.
On the systems I can test this on, both output fd's are connected to the terminal just like stdin is, and reading from them works just fine. On Linux, we can view this with:
$ ls -l /proc/self/fd
lrwx------ 1 nobody nogroup 64 May 13 21:47 0 -> /dev/pts/1
lrwx------ 1 nobody nogroup 64 May 13 21:47 1 -> /dev/pts/1
lrwx------ 1 nobody nogroup 64 May 13 21:47 2 -> /dev/pts/1
lr-x------ 1 nobody nogroup 64 May 13 21:47 3 -> /proc/44664/fd
The permission bits at the start of the line show that all of the fd's 0 to 2 are opened for both reading and writing.
Reading from them works in practice, too (the first foo and the asdf below are typed input):
$ cat /dev/stderr
foo
foo
$ read -u 2 ; echo "reply: $REPLY"
asdf
reply: asdf
Though really, it might be better to just open /dev/tty and read from there if we want to interact with the terminal even when there are redirections in place. (That's what ssh does to ask for a password, for example.)
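For instance, a script can prompt on the controlling terminal even when its stdin and stdout are redirected. A minimal sketch assuming bash; the script name and prompt are made up:
#!/bin/bash
# ask.sh (hypothetical): stdin and stdout may be redirected, but we
# still want to talk to the user on the terminal.
read -r -p "Really continue? [y/N] " answer < /dev/tty
printf 'You said: %s\n' "$answer" > /dev/tty
# The redirected stdin is untouched, so data still flows through it:
wc -l
Run as ./ask.sh < data.txt > counts.txt, the prompt and reply go through the terminal while wc still reads the redirected stdin and writes the redirected stdout.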

Related

Script command: separate input and output

I'm trying to monitor command execution in a shell.
I need to separate each input command from its output, for example:
input:
ls -l /
output:
total 76
lrwxrwxrwx 1 root root 7 Aug 11 10:25 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Aug 11 11:18 boot
drwxr-xr-x 17 root root 3200 Oct 11 11:10 dev
...
Also, I want to do the same if I open another shell, for example after connecting through ssh to another server.
I've been using the script command to do this and it works just fine!
It logs all command input and output even if the shell changes (through ssh, or after entering msfconsole, for example).
Nevertheless, I found two main issues:
For my project, I need to separate (using a decoder) each command from the rest; it would also be awesome to be able to separate command input and output, for example:
cmd1. pwd ---> /var/
cmd2. echo "hello world" ---> "hello world"
....
Sometimes the script command generates output with garbage due to shell special characters (for colors, for example), which I would like to filter out.
So I've been thinking about this, and I guess I could create a simple script that reads the file written by the script command and processes the data.
Nevertheless, I'm not sure what the best approach would be.
I'm evaluating different solutions and would like to hear different proposals from the community.
Maybe I'm missing something and you know a better tool than the script command, or have some idea I've not considered.
Best regards,
A useful utility for distinguishing stdout from stderr is annotate-output (from the "devscripts" package), which sends both stderr and stdout to stdout along with helpful little prefixes. For example, let's try counting the characters of a file that exists, plus one that doesn't exist:
annotate-output wc -c /bin/bash /bin/nosuchshell
Output:
00:29:06 I: Started wc -c /bin/bash /bin/nosuchshell
00:29:06 E: wc: /bin/nosuchshell: No such file or directory
00:29:06 O: 1099016 /bin/bash
00:29:06 O: 1099016 total
00:29:06 I: Finished with exitcode 1
That output could be parsed separately using sed, awk, or even a tee and a few greps.
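For instance, a rough sketch assuming the prefix layout shown above (timestamp, then I:/O:/E:, then the payload), where grep selects a stream and cut strips the prefix:
# stdout only:
annotate-output wc -c /bin/bash /bin/nosuchshell | grep ' O: ' | cut -d' ' -f3-
# stderr only:
annotate-output wc -c /bin/bash /bin/nosuchshell | grep ' E: ' | cut -d' ' -f3-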

How to redirect input from another tty?

When I run cat - in, say, /dev/pts/2 and try to write to its input from another tty with echo foo > /dev/pts/2 or echo foo > /proc/(pid of cat)/fd/0, it just prints foo in pts/2; cat doesn't repeat it. Why?
How do I send input to cat from another tty so that it repeats it?
Each terminal has a device file for it in /dev/pts/. Run
$ ps
to determine which terminal you are on. For example, I am on terminal 3:
PID TTY TIME CMD
1477 pts/3 00:00:00 ps
26511 pts/3 00:00:01 bash
Then just redirect your output to that terminal:
cat foo > /dev/pts/3
Make a first-in-first-out (FIFO) pipe on the second terminal, the one you'd like to display the text on, and read from it there:
mkfifo --mode=600 /tmp/pipe
cat /tmp/pipe
Then redirect the command to that pipe on the first terminal:
cat foo > /tmp/pipe
I think there is a fundamental misunderstanding here: you cannot inject content into another TTY's input stream (unless you own the pty master).
You can, however, run cat /dev/pts/0 to read from another TTY's input stream, but beware that you will be fighting over that input with whatever process is already reading it.
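A workaround in the spirit of the FIFO answer above: don't try to reach the other terminal's input queue at all; have that cat read from a named pipe instead, which any terminal can write into. A sketch only, where the path /tmp/in is arbitrary:
# On pts/2, read from a FIFO instead of the terminal:
mkfifo /tmp/in
cat /tmp/in
# From any other terminal, whatever is written shows up via cat on pts/2:
echo foo > /tmp/in
# Note: cat exits when the last writer closes the pipe; keep a writer
# open (e.g. exec 3>/tmp/in) if you want to feed it repeatedly.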

How to redirect output of an already running process [duplicate]

Normally I would start a command like
longcommand &
I know you can redirect it by doing something like
longcommand > /dev/null;
for instance to get rid of the output or
longcommand > output.log 2>&1
to capture output.
But I sometimes forget, and was wondering if there is a way to capture or redirect after the fact.
longcommand
ctrl-z
bg 2>&1 > /dev/null
or something like that so I can continue using the terminal without messages popping up on the terminal.
See Redirecting Output from a Running Process.
First, I run the command cat > foo1 in one session and check that data typed on stdin is copied to the file. Then, in another session, I redirect the output.
Find the PID of the process:
$ ps aux | grep cat
rjc 6760 0.0 0.0 1580 376 pts/5 S+ 15:31 0:00 cat
Now check the file handles it has open:
$ ls -l /proc/6760/fd
total 3
lrwx------ 1 rjc rjc 64 Feb 27 15:32 0 -> /dev/pts/5
l-wx------ 1 rjc rjc 64 Feb 27 15:32 1 -> /tmp/foo1
lrwx------ 1 rjc rjc 64 Feb 27 15:32 2 -> /dev/pts/5
Now run GDB:
$ gdb -p 6760 /bin/cat
GNU gdb 6.4.90-debian
[license stuff snipped]
Attaching to program: /bin/cat, process 6760
[snip other stuff that's not interesting now]
(gdb) p close(1)
$1 = 0
(gdb) p creat("/tmp/foo3", 0600)
$2 = 1
(gdb) q
The program is running. Quit anyway (and detach it)? (y or n) y
Detaching from program: /bin/cat, process 6760
The p command in GDB prints the value of an expression, and that expression can be a function call, including a system call. So I execute a close() call on file handle 1, then a creat() call to open a new file. The result of the creat() was 1, which means it took over the file handle I had just closed. If I wanted to use the same file for stdout and stderr, or if I wanted to replace a file handle with some other number, I would need to call the dup2() system call to achieve that (a sketch of that follows below, after the verification).
For this example I chose creat() instead of open() because it takes fewer parameters. The C macros for the open() flags are not usable from GDB (it doesn't read C headers), so I would have to dig through header files to discover their values; that's not hard to do, but it takes more time. Note that 0600 is the octal permission for the owner having read/write access and the group and others having no access. It would also work to pass 0 for that parameter and run chmod on the file later.
After that I verify the result:
$ ls -l /proc/6760/fd/
total 3
lrwx------ 1 rjc rjc 64 2008-02-27 15:32 0 -> /dev/pts/5
l-wx------ 1 rjc rjc 64 2008-02-27 15:32 1 -> /tmp/foo3 <====
lrwx------ 1 rjc rjc 64 2008-02-27 15:32 2 -> /dev/pts/5
Typing more data into cat results in the file /tmp/foo3 being appended to.
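For instance, pointing both stdout and stderr at one new file would look roughly like this. A sketch only: /tmp/both is a made-up path, creat() reuses the lowest free descriptor (1, just closed), and dup2() copies it onto 2:
(gdb) p close(1)
(gdb) p close(2)
(gdb) p creat("/tmp/both", 0600)
(gdb) p dup2(1, 2)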
If you want to close the original session you need to close all file handles for it, open a new device that can be the controlling tty, and then call setsid().
You can also do it using reredirect (https://github.com/jerome-pouiller/reredirect/).
The command below redirects the output streams (stdout and stderr) of process PID to FILE:
reredirect -m FILE PID
The README of reredirect also explains other interesting features: how to restore the original state of the process, how to redirect to another command, and how to redirect only stdout or stderr.
The tool also provides relink, a script that redirects the outputs back to the current terminal:
relink PID
relink PID | grep usefull_content
(reredirect seems to have the same features as Dupx, described in another answer, but it does not depend on gdb.)
Dupx
Dupx is a simple *nix utility to redirect standard output/input/error of an already running process.
Motivation
I've often found myself in a situation where a process I started on a remote system via SSH takes much longer than I had anticipated. I need to break the SSH connection, but if I do so, the process will die when it tries to write something to the stdout/stderr of a broken pipe. I wish I could suspend the process with ^Z and then do a
bg %1 >/tmp/stdout 2>/tmp/stderr
Unfortunately this will not work (in shells I know).
http://www.isi.edu/~yuri/dupx/
Screen
If the process is running in a screen session, you can use screen's log command to log the output of that window to a file:
Switch to the script's window and press C-a H to start logging.
Now you can:
$ tail -f screenlog.2 | grep whatever
From screen's man page:
log [on|off]
Start/stop writing output of the current window to a file "screenlog.n" in the window's default directory, where n is the number of the current window. This filename can be changed with the 'logfile' command. If no parameter is given, the state of logging is toggled. The session log is appended to the previous contents of the file if it already exists. The current contents and the contents of the scrollback history are not included in the session log. Default is 'off'.
I'm sure tmux has something similar as well.
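It does: tmux's pipe-pane copies a pane's output to an external command. A sketch, with an arbitrary log path:
# Run in the pane with the long-running process; -o only starts a new
# pipe if one isn't already running, so the same command toggles it.
tmux pipe-pane -o 'cat >> ~/tmux-output.log'
# Then, from anywhere:
tail -f ~/tmux-output.log | grep whatever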
I collected some information on the internet and prepared a script that requires no external tools: see my response here. Hope it's helpful.

linux: redirect stdout after the process started [duplicate]

I have some scripts that ought to have stopped running but hang around forever. Is there some way I can figure out what they're writing to STDOUT and STDERR in a readable way?
I tried, for example, to do:
$ tail -f /proc/(pid)/fd/1
but that doesn't really work. It was a long shot anyway.
Any other ideas?
strace on its own is quite verbose and unreadable for seeing this.
Note: I am only interested in their output, not in anything else. I'm capable of figuring out the other things on my own; this question is only focused on getting access to stdout and stderr of the running process after starting it.
Since I'm not allowed to edit Jauco's answer, I'll give the full answer that worked for me. (Russell's page relies on the unguaranteed behaviour that, if you close file descriptor 1 for STDOUT, the next creat() call will open FD 1.)
So, run a simple endless script like this:
import time
while True:
    print 'test'
    time.sleep(1)
Save it to test.py, run with
$ python test.py
Get the PID:
$ ps auxw | grep test.py
Now, attach gdb:
$ gdb -p (pid)
and do the fd magic:
(gdb) call creat("/tmp/stdout", 0600)
$1 = 3
(gdb) call dup2(3, 1)
$2 = 1
Now you can tail /tmp/stdout and see the output that used to go to STDOUT.
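If you also want the process's stderr in the same file, the descriptor returned by creat() (3 above) can be duplicated onto fd 2 the same way; a sketch, still inside the attached gdb:
(gdb) call dup2(3, 2)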
There are several newer utilities that wrap up the "gdb method" and add some extra touches. The one I use now is called "reptyr" ("Re-PTY-er"). In addition to grabbing STDERR/STDOUT, it will actually change the controlling terminal of a process (even if it wasn't previously attached to a terminal).
The best use of this is to start up a screen session, and use it to reattach a running process to the terminal within screen so you can safely detach from it and come back later.
It's packaged on popular distros (Ex: 'apt-get install reptyr').
http://onethingwell.org/post/2924103615/reptyr
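A typical session, sketched; the PID is a placeholder, and on some systems you may first need to relax Yama's ptrace restrictions:
# Optional, only if you get a ptrace permissions error:
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
# Inside a fresh screen or tmux session:
reptyr 12345
# The process is now attached to this terminal; detach from screen
# (C-a d) and log out without killing it.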
GDB method seems better, but you can do this with strace, too:
$ strace -p <PID> -e write=1 -s 1024 -o file
Via the man page for strace:
-e write=set
Perform a full hexadecimal and ASCII dump of all the data written to file descriptors listed in the specified set. For example, to see all output activity on file descriptors 3 and 5, use -e write=3,5. Note that this is independent from the normal tracing of the write(2) system call, which is controlled by the option -e trace=write.
This prints out somewhat more than you need (the hexadecimal part), but you can sed that out easily.
I'm not sure if it will work for you, but I read a page a while back describing a method that uses gdb.
I used strace and decoded the hex output to clear text:
PID=some_process_id
sudo strace -f -e trace=write -e verbose=none -e write=1,2 -q -p $PID -o "| grep '^ |' | cut -c11-60 | sed -e 's/ //g' | xxd -r -p"
I combined this command from other answers.
strace outputs a lot less with just -ewrite (and not the =1 suffix). And it's a bit simpler than the GDB method, IMO.
I used it to see the progress of an existing MythTV encoding job (sudo because I don't own the encoding process):
$ ps -aef | grep -i handbrake
mythtv 25089 25085 99 16:01 ? 00:53:43 /usr/bin/HandBrakeCLI -i /var/lib/mythtv/recordings/1061_20111230122900.mpg -o /var/lib/mythtv/recordings/1061_20111230122900.mp4 -e x264 -b 1500 -E faac -B 256 -R 48 -w 720
jward 25293 20229 0 16:30 pts/1 00:00:00 grep --color=auto -i handbr
$ sudo strace -ewrite -p 25089
Process 25089 attached - interrupt to quit
write(1, "\rEncoding: task 1 of 1, 70.75 % "..., 73) = 73
write(1, "\rEncoding: task 1 of 1, 70.76 % "..., 73) = 73
write(1, "\rEncoding: task 1 of 1, 70.77 % "..., 73) = 73
write(1, "\rEncoding: task 1 of 1, 70.78 % "..., 73) = 73^C
You can use reredirect (https://github.com/jerome-pouiller/reredirect/).
Type
reredirect -m FILE PID
and outputs (standard and error) will be written in FILE.
The reredirect README also explains how to restore the original state of the process, how to redirect to another command, and how to redirect only stdout or stderr.
You don't state your operating system, but I'm going to take a stab and say "Linux".
Seeing what is being written to stderr and stdout is probably not going to help. If it is useful, you could use tee(1) before you start the script to take a copy of stderr and stdout.
You can use ps(1) to look for wchan. This tells you what the process is waiting for. If you look at the strace output, you can ignore the bulk of the output and identify the last (blocked) system call. If it is an operation on a file handle, you can go backwards in the output and identify the underlying object (file, socket, pipe, etc.) From there the answer is likely to be clear.
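For example (the PID is a placeholder; wchan names vary by kernel version):
$ ps -o pid,stat,wchan:30,cmd -p "$PID"
A WCHAN value like pipe_wait or poll_schedule_timeout already hints at the kind of object the process is sleeping on.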
You can also send the process a signal that causes it to dump core, and then use the debugger and the core file to get a stack trace.
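A sketch of that last approach; instead of a signal, the gcore helper shipped with gdb can write the core image without killing the process (the PID, paths, and binary are placeholders):
$ gcore -o /tmp/hung 12345        # writes /tmp/hung.12345
$ gdb /bin/bash /tmp/hung.12345   # use whatever binary the script runs under
(gdb) bt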
