Difference between stdout and /dev/stdout - linux

bala@hp:~$ echo "Hello World" > stdout
bala@hp:~$ cat stdout
Hello World
bala@hp:~$ echo "Hello World" > /dev/stdout
Hello World
Kindly clarify: what is the difference between stdout and /dev/stdout?
Note:
bala@hp:~$ file stdout
stdout: ASCII text
bala@hp:~$ file /dev/stdout
/dev/stdout: symbolic link to `/proc/self/fd/1'
Kindly help me understand the difference.

In your case:
stdout is a normal file, created in the directory where you ran the command.
So, when you redirect the output of echo to stdout, it is written to that file, and you need cat (as you did here) to see its contents on screen.
/dev/stdout is a symbolic link to /proc/self/fd/1, meaning it refers to file descriptor 1 held by the current process.
So, when you redirect the output of echo to /dev/stdout, it is sent to the standard output (the screen) directly.

stdout on its own is just a file in the current directory, no different to finances.txt. It has nothing to do with the standard output of the process other than the fact you're redirecting standard output to it.
On the other hand, /dev/stdout is a link to a special file on the procfs file system, which represents file descriptor 1 of the process using it¹.
Hence it has a very real connection to the process standard output.
¹ The procfs file system holds all sorts of wondrous information about the system and all its processes (assuming a process has permission to access them, which it should have for /proc/self).
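You can inspect this mapping directly (a sketch; timestamps, user name, and the pts number are illustrative, output abridged):
$ ls -l /proc/self/fd/
total 0
lrwx------ 1 user user 64 Aug 25 10:00 0 -> /dev/pts/0
lrwx------ 1 user user 64 Aug 25 10:00 1 -> /dev/pts/0
lrwx------ 1 user user 64 Aug 25 10:00 2 -> /dev/pts/0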

One is a normal file, no different from any other normal file like e.g. ~/foobar.txt.
The other is a symbolic link (like one you can create with ln -s) to a special virtual file that represents file descriptor 1 in the current process.
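A quick way to see that /dev/stdout follows wherever file descriptor 1 currently points (the prompt and file name are illustrative):
$ echo hello > /dev/stdout
hello
$ { echo hello > /dev/stdout; } > out.txt
$ cat out.txt
hello
In the second command, fd 1 of the command group is already redirected to out.txt, so /dev/stdout resolves to that file rather than to the terminal.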

Related

Write to STDERR by filename even if not writable for the user

My user does not have write permissions for STDERR:
user@host:~> readlink -e /dev/stderr
/dev/pts/19
user@host:~> ls -l /dev/pts/19
crw--w---- 1 sysuser tty 136, 19 Apr 26 14:02 /dev/pts/19
It is generally not a big issue,
echo > /dev/stderr
fails with
-bash: /dev/stderr: Permission denied
but usual redirection like
echo >&2
works alright.
However I now need to cope with a 3rd-party binary which only provides logging output to a specified file:
./utility --log-file=output.log
I would like to see the logging output directly in STDERR.
I cannot do it the easy way like --log-file=/dev/stderr due to missing write permissions. On other systems where the write permissions are set this works alright.
Furthermore, I also need to parse output of the process to STDOUT, therefore I cannot simply send log to STDOUT and then redirect to STDERR with >&2. I tried to use the script utility (where the redirection to /dev/stderr works properly) but it merges STDOUT and STDERR together as well.
You can use a Bash process substitution:
./utility --log-file=>(cat>&2)
The substitution will appear to the utility like --log-file=/dev/fd/63, which can be opened. The cat process inherits fd 2 without needing to open it, so it can do the forwarding.
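You can see the path Bash substitutes by echoing it (the exact descriptor number may differ):
$ echo >(cat >&2)
/dev/fd/63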
I tested the above using chmod -w /dev/stderr and dd if=/etc/issue of=/dev/stderr. That fails, but changing to dd if=/etc/issue of=>(cat>&2) succeeds.
Note that your error output may suffer more buffering than you would necessarily want/expect, and will not be synchronous with your shell prompt. In other words, your prompt may appear mixed in with error output that arrives at your terminal after utility has completed. The dd example will likely demonstrate this. You may want to append ;wait after the command to ensure that the cat has finished before your PS1 prompt appears: ./utility --log-file=>(cat>&2); wait
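Putting it together with the stdout parsing mentioned in the question (a sketch; ./utility and the grep pattern are stand-ins):
./utility --log-file=>(cat >&2) | grep 'pattern'; wait
The log lines reach your terminal via stderr, while the pipeline still sees only the utility's stdout.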

Linux writing console output to a file

I have a large amount of output printed to the console and I want to store it in a file. Can anyone suggest a way to do this in Linux?
your_print_command > filename.txt
Or
your_print_command >> filename.txt
The latter appends data to the file instead of overwriting it.
To send both stderr and stdout to the file instead of the console:
command_generating_text &> /path/to/file
To send stderr and stdout to different files:
command_generating_text 1> /path/to/file.stdout 2> /path/to/file.stderr
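Note that &> is a Bash shortcut; the portable Bourne shell spelling redirects stderr to wherever stdout already points:
command_generating_text > /path/to/file 2>&1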

After using `exec 1>file`, how can I stop this redirection of the STDOUT to file and restore the normal operation of STDOUT?

I am a newbie at shell scripting and I am using Ubuntu 11.10. After running the exec 1>file command in the terminal, the output of whatever commands I type is no longer shown in the terminal. I know that STDOUT is being redirected to the file, and the output of those commands goes there instead.
My questions:
Once I use exec 1>file, how can I get rid of this? i.e. How can I stop the redirection of STDOUT to file and restore the normal operation of STDOUT (i.e. redirection to terminal rather than file)?
I tried using exec 1>&- but it didn’t work as this closes the STDOUT file descriptor.
Please shed some light on this entire operation of exec 1>file and exec 1>&-.
What will happen if we close the standard file descriptors 0, 1, 2 by using exec 0>&-, exec 1>&-, and exec 2>&-?
Q1
You have to prepare for the recovery before you do the initial exec:
exec 3>&1 1>file
To recover the original standard output later:
exec 1>&3 3>&-
The first exec copies the original file descriptor 1 (standard output) to file descriptor 3, then redirects standard output to the named file. The second exec copies file descriptor 3 to standard output again, and then closes file descriptor 3.
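A complete round trip looks like this (file is a placeholder name):
exec 3>&1 1>file    # save the current stdout on fd 3, send stdout to file
echo "this goes into file"
exec 1>&3 3>&-      # restore stdout from fd 3, then close fd 3
echo "this is on the terminal again"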
Q2
This is a bit open ended. It can be described at a C code level or at the shell command line level.
exec 1>file
simply redirects the standard output (1) of the shell to the named file. File descriptor one now references the named file; any output written to standard output will go to the file. (Note that prompts in an interactive shell are written to standard error, not standard output.)
exec 1>&-
simply closes the standard output of the shell. Now there is no open file for standard output. Programs may get upset if they are run with no standard output.
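For example, in an interactive Bash session (the exact error text may vary by version):
$ exec 1>&-
$ echo hello
-bash: echo: write error: Bad file descriptor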
Q3
If you close all three of standard input, standard output and standard error, an interactive shell will exit, because it gets EOF when it reads the next command from the closed standard input. A shell script will continue running, but the programs it runs may get upset: they are guaranteed three open file channels (standard input, standard output, standard error), and when your shell runs them with those descriptors closed and no other I/O redirection in place, they do not get the channels they were promised. All hell may break loose, and often the only way you'll know is that the exit status of the command will probably not be zero (success).
Q1: There is a simple way to restore stdout to the terminal after it has been redirected to a file:
exec >/dev/tty
Saving the original stdout file descriptor beforehand is the general way to restore it later, but in this particular case the original stdout was the terminal, so redirecting to /dev/tty is all that is needed.
$ date
Mon Aug 25 10:06:46 CEST 2014
$ exec > /tmp/foo
$ date
$ exec > /dev/tty
$ date
Mon Aug 25 10:07:24 CEST 2014
$ ls -l /tmp/foo
-rw-r--r-- 1 jlliagre jlliagre 30 Aug 25 10:07 /tmp/foo
$ cat /tmp/foo
Mon Aug 25 10:07:05 CEST 2014
Q2: exec 1>file is a slightly more verbose way of writing exec >file, which, as already stated, redirects stdout to the given file, provided you have permission to create or write to it. The file is created if it doesn't exist and truncated if it does.
exec 1>&- is closing stdout, which is probably a bad idea in most situations.
Q3: The commands should be
exec 0<&-
exec 1>&-
exec 2>&-
Note the reversed redirection for stdin.
It can be simplified this way:
exec <&- >&- 2>&-
This command closes all three standard file descriptors. This is a very bad idea. Should you want a script to be disconnected from these channels, I would suggest this more robust approach:
exec </dev/null >/dev/null 2>&1
In that case, all output will be discarded instead of triggering an error, and all input will return just nothing instead of failing.
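For instance, at the top of a script you want fully detached from the terminal (a sketch):
#!/bin/bash
# Reads now return EOF immediately; all output is silently discarded.
exec </dev/null >/dev/null 2>&1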
The accepted answer is too verbose for me.
So, I decided to sum up an answer to your original question.
Using Bash version 4.3.39(2)-release
on an x86 32-bit Cygwin machine.
GIVEN:
Stdin is fd #0.
Stdout is fd #1.
Stderr is fd #2.
ANSWER (written in bash):
exec 1> ./output.test.txt
echo -e "First Line: Hello World!"
printf "%s\n" "2nd Line: Hello Earth!" "3rd Line: Hello Solar System!"
# This is unnecessary, but
# it stops or closes stdout.
# exec 1>&-
# Restore stdout by duplicating fd 0 (the terminal);
# this only works while stdin is still the terminal.
exec 1>&0
# Oops... I need to append some more data.
# So, let's append to the file.
exec 1>> ./output.test.txt
echo -e "# Appended this line and the next line is empty.\n"
# Restore stdout from fd 0 again
exec 1>&0
# Output the contents to stdout
cat ./output.test.txt
USEFUL KEYWORDS:
There are also here-docs, here-strings, and process substitution for I/O redirection in the Bourne shell, Bash, tcsh, and zsh, on Linux, BSD, AIX, HP-UX, BusyBox, Toybox, and so on.
While I completely agree with Jonathan's Q1, some systems have /dev/stdout, so you might be able to do exec 1>file; ...; exec 1>/dev/stdout. Beware, though: on Linux /dev/stdout resolves to /proc/self/fd/1, i.e. to wherever fd 1 currently points, so after exec 1>file it refers to file rather than the terminal; exec 1>/dev/tty is the reliable way back.

Linux All Output to a File

Is there any way to tell a Linux system to put all output (stdout, stderr) into a file?
Without using redirection or pipes, and without modifying how the scripts get called.
Just tell Linux to use a file for output.
for example:
script test1.sh:
#!/bin/bash
echo "Testing 123 "
If I run it like "./test1.sh" (without redirection or pipes),
I'd like to see "Testing 123" in a file (/tmp/linux_output).
Problem: on this system a binary calls a script, and that script calls many other scripts. It is not possible to modify each call, so if I can make Linux put the output into a file, I can review the logs.
#!/bin/bash
exec >file 2>&1
echo "Testing 123 "
You can read more about the exec builtin in the Bash reference manual.
If you are running the program from a terminal, you can use the command script.
It will open up a sub-shell. Do what you need to do.
It will copy everything written to the terminal into a file. When you are done, exit the shell with ^D or exit.
This does not use redirection or pipes.
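For example, using the file name from the question (session output is illustrative):
$ script /tmp/linux_output
Script started, file is /tmp/linux_output
$ ./test1.sh
Testing 123
$ exit
Script done, file is /tmp/linux_output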
You could set your terminal's scrollback buffer to a large number of lines and then see all the output from your commands in the buffer. Depending on your terminal emulator, there may also be a menu option to capture terminal I/O to a file.
Your requirement, taken literally, is impractical, because it is based on a slight misunderstanding. Fundamentally, to get the output to go into a file, you will have to change something to direct it there, which would violate your literal constraint.
But the practical problem is solvable, because unless explicitly counteracted in the child, the output directions configured in a parent process are inherited. So you only have to set up the redirection once, using either a shell, or a custom launcher program or intermediary. After that it will be inherited.
So, for example:
cat > test.sh
#!/bin/sh
echo "hello on stdout"
rm nosuchfile
./test2.sh
And a child script for it to call
cat > test2.sh
#!/bin/sh
echo "hello on stdout from script 2"
rm thisfileisnteither
./nonexistantscript.sh
Run the first script redirecting both stdout and stderr (this is the Bash version; you can do this in many ways, such as writing a C program that redirects its outputs and then exec()s your real program):
./test.sh &> logfile
Now examine the file and see results from stdout and stderr of both parent and child.
cat logfile
hello on stdout
rm: nosuchfile: No such file or directory
hello on stdout from script 2
rm: thisfileisnteither: No such file or directory
./test2.sh: line 4: ./nonexistantscript.sh: No such file or directory
Of course if you really dislike this, you can always modify the kernel - but again, that is changing something (and a very ungainly solution too).

What does this shell command mean "exec 3>&1 > >(logger -t "OKOK")"

I've found the following bash command in some source code.
exec 3>&1 > >(logger -t "OKOK")
What does it exactly mean?
As far as I know, it redirects those log to the syslog.
However, What is 3>&1?
I have never seen a file descriptor of 3 before.
Unusual indeed, but it does exist:
Each open file gets assigned a file descriptor. The file descriptors for stdin, stdout, and stderr are 0, 1, and 2, respectively. For opening additional files, there remain descriptors 3 to 9. It is sometimes useful to assign one of these additional file descriptors to stdin, stdout, or stderr as a temporary duplicate link. This simplifies restoration to normal after complex redirection and reshuffling
Find out more on the IO redirection page.
From this line on, everything printed to STDOUT will be processed by logger. The original STDOUT has been saved in fd3, so you can later (if needed) restore the normal STDOUT. See Advanced BASH Scripting Guide for details.
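To undo it later, following the same pattern (a sketch):
exec 1>&3 3>&-    # put the saved stdout back on fd 1 and close fd 3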
