I'm a beginner in Linux, and I have a question about redirecting STDOUT and STDERR.
First I create file1 and add a string to it:
echo hello > file1
After this, when I run something like this:
cat file1 file2
It will give an error like this
hello
cat: file2: No such file or directory
I want to redirect both STDOUT and STDERR, so:
cat file1 file2 > file3 2>&1 | cat
hello
cat: file2: No such file or directory
I know that | takes the previous command's output as its input, right?
So the first cat's output is:
hello
cat: file2: No such file or directory
Now, I have found another method to redirect output:
cat file1 file2 &> file3
cat file3
hello
cat: file2: No such file or directory
It does the same thing, but when I add | cat, the result is:
cat file1 file2 &> file3 | cat
hello
Where is the STDERR? Does that mean only hello is the output of the first cat?
What is the difference between 2>&1 and &>file?
cat file1 file2
It will give an error like this
hello
cat: file2: No such file or directory
The error is simply telling you file2 does not exist. You create file1 with your redirection:
echo hello > file1
Now file1 exists. When you do cat file1 file2, cat attempts to output the contents of file1 and file2 to stdout, but file2 doesn't exist (as cat tells you). To make file2, you can do cat file1 > file2 to redirect the output of cat file1 into file2, or you can simply cp file1 file2. Then file2 will exist.
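For example, a quick way to check (assuming you are still in the directory containing file1):

cp file1 file2      # file2 now exists as a copy of file1
cat file1 file2     # prints hello twice, with no error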
I want to redirect the STDOUT and STDERR, so
cat file1 file2 > file3 2>&1 | cat
hello
cat: file2: No such file or directory
Again, file2 still doesn't exist. cat (short for concatenate) simply outputs the contents of the files given as input to stdout, unless redirected. file1 contains hello, so it is output along with the error. hello is redirected to file3, and, since you have redirected stderr to stdout (2>&1), the error message also ends up in file3.
The | (pipe) in a Linux shell just takes the stdout from the command on the left and redirects it to the stdin of the command after the pipe. Since you already redirected the stdout and stderr from cat file1 file2 into file3, nothing is sent to the cat following the pipe. The output you posted appears to have come from:
cat file3
In a Linux shell, stdin, stdout and stderr are simply special files that represent file descriptors 0, 1 and 2, respectively. The actual files in the filesystem are /dev/stdin, /dev/stdout, and /dev/stderr. If you check with the ls -l command, you will see the relation between the files and the file descriptors:
$ ls -l /dev/std*
lrwxrwxrwx 1 root root 4 Apr 2 17:47 /dev/stderr -> fd/2
lrwxrwxrwx 1 root root 4 Apr 2 17:47 /dev/stdin -> fd/0
lrwxrwxrwx 1 root root 4 Apr 2 17:47 /dev/stdout -> fd/1
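Since these are ordinary paths, you can redirect to them like any other file. A small illustration (assuming an interactive Bash session at a terminal):

echo "via stdout" > /dev/stdout     # same as a plain echo
echo "via stderr" > /dev/stderr     # same effect as: echo "via stderr" >&2

Both lines print to the terminal here, but the second writes through file descriptor 2, so a pipe (which only captures fd 1) would not see it.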
The output hello appears on standard output. The error message appears on standard error. Henceforth, I'll refer to those as stdout and stderr.
You claim that cat file1 file2 > file3 2>&1 | cat produces some output on the terminal. It doesn't with any standard shell. When I run it, it produces no visible output, but file3 contains the line from file1 and the error message. Since there's no input for it to read, the second cat command exits without producing any output.
The > file3 redirection sends stdout to file3. The 2>&1 sends stderr to the same place that stdout is going. File descriptor 0 is standard input (stdin), 1 is stdout, and 2 is stderr.
There is no output sent to the pipe; it is all sent to the file (but the pipe is created first, then stdout is redirected to the file).
The following commands demonstrate that all the output (stdout and stderr) was written to file3:
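cat file3
hello
cat: file2: No such file or directory

(That is the expected content: the hello line from file1, then the error message, both captured by > file3 2>&1.)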
You claim that cat file1 file2 &> file3 | cat produces some output on the terminal. It doesn't with Bash; there is no output because both stdout and stderr go to file3.
The difference between &> file3 and > file3 2>&1 is portability (the &> notation is Bash-specific and less portable) and the number of characters; functionally, they're equivalent.
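To see why &> is less portable: in a strictly POSIX shell such as dash, &> is not a redirection operator at all. A sketch of how such a shell parses it:

cat file1 file2 &> file3
# dash reads this as two commands:
#   cat file1 file2 &    (run cat in the background, output still on the terminal)
#   > file3              (truncate file3 to empty)

So scripts that must run under /bin/sh should use > file3 2>&1.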
I am trying to copy the content of file1 to file2 using a Linux command
cat file1 > file2
file1 may or may not be available, depending on the environment where the program is run. What should be added to the command so that it doesn't return an error when file1 is not available? I have read that appending 2>/dev/null will suppress the error. While that's true, and I didn't get an error, the command
cat file1 2>/dev/null > file2
emptied file2's previous content when file1 wasn't there. I don't want to lose the content of file2 when file1 is missing, and I don't want an error returned.
Also, in what other cases can the command fail and return an error?
Test for file1 first.
[ -r file1 ] && cat ...
See help test for details.
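Filled in for this case, a minimal sketch (-r tests that file1 exists and is readable, so file2 is never opened, and therefore never truncated, when file1 is missing):

[ -r file1 ] && cat file1 > file2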
Elaborating on Ignacio Vazquez-Abrams's answer:
if test -e file1; then cat file1 > file2; fi
file1 is empty, and file2 consists of the content below:
praveen
Now I try to append the content of file1 to file2. Since file1 is empty, I send any error to /dev/null so that the output will not show an error:
cat file1 >> file2 2>/dev/null
file2's content did not get deleted; it still exists:
praveen
As a script:
if [ -f file1 ]
then
    cat file1 >> file2
else
    cat file1 >> file2 2>/dev/null
fi
First, you wrote:
I am trying to copy the content of file1 to file2 using a Linux command
To copy the content of file1 to file2, use the cp command:
if ! cp file1 file2 2>/dev/null ; then
    echo "file1 does not exist or isn't readable"
fi
Just for completeness, with cat:
I would redirect stderr to /dev/null and check the return value:
if ! cat file1 2>/dev/null > file2 ; then
    rm file2
    echo "file1 does not exist or isn't readable"
fi
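Note that in this version > file2 truncates file2 before cat runs, and the rm then removes the now-empty file. If you want file2 left completely untouched when file1 is missing, a variant using the test from the earlier answer:

if [ -r file1 ]; then
    cat file1 > file2
else
    echo "file1 does not exist or isn't readable"
fi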
I am trying to understand a fine point of standard output and standard error redirection in Linux shell scripting (Bourne, Bash).
Example 1:
cat file1 > output.txt
or
cat file1 1> output.txt
This redirects the contents of file1 to output.txt. Works as expected.
Example 2:
kat file1 2> output.txt
The kat command does not exist, so the error is redirected to output.txt. Works as expected.
Example 3:
cat file1 2>&1 output.txt
Because cat is a valid command and file1 exists, here I would expect the same behavior as in example 1. Instead, I seem to get the contents of both files on the screen.
Example 4:
kat file1 2>&1 output.txt
Since kat does not exist, I would expect the same behavior as in example 2. Instead, I get the error on the screen ("-bash: kat: command not found")
as explained in many online manuals, for example:
https://www.gnu.org/software/bash/manual/html_node/Redirections.html
The problem is that 2>&1 only tells the shell to redirect file descriptor 2 (standard error) to file descriptor 1 (standard output). It doesn't actually do any redirection of standard output.
For that you have to do it explicitly like
cat file1 > output.txt 2>&1
Note that you have to do the descriptor-redirection last (after the standard output redirection) or it will not work.
This is all explained in the Bash manual page (see the section about redirection).
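Redirections are processed left to right, which is why the order matters. A side-by-side illustration (assuming file1 exists):

cat file1 2>&1 > output.txt    # fd 2 is pointed at fd 1's current target (the terminal), then fd 1 goes to the file
cat file1 > output.txt 2>&1    # fd 1 goes to the file first, then fd 2 is pointed at the same file

With the first form, any error message would still appear on the terminal; with the second, both streams end up in output.txt.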
cat file1 2>&1 output.txt
The shell will set up the redirection (stderr to stdout). After that, what is "left" as the command executed by the shell is:
cat file1 output.txt
That's why you see both contents.
For
kat file1 2>&1 output.txt
it is the same, because only
kat file1 output.txt
is left after the shell sets up the descriptors for the command to be executed. And since kat can't be found, you get the error message from the shell.
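You can make this visible by substituting echo for the command, since echo simply prints whatever arguments survive the shell's redirection parsing:

echo file1 2>&1 output.txt
# prints: file1 output.txt

The 2>&1 is consumed by the shell; the two remaining words are passed through as arguments.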
So I'm learning about pipes in bash and I found this pithy description:
A Unix pipe connects the STDOUT (standard output) file descriptor of
the first process to the STDIN (standard input) of the second. What
happens then is that when the first process writes to its STDOUT, that
output can be immediately read (from STDIN) by the second process.
Source
Given this understanding, let's connect the STDOUT of printf to the STDIN of ls. For simplicity, print the parent directory (printf ..).
~/Desktop/pipes$ mkdir sub
~/Desktop/pipes$ ls
sub
~/Desktop/pipes$ cd sub
~/Desktop/pipes/sub$ ls
(no files)
~/Desktop/pipes/sub$ printf .. | ls
(no files)
~/Desktop/pipes/sub$
I want to be doing: ls .. but it seems that all I'm getting is ls. Why is this so? How can I ls the parent directory using pipes? Am I misunderstanding pipes?
Many programs don't read from stdin, not just ls. It is also possible that a program might not write to stdout either.
Here is a little experiment that might clarify things. Carry out these steps:
cat > file1
This is file1
^D
The ^D is you pressing <CTRL>+D, which is the default end-of-file. So first we call the cat program, redirecting its stdout to file1. If you don't supply an input filename, cat reads from stdin, so we type "This is file1".
Now do the same for file2:
cat > file2
This is file2
^D
Now if you:
cat < file1
You get:
This is file1
What if you do:
cat file1 | cat file2
or:
cat file2 < file1
Both print only "This is file2". Why? Because if you supply an input filename, the cat program does not read stdin, just like ls.
Now, how about:
cat - file1 < file2
By convention the - means a standard stream, stdin when reading or stdout when writing.
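If you try it, you should see the stdin stream (redirected from file2) first, followed by file1:

This is file2
This is file1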
The problem is that ls does not read from stdin, as you intended it to do. You need to use a tool that reads from stdin, like xargs, and feed what it reads to ls as arguments:
printf "someSampleFolderOrFile" | xargs ls
From the xargs man page:
xargs - build and execute command lines from standard input
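Applied to the original example, xargs turns the piped text into an argument for ls:

printf .. | xargs ls
# equivalent to running: ls ..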
I have a list of files under a directory, as below:
file1
file2
file3
....
....
Files will get created dynamically by a process.
Now when I do tail -f file* > data.txt,
file* matches only the files that already exist in the directory.
For example, with the existing files:
file1
file2
I do: tail -f file* > data.txt
While tail is in progress, a new file named file3 gets created.
(Here I need file3 to be included in the tail as well, without restarting the command.)
As it stands, I have to stop tail and start it again so that the dynamically created files are also tailed.
Is there a way to dynamically include files in tail whenever a new file is created, or any workaround for this?
I have an answer that satisfies most, but not all, of your requirements:
You can use
tail -f --follow=name --retry file1 file2 file3 > data.txt
This will keep trying to open the files file1, file2 and file3 until they become available. It will keep printing output even if one of the files disappears and reappears again.
Example usage:
First, create two dummy files:
echo a >> file1
echo b >> file2
Now run tail (in a separate window):
tail -f --follow=name --retry file1 file2 file3 > data.txt
Now append some data and do some other manipulations:
echo b >> file2
echo c >> file3
rm file1
echo a >> file1
Now this is the final output. Note that all three files are taken into account, even though they weren't all present the whole time:
==> file1 <==
a
==> file2 <==
b
tail: cannot open ‘file3’ for reading: No such file or directory
==> file2 <==
b
tail: ‘file3’ has become accessible
==> file3 <==
c
tail: ‘file1’ has become inaccessible: No such file or directory
==> file1 <==
a
Remark: this won't work with file*, because that is a glob pattern that is expanded before execution. Suppose you do:
tail -f file*
and only file1 and file2 are present; then tail gets as input:
tail -f file1 file2
The glob expansion cannot know which files would eventually match the pattern. So this is a partial answer: if you know all the possible names of the files that will be created, this will do the trick.
You could use inotifywait to inform you of any files created in a directory. Read the output and start a new tail -f as a background process for each new file created.
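A minimal sketch of that approach (assuming inotify-tools is installed and the new files appear in the current directory):

inotifywait -m -e create --format '%f' . | while read -r name; do
    tail -f "$name" >> data.txt &    # one background tail per new file
done

Each newly created file gets its own background tail appending to data.txt; you would still need to kill those background processes when you stop watching.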
Can someone please explain why tee works here:
echo "testtext" | tee file1 > file2
My understanding was that tee duplicates the input and prints one copy to the screen.
The above example allows the output from echo to be sent to two files, with the first redirecting to the second.
I would expect 'testtext' to be printed to the screen and to pass through file1, landing in file2, similar to how the text in the following example only ends up in file2.
echo "testtext" > file1 > file2
Can anyone explain what I am missing in my understanding?
Edit
Is it because it's writing to the file and then to stdout, which gets redirected?
Your description is right: tee receives data from stdin and writes it both into the file and to stdout. But when you redirect tee's stdout into another file, nothing is written to the terminal, because the data ends up in the second file.
Is it because it's writing to the file and then to stdout, which gets redirected?
Exactly.
What you are trying to do could be done like this (demonstrating how tee works):
$ echo "testtext" | tee file1 | tee file2
testtext
But since tee from gnu coreutils accepts several output files to be specified, one can do just:
$ echo "testtext" | tee file1 file2
testtext
But your idea of passing through file1 and landing in file2 is not correct. Your shell example:
echo "testtext" > file1 > file2
makes the shell open both file1 and file2 for writing, which effectively truncates them; and since stdout can only be redirected to one place, only the last redirection is effective (it overrides the previous ones).
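You can verify this in Bash; both files are opened and truncated, but only the last one receives the text:

echo "testtext" > file1 > file2
cat file1        # prints nothing: file1 was truncated and left empty
cat file2        # prints: testtext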
tee writes its input to each of the files named in its arguments (it can take more than one) as well as to standard output. The example could also be written
echo "testtext" | tee file1 file2 > /dev/null
where you explicitly write to the two files, then ignore what goes to standard output, rather than redirecting standard output to one of the files.
The > file2 in the command you showed does not somehow "extract" what was written to file1, leaving standard output to be written to the screen. Rather, > file2 instructs the shell to pass a file handle opened on file2 (rather than the terminal) to tee for it to use as standard output.
"is it because its writing to file and then to stdout which gets redirected?"
That is correct
tee sends output to the specified file, and to stdout.
the last ">" redirects standout to the second file specified.