Why can't the more command read stdin from the terminal, but can read from piped stdin? - linux

I have a doubt about the more command. Normally, more cannot read from stdin, but with a pipe it reads content from stdin just fine.
For example, when I try to run more so that it takes its input from stdin, it rejects it:
$ more [Enter]
Usage: more [options] file...
Options:
-d display help instead of ring bell
-f count logical, rather than screen lines
-l suppress pause after form feed
-p suppress scroll, clean screen and display text
-c suppress scroll, display text and clean line ends
-u suppress underlining
-s squeeze multiple blank lines into one
-NUM specify the number of lines per screenful
+NUM display file beginning from line number NUM
+/STRING display file beginning from search string match
-V output version information and exit
But here it takes input from piped stdin:
$ cat file.txt
This is for testing purpose
$ cat file.txt | more
This is for testing purpose
I would like to know how this happens (I mean, why does it refuse stdin from the terminal but accept piped stdin)?

more checks whether its standard input comes from a TTY or from elsewhere (a pipe, a regular file, etc.). As explained in the comments, if the input comes from a TTY, more refuses to run, because it needs the TTY to read its command keystrokes. cat, on the other hand, is not interactive and doesn't deal with the TTY explicitly, so it can afford not to care whether its input is a TTY or some other type of open file.
There are many other examples of Unix utilities behaving differently based on whether their standard input or output is a TTY. For example, ls formats its output in multiple columns, while ls | cat does not.
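You can reproduce the check from the shell: the test [ -t FD ] reports whether file descriptor FD is open on a terminal, which is essentially the isatty() check such utilities do internally. A minimal sketch (check-stdin.sh is a hypothetical name):
#!/bin/sh
# check-stdin.sh: report whether fd 0 (stdin) is a terminal
if [ -t 0 ]; then
    echo "stdin is a terminal"
else
    echo "stdin is a pipe or file"
fi
$ sh check-stdin.sh
stdin is a terminal
$ echo hi | sh check-stdin.sh
stdin is a pipe or file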

Related

Grep command Linux ordering of source string and target string

Grep command syntax is:
grep "literal_string" filename --> search for the string in filename.
So I am assuming the order is like this:
keyword (grep) --> string to be searched --> filename/source string, and the command is interpreted from left to right.
My question is how commands such as this get processed:
ps -ef | grep rman
Is the order optional?
How is grep able to know that the source is on the left and not on the right? Or am I missing something here?
When using Unix pipes, most commands take their input from the output of the previous command (to the left of the pipe) and pass their own output on to the command to the right of the pipe.
The order is important when using grep, with or without a pipe.
Thus
grep doberman /file/about/dogs
is the same as
cat /file/about/dogs | grep doberman
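grep itself never sees the pipe: when it is given a filename it reads that file, and when it is given no filename it falls back to reading its standard input, which the pipe has already connected to the previous command's output. A quick illustration:
$ printf 'poodle\ndoberman\n' | grep doberman   # no filename: grep reads stdin
doberman
$ grep doberman /file/about/dogs                # filename given: grep reads the file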
See Pipes on http://linuxcommand.org/lts0060.php for some more information.
As a step further down from Kyle's answer regarding pipes: most shell commands read their input from stdin and write their output to stdout. Many commands will also allow you to specify a filename to read from or write to, or allow you to redirect a file to stdin as input and redirect the command's stdout to a file. But regardless of how you specify what to read, the command processes input from its stdin and provides output on stdout (and errors on stderr). stdin, stdout, and stderr are the designations of file descriptors 0, 1 and 2, respectively.
This basic function is what allows commands to be piped together: a pipe (represented by the | character) does nothing more than take the stdout of the first command (on the left) and direct it to the next command's stdin. As such, yes, the order is important.
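Those descriptor numbers can be used directly in redirections. A sketch (the command and file names are just placeholders):
$ somecmd 0< in.txt 1> out.txt 2> err.txt   # fully explicit: fds 0, 1 and 2
$ somecmd < in.txt > out.txt 2>&1           # same effect; 0 and 1 are implied, 2>&1 sends stderr wherever stdout points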
Another point to remember is that each command in a pipeline runs in its own subshell. Put another way, each | causes the shell to spawn a separate process to run the following command in. This has implications if you are relying on the environment of one process in the next.
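A classic demonstration of this in bash:
$ echo hello | read var    # read runs in a subshell
$ echo "$var"              # prints an empty line: var was set in the subshell, which has exited
(This is why idioms like while read ... done < file are preferred over cat file | while read ... when the loop needs to set variables.)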
Hopefully, these answers will give you a better feel for what is taking place.

How to show full output on the Linux shell?

I have a program that runs and shows a GUI window. It also prints a lot of things to the shell. I need to view the first thing printed and the last thing printed. The problem is that when the program terminates, if I scroll to the top of the window, the stuff printed when it began has already scrolled out of the buffer, so stuff printed partway through the run is now at the top. That means I can't view the first thing printed.
I also tried redirecting with > out.txt, but the file only becomes complete and readable when I manually close the GUI window, and while output goes to the file nothing gets printed on the screen, so I have no way to know whether the program has finished. I can't modify any of the code, either.
Is there a way I can see the whole list of text printed on the shell?
Thanks
You can just use the tee command to get output/errors in a file as well as on the terminal:
your-command |& tee out.log
Though keep in mind that output through a pipe is block-buffered by default (typically 4 KiB at a time), rather than flushed line by line.
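If that delay matters, GNU coreutils includes stdbuf, which can request line buffering from programs that use the default stdio buffering (it won't help programs that manage their own buffers):
$ stdbuf -oL your-command |& tee out.log    # -oL: line-buffer the command's stdout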
When the output of a program goes to your terminal window, the program generally flushes its output after each newline. This is why you see the output interactively.
When you redirect the output of the program to out.txt, it only flushes its output when its internal buffer is full, which is probably after every 8KiB of output. This is why you don't see anything in the file right away, and you don't see the last things printed by the program until it exits (and flushes its last, partially-full buffer).
You can trick a program into thinking it's sending its output to a terminal using the script command:
script -q -f -c myprogram out.txt
This script command runs myprogram connected to a newly-allocated “pseudo-terminal” (or pty for short). This tricks myprogram into thinking it's talking to a terminal, so it flushes its output on every newline. The script command copies myprogram's output to your terminal window and to the file out.txt.
Note that script will write a header line to out.txt. I can't find a way to disable that on my test Linux system.
In the example above, I assumed your program takes no arguments. If it does, you either need to put the program and arguments in quotes:
script -q -f -c 'myprogram arg1 arg2 arg3' out.txt
Or put the program command line in a shell script and pass that shell script to the script command.
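For example, a minimal wrapper (run-myprogram.sh is just a hypothetical name):
#!/bin/sh
# run-myprogram.sh: wrap the real command line so script -c gets a single argument
exec myprogram arg1 arg2 arg3
$ chmod +x run-myprogram.sh
$ script -q -f -c ./run-myprogram.sh out.txt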

Why does the redirection symbol change the behavior of ls?

So, I have always had doubts about how redirection works in the following situations:
I type "ls" and all the filenames are separated by white spaces:
test$ touch a b c
test$ ls
a b c
I use a ">" to redirect STDOUT to a file:
test$ ls > ls.txt
test$ cat ls.txt
a
b
c
ls.txt
It is interesting to see that the format changes, with the filenames separated by newline characters. It seems that the output is generated by ls -1.
Why is the output in the latter case different from that in the former case? Can ls actually see the ">" symbol so it changes its behavior?
ls tests its output stream to see whether it is a terminal, and it modifies its behavior depending on that.
This is documented; the man page for ls documents several things that depend on whether the output is a terminal:
If the output is a terminal, -C (for multi-column output) is a default, otherwise -1 (one-column) is a default.
If -l or -s is used and the output is a terminal, a sum for all file sizes or blocks, respectively, is printed on a line before the listing.
If the output is a terminal, -q is a default. This prints non-graphic characters as “?”. Otherwise, -v and -w are defaults. I am a bit unclear on the difference between -v and -w. The documentation I have says -v forces “unedited printing of non-graphic characters” and -w forces “raw printing of non-printable characters.”
It cannot see the symbol (which is interpreted by the shell), but it can find out whether the output is going to a terminal.
In order to organize files nicely into the columns, ls needs to know the width of the terminal. When the output device is not a terminal, it just doesn't know how to format its output.
Another nice consequence of this behavior is that you can do things like ls | wc -l without worrying about multiple files on the same line. (You still have to worry about file names containing newlines, though.)
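Since these are only defaults, you can force either format explicitly, regardless of where the output goes:
$ ls -1         # one name per line, even on a terminal
$ ls -C | cat   # multi-column output, even through a pipe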
ls uses an internal variable called ls_mode, which is different for the 3 ls-"type" commands that GNU coreutils implements. For ls it's LS_LS, for dir it's LS_MULTI_COL, and for vdir it's LS_LONG_FORMAT. The ls source indicates that, depending on this variable, the output format will change. For ls, this is what it says:
If ls_mode is LS_LS, the output format depends on whether the
output device is a terminal. This is for the 'ls' program.
This is congruent with your experience of the format changing with the output destination. If you try the same with dir, though, the format won't change.
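You can see the difference directly: dir keeps its multi-column format even when the output is a pipe (using the a, b, c files from the question above):
$ dir | cat     # still multi-column: a  b  c
$ ls | cat      # one name per line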

Program dumps data to stdout fast. Looking for a way to write commands without getting flooded

The program is dumping to stdout, and while I try to type new commands I can't see what I'm writing because it gets thrown in along with the output. Is there a shell that separates commands and output? Or can I use two shells, where I run commands in one and make the program dump to the stdout of the other?
You can redirect the output of the program to another terminal window. For example:
program > /dev/pts/2 &
The style of terminal name may depend on how your system is organized.
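To find the device name of that other window, run tty in it:
$ tty                       # in the terminal that should receive the output
/dev/pts/2
$ program > /dev/pts/2 &    # then start the program from your working terminal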
There's 'more' to let you paginate through output, and 'tee', which lets you split a program's output so it goes both to stdout and to a file:
$ yourapp | more                      # show in page-sized chunks
$ yourapp | tee output.txt            # flood to stdout, but also save a copy in output.txt
and best of all
$ yourapp | tee output.txt | more     # paginate + save a copy
Either redirect standard output and error when you run the program, so it doesn't bother you:
./myprog >myprog.out 2>&1
or, alternatively, run a different terminal to do your work in. That leaves your program free to output whatever it likes to its terminal without bothering you.
Having said that, I'd still capture the information from the program to a file in case you have to go back and look at it.

How can I make Bash automatically pipe the output of every command to something like tee?

I use some magic in $PROMPT_COMMAND to automatically save every command I run to a database:
PROMPT_COMMAND='save_command "$(history 1)"'
where save_command is a more complicated function. It would be nice to save also the head/tail of the output of each command, but I can't think of a reasonable way to do this, other than manually prepending some sort of shell function to everything I type (and this becomes even more painful with complicated pipelines or boolean expressions). Basically, I just want the first and last 10 lines of whatever went to /dev/tty to get saved to a variable (or even a file) - is there any way to do this?
script(1) will probably get you started. It won't let you just record the first and last 10 lines, but you can do some post-processing on its output.
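For example, a rough post-processing sketch (session.out is an arbitrary file name):
$ script -q -c 'yourcommand' session.out
$ head -n 10 session.out; echo '[...]'; tail -n 10 session.out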
bash | tee /dev/tty ./bashout
All of the session's stdout gets saved to ./bashout, while tee /dev/tty keeps displaying it on your terminal.
bash | tee /dev/tty | tail > ./bashout
The last 10 lines of the session's stdout get written to ./bashout.
bash | tee /dev/tty | sed -e :a -e '10p;$q;N;11,$D;ba' > ./bashout
The first and last 10 lines of the session's stdout get written to ./bashout.
These don't save the command, but if you modify your save_command to print the command to stdout, it will get in there.
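For example, a hypothetical sketch of that modification:
save_command() {
    printf '$ %s\n' "$1"    # echo the command to stdout so tee/tail/sed capture it too
    # ... plus whatever the real save_command does with its database ...
}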
