I'm looking for a way to limit the amount of output produced by all command-line programs in Linux, and preferably to be told when the output has been truncated.
I'm working on a server over a connection with noticeable display lag. Occasionally I accidentally run a command which outputs a large amount of text to the terminal, such as cat on a large file or ls on a directory with many files. I then have to wait a while for all the output to be printed to the terminal.
So is there a way to automatically pipe all output into a command like head or wc to prevent too much output having to be printed to the terminal?
I don't know about the general case, but for each well-known command (cat, ls, find?)
you could do the following:
hard-link a copy of the existing utility under a new name
write a tiny bash function that calls the utility and pipes to head (or wc, or whatever)
alias the name of the utility to call your function.
So along these lines (utterly untested):
$ ln "$(which cat)" ~/bin/old_cat
function trunc_cat () {
    old_cat "$@" | head -n 100
}
alias cat=trunc_cat
Making aliases of all your commands would be a good start. Something like
alias lm="ls -al | more"
cam () { cat "$@" | more; }   # aliases can't take arguments, so this one needs a function
Perhaps using screen could help?
This makes me think of bash's command_not_found_handle.
Since bash lets you define a handler function that runs when a program is not found, what about writing your own handler and clearing $PATH, so that every command is executed through it with its output redirected into a filtering pipe?
(I did not try this myself.)
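A minimal, untested sketch of that idea (the PATH values and the 100-line cutoff are illustrative):
command_not_found_handle () {
    local cmd="$1"; shift
    # re-run the command with a restored PATH and truncate its output
    PATH=/usr/local/bin:/usr/bin:/bin "$cmd" "$@" | /usr/bin/head -n 100
}
PATH=""   # risky: every external command now falls through to the handler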
Assuming you're working over a network connection, like ssh, into a remote server then try piping the output of the command to less. That way you can manage and navigate the output from the program on the server better. Use 'j' and 'k' to move up and down per line and 'ctrl-u' and 'ctrl-d' to move 1/2 a page up and down. When you do this only the relevant text (i.e. what fits on the screen) will be transmitted over the network.
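For example (the directory is just an illustration):
$ ls -al /var/log | less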
Various bash commands I use -- fancy diffs, build scripts, etc. -- produce lots of color output.
When I redirect this output to a file, and then cat or less the file later, the colorization is gone -- presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
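A minimal session sketch (the filename is arbitrary; script defaults to typescript if you omit it):
$ script capture.txt      # starts a subshell; everything below is recorded
$ ls --color=always       # run your colorful commands
$ exit                    # end the recording
$ less -R capture.txt     # replay the file with the colors intact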
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. edit: Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode because less can tell which things are control codes and which are actually characters going to the screen.
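A quick illustration of the difference (file names are arbitrary):
$ ls --color=auto > plain.txt      # ls sees a file, not a tty, so no color codes
$ ls --color=always > colored.txt  # color codes are forced into the file
$ less -R colored.txt              # view with the colors rendered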
Inspired by the other answers, I started using script. I had to use -c to get it working, though; none of the other approaches (tee, the various plain script examples) worked for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell commands during the test with Python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed, because otherwise the output arrives in big chunks and is hard to observe live
--quiet suppresses script's own output
-c, --command directly provides the command to execute; piping from my command into script did not work for me (no colors)
--return makes script propagate the exit code of my command, so I know whether my command failed
I found that using script to preserve colors when piping to less doesn't really work (less is all messed up, and on exit bash is all messed up too) because less is interactive. script seems to really mess up input coming from stdin, even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they realize the output is not a TTY (e.g. when you redirect them into another program). You can tell some of them to use color forcefully, and tell the pager to render the colorization, for example with less -R.
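For example, GNU grep has such a flag:
$ grep --color=always 'error' logfile | less -R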
This question over on superuser helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's running from a shell.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
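A usage sketch (the wrapped command is just an example): because unbuffer makes ls think it is writing to a terminal, --color=auto still produces color codes even through the pipe:
$ unbuffer ls --color=auto | less -R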
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time) then just send the output of tee to /dev/null:
command | tee filename > /dev/null
How can I make output like htop's?
That is to say, output where the information is continually refreshed.
I had thought of something like this, but I don't think it's completely right and I'm sure there are better ways to do it.
while true; do
    ls
    sleep 3
    clear
done
I would like to do it with bash but I am open to other proposals.
You can use the watch utility for such a task:
$ watch -n 10 script.sh
The above command will execute script.sh every 10 seconds; you can change the interval with the -n option.
or for your command you can do it like this:
$ watch -n 3 ls
htop uses ncurses (https://www.gnu.org/software/ncurses/) which is a C library developed for making terminal interfaces.
If you'd like to stick with plain old bash, you could try echoing some output, then echoing backspace \b characters and spaces to overwrite it, followed by echoing the new output again. You can also make use of system tools like tput to get terminal dimensions and things of that nature. I would suggest not going the bash route, however.
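A rough sketch of the plain-bash approach (untested; the commands being refreshed are arbitrary):
while true; do
    tput cup 0 0    # move the cursor home without clearing the whole screen
    ls
    tput ed         # clear from the cursor to the end of the screen (removes leftovers)
    sleep 3
done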
I learned that the tee command will store STDOUT to a file as well as output it to the terminal.
But the problem there is that I have to add tee to every command I run.
Is there any way or tool in Linux so that whatever I run in the terminal, it stores the command as well as its output? (I used tee in MySQL, where it stores all the commands and output of the entire session to a file. I am expecting a tool similar to that.)
Edit:
When I run script -a log.txt, I see ^M characters as well as ^[ and ^] characters in the log.txt file. I used various dos2unix, :set ff=unix, :set ff=dos commands, but they didn't help me remove these ^[, ^] characters.
Is there any method to directly get a plain text file (without these extra chars)?
OS: RHEL 5
You can use the script command, which writes everything to a file:
script -f log.txt
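To the edit about the stray ^M, ^[ and ^] characters: a common cleanup (it won't catch every possible control sequence) is to strip ANSI escape sequences with sed and carriage returns with tr:
$ sed 's/\x1b\[[0-9;]*[A-Za-z]//g' log.txt | tr -d '\r' > log-clean.txt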
You could use aliases, like alias ls="ls; echo ls >>log", so every time you run ls it runs echo ls >>log too.
But script would probably be better in this case; just don't go into vi while you are in script.
I know history will capture commands that I run, but it is shell-specific. I work with multiple shells and multiple hosts and would like to write a small script which, after every command I run, dumps that command to some file along with the host name. This way, I can implement my own history command which reads from that file and can take a host as an argument, which would be handy for me. I'm not sure how to get the first part though, i.e., getting every shell command I type to trigger the "dump that command into a file" part. Any ideas?
Thanks
In bash, the PROMPT_COMMAND shell variable contains a command that will be executed before the PS1 prompt is displayed. So yours could be something like:
history | tail -n1 | perl -npe 's/^\s+\d+\s+//' | yourcommand HOST
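A sketch of that idea as a shell function (the log path and format are illustrative, not part of the original answer):
log_last_command () {
    # strip the leading history number, prefix the hostname, append to a shared log
    history 1 | sed 's/^ *[0-9]* *//' \
        | awk -v host="$(hostname)" '{ print host ": " $0 }' >> ~/.all_history
}
PROMPT_COMMAND=log_last_command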
The script utility should solve your problem. It records everything you type and all that is printed on the terminal in a file (even including terminal control codes, so if you cat that file on the console, you even reproduce the original text colors).
Program is dumping to stdout and while I try to type new commands I can't see what I'm writing because it gets thrown along with the output. Is there a shell that separates commands and outputs? Or can I use two shells where I can run commands on one and make it dump to the stdout of another?
You can redirect the output of the program to another terminal window. For example:
program > /dev/pts/2 &
The style of terminal name may depend on how your system is organized.
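To find the device name of the window you want the output to appear in, run tty there:
$ tty
/dev/pts/2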
There's 'more' to let you paginate through output, and 'tee', which lets you split a program's output so it goes both to stdout and to a file.
$ yourapp | more                    # show in page-sized chunks
$ yourapp | tee output.txt          # flood to stdout, but also save a copy in output.txt
and best of all
$ yourapp | tee output.txt | more   # paginate + save a copy
Either redirect standard output and error when you run the program, so it doesn't bother you:
./myprog >myprog.out 2>&1
or, alternatively, run a different terminal to do your work in. That leaves your program free to output whatever it likes to its terminal without bothering you.
Having said that, I'd still capture the information from the program to a file in case you have to go back and look at it.