Redirect output from command to terminal and a file without using tee - linux

As the name implies, I want to log the output of a command to a file without changing the terminal behavior. I still want to see the output and importantly, I still need to be able to do input.
The input requirement is why I specifically cannot use tee. The application I am working with doesn't handle input properly when using tee. (No, I cannot modify the application to fix this). I'm hoping a more fundamental approach with '>' redirection gets around this issue.
Theoretically, this should work exactly as I want, but, like I said, it does not.
command | tee -a foo.log
Also notice I added the -a flag. It's not strictly required, since I can work around it, but it would be nice to have appending as well.

Use script.
% script -c vi vi.log
Script started, output log file is 'vi.log'.
... inside vi - fully interactive - saving and quitting takes you back to:
Script done.
% ls -l vi.log
-rw-r--r-- 1 risner risner 889 Sep 13 15:16 vi.log
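Applied to the question above, something along these lines should keep the command fully interactive while logging; a sketch assuming the util-linux script, where -a appends to the log and -c runs a single command:
script -a -c 'command' foo.log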

Cli colors disappear when piping into a text file [duplicate]

This question already has answers here: How to trick an application into thinking its stdout is a terminal, not a pipe (9 answers). Closed 5 years ago.
Various bash commands I use -- fancy diffs, build scripts, etc. -- produce lots of color output.
When I redirect this output to a file, and then cat or less the file later, the colorization is gone -- presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
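For example, a minimal session might look like this (typescript is script's default output file):
script                 # starts a new shell, recording everything to ./typescript
ls --color=always      # run whatever colourful commands you like
exit                   # leave the recorded shell
cat typescript         # the colour codes are still in the capture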
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. edit: Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode because less can tell which things are control codes and which are actually characters going to the screen.
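For example, to page through the capture from above with the colour codes interpreted:
less -R typescript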
Inspired by the other answers, I started using script. I had to use -c to get it working, though. None of the other suggestions, including tee and the various script examples, worked for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell command during the test with python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed, because otherwise the output is hard to observe live; it arrives in big chunks
--quiet suppresses script's own messages
-c, --command directly provides the command to execute; piping from my command into script did not work for me (no colors)
--return to make script propagate the exit code of my command so I know if my command has failed
I found that using script to preserve colors when piping to less doesn't really work (less is all messed up and on exit, bash is all messed up) because less is interactive. script seems to really mess up input coming from stdin even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they detect that the output is not a TTY (i.e. when you redirect it into another program). You can tell some of those to force color output, and tell the pager to interpret the color codes, for example with less -R.
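For example, forcing colour through a pipe and letting the pager interpret it (pattern and the paths are only placeholders):
grep --color=always -r pattern . | less -R
ls --color=always | less -R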
This question over on superuser helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's running from a shell.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
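A sketch of how unbuffer fits in; the apt invocation is only an illustration:
unbuffer apt list --upgradable 2>&1 | tee apt.log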
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time) then just send the output of tee to /dev/null:
command | tee filename > /dev/null

Appending to file with sudo access

I am trying to append line to an existing file owned by root and I have to do this task with about 100 servers. So I created servers.txt with all the IPs and the ntp.txt file which will have the lines that I need to append. I am executing the following script and I am not achieving what I am trying to. Can someone please suggest what needs to be corrected?
!/bin/bash
servers=`cat servers.txt`;
for i in $servers;
do
cat ntp.txt | ssh root@${i} sudo sh -c "cat >>ntp.conf""
done
Here are some issues; not sure I found all of them.
The shebang line lacks the # which is significant and crucial.
There is no need to read the server names into a variable, and in addition to wasting memory, you are exposing yourself to a number of potential problems; see https://mywiki.wooledge.org/DontReadLinesWithFor
Unless you specifically require the shell to do whitespace tokenization and wildcard expansion on a value, put it in double quotes (or even single quotes, but this inhibits variable expansion, which you still want).
If you are logging in as root, no need to explicitly sudo.
ssh runs a shell for you; no need to explicitly sh -c your commands.
On the other hand, you want to avoid running a root shell if you can. A common way to append to a file without having to spawn a shell just to perform the redirection is to use tee -a (just drop the -a to overwrite instead of append). Printing the file to standard output is an undesired effect (some would say the main effect rather than a side effect of tee, but let's just not go there), so you often redirect to /dev/null to avoid having the text also spill onto your screen.
You probably want to avoid a useless use of cat if only just to avoid having somebody point out to you that it's useless.
#!/bin/bash
while read -r server; do
    ssh you@"$server" sudo tee -a /etc/ntp.conf <ntp.txt >/dev/null
done <servers.txt
I changed the code to log in as you but it's of course something you will need to adapt to suit your environment. (If you log in as yourself, you usually just ssh server without explicitly specifying a user name.)
As per your comment, I also added a full path to the destination file /etc/ntp.conf
A more disciplined approach to server configuration is to use something like CFengine2 to manage configurations.

Write to STDERR by filename even if not writable for the user

My user does not have write permissions for STDERR:
user@host:~> readlink -e /dev/stderr
/dev/pts/19
user@host:~> ls -l /dev/pts/19
crw--w---- 1 sysuser tty 136, 19 Apr 26 14:02 /dev/pts/19
It is generally not a big issue,
echo > /dev/stderr
fails with
-bash: /dev/stderr: Permission denied
but usual redirection like
echo >&2
works alright.
However I now need to cope with a 3rd-party binary which only provides logging output to a specified file:
./utility --log-file=output.log
I would like to see the logging output directly in STDERR.
I cannot do it the easy way like --log-file=/dev/stderr due to missing write permissions. On other systems where the write permissions are set this works alright.
Furthermore, I also need to parse output of the process to STDOUT, therefore I cannot simply send log to STDOUT and then redirect to STDERR with >&2. I tried to use the script utility (where the redirection to /dev/stderr works properly) but it merges STDOUT and STDERR together as well.
You can use a Bash process substitution:
./utility --log-file=>(cat>&2)
The substitution will appear to the utility like --log-file=/dev/fd/63, which can be opened. The cat process inherits fd 2 without needing to open it, so it can do the forwarding.
I tested the above using chmod -w /dev/stderr and dd if=/etc/issue of=/dev/stderr. That fails, but changing to dd if=/etc/issue of=>(cat>&2) succeeds.
Note that your error output may suffer more buffering than you would necessarily want/expect, and will not be synchronous with your shell prompt. In other words, your prompt may appear mixed in with error output that arrives at your terminal after utility has completed. The dd example will likely demonstrate this. You may want to append ;wait after the command to ensure that the cat has finished before your PS1 prompt appears: ./utility --log-file=>(cat>&2); wait
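Combining that with the requirement to also parse standard output might look like this, where parse_stdout stands in for whatever actually consumes the regular output:
./utility --log-file=>(cat >&2) | parse_stdout; wait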

Limit output of all Linux commands

I'm looking for a way to limit the amount of output produced by all command line programs in Linux, and preferably tell me when it is limited.
I'm working over a server which has a lag on the display. Occasionally I will accidentally run a command which outputs a large amount of text to the terminal, such as cat on a large file or ls on a directory with many files. I then have to wait a while for all the output to be printed to the terminal.
So is there a way to automatically pipe all output into a command like head or wc to prevent too much output having to be printed to terminal?
I don't know about the general case, but for each well-known command (cat, ls, find?)
you could do the following:
hard link the existing utility to a new name
write a tiny bash function that calls the utility and pipes to head (or wc, or whatever)
alias the name of the utility to call your function.
So along these lines (utterly untested):
$ ln `which cat` ~/bin/old_cat
function trunc_cat () {
    old_cat "$@" | head -n 100
}
alias cat=trunc_cat
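With those lines in an interactive shell's startup file, a plain cat on a large file is then quietly truncated; the path here is just an example:
cat /var/log/syslog    # actually runs trunc_cat, so at most 100 lines appear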
Making aliases (or, where arguments are needed, small functions) of all your commands would be a good start. Something like
alias lm="ls -al | more"
cam() { cat "$@" | more; }
Perhaps using screen could help?
This makes me think of bash's hook for commands that are not found.
Since bash lets you define a command_not_found_handle function that is invoked whenever a program cannot be found,
what about writing your own handler and clearing $PATH, so that every external command is run with its output redirected through a filtering pipe?
(I did not try it myself; a rough sketch follows.)
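A very rough, untested sketch of that idea; the 100-line limit and the SAVED_PATH name are arbitrary, and this will break interactive and full-screen programs, so treat it purely as an illustration:
command_not_found_handle() {
    # Re-run the original command with the real PATH restored,
    # limiting whatever it prints to the first 100 lines
    local PATH="$SAVED_PATH"
    "$@" | head -n 100
}
# Save the real search path, then empty PATH so every external
# command falls through to the handler above
SAVED_PATH=$PATH
PATH=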
Assuming you're working over a network connection, like ssh, into a remote server then try piping the output of the command to less. That way you can manage and navigate the output from the program on the server better. Use 'j' and 'k' to move up and down per line and 'ctrl-u' and 'ctrl-d' to move 1/2 a page up and down. When you do this only the relevant text (i.e. what fits on the screen) will be transmitted over the network.
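For example, instead of letting a huge listing flood the slow link (the directory is just an example):
ls -alR /usr/share | less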

Commands work from Shell script but not from command line?

I quickly searched for this before posting, but could not find any similar posts. Let me know if they exist.
The commands being executed seem very simple. A directory listing is used as the input for a function.
The directory contains a bunch of files named "epi1_mcf_0###.nii.gz"
Command-line version (bash is running when this is executed):
fslmerge -t output_file `ls epi1_mcf_0*.nii.gz`
Shell script version:
#!/bin/bash
fslmerge -t output_file `ls epi1_mcf_0*.nii.gz`
The command-line version fails, but the shell script one works perfectly.
The error message is specific to the function, but it's included anyway.
** ERROR (nifti_image_read): failed to find header file for 'epi1_mcf_0000.nii.gz'
** ERROR: nifti_image_open(epi1_mcf_0000.nii.gz): bad header info
Error: failed to open file epi1_mcf_0000.nii.gz
Cannot open volume epi1_mcf_0000.nii.gz for reading!
I have been very frustrated with this problem (less so after I figured out that there was a way to get the command to work).
Any help would be appreciated.
(Or is the general consensus that the problem should be looked for in the "fslmerge" function?)
`ls epi1_mcf_0*.nii.gz` is better written as simply epi1_mcf_0*.nii.gz. As in:
fslmerge -t output_file epi1_mcf_0*.nii.gz
The `ls` doesn't add anything.
Note: Posted as an answer instead of comment. The Markdown-lite comment parser choked on my `` `ls epi1_mcf_0*.nii.gz` `` markup.
(I mentioned this in a comment first, but I'll make an answer since it helped!)
Do you have any shell aliases defined? (Type alias) Those will affect commands typed at the command line, but not scripts.
Linux often has ls defined as ls --color. This may affect the output since the colour codes are sent as escape codes through the regular output stream. If you use ls --color=auto it will auto-detect whether its output is a terminal or not. From man ls:
By default, color is not used to distinguish types of files. That is
equivalent to using --color=none. Using the --color option without the
optional WHEN argument is equivalent to using --color=always. With
--color=auto, color codes are output only if standard output is connected to a terminal (tty).
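If an ls alias turns out to be the culprit, bypassing it on the command line should reproduce the script's behaviour, since a leading backslash suppresses alias expansion for that one invocation:
fslmerge -t output_file `\ls epi1_mcf_0*.nii.gz`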
