How to stop writing to log file in BASH? - linux

I'm using this code to write all of the terminal's output to a file:
exec > >(tee -a "${LOG_FILE}" )
exec 2> >(tee -a "${LOG_FILE}" >&2)
How do I tell it to stop? For example, what if I don't want some output to end up in the log?
Thank you!

It's not completely clear what your goal is here, but here is something to try.
Do you know about the script utility? You can run script myTerminalOutputFile and any commands and output after that will be captured to myTerminalOutputFile. The downside is that it captures everything. You'll see funny screen control chars like ^[[5m;27j. So do a quick test to see if that works for you.
To finish capturing output, just type exit and you are returned to your parent shell command line.
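A typical session looks something like this (using the filename from above):
script myTerminalOutputFile   # starts a capture shell; everything displayed is recorded
ls -l                         # run whatever commands you want captured
exit                          # ends the capture and returns you to the parent shell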
Warning: check man script for details about the inherent funkiness of your particular OS's version of script. Some are better than others. script -k myTerminalOutputFile may work better in this case.
IHTH

You may pipe the output through sed to filter out the lines you don't want, and then tee to the log file.
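For example, a minimal sketch of that idea (the do-not-log pattern is hypothetical):
some_command | sed '/do-not-log/d' | tee -a "${LOG_FILE}"
Note that this filters the matching lines from the terminal as well as the log; to keep them out of the log only, move the filter into a process substitution:
some_command | tee >(sed '/do-not-log/d' >> "${LOG_FILE}")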

Related

redirect sh script output to two logs, one with -x and one without

Is it possible to run a script once and have it write to two logs? One log would be detailed and would have the output of the -x option. The other log would not be detailed and would have just the regular output, without the -x option.
set -x has this wonderful thing where its output is actually not printed to stdout but rather to stderr.
Thus if you have
myscript.bash 1> log 2> xlog
with set -x in the script, then log contains your regular output and xlog contains the output of the debugging commands (the + <cmd> lines) as well as any errors.
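A toy script makes the split easy to see:
#!/bin/bash
set -x
echo "regular output"   # the echoed string goes to stdout; the + trace line for this command goes to stderr
Running it with the redirections above puts "regular output" into log and the + trace lines into xlog.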
If you want one file containing all output and another with just the regular output, this is more difficult, and I think it will require editing the log at the end. Here is a wrapper script that accepts another script to run:
#!/bin/bash
set -x                               # trace every command in the sourced script
. "$1" &> full.log                   # run the script, capturing stdout, stderr, and the trace
set +x
sed '/^+.*/d' full.log > out.log     # strip the trace lines (those starting with +)
where you call
wrap.sh myscript.sh
This has the disadvantage that any line starting with + is deleted, which may or may not be enough. There may be a better solution with process substitution but I cannot think of one that preserves the order between stdout and stderr.

wget's console output to variable

I have a shell script with which I want to automate downloading tasks. I would like to capture the command's output in a variable; the command is defined as follows.
var=`wget --ftp-user=MyName --ftp-password=MyPassword --directory-prefix=/home/pi/Desktop/FTP_File/ ftp://202.xx.xx.xx/VideoFiles/Video_1.mp4 2>&1`
echo check "$var"
I achieved that by adding 2>&1 at the end of the line and putting the command in backticks (``). I would like to know what 2>&1 means, and whether there is any other way to achieve this.
Have a look at this: http://www.learnlinux.org.za/courses/build/shell-scripting/ch01s04.html
Any program that is written must have some error checking and it should output some message if any error occurs.
It is a standard practice to output error messages on stderr, informative messages on stdout.
2>&1 redirects stderr (file descriptor 2) to wherever stdout (file descriptor 1) currently points, so both the informative and the error output of a command are captured together.
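As for another way to do it: the modern equivalent of backticks is $(...), which quotes and nests more cleanly. A minimal sketch, reusing the command from the question:
var=$(wget --ftp-user=MyName --ftp-password=MyPassword --directory-prefix=/home/pi/Desktop/FTP_File/ ftp://202.xx.xx.xx/VideoFiles/Video_1.mp4 2>&1)
echo "check: $var"
wget writes its progress messages to stderr, so without the 2>&1 the variable would stay empty while the messages still appeared on the terminal.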
Hope I have answered your question.

shell prompt seemingly does not reappear after running a script that uses exec with tee to send stdout output to both the terminal and a file

I have a shell script that writes all output to a logfile and to the terminal; this part works fine. But when I execute the script, a new shell prompt appears only after I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt, it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put the whole body of your script inside a group command:
{
your-code-here
} | tee logfile
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # make a backup of the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() {                 # define a function we'll call during shutdown
  retval=$?
  exec >&$orig_stdout       # copy your original stdout back to FD 1, overwriting the pipe to tee
  exec 2>&$orig_stderr      # if something redirected stderr to also go through tee, fix that too
  wait "$tee_pid"           # now, wait until tee exits
  exit "$retval"            # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"

Why do some programs write their output to stderr instead of stdout?

I've recently added an ssh-add command to my .bashrc file. I found that
ssh-add $HOME/.ssh/id_rsa_github > /dev/null
results in an "identity added" message (and something else) every time I open a shell.
While
ssh-add $HOME/.ssh/id_rsa_github > /dev/null 2>&1
did the trick and my shell is now 'clean'.
Reading on the internet, I found that other commands do this too (for example, time). Could you please explain why it's done?
When you redirect output from one process to another, e.g. via pipes,
$ procA | procB | procC
this is traditionally done using stdout. I would expect time and similar commands to output to stderr to avoid corrupting this stream. If you're using time then you're diagnosing issues and you don't want to inadvertently provide extra input to a downstream process.
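You can see this with bash's time keyword:
time ls | wc -l    # wc counts only the lines from ls; the timing report goes to stderr
Because the timing lines arrive on stderr, they appear on the terminal but never reach wc, so the data flowing through the pipe stays clean.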
This article includes some further detail and some history surrounding this.

Use of tee command promptly even for one command

I am new to using the tee command.
I am trying to run one of my programs, which takes a long time to finish but prints information as it progresses. I am using tee to save the output to a file as well as to see it in the shell (bash).
But the problem is that tee doesn't show any output in the shell until my command has finished.
Is there any way to do that ?
I am using Debian and bash.
This actually depends on the amount of output and on the implementation of the command you are running. No program is obliged to print straight to stdout or stderr and to flush it all the time. Most C runtime implementations flush after a certain amount of data has been written through routines such as printf, but the details vary; in particular, stdio typically line-buffers output going to a terminal and fully buffers output going to a pipe, which is why a program that prints promptly on its own can seem silent when piped through tee.
If tee doesn't output anything right away, it is likely only receiving the input at the very end of your command's run. It might be helpful to mention which exact command it is.
The problem you are experienced is most probably related to buffering.
You may have a look at stdbuf command, which does the following:
stdbuf - Run COMMAND, with modified buffering operations for its standard streams.
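For example, assuming your long-running program is called myprog (a hypothetical name), forcing line-buffered stdout often makes tee display output promptly:
stdbuf -oL ./myprog | tee output.log
This helps only for programs that use C stdio with default buffering; programs that manage their own buffers are unaffected.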
If you were to post your usage I could give a better answer, but as it is,
(for i in `seq 10`; do echo $i; sleep 1s; done) | tee ./tmp
is proper usage of the tee command and seems to work. Replace the part before the pipe with your command and you should be good to go.
