On Linux, is there a way to see the output of all commands in a bash session and simultaneously store it to a file, WITHOUT having to pipe anything? I know I could do something like
ls -al | tee output.log
but I just want all output always to be stored in a log so I can look into it even after a few days. I don't want to have to add the pipe with each command.
You might want the script command. When you run it, a new shell session is started, and both input and output are recorded to the file you specify.
Example:
script my_log.txt
# run your commands
exit
A record of your commands is stored in my_log.txt.
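If you want one log that keeps growing across sessions, script can append instead of overwriting (the -a flag should be available in common script implementations):
script -a my_log.txt   # -a appends to an existing log instead of truncating it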
Alternatively, you can append both stdout and stderr to a log file on a per-command basis:
ls -al >> your_file.log 2>&1
What I know and what I've tried: I have an R script (called GAM.R) that I want to run in the background; it outputs .rdata, .pdf, and .jpg files. Running this from the command line is relatively simple:
$ Rscript GAM.R
However, this code takes a very long time to run, so I would love to send it to the background and let it work even after I have logged out and turned the computer off. I understand this is pretty easy as well, and my code would look like this:
$ nohup Rscript GAM.R >/dev/null 2>&1 &
I used this to see if it was working:
$ fg
nohup Rscript GAM.R > /dev/null 2>&1
The problem: I don't know how to check whether the code is working (is there a way to see its progress?), and I don't know where the outputs are going. I can see the progress and output with the first command, so I must not be too far off, but the second command's outputs don't seem to go where the first command's outputs went.
Your command line is diverting all output to /dev/null, a.k.a. the bit bucket.
Consider diverting it to a temporary file:
$ nohup Rscript GAM.R >/tmp/GAM.R.output 2>&1 &
Then you can tail /tmp/GAM.R.output to see the results; it will show the last 10 lines of the file by default. You can use tail -f to show the end of the file plus new output in real time.
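For example, using the log path from the command above:
tail /tmp/GAM.R.output      # last 10 lines of the log
tail -f /tmp/GAM.R.output   # follow new output as it arrives (Ctrl-C to stop)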
Note that the /tmp/ filesystem is not guaranteed to persist across reboots. You can put the file somewhere else (like ~/GAM.R.output) if you need to be sure.
Note, however, that if you turn your computer off, all processing gets aborted. For this to work, your machine must continue to run and not go to sleep or shut down.
With the > you are redirecting the output of your script to /dev/null, and with 2>&1 you are redirecting the error output to the same place. Finally, nohup executes your process and detaches it from the current terminal.
So, to sum up, you are creating a process and redirecting its output and error output to the special file null under /dev, which discards everything written to it.
To answer your question, I suggest you redirect your output to a file in a folder you can access as a normal user rather than as the superuser. Then, to make sure everything is OK, you can print that file.
For example you can do :
nohup Rscript GAM.R >/home/username/documents/output_file 2>&1 &
and then to see the file from a terminal you can do:
cat /home/username/documents/output_file
Lastly, I don't think your program will keep running if you turn off your PC, and I don't think there is a way to do that.
If you want to run your program in the background and access its output, you can do that by writing:
exec 3< <(Rscript GAM.R)
And then when you wish to check the output of the program run
cat <&3 # or you can use 'cat /dev/fd/3'
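Note that each read from fd 3 consumes that output, and cat will keep reading until the script finishes. A minimal sketch of the full lifecycle, assuming bash:
exec 3< <(Rscript GAM.R)   # start the script; fd 3 reads its stdout
# ... do other work ...
cat <&3                    # print output gathered so far; blocks until the script ends (EOF)
exec 3<&-                  # close fd 3 when you are done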
Excellent! Thanks, everyone, for your helpful answers, particularly @Greg Tarsa. Ultimately I needed to use:
$ nohup Rscript GAM.R >/usr/emily/gams/2017_03_14 2>&1 &
The above runs the script and saves the screen output to /usr/emily/gams (in a file called "2017_03_14", created by the command; it is a file, not a folder, as I had originally thought). The script also writes my .rdata, .pdf, and .jpg output files to /usr/emily.
Then I can see progress and running programs using:
$ tail -f 2017_03_14 # shows the last 10 lines of the log, then follows new output
$ ps # shows your running processes
$ ps -fu emily # shows running processes regardless of session, where username == emily
In the spirit of completeness, I can also note here that to cancel a process, you can use:
$ kill -HUP processid #https://kb.iu.edu/d/adqw
I need to run (in bash) a .txt file containing a bunch of commands written to it by another program, at a specific time, using at. Normally I would run this with bash myfile.txt, but if I try at bash myfile.txt midnight it doesn't like it, saying:
syntax error. Last token seen: b
Garbled time
How can I sort this out?
Try this instead:
echo 'bash myfile.txt' | at midnight
at reads commands from standard input or from a specified file (with -f filename), not from the command line.
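You can also point at at the file directly with its -f option. One caveat: at runs jobs with /bin/sh, so this is only equivalent to bash myfile.txt if the file sticks to POSIX sh syntax:
at -f myfile.txt midnight   # at feeds the file's commands to /bin/sh at midnight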
I was wondering if it is possible to have all the output of a script I have made go to a log file when one of the variables in the script is changed. For example, a variable createLog=true could be set in the script to enable logging.
I know I can do ./myscript.sh 2>&1 | tee sabs.log
But I would like to be able to simply run ./myscript.sh
and have the whole script logged in a file, as well as output to the console if the var is set to true.
Would I have to change every command in the script to accomplish this, or is there a command I can execute at the beginning of the script that will send output to both?
If you need more details please let me know.
Thanks!
exec without a command argument lets you redirect for the remainder of the current script.
exec >log 2>&1
You can't tee within a plain redirect like this, but you can display the file with a background job.
tail -f log &
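Putting the pieces together, here is a minimal sketch of the whole idea, assuming bash and reusing the createLog variable and sabs.log filename from the question. The tail is started before the exec so it still prints to the console:
#!/bin/bash
createLog=true                  # variable from the question

if [ "$createLog" = true ]; then
    : > sabs.log                # create (or empty) the log file
    tail -f sabs.log &          # started before exec, so it still writes to the console
    tailpid=$!
    trap 'kill "$tailpid"' EXIT # stop the background tail when the script exits
    exec >>sabs.log 2>&1        # from here on, all output is appended to the log
fi

echo "this line lands in sabs.log and, via tail, on the console"
In practice the trap can kill the tail before its final lines flush; in bash you could instead use exec > >(tee sabs.log) 2>&1 to get the same effect without the background job.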
I am trying to run the following command:
postfix status > tmp
however the resulting file never has any content written, and instead the output is still sent to the terminal.
I have tried adding the following into the mix, and even piping to echo before redirecting the output, but nothing seems to have any effect:
postfix status 2>&1 > tmp
Other commands work with no problem.
Use script, which captures output written directly to the terminal rather than to stdout:
script -c 'postfix status' -q tmp
It looks like it writes to the terminal instead of to stdout. I don't understand piping to 'echo'; did you mean piping to 'cat'?
I think you can always use the 'script' command, which logs everything you see on the terminal. You would run 'script', then your command, then exit.
Thanks to another SO user, who has since deleted their answer (so now I can't thank them), I was put on the right track. I found the answer here:
http://irbs.net/internet/postfix/0211/2756.html
So for those who want to catch the response of postfix, I used the following method.
Create a script that causes the output to go where you wish. I did that like this:
#!/bin/sh
cat <<EOF | expect 2>&1
set timeout -1
spawn postfix status
expect eof
EOF
Then I ran the script (say, script.sh) and could pipe/redirect from there, i.e. script.sh > file.txt
I needed this for PHP so I could use exec and actually get a response.
I am trying to capture all the input and output from a bash script that I created for installing Nagios. I have it creating the log file using tee right now, but the log only shows output, such as echo statements or the output of commands like "service httpd restart". I mainly want the log file to also capture the input the user enters, for future reference.
The script command, run prior to your program, will capture all input and output to a file you specify. It terminates with Ctrl-D.
script -c yourprogram filename
may do what you're looking for. See the man page for script.
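If the installer is interactive and you want the user's keystrokes recorded too, you can also run script without -c around the whole session. A short sketch; install_nagios.sh is a placeholder for your own script's name:
script nagios_install.log   # start recording everything, including typed input
./install_nagios.sh         # placeholder: your interactive installer
exit                        # or Ctrl-D; ends the recording session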