Linux: storing top output to a file

I wanted to execute the top command as shown below and store the output to a file.
xyz.sh
#!/bin/sh
while :
do
top -n 1
sleep 5
done
When I run the script, it executes and produces the output fine:
sh xyz.sh
but when I try to run it in the background, it gets killed:
sh xyz.sh > xyz.log &
Any ideas how I can capture top's output?

The top CLI command behaves differently from what you might be used to with other utilities. It does not work in the typical filter-like way, reading from stdin and writing to stdout; instead it drives the terminal directly, repainting the screen. This means you cannot simply redirect its output, because no output is written to standard out.
Check the man page (which you should always do...). You will notice the -b (batch mode) flag, which changes that behaviour and lets you do exactly what you are trying to do.
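For example, a minimal sketch of the script adjusted to use batch mode (unchanged apart from the -b flag):
#!/bin/sh
# -b runs top in batch mode: plain text to stdout instead of terminal control
while :
do
top -b -n 1
sleep 5
done
Run it in the background exactly as before: sh xyz.sh > xyz.log &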

Related

CLI colors disappear when piping into a text file

Various bash commands I use -- fancy diffs, build scripts, etc. -- produce lots of color output.
When I redirect this output to a file and then cat or less the file later, the colorization is gone -- presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
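A minimal sketch of such a session (my-session.log is an illustrative name; omit it to get the default typescript):
$ script my-session.log
... run your colorized commands here; their raw output, escape codes included, is captured ...
$ exit
The capture ends when the shell started by script exits.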
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, you need a tool that leaves them intact to see them. less has a -r flag to show file data in "raw" mode, which displays color codes. Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping and trimming than raw mode, because less can tell which things are control codes and which are characters actually going to the screen.
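Putting the two together, a short sketch with GNU ls and less (listing.txt is an illustrative name):
ls --color=always > listing.txt
less -R listing.txt
The first line forces color codes even though stdout is a file; the second renders the stored codes as real colors.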
Inspired by the other answers, I started using script. I had to use -c to get it working, though; none of the other approaches, including tee and the various script examples, worked for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting a shell command during the test with Python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed because otherwise the output arrives in big chunks and is hard to observe live
--quiet suppresses script's own output
-c, --command directly provides the command to execute; piping from my command into script did not work for me (no colors)
--return makes script propagate the exit code of my command, so I know whether my command failed
I found that using script to preserve colors when piping to less doesn't really work (less gets all messed up and, on exit, bash is all messed up) because less is interactive. script seems to really mess up input coming from stdin even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they realize their output is not a TTY (i.e. when you redirect them into another program). You can force some of them to emit color anyway, and tell the pager to render it, for example with less -R.
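For example, with GNU grep (a sketch; the pattern and file name are illustrative):
grep --color=always 'error' build.log | less -R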
This question over on Super User helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's running in a terminal.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
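A sketch of that approach (my-command is a placeholder for whatever you are redirecting):
unbuffer my-command | tee output.log
unbuffer runs the command on a pseudo-terminal, so it keeps emitting colors and flushing lines even though the real destination is a pipe.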
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time), just send the output of tee to /dev/null:
command | tee filename > /dev/null

Redirect output from command to terminal and a file without using tee

As the title says, I want to log the output of a command to a file without changing the terminal behaviour. I still want to see the output, and importantly, I still need to be able to provide input.
The input requirement is why I specifically cannot use tee. The application I am working with doesn't handle input properly when using tee. (No, I cannot modify the application to fix this). I'm hoping a more fundamental approach with '>' redirection gets around this issue.
Theoretically, this should work exactly as I want, but, like I said, it does not.
command | tee -a foo.log
Also notice I added the -a flag. It's not strictly required, since I could certainly work around its absence, but it'd be nice to have that feature as well.
Use script.
% script -c vi vi.log
Script started, output log file is 'vi.log'.
... vi runs fully interactively; saving the file and quitting returns:
Script done.
% ls -l vi.log
-rw-r--r-- 1 risner risner 889 Sep 13 15:16 vi.log
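As a side note on the -a wish from the question: util-linux script also has an -a (append) flag, so a sketch like this should keep adding to an existing log (make is just an example command):
% script -a -c make build.log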

Linux server: How do I use nohup and make sure the job is working?

What I know and what I've tried: I have a script in R (called GAM.R) that I want to run in the background that outputs .rdata, .pdf, and .jpg files. Running this from the command line is relatively simple:
$ Rscript GAM.R
However, this code takes a very long time to run so I would love to send it to the background and let it work even after I have logged out and turned the computer off. I understand this is pretty easy, as well, and my code would look like this:
$ nohup Rscript GAM.R >/dev/null 2>&1 &
I used this to see if it was working:
$ fg
nohup Rscript GAM.R > /dev/null 2>&1
The problem: I don't know how to check whether the code is working (is there a way I can see its progress?), and I don't know where the outputs are going. I can see the progress and output with the first command, so I must not be too far off. The second command's outputs don't seem to go where the first command's outputs went.
Your command line is diverting all output to /dev/null, aka the bit bucket.
Consider diverting it to a temporary file:
$ nohup Rscript GAM.R >/tmp/GAM.R.output 2>&1 &
Then you can tail /tmp/GAM.R.output to see the results; by default it shows the last 10 lines of the file. You can use tail -f to show the end of the file plus new output in real time.
Note that files under /tmp/ are not guaranteed to survive a reboot. You can put the file somewhere else (like ~/GAM.R.output) if you need to be sure.
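A minimal sketch of that workflow using the home directory:
$ nohup Rscript GAM.R > ~/GAM.R.output 2>&1 &
$ tail -f ~/GAM.R.output
tail -f prints the last 10 lines, then keeps following new output until you interrupt it.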
Note, however, that if you turn your computer off, all processing gets aborted. For this to work, your machine must keep running and not go to sleep or shut down.
What you are doing: with > you redirect your script's output to /dev/null, and with 2>&1 you redirect the error output to the same place. Finally, nohup runs your process so that it ignores the hangup signal (SIGHUP) and survives the terminal closing.
To sum up, you are creating a process and redirecting its output and error output to the null device /dev/null, which discards everything written to it.
To answer your question, I suggest you redirect your output to a file you can access as a normal user, not as the superuser. Then, to make sure everything is OK, you can inspect that file.
For example you can do :
nohup Rscript GAM.R >/home/username/documents/output_file 2>&1 &
and then to see the file from a terminal you can do:
cat /home/username/documents/output_file
Lastly, I don't think your program will keep running if you turn off your PC, and I don't think there is a way to do that.
If you want to run your program in the background and access the output of the program you can easily do that by writing
exec 3< <(Rscript GAM.R)
And then when you wish to check the output of the program run
cat <&3 # or you can use 'cat /dev/fd/3'
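One small follow-up to this approach: descriptor 3 stays open in your shell until you close it explicitly:
exec 3<&-   # close file descriptor 3 once you are done reading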
Excellent! Thanks everyone for your helpful answers, particularly @Greg Tarsa. Ultimately I needed to use:
$ nohup Rscript GAM.R >/usr/emily/gams/2017_03_14 2>&1 &
The above runs the script and saves the screen output to /usr/emily/gams (in a file called "2017_03_14", created by the command, not a folder as I had originally thought). This also writes my .rdata, .pdf, and .jpg output files from the script to /usr/emily.
Then I can see progress and running programs using:
$ tail -f 2017_03_14 #shows the last 10 lines, then follows the program's progress
$ ps #shows your running projects
$ ps -fu emily #see running projects regardless of session, where username==emily
In the spirit of completeness, I can also note here that to cancel a process, you can use:
$ kill -HUP processid #https://kb.iu.edu/d/adqw

Run two shell scripts in parallel and capture their output

I want to have a shell script which configures several things and then calls two other shell scripts. I want those two scripts to run in parallel, and I want to be able to get and print their live output.
Here is my first script which calls the other two
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh
$path/instance1_commands.sh
These two processes deploy two different applications and each takes around 5 minutes, so I want to run them in parallel and also see their live output so I know where they are with the deployment tasks. Is this possible?
Running both scripts in parallel can look like this:
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh >instance2.out 2>&1 &
$path/instance1_commands.sh >instance1.out 2>&1 &
wait
Notes:
wait pauses until the children, instance1 and instance2, finish
2>&1 on each line redirects error messages to the relevant output file
& at the end of a line causes the main script to continue running after forking, thereby producing a child that is executing that line of the script concurrently with the rest of the main script
each script should send its output to a separate file. Sending both to the same file will be visually messy and impossible to sort out when the instances generate similar output messages.
you may attempt to read the output files while the scripts are running with any reader, e.g. less instance1.out; however, the output may be stuck in a buffer and not up to date. To fix that, the programs would have to open stdout in line-buffered or unbuffered mode. It is also up to you to use -f (with tail) or to re-read the file to refresh the display. A sketch of one buffering workaround follows this list.
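That sketch uses stdbuf from GNU coreutils (it only helps when the child programs buffer through C stdio, so treat that as an assumption to verify):
#!/bin/bash
#CONFIGURE SOME STUFF
stdbuf -oL $path/instance2_commands.sh >instance2.out 2>&1 &
stdbuf -oL $path/instance1_commands.sh >instance1.out 2>&1 &
wait
-oL requests line-buffered stdout, so tail -f on the .out files stays closer to real time.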
Example D from an article on Apache Spark and parallel processing on my blog provides a similar shell script for calculating sums of a series for Pi on all cores, given a C program for calculating the sum on one core. This is a bit beyond the scope of the question, but I mention it in case you'd like to see a deeper example.
It is very possible; change your script to look like this:
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh >> script.log &
$path/instance1_commands.sh >> script.log &
They will both append to the same file, and you can watch that file by running:
tail -f script.log
If you like, you can output to two different files instead: just change each line to append (>>) to a different file name.
This is how I ended up writing it, using Paul's instructions.
source $path/instance2_commands.sh >instance2.out 2>&1 &
source $path/instance1_commands.sh >instance1.out 2>&1 &
tail -q -f instance1.out -f instance2.out --pid $!
wait
sudo rm instance1.out
sudo rm instance2.out
The logs from my two processes were different enough to tell apart, so I didn't mind that they weren't kept separate; that is why I put them all in one stream.

How to show full output on the Linux shell?

I have a program that runs and shows a GUI window. It also prints a lot of things to the shell. I need to view the first thing printed and the last thing printed. The problem is that when the program terminates, the output printed when it began has scrolled out of the window's buffer, so output printed during the run is now at the top and I can't view the first thing printed.
I also tried doing > out.txt, but the problem is that the file only gets flushed and becomes readable when I manually close the GUI window. And when the output goes to a file, nothing is printed on the screen, so I have no way to know whether the program has finished. I can't modify any of the code either.
Is there a way I can see the whole list of text printed on the shell?
Thanks
You can just use tee command to get output/error in a file as well on terminal:
your-command |& tee out.log
Just keep in mind that output sent through a pipe is typically block-buffered (often in 4 KiB chunks) rather than flushed line by line.
When the output of a program goes to your terminal window, the program generally flushes its output after each newline. This is why you see the output interactively.
When you redirect the output of the program to out.txt, it only flushes its output when its internal buffer is full, which is probably after every 8KiB of output. This is why you don't see anything in the file right away, and you don't see the last things printed by the program until it exits (and flushes its last, partially-full buffer).
You can trick a program into thinking it's sending its output to a terminal using the script command:
script -q -f -c myprogram out.txt
This script command runs myprogram connected to a newly-allocated “pseudo-terminal” (or pty for short). This tricks myprogram into thinking it's talking to a terminal, so it flushes its output on every newline. The script command copies myprogram's output to your terminal window and to the file out.txt.
Note that script will write a header line to out.txt. I can't find a way to disable that on my test Linux system.
In the example above, I assumed your program takes no arguments. If it does, you either need to put the program and arguments in quotes:
script -q -f -c 'myprogram arg1 arg2 arg3' out.txt
Or put the program command line in a shell script and pass that shell script to the script command.
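For instance (wrapper.sh and myprogram are placeholder names):
#!/bin/sh
# wrapper.sh: wraps the real command so it can be passed to script as one word
myprogram arg1 arg2 arg3
Then make it executable and pass it to script:
chmod +x wrapper.sh
script -q -f -c ./wrapper.sh out.txt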
