Different behaviour in bash when printing output from command compared to hardcoded string - linux

I am trying to use tmux and simple bash scripts to get a very basic statusbar in my terminals - so the windows are very small (1 or 2 lines in height and potentially less than 10 characters in width).
The command that's causing a strange behaviour is:
echo "$(top -bn 1 | awk '/^%Cpu/{print $2}')%"
which is basically just supposed to print out something like 1.8%.
In a normal sized terminal (with or without tmux) it works fine but when I shrink down the terminal, at some point it starts to cut off the number and only displays %. This might not seem too strange but when I put some text around this command, like
echo "ab$(top -bn 1 | awk '/^%Cpu/{print $2}'cd"
I get abcd as output when shrinking the terminal. So it's only affecting the part withing $(...) that gets cut away.
Also when I print two lines:
echo "$(top -bn 1 | awk '/^%Cpu/{print $2}'%"
echo "20.5%"
it will print 20.5% fine, even breaking it up over several lines like
2
0
.
5
%
rather than just not printing parts of it. The first line however only shows % without any number.
Any idea what is causing this and how to get the numbers to show?

By default, top checks the terminal size and adjusts its output to fit, even in batch mode when the output is being piped rather than written to the terminal. You can use the -w option to override this.
echo "$(top -w 50 -bn 1 | awk '/^%Cpu/{print $2}')%"

Related

Can you use 'less' or 'more' to output one page worth of text?

So in Linux, less is used to read files page by page for better readability. I was wondering if you can do something like less file.txt > output.txt to get one page worth of file.txt and write it to output.txt.
Apparently this does not work: output.txt is exactly the same as the original file. I'm wondering why this is the case, and if there are other work-arounds. Thank you!
You can use the split command.
split -l 100 -d -a 3 input output
This will split the input file every 100 lines (-l 100) and use numeric suffixes (-d) that are three digits long (-a 3) in the output file names, something like: output000, output001, output002.
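For example, to get chunks of one terminal page each instead of a fixed 100 lines (a sketch; the page_ output prefix is arbitrary):
split -l "$(tput lines)" -d -a 3 file.txt page_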
You can use head to get a specific number of lines, and tput lines to see how many lines there are on your current terminal.
Here's a script that fetches a pageful, or the standard 25 lines if no terminal is available:
#!/bin/bash
lines=$(tput lines) || lines=25
head -n "$lines" file.txt > output.txt
We can use head and tail to get the first or last n lines of a file:
tail -n 20 /var/log/messages
head -n 10 src/main.h

How to make less paginate only when the input is larger than the screen size? [duplicate]

If I want to clear the screen and run less if the output of a program exceeds the number of lines on a screen, I can do something like:
mypgm | tee mypgm.out
LEN=$(wc -l mypgm.out | cut -f1 -d' ')
if [[ $LEN -ge $LINES ]]; then
clear
less -R mypgm.out
fi
It works just fine as long as none of the lines in the output wrap around, visually taking up more than a single line on the screen.
Is there a simple way to determine how many lines of the screen the output occupies visually, or do I have to loop over the output file, computing the number of "lines" a long line needs to display by using $COLUMNS?
Seems like you're actually looking for something like:
mycmd | less -F
From man less:
-F or --quit-if-one-screen
Causes less to automatically exit if the entire file can be displayed on the first screen.
You can make it a default behavior with:
export LESS='-F'
in your .bashrc.
The -F option is nicely combined with -X, which keeps less from clearing the screen, so short output stays visible after it exits (you can also make both the default with LESS='-FX').
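So for this use case, piping through less with both options (using the mycmd placeholder from above) is usually all that's needed:
mycmd | less -FX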
I found fold, which solves the problem:
fold -w "$COLUMNS" mypgm.out | wc -l
(no need for the 'cut' since wc is running against a pipe)
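Plugged back into the question's own script, the visual-line check would look something like this (a sketch; it still relies on $COLUMNS and $LINES being set by the interactive shell):
mypgm | tee mypgm.out
LEN=$(fold -w "$COLUMNS" mypgm.out | wc -l)
if [[ $LEN -ge $LINES ]]; then
    clear
    less -R mypgm.out
fi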

Continuous grep, output at same spot on console

I use
tail -f file | grep pattern
all the time for continuous grep.
However, is there a way I can make grep output its pattern at the same spot, say at the top of the screen? so that the screen doesn't scroll all the time?
My case is something like this: tail -f log_file | grep Status -A 2 will show the current status and what changed it to that status. The problem is the screen scrolls and it becomes annoying. I'd rather have the output stuck on the first 3 lines in the screen.
Thank you!
You could use the watch command, which repeatedly executes the same command while keeping the output at the same position on the screen. The process might eat some more CPU or memory though:
watch "tail file | grep pattern"
by default watch executes that command every 2 seconds. You can adjust up to 0.1 seconds using:
watch -n 0.1
NOTE
As noted by @etanReisner: this is not exactly the same as tail -f. tail -f reacts immediately when something is added to your logfile, while watch will only notice the change when it next executes, i.e. every 2 (or 0.1) seconds.
Assuming you are using a vt100 compatible emulator...
This command will tail a file, pipe it into grep, read it a line at a time and then display it in reverse on the top line of the screen:
# save cursor, move to the top-left corner, reverse video, clear to end of line
TOSL=$(tput sc; tput cup 0 0; tput rev; tput el)
# reset attributes and restore the saved cursor position
FROMSL=$(tput sgr0; tput rc)
tail -f file | grep --line-buffered pattern | while IFS= read -r line
do
    echo -n "$TOSL${line}$FROMSL"
done
It assumes your output appears a line at a time. If you want more than one line, you can read more than a line, but you have to decide how you want to buffer the output. You could also use the csr terminfo command to set up an entire separate scrolling region instead of just having one line.
Here is the scrolling region version with a ten line status area at the top:
# save cursor, restrict scrolling to the top lines 0-10, move there, reverse video, clear line
TOSL=$(tput sc; tput csr 0 10; tput cup 10 0; tput rev; tput el)
# reset attributes, restore cursor, reset the scrolling region, restore cursor again
FROMSL=$(tput sgr0; tput rc; tput csr 10 50; tput rc)
tail -f file | grep --line-buffered pattern | while IFS= read -r line
do
    echo -n "$TOSL${line}
$FROMSL"
done
Note that your display may get corrupted from time to time, since output from your main shell and the background task can get mixed up.
Simply replace the newlines with carriage returns.
tail -f file | grep --line-buffered whatever | tr '\012' '\015'
The line buffering is to avoid jumpy output; see http://mywiki.wooledge.org/BashFAQ/009
This is quick and dirty. As noted in comments, this will leave the previous contents of the line underneath, so a shorter line will not completely overlay a longer line. You could add some control codes to address that, but then you might as well use Curses for the formatting too, like in rghome's answer.
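One way to handle that (a sketch, using tput el rather than a hard-coded escape sequence) is to clear the rest of the line on every update:
el=$(tput el)   # terminal's clear-to-end-of-line sequence
tail -f file | grep --line-buffered whatever | while IFS= read -r line; do
    printf '\r%s%s' "$el" "$line"
done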

How can you read the most recent line from the linux program screen?

I use screen to run a minecraft server .jar, and I would like to write a bash script to see if the most recent line has changed every five minutes or so. If it has, then the script would start from the beginning and make the check again in another five minutes. If not, it should kill the java process.
How would I go about getting the last line of text from a screen via a bash script?
If I understand correctly, you can redirect the output of your program to a file with the > operator and work on that file.
Try running:
ls -l > myoutput.txt
and open the file created.
You want to use the tail command. tail -n 1 will give you the last line of the file or redirected standard output, while tail -f will keep the tail program going until you cancel it yourself.
For example:
echo -e "Jello\nPudding\nSkittles" | tail -n 1 | if grep -q Skittles ; then echo yes; fi
The first section simply prints three lines of text:
Jello
Pudding
Skittles
The tail -n 1 finds the last line of text ("Skittles") and passes that to the next section.
grep -q simply returns TRUE if your pattern was found or FALSE if not, without actually dumping or outputting anything to screen.
So the if grep -q Skittles section will check the result of that grep -q Skittles pattern and, if it found Skittles, prints 'yes' to the screen. If not, nothing gets printed (try replacing Skittles with Pudding, and even though it was in the original input, it never made it out the other end of the tail -n 1 call).
Maybe you can use that logic and output your .jar to standard output, then search that output every 5 minutes?
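A minimal sketch of that idea, assuming the server was started with its output redirected (the file names, the pkill pattern and the interval are placeholders, not from the question):
#!/bin/bash
# assumes the server was launched with: java -jar server.jar > server.log 2>&1 &
prev=""
while true; do
    last=$(tail -n 1 server.log)
    if [ -n "$prev" ] && [ "$last" = "$prev" ]; then
        pkill -f 'java -jar server.jar'   # last line unchanged: kill the java process
        break
    fi
    prev=$last
    sleep 300   # check again in five minutes
done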

"tail -f" alternate which doesn't scroll the terminal window

I want to check a file at continuous intervals for contents which keep changing. "tail -f" doesn't suffice as the file doesn't grow in size.
I could use a simple while loop in bash to the same effect:
while [ 1 ]; do cat /proc/acpi/battery/BAT1/state ; sleep 10; done
It works, although it has the unwanted effect of scrolling my terminal window.
So now I'm wondering, is there a linux/shell command that would display the output of this file without scrolling the terminal?
watch -n 10 cat /proc/acpi/battery/BAT1/state
You can add the -d flag if you want it to highlight the differences from one iteration to the next.
watch is your friend. It uses curses so it won't scroll your terminal.
Usage: watch [-dhntv] [--differences[=cumulative]] [--help] [--interval=<n>] [--no-title] [--version] <command>
-d, --differences[=cumulative] highlight changes between updates
(cumulative means highlighting is cumulative)
-h, --help print a summary of the options
-n, --interval=<seconds> seconds to wait between updates
-v, --version print the version number
-t, --no-title turns off showing the header
So taking your example it'll be:
watch -n 10 cat /proc/acpi/battery/BAT1/state
Combining several ideas from other answers plus a couple of other tricks, this will output the file without clearing the screen or scrolling (except for the first cycle if the prompt is at the bottom of the screen).
up=$(tput cuu1)$(tput el); while true; do (IFS=$'\n'; a=($(</proc/acpi/battery/BAT1/state)); echo "${a[*]}"; sleep 1; printf "%.0s$up" "${a[@]}"); done
It's obviously something you wouldn't type by hand, so you can make it a function that takes the filename, the number of seconds between updates, starting line and number of lines as arguments.
watchit () {
    local up=$(tput cuu1)$(tput el) IFS=$'\n' lines
    local start=${3:-0} end
    while true
    do
        lines=($(<"$1"))
        end=${4:-${#lines[@]}}
        echo "${lines[*]:$start:$end}"
        sleep ${2:-1}
        # go up and clear each line
        printf "%.0s$up" "${lines[@]:$start:$end}"
    done
}
Run it:
watchit /proc/acpi/battery/BAT1/state .5 0 6
The second argument (seconds between updates) defaults to 1. The third argument (starting line) defaults to 0. The fourth argument (number of lines) defaults to the whole file. If you omit the number of lines and the file grows it may cause scrolling to accommodate the new lines.
Edit: I added an argument to control the frequency of updates.
My favorite, which works in places that don't have watch, is this:
while true; do clear ; cat /proc/acpi/battery/BAT1/state ; sleep 10; done
The canonical (and easiest, and most flexible) answer is watch, as others have said. But if you want to see just the first line of a file, here's an alternative that neither clears nor scrolls the terminal:
while line=`head -n 1 /proc/acpi/battery/BAT1/state` \
&& printf "%s\r" "$line" \
&& sleep 10
do
printf "%s\r" "`echo -n "$line" | sed 's/./ /g'`"
done
echo
The carriage return is the core concept here. It tells the cursor to return to the beginning of the current line, like a newline but without moving to the next line. The printf command is used here because (1) it doesn't automatically add a newline, and (2) it translates \r into a carriage return.
The first printf prints your line. The second one clears it by overwriting it with spaces, so that you don't see garbage if the next line to be printed is shorter.
Note that if the line printed is longer than the width of your terminal, the terminal will scroll anyway.
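If that is a concern, one workaround (a sketch using bash's substring expansion) is to truncate the line to the terminal width before printing it:
cols=$(tput cols)
printf "%s\r" "${line:0:cols}"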
