expect: need to treat expression and print results on screen - linux

I have the following problem: I'm issuing the command free on a busybox host, and I want to check how much memory I have free. If the result is bigger than 10M, I need to print something like "I have more than 30M free", etc. I don't know how to treat the output from expect.
I have this code snippet:
expect "$"
send "df -h\r"
expect "$"
send "uptime\r"
expect "$"
send "free | awk -F ' ' ' FNR == 2 {print \$3}'\r"
expect "$"
How do I treat the output from the free command? That output gives me an integer with the free memory, which I need to test in a condition.

You might want to check whether the Busybox version of free is the same as the GNU one, but on my Fedora system, free's output looks like:
             total       used       free     shared    buffers     cached
Mem:       3094900    2691252     403648          0     442924     983336
-/+ buffers/cache:     1264992    1829908
Swap:      2064380     126268    1938112
To get free memory, this would do it:
free | awk -F ' ' ' FNR == 2 {print \$4}'
To capture Expect output in a tcl variable:
set results $expect_out(buffer)
So the way to do what you want is approximately
send "free | awk -F ' ' ' FNR == 2 {print \$4}'\r"
set freeram $expect_out(buffer)
if { $freeram >= 10000 } {
    puts "You have lots of free memory.\n"
}
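One thing to watch out for: there needs to be another expect between the send and the read of $expect_out(buffer), otherwise you are still looking at the previous match, and the buffer will also contain the echoed command and the next prompt. Below is a minimal sketch that grabs just the number with a regexp, driven from the shell through a heredoc (the spawn sh line, the awk field $4 and the 10240 kB threshold for "10M" are assumptions; adjust them to your busybox build and to however you actually reach the box):
expect <<'EOF'
    spawn sh                            ;# assumption: replace with your ssh/telnet spawn line
    expect "$"
    send "free | awk 'NR == 2 {print \$4}'\r"
    expect -re {\r\n(\d+)\r\n}          ;# the output line that contains only digits
    set freeram $expect_out(1,string)   ;# free memory, assumed to be in kB
    if { $freeram > 10240 } {
        puts "I have more than 10M free ($freeram kB)"
    } else {
        puts "I have less than 10M free ($freeram kB)"
    }
    send "exit\r"
    expect eof
EOF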

Since you're using a Linux variant (remotely), the easiest way to get the amount of free memory over ssh is to not use free. Instead, you can use a different approach entirely:
spawn ssh $user@$theOtherHost cat /proc/meminfo
expect {
    -re {MemTotal:\s+(\d+) kB} {
        set total $expect_out(1,string)
        exp_continue
    }
    -re {MemFree:\s+(\d+) kB} {
        set free $expect_out(1,string)
        exp_continue
    }
    "Password:" {
        send "$thePassword\r"
        exp_continue
    }
    eof {
        # Do nothing; default fall-through is fine
    }
}
puts "usage was $free out of $total (both in kilobytes)"
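If the check can also run on the busybox host itself rather than through Expect, /proc/meminfo can be read directly from the shell; a minimal sketch along the same lines (again, 10240 kB as the "10M" threshold is an assumption):
# Read free memory (in kB) straight from /proc/meminfo and test it against the threshold.
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
if [ "$free_kb" -gt 10240 ]; then
    echo "I have more than 10M free (${free_kb} kB)"
else
    echo "I have less than 10M free (${free_kb} kB)"
fi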

Related

How can I detect a sequence of "hollows" (holes, lines not matching a pattern) bigger than n in a text file?

Case scenario:
$ cat Status.txt
1,connected
2,connected
3,connected
4,connected
5,connected
6,connected
7,disconnected
8,disconnected
9,disconnected
10,disconnected
11,disconnected
12,disconnected
13,disconnected
14,connected
15,connected
16,connected
17,disconnected
18,connected
19,connected
20,connected
21,disconnected
22,disconnected
23,disconnected
24,disconnected
25,disconnected
26,disconnected
27,disconnected
28,disconnected
29,disconnected
30,connected
As can be seen, there are "hollows": runs of lines with the "disconnected" value inside the sequence file.
I want to detect these "holes", but it would be useful if I could set a minimum number n of missing entries in the sequence.
I.e., for n=5 a detectable hole would be the 7...13 part, as there are at least 5 "disconnected" in a row in the sequence. However, the single missing 17 should not be considered detectable in this case. Again, at line 21 we get a valid disconnection.
Something like:
$ detector Status.txt -n 5 --pattern connected
7
21
... that could be interpreted like:
- Missing more than 5 "connected" starting at 7.
- Missing more than 5 "connected" starting at 21.
I need to script this in the Linux shell, so I was thinking about programming some loop, parsing strings and so on, but I feel like this could be done with standard shell tools and somewhat simpler programming. Is there a way?
Even though small programs like csvtool would be a valid solution, more common Linux commands (like grep, cut, awk, sed, wc... etc.) are preferable for me when working with embedded devices.
#!/usr/bin/env bash
last_connected=0
min_hole_size=${1:-5} # default to 5, or take an argument from the command line
while IFS=, read -r num state; do
    if [[ $state = connected ]]; then
        if (( (num-last_connected) > (min_hole_size+1) )); then
            echo "Found a hole running from $((last_connected + 1)) to $((num - 1))"
        fi
        last_connected=$num
    fi
done
# Special case: Need to also handle a hole that's still open at EOF.
if [[ $state != connected ]] && (( num - last_connected > min_hole_size )); then
    echo "Found a hole running from $((last_connected + 1)) to $num"
fi
...emits, given your file on stdin (./detect-holes <in.txt):
Found a hole running from 7 to 13
Found a hole running from 21 to 29
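(For reference, the minimum hole size is the script's first positional argument, per min_hole_size=${1:-5}, so an equivalent explicit invocation would be ./detect-holes 5 < in.txt.)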
See:
BashFAQ #1 - How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
The conditional expression -- the [[ ]] syntax used to make it safe to do string comparisons without quoting expansions.
Arithmetic comparison syntax -- valid in $(( )) in all POSIX-compliant shells; also available without the expansion side effects as (( )) as a bash extension.
This is the perfect use case for awk, since the machinery of line reading, column splitting, and matching is all built in. The only tricky bit is getting the command line argument to your script, but it's not too bad:
#!/usr/bin/env bash
awk -v window="$1" -F, '
    BEGIN { if (window=="") {window = 1} }
    $2=="disconnected" {if (consecutive==0){start=NR}; consecutive++}
    $2!="disconnected" {if (consecutive>window){print start}; consecutive=0}
    END {if (consecutive>window){print start}}'
The window value is supplied as the first command line argument; if it is left out, it defaults to 1, which means "report the start of gaps with at least two consecutive disconnections". It could probably have a better name. You can give it 0 to include single disconnections. Sample output is below. (Note that I added a series of 2 disconnections at the end to test the failure case that Charles mentions.)
njv@organon:~/tmp$ ./tst.sh 0 < status.txt # any number of disconnections
7
17
21
31
njv@organon:~/tmp$ ./tst.sh < status.txt # at least 2 disconnections
7
21
31
njv@organon:~/tmp$ ./tst.sh 8 < status.txt # at least 9 disconnections
21
Awk solution:
detector.awk script:
#!/bin/awk -f
BEGIN { FS="," }
$2 == "disconnected"{
if (f && NR-c==nr) c++;
else { f=1; c++; nr=NR }
}
$2 == "connected"{
if (f) {
if (c > n) {
printf "- Missing more than 5 \042connected\042 starting at %d.\n", nr
}
f=c=0
}
}
Usage:
awk -f detector.awk -v n=5 status.txt
The output:
- Missing more than 5 "connected" starting at 7.
- Missing more than 5 "connected" starting at 21.

merge sorted files, with minimal buffering

I have two log files that are prefixed with a sortable timestamp.
I'd like to see them, in order, while the processes generating the log files are still running. This is a pretty faithful simulation of the situation:
slow() {
    # print stdout at 30bps
    exec pv -qL 30
}
timestamp() {
    # prefix stdin with a sortable timestamp
    exec tai64n
}
# Simulate two slowly-running batch jobs:
seq 000 099 | slow | timestamp > seq.1 &
seq1=$!
seq 100 199 | slow | timestamp > seq.2 &
seq2=$!
# I'd like to see the combined output of those two logs, in timestamp-sorted order
try1() {
    # this shows me the output as soon as it's available,
    # but it's badly interleaved and not necessarily in order
    tail -f seq.1 --pid=$seq1 &
    tail -f seq.2 --pid=$seq2 &
}
try2() {
    # this gives the correct output,
    # but outputs nothing till both jobs have stopped
    sort -sm <(tail -f seq.1 --pid=$seq1) <(tail -f seq.2 --pid=$seq2)
}
try2
wait
A solution using tee (to write the files while standard output still goes to the console) won't work, because tee introduces unwanted latency and doesn't solve the ordering problem. Similarly, I couldn't get it to work with tail -f -s 0.01 (which raises the polling rate to 100/s), or with some kind of call like split --filter='sort -sm' to sort small batches.
I also don't have tai64n, so my test code actually used this functionally identical perl code:
tai64n() {
    perl -MTime::HiRes=time -pe '
        printf "\@4%015x%x%n", split(/\./,time), $c; print 0 x(25-$c) . " "'
}
After failing to solve this with sh and bash, I exercised my standard failover, perl:
slow() {
    # print stdout at 30bps
    pv -qL 30
}
tai64n_and_tee() {
    # prefix stdin with a sortable timestamp and copy to given file
    perl -MTime::HiRes=time -e '
        $_ = shift;
        open(TEE, "> $_") or die $!;
        while (<>) {
            $_ = sprintf("\@4%015x%x%n", split(/\./,time), $c) . 0 x(25-$c) . " $_";
            print TEE $_;
            print $_;
        }
    ' "$1"
}
# Simulate two slowly-running batch jobs:
seq 000 099 | slow | tai64n_and_tee seq.1 &
seq 100 199 | slow | tai64n_and_tee seq.2 &
wait
This was convenient for me because I had already used perl for the timestamp. I failed to do this with perl acting as tai64n and a separate perl call to act as tee, but it might work with the real tai64n.
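As a small follow-up: since every line now starts with a sortable timestamp, the fully ordered combined view can also be rebuilt at any later time with the same stable merge the question already uses:
sort -sm seq.1 seq.2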

Perl anonymous pipe no output

Question
Why is nothing printed when using an anonymous pipe, unless I print the actual data from the pipe?
Example
use strict;
use warnings;
my $child_process_id = 0;
my $vmstat_command = 'vmstat 7|';
$child_process_id = open(VMSTAT, $vmstat_command) || die "Error when executing \"$vmstat_command\": $!";
while (<VMSTAT>) {
    print "hi";
}
close VMSTAT or die "bad command: $! $?";
Appears to hang
use strict;
use warnings;
my $child_process_id = 0;
my $vmstat_command = 'vmstat 7|';
$child_process_id = open(VMSTAT, $vmstat_command) || die "Error when executing \"$vmstat_command\": $!";
while (<VMSTAT>) {
    print "hi" . $_;
    # ^^^ Added this
}
close VMSTAT or die "bad command: $! $?";
Prints
hiprocs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
hi r b swpd free buff cache si so bi bo in cs us sy id wa st
hi 1 0 0 7264836 144200 307076 0 0 0 1 0 14 0 0 100 0 0
etc...
Expected behaviour
Would be to print hi for every line of output of vmstat for the first example.
Versions
perl, v5.10.0
GNU bash, version 3.2.51
Misc
It also appears to hang when using chomp before printing the line (which I thought only removes newlines).
I feel like I'm missing something fundamental about how the pipe is read and processed, but I could not find a similar question. If there is one, mark this as a dupe and I'll have a look at it.
Any further information needed just ask.
Alter
print "hi";
to
print "hi\n";
and it also "works"
The reason it appears to fail is that output is line-buffered by default, so "hi" without a newline just sits in the output buffer.
Setting $| will flush the buffer straight away:
If set to nonzero, forces a flush right away and after every write or print on the currently selected output channel. Default is 0 (regardless of whether the channel is really buffered by the system or not; "$|" tells you only whether you've asked Perl explicitly to flush after each write). STDOUT will typically be line buffered if output is to the terminal and block buffered otherwise. Setting this variable is useful primarily when you are outputting to a pipe or socket, such as when you are running a Perl program under rsh and want to see the output as it's happening. This has no effect on input buffering. See the getc entry in the perlfunc manpage for that. (Mnemonic: when you want your pipes to be piping hot.)
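A quick way to see the same effect from the shell, without editing the script, is to pipe vmstat through a perl loop with autoflush turned on (the interval and count here are arbitrary):
# Each vmstat line should add one more "hi" to the screen as it arrives;
# remove the "$| = 1;" and nothing appears until vmstat exits and the buffer is flushed.
vmstat 1 5 | perl -e '$| = 1; while (<STDIN>) { print "hi" }'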

Bash: wait until CPU usage gets below a threshold

In a bash script I need to wait until CPU usage gets below a threshold.
In other words, I'd need a command wait_until_cpu_low which I would use like this:
# Trigger some background CPU-heavy command
wait_until_cpu_low 40
# Some other commands executed when CPU usage is below 40%
How could I do that?
Edit:
target OS is: Red Hat Enterprise Linux Server release 6.5
I'm considering the average CPU usage (across all cores)
A much more efficient version just calls mpstat and awk once each, and keeps them both running until done; no need to explicitly sleep and restart both processes every second (which, on an embedded platform, could add up to measurable overhead):
wait_until_cpu_low() {
    awk -v target="$1" '
        $13 ~ /^[0-9.]+$/ {
            current = 100 - $13
            if (current <= target) { exit(0); }
        }' < <(LC_ALL=C mpstat 1)
}
I'm using $13 here because that's where idle % is for my version of mpstat; substitute appropriately if yours differs.
This has the extra advantage of doing floating point math correctly, rather than needing to round to integers for shell-native math.
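A hypothetical usage example (some_cpu_heavy_command is a placeholder; check where the idle column sits in your mpstat output first, e.g. with mpstat 1 1 | tail):
some_cpu_heavy_command &    # placeholder for whatever loads the CPU in the background
wait_until_cpu_low 40       # blocks until (100 - idle%) drops to 40 or below
echo "CPU usage is now at or below 40%, continuing"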
wait_for_cpu_usage()
{
    current=$(mpstat 1 1 | awk '$12 ~ /[0-9.]+/ { print int(100 - $12 + 0.5) }')
    while [[ "$current" -ge "$1" ]]; do
        current=$(mpstat 1 1 | awk '$12 ~ /[0-9.]+/ { print int(100 - $12 + 0.5) }')
        sleep 1
    done
}
Note that it requires the sysstat package to be installed.
You might use a function based on the top utility. But note that doing so is not very reliable, because CPU utilization can change rapidly at any time; just because the check succeeded, there is no guarantee that CPU utilization will stay low while the following code runs. You have been warned.
The function:
function wait_for_cpu_usage {
    threshold=$1
    while true; do
        # Get the current CPU usage
        usage=$(top -n1 | awk 'NR==3{print $2}' | tr ',' '.')
        # Compare the current usage against the threshold
        result=$(bc -l <<< "$usage <= $threshold")
        [ "$result" == "1" ] && break
        # Feel free to sleep less than a second (with GNU sleep)
        sleep 1
    done
    return 0
}
# Example call
wait_for_cpu_usage 25
Note that I'm using bc -l for the comparison since top prints the CPU utilization as a float value.
As noted by "Mr. Llama" in a comment above, I've used uptime to write my simple function:
function wait_cpu_low() {
    threshold=$1
    while true; do
        current=$(uptime | awk '{ gsub(/,/, ""); print $10 * 100; }')
        if [ $current -lt $threshold ]; then
            break
        else
            sleep 5
        fi
    done
}
In the awk expression:
$10 gives the 1-minute load average
$11 gives the 5-minute load average
$12 gives the 15-minute load average
And here is a usage example:
wait_cpu_low 20
It waits until the 1-minute load average is below 20% of one CPU core.
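If parsing uptime's shifting columns feels fragile, a hedged alternative (my sketch, not part of the answer above) is to read the 1-minute load average from /proc/loadavg instead; the caveat that a load average is not an instantaneous CPU reading still applies:
wait_loadavg_low() {
    local threshold=$1                      # percent of one core
    local load1
    while :; do
        read -r load1 _ < /proc/loadavg     # first field is the 1-minute load average
        # succeed (and stop waiting) once load * 100 is below the threshold
        awk -v l="$load1" -v t="$threshold" 'BEGIN { exit !(l*100 < t) }' && break
        sleep 5
    done
}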

Linux retrieve monitor names

Situation: I'm using multiple monitors and I want to get their names in bash. Currently I'm using Ubuntu 10.04.
I know about xrandr. From it I can only get statistics data. What I want is to read all the monitor names into an array so I can work with them.
Is there a clear way to do that without cutting names out of some kind of string? A clear way would be reading them from a file. A less clear way would be to pipe xrandr output through some sort of function to cut the names out of it.
Inspired by Beni's answer, this will read the EDID data using xrandr and extract the monitor names according to the EDID specification, with no need for any external tools like parse-edid:
#!/bin/bash
while read -r output hex conn; do
    [[ -z "$conn" ]] && conn=${output%%-*}
    echo "# $output $conn $(xxd -r -p <<< "$hex")"
done < <(xrandr --prop | awk '
    !/^[ \t]/ {
        if (output && hex) print output, hex, conn
        output=$1
        hex=""
    }
    /ConnectorType:/ {conn=$2}
    /[:.]/ && h {
        sub(/.*000000fc00/, "", hex)
        hex = substr(hex, 0, 26) "0a"
        sub(/0a.*/, "", hex)
        h=0
    }
    h {sub(/[ \t]+/, ""); hex = hex $0}
    /EDID.*:/ {h=1}
    END {if (output && hex) print output, hex, conn}
' | sort
)
It uses awk to extract precisely the monitor name and nothing else from the EDID, hence "magic numbers" like 000000fc00, 26 and 0a. Finally it uses xxd to convert from hex to ASCII, printing one monitor name per line.
Based on this solution I made a handy script to switch monitors, which can also be used to simply list monitor info:
$ monitor-switch --list
Connected monitors:
# DFP5 HDMI HT-R391
# DFP7 DVI-I DELL U2412M
$ monitor-switch --list
Connected monitors:
# DisplayPort-1 DisplayPort DELL U2412M
# DisplayPort-3 DisplayPort DELL U2415
# HDMI-A-2 HDMI LG TV
Tested on Ubuntu 16.04 and 18.04. (I know it's late to answer, but this solution is still relevant today.)
$ sudo apt-get install -y hwinfo
...
$ hwinfo --monitor --short
monitor:
SONY TV
AUO LCD Monitor
I have two monitors attached: one is the laptop's built-in display and the other is an external display. As soon as the external monitor is plugged in or out, this command reflects the change; you need to poll continuously. Removing the --short option gives more detailed information.
You can poll the state with the following background job:
$ while true;
> do
> hwinfo --monitor --short;
> sleep 2;
> done >> monitor.log &
The while true loop runs indefinitely. The sleep 2 pauses each iteration of the loop for 2 seconds, and the output of hwinfo --monitor --short is appended to monitor.log. This log file gives you the history of monitor plug-in and plug-out activity.
FYI: I am using a background (daemon) Python script based on the above command (and other similar ones) to detect whether someone is plugging hardware in or out of the systems in the computer lab. If so, I get a notification that someone plugged in or unplugged a monitor, mouse or keyboard, in almost real time!
More info about hwinfo command is here. Its man page is also a good source.
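To get the names into a bash array, as the question asks, the hwinfo output can be collected like this (a sketch; the sed expression just drops the "monitor:" header line and the leading indentation):
mapfile -t monitors < <(hwinfo --monitor --short | sed '1d; s/^[[:space:]]*//')
printf 'Found: %s\n' "${monitors[@]}"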
sudo get-edid didn't work for me. (EDIT: now works on another computer, Lubuntu 14.10; I'd blame BIOS differences but that's a random guess...)
Anyway under X, xrandr --verbose prints the EDID block. Here is a quick and dirty way to extract it and pass to parse-edid:
#!/bin/bash
xrandr --verbose | perl -ne '
    if ((/EDID(_DATA)?:/.../:/) && !/:/) {
        s/^\s+//;
        chomp;
        $hex .= $_;
    } elsif ($hex) {
        # Use "|strings" if you do not have the read-edid package installed
        # and just want to see (or grep) the human-readable parts.
        open FH, "|parse-edid";
        print FH pack("H*", $hex);
        $hex = "";
    }'
If you don't want to parse xrandr output, write a C program using libXrandr that gets only what you want. If all you want to do is to query information, it can be done quickly. Read this document.
If you want to get the real monitor name, an alternative to @dtmilano's solution is to get the EDID property of the monitor using libXrandr and then manually parse it and print (read the EDID specification).
xrandr source code.
I know this is a dirty way, but it gives me the monitor model names even better than sudo get-edid | parse-edid does. It reads the information into arrays and outputs it in a way that can be read like you would read a file. You may modify it according to your needs.
#!/bin/bash
#
#
# get-monitors.sh
#
# Get monitor name and some other properties of connected monitors
# by investigating the output of xrandr command and EDID data
# provided by it.
#
# Copyright (C) 2015,2016 Jarno Suni <8@iki.fi>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. See <http://www.gnu.org/licenses/gpl.html>
set -o nounset
set -o errexit
# EDID format:
# http://en.wikipedia.org/wiki/Extended_Display_Identification_Data#EDID_1.3_data_format
# http://read.pudn.com/downloads110/ebook/456020/E-EDID%20Standard.pdf
declare -r us=';' # separator string;
# If EDID has more than one field with same tag, concatenate them,
# but add this string in between.
declare -r fs=$'\x1f' # Field separator for internal use;
# must be a character that does not occur in data fields.
declare -r invalid_edid_tag='--bad EDID--'
# If base EDID is invalid, don't try to extract information from it,
# but assign this string to the fields.
# Get information in these arrays:
declare -a outs # Output names
declare -a conns # Connection type names (if available)
declare -a names # Monitor names (but empty for some laptop displays)
declare -a datas # Extra data; may include laptop display brand name
# and model name
declare -i no # number of connected outputs (to be counted)
# xrandr command to use as a source of information:
declare -r xrandr_output_cmd="xrandr --prop"
hex_to_ascii() {
echo -n "$1" | xxd -r -p
}
ascii_to_hex() {
echo -n "$1" | xxd -p
}
get_info() {
no=0
declare OIFS=$IFS;
IFS=$fs
while read -r output conn hexn hexd; do
outs[no]="${output}"
conns[no]="${conn}"
names[no]="$(hex_to_ascii "$hexn")"
datas[no]="$(hex_to_ascii "$hexd")"
(( ++no ))
done < <(eval $xrandr_output_cmd | gawk -v gfs="$fs" '
function print_fields() {
print output, conn, hexn, hexd
conn=""; hexn=""; hexd=""
}
function append_hex_field(src_hex,position,app_hex, n) {
n=substr(src_hex,position+10,26)
sub(/0a.*/, "", n)
# EDID specification says field ends by 0x0a
# (\n), if it is shorter than 13 bytes.
#sub(/(20)+$/, "", n)
# strip whitespace at the end of ascii string
if (n && app_hex) return app_hex sp n
else return app_hex n
}
function get_hex_edid( hex) {
getline
while (/^[ \t]*[[:xdigit:]]+$/) {
sub(/[ \t]*/, "")
hex = hex $0
getline
}
return hex
}
function valid_edid(hex, a, sum) {
if (length(hex)<256) return 0
for ( a=1; a<=256; a+=2 ) {
# this requires gawk
sum+=strtonum("0x" substr(hex,a,2))
# this requires --non-decimal-data for gawk:
#sum+=sprintf("%d", "0x" substr(hex,a,2))
}
if (sum % 256) return 0
return 1
}
BEGIN {
OFS=gfs
}
/[^[:blank:]]+ connected/ {
if (unprinted) print_fields()
unprinted=1
output=$1
}
/[^[:blank:]]+ disconnected/ {
if (unprinted) print_fields()
unprinted=0
}
/^[[:blank:]]*EDID.*:/ {
hex=get_hex_edid()
if (valid_edid(hex)) {
for ( c=109; c<=217; c+=36 ) {
switch (substr(hex,c,10)) {
case "000000fc00" :
hexn=append_hex_field(hex,c,hexn)
break
case "000000fe00" :
hexd=append_hex_field(hex,c,hexd)
break
}
}
} else {
# set special value to denote invalid EDID
hexn=iet; hexd=iet
}
}
/ConnectorType:/ {
conn=$2
}
END {
if (unprinted) print_fields()
}' sp=$(ascii_to_hex $us) iet=$(ascii_to_hex $invalid_edid_tag))
IFS="$OIFS"
}
get_info
# print the columns of each display quoted in one row
for (( i=0; i<$no; i++ )); do
echo "'${outs[i]}' '${conns[i]}' '${names[i]}' '${datas[i]}'"
done
You may try ddcprobe and/or get-edid
$ sudo apt-get install xresprobe read-edid
$ sudo ddcprobe
$ sudo get-edid
You're looking for EDID information, which is passed along an I²C bus and interpreted by your video driver. As dtmilano says, get-edid or ddcprobe should work.
You can also get this information by logging your X start:
startx -- -logverbose 6
Years ago, I used a package called read-edid to gather this information.
The read-edid package may be available in Ubuntu already, according to this blog post from 2009.
