Bash Script to allow Nagios to report ping between two other Linux machines

I'm looking for alternatives for working out the ping between two machines (mA and mB) and reporting this back to Nagios (on mC).
My current thought is to write a bash script that pings the machines in a cron job, outputs the data to a file, and then have another bash script that Nagios can use to read that file. This doesn't feel like the best/right way to do it, though.
Here's the script I plan to run in the cron job:
#!/bin/bash
if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ] || [ -z "$4" ]
then
    echo "$0: usage: $0 file ip pingcount deadline"
    exit 126
else
    FILE=$1
    IP=$2
    PCOUNT=$3
    DLINE=$4
    while read -r line
    do
        if [[ $line == rtt* ]]
        then
            # replace forward slashes with underscores
            line=${line////_}
            # replace spaces with underscores
            line=${line// /_}
            # get the 8th item when splitting the string on underscores (the avg RTT)
            #echo "$line" | cut -d'_' -f 8 >> "$FILE" # append
            #echo "$line" | cut -d'_' -f 8 > "$FILE"  # overwrite
            echo "$line" | cut -d'_' -f 8
        fi
    done < <(ping "$IP" -c "$PCOUNT" -q -w "$DLINE") # -q summary only / -w deadline / -c ping count
fi
I thought about using traceroute, but I think that would produce a slower ping. Is there another way to achieve what I want?
Note: I know Nagios can directly ping a machine, but this isn't what I want to do and won't tell me what I want. Also, this is my second script ever, so it's probably rubbish. Also, what alternative would I have if ICMP were blocked?
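For illustration, the "other bash script that Nagios can use" mentioned above could be a small plugin along these lines (a sketch; the file path and the 100/250 ms thresholds are assumptions, and the exit codes follow the standard Nagios plugin convention of 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN):
#!/bin/bash
# Hypothetical reader plugin: report the last avg RTT (in ms) the cron job wrote.
FILE=${1:-/var/tmp/ping_mA_mB.txt}  # assumed location of the cron job's output
WARN=${2:-100}                      # warning threshold in ms (assumption)
CRIT=${3:-250}                      # critical threshold in ms (assumption)
RTT=$(tail -n 1 "$FILE" 2>/dev/null)
if [ -z "$RTT" ]; then
    echo "UNKNOWN - no data in $FILE"
    exit 3
fi
# Strip any decimal part so we can compare as integers.
if [ "${RTT%.*}" -ge "$CRIT" ]; then
    echo "CRITICAL - avg ping ${RTT}ms"
    exit 2
elif [ "${RTT%.*}" -ge "$WARN" ]; then
    echo "WARNING - avg ping ${RTT}ms"
    exit 1
else
    echo "OK - avg ping ${RTT}ms"
    exit 0
fi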

Have you looked at NRPE and check_ping? This would allow the Nagios machine (mC) to ask mA to ping mB, and then mA would report the results to mC. You would need to install and configure NRPE and the nagios-plugins on mA for this to work.
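For illustration, the relevant pieces might look roughly like this (a sketch; plugin paths and thresholds are assumptions and vary by distribution):
# On mA, in nrpe.cfg: a command that pings mB (-w/-c are warn/crit as RTA,packet-loss%)
command[check_ping_mB]=/usr/lib/nagios/plugins/check_ping -H mB -w 100.0,20% -c 500.0,60% -p 5
# On mC, the Nagios service definition calls that command over NRPE:
/usr/lib/nagios/plugins/check_nrpe -H mA -c check_ping_mB
If ICMP is blocked, the same NRPE arrangement works with a TCP-based check such as check_tcp against a port mB has open.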

Related

How to develop a Condition to close program only when log file has been updated in Bash Script [duplicate]

I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You may try the entr tool to run arbitrary commands when files change. Example for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories use -d, but you have to use it in a loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js" # might want to change this
function block_for_change {
    inotifywait --recursive \
        --event modify,move,create,delete \
        "$DIRECTORY_TO_OBSERVE"
}
BUILD_SCRIPT=build.sh # might want to change this too
function build {
    bash "$BUILD_SCRIPT"
}
build
while block_for_change; do
    build
done
Uses inotify-tools. Check inotifywait man page for how to customize what triggers the build.
Use inotify-tools.
The linked Github page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
    --timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
    -e close_write /tmp/test |
while read -r date time dir file; do
    changed_abs=${dir}${file}
    changed_rel=${changed_abs#"$cwd"/}
    rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
        username@example.com:/backup/root/dir && \
    echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to get the change time (ctime) of a file and runs a command whenever that time changes.
#!/bin/bash
while true
do
    ATIME=$(stat -c %Z /path/to/the/file.txt)
    if [[ "$ATIME" != "$LTIME" ]]
    then
        echo "RUN COMMAND"
        LTIME=$ATIME
    fi
    sleep 5
done
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try to earn hacker XP by judicious application of tail -f.
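For instance, one shape that idea could take (a sketch, assuming the watched file is append-only, as log files are):
tail -f /path/to/logfile | while read -r line; do
    echo "new content: $line"   # replace with the command you want to run
done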
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1"        # Name of file
command="${*:2}" # Command to run on change (takes rest of line)
t1="$(ls --full-time "$file" | awk '{ print $7 }')" # Get latest save time
while true
do
    t2="$(ls --full-time "$file" | awk '{ print $7 }')" # Compare to new save time
    if [ "$t1" != "$t2" ]; then t1="$t2"; $command; fi  # If different, run command
    sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: the above was tested on Ubuntu 12.04; for Mac OS, change the ls lines to:
"$(ls -lT $file | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
    if [ -z "$1" -o -z "$2" ]; then
        echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
    elif ! [ -r "$1" ]; then
        echo "Can't react to $1, permission denied"
    else
        TARGET="$1"; shift
        ACTION="$@"
        while sleep 1; do
            ATIME=$(stat -c %Z "$TARGET")
            if [[ "$ATIME" != "${LTIME:-}" ]]; then
                LTIME=$ATIME
                $ACTION
            fi
        done
    fi
}
Quick solution for fish shell users who want to track a single file:
while true
    set old_hash $hash
    set hash (md5sum file_to_watch)
    if [ "$hash" != "$old_hash" ]
        command_to_execute
    end
    sleep 1
end
Replace md5sum with md5 if on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do what you want.
Here is a typical example:
inotifywait -m /path -e create -e moved_to -e close_write | # -m is --monitor, -e is --event
while read path action file; do
    if [[ "$file" =~ .*rst$ ]]; then # if the suffix is '.rst'
        echo ${path}${file} ': '${action} # execute your command
        echo 'make html'
        make html
    fi
done
Suppose you want to run rake test every time you modify any ruby file ("*.rb") in app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0
while true; do
    t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1)
    if [ "$t_ref" != "$t_curr" ]; then
        t_ref=$t_curr
        rake test
    fi
    sleep 1
done
Benefits
You can run any command or script when the file changes.
It works across filesystems and virtual machines (e.g., shared folders on VirtualBox with Vagrant), so you can use a text editor on your MacBook and run the tests on Ubuntu in a VM, for example.
Warning
The -printf option works well on Ubuntu, but does not work on macOS.
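As a workaround on macOS, BSD stat can supply the modification time instead (a sketch; stat -f "%m" prints the mtime as an epoch timestamp):
t_ref=0
while true; do
    t_curr=$(find app/ test/ -type f -name "*.rb" -exec stat -f "%m" {} + | sort -rn | head -n1)
    if [ "$t_ref" != "$t_curr" ]; then
        t_ref=$t_curr
        rake test
    fi
    sleep 1
done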

clear my script logs every 10 second

I have a script named run.sh.
This is my script code:
#!/usr/bin/env bash
install() {
    sudo apt-get update
    sudo apt-get upgrade
}
if [ "$1" = "install" ]; then
    install
else
    if [ ! -f ./tg/tgcli ]; then
        echo "tg not found"
        echo "Run $0 install"
        exit 1
    fi
    #sudo service redis-server restart
    #./tg/tgcli -s ./bot/bot.lua -l 1 -E "$@"
    ./tg/tgcli -s ./bot/bot.lua "$@"
fi
When I run this script, it gives me output like this every second:
[09:54] 2014 Hello
[09:55] 2014 Hi
[09:57] 2014 How Are you ?
and many more like this (thousands per hour!),
and my server gets slow within 5 hours.
I checked the print commands in bot.lua but there is no way to remove the printing.
Can you add some code to clear my script's logs every 10 seconds?
Thanks a lot.
My script's output isn't saved anywhere; it just shows in the terminal.
I want something like the clear command on the Linux terminal that clears my script's logs every 10 or 5 minutes.
After 5 days of the script running I can (sometimes can't) log into my server; the server gets very slow and I must wait 3 to 5 minutes to log in, and amazingly, after I log in, the server gets fast again!
I also forgot to say that I use byobu/screen to run my scripts, and I think screen is what slows my server down.
I don't think that something as simple as this would cause your server to slow down, but you can add a check to your script to calculate the size or line count of your log file every time it runs.
This function assumes you are redirecting your output to a log file. Set the variables to whatever makes the most sense.
log_check() {
    line_count=$(wc -l < "$log_file")
    size_check=$(du -ax "$log_file" | awk '{print $1}')
    max_file_size="1500"
    max_file_length="1000"
    if [[ $line_count -ge $max_file_length || $size_check -ge $max_file_size ]]; then
        echo "" > "$log_file"
    fi
}
I would also recommend using [[ ]] over [ ] since this is a bash script; as long as you don't need it to be POSIX compliant and only plan on using it with bash, [[ ]] is generally better than [ ].
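For example, [[ ]] does not word-split unquoted variables and supports pattern matching (a quick illustration):
f="name with spaces"
[ -e $f ]            # breaks: word splitting produces "too many arguments"
[[ -e $f ]]          # fine: no word splitting inside [[ ]]
[[ $f == name* ]]    # pattern matching, which [ ] cannot do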
EDIT:
Since you are logging output to the terminal and not a file you can literally use the clear command in your script.
Try this out and see how the functionality works
for i in {1..20}; do
    echo $i
    if (( i == 10 )); then
        clear
    fi
done
I'm assuming your code has a loop somewhere; if not, it will be a bit more complex to clear the terminal session. I'm not really sure what part of your code is actually printing to stdout; I'm guessing it's this piece here:
./tg/tgcli -s ./bot/bot.lua "$@"
You could try something like this, which will background your initial process and then run clear every 60 seconds to clear the terminal window. Is there any reason you're not writing the output to a log file? That alone could solve some of your issues as well.
#!/bin/bash
./tg/tgcli -s ./bot/bot.lua "$@" &
pid="$!"
check_pid() {
    ps -ef | grep "$pid" | grep -v 'grep' &>/dev/null
}
cnt=1
until ! check_pid; do
    if (( cnt == 6 )); then
        clear
        cnt=1
    fi
    sleep 10
    ((cnt++))
done
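If you do decide to write to a log file instead, the same idea becomes a truncation loop (a sketch; bot.log is a hypothetical name):
#!/bin/bash
./tg/tgcli -s ./bot/bot.lua "$@" >> bot.log 2>&1 &
pid="$!"
while kill -0 "$pid" 2>/dev/null; do  # loop while the bot is still running
    sleep 600      # every 10 minutes
    : > bot.log    # truncate the log in place without disturbing the writer
done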

Bash script to find filesystem usage

EDIT: Working script below
I have used this site MANY times to get answers, but I am a little stumped with this.
I am tasked with writing a script, in bash, to log into roughly 2000 Unix servers (Solaris, AIX, Linux) and check the size of OS filesystems, most notably /var, /usr, and /opt.
I have set some variables, which may be where I am going wrong right off the bat.
1.) First I am connecting to another server that has a list of all hosts in the infrastructure. Then I parse this data with some sed commands to get a list I can use properly.
2.) Then I do a ping test to see if the server is alive; if the server is not pingable, I assume it has been decommissioned. The idea behind this is that if the server is not pingable, I don't want it being reported on, or any attempt made to connect to it, as that just wastes time. I feel I am doing this wrong, but don't know how to do it correctly (a recurring theme you will hear in this post, lol).
If any FS is over the 80% mark, then it should output to a text file with the servername, filesystem, and size on one line <== very important for me
If the FS is under 80% full, then I don't want it in my output; it can be omitted completely.
I have created something that I will post below, and am hoping to get some help in figuring out where I am going wrong. I am very new to bash scripting, but have experience as a Unix admin (i have never been good at scripting).
Can anyone provide some direction and teach me where I am going wrong?
I will upload my script, which I can confirm is working, hopefully tomorrow. Thanks everyone for your input on this!
Here is my "disk usage" Linux script; I hope it helps you.
#!/bin/sh
# Skip the header line (and pseudo-filesystems) so the numeric test below doesn't choke.
df -H | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $6 }' | while read output;
do
    echo "$output"
    usep=$(echo "$output" | awk '{ print $1 }' | cut -d'%' -f1)
    partition=$(echo "$output" | awk '{ print $2 }')
    if [ "$usep" -ge 90 ]; then
        echo "Running out of space \"$partition ($usep%)\" on $(hostname) as on $(date)" |
        mail -s "Warning! There is no space on the disk: $usep%" root@domain.com
    fi
done
Some trouble is here:
ping -c 1 -W 3 $i > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "$i is offline" >> $LOG
fi
You need a continue statement inside that if. Your program isn't really treating non-pingable hosts differently, just logging they're not pingable.
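That is, something along these lines:
ping -c 1 -W 3 "$i" > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "$i is offline" >> "$LOG"
    continue  # skip the checks below and move on to the next host
fi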
Okay, now I'm looking a little deeper, and there's more naive stuff in here. These shouldn't work:
SOLVARFS=$(df -h /var |cut -f5 |grep -v capacity |awk '{print $5}')
SOLUSRFS=$(df -h /usr |cut -f5 |grep -v capacity |awk '{print $5}')
SOLOPTFS=$(df -h /opt |cut -f5 |grep -v capacity |awk '{print $5}')
etc...
The problem with these lines is, the command substitution gets assigned to the variables before the ssh session happens. So the content of each variable is the command's result on your local system, not the command itself. Since you're doing command substitution around your ssh calls, it might well work just to rewrite these lines as (note the backslash escapes on $5):
SOLVARFS="df -h /var |cut -f5 |grep -v capacity |awk '{print \$5}'"
SOLUSRFS="df -h /usr |cut -f5 |grep -v capacity |awk '{print \$5}'"
SOLOPTFS="df -h /opt |cut -f5 |grep -v capacity |awk '{print \$5}'"
etc...
The part where you're contacting another server has some more stuff to correct. You don't need three if statements per server, and there's no reason to echo anything to /dev/null. Here's a rewrite for the SunOS section. For each directory you're checking, it outputs the host name, the command name (so you can see which dir was being checked), and the result:
if [[ $UNAME = "SunOS" ]]; then
    for SSH_COMMAND in SOLVARFS SOLUSRFS SOLOPTFS ; do
        RESULT=$(ssh -o PasswordAuthentication=no -o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=2 -o GSSAPIAuthentication=no -q $i ${!SSH_COMMAND})
        if [ "${RESULT%\%}" -gt 80 ]; then  # strip the trailing % before the numeric test
            echo "$i, $SSH_COMMAND, $RESULT" >> $LOG
        fi
    done
fi
Note that the ${!BLAH} construction is variable indirection. "Give me the contents of the variable named by BLAH".
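A quick illustration of that construction:
SOLVARFS="df -h /var |cut -f5 |grep -v capacity |awk '{print \$5}'"
name=SOLVARFS
echo "${!name}"   # prints the command string stored in SOLVARFS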
Your original script does a bunch of things less-than-optimally. Rather than running an almost-identical block of code for each filesystem and each operating system, the thing to do would be to record the differences in a way that a SINGLE piece of code can iterate over all your objects, adapting as required.
Here's my take on this. Commands should appear ONCE, but
they get run multiple times by loops, and
they get run multiple ways using arrays.
The following script passes lint checks, but obviously this is untested, as I don't have your environment to test in.
You might still want to think about how your logging and notifications work.
#!/bin/bash
# Assign temp file, remove it automatically upon successful exit.
tmpfile=$(mktemp /tmp/${0##*/}.XXXX)
trap "rm '$tmpfile'" 0
#NOW=$(date +"%Y-%m-%d-%T")
NOW=$(date +"%F")
LOG=/usr/scripts/disk_usage/Unix_df_issues-$NOW.txt
printf '' > "$LOG"
# Use variables to refer to commonly accessed files. If you change a name, just do it once.
rawhostlist=all_vms.txt
host_os=${rawhostlist}_OS
# Commonly-used options need only be declared once. Use an array for easier management.
declare -a ssh_opts=()
ssh_opts+=(-o PasswordAuthentication=no)
ssh_opts+=(-o BatchMode=yes)
ssh_opts+=(-o StrictHostKeyChecking=no) # Eliminate prompts on new hosts
ssh_opts+=(-o ConnectTimeout=2) # This should make your `ping` unnecessary.
ssh_opts+=(-o GSSAPIAuthentication=no) # This is default. Do we really need it?
# Note: Associative arrays require Bash 4.x.
declare -A df_opts=(
    [SunOS]="-h"
    [Linux]="-hP"
    [AIX]=""
)
declare -A df_column=(
    [SunOS]=5
    [Linux]=5
    [AIX]=4
)
# Fetch host list from configserver, stripping /^adm/ on the remote end.
ssh "${ssh_opts[#]}" -q configserver "sed 's/^adm//' /reports/*/HOSTNAME" > "$rawhostlist"
# Confirm that our host_os cache is up to date and process any missing hosts.
awk '
    NR==FNR { h[$1]; next }  # Add everything in rawhostlist to an array...
    { delete h[$1] }         # Then remove any entries that exist in host_os.
    END {
        for (i in h) print i # And print whatever remains.
    }' "$rawhostlist" "$host_os" |
while read h; do
    printf '%s\t%s\n' "$h" "$(ssh "${ssh_opts[@]}" -q "$h" uname -s)"
done >> "$host_os"
# Next, step through the host list and collect data.
while read host os; do
    ssh "${ssh_opts[@]}" "$host" df "${df_opts[$os]}" /var /usr /opt |
        awk -v column="${df_column[$os]}" -v host="$host" \
            'NR>1 { sub(/%/, "", $column); print host, $1, $column }'  # drop the % so the tests below compare numbers
done < "$host_os" > "$tmpfile"
# Now that we have all our data, check for warning/critical levels.
while read host filesystem usage; do
    if [ "$usage" -gt 80 ]; then
        status="CRITICAL"
    elif [ "$usage" -gt 70 ]; then
        status="WARNING"
    else
        continue
    fi
    # Log our results to our log file, AND send them to stderr.
    printf "[%s] %s: %s:%s at %d%%\n" "$(date +"%F %T")" "$status" "$host" "$filesystem" "$usage" | tee -a "$LOG" >&2
done < "$tmpfile"
# Email and record our results.
if [ -s "$LOG" ]; then
mail -s "Daily Unix /var Report - $NOW" unixsystems#examplle.com < "$LOG"
mv "$LOG" /var/log/vm_reports/
fi
Consider this example code. If you like the way it looks, your next task is to debug it, or open new questions for parts that you're having trouble debugging. :-)

Linux Top command with more than 20 commands

I want to use top in order to monitor numerous processes by process name. I already know about doing $ top -p $(pgrep -d ',' <pattern>), but top limits me to 20 pids. Is there a way to allow for more than 20 pids?
Do I have to use a combination of ps and watch to get similar results?
From top/top.c:
if (Monpidsidx >= MONPIDMAX)
    error_exit(fmtmk(N_fmt(LIMIT_exceed_fmt), MONPIDMAX));
(where LIMIT_exceed_fmt is the error message you're getting).
And in top/top.h:
#define MONPIDMAX 20
I changed this number to 80, and that seems to work okay. Not sure why this hardcoded limit is so low.
So, if manually compiling procps-ng is an option, then you could do that. You don't need to replace the system top (or need root privileges), you can just put it in your homedir.
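Roughly, assuming the usual autotools flow for procps-ng (the repository URL and install prefix are assumptions; adjust as needed):
git clone https://gitlab.com/procps-ng/procps.git
cd procps
# edit top/top.h and raise the limit, e.g.: #define MONPIDMAX 80
./autogen.sh
./configure --prefix="$HOME/.local"
make && make install
"$HOME/.local/bin/top" -p $(pgrep -d ',' <pattern>)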
Another workaround might be using tmux or screen and multiple top instances.
Yet another possible solution might be using ps in a loop, i.e.:
#!/bin/sh
while :; do
    clear
    ps $*
    sleep 1
done
Invoke it as: ./psloop.sh 42 666
You may want to add more flags to ps for additional info. Also be aware this is less efficient, since it will invoke 3 binaries every second.
A wrapper with watch. Tested with Ubuntu 11.04, Ubuntu 14.04, RHEL5, RHEL6 and RHEL7
Syntax: script.sh pid [ pid ...] # space separated
Example: script.sh $(pgrep -d ' ' <pattern>)
#!/bin/bash
i=10 # ~ interval in seconds
format()
{
    a="$1 $2 $3 $4 $5 $6 $7 $8 $9 ${10} ${11} ${12}"
    a="$a ${13} ${14} ${15} ${16} ${17} ${18} ${19} ${20}"
    a="${a%%*( )}"; a="${a// /,}"
}
main()
{
    format "$@"
    top -b -n 1 -p "$a"
    [ $# -gt 20 ] && shift 20 || shift $#
    until [ $# -eq 0 ]; do
        format "$@"
        top -b -n 1 -p "$a" | sed '1,/PID/d;/^$/d'
        [ $# -gt 20 ] && shift 20 || shift $#
    done
}
if [ "$1" == "watch" ]; then
    shift
    shopt -s extglob
    main "$@"
else
    watch -t -n $i "$0 watch $@"
fi
