Linux: finding the mean of the memory usage of the last hour

I am trying to write a script that finds the mean memory usage over the last hour and, if it's above 60%, mails someone relevant.
I have been trying this for days and I am completely lost. On the other hand, I can't get any updates for my Ubuntu, so I can't try something like atop. I need this to work on other computers as well.
As far as I know:
free -m | awk 'NR==2{printf "Memory Usage: %s/%sMB (%.2f%%)\n", $3,$2,$3*100/$2 }'
I am trying to use something like this in my code. Any help would be appreciated.
Thanks.
EDIT
So I've built my script's basics. But in this script I am getting the current RAM usage.
#!/bin/sh
# Report current RAM usage as an integer percentage and mail if above 60%.
used=$(free -m | grep '^Mem' | awk '{print $3}')
total=$(free -m | grep '^Mem' | awk '{print $2}')
perct=$(( (used * 100) / total ))
echo "$perct%"
if [ "$perct" -gt 60 ] ; then
echo "RAM usage: $perct% is above 60%" | mail -s "Critical RAM Usage" "example@example.com"
fi
#end
From this point, what can I do to improve my code?
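One possible approach (a rough, untested sketch; the log path and mail address are just placeholders): sample the usage periodically from cron and average the samples from the last hour before deciding whether to mail:
#!/bin/sh
# Run from cron every minute, e.g.:  * * * * * /usr/local/bin/mem_sample.sh
LOGFILE=/var/tmp/mem_usage.log
# Append the current usage percentage as one sample per line.
free -m | awk 'NR==2 {printf "%d\n", $3*100/$2}' >> "$LOGFILE"
# Keep only the last 60 samples (roughly the last hour at one sample per minute).
tail -n 60 "$LOGFILE" > "$LOGFILE.tmp" && mv "$LOGFILE.tmp" "$LOGFILE"
# Average the samples and mail if the mean is above 60%.
mean=$(awk '{ sum += $1 } END { if (NR) printf "%d", sum/NR }' "$LOGFILE")
if [ -n "$mean" ] && [ "$mean" -gt 60 ]; then
echo "Mean RAM usage over the last hour: $mean%" | mail -s "Critical RAM Usage" example@example.com
fi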

Related

How to get the memory and cpu usage of a remote server?

My intent is to log into several servers and print out their memory & CPU usage one by one. I wrote the following script:
START=1
END=5
for i in {$START..$END}
do
echo "myserver$i"
ssh myserver$i
free -m | awk 'NR==2{printf "Memory Usage: %s/%sMB (%.2f%%)\n", $3,$2,$3*100/$2 }'
top -bn1 | grep load | awk '{printf "CPU Load: %.2f\n", $(NF-2)}'
logout
done
But it doesn't work. Can anyone give a solution to this? Thanks a lot!
Look carefully at your code.
After the SSH command, you are on the remote server, in an SSH shell, and obviously your script now wants you to talk (via keyboard) to the remote server. When that session is finished, e.g. if you hit ctrl-c or ctrl-d, then the next commands like "free" and "top" run on your local machine.
You have to tell ssh, with a kind of "-exec" argument, that it should execute free and top on the remote server :D
I'm sure you'll figure out yourself how to do that, have fun.
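For example (a quick sketch of that idea), pass the commands as a quoted argument so ssh runs them on the remote host and prints the output locally:
for i in $(seq $START $END)
do
echo "myserver$i"
ssh "myserver$i" "free -m | awk 'NR==2{printf \"Memory Usage: %s/%sMB (%.2f%%)\n\", \$3,\$2,\$3*100/\$2 }'"
ssh "myserver$i" "top -bn1 | grep load | awk '{printf \"CPU Load: %.2f\n\", \$(NF-2)}'"
done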
There is one useful command for CPU/mem usage - top.
To get the results, run these commands:
CPU Usage - top -b -n 1 | grep Cpu
Mem Usage - top -b -n 1 | grep 'KiB Mem'
After searching online and combining a few answers from other questions on Stack Overflow, I got the following solution.
Solution
On your local computer, you might want to have the following bash script, named, say, usage_ssh
START=1
END=3
date
for i in $(seq $START $END)
do
printf '=%.0s' {1..50};
printf '\n'
echo myserver$i
ssh myserver$i -o LogLevel=QUIET -t "~/bin/usage"
done
printf '=%.0s' {1..50};
printf '\n'
printf 'CPU Load: \n'
printf 'First Field\tprocesses per processor\n'
printf 'Second Field\tidling percentage in last 5 minutes\n'
printf '\n'
printf '\n'
On your remote server, you should have the following bash script named usage. This script should be located in ~/bin.
free -m | awk 'NR==2{printf "Memory Usage\t%s/%sMB\t\t%.2f%%\n", $3, $2, $3/$2*100}';
top -n 1 | grep load | awk '{printf "CPU Load\t%.2f\t\t\t%.2f\n", $(NF-2), $(NF-1)}';
Explanation
The idea is that you use ssh -t <your command> to run an executable on your remote server and get the output on the screen of your local computer.
Output
Sat Mar 28 10:32:34 CDT 2020
==================================================
myserver1
Memory Usage 47418/48254MB 98.27%
CPU Load 0.01 0.02
==================================================
myserver2
Memory Usage 47421/48254MB 98.27%
CPU Load 0.01 0.02
==================================================
myserver3
Memory Usage 4300/84541MB 5.09%
CPU Load 0.02 0.02
==================================================
CPU Load:
First Field processes per processor
Second Field idling percentage in last 5 minutes

Bash script to find filesystem usage

EDIT: Working script below
I have used this site MANY times to get answers, but I am a little stumped with this.
I am tasked with writing a script, in bash, to log into roughly 2000 Unix servers (Solaris, AIX, Linux) and check the size of OS filesystems, most notably /var, /usr, and /opt.
I have set some variables, which may be where I am going wrong right off the bat.
1.) First, I connect to another server that has a list of all hosts in the infrastructure, then I parse this data with some sed commands to get a list I can use properly.
2.) Then I do a ping test to see if the server is alive or has been decommissioned. The idea behind this is that if the server is not pingable, I don't want it being reported on, or any attempt made to connect to it, as that is just wasting time. I feel I am doing this wrong, but don't know how to do it correctly (a recurring theme you will hear in this post, lol).
If any FS is over the 80% mark, then it should output to a text file with the server name, filesystem, and size on one line <== very important for me
If the FS is under 80% full, then I don't want it in my output; it can be omitted completely.
I have created something that I will post below, and am hoping to get some help in figuring out where I am going wrong. I am very new to bash scripting, but have experience as a Unix admin (I have never been good at scripting).
Can anyone provide some direction and teach me where I am going wrong?
I will upload my script, which I can confirm is working, hopefully tomorrow. Thanks everyone for your input on this!
Here is my "disk usage" linux script, i hope that help you.
#!/bin/sh
# Mail a warning for any filesystem that is 90% full or more.
df -H | awk 'NR>1 { print $5 " " $6 }' | while read output;
do
echo "$output"
usep=$(echo "$output" | awk '{ print $1}' | cut -d'%' -f1 )
partition=$(echo "$output" | awk '{ print $2 }' )
if [ "$usep" -ge 90 ]; then
echo "Running out of space \"$partition ($usep%)\" on $(hostname) as on $(date)" |
mail -s "Warning! There is no space on the disk: $usep%" root@domain.com
fi
done
Some of the trouble is here:
ping -c 1 -W 3 $i > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo "$i is offline" >> $LOG
fi
You need a continue statement inside that if. Your program isn't really treating non-pingable hosts differently, just logging they're not pingable.
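In other words, something like this (sketch):
ping -c 1 -W 3 $i > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo "$i is offline" >> $LOG
continue   # skip this host instead of falling through to the ssh checks
fi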
Okay, now I'm looking a little deeper, and there's more naive stuff in here. These shouldn't work:
SOLVARFS=$(df -h /var |cut -f5 |grep -v capacity |awk '{print $5}')
SOLUSRFS=$(df -h /usr |cut -f5 |grep -v capacity |awk '{print $5}')
SOLOPTFS=$(df -h /opt |cut -f5 |grep -v capacity |awk '{print $5}')
etc...
The problem with these lines is, the command substitution gets assigned to the variables before the ssh session happens. So the content of each variable is the command's result on your local system, not the command itself. Since you're doing command substitution around your ssh calls, it might well work just to rewrite these lines as (note the backslash escapes on $5):
SOLVARFS="df -h /var |cut -f5 |grep -v capacity |awk '{print \$5}'"
SOLUSRFS="df -h /usr |cut -f5 |grep -v capacity |awk '{print \$5}'"
SOLOPTFS="df -h /opt |cut -f5 |grep -v capacity |awk '{print \$5}'"
etc...
The part where you're contacting another server has some more stuff to correct. You don't need three if statements per server, and there's no reason to echo anything to /dev/null. Here's a rewrite for the SunOS section. For each directory you're checking, it outputs the host name, the command name (so you can see which dir was being checked), and the result:
if [[ $UNAME = "SunOS" ]]; then
for SSH_COMMAND in SOLVARFS SOLUSRFS SOLOPTFS ; do
RESULT=`ssh -o PasswordAuthentication=no -o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=2 GSSAPIAuthentication=no -q $i ${!SSH_COMMAND}`
if ["$RESULT" -gt 80] ; do
echo "$i, $SSH_COMMAND, $RESULT" >> $LOG
fi
done
fi
Note that the ${!BLAH} construction is variable indirection. "Give me the contents of the variable named by BLAH".
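A tiny illustration of that indirection:
SOLVARFS="df -h /var"
SSH_COMMAND=SOLVARFS
echo "${!SSH_COMMAND}"   # prints: df -h /var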
Your original script does a bunch of things less-than-optimally. Rather than running an almost-identical block of code for each filesystem and each operating system, the thing to do would be to record the differences in a way that a SINGLE piece of code can iterate over all your objects, adapting as required.
Here's my take on this. Commands should appear ONCE, but
they get run multiple times by loops, and
they get run multiple ways using arrays.
The following script passes lint checks, but obviously this is untested, as I don't have your environment to test in.
You might still want to think about how your logging and notifications work.
#!/bin/bash
# Assign temp file, remove it automatically upon successful exit.
tmpfile=$(mktemp /tmp/${0##*/}.XXXX)
trap "rm '$tmpfile'" 0
#NOW=$(date +"%Y-%m-%d-%T")
NOW=$(date +"%F")
LOG=/usr/scripts/disk_usage/Unix_df_issues-$NOW.txt
printf '' > "$LOG"
# Use variables to refer to commonly accessed files. If you change a name, just do it once.
rawhostlist=all_vms.txt
host_os=${rawhostlist}_OS
# Commonly-used options need only be declared once. Use an array for easier management.
declare -a ssh_opts=()
ssh_opts+=(-o PasswordAuthentication=no)
ssh_opts+=(-o BatchMode=yes)
ssh_opts+=(-o StrictHostKeyChecking=no) # Eliminate prompts on new hosts
ssh_opts+=(-o ConnectTimeout=2) # This should make your `ping` unnecessary.
ssh_opts+=(-o GSSAPIAuthentication=no) # This is default. Do we really need it?
# Note: Associative arrays require Bash 4.x.
declare -A df_opts=(
[SunOS]="-h"
[Linux]="-hP"
[AIX]=""
)
declare -A df_column=(
[SunOS]=5
[Linux]=5
[AIX]=4
)
# Fetch host list from configserver, stripping /^adm/ on the remote end.
ssh "${ssh_opts[#]}" -q configserver "sed 's/^adm//' /reports/*/HOSTNAME" > "$rawhostlist"
# Confirm that our host_os cache is up to date and process any missing hosts.
awk '
NR==FNR { h[$1]; next } # Add everything in rawhostlist to an array...
{ delete h[$1] } # Then remove any entries that exist in host_os.
END {
for (i in h) print i # And print whatever remains.
}' "$rawhostlist" "$host_os" |
while read h; do
printf '%s\t%s\n' "$h" $(ssh "$h" "${ssh_opts[#]}" -q uname -s)
done >> "$host_os"
# Next, step through the host list and collect data.
while read host os; do
ssh "${ssh_opts[#]}" "$host" df "${df_opts[$os]}" /var /usr /opt |
awk -v column="${df_column[$os]}" -v host="$host" 'NR>1 { print host,$1,$column }'
)
done < "$host_os" > "$tmpfile"
# Now that we have all our data, check for warning/critical levels.
while read host filesystem usage; do
if [ "$usage" -gt 80 ]; then
status="CRITICAL"
elif [ "$usage" -gt 70 ]; then
status="WARNING"
else
continue
fi
# Log our results to our log file, AND send them to stderr.
printf "[%s] %s: %s:%s at %d%%\n" "$(date +"%F %T")" "$status" "$host" "$filesystem" "$usage" | tee -a "$LOG" >&2
done < "$tmpfile"
# Email and record our results.
if [ -s "$LOG" ]; then
mail -s "Daily Unix /var Report - $NOW" unixsystems#examplle.com < "$LOG"
mv "$LOG" /var/log/vm_reports/
fi
Consider this example code. If you like the way it looks, your next task is to debug it, or open new questions for parts that you're having trouble debugging. :-)

Linux: How to receive warning email from a server when not much hard drive space left?

I am building a new CentOS 6.4 server.
I was wondering if there is a way I can receive a warning email when the use of any partition exceeds 80% in the server.
EDIT:
As Aaron Digulla pointed out, this question is better suited for Server Fault.
Please view or answer this question in the following post in Server Fault.
https://serverfault.com/questions/570647/linux-how-to-receive-warning-email-from-a-server-when-not-much-hard-drive-space
EDIT:
Server Fault put my post on hold. I guess I have no choice but continue this post here.
As Sayajin suggested, the following script can do the trick.
usage=$(df | awk '{print $1,$5}' | tail -n +2 | tr -d '%');
echo "$usage" | while read FS PERCENT; do [ "$PERCENT" -ge "80" ] && echo "$FS has used ${PERCENT}% Disk Space"; done
This is exactly what I want to do. However for my case, the df output looks something like this:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-LogVol01
197836036 5765212 182021288 4% /
As you can see, the filesystem and Use% are not on the same line. This means $1 and $5 are not the info I want to get. Any idea how to fix this?
Thanks.
EDIT:
The trick is
df -P
I also found shell script example in the following link doing exactly the same thing:
http://bash.cyberciti.biz/monitoring/shell-script-monitor-unix-linux-diskspace/
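Combining that with the one-liner above gives something like this (a sketch; -P keeps each filesystem on a single line):
usage=$(df -P | awk '{print $1,$5}' | tail -n +2 | tr -d '%');
echo "$usage" | while read FS PERCENT; do [ "$PERCENT" -ge "80" ] && echo "$FS has used ${PERCENT}% Disk Space"; done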
Install a monitoring service like Nagios.
You could always create a bash script & then have it email you:
usage=$(df | awk '{print $1,$5}' | tail -n +2 | tr -d '%');
echo "$usage" | while read FS PERCENT; do [ "$PERCENT" -ge "80" ] && echo "$FS has used ${PERCENT}% Disk Space"; done
Obviously instead of the && echo "$FS has used ${PERCENT}% Disk Space" you would send the warning email.
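For instance (the recipient address is only a placeholder):
echo "$usage" | while read FS PERCENT; do
[ "$PERCENT" -ge "80" ] && echo "$FS has used ${PERCENT}% Disk Space" | mail -s "Disk space warning on $(hostname)" admin@example.com
done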
For people who do not have a monitoring system like Nagios (as suggested by @Aaron Digulla), this simple script can do the job:
#!/bin/bash
CURRENT=$(df / | grep / | awk '{ print $5}' | sed 's/%//g')
THRESHOLD=90
if [ "$CURRENT" -gt "$THRESHOLD" ] ; then
mail -s 'Disk Space Alert' mailid@domainname.com << EOF
Your root partition remaining free space is critically low. Used: $CURRENT%
EOF
fi
Then just add a cron job.
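For example, an hourly check might look like this in the crontab (the script path is just a placeholder):
0 * * * * /usr/local/bin/disk_space_alert.sh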

Removing forks in a shell script so that it runs well in Cygwin

I'm trying to run a shell script on Windows in Cygwin. The problem I'm having is that it runs extremely slowly in the following section of code. From a bit of googling, I believe it's due to there being a large number of fork() calls within the script, and as Windows has to use Cygwin's emulation of these, it just slows to a crawl.
A typical scenario would be: in Linux the script completes in < 10 seconds (depending on file size), but on Windows in Cygwin the same file takes nearly 10 minutes.
So the question is, how can I remove some of these forks and still have the script return the same output? I'm not expecting miracles, but I'd like to cut that 10 minute wait time down a fair bit.
Thanks.
check_for_customization(){
filename="$1"
extended_class_file="$2"
grep "extends" "$filename" | grep "class" | grep -v -e '^\s*<!--' | while read line; do
classname="$(echo $line | perl -pe 's{^.*class\s*([^\s]+).*}{$1}')"
extended_classname="$(echo $line | perl -pe 's{^.*extends\s*([^\s]+).*}{$1}')"
case "$classname" in
*"$extended_classname"*) echo "$filename"; echo "$extended_classname |$classname | $filename" >> "$extended_class_file";;
esac
done
}
Update: Changed the regex a bit and used a bit more perl:
check_for_customization(){
filename="$1"
extended_class_file="$2"
grep "^\(class\|\(.*\s\)*class\)\s.*\sextends\s\S*\(.*$\)" "$filename" | grep -v -e '^\s*<!--' | perl -pe 's{^.*class\s*([^\s]+).*extends\s*([^\s]+).*}{$1 $2}' | while read classname extended_classname; do
case "$classname" in
*"$extended_classname"*) echo "$filename"; echo "$extended_classname | $classname | $filename" >> "$extended_class_file";;
esac
done
}
So, using the above code, the run time was reduced from about 8 minutes to 2.5 minutes. Quite an improvement.
If anybody can suggest any other changes I would appreciate it.
Put more commands into one Perl script, e.g.
check_for_customization(){
filename="$1" extended_class_file="$2" perl -n - "$1" <<\EOF
next if /^\s*<!--/;
next unless /^.*class\s*([^\s]+).*/; $classname = $1;
next unless /^.*extends\s*([^\s]+).*/; $extended_classname = $1;
if (index($extended_classname, $classname) != -1)
{
print "$ENV{filename}\n";
open FILEOUT, ">>$ENV{extended_class_file}";
print FILEOUT "$extended_classname |$classname | $ENV{filename}\n"
}
EOF
}
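It is called the same way as the original function, e.g. (the file names here are hypothetical):
check_for_customization MyClass.php extended_classes.txt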

Bash monitor disk usage

I bought a NAS box which has a cut-down version of Debian on it.
It ran out of space the other day and I did not realise. I basically want to write a bash script that will alert me whenever the disk gets over 90% full.
Is anyone aware of a script that will do this or give me some advice on writing one?
#!/bin/bash
source /etc/profile
# Device to check
devname="/dev/sdb1"
let p=`df -k $devname | grep -v ^File | awk '{printf ("%i",$3*100 / $2); }'`
if [ $p -ge 90 ]
then
df -h $devname | mail -s "Low on space" my@email.com
fi
Crontab this to run however often you want an alert
EDIT: For multiple disks
#!/bin/bash
source /etc/profile
# Devices to check
devnames="/dev/sdb1 /dev/sda1"
for devname in $devnames
do
let p=`df -k $devname | grep -v ^File | awk '{printf ("%i",$3*100 / $2); }'`
if [ $p -ge 90 ]
then
df -h $devname | mail -s "$devname is low on space" my@email.com
fi
done
I tried to use Erik's answer but had issues with devices having long names, which wraps the lines and causes the script to fail; also, the math looked wrong to me and didn't match the percentages reported by df itself.
Here's an update to his script:
#!/bin/bash
source /etc/profile
# Devices to check
devnames="/dev/sda1 /dev/md1 /dev/mapper/vg1-mysqldisk1 /dev/mapper/vg4-ctsshare1 /dev/mapper/vg2-jbossdisk1 /dev/mapper/vg5-ctsarchive1 /dev/mapper/vg3-muledisk1"
for devname in $devnames
do
let p=`df -Pk $devname | grep -v ^File | awk '{printf ("%i", $5) }'`
if [ $p -ge 70 ]
then
df -h $devname | mail -s "$devname is low on space" my@email.com
fi
done
Key changes: df -k is changed to df -Pk to avoid line wrapping, and the awk is simplified to use the pre-calculated percentage instead of recalculating it.
You could also use Monit for this kind of job. It's a "free open source utility for managing and monitoring, processes, programs, files, directories and filesystems on a UNIX system".
Based on @Erik's answer, here is my version with variables:
#!/bin/bash
DEVNAMES="/ /home"
THRESHOLD=80
EMAIL=you@email.com
host=$(hostname)
for devname in $DEVNAMES
do
current=$(df $devname | grep / | awk '{ print $5}' | sed 's/%//g')
if [ "$current" -gt "$THRESHOLD" ] ; then
mail -s "Disk space alert on $host" "$EMAIL" << EOF
WARNING: partition $devname on $host is $current% !!
To list big files (>100Mo) :
find $devname -xdev -type f -size +100M
EOF
fi
done
And if you do not have the mail command on your server, you can send email via SMTP with swaks:
swaks --from "$EMAIL" --to "$EMAIL" --server "TheServer" --auth LOGIN --auth-user "TheUser" --auth-password "ThePasswrd" --h-Subject "Disk space alert on $host" --body - << EOF
Based on previous answers, here's my version with the following changes:
Automatically checks all mounted devices
Sends only one mail per check, regardless of how many devices are over the threshold
Code generally tidied up
#!/bin/bash
DEVNAMES=$(df --output=source | grep ^/dev)
THRESHOLD=90
EMAIL=your@email
HOST=$(hostname)
for devname in $DEVNAMES
do
current=$(df $devname | awk 'NR>1 {printf "%i",$5}')
[ "$current" -gt "$THRESHOLD" ] && warn="WARNING: partition $devname on $HOST is $current% !! \n$warn"
done
[ "$warn" ] && echo -e "$warn" | mail -s "Disk space alert on $HOST" $EMAIL
