I'm looking to monitor some aspects of a farm of servers that are necessary for the application that runs on them.
Basically, I'm looking to have a file on each machine which, when fetched over HTTP (on a VLAN) with curl, spits out the information I'm after. A daemon then sits in a loop, checks the health of all the servers one by one, and logs the results into a database.
The info I'm looking to get is:
<load>server load</load>
<free>md0 free space in MB</free>
<total>md0 total space in MB</total>
<processes># of nginx processes</processes>
<time>timestamp</time>
What's the best way of doing that?
EDIT: We are using Cacti and OpenNMS; however, what I'm looking for here is data that is necessary for the application that runs on these servers. I don't want to complicate things by relying on any 3rd-party software to fetch basic data that can be gotten with a few Linux commands.
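For the polling side, a minimal sketch of the daemon loop described above, assuming each server publishes the output at a hypothetical URL like http://$host/health.xml (the hostnames and the database insert are placeholders):
#!/usr/bin/env bash
# Hypothetical poller: fetch each server's health file over the VLAN and hand it off for logging.
SERVERS="app01 app02 app03"   # replace with your real hostnames or IPs

while true; do
    for host in $SERVERS; do
        if xml=$(curl -fsS --max-time 5 "http://$host/health.xml"); then
            echo "$(date) $host OK"
            # parse $xml here and INSERT the values into your database
        else
            echo "$(date) $host UNREACHABLE"
        fi
    done
    sleep 60
done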
Make a cron entry that:
executes a shell script every few minutes (or whatever frequency you want)
saves the output in a directory that's published by the web server
Assuming your text is literally what you want, this will get you 90% of the way there:
#!/usr/bin/env bash

# 1-minute load average, parsed from uptime's output (format-dependent; adjust the field numbers if yours differs)
LOAD=$(uptime | cut -d: -f5 | cut -d, -f1)
# Free and total space in MB for the filesystem mounted at /; point df at md0's mount point if it differs
FREE=$(df -m / | tail -1 | awk '{ print $4 }')
TOTAL=$(df -m / | tail -1 | awk '{ print $2 }')
# Count running nginx processes; the [n] trick keeps the grep itself out of the count
PROCESSES=$(ps aux | grep '[n]ginx' | wc -l)
TIME=$(date)

cat <<-EOF
<load>$LOAD</load>
<free>$FREE</free>
<total>$TOTAL</total>
<processes>$PROCESSES</processes>
<time>$TIME</time>
EOF
Sample output:
<load> 0.05</load>
<free>9988</free>
<total>13845</total>
<processes>6</processes>
<time>Wed Apr 18 22:14:35 CDT 2012</time>
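To publish it, a cron entry along these lines would regenerate the file every five minutes (the script and document-root paths are just examples; adjust them to your setup):
# /etc/cron.d/health  (hypothetical paths)
*/5 * * * * root /usr/local/bin/health.sh > /var/www/html/health.xml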
Related
I'm currently trying to find a good solution to a PCAP storage problem I'm encountering. Right now, I have an LVM drive on RHEL that stores PCAPs taken from netsniff. As you can imagine, this drive fills up quickly and somewhat unpredictably depending on how much traffic flows across my network.
Currently, I'm using an inelegant solution to my problem: a custom shell script that checks the disk usage percentage and then removes the 100 oldest captures by invoking logrotate. This is set to run every 30 minutes or so.
#!/bin/bash

# Alert threshold: trigger rotation when a filesystem is this % full
declare -i ALERT
ALERT=80

df -H | grep -vE '^Filesystem|tmpfs|udev' | awk '{print $1 " " $5}' | while read output
do
    # Second field is the "Use%" column; strip the trailing %
    usep=$(echo "$output" | awk '{print $2}' | cut -d '%' -f1)
    if [ "$usep" -ge "$ALERT" ]; then
        echo "Running out of space: ${usep}% used"
        logrotate -v /etc/logrotate.d/pcap   # adjust to wherever your pcap logrotate config lives
    else
        echo "Plenty of space: ${usep}% used"
    fi
done
I was wondering if there is a better solution out there, something that might take fluctuations in traffic into account and adjust the offloading of pcaps accordingly?
What about compressing all the pcaps? This would probably save a lot of space.
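A rough sketch of that idea, assuming the captures live under /data/pcaps (adjust the path and the age cutoff to taste):
# Compress any capture older than a day; netsniff has finished writing those,
# and gzip renames them to .pcap.gz so they won't be compressed twice.
find /data/pcaps -name '*.pcap' -mmin +1440 -exec gzip {} +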
Apologies, I'm not a Unix guy; Windows and PowerShell are more my area. I need to check the uptime on Linux servers using a shell command that can be invoked from SCOM.
I have been able to get the uptime in seconds back into SCOM using...
cat /proc/uptime | gawk -F ' ' '{print $1}'
However, SCOM does not pick this up as numerical; I think it's treating the returned output as a string.
I'd like a shell command that returns a 0 or 1 depending on whether the number of seconds is less than one day (86400).
I've been experimenting with test -gt but can't seem to get it working.
Thanks.
Does this work for you?
cat /proc/uptime | gawk '{print ($1>86400)?1:0;}'
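If you'd rather stick with test, a sketch along the same lines (prints 1 when the uptime exceeds one day, 0 otherwise):
# /proc/uptime's first field is fractional seconds; truncate it so test can do an integer comparison
UPTIME=$(awk '{print int($1)}' /proc/uptime)
if [ "$UPTIME" -gt 86400 ]; then echo 1; else echo 0; fi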
I have a script which I used perfectly fine in Linux, but now that I've switched over to Mac, the script still runs but has slightly different behavior.
This is a script for tallying student attendance at departmental functions. We use a portable barcode scanner to scan their IDs, and then save all scans in one CSV file per date.
I used grep -m1 $ID csvfolder/* | wc -l in the past to get a count of how many files their ID shows up in. The -m1 is necessary to make sure they don't get "extra credit" for repeatedly scanning in at the same event.
However, when I use this same command on a Mac, grep exits as soon as it has found the first match in the first file. So if the student shows up in 4 files, wc -l still returns 1.
How can I (without installing the GNU versions) emulate this feature?
I don't have Mac OS X handy to test it with, but the following is POSIX-standard, AFAIK:
grep -l "$ID" csvfolder/* | wc -l
grep -l prints the name of each file that contains a match; it should work equally well with GNU grep.
You could alternatively use awk for this task:
awk -v id="$ID" 'FNR == 1 { seen = 0 } !seen && $0 ~ id { print FILENAME; seen = 1 }' csvfolder/* | wc -l
I am trying to write a script that reads and extracts the user agent from a log file, which I am able to do with the following code. But I also need to produce a numeric count of the browsers that made the requests and, using gnuplot, plot a bar chart of the number of requests per browser. I'm kinda stuck on the browser request counts; a little help or direction would be appreciated.
Cheers.
#!/bin/bash
# Extract the user agent field (the 6th "-delimited field in a combined log) and pipe it
# through sort | uniq -c to count each unique user agent. The final sort orders the
# result by count and name (both descending).
awk -F\" '{print $6}' access.log | sort | uniq -c | sort -fr > extracteduseragents.txt
To get friendlier names for the browsers, you can use e.g. pybrowscap together with the browscap.csv file from http://tempdownloads.browserscap.com/index.php.
Then you could use a script like the following sanitize_ua.py:
#!/usr/bin/env python
import sys
from pybrowscap.loader.csv import load_file

# Load the browscap database once, then translate each user agent read from stdin
browscap = load_file('browscap.csv')

for ua in sys.stdin:
    browser = browscap.search(ua.strip())
    if browser:
        print "'{} {}'".format(browser.name(), browser.version())
And run it from the command line like this:
awk -F\" '{print $6}' access.log | sort | python sanitize_ua.py | uniq -c | sort -fr
Of course, searching all user agents before uniq is very inefficient, but it should show the working principle. And, of course, you could also write a single Python script to do all the processing.
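For the gnuplot half of the question, a minimal sketch that reads the counts file produced above (xtic(2) only picks up the first word of each user agent line, so feed it the sanitized names for readable labels):
gnuplot <<'EOF'
set terminal png size 1024,600
set output 'requests_per_browser.png'
set style fill solid
set boxwidth 0.8
set xtics rotate by -45
set yrange [0:*]
# column 1 is the count from uniq -c, column 2 the start of the browser name
plot 'extracteduseragents.txt' using 0:1:xtic(2) with boxes title 'Requests per browser'
EOF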
Is there a way to show all open files by IP address on Linux?
I use this:
netstat -atun | awk '{print $5}' | cut -d: -f1 | sed -e '/^$/d' |sort | uniq -c | sort -n
to show all connections from IP sorted by number of connections.
How do I know what these IPs are hitting?
Thanks in advance!
If you can find a way to identify, in netstat, the process that has the socket open, you can use ls -l /proc/<pid>/fd to see what files that process has open. Of course, many of those files may not be accessed from the network; for example, a typical Apache server will have /var/log/httpd/access_log and /var/log/httpd/error_log open, and quite possibly some other files too. And it is only a moment in time: the files that process has open five seconds or a millisecond from now may be quite different.
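For example, a rough sketch of that idea (run it as root so the PID/Program column from netstat -p is populated for all processes):
# List the PID behind each connection, then dump that PID's open file descriptors.
netstat -tunp 2>/dev/null | awk 'NR > 2 && $NF != "-" { split($NF, a, "/"); print a[1] }' | sort -u |
while read pid; do
    echo "== PID $pid =="
    ls -l /proc/"$pid"/fd 2>/dev/null
done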
I presume you don't let just anything/anyone access the server, so I'm guessing it's a web server or some such, in which case it would probably be easier to put some code into your web interface to track who does what.