Unix shell command for uptime monitoring - linux

Apologies, I'm not a Unix guy; Windows and PowerShell are more my area. I need to check the uptime on Linux servers using a shell command that can be invoked from SCOM.
I have been able to get the uptime in seconds back into SCOM using...
cat /proc/uptime | gawk -F ' ' '{print $1}'
However, SCOM does not pick this up as numerical; I think it's treating the returned output as a string.
I'd like a shell command that returns a 0 or 1 if the number of seconds is less than one day (86400).
I've been experimenting with [test -gt] but can't seem to get it working.
thanks

Does this work for you?
cat /proc/uptime | gawk '{print ($1>86400)?1:0;}'
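If you would rather stay with the test/[ ] approach you were experimenting with, here is a minimal sketch; it assumes a POSIX shell on the target box and mirrors the gawk one-liner above (prints 1 once uptime exceeds a day):
# test/[ ] only compares integers, so strip the fractional part /proc/uptime reports
up=$(awk '{print int($1)}' /proc/uptime)
[ "$up" -gt 86400 ] && echo 1 || echo 0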

Related

Getting process from yesterday

I want to obtain all the processes that are running on the system, but only those from yesterday.
I am using this: ps -eo etime,pid
but I need to list only the processes from yesterday. Any idea?
INFO: active from yesterday, i.e. processes started yesterday that are still running
Thanks in advance
If you want all the pids that have been running for more than 1 day but less than 2:
ps -e -o pid= -o etime= | sed 's/^ *//' | awk -F '[ -]+' 'NF>2 && $2==1 {print $1}'
if you want just more than 1 day, change it to $2>=1
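If your ps supports the etimes keyword (elapsed time in plain seconds; procps-ng on most current Linux distributions does), the same filter can be written without parsing the dd-hh:mm:ss format at all:
# PIDs running for more than 1 day but less than 2, using elapsed seconds
ps -e -o pid= -o etimes= | awk '$2 >= 86400 && $2 < 172800 {print $1}'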
It is not clear what exactly you need.
But if you need it from a specific time, just add it to crontab.
For example:
0 2 * * * ps -eo etime,pid > /tmp/processes.txt
This way you will have a snapshot of the processes running at 2:00.
If you didn't mean that, please be more specific about what exactly you need.

Getting specific PID from CentOS Journalctl

I'm writing a bash script that will print on the screen all the latest logs from a service that has already died (or still lives, both situations must work). I know its name and don't have to guess.
I'm having difficulty getting the latest PID from journalctl for a process that has already died. I'm not talking about this:
journalctl | grep "<processname>"
This will give me all the logs that include processname in their text.
I've also tried:
journalctl | pgrep -f "<processname>"
This command gave me a list of numbers which supposedly should include the pid of my process. It was not there.
These ideas came from searching for previous questions. I haven't found a question that answers specifically what I asked.
How can I extract the latest PID from journalctl for a specific process?
I figured this out.
First, you must be printing your PID in your logs. It doesn't appear there automatically. Then, you can use grep -E and awk to grab exactly the expression you want from your log:
Var=$(journalctl --since "24 hours ago" | grep -E "\[([0-9]+)\]" | tail -n 1 | awk '{print $5}' | awk -F"[][{}]" '{print $2}')
This one-liner takes the logs from the last 24 hours, uses grep -E to match the bracketed PID expression, tail -n 1 to grab the most recent matching line, and then awk to split the line and pull out exactly the field you need.
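If the process in question runs as a systemd unit, another option worth trying is to let journalctl filter by unit and read the _PID field from the verbose output of the most recent entry; the unit name myservice below is just a placeholder, and -oP needs GNU grep (standard on CentOS):
# last recorded PID for the (hypothetical) unit "myservice"
journalctl -u myservice -n 1 -o verbose | grep -oP '^\s*_PID=\K[0-9]+'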

How to get grep -m1 to work in OSX

I have a script which I used perfectly fine on Linux, but now that I've switched over to Mac, the script still runs but has slightly different behavior.
This is a script for tallying student attendance at departmental functions. We use a portable barcode scanner to scan their IDs, and then save all scans in one csv file per date.
I used grep -m1 $ID csvfolder/* | wc -l in the past to get a count of how many files their ID shows up in. The -m1 is necessary to make sure they don't get "extra credit" for repeatedly scanning in at the same event.
However, when I use this same command on the Mac, grep exits as soon as it has found the first match in the first file. So if the student shows up in 4 files, wc -l still returns 1.
How can I (without installing the GNU versions) emulate this feature?
I don't have Mac OS X handy to test it with, but the following is POSIX-standard, AFAIK:
grep -l "$ID" csvfolder/* | wc -l
The grep will print the name of each file that contains a match. It should work equally well with GNU grep.
You could alternatively use awk for this task, printing each matching file name once so that wc -l counts files rather than matches:
awk -v id="$ID" '$0 ~ id { if (!seen[FILENAME]++) print FILENAME }' csvfolder/* | wc -l
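For context, the per-student tally could then be driven by a small loop; a rough sketch, assuming a file ids.txt (hypothetical name) with one ID per line:
# print each ID followed by the number of files it appears in
while IFS= read -r ID; do
    printf '%s %s\n' "$ID" "$(grep -l "$ID" csvfolder/* | wc -l)"
done < ids.txt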

Why `read -t` is not timing out in bash on RHEL?

Why does read -t not time out when reading from a pipe on RHEL5 or RHEL6?
Here is my example, which doesn't time out on my RHEL boxes while reading from the pipe:
tail -f logfile.log | grep 'something' | read -t 3 variable
If I'm correct, read -t 3 should time out after 3 seconds?
Many thanks in advance.
Chris
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
The solution given by chepner should work.
An explanation why your version doesn't is simple: When you construct a pipe like yours, the data flows through the pipe from the left to the right. When your read times out however, the programs on the left side will keep running until they notice that the pipe is broken, and that happens only when they try to write to the pipe.
A simple example is this:
cat | sleep 5
After five seconds the pipe will be broken because sleep will exit, but cat will nevertheless keep running until you press return, at which point it tries to write to the broken pipe and exits.
In your case that means, until grep produces a result, your command will keep running despite the timeout.
While not a direct answer to your specific question, you will need to run something like
read -t 3 variable < <( tail -f logfile.log | grep "something" )
in order for the newly set value of variable to be visible after the pipeline completes. See if this times out as expected.
Since you are simply using read as a way of exiting the pipeline after a fixed amount of time, you don't have to worry about the scope of variable. However, grep may find a match without printing it within your timeout due to its own internal buffering. You can disable that (with GNU grep, at least), using the --line-buffered option:
tail -f logfile.log | grep --line-buffered "something" | read -t 3
Another option, if available, is the timeout command as a replacement for the read:
timeout 3 tail -f logfile.log | grep -q --line-buffered "something"
Here, we kill tail after 3 seconds, and use the exit status of grep in the usual way.
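For example, if you want to branch on whether the pattern appeared within the window, the grep exit status can be tested directly (note the pipeline still waits the full 3 seconds for timeout to kill tail, even when grep matches early):
if timeout 3 tail -f logfile.log | grep -q --line-buffered "something"; then
    echo "found a match within 3 seconds"
else
    echo "no match before the timeout"
fi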
I don't have a RHEL server to test your script right now, but I could bet that read is exiting on timeout and working as it should. Try running:
grep 'something' | strace bash -c "read -t 3 variable"
and you can confirm that.

Bash script to get server health

I'm looking to monitor some aspects of a farm of servers that are necessary for the application that runs on them.
Basically, I'm looking to have a file on each machine which, when accessed via HTTP (on a VLAN) with curl, will spit out the information I'm looking for, which I can then log to the database with a daemon that sits in a loop and checks the health of all the servers one by one.
The info I'm looking to get is:
<load>server load</load>
<free>md0 free space in MB</free>
<total>md0 total space in MB</total>
<processes># of nginx processes</processes>
<time>timestamp</time>
What's the best way of doing that?
EDIT: We are using Cacti and OpenNMS; however, what I'm looking for here is data that is necessary for the application that runs on these servers. I don't want to complicate it by making it rely on any 3rd-party software to fetch this basic data, which can be gotten with a few Linux commands.
Make a cron entry that:
executes a shell script every few minutes (or whatever frequency you want)
saves the output in a directory that's published by the web server
Assuming your text is literally what you want, this will get you 90% of the way there:
#!/usr/bin/env bash
# server load: first value after "load average:" in uptime's output (1-minute average)
LOAD=$(uptime | cut -d: -f5 | cut -d, -f1)
# available and total space in MB for the root filesystem (adjust the mount point if md0 lives elsewhere)
FREE=$(df -m / | tail -1 | awk '{ print $4 }')
TOTAL=$(df -m / | tail -1 | awk '{ print $2 }')
# number of nginx processes; the [n] trick keeps grep from counting itself
PROCESSES=$(ps aux | grep "[n]ginx" | wc -l)
TIME=$(date)
cat <<-EOF
<load>$LOAD</load>
<free>$FREE</free>
<total>$TOTAL</total>
<processes>$PROCESSES</processes>
<time>$TIME</time>
EOF
Sample output:
<load> 0.05</load>
<free>9988</free>
<total>13845</total>
<processes>6</processes>
<time>Wed Apr 18 22:14:35 CDT 2012</time>
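To tie this back to the cron suggestion above, a single crontab entry can run the script and publish its output where curl can reach it; the script path and web root below (/usr/local/bin/health.sh and /var/www/html) are just placeholders for your own layout:
# every 5 minutes, regenerate the health snapshot under the web server's document root
*/5 * * * * /usr/local/bin/health.sh > /var/www/html/health.xml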
