I want to check whether the application is up and running, or down because of some error in the application. I can monitor the server using the process ID, but it is possible for the application to be down while the server is up and running.
I want to monitor the application URL, e.g. http://10.1.1.1:8080/test. I can't ping the URL, because it is not a DNS name.
So how do I monitor whether the application URL is working or some error has occurred? Please advise.
I'd do this with wget. You need to configure the number of attempts, and I'd recommend a low-ish timeout value too, whatever you consider an acceptable response time. wget returns 0 if it could download the page and non-zero otherwise.
#!/bin/sh
# Exit status 0 means wget downloaded the page; anything else counts as down
if wget "$1" -O /dev/null --tries=1 --quiet --timeout=5; then
    echo "Up"
else
    echo "Down"
fi
Example
check-site.sh "http://10.1.1.1:8080/test"
I have a Jenkins pipeline job which goes through all our Jenkins servers and checks the connectivity (it runs every few minutes).
The ksh file:
#!/bin/ksh
JENKINS_URL=$1
curl --connect-timeout 10 "$JENKINS_URL" >/dev/null
status=$?
if [ "$status" -eq 7 ]; then
    SUBJECT="Connection refused or cannot connect to URL $JENKINS_URL"
    echo "$SUBJECT" | /usr/sbin/sendmail -t XXXX@gmail.com
else
    echo "successfully connected to $JENKINS_URL"
fi
exit 0
I would like to add another piece of code that records every time the server was down (including the server name and a timestamp) in a file, and when the server is up again, sends an email notifying about it and records that in the file as well.
I don't want extra alerts: only one alert (to file and mail) when it goes down, and one when it comes back up. Any idea how to implement it?
A detailed answer was given by the unix.stackexchange community:
https://unix.stackexchange.com/questions/562594/how-to-set-and-record-alerts-for-jenkin-server-down-and-up
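If you only want one alert per transition, the usual trick is to keep the last known state in a file and alert only when the new state differs. A minimal sketch along those lines, using the same curl check as above (the state-file path, log path, and address are illustrative assumptions):

#!/bin/ksh
JENKINS_URL=$1
STATE_FILE=/tmp/jenkins_state.txt     # assumed location for the last known state
LOG_FILE=/var/log/jenkins_checks.log  # assumed location for the down/up record

# Determine the current state with the same curl check as above
if curl --connect-timeout 10 "$JENKINS_URL" >/dev/null 2>&1; then
    NEW=up
else
    NEW=down
fi
OLD=$(cat "$STATE_FILE" 2>/dev/null || echo up)

# Record and mail only on a state change (up->down or down->up)
if [ "$NEW" != "$OLD" ]; then
    echo "$(date '+%Y-%m-%d %H:%M:%S') $JENKINS_URL is $NEW" >> "$LOG_FILE"
    echo "Subject: $JENKINS_URL is $NEW" | /usr/sbin/sendmail XXXX@gmail.com
fi
echo "$NEW" > "$STATE_FILE"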
I have a script on a linux box in my environment to loop through a series of services that should always be running, and if one isn't running, send me an email.
Now, it seems to work fine except for two issues. I'd really appreciate some help and insight. Most of my background comes from Python and Powershell.
Whenever the script detects a service that's down, it exits instead of looping through the rest. It then appends the services it didn't check to the body of the email, despite my not specifying an email body in the mail command.
Every so often, it throws a false error on "hostservices", and I have no idea how to even go about figuring out why.
The script is a cron job running every 10 minutes. Full text of the script and list of services are below, as well as a screenshot of what happens when the script finds a service that's down.
Script
#!/bin/bash
while read services; do
    # Run command to get service status; store the result as a string
    service_status=$(service $services status)
    # Check if the service is NOT running
    if [[ "$service_status" != *"is running..." ]]; then
        mail -s "Service $services is down on [SERVER]" [EMAIL ADDRESS]
    elif [[ $service_status == *"is running..." ]]; then
        :
    else
        mail -s "ERROR IN SCRIPT, unable to get $services status on [SERVER]" [EMAIL ADDRESS]
    fi
done < /home/services.txt
services.txt
hostcontect
hostservices
ecs-ec
ecs-ep
imq
tomcat
httpd
Email Alert for Down Service
SUBJECT: "Service hostservices is down on [SERVER]"
BODY:
ecs-ec
ecs-ep
imq
tomcat
httpd
mail reads the body of the email from standard input. In your case services.txt is redirected to the loop's stdin, so mail consumes the remaining lines as the message body, and the read loop then stops early. Tell mail to read the body from elsewhere, e.g.
mail -s ... < /dev/null
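Applied to your loop, only the failing branch needs to change (a sketch; the rest of the script stays as-is):

if [[ "$service_status" != *"is running..." ]]; then
    # /dev/null on stdin stops mail from consuming the rest of services.txt
    mail -s "Service $services is down on [SERVER]" [EMAIL ADDRESS] < /dev/null
fi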
I need a small alarm (an HTTP request or anything) when the server goes down. I checked many applications like Nagios, ServerCheck and so on, but all of these monitor only remote servers. I have only two servers to monitor, and I don't want to maintain one more server just to monitor them. So if my server (10.172.65.124) goes down, can't it send an alarm itself? I am using RHEL 6 and CentOS 7. Any suggestions?
Here's a Python script that will serve the purpose. It uses sendmail to send the email, so it has to run on a Linux server with sendmail enabled. Change the url variable to point to the URL you are monitoring; as written, the script checks stackoverflow.
It uses urllib to check the status code it receives when trying to load your URL. If it gets a status other than 200 from the HTTP request, it assumes the site is down.
To monitor your server you should run the script on a server or desktop that is independent from your web host; otherwise a crash of that host takes the monitor down with it.
# time lets the script sleep between checks, urllib.request loads the site,
# and subprocess runs a process outside the script (in this case sendmail)
import time
import urllib.request
from email.mime.text import MIMEText
from subprocess import Popen, PIPE

# The URL being monitored.
url = "http://www.stackoverflow.com"

# The contents of the alert email.
msg = MIMEText(url + " is not responding. Please investigate.")
msg["From"] = "me@youremail.com"
msg["To"] = "me@youremail.com"
msg["Subject"] = url + " is not responding"

# Loop forever: load the URL, and if the status is not 200 (or the
# request fails outright), send the email defined above.
while True:
    try:
        status = urllib.request.urlopen(url, timeout=30).getcode()
    except Exception:
        # Connection refused, timeout, DNS failure, HTTP error, ...
        status = None
    if status != 200:
        # This is what sends the email. If you don't have sendmail, update this.
        p = Popen(["/usr/sbin/sendmail", "-t", "-oi"], stdin=PIPE)
        p.communicate(msg.as_string().encode())
    # The number of seconds the loop pauses before checking again.
    time.sleep(60)
I would recommend creating a simple script to ping the machine (the two servers can monitor each other), and if the ping times out, send an email.
Something like this:
#!/bin/bash
SERVERIP="IP ADDRESS"            # the server to watch
NOTIFYEMAIL=test@example.com

# Send 3 pings and discard the output; a non-zero exit means no reply
if ! ping -c 3 "$SERVERIP" > /dev/null 2>&1
then
    # Use your favorite mailer here:
    mailx -s "Server $SERVERIP is down" "$NOTIFYEMAIL" < /dev/null
fi
As in the script above, you can write a plain bash script that monitors an HTTP request (or any other service) instead, so that you get a mail if there is no reply; a sketch follows below.
There are also free hosted monitoring services (limited to a small number of sites per user), for example:
http://uptimerobot.com/
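A minimal sketch of that HTTP variant, assuming curl is available (the URL and address are placeholders; -f makes curl exit non-zero on HTTP errors such as 500):

#!/bin/bash
URL="http://10.172.65.124/"      # placeholder: the page to monitor
NOTIFYEMAIL=test@example.com

# -f: fail on HTTP errors; -s: silent; give up after 10 seconds
if ! curl -f -s --max-time 10 "$URL" > /dev/null
then
    mailx -s "No reply from $URL" "$NOTIFYEMAIL" < /dev/null
fi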
The script below checks the operational state of an interface; add more interfaces and alerting as you see fit.
#!/bin/bash
while true
do
    if [ "$(cat /sys/class/net/eth0/operstate)" != "up" ]; then
        : # send mail for logging
    fi
    # sleep on every pass so the loop doesn't spin when the link is up
    sleep 1
done
I have a small vps where I host a web app that I developed, and it's starting to receive a lot of visits.
I need to check, somehow, every X minutes whether the web app is up and running (status code 200) or down (e.g. code 500), and if it is down, run a script I made that restarts some services.
Any idea how to check that in Linux? curl, Lynx?
curl --head --max-time 10 -s -o /dev/null \
-w "%{http_code} %{time_total} %{url_effective}\n" \
http://localhost
Times out after 10 seconds, and reports the response code and total time.
curl will exit with an error code of 28 if the request times out (check $?).
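A minimal cron-able sketch built on that command (restart-services.sh stands in for your own hypothetical restart script):

#!/bin/bash
# Fetch only the status code; anything but 200 triggers the restart script
code=$(curl --head --max-time 10 -s -o /dev/null -w "%{http_code}" http://localhost)
if [ "$code" != "200" ]; then
    /path/to/restart-services.sh   # hypothetical restart script
fi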
Found this on a sister website (serverfault)
https://serverfault.com/questions/124952/testing-a-website-from-linux-command-line
wget -p http://site.com
This seems to do the trick
For questions like this, the man pages of the tools normally provide a pretty good list of all possible options. For curl you can also find it here.
The option you seem to be looking for is -w with the http_code variable.
EDIT:
Please see @Ken's answer for how to use -w.
OK, I created two scripts:
site-statush.sh http://yoursite.com => checks the site status and, if it is 200, does nothing; otherwise it invokes services-action.sh restart
services-action.sh restart => restarts all services listed in $services
Check it out at https://gist.github.com/2421072
I would like to write a script to check whether the application is up or not, using a Unix shell script.
From googling I found the command wget -O /dev/null -q http://mysite.com, but I'm not sure how it works. Can someone please explain? It would be helpful for me.
Run the wget command:
the -O option tells wget where to put the data that is retrieved
/dev/null is a special UNIX file that is always empty; in other words, the data is discarded
-q means quiet; normally wget prints lots of information about its download progress, so we turn that off
http://mysite.com is the URL of the exact web page that you want to retrieve
Many programmers create a special page for this purpose that is short and contains status data. In that case, do not discard it; save it by replacing -O /dev/null with a real file name, e.g. -O mysite.log (note that -O overwrites the file on each run).
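To append each check's result to a running log instead (a sketch; mysite.log is just an illustrative name), write the page to stdout and append it:

wget -O - -q http://mysite.com >> mysite.log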
Check whether you can connect to your web server:
Connect to the port where your web server listens.
If it connects properly, your web server is up; otherwise it is down.
You can check further (e.g. whether the index page is correct).
See this shell script (a raw port-check sketch follows it).
if wget -O /dev/null -q http://shiplu.mokadd.im; then
    echo "Site is up"
else
    echo "Site is down"
fi
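And for the raw port check described above, a sketch using bash's built-in /dev/tcp (host and port are placeholders):

#!/bin/bash
# Try to open a TCP connection to the web server's port within 5 seconds
if timeout 5 bash -c 'exec 3<>/dev/tcp/shiplu.mokadd.im/80' 2>/dev/null; then
    echo "Port is open: server is up"
else
    echo "Cannot connect: server is down"
fi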