Check Whether a Web Application is Up or Not - Linux

I would like to write a script to check whether an application is up or not, using a Unix shell script.
From googling I found the command wget -O /dev/null -q http://mysite.com, but I am not sure how it works. Can someone please explain?

Run the wget command.
The -O option tells wget where to put the data it retrieves.
/dev/null is a special UNIX file that discards everything written to it. In other words, the data is thrown away.
-q means quiet. Normally wget prints lots of information about its download progress, so we turn that off.
http://mysite.com is the URL of the exact web page that you want to retrieve.
Many programmers create a special page for this purpose that is short and contains status data. In that case, do not discard it: save the page by giving -O a real filename instead of /dev/null, and/or append wget's own log messages to a file with -a mysite.log.
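For example (a sketch; status.html, mysite.log and the /status URL are illustrative names, and -q is dropped so the log actually records something):
wget -O status.html -a mysite.log http://mysite.com/status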

Check whether you can connect to your web server:
Connect to the port where your web server is listening.
If the connection succeeds, your web server is up; otherwise it is down.
You can check further (e.g. whether the index page is correct).
See this shell script:
if wget -O /dev/null -q http://shiplu.mokadd.im
then
    echo "Site is up"
else
    echo "Site is down"
fi
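The if tests wget's exit status: wget exits with 0 when the download succeeds and non-zero otherwise, so the same check can be written as a one-liner, e.g. for a cron entry:
wget -q -O /dev/null http://shiplu.mokadd.im && echo "Site is up" || echo "Site is down"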

Related

FTP status check using a variable - Linux

I am doing an FTP transfer and I want to check its status. I don't want to use $?, as it mostly returns 0 (success) for ftp even when the transfer didn't actually go through.
I know I can check the log file and grep it for "Transfer complete" (the 226 status). That works fine, but I don't want to do it that way: I have many different reports doing FTP, and creating a separate log file for each of them is what I want to avoid.
Can I get the logged information in a local script variable and process it inside the script itself?
Something similar to these (I've tried both but neither worked):
Grab FTP output in BASH SCRIPT
FTP status check whether successful or not
Below is something similar to what I am trying to do:
ftp -inv ${HOST} > log_file.log <<!
user ${USER} ${PASS}
bin
cd "${TARGET}"
put ${FEEDFILE}
bye
!
Any suggestions on how can I get the entire ftp output in a script variable and then check it within the script?
To capture stdout to a variable you can use bash's command substitution, so either OUTPUT=`cmd` or OUTPUT=$(cmd).
Here's an example of how to capture the output from ftp in your case:
CMDS="user ${USER} ${PASS}
bin
cd \"${TARGET}\"
put \"${FEEDFILE}\"
bye"
OUTPUT=$(echo "${CMDS}" | ftp -inv ${HOST})
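You can then inspect the variable inside the script. A minimal sketch, assuming the server signals success with a 226 "Transfer complete" reply:
# Look for the FTP success reply code in the captured output.
if echo "${OUTPUT}" | grep -q "226"; then
    echo "Transfer completed successfully"
else
    echo "Transfer failed; ftp said:" >&2
    echo "${OUTPUT}" >&2
    exit 1
fi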

Adding new user to multiple unix servers using terminal

I am working at a company and need my account added on a number of branch servers. The current way of doing this is:
sudo /usr/local/bin/sd-adduser test "Test User"
This has to be done by logging into each server individually, and there are about 20 servers. I vaguely know of expect, which might allow adding a user to multiple servers. Could anyone point me in the right direction, or provide a script to do this?
Any help is appreciated.
It sounds like multi-ssh, pssh or pdsh could help you.
In the long run you probably want central user management such as LDAP.
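For example, with pssh (a sketch; it assumes mylist contains one hostname per line and that sudo on the remote hosts requires neither a tty nor a password):
pssh -h mylist -i 'sudo /usr/local/bin/sd-adduser test "Test User"'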
Routine administration tasks such as this can be done using a script that reads a list of server names and runs a command on each. Something like this "each-host" script:
#!/bin/sh
# Run the command given on our command line ("$@") on every listed server.
for server in $(cat mylist)
do
    ssh -t $server "$@"
done
where mylist is a file containing the list of servers.
Thus
each-host sudo /usr/local/bin/sd-adduser test "Test User"
would run the OP's command on each host. Once you get that working, you could tidy it up a little, making it less verbose (e.g. not printing /etc/motd):
#!/bin/sh
for server in $(cat mylist)
do
    echo "** $server"
    ssh -q -t $server "$@"
done

Using wget to run a PHP cron job: disable the notification email

I'm using GoDaddy as a web host and I'd like to disable the email notification that is sent after a cron job runs. Lucky for me, they have been no help, but the cron job area says:
You can have cron send an email every time it runs a command. If you do not want an email to be sent for an individual cron job you can redirect the command's output to /dev/null like this: mycommand >/dev/null 2>&1
I've tried several variations of this and nothing seems to fix it.
My command:
wget http://example.com/wp-admin/tools.php?page=post-by-email&tab=log&check_mail=1
Any advice is greatly appreciated.
As the cron job area says, you need to redirect the command's output to /dev/null.
Your command should look like this:
wget -O /dev/null -o /dev/null "http://example.com/wp-admin/wp-mail.php" &> /dev/null
Note that the URL is quoted. That matters here: the unquoted & characters in your original command's query string are interpreted by the shell, which splits the command and runs it in the background.
The -O option makes sure that the fetched content is sent to /dev/null.
If you want the fetched content saved to the server's filesystem instead, use this option to specify the path of the desired file.
The -o option sends wget's log output to /dev/null instead of stderr.
&> /dev/null redirects both stdout and stderr to /dev/null.
NOTES
For more information on wget, check the man pages: you can type man wget on the console, or use the online man pages: http://man.he.net/?topic=wget&section=all
With both -O and -o pointing to /dev/null, the output redirection ( &> ... ) should not be needed.
If you don't need to download the content, and only need the server to process the request, you can simply use the --spider option.
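Applied to the URL from the question (again, the quotes are essential because of the & characters):
wget -q --spider "http://example.com/wp-admin/tools.php?page=post-by-email&tab=log&check_mail=1" >/dev/null 2>&1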

Cron / wget jobs intermittently not running - not getting into access log

I've a number of accounts running cron-started php jobs hourly.
The generic structure of the command is this:
wget -q -O - http://some.site.com/cron.php
Now, this used to be running just fine.
Lately, though, on a number of accounts it has started playing up - but only on this one server. Once or twice a day the php file is not run.
The access log is missing the relevant entry.
While the cron log shows that the job was run.
We've added -o /tmp/logfile to the command to log what happens, but it shows nothing.
I'm at a loss, really. I'm looking for ideas about what could be wrong, or how to sidestep this issue, as it has started taking up way too much of my time.
Has anyone seen anything remotely like this?
Thanks in advance!
Try this command:
wget -d -a /tmp/logfile -O - http://some.site.com/cron.php
With -q you turn off wget's output. With -d you turn on debug output (maybe -v for verbose output is already enough). With -a you append logging messages to /tmp/logfile instead of always creating a new file.
You can also use curl:
curl http://some.site.com/cron.php
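If you go the curl route, a variant that stays quiet on success but records any errors (the log path is illustrative):
curl -sS -o /dev/null http://some.site.com/cron.php 2>> /tmp/curl-errors.log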

Get a website's status code in Linux

I have a small VPS where I host a web app that I developed, and it's starting to receive a lot of visits.
I need to check, every X minutes, whether the site is up and running (status code 200) or down (e.g. code 500), and if it is down, run a script that I made to restart some services.
Any idea how to check that on Linux? curl, Lynx?
curl --head --max-time 10 -s -o /dev/null \
-w "%{http_code} %{time_total} %{url_effective}\n" \
http://localhost
This times out after 10 seconds and reports the response code, the total time, and the effective URL.
curl will exit with code 28 if the request times out (check $?).
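Putting it together, a minimal sketch of a check that cron could run every X minutes (restart-services.sh stands in for your own restart script):
#!/bin/sh
# Ask curl for just the HTTP status code; the response body is discarded.
STATUS=$(curl -s -o /dev/null --max-time 10 -w "%{http_code}" http://localhost)
# A timeout yields 000, which also (correctly) fails the 200 test.
if [ "$STATUS" != "200" ]; then
    /path/to/restart-services.sh   # hypothetical: your restart script
fi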
Found this on a sister website (serverfault)
https://serverfault.com/questions/124952/testing-a-website-from-linux-command-line
wget -p http://site.com
This seems to do the trick
For questions like this, the man pages of the tools normally provide a pretty good list of all possible options.
For curl you can also find them here.
The option you seem to be looking for is -w with the http_code variable.
EDIT:
Please see @Ken's answer for how to use -w.
OK, I created two scripts:
site-statush.sh http://yoursite.com => checks the site status; if it is 200, does nothing, else invokes services-action.sh restart
services-action.sh restart => restarts all services listed in $services
Check them out at https://gist.github.com/2421072
