I am currently using a Raspberry Pi as a ping server, and this is the script I am using to check for an OK response.
I'm not familiar with bash scripting, so this is a bit of a beginner question about the curl call: is there a way to increase the timeout? It keeps falsely reporting websites as down.
#!/bin/bash
SITESFILE=/sites.txt # list the sites you want to monitor in this file
EMAILS=" "           # list of email addresses to receive alerts (comma separated)

while read site; do
    if [ ! -z "${site}" ]; then
        CURL=$(curl -s --head $site)
        if echo $CURL | grep "200 OK" > /dev/null
        then
            echo "The HTTP server on ${site} is up!"
            sleep 2
        else
            MESSAGE="This is an alert that your site ${site} has failed to respond 200 OK."
            for EMAIL in $(echo $EMAILS | tr "," " "); do
                SUBJECT="$site (http) Failed"
                echo "$MESSAGE" | mail -s "$SUBJECT" $EMAIL
                echo $SUBJECT
                echo "Alert sent to $EMAIL"
            done
        fi
    fi
done < $SITESFILE
Yes, man curl:
--connect-timeout <seconds>
Maximum time in seconds that you allow the connection to the server to take.
This only limits the connection phase, once curl has connected this option is
of no more use. See also the -m, --max-time option.
You can also consider using ping to test the connection before calling curl. Something like ping -c2 will give you 2 pings to test the connection. Then just check the return from ping (i.e. [[ $? -eq 0 ]] means ping succeeded; then connect with curl).
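For example, a minimal sketch of that ping-then-curl idea (assuming the entries in sites.txt are bare hostnames that ping can resolve; strip any http:// prefix first if they are full URLs):

if ping -c 2 -q "$site" > /dev/null 2>&1; then
    # Host answers pings, so an HTTP check is worth attempting.
    CURL=$(curl -s --head --connect-timeout 10 "$site")
else
    echo "${site} did not answer ping; skipping the HTTP check."
fi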
Also, you can use [ -n "${site}" ] (site is set) instead of [ ! -z "${site}" ] (site is not unset). Additionally, you will generally want to use the [[ ]] test keyword instead of single [ ] for test constructs. For ultimate portability, just use test -n "${site}" (always double-quote when using test).
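For example, all of these report that the variable is set; the quoted test form is the most portable:

site="example.com"
[ -n "${site}" ] && echo "set (POSIX [ ])"
[[ -n ${site} ]] && echo "set (bash [[ ]], quoting optional here)"
test -n "${site}" && echo "set (test builtin)"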
I think you need the --max-time <seconds> option:
-m/--max-time <seconds>
Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours
due to slow networks or links going down.
--connect-timeout <seconds>
Maximum time in seconds that you allow the connection to the server to take. This only limits the connection phase, once curl has connected this option is of no more use.
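Applied to the curl call in the script above, that could look like this (the 10- and 30-second values are only illustrative, tune them to your sites):

# Allow 10 seconds to connect and 30 seconds for the whole HEAD request.
CURL=$(curl -s --head --connect-timeout 10 --max-time 30 "$site")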
checkServer(){
    response=$(curl --connect-timeout 10 --write-out %{http_code} --silent --output /dev/null localhost:8080/patuna/servicecheck)
    if [ "$response" = "200" ]; then
        echo "`date --rfc-3339=seconds` - Server is healthy, up and running"
        return 0
    else
        echo "`date --rfc-3339=seconds` - Server is not healthy (response code: $response), server is going to restart"
        startTomcat
    fi
}
Here I want to time out the curl command, but it does not work (CentOS 7 shell script). What I simply need to do is time out the curl command.
The error is: curl: option --connect-timeout=: is unknown
checkServer(){
    response=$(curl --max-time 20 --connect-timeout 0 --write-out %{http_code} --silent --output /dev/null localhost:8080/patuna/servicecheck)
    if [ "$response" = "200" ]; then
        echo "`date --rfc-3339=seconds` - Server is healthy, up and running"
        return 0
    else
        echo "`date --rfc-3339=seconds` - Server is not healthy (response code: $response), server is going to restart"
        startTomcat
    fi
}
You can try the --max-time option.
Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links
going down. Since 7.32.0, this option accepts decimal values, but the actual timeout will decrease in accuracy as the specified timeout increases in decimal precision.
If you just want to check the HTTP status code, you might want to check out the --head option.
I suggest using --silent together with --show-error in case you want to see the error message.
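Putting those options together, the health check from the question could be sketched roughly like this (the 20-second cap is just an example value):

# --max-time caps the whole request, --head fetches only the headers,
# and --show-error still prints errors to stderr despite --silent.
response=$(curl --max-time 20 --head --silent --show-error \
    --write-out '%{http_code}' --output /dev/null \
    localhost:8080/patuna/servicecheck)
echo "HTTP status: $response"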
I'm trying to create a bash script to download files en masse from a certain website.
Their download links are sequential - e.g. it's just id=1, id=2, id=3 all the way up to 660000. The only requirement is that you have to be logged in, which makes this a bit harder. Oh, and the login will randomly time out after a few hours so I have to log back in.
Here's my current script, which works well about 99% of the time.
#!/bin/sh
cd downloads

for i in `seq 1 660000`
do
    lastname=""
    echo "Downloading file $i"
    echo "Downloading file $i" >> _downloadinglog.txt

    response=$(curl --write-out %{http_code} -b _cookies.txt -c _cookies.txt --silent --output /dev/null "[sample URL to make sure cookie is still logged in]")
    if ! [ $response -eq 200 ]
    then
        echo "Cookie didn't work, trying to re-log in..."
        curl -d "userid=[USERNAME]" -d "pwd=[PASSWORD]" -b _cookies.txt -c _cookies.txt --silent --output /dev/null "[login URL]"
        response=$(curl --write-out %{http_code} -b _cookies.txt -c _cookies.txt --silent --output /dev/null "[sample URL again]")
        if ! [ $response -eq 200 ]
        then
            echo "Something weird happened?? Response code $response. Logging in didn't fix issue, fix then resume from $(($i - 1))"
            echo "Something weird happened?? Response code $response. Logging in didn't fix issue, fix then resume from $(($i - 1))" >> _downloadinglog.txt
            exit 0
        fi
        echo "Downloading file $(($i - 1)) again in case cookie expiring caused it to fail"
        echo "Downloading file $(($i - 1)) again in case cookie expiring caused it to fail" >> _downloadinglog.txt
        lastname=$(curl --write-out %{filename_effective} -O -J -b _cookies.txt -c _cookies.txt "[URL to download files]?id=$(($i - 1))")
        echo "id $(($i - 1)) = $lastname" >> _downloadinglog.txt
        lastname=""
        echo "Downloading file $i"
        echo "Downloading file $i" >> _downloadinglog.txt
    fi
    lastname=$(curl --write-out %{filename_effective} -O -J -b _cookies.txt -c _cookies.txt "[URL to download files]?id=$i")
    echo "id $i = $lastname" >> _downloadinglog.txt
done
So basically what I have it doing is attempting to download a random file before moving to the next file in the set. If the download fails, we assume the login cookie is no longer valid and tell curl to log me back in.
This works great, and I was able to get several thousand files this way. But what would happen is: either my router goes down for a second or two, or THEIR site goes down for a minute or two, and curl will just sit there thinking it's downloading for hours. I once came back to find it had spent literally 24 hours on the same file. It doesn't seem to have the ability to know if the transfer timed out in the middle - only if it can't START the transfer.
I know there are ways to terminate execution of a command if you combine it with "sleep", but since this has to be "smart" and restart from where it left off, I can't just kill the whole script.
Any suggestions? I'm open to using something other than curl if I can use it to login via a terminal command.
You can try using the curl options --connect-timeout or --max-time.
--max-time should be your pick.
From the manual:
--max-time
Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links going down. Since 7.32.0, this option accepts decimal values, but the actual timeout will decrease in accuracy as the specified timeout increases in decimal precision. See also the --connect-timeout option.
If this option is used several times, the last one will be used.
Then capture the result of the command in a var and process further based on the result.
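For instance, the download step could be capped and checked roughly like this (the 600-second limit and the exit-code handling are assumptions, adjust them to the file sizes you expect):

# Cap each download at 10 minutes so a stalled transfer cannot hang for hours.
lastname=$(curl --max-time 600 --write-out %{filename_effective} -O -J \
    -b _cookies.txt -c _cookies.txt "[URL to download files]?id=$i")
rc=$?
if [ $rc -eq 28 ]; then   # curl exit code 28 = operation timed out
    echo "Download of id $i timed out" >> _downloadinglog.txt
fi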
I have a requirement where I need to test whether an ssh tunnel is alive or not from a different server.
This is what the code looks like to check if the connection is alive; if it is, it sends an email.
#!/bin/bash
SERVERIP=192.xxx.xxx.xxx
NOTIFYEMAIL=xyz#gmail.com
SENDEREMAIL=alert#localhost
SERVER=http://127.0.0.1/
ping -c 3 $SERVERIP > /dev/null 2>&1
if [ $? -eq 0 ]
then
    # Use your favorite mailer here:
    mailx -s "Server $SERVERIP is down" -r "$SENDEREMAIL" -t "$NOTIFYEMAIL" </dev/null
fi
However, on running this shell script, the error below is generated. Can someone help me out?
No recipients specified
"/home/user name/dead.letter" 10/303
The -t switch forces you to use a specific header format. Remove it and it will work better.
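In other words, pass the recipient as a plain argument, something like this (hedged: mailx flags vary between implementations, so check man mailx on your system):

# Recipient goes at the end as a normal argument; no -t needed.
mailx -s "Server $SERVERIP is down" -r "$SENDEREMAIL" "$NOTIFYEMAIL" < /dev/null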
I am a beginner Linux user, and also quite a newbie with ssh and tunnels.
Anyway, my goal is to keep an ssh tunnel open in the background.
In order to do that, I wrote the following batch, which I then added to crontab (the batch runs automatically every 5 minutes on workdays, from 8am to 9pm).
I read in another Stack Overflow thread that one should use autossh, which keeps the ssh connection healthy through a recurring check. So I did...
#!/bin/bash
LOGFILE="/root/Tunnel/logBatchRestart.log"
NOW="$(date +%d/%m/%Y' - '%H:%M)" # date & time of log

if ! ps ax | grep ssh | grep tunnelToto &> /dev/null
then
    echo "[$NOW] ssh tunnel not running : restarting it" >> $LOGFILE
    autossh -f -N -L pppp:tunnelToto:nnnnn nom-prenom#193.xxx.yyy.zzz -p qqqq
    if ! ps ax | grep ssh | grep toto &> /dev/null
    then
        echo "[$NOW] failed starting tunnel" >> $LOGFILE
    else
        echo "[$NOW] restart successful" >> $LOGFILE
    fi
fi
My problem is that sometimes the tunnel stops working, even though everything looks OK (ps ax | grep ssh shows the two expected processes: the autossh main task and the ssh tunnel itself). I actually know about the problem because the tunnel is used by third-party software that raises an error as soon as the tunnel stops responding.
So I am wondering how I should improve my batch so that it can check the tunnel and restart it if it happens to be dead. I saw some ideas around here, but they concluded with the "autossh" hint... which I already use. Thus, I am out of ideas... If any of you have some, I'd gladly have a look at them!
Thanks for taking an interest in my question, and for your suggestions!
Instead of checking the ssh process with ps, you can do the following trick.
Create a script that does the following and add it to your crontab via crontab -e (a sample crontab entry is sketched after the explanation below):
#!/bin/sh
REMOTEUSER=username
REMOTEHOST=remotehost
SSH_REMOTEPORT=22
SSH_LOCALPORT=10022
TUNNEL_REMOTEPORT=8080
TUNNEL_LOCALPORT=8080
createTunnel() {
    /usr/bin/ssh -f -N -L$SSH_LOCALPORT:$REMOTEHOST:$SSH_REMOTEPORT -L$TUNNEL_LOCALPORT:$REMOTEHOST:$TUNNEL_REMOTEPORT $REMOTEUSER#$REMOTEHOST
    if [[ $? -eq 0 ]]; then
        echo Tunnel to $REMOTEHOST created successfully
    else
        echo An error occurred creating a tunnel to $REMOTEHOST RC was $?
    fi
}

## Run the 'ls' command remotely. If it returns non-zero, then create a new connection
/usr/bin/ssh -p $SSH_LOCALPORT $REMOTEUSER#localhost ls >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo Creating new tunnel connection
    createTunnel
fi
In fact, this script forwards two local ports:
port 10022, forwarded to the remote SSH port ($SSH_REMOTEPORT), which is used to check whether the tunnel is still alive
port 8080, forwarded to remote port 8080 ($TUNNEL_REMOTEPORT), which is the port you actually want to use
Please check and send me further questions via comments.
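For reference, the crontab entry could look something like this (the script path and the 5-minute interval are just assumptions):

# m h dom mon dow  command
*/5 * * * * /root/Tunnel/check_tunnel.sh >> /root/Tunnel/check_tunnel.log 2>&1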
(I'm adding this as an answer since there is not enough room for it in a comment.)
OK, I managed to get the batch to launch the ssh tunnel (I had to specify my hostname instead of localhost so that it could be triggered):
#!/bin/bash
LOGFILE="/root/Tunnel/logBatchRedemarrage.log"
NOW="$(date +%d/%m/%Y' - '%H:%M)" # date et heure du log
REMOTEUSER=username
REMOTEHOST=remoteHost
SSH_REMOTEPORT=22
SSH_LOCALPORT=10022
TUNNEL_REMOTEPORT=12081
TUNNEL_SPECIFIC_REMOTE_PORT=22223
TUNNEL_LOCALPORT=8082
createTunnel() {
    /usr/bin/ssh -f -N -L$SSH_LOCALPORT:$REMOTEHOST:$SSH_REMOTEPORT -L$TUNNEL_LOCALPORT:$REMOTEHOST:$TUNNEL_REMOTEPORT $REMOTEUSER#193.abc.def.ghi -p $TUNNEL_SPECIFIC_REMOTE_PORT
    if [[ $? -eq 0 ]]; then
        echo [$NOW] Tunnel to $REMOTEHOST created successfully >> $LOGFILE
    else
        echo [$NOW] An error occurred creating a tunnel to $REMOTEHOST RC was $? >> $LOGFILE
    fi
}

## Run the 'ls' command remotely. If it returns non-zero, then create a new connection
/usr/bin/ssh -p $SSH_LOCALPORT $REMOTEUSER#193.abc.def.ghi ls >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo [$NOW] Creating new tunnel connection >> $LOGFILE
    createTunnel
fi
However, I get the message below immediately when the tunnel is already running and cron tries to launch the batch again... it sounds like it cannot listen on the ports. Also, since I need some time to get proof, I can't say yet whether it will successfully restart if the tunnel goes down.
Here's the response to the second start of the batch.
bind: Address already in use
channel_setup_fwd_listener: cannot listen to port: 10022
bind: Address already in use
channel_setup_fwd_listener: cannot listen to port: 8082
Could not request local forwarding.
Every night I go through the same process of checking failover systems for our T1's. I essentially go through the following process:
Start the failover process.
traceroute $server;
Once I see it's failed over, I verify that connections work by SSHing into a server.
ssh $server;
Then once I see it works, I take it off of failover.
So what I want to do is to continually run a traceroute until I get a certain result, then run an SSH command.
Put your list of successful hop lines in a file (omitting the variable parts of each line, and using a ^ to anchor the start of the line), like so:
patterns.list:
^ 7 4.68.63.165
^ 8 4.68.17.133
^ 9 4.79.168.210
^10 216.239.48.108
^11 66.249.94.46
^12 72.14.204.99
Then a simple while loop:
while ! traceroute -n ${TARGET} | grep -f patterns.list
do
    sleep 5 # 5 second delay between traceroutes, for niceness.
done
ssh ${DESTINATION}
Use traceroute -n to generate the output so you don't get an IP address one time and a resolved name the next, resulting in a false positive.
I think you would be better off using the ping command than traceroute to verify the server's accessibility.
It is easy to check the return status of the ping command without using any grep at all:
if ping -c 4 -n -q 10.10.10.10 >/dev/null 2>&1; then
    echo "Server is ok"
else
    echo "Server is down"
fi
If you want to do it continually in a loop, try this:
function check_ssh {
    # do your ssh stuff here
    echo "performing ssh test"
}

while : ; do
    if ping -c 4 -n -q 10.10.10.10 >/dev/null 2>&1; then
        echo "Server is ok"
        check_ssh
    else
        echo "Server is down"
    fi
    sleep 60
done