I've written the bash script below to check proxy servers listed in a text file.
#!/bin/bash
# CURL BASED
# REQUIRES FILE NAME AS ARGUMENT
# We are trying to reach this URL via each HTTP proxy server
url="http://www.google.com"
# Timeout (in seconds)
timeout=1
while read -r proxy_server
do
    # Fetch the first header line and keep everything before the first "/"
    result=$(curl -I -s -x "$proxy_server" --connect-timeout "$timeout" "$url" | head -n 1 | cut -d/ -f1)
    # If we got any HTTP response line back, we've reached $url successfully
    if [ "$result" = "HTTP" ]; then
        echo "$proxy_server :: OK (proxy works)"
    # Otherwise, we've got a problem (either the HTTP proxy server does not work
    # or the request timed out)
    else
        echo "$proxy_server :: ERR (proxy does not work or request timed out)"
    fi
done < "$1"
A single thread works flawlessly, but how do I make it multi-threaded? I've tried GNU parallel's sem, but it seems complicated and beyond my knowledge.
I wish someone could give me a hand.
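One relatively simple route, sketched below: wrap the per-proxy check in a function and fan it out with xargs -P. This is only a sketch; the check_proxy name and the choice of 10 parallel jobs are illustrative assumptions, and GNU xargs is assumed.
#!/bin/bash
url="http://www.google.com"
timeout=1
# Check a single proxy; invoked once per input line by xargs below.
check_proxy() {
    result=$(curl -I -s -x "$1" --connect-timeout "$timeout" "$url" | head -n 1 | cut -d/ -f1)
    if [ "$result" = "HTTP" ]; then
        echo "$1 :: OK (proxy works)"
    else
        echo "$1 :: ERR (proxy does not work or request timed out)"
    fi
}
export url timeout
export -f check_proxy
# Run up to 10 checks concurrently, one proxy per job.
xargs -P 10 -I{} bash -c 'check_proxy "$@"' _ {} < "$1"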
Related
I have to check the status of 200 http URLs and find out which of these are broken links. The links are present in a simple text file (say URL.txt present in my ~ folder). I am using Ubuntu 14.04 and I am a Linux newbie. But I understand the bash shell is very powerful and could help me achieve what I want.
My exact requirement is to read the text file with the list of URLs, automatically check whether each link is working, and write the responses to a new file with the URLs and their corresponding status (working/broken).
I created a file "checkurls.sh" and placed it in my home directory where the urls.txt file is also located. I gave execute privileges to the file using
$ chmod +x checkurls.sh
The contents of checkurls.sh is given below:
#!/bin/bash
while read -r url
do
    urlstatus=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "$url")
    echo "$url $urlstatus" >> urlstatus.txt
done < "$1"
Finally, I executed it from command line using the following -
$ ./checkurls.sh urls.txt
Voila! It works.
#!/bin/bash
# Read the URL list on file descriptor 4 so stdin stays free for curl.
while read -ru 4 LINE; do
    # Capture the first line of curl's output (status line or error message).
    read -r REP < <(exec curl -IsS "$LINE" 2>&1)
    echo "$LINE: $REP"
done 4< "$1"
Usage:
bash script.sh urls-list.txt
Sample:
http://not-exist.com/abc.html
https://kernel.org/nothing.html
http://kernel.org/index.html
https://kernel.org/index.html
Output:
http://not-exist.com/abc.html: curl: (6) Couldn't resolve host 'not-exist.com'
https://kernel.org/nothing.html: HTTP/1.1 404 Not Found
http://kernel.org/index.html: HTTP/1.1 301 Moved Permanently
https://kernel.org/index.html: HTTP/1.1 200 OK
For everything else, read the Bash Manual. See man curl, help, and man bash as well.
What about adding some parallelism to the accepted solution? Let's modify the script chkurl.sh to be a little easier to read and to handle just one request at a time:
#!/bin/bash
URL=${1?Pass URL as parameter!}
curl -o /dev/null --silent --head --write-out "$URL %{http_code} %{redirect_url}\n" "$URL"
And now you check your list using:
cat URL.txt | xargs -P 4 -L1 ./chkurl.sh
This could finish the job up to 4 times faster.
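If you'd rather not keep a helper script around, the same idea can be inlined into a single command; this is a sketch assuming GNU xargs and the same URL.txt layout:
xargs -P 4 -I{} curl -o /dev/null --silent --head --write-out '{} %{http_code}\n' {} < URL.txt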
Herewith my full script that checks URLs listed in a file passed as an argument, e.g. './checkurls.sh listofurls.txt'.
What it does:
check url using curl and return HTTP status code
send email notifications when url returns a code other than 200
create a temporary lock file for failed urls (file naming could be improved)
send email notification when url becomes available again
remove lock file once url becomes available, to avoid further notifications
log events to a file and handle increasing log file size (AKA log rotation; uncomment the echo if code-200 logging is required)
Code:
#!/bin/bash
EMAIL="your@email.com"
DATENOW=$(date +%Y%m%d-%H%M%S)
LOG_FILE="checkurls.log"
c=0
while read -r url
do
    ((c++))
    LOCK_FILE="checkurls$c.lock"
    urlstatus=$(/usr/bin/curl -H 'Cache-Control: no-cache' -o /dev/null --silent --head --write-out '%{http_code}' "$url")
    if [ "$urlstatus" = "200" ]
    then
        #echo "$DATENOW OK $urlstatus connection->$url" >> "$LOG_FILE"
        [ -e "$LOCK_FILE" ] && /bin/rm -f -- "$LOCK_FILE" > /dev/null && /bin/mail -s "NOTIFICATION URL OK: $url" "$EMAIL" <<< 'The URL is back online'
    else
        echo "$DATENOW FAIL $urlstatus connection->$url" >> "$LOG_FILE"
        if [ -e "$LOCK_FILE" ]
        then
            #no action - awaiting URL to be fixed
            :
        else
            /bin/mail -s "NOTIFICATION URL DOWN: $url" "$EMAIL" <<< 'Failed to reach or URL problem'
            /bin/touch "$LOCK_FILE"
        fi
    fi
done < "$1"
# REMOVE LOG FILE IF LARGER THAN ~120MB (maxsize is in KB, as reported by du -k)
# allow up to 2000 lines average
maxsize=120000
size=$(/usr/bin/du -k "$LOG_FILE" | /bin/cut -f 1)
if [ "$size" -ge "$maxsize" ]; then
    /bin/rm -f -- "$LOG_FILE" > /dev/null
    echo "$DATENOW LOG file [$LOG_FILE] has been recreated" > "$LOG_FILE"
else
    #do nothing
    :
fi
Please note that changing the order of the listed urls in the text file will affect any existing lock files (remove all .lock files to avoid confusion). This could be improved by using the url as the file name, but certain characters such as : # / ? & would have to be handled for the operating system, as sketched below.
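For example, a minimal sketch of that improvement (the exact character set to replace is my choice, not from the original script):
# Flatten the URL into a filesystem-safe lock-file name, so each URL
# keeps the same lock file regardless of its position in the list.
LOCK_FILE="checkurls_$(printf '%s' "$url" | tr ':#/?&' '_____').lock"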
I recently released deadlink, a command-line tool for finding broken links in files. Install with
pip install deadlink
and use as
deadlink check /path/to/file/or/directory
or
deadlink replace-redirects /path/to/file/or/directory
The latter will replace permanent redirects (301) in the specified files.
If your input file contains one URL per line, you can use a script that reads each line and then pings each one; if the ping succeeds, the host is reachable (note that ping tests the host, not the HTTP URL itself).
#!/bin/bash
INPUT="Urls.txt"
OUTPUT="result.txt"
while read -r line
do
    if ping -c 1 "$line" &> /dev/null
    then
        echo "$line valid" >> "$OUTPUT"
    else
        echo "$line not valid" >> "$OUTPUT"
    fi
done < "$INPUT"
ping options:
-c count
Stop after sending count ECHO_REQUEST packets. With the deadline option, ping waits for count ECHO_REPLY packets, until the timeout expires.
You can use this option as well to limit the waiting time:
-W timeout
Time to wait for a response, in seconds. The option affects only the timeout in the absence of any responses; otherwise ping waits for two RTTs.
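Combined, a bounded per-host check could look like this (one packet, a two-second wait; the numbers are just an example):
# Succeeds quickly if the host answers; gives up after ~2 seconds if not.
ping -c 1 -W 2 "$line" > /dev/null 2>&1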
curl -s -I --http2 "http://$1" | tee -a fullscan_curl.txt | grep HTTP >> fullscan_httpstatus.txt
It works for me: the full response headers are appended to fullscan_curl.txt and just the HTTP status lines to fullscan_httpstatus.txt.
I am currently using a Raspberry Pi as a ping server, and this is the script I am using to check for an OK response.
I'm not familiar with bash scripting, so this is a bit of a beginner question about the curl call: is there a way to increase the timeout? It keeps falsely reporting websites as down.
#!/bin/bash
SITESFILE=/sites.txt #list the sites you want to monitor in this file
EMAILS=" " #list of email addresses to receive alerts (comma separated)
while read -r site; do
    if [ ! -z "${site}" ]; then
        CURL=$(curl -s --head "$site")
        if echo "$CURL" | grep -q "200 OK"
        then
            echo "The HTTP server on ${site} is up!"
            sleep 2
        else
            MESSAGE="This is an alert that your site ${site} has failed to respond 200 OK."
            for EMAIL in $(echo "$EMAILS" | tr "," " "); do
                SUBJECT="$site (http) Failed"
                echo "$MESSAGE" | mail -s "$SUBJECT" "$EMAIL"
                echo "$SUBJECT"
                echo "Alert sent to $EMAIL"
            done
        fi
    fi
done < "$SITESFILE"
Yes, man curl:
--connect-timeout <seconds>
Maximum time in seconds that you allow the connection to the server to take.
This only limits the connection phase, once curl has connected this option is
of no more use. See also the -m, --max-time option.
You can also consider using ping to test the connection before calling curl. Something like ping -c2 will give you 2 pings to test the connection. Then just check the return from ping ([[ $? -eq 0 ]] means ping succeeded) and connect with curl only on success.
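As a rough sketch of that idea (assuming ${site} holds a bare hostname rather than a full URL, since ping does not understand URLs):
# Only call curl if the host answers pings.
if ping -c 2 -n -q "${site}" > /dev/null 2>&1; then
    CURL=$(curl -s --head "$site")
fi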
Also, you can use [ -n "${site}" ] (site is set) instead of [ ! -z "${site}" ] (site is not unset). Additionally, you will generally want to use the [[ ]] test keyword instead of single [ ] for test constructs. For ultimate portability, just use test -n "${site}" (always double-quote when using test).
I think you need this option --max-time <seconds>
-m/--max-time <seconds>
Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links going down.
--connect-timeout <seconds>
Maximum time in seconds that you allow the connection to the server to take. This only limits the connection phase, once curl has connected this option is of no more use.
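Applied to the monitoring script above, the curl call might become something like this (the 10 and 30 second values are arbitrary examples, not from the man page):
# Give up on connecting after 10s and cap the whole request at 30s.
CURL=$(curl -s --head --connect-timeout 10 --max-time 30 "$site")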
I want a script that will restart the defined services on their respective servers. I want to pass parameters to the script as below:
eg:
sh execute.sh
Enter your server and service list: [server1:nginx,mysql],[server2:mysql,apache],[server3:mongodb,apache]
With this input the script should verify and start the services on the respective servers. I am able to do this on a single server by declaring variables.
#!/bin/bash
Instance_Name=server1
Service_Name=(nginx php-fpm mysql)
SSH_USER=admin
SSH_IDENT_FILE=~/credentials/user.pem
len=${#Service_Name[*]}
i=0
while [ "$i" -lt "$len" ]; do
    service=${Service_Name[$i]}
    # Double quotes so $service expands before being sent to the remote shell
    ssh -i "$SSH_IDENT_FILE" -o StrictHostKeyChecking=no "$SSH_USER@$Instance_Name" "service $service restart"
    ((i++))
done
Now I don't have an idea to move forward. Please let me know if my question is unclear. Thanks in advance.
Parse your input parameters
# create an array of server:services
a=($(echo "$1" | gawk 'BEGIN { FS="[]],[[]" } ; { print $1, $2, $3 }' | tr -d '[]'))
# iterate over the values in the array with the code below
for var in "${a[@]}" ; do
    # get the server name
    server1=$(echo "$var" | cut -d ':' -f1)
    # get the services as space separated
    servs1="$(echo "$var" | cut -d ':' -f2 | tr ',' ' ')"
    # loop over the services
    for s in $servs1; do
        ssh "$server1" "service $s restart"
    done
done
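Usage would then look something like this (assuming the snippet above is saved as restart-services.sh; the file name is mine, not from the answer):
./restart-services.sh "[server1:nginx,mysql],[server2:mysql,apache],[server3:mongodb,apache]"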
If you like bash programming, or have to learn it, this is the 'bible' to me:
Advanced Bash-Scripting Guide
An in-depth exploration of the art of shell scripting
Mendel Cooper
http://www.tldp.org/LDP/abs/html/
Try this?
#!/bin/bash
SSH_IDENT_FILE=~/credentials/user.pem
SSH_USER=admin
# tells 'for' to read "per line"
IFS='
'
for line in $(
# read input from 1st command-line arg (not stdin)
echo "$1"|\
# remove all services except (nginx php-fpm mysql)
sed 's/\([:,]\)\(\(nginx\|php-fpm\|mysql\)\|[^]:,[]\+\)/\1\3/g'|\
# make it multi-line:
# server1 svc1 svc2
# server2 svc1
sed 's/\[[^]:]*:,*\]//g;s/\],*$//;s/\],*/\n/g;s/[:,]\+/ /g;s/\[//g'|\
# make it single-param:
# server1 svc1
# server1 svc2
# server2 svc1
awk '{for(i=2;i<=NF;i++)print $1" "$i}'
);do
IFS=' ' read -r Instance_Name service <<< "$line"
echo ssh -i "$SSH_IDENT_FILE" -o StrictHostKeyChecking=no "$SSH_USER@$Instance_Name" "service $service restart"
done
I am a beginner Linux user, and also quite new to ssh and tunnels.
Anyway, my goal is to maintain an ssh tunnel open in the background.
In order to do that, I wrote the following script, which I then added to crontab (the script is automatically run every 5 minutes during workdays, from 8am to 9pm).
I read in some other thread on Stack Overflow that one should use autossh, which ensures the ssh connection stays up through a recurrent check. So did I....
#!/bin/bash
LOGFILE="/root/Tunnel/logBatchRestart.log"
NOW="$(date +%d/%m/%Y' - '%H:%M)" # date & time of log
# Bracket the first letter so grep does not match its own process entry
if ! ps ax | grep "[t]unnelToto" &> /dev/null
then
    echo "[$NOW] ssh tunnel not running : restarting it" >> "$LOGFILE"
    autossh -f -N -L pppp:tunnelToto:nnnnn nom-prenom@193.xxx.yyy.zzz -p qqqq
    if ! ps ax | grep "[t]unnelToto" &> /dev/null
    then
        echo "[$NOW] failed starting tunnel" >> "$LOGFILE"
    else
        echo "[$NOW] restart successful" >> "$LOGFILE"
    fi
fi
My problem is that sometimes the tunnel stops working, although everything looks OK (ps ax | grep ssh shows the two expected tasks: the autossh main task and the ssh tunnel itself). I actually know about the problem because the tunnel is used by a third-party software that triggers an error as soon as the tunnel stops responding.
So I am wondering how I should improve my script so that it can check the tunnel and restart it if it happens to be dead. I saw some ideas elsewhere, but they concluded with the "autossh" hint... which I already use. Thus, I am out of ideas; if any of you have one, I'd gladly have a look!
Thanks for taking interest in my question, and for your suggestions!
Instead of checking the ssh process with ps, you can do the following trick:
Create a script that does the following, and add it to your crontab via crontab -e:
#!/bin/bash
REMOTEUSER=username
REMOTEHOST=remotehost
SSH_REMOTEPORT=22
SSH_LOCALPORT=10022
TUNNEL_REMOTEPORT=8080
TUNNEL_LOCALPORT=8080
createTunnel() {
    /usr/bin/ssh -f -N -L$SSH_LOCALPORT:$REMOTEHOST:$SSH_REMOTEPORT -L$TUNNEL_LOCALPORT:$REMOTEHOST:$TUNNEL_REMOTEPORT $REMOTEUSER@$REMOTEHOST
    RC=$?  # capture ssh's exit code before any other command overwrites $?
    if [[ $RC -eq 0 ]]; then
        echo Tunnel to $REMOTEHOST created successfully
    else
        echo An error occurred creating a tunnel to $REMOTEHOST RC was $RC
    fi
}
## Run the 'ls' command remotely. If it returns non-zero, then create a new connection
/usr/bin/ssh -p $SSH_LOCALPORT $REMOTEUSER@localhost ls >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo Creating new tunnel connection
    createTunnel
fi
In fact, this script will open two local ports:
local port 10022, forwarded to the remote ssh port 22, which is used to check whether the tunnel is still alive
local port 8080, forwarded to the remote port 8080, which is the port you actually want to use
Please check, and send me further questions via comments.
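For the crontab part, an entry along these lines would run the check every five minutes (the script path here is a hypothetical example):
*/5 * * * * /root/Tunnel/check_tunnel.sh >> /root/Tunnel/check_tunnel.log 2>&1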
(I add this as an answer since there is not enough room for it in a comment.)
OK, I managed to make the script run and launch the ssh tunnel (I had to specify my hostname instead of localhost in order for it to be triggered):
#!/bin/bash
LOGFILE="/root/Tunnel/logBatchRedemarrage.log"
NOW="$(date +%d/%m/%Y' - '%H:%M)" # date & time of log
REMOTEUSER=username
REMOTEHOST=remoteHost
SSH_REMOTEPORT=22
SSH_LOCALPORT=10022
TUNNEL_REMOTEPORT=12081
TUNNEL_SPECIFIC_REMOTE_PORT=22223
TUNNEL_LOCALPORT=8082
createTunnel() {
    /usr/bin/ssh -f -N -L$SSH_LOCALPORT:$REMOTEHOST:$SSH_REMOTEPORT -L$TUNNEL_LOCALPORT:$REMOTEHOST:$TUNNEL_REMOTEPORT $REMOTEUSER@193.abc.def.ghi -p $TUNNEL_SPECIFIC_REMOTE_PORT
    RC=$?  # capture ssh's exit code before any other command overwrites $?
    if [[ $RC -eq 0 ]]; then
        echo [$NOW] Tunnel to $REMOTEHOST created successfully >> $LOGFILE
    else
        echo [$NOW] An error occurred creating a tunnel to $REMOTEHOST RC was $RC >> $LOGFILE
    fi
}
## Run the 'ls' command remotely. If it returns non-zero, then create a new connection
/usr/bin/ssh -p $SSH_LOCALPORT $REMOTEUSER@193.abc.def.ghi ls >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo [$NOW] Creating new tunnel connection >> $LOGFILE
createTunnel
fi
However, I get an immediate error message (below) when the tunnel is already running and cron launches the script again; it seems it cannot bind the listening ports. Also, since I need some time to gather proof, I can't say yet whether it will successfully restart the tunnel when it goes down.
Here's the response to the second start of the script:
bind: Address already in use
channel_setup_fwd_listener: cannot listen to port: 10022
bind: Address already in use
channel_setup_fwd_listener: cannot listen to port: 8082
Could not request local forwarding.
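One way to avoid those messages, as a sketch assuming nc (netcat) is installed: test whether something is already listening on the local forwarding port before creating the tunnel.
# nc -z succeeds when something already listens on the port,
# i.e. the tunnel is already up, so only create it otherwise.
if ! nc -z localhost "$SSH_LOCALPORT" 2>/dev/null; then
    createTunnel
fi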
Every night I go through the same process of checking failover systems for our T1's. I essentially go through the following process:
Start the failover process.
traceroute $server;
Once I see it's failed over, I verify that connections work by SSHing into a server.
ssh $server;
Then once I see it works, I take it off of failover.
So what I want to do is continually run a traceroute until I get a certain result, then run an SSH command.
Put your list of successful messages in a file (omit the variable lines and the variable fractions of each line, and use a ^ to anchor the start of the line), as such:
patterns.list:
^ 7 4.68.63.165
^ 8 4.68.17.133
^ 9 4.79.168.210
^10 216.239.48.108
^11 66.249.94.46
^12 72.14.204.99
Then a simple while loop:
while ! traceroute -n "${TARGET}" | grep -f patterns.list
do
    sleep 5 # 5 second delay between traceroutes, for niceness.
done
ssh "${DESTINATION}"
Use traceroute -n to generate the output so you don't get an IP address that resolves one time and a name the next, resulting in a false positive.
I think you would be better off using the ping command to verify a server's accessibility than traceroute.
It is easy to check the return status of the ping command without using any grep at all:
if ping -c 4 -n -q 10.10.10.10 >/dev/null 2>&1; then
echo "Server is ok"
else
echo "Server is down"
fi
If you want to do it continually in a loop, try this:
function check_ssh {
# do your ssh stuff here
echo "performing ssh test"
}
while : ; do
if ping -c 4 -n -q 10.10.10.10 >/dev/null 2>&1; then
echo "Server is ok"
check_ssh
else
echo "Server is down"
fi
sleep 60
done