Linux script with curl to check whether a webservice is up

I have a webservice provided at http://localhost/test/testweb
I want to write a script that checks whether the webservice is up, using curl.
Is there a curl parameter that returns something like 200 OK, or a true/false value, so that I can use it in an if-else block in a Linux shell script?

curl -sL -w "%{http_code}\\n" "http://www.google.com/" -o /dev/null
-s = Silences cURL's output
-L = Follow redirects
-w = Custom output format
-o = Redirects the HTML output to /dev/null
Example:
[~]$ curl -sL -w "%{http_code}\\n" "http://www.google.com/" -o /dev/null
200
I would probably remove the \\n if I were to capture the output.
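To use that status code in an if-else block, as the question asks, a minimal sketch (using the URL from the question) could be:
status=$(curl -sL -w "%{http_code}" "http://localhost/test/testweb" -o /dev/null)
if [ "$status" -eq 200 ]; then
    echo "Webservice is up"
else
    echo "Webservice is down or unreachable (HTTP $status)"
fi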

I use:
curl -f -s -I "http://example.com" &>/dev/null && echo OK || echo FAIL
-f --fail Fail silently (no output at all) on HTTP errors
-s --silent Silent mode
-I --head Show document info only
Note:
depending on your needs you can also remove the -I, because in some cases you need to do a GET and not a HEAD
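For example, if you do need a GET instead of a HEAD, a sketch of the same check without -I (example.com is just a placeholder) could be:
if curl -f -s "http://example.com" -o /dev/null; then
    echo OK
else
    echo FAIL
fi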

Same as @burhan-khalid's answer, but with --connect-timeout 3 and --max-time 5 added, and wrapped in a small function so the status code can be tested directly:
test_command() {
  curl -sL \
       -w "%{http_code}" \
       "http://www.google.com:8080/" \
       -o /dev/null \
       --connect-timeout 3 \
       --max-time 5
}
if [ "$(test_command)" == "200" ]; then
    echo "OK"
else
    echo "KO"
fi

This checks the headers via wget:
2>&1 pipes stderr to stdout,
grep filters for the status line,
-O /dev/null just throws away the content of the page.
if [ "$(wget http://example.org/ -O /dev/null -S --quiet 2>&1 | grep '200 OK')" != "" ];
then
echo Hello;
fi;
I know it's not curl, but it's still a solution.

I needed a better answer to this, so I wrote the script below.
The fakePhrase is used to detect ISP "Search Assist" adware HTTP responses.
#!/bin/bash
fakePhrase="verizon"
siteList=(
'http://google.com'
'https://google.com'
'http://wikipedia.org'
'https://wikipedia.org'
'http://cantgettherefromhere'
'http://searchassist.verizon.com'
)
exitStatus=0
function isUp {
http=$(curl -sL -w "%{http_code}" "$1" -o temp_isUp)
fakeResponse=$(grep "$fakePhrase" temp_isUp)
if [ -n "$fakeResponse" ]; then
http=$fakePhrase
fi
case $http in
[2]*)
;;
[3]*)
echo 'Redirect'
;;
[4]*)
exitStatus=4
echo "$1 is DENIED with ${http}"
;;
[5]*)
exitStatus=5
echo "$1 is ERROR with ${http}"
;;
*)
exitStatus=6
echo "$1 is NO RESPONSE with ${http}"
;;
esac
}
for var in "${siteList[@]}"
do
isUp "$var"
done
if [ "$exitStatus" -eq "0" ]; then
echo 'All up'
fi
rm temp_isUp
exit $exitStatus

Use this:
curl -o "$CURL_OUTPUT" -s -w "%{http_code}\\n%{time_total}\\n" "$URL" > "$TMP_FILE" 2>&1
cat "$TMP_FILE"
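With that -w format, $TMP_FILE ends up with the status code on the first line and the total time on the second; a sketch of reading them back (assuming those variables are already set as above) could be:
http_code=$(sed -n '1p' "$TMP_FILE")
time_total=$(sed -n '2p' "$TMP_FILE")
if [ "$http_code" = "200" ]; then
    echo "Up (took ${time_total}s)"
else
    echo "Down or erroring (HTTP $http_code)"
fi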

Related

Synchronize all current users in bash script using mkfifo pipe

I'm creating a program, written as a bash script, that manages users and groups. I want to reload all current users when I add or delete a user in a specific tab. I tried to use a mkfifo pipe, but it only reloads all users when I restart the app. Any ideas how to solve this problem? Below is the code that performs this function.
mkfifo "$fpipe"
trap "rm -f $fpipe $fts" EXIT
fpipe="OUTPUT.txt"
#getAllUsers function
function get_all_user(){
echo -e '\f' >> "$fpipe"
alluser=$(cat /etc/passwd | awk -F: '$7=="/bin/bash" {print $1"\\n"$3"\\n"$4"\\n"}' | tr -d '[:space:]' )
echo -e $alluser > "$fpipe"
}
export -f get_all_user
#get_selected_user function
function get_selected_user()
{
echo -e '\f' > "$temp"
echo "$1" > "$temp"
cat $temp
}
export -f get_selected_user
#adduser function
function run_adduser()
{
# check for this in '/etc/passwd' and '/etc/shadow'
# $2 is the username
# $3 is the password
if id "$2" &>/dev/null; then
zenity --warning \
--text="Username existed. Please enter another username."
else
useradd -m -p $(openssl passwd -1 $3) -s /bin/bash -G sudo $2
zenity --info \
--text="User added successfully."
fi
}
export -f run_adduser
# Users information tab
get_all_user
yad --plug=$KEY --tabnum=1 --width=600 --height=450 --expand-column=0 --limit=10 \
--list --select-action='@bash -c "get_selected_user %s %s %s"' --column="Username" --column="UID" --column="GID" <&3 &
exec 3>&-

Checking whether the response contains "connected" or not, with wget

I am just trying to write a script which checks whether the response contains "connected" or not.
#!/bin/bash
cat control.txt | while read link  # control.txt contains http and https urls
do
if [[ $(wget --spider -S $link 2>&1 | grep "connected") =~ *"connected"* ]];
then echo "OK";
else echo "FAIL";
fi
done
Output:
sh -x portcontrol.sh
portcontrol.sh[2]: Syntax error at line 4 : `=~' is not expected.
If I read your script correctly, you're retrieving the page, but ignoring its contents, and all you want is to see whether wget shows the string 'connected'.
If that is so, your code can be simplified as follows:
if wget --spider -S $link 2>&1 | grep "connected" > /dev/null
then
echo "OK";
else
echo "FAIL";
fi
You don't need to capture wget's output and run a regexp search on it; grep already returns 0 (success) or 1 (not found) when searching for the string you gave.
That return code can be used directly to control the if.
The output of grep is redirected to /dev/null so that it does not show up on the screen or in the script output.
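Putting that together with the loop over control.txt from the question (one URL per line), a sketch of the whole script might look like this, using grep -q instead of the redirect:
#!/bin/bash
# control.txt contains http and https urls, one per line
while read -r link; do
    if wget --spider -S "$link" 2>&1 | grep -q "connected"; then
        echo "OK"
    else
        echo "FAIL"
    fi
done < control.txt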
If you simply want to see if the connection request succeeded, and the wget output is of the form:
Connecting to <hostname>|<ip_addr>|:<port>... connected.
it should be sufficient to just do:
if [[ $(wget --spider -S $link 2>&1 | grep -c " connected\.") -gt 0 ]];
then echo "OK";
else echo "FAIL";
fi
Checking exit code works too, but it depends on what your requirements really are.

Multithreading in bash scripting

I run a bash script that loops over each line in a text file and cURLs the site listed on that line.
Here is my script:
SECRET_KEY='zuhahaha'
FILE_NAME=""
case "$1" in
"sma")
FILE_NAME="sma.txt"
;;
"smk")
FILE_NAME="smk.txt"
;;
"smp")
FILE_NAME="smp.txt"
;;
"sd")
FILE_NAME="sd.txt"
;;
*)
echo "not in case !"
;;
esac
function save_log()
{
printf '%s\n' \
"Header Code : $1" \
"Executed at : $(date)" \
"Response Body : $2" \
"====================================================================================================="$'\r\n\n' >> output.log
}
while IFS= read -r line;
do
HTTP_RESPONSE=$(curl -L -s -w "HTTPSTATUS:%{http_code}\\n" -H "X-Gitlab-Event: Push Hook" -H 'X-Gitlab-Token: '$SECRET_KEY --insecure $line 2>&1) &
HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g') &
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://') &
save_log "$HTTP_STATUS" "$HTTP_BODY" &
done < $FILE_NAME
How can I run this with threads, or otherwise make the loop faster, in bash?
You should be able to do this relatively easily. Don't try to background each command, but instead put the body of your while loop into a subshell and background that. That way, your commands (which clearly depend on each other) run sequentially, but all the lines in the file can be processed in parallel.
while IFS= read -r line;
do
(
HTTP_RESPONSE=$(curl -L -s -w "HTTPSTATUS:%{http_code}\\n" -H "X-Gitlab-Event: Push Hook" -H 'X-Gitlab-Token: '$SECRET_KEY --insecure $line 2>&1)
HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
save_log "$HTTP_STATUS" "$HTTP_BODY" ) &
done < $FILE_NAME
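One caveat with backgrounding every iteration: the script can reach the end of the loop, and exit, before all the curls and save_log calls have finished. If that matters, a wait after the loop holds the script until every subshell is done; a minimal sketch:
while IFS= read -r line; do
    (
        :   # same curl / sed / save_log body as in the loop above
    ) &
done < "$FILE_NAME"

# Block here until every backgrounded subshell has finished
wait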
My favourite way to do this is to generate a file that lists all the commands you wish to perform. If you have a script that performs your operations, create a file like:
$ cat commands.txt
echo 1
echo 2
echo $[12+3]
....
For example this could be hundreds of commands long.
To execute each line in parallel, use the parallel command with, say, at most 3 jobs running in parallel at any time.
$ cat commands.txt | parallel -j 3
1
2
15
For your curl example, you could generate thousands of curl commands and execute them, say, 30 in parallel at any one time.
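A sketch of that idea for the curl case, assuming GNU parallel is installed and $FILE_NAME holds one URL per line as in the question (the URLs must not contain single quotes for this simple quoting to hold):
# Build one curl command per URL listed in $FILE_NAME
while IFS= read -r line; do
    echo "curl -L -s -o /dev/null -w '$line: %{http_code}\n' '$line'"
done < "$FILE_NAME" > commands.txt

# Run at most 30 of them at any one time
parallel -j 30 < commands.txt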

using curl with -d param1=value1 seems not work properly

I have a Unix shell script running every hour (crontab on CentOS 7).
Inside that script, a loop reads and processes every new file found in a defined folder.
At the end of each file's treatment a curl command is sent with some parameters, for example:
curl https://aaaaaa.com/website -d param1=value1 -d param2=value2 ....
Each time the script is run by crontab, the 1st curl is correctly converted to a true URL and received by Apache/Tomcat, but all the others are bad. In fact, the 2nd and following curls do not seem to be converted to the correct format like
https://aaaaaa.com/website?param1=value1&param2=value2
but they are sent like
https://aaaaaa.com/website -d param1=value1 -d param2=value2
So the website is unable to treat the parameters properly.
Why is the 1st command correctly converted to a correct URL format but not the following ones?
EDIT:
The part of shell :
#!/bin/bash
...
#======================================================
# FUNCTIONS
#======================================================
UpdateStatus () {
CMD_CURL="${URL_WEBSITE} -d client=CLIENT -d site=TEST -d produit=MEDIASFILES -d action=update"
CMD_CURL="${CMD_CURL} -d codecmd=UPDATE_MEDIA_STATUS"
CMD_CURL="${CMD_CURL} -d idmedia=$4"
CMD_CURL="${CMD_CURL} -d idbatch=$3"
CMD_CURL="${CMD_CURL} -d statusmedia=$2"
if [[ ! -z "$5" ]]; then
CMD_CURL="${CMD_CURL} -d filename=$5"
fi
echo " ${CMD_CURL}" >> $1
CURL_RESULT=`curl -k ${CMD_CURL}`
CURL_RESULT=`echo ${CURL_RESULT} | tr -d ' '`
echo " Result CURL = ${CURL_RESULT}" >> $1
if [ "${CURL_RESULT}" = "OK" ]; then
return 0
fi
return 1
}
#======================================================
# MAIN PROGRAM
#======================================================
echo "----- Batch in progress : `date '+%d/%m/%y - %H:%M:%S'` -----"
for file in $( ls ${DIR_FACTORY_BATCHFILES}/*.batch )
do
...
old_IFS=$IFS
while IFS=';' read <&3 F_STATUS F_FILEIN F_TYPE F_CODE F_ID F_IDPARENT F_TAGID3 F_PROF F_YEARMEDIA F_DATECOURS F_TIMEBEGINCOURS F_LANG || [[ -n "$F_STATUS $F_FILEIN $F_TYPE $F_CODE $F_ID $F_IDPARENT $F_TAGID3 $F_PROF $F_YEARMEDIA $F_DATECOURS $F_TIMEBEGINCOURS $F_LANG" && $F_STATUS ]];
do
...
UpdateStatus ${LOG_FILENAME} ${STATUS_ERROR} ${F_ID} ${F_IDPARENT}
...
done 3< $file
IFS=$old_IFS
...
done
You need to provide the "-d" flags and values before the URL so:
curl -d param1=value1 -d param2=value2 https://aaaaaa.com/website
Moreover, this command is going to send the parameters/values as POST parameters, not query parameters. You can use the "-G" flag, possibly combined with "--data-urlencode", to send them as query parameters, see:
https://unix.stackexchange.com/questions/86729/any-way-to-encode-the-url-in-curl-command
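For example, a sketch of sending the same parameters as query parameters instead of POST data:
curl -G \
     --data-urlencode "param1=value1" \
     --data-urlencode "param2=value2" \
     https://aaaaaa.com/website
-G makes curl append the data to the URL as a query string and issue a GET instead of a POST.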

How to stop xargs on first error?

I have a pages.txt file with 100 URLs inside. I want to check them one by one and fail on the first problem. This is what I'm doing:
cat pages.txt | xargs -n 1 curl --silent \
--output /dev/null --write-out '%{url_effective}: %{http_code}\n'; echo $?
The exit code is 1, but I only see it when the entire file is done. How can I stop earlier, on the first problem?
General method
xargs -n 1 sh -c '<your_command> $0 || exit 255' < input
Specific case
xargs -n 1 sh -c 'curl --silent --output /dev/null \
--write-out "%{url_effective}: %{http_code}\n" $0 || exit 255' < pages.txt
Explanation
For every URL in pages.txt, this executes sh -c 'curl ... $0 || exit 255' one at a time (-n 1), forcing the shell to exit with status 255 if the curl command fails.
From man xargs:
If any invocation of the command exits with a status of 255, xargs will stop immediately without reading any further input. An error message is issued on stderr when this happens.
I haven't found a way to do what you ask for with xargs, but a loop with read might be what you are looking for.
while read URL; do
curl --silent \
--output /dev/null --write-out '%{url_effective}: %{http_code}\n' $URL;
RET=$?;
echo $RET;
if [ $RET -ne 0 ]; then break; fi
done < pages.txt
