Using an "if" statement with curl in terminal? - linux

I'm using this command to get the response code of a page using curl:
curl -s -o /dev/null -w "%{http_code}" 'https://www.example.com'
If the response code is 200, then I want to delete a certain file on my computer. If it isn't 200, nothing should be done.
What's the easiest way to do this?

You can store the result in a shell variable (via command substitution), and then test the value with a simple if and [[ command. For example, in bash:
#!/bin/bash
code=$(curl -s -o /dev/null -w "%{http_code}" 'https://www.example.com')
if [[ $code == 200 ]]; then
rm /path/to/file
# other actions
fi
If all you want is a simple rm, you can shorten it to a one-liner; since this is a standalone script, the command substitution moves inline:
#!/bin/bash
[[ $(curl -s -o /dev/null -w "%{http_code}" 'https://www.example.com') == 200 ]] && rm /path/to/file
In a generic POSIX shell, you'll have to use a less flexible [ command and quote the variable:
#!/bin/sh
code=$(curl -s -o /dev/null -w "%{http_code}" 'https://www.example.com')
if [ "$code" = 200 ]; then
rm /path/to/file
fi
Additionally, to test for a whole class of codes (e.g. 2xx), you can use wildcards:
#!/bin/bash
code=$(curl -s -o /dev/null -w "%{http_code}" 'https://www.example.com')
[[ $code == 2* ]] && rm /path/to/file
or the case command, as in the sketch below.
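A minimal case sketch along the same lines (the URL and the actions are placeholders):
#!/bin/bash
code=$(curl -s -o /dev/null -w "%{http_code}" 'https://www.example.com')
case $code in
2*) rm /path/to/file ;;       # any 2xx: success, delete the file
3*) echo "redirect: $code" ;; # you may prefer curl -L to follow redirects
*) : ;;                       # anything else: do nothing
esac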

Related

Set specific option sets for arguments in bash

I have a script where I can pass different sets of arguments for the work the script has to do. The options are specified below:
[Show Help] ./myscript.sh -h
[To Create] ./myscript.sh -c -f /input_loc/input_file.txt
[To Export] ./myscript.sh -e -d /destination_loc/exported_db_date.csv
[To Import] ./myscript.sh -i -s /source_loc/to_import_date.csv
And the script myscript.sh uses getopts to parse the options and then feeds them to case...esac to apply checks and logic to each of the arguments passed, to verify their validity. For example, like below:
while getopts "cd:ef:his:" o; do
case "${o}" in
# option handling and validation logic for each flag goes here
esac
done
My question is: what logic can I use to force arguments to be presented only in particular sets, e.g.
Allowed Option set
./myscript.sh -h
./myscript.sh -c -f <file_name>
./myscript.sh -e -d <file_name>
./myscript.sh -i -s <file_name>
NOT Allowed Option set
./myscript.sh -h -c -f <file_name>
./myscript.sh -h -d <file_name>
./myscript.sh -e -s <file_name>
./myscript.sh -c -i -f <file_name>
I think your specific usage of getopts here is needed only to parse the valid option flags first and put them into an array, and then to use the parsed flags to ensure that only the required combinations were set.
The steps involved are two-fold. First, parse the valid flags into an array (multi); second, join the flags together without spaces and regex-match the result against the allowed combinations. If the combination isn't one of the allowed ones, don't run your logic.
#!/usr/bin/env bash
multi=()
while getopts "cd:ef:his:" opt; do
case "$opt" in
c|d|e|f|h|i|s) multi+=("$opt");;
*) printf 'invalid flag provided\n' 1>&2; exit 1;;
esac
done
# Join the collected flags with no separator, e.g. (c f) -> "cf"
joined=$(IFS=''; echo "${multi[*]}")
# The allowed combinations, in the order getopts sees them
if [[ $joined =~ ^(h|cf|ed|is)$ ]]; then
printf 'valid flags provided\n'
# Add your logic here
fi
Note that this answer does not handle swapped flag order: if ./myscript.sh -h -c -f <file_name> isn't allowed, should ./myscript.sh -f <file_name> -h -c be allowed? The match depends on the order in which getopts encounters the flags, so modify the answer based on what you need.
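If you do want the check to be independent of the order the user typed the flags in, one option (a hedged sketch, untested against all corner cases) is to sort the collected flags before joining them:
# Sort the flags so that, e.g., (f c) and (c f) both become "cf";
# the allowed sets below are therefore written with their letters sorted:
# -h -> "h", -c -f -> "cf", -e -d -> "de", -i -s -> "is"
sorted=$(printf '%s\n' "${multi[@]}" | sort | tr -d '\n')
if [[ $sorted =~ ^(h|cf|de|is)$ ]]; then
printf 'valid flags provided\n'
fi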
I've used an if/else chain for this:
#!/bin/bash
my_ary=("$@")
LEN=${#my_ary[@]}
if [[ ${my_ary[0]} == "-h" && $LEN -eq 1 ]] || \
[[ ${my_ary[0]} == "-c" && ${my_ary[1]} == "-f" && $LEN -eq 3 ]] || \
[[ ${my_ary[0]} == "-e" && ${my_ary[1]} == "-d" && $LEN -eq 3 ]] || \
[[ ${my_ary[0]} == "-i" && ${my_ary[1]} == "-s" && $LEN -eq 3 ]]; then
echo "Proper set of arguments passed"
else
echo "You are not passing arguments in proper sets, please look at Usage once again..."
fi
Is there a better way to do the same checks?
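For what it's worth, one arguably cleaner pattern (a hedged sketch, not from the thread) is to let getopts accumulate what it saw into a couple of variables and whitelist the combination once, after the loop; this also makes the check independent of the order in which the flags were typed:
#!/bin/bash
mode='' fileflag='' file=''
while getopts "cd:ef:his:" opt; do
case "$opt" in
c|e|h|i) mode+=$opt ;;                 # action flags; concatenation exposes duplicates like -c -i
d|f|s) fileflag+=$opt; file=$OPTARG ;; # file flags, each takes an argument
*) exit 1 ;;
esac
done
case "$mode$fileflag" in
h) echo "show help" ;;
cf) echo "create from $file" ;;
ed) echo "export to $file" ;;
is) echo "import from $file" ;;
*) echo "invalid option set" >&2; exit 1 ;;
esac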

Checking if String is in a command answer

I'm struggling with a problem in Linux bash.
I want a script to execute a command
curl -s --head http://myurl/ | head -n 1
and if the result of the command contains 200, it should execute another command. Otherwise it should echo something.
What I have now:
CURLCHECK=curl -s --head http://myurl | head -n 1
if [[ $($CURLCHECK) =~ "200" ]]
then
echo "good"
else
echo "bad"
fi
The script prints:
HTTP/1.1 200 OK
bad
I tried many ways but none of them seems to work.
Can someone help me?
I would do something like this:
if curl -s --head http://myurl | head -n 1 | grep "200" >/dev/null 2>&1; then
echo good
else
echo bad
fi
You need to actually capture the output from the curl command:
CURLCHECK=$(curl -s --head http://myurl | head -n 1)
I'm surprised you're not getting a "-s: command not found" error
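Putting both fixes together, a minimal corrected version of the script (same URL placeholder as the question):
#!/bin/bash
# Capture the first header line, then test the variable itself;
# the original $($CURLCHECK) would try to execute "HTTP/1.1 200 OK" as a command.
CURLCHECK=$(curl -s --head http://myurl | head -n 1)
if [[ $CURLCHECK =~ 200 ]]; then
echo "good"
else
echo "bad"
fi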
You can use this curl command with -w "%{http_code}" to get just the HTTP status code:
[[ $(curl -s -w "%{http_code}" -A "Chrome" -L "http://myurl/" -o /dev/null) == 200 ]] &&
echo "good" || echo "bad"
Using wget:
if wget -O /dev/null your_url 2>&1 | grep -q "200 OK"; then echo good; else echo bad; fi

Shell script with Wget - If else nested inside for loop

I'm trying to make a shell script that reads a list of download URLs to find out if they're still active. I'm not sure what's wrong with my current script (I'm new to this), and any pointers would be a huge help!
user#pc:~/test# cat sites.list
http://www.google.com/images/srpr/logo3w.png
http://www.google.com/doesnt.exist
notasite
Script:
#!/bin/bash
for i in `cat sites.list`
do
wget --spider $i -b
if grep --quiet "200 OK" wget-log; then
echo $i >> ok.txt
else
echo $i >> notok.txt
fi
rm wget-log
done
As is, the script outputs everything to notok.txt (the first Google URL should go to ok.txt).
But if I run:
wget --spider http://www.google.com/images/srpr/logo3w.png -b
And then do:
grep "200 OK" wget-log
It greps the string without any problems. What noob mistake did I make with the syntax? Thanks m8s!
The -b option is sending wget to the background, so you're doing the grep before wget has finished.
Try without the -b option:
if wget --spider $i 2>&1 | grep --quiet "200 OK" ; then
There are a few issues with what you're doing.
Your for i in will have problems with lines that contain whitespace. Better to use while read to read individual lines of a file.
You aren't quoting your variables. What if a line in the file (or word in a line) starts with a hyphen? Then wget will interpret that as an option. You have a potential security risk here, as well as an error.
Creating and removing files isn't really necessary. If all you're doing is checking whether a URL is reachable, you can do that without temp files and the extra code to remove them.
wget isn't necessarily the best tool for this. I'd advise using curl instead.
So here's a better way to handle this...
#!/bin/bash
sitelist="sites.list"
curl="/usr/bin/curl"
# Some errors, for good measure...
if [[ ! -f "$sitelist" ]]; then
echo "ERROR: Sitelist is missing." >&2
exit 1
elif [[ ! -s "$sitelist" ]]; then
echo "ERROR: Sitelist is empty." >&2
exit 1
elif [[ ! -x "$curl" ]]; then
echo "ERROR: I can't work under these conditions." >&2
exit 1
fi
# Allow extended pattern matching with extglob (for the case..esac below)
shopt -s extglob
while read -r url; do
# remove comments
url=${url%%#*}
# skip empty lines
if [[ -z "$url" ]]; then
continue
fi
# Handle just ftp, http and https.
# We could do full URL pattern matching, but meh.
case "$url" in
@(f|ht)tp?(s)://*)
# Get just the numeric HTTP response code
http_code=$($curl -sL -w '%{http_code}' "$url" -o /dev/null)
case "$http_code" in
200|226)
# You'll get a 226 in ${http_code} from a valid FTP URL.
# If all you really care about is that the response is in the 200's,
# you could match against "2??" instead.
echo "$url" >> ok.txt
;;
*)
# You might want different handling for redirects (301/302).
echo "$url" >> notok.txt
;;
esac
;;
*)
# If we're here, we didn't get a URL we could read.
echo "WARNING: invalid url: $url" >&2
;;
esac
done < "$sitelist"
This is untested. For educational purposes only. May contain nuts.

Bash | curl | curls 2 URL's then stops

I am trying to write a simple bash script that will use a list from a text document and curl each URL on the list in order to see what its contents are. It lets me curl 2 sites and creates the text documents for the rest; however, it only downloads the first 2. I have already managed to write the script that pulls their IPs and places them in a separate file using the grep command. At first I tried:
#!/bin/bash
for var in `cat host.txt`; do
curl -s $var >> /tmp/ping/html/$var.html
done
I have tried with and without the silent switch. I then tried the following:
#!/bin/bash
for var in `head -2 host.txt`; do
curl $var >> /tmp/ping/html/$var.html
wait
done
for var in `head -4 host.txt | tail -2`; do
curl $var >> /tmp/ping/html/$var.html
done
This would try to do them all at the same time, again stopping after 2:
#!/bin/bash
for var in `head -2 host.txt`; do
curl $var >> /tmp/ping/html/$var.html
done
wait
for var in `head -4 host.txt | tail -2`; do
curl $var >> /tmp/ping/html/$var.html
done
This would do the same. I am new to bash scripting and only know some of the basics; any help would be appreciated.
Start with the simple: verify that you are in fact iterating over the entire list:
# This is the recommended way to iterate over the file. See
# http://mywiki.wooledge.org/BashFAQ/001
while read -r var; do
echo "$var"
done < host.txt
Then add in the call to curl, checking its exit status:
while read -r var; do
echo "$var"
curl "$var" >> /tmp/ping/html/$var.html || echo "curl failed: $?"
done < host.txt
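One aside, not raised in the thread: curl exits 0 even when the server returns an error page, so the || branch above only catches network-level failures. If you also want HTTP errors (404 and friends) reflected in the exit status, curl's -f/--fail option does that:
# With -f, a 404 makes curl exit non-zero instead of saving the error page.
curl -fsS "$var" >> "/tmp/ping/html/$var.html" || echo "curl failed: $?"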
You redirect into a filename built from $var, which can result in a wrong filename because of the two slashes in the URL. Additionally, I would quote the URL. For example, it works with the basename of the URL:
#!/bin/bash
for var in `cat host.txt`; do
name=$(basename "$var")
curl -v -s "$var" -o "/tmp/ping/html/$name.html"
done
You may also want to skip blank lines and comments (#):
#!/bin/bash
file="host.txt"
curl="curl"
while read -r line
do
[[ $line = \#* ]] || [[ -z "${line}" ]] && continue
filename=$(basename "$line")
$curl -s "$line" >> "/tmp/ping/html/$filename.html"
done < "$file"

Consuming bandwidth

I know how to write a basic bash script which uses wget to download a file, but how do I run this in an endless loop so that it downloads the specified file, deletes it when the download is complete, then downloads it again?
You're looking for:
while :
do
wget -O - -q "http://some.url/" > /dev/null
done
This will not save the file, will not output useless info, and dumps the contents into /dev/null over and over.
Edit: to just consume bandwidth, use ping -f or ping -f -s 65507.
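If you literally want the download/delete cycle described in the question (a sketch; the URL and path are placeholders), that's just:
#!/bin/bash
# Endless loop: download, delete, repeat.
while :
do
wget -q -O /tmp/bigfile "http://some.url/"
rm -f /tmp/bigfile
done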
If your goal is to max out your bandwidth, especially for the purposes of benchmarking, use iperf. You run iperf on your server and client, and it will test your bandwidth using the protocol and parameters you specify. It can test one-way or two-way throughput and can optionally try to achieve a "target" bandwidth utilization (e.g. 3 Mbps).
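For example, with the classic iperf2 tools (a sketch; the hostname and target rate are placeholders):
# On the server:
iperf -s
# On the client: a 30-second bidirectional UDP test aiming at 3 Mbps.
iperf -c server.example.com -u -b 3M -d -t 30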
Everything is possible with programming. :)
If you want to try to max out your internet bandwidth, you could start many, many wget processes and let them download some big disk image files at the same time, while simultaneously sending some huge files back to some server.
The details are left to the implementation, but this is one method to max out your bandwidth.
In case you want to consume local network bandwidth, you'll need another computer. Then, from computer A (IP 192.168.0.1), listen on a port (e.g. 12345):
$ netcat -l -p 12345
Then, from the other computer, send data to it:
$ netcat 192.168.0.1 12345 < /dev/zero
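To load the link in both directions at once, you could add a second pair in reverse (same sketch conventions; computer B's IP of 192.168.0.2 is an assumption):
# On computer B, listen and discard:
netcat -l -p 12346 > /dev/null
# On computer A, push data toward B while the first pair is still running:
netcat 192.168.0.2 12346 < /dev/zero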
I prefer to use curl over wget; it is more scriptable. Here is an excerpt from a bash script I wrote which checks the SVN revision and then gives the user a choice to download the stable or the latest version. It then parses out the file, separating the "user settings" from the rest of the script.
svnrev=`curl -s -m10 mythicallibrarian.googlecode.com/svn/trunk/| grep -m1 Revision | sed s/"<html><head><title>mythicallibrarian - "/""/g| sed s/": \/trunk<\/title><\/head>"/""/g`
if ! which librarian-notify-send>/dev/null && test "$LinuxDep" = "1"; then
dialog --title "librarian-notify-send" --yesno "install librarian-notify-send script for Desktop notifications?" 8 25
test $? = 0 && DownloadLNS=1 || DownloadLNS=0
if [ "$DownloadLNS" = "1" ]; then
curl "http://mythicallibrarian.googlecode.com/files/librarian-notify-send">"/usr/local/bin/librarian-notify-send"
sudo chmod +x /usr/local/bin/librarian-notify-send
fi
fi
if [ ! -f "./librarian" ]; then
DownloadML=Stable
echo "Stable `date`">./lastupdated
else
lastupdated="`cat ./lastupdated`"
DownloadML=$(dialog --title "Version and Build options" --menu "Download an update first then Build mythicalLibrarian" 10 70 15 "Latest" "Download and switch to SVN $svnrev" "Stable" "Download and switch to last stable version" "Build" "using: $lastupdated" 2>&1 >/dev/tty)
if [ "$?" = "1" ]; then
clear
echo "mythicalLibrarian was not updated."
echo "Please re-run mythicalSetup."
echo "Done."
exit 1
fi
fi
clear
if [ "$DownloadML" = "Stable" ]; then
echo "Stable "`date`>"./lastupdated"
test -f ./mythicalLibrarian.sh && rm -f mythicalLibrarian.sh
curl "http://mythicallibrarian.googlecode.com/files/mythicalLibrarian">"./mythicalLibrarian.sh"
cat "./mythicalLibrarian.sh"| sed s/' '/'\\t'/g |sed s/'\\'/'\\\\'/g >"./mythicalLibrarian1" #sed s/"\\"/"\\\\"/g |
rm ./mythicalLibrarian.sh
mv ./mythicalLibrarian1 ./mythicalLibrarian.sh
parsing="Stand-by Parsing mythicalLibrarian"
startwrite=0
test -f ./librarian && rm -f ./librarian
echo -e 'mythicalVersion="'"`cat ./lastupdated`"'"'>>./librarian
while read line
do
test "$line" = "########################## USER JOBS############################" && let startwrite=$startwrite+1
if [ $startwrite = 2 ]; then
clear
parsing="$parsing""."
test "$parsing" = "Stand-by Parsing mythicalLibrarian......." && parsing="Stand-by Parsing mythicalLibrarian"
echo $parsing
echo -e "$line" >> ./librarian
fi
done <./mythicalLibrarian.sh
clear
echo "Parsing mythicalLibrarian completed!"
echo "Removing old and downloading new version of mythicalSetup..."
test -f ./mythicalSetup.sh && rm -f ./mythicalSetup.sh
curl "http://mythicallibrarian.googlecode.com/files/mythicalSetup.sh">"./mythicalSetup.sh"
chmod +x "./mythicalSetup.sh"
./mythicalSetup.sh
exit 0
fi
if [ "$DownloadML" = "Latest" ]; then
svnrev=`curl -s mythicallibrarian.googlecode.com/svn/trunk/| grep -m1 Revision | sed s/"<html><head><title>mythicallibrarian - "/""/g| sed s/": \/trunk<\/title><\/head>"/""/g`
echo "$svnrev "`date`>"./lastupdated"
test -f ./mythicalLibrarian.sh && rm -f mythicalLibrarian.sh
curl "http://mythicallibrarian.googlecode.com/svn/trunk/mythicalLibrarian">"./mythicalLibrarian.sh"
cat "./mythicalLibrarian.sh"| sed s/' '/'\\t'/g |sed s/'\\'/'\\\\'/g >"./mythicalLibrarian1" #sed s/"\\"/"\\\\"/g |
rm ./mythicalLibrarian.sh
mv ./mythicalLibrarian1 ./mythicalLibrarian.sh
parsing="Stand-by Parsing mythicalLibrarian"
startwrite=0
test -f ./librarian && rm -f ./librarian
echo -e 'mythicalVersion="'"`cat ./lastupdated`"'"'>>./librarian
while read line
do
test "$line" = "########################## USER JOBS############################" && let startwrite=$startwrite+1
if [ $startwrite = 2 ]; then
clear
parsing="$parsing""."
test "$parsing" = "Stand-by Parsing mythicalLibrarian......." && parsing="Stand-by Parsing mythicalLibrarian"
echo $parsing
echo -e "$line" >> ./librarian
fi
done <./mythicalLibrarian.sh
clear
echo "Parsing mythicalLibrarian completed!"
echo "Removing old and downloading new version of mythicalSetup..."
test -f ./mythicalSetup.sh && rm -f ./mythicalSetup.sh
curl "http://mythicallibrarian.googlecode.com/svn/trunk/mythicalSetup.sh">"./mythicalSetup.sh"
chmod +x "./mythicalSetup.sh"
./mythicalSetup.sh
exit 0
fi
EDIT: NEVERMIND I THOUGHT YOU WERE SAYING IT WAS DOWNLOADING IN AN ENDLESS LOOP
