wget with errorlevel bash output - linux

I want to create a bash file (.sh) which does the following:
I call the script like ./download.sh www.blabla.com/bla.jpg
the script then has to echo whether the file was downloaded or not...
How can I do this? I know I can use errorlevel but I'm new to linux so...
Thanks in advance!

In Linux, every command sets an exit status when it finishes, which the shell stores in the special parameter $? (0 on success, non-zero on failure). You can examine this return code to see whether wget reported an error.
#!/bin/bash
wget "$1" 2>/dev/null
rc=$?
if [ "$rc" -eq 0 ]; then
    echo "$1 OK"
else
    echo "$1 FAILED"
fi
You could name this script download.sh. Make it executable with chmod 755 download.sh. Call it with the URL of the file you wish to download: ./download.sh www.google.com

You could try something like:
#!/bin/sh
[ -n "$1" ] || {
    echo "Usage: $0 [url to file to get]" >&2
    exit 1
}
wget "$1"
[ $? -ne 0 ] && {
    echo "Could not download $1" | mail -s "Uh Oh" you@yourdomain.com
    echo "Aww snap ..." >&2
    exit 1
}
# If we're here, it downloaded successfully, and will exit with a normal status
When making a script that will (likely) be called by other scripts, it is important to do the following:
Ensure argument sanity
Send e-mail, write to a log, or do something else so someone knows what went wrong
The >&2 simply redirects the error messages to stderr, which allows a calling script to do something like this:
foo-downloader >/dev/null 2>/some/log/file.txt
Since it is a short wrapper, no reason to forsake a bit of sanity :)
This also allows you to selectively direct the output of wget to /dev/null; you might actually want to see it when testing, especially if you get an e-mail saying it failed :)
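For example, here is a minimal sketch of such a wrapper (the -v second argument is a hypothetical convention of this sketch, not part of the scripts above):
#!/bin/sh
# Hypothetical sketch: pass -v as a second argument to watch wget's output
# while testing; otherwise discard it so calling scripts stay quiet.
if [ "$2" = "-v" ]; then
    wget "$1"
else
    wget "$1" >/dev/null 2>&1
fi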

wget executes in a non-interactive way. This means that wget works in the background and you can't catch the return code with $?.
One solution is to handle the --server-response option, searching the output for an HTTP 200 status code.
Example:
wget --server-response -q -o wgetOut http://www.someurl.com
sleep 5
_wgetHttpCode=`gawk '/HTTP/{ print $2 }' wgetOut`
if [ "$_wgetHttpCode" != "200" ]; then
    echo "[Error] `cat wgetOut`"
fi
Note: wget needs some time to finish its work; for that reason I put "sleep 5". This is not the best way to do it, but it worked OK for testing the solution.
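If the sleep proves fragile, note that wget only goes to the background when the -b option is given, so an untested sketch of the same check can run it in the foreground and read the log as soon as it returns:
wget --server-response -q -o wgetOut http://www.someurl.com
# wget has exited by this point, so wgetOut is complete; take the last
# HTTP status seen so redirects don't confuse the check.
_wgetHttpCode=`gawk '/HTTP/{ code=$2 } END{ print code }' wgetOut`
if [ "$_wgetHttpCode" != "200" ]; then
    echo "[Error] `cat wgetOut`"
fi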

Related

Use of echo >> produces inconsistent results

I've been trying to understand a problem that's cropped up with some of the scripts we use at work.
To generate many of our script logs, we utilize the exec command and file redirects to print all output from the script to both the terminal and a log file. Occasionally, for information that doesn't need to be displayed to the user, we do a straight redirect to the log file.
The issue we're seeing occurs on the last line of output to the file when we're printing the number of errors that occurred during that execution: The text doesn't get printed to the file.
In an attempt to diagnose the problem, I wrote a simplified version of our production script (script1.bash) and a test script (script2.bash) to try to tease out the problem.
script1.bash
#!/bin/bash

log_name="${USER}_`date +"%Y%m%d-%H%M%S"`_${HOST}_${1}.log"
log="/tmp/${log_name}"
log_tmp="/tmp/temp_logs"

err_count=0

finish()
{
    local ecode=0
    if [ $# -eq 1 ]; then
        ecode=${1}
    fi

    # This is the problem line
    echo "Error Count: ${err_count}" >> "${log}"

    mvlog
    local success=$?

    exec 1>&3 2>&4

    if [ ${success} -ne 0 ]; then
        echo ""
        echo "WARNING: Failed to save log file to ${log_tmp}"
        echo ""
        ecode=$((ecode+1))
    fi

    exit ${ecode}
}

mvlog()
{
    local ecode=1

    if [ ! -d "${log_tmp}" ]; then
        mkdir -p "${log_tmp}"
        chmod 775 "${log_tmp}"
    fi

    if [ -d "${log_tmp}" ]; then
        rsync -pt --bwlimit=4096 "${log}" "${log_tmp}/${log_name}" 2> /dev/null
        [ $? -eq 0 ] && ecode=0

        if [ ${ecode} -eq 0 ]; then
            rm -f "${log}"
        fi
    fi
}

exec 3>&1 4>&2 > >(tee "${log}") 2>&1

ecode=0

echo
echo "Some text"
echo

finish ${ecode}
script2.bash
#!/bin/bash
runs=10000
logdir="/tmp/temp_logs"
if [ -d "${logdir}" ]; then
    rm -rf "${logdir}"
fi
for i in $(seq 1 ${runs}); do
    echo "Conducting run #${i}/${runs}..."
    ${HOME}/bin/script1.bash ${i}
done
echo "Scanning logs from runs..."
total_count=`find "${logdir}" -type f -name "*.log*" | wc -l`
missing_count=`grep -L 'Error Count:' ${logdir}/*.log* | grep -c /`
echo "Number of runs performed: ${runs}"
echo "Number of log files generated: ${total_count}"
echo "Number of log files missing text: ${missing_count}"
My first test indicated roughly 1% of the time the line isn't written to the log file. I then proceeded to try several different methods of handling this line of output.
Echo and Wait
echo "Error Count: ${err_count}" >> "${log}"
wait
Alternate print method
printf "Error Count: %d\n" ${err_count} >> "${log}"
No Explicit File Redirection
echo "Error Count: ${err_count}"
Echo and Sleep
echo "Error Count: ${err_count}" >> "${log}"
sleep 0.2
Of these, #1 and #2 each had a 1% fail rate while #4 had a staggering 99% fail rate. #3 was the only methodology that had a 0% fail rate.
At this point, I'm at a loss for why this behavior is occurring, so I'm asking the gurus here for any insight.
(Note that the simple solution is to implement #3, but I want to know why this is happening.)
Without testing, this looks like a race condition between your script and tee. It's generally better to avoid multiple programs writing to the same file at the same time.
If you do insist on having multiple writers, make sure they are all in append mode, in this case by using tee -a. Appends to the local filesystem are atomic, so all writes should make it (this is not necessarily true for networked file systems).
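A minimal sketch of that fix, assuming the same redirection setup as script1.bash:
#!/bin/bash
log="/tmp/example.log"

# Both writers append: tee via -a, the script via >>. Appends to a local
# filesystem are atomic, so tee's writes can no longer land at a stale
# offset and clobber the script's direct writes.
exec 3>&1 4>&2 > >(tee -a "${log}") 2>&1

echo "Goes to the terminal and the log"
echo "Error Count: 0" >> "${log}"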

How to run bash script while it returns code 0?

I have a bash script with many lines of code, and I need it to run while it returns $? == 0; if it hits an error, it should stop and exit with code 1.
The question is how to do it.
I tried using the set -e command, but Jenkins does not mark the build as failed; to Jenkins it looks like a success.
I also need to get the error message so I can show it in my Jenkins log.
I managed to get the error code (in my case it will be 126), but how do I get the error message?
main file
./fileWithError.sh
rc=$?; if [[ $rc != 0 ]]; then
    echo "exit ${rc}"
fi
fileWithError.sh
#!/bin/sh
set -e
echo "Test"
agjfsjgfshgd
echo "Test2"
echo "Test3"
Just add the command set -e to the beginning of the file
It should look something like this:
#!/bin/sh
set -e
#...Your code...
I think you just want:
#!/bin/sh
while fileWithError.sh; do
    sleep 1
done
echo fileWithError.sh failed!! >&2
Note that if the script is written well, the echo is redundant, as fileWithError.sh should already have written a decent error message. Also, the sleep may not be needed, but it is useful to prevent a tight loop if the script succeeds quickly.
You can get the explicit return value, but it requires a bit of refactoring.
#!/bin/sh
true
while test $? = 0; do fileWithError.sh; done
echo fileWithError.sh failed with status $?!! >&2
This is needed because, in the first construction, the value of $? after the loop would be the exit status of sleep, not of fileWithError.sh.
It's not quite so easy to get the error message as well as the error code.
How about this ...
#!/bin/bash
Msg=$(fileWithError.sh 2>&1)   # redirect all error messages to stdout
if [ "$?" -ne 0 ]              # Not Equal
then
    echo "$Msg"
    exit 1
fi
exit 0
You catch all messages created by fileWithError.sh, and if the program returns an error code, you already have the error message saved in a variable.
One disadvantage is that you temporarily store all messages created by fileWithError.sh until the error appears.
You can filter the error message with echo "$Msg" | tail -n 1, but it's not 100% safe.
You should also make one change in fileWithError.sh: replace set -e with trap "exit 1" ERR. This will exit the script on errors.
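A minimal sketch of fileWithError.sh with that change (the line-number report is an extra illustration, not required):
#!/bin/bash
# The ERR trap fires when a command fails: report where it happened on
# stderr and exit non-zero so callers (e.g. Jenkins) see the failure.
trap 'echo "error near line ${LINENO}" >&2; exit 1' ERR

echo "Test"
agjfsjgfshgd    # this failing command triggers the trap
echo "Test2"
echo "Test3"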
Hope this will help.

How to check dependency in bash script

I want to check whether nodejs is installed on the system or not. I am getting this error:
Error : command not found.
How can I fix it?
#!/bin/bash
if [ nodejs -v ]; then
    echo "nodejs found"
else
    echo "nodejs not found"
fi
You can use bash's command builtin:
if command -v nodejs >/dev/null 2>&1; then
    echo "nodejs found"
    echo "version: $(nodejs -v)"
else
    echo "nodejs not found"
fi
The name of the command is node, not nodejs. which prints the path to the command on stdout, if it exists:
if [ -n "$(which node 2>/dev/null)" ]; then
    echo "nodejs found"
else
    echo "nodejs not found"
fi
This is not what the OP asked for (nearly 3 years ago!), but for anyone who wants to check multiple dependencies:
#!/bin/bash
echo -n "Checking dependencies... "
for name in youtube-dl yad ffmpeg
do
    [[ $(which "$name" 2>/dev/null) ]] || { echo -en "\n$name needs to be installed. Use 'sudo apt-get install $name'"; deps=1; }
done
[[ $deps -ne 1 ]] && echo "OK" || { echo -en "\nInstall the above and rerun this script\n"; exit 1; }
Here's how it works. The first line prints a message saying that we are checking dependencies. The for name in ... loop then iterates over the dependencies we want to check; in this example, youtube-dl, yad and ffmpeg. For each one, the loop body checks for the existence of the command using the bash command which. If the dependency is already installed, no action is taken and we skip to the next command in the loop. If it does need to be installed, a message is printed, the variable deps is set to 1 (deps = dependencies), and we continue to the next command.

After all the commands are checked, the final line inspects the deps variable. If it is not set, it appends "OK" to the line that originally said "Checking dependencies... " and the script continues (assuming this is the first part of a larger script). If it is set, it prints a message asking you to install the missing dependencies and rerun the script, then exits.
The echo commands look complicated, but they are necessary to give clean output on the terminal: on a first run with missing dependencies the script lists them, and on a second run (after installing them) the check passes.
PS: If you save this as a script, you will need to be in the same directory as the script, make it executable, and run it as ./{name_of_your_script}.
You may check the existence of a program or function by
type nodejs &>/dev/null || echo "node js not installed"
However, there is a more sophisticated explanation available here.
I was thinking about this and came up with a few versions, then went on the internet to see what others have to say and ended up here. Albeit an old thread, I'll reply with my thoughts.
First, to answer the OP's original question: How can I fix it?
if node -v &>/dev/null; then
    echo "nodejs found"
else
    echo "nodejs not found"
fi
If you are simply checking if node works, this would do it. But it isn't a very generic way to do it.
Another way is to use command in a loop and collect the missing dependencies (in this example looking for the commands kind and kubectl).
for app in kind kubectl; do command -v "${app}" &>/dev/null || not_available+=("${app}"); done
(( ${#not_available[@]} > 0 )) && echo "Please install missing dependencies: ${not_available[*]}" 1>&2 && exit 1
Or less concisely expressed:
unset not_available  # script safety, though not strictly necessary
for app in kind kubectl; do
    if ! command -v "${app}" &>/dev/null; then
        not_available+=("${app}")
    fi
done

if (( ${#not_available[@]} > 0 )); then
    echo "Please install missing dependencies: ${not_available[*]}" 1>&2
    exit 1
fi
Then I figured I'd want a way to do the same without a loop, so I came up with this:
not_installed=$(command -V kind kubectl 2>&1 | awk -F': +' '$NF == "not found" {printf "%s ", $(NF-1)}')
[[ -n ${not_installed} ]] && echo "Please install missing dependencies: ${not_installed}" 1>&2 && exit 1
command -V can take any number of names and reports on each: found commands are described on stdout, while missing ones produce a "not found" message on stderr (I redirect both to stdout for the next command to parse).
awk sets the field separator to <colon><one or more spaces>, expressed as ': +'. If the last field is "not found", it prints the second-to-last field: the name of the command that is not installed.
Lastly, if the variable contains any data, report the missing dependencies to stderr and exit the script.
You can do dependency checks in a million ways, but here are a few alternatives that are more generally applicable and not too lengthy, while still being easy to follow. :]
If all you want is to check whether a command exists, use the which command. It prints the path to the command if it is found, and nothing if it is not:
if [ "$(which openssl)" = "" ]; then
    echo "This script requires openssl, please resolve and try again."
    exit 1
fi

Shell script with Wget - If else nested inside for loop

I'm trying to make a shell script that reads a list of download URLs to find if they're still active. I'm not sure what's wrong with my current script (I'm new to this) and any pointers would be a huge help!
user@pc:~/test# cat sites.list
http://www.google.com/images/srpr/logo3w.png
http://www.google.com/doesnt.exist
notasite
Script:
#!/bin/bash
for i in `cat sites.list`
do
    wget --spider $i -b
    if grep --quiet "200 OK" wget-log; then
        echo $i >> ok.txt
    else
        echo $i >> notok.txt
    fi
    rm wget-log
done
As is, the script outputs everything to notok.txt (the first google URL should go to ok.txt).
But if I run:
wget --spider http://www.google.com/images/srpr/logo3w.png -b
And then do:
grep "200 OK" wget-log
It greps the string without any problems. What noob mistake did I make with the syntax? Thanks m8s!
The -b option sends wget to the background, so you're doing the grep before wget has finished.
Try it without the -b option:
if wget --spider "$i" 2>&1 | grep --quiet "200 OK"; then
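Put together, an untested sketch of the corrected loop, switching to while read so no temporary wget-log file is needed:
#!/bin/bash
# Read URLs one per line; wget runs in the foreground, so grep sees its
# full output immediately and no wget-log cleanup is required.
while read -r i; do
    if wget --spider "$i" 2>&1 | grep --quiet "200 OK"; then
        echo "$i" >> ok.txt
    else
        echo "$i" >> notok.txt
    fi
done < sites.list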
There are a few issues with what you're doing.
Your for i in will have problems with lines that contain whitespace. Better to use while read to read individual lines of a file.
You aren't quoting your variables. What if a line in the file (or word in a line) starts with a hyphen? Then wget will interpret that as an option. You have a potential security risk here, as well as an error.
Creating and removing files isn't really necessary. If all you're doing is checking whether a URL is reachable, you can do that without temp files and the extra code to remove them.
wget isn't necessarily the best tool for this. I'd advise using curl instead.
So here's a better way to handle this...
#!/bin/bash

sitelist="sites.list"
curl="/usr/bin/curl"

# Some errors, for good measure...
if [[ ! -f "$sitelist" ]]; then
    echo "ERROR: Sitelist is missing." >&2
    exit 1
elif [[ ! -s "$sitelist" ]]; then
    echo "ERROR: Sitelist is empty." >&2
    exit 1
elif [[ ! -x "$curl" ]]; then
    echo "ERROR: I can't work under these conditions." >&2
    exit 1
fi

# Allow extended pattern matching (for the case..esac below)
shopt -s extglob

while read -r url; do
    # remove comments
    url=${url%%#*}

    # skip empty lines
    if [[ -z "$url" ]]; then
        continue
    fi

    # Handle just ftp, http and https.
    # We could do full URL pattern matching, but meh.
    case "$url" in
        @(f|ht)tp?(s)://*)
            # Get just the numeric HTTP response code
            http_code=$($curl -sL -w '%{http_code}' "$url" -o /dev/null)
            case "$http_code" in
                200|226)
                    # You'll get a 226 in ${http_code} from a valid FTP URL.
                    # If all you really care about is that the response is in
                    # the 200's, you could match against "2??" instead.
                    echo "$url" >> ok.txt
                    ;;
                *)
                    # You might want different handling for redirects (301/302).
                    echo "$url" >> notok.txt
                    ;;
            esac
            ;;
        *)
            # If we're here, we didn't get a URL we could read.
            echo "WARNING: invalid url: $url" >&2
            ;;
    esac
done < "$sitelist"
This is untested. For educational purposes only. May contain nuts.

Bash: Create a file if it does not exist, otherwise check to see if it is writeable

I have a bash program that will write to an output file. This file may or may not exist, but the script must check permissions and fail early. I can't find an elegant way to make this happen. Here's what I have tried.
set +e
touch $file
set -e
if [ $? -ne 0 ]; then exit;fi
I keep set -e on for this script so it fails if there is ever an error on any line. Is there an easier way to do the above script?
Why complicate things?
file=exists_and_writeable
if [ ! -e "$file" ]; then
    touch "$file"
fi

if [ ! -w "$file" ]; then
    echo cannot write to $file
    exit 1
fi
Or, more concisely,
( [ -e "$file" ] || touch "$file" ) && [ ! -w "$file" ] && echo cannot write to $file && exit 1
Rather than check $? on a different line, check the return value immediately like this:
touch file || exit
As long as your umask doesn't restrict the write bit from being set, you can just rely on the return value of touch
You can use -w to check if a file is writable (search for it in the bash man page).
if [[ ! -w $file ]]; then exit; fi
Why must the script fail early? By separating the writable test and the file open() you introduce a race condition. Instead, why not try to open (truncate/append) the file for writing, and deal with the error if it occurs? Something like:
$ echo foo > output.txt
$ if [ $? -ne 0 ]; then echo "Couldn't echo foo" >&2; exit 1; fi
As others mention, the "noclobber" option might be useful if you want to avoid overwriting existing files.
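A quick sketch of how noclobber behaves:
$ set -o noclobber
$ echo foo > new.txt     # succeeds: new.txt did not exist yet
$ echo foo > new.txt     # fails: "cannot overwrite existing file"
$ echo foo >| new.txt    # >| explicitly overrides noclobber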
Open the file for writing. In the shell, this is done with an output redirection. You can redirect the shell's standard output by putting the redirection on the exec built-in with no command name.
set -e
exec >shell.out # exit if shell.out can't be opened
echo "This will appear in shell.out"
Make sure you haven't set the noclobber option (which is useful interactively but often unusable in scripts). Use > if you want to truncate the file if it exists, and >> if you want to append instead.
If you only want to test permissions, you can run : >foo.out to create the file (or truncate it if it exists).
If you only want some commands to write to the file, open it on some other descriptor, then redirect as needed.
set -e
exec 3>foo.out
echo "This will appear on the standard output"
echo >&3 "This will appear in foo.out"
echo "This will appear both on standard output and in foo.out" | tee /dev/fd/3
(/dev/fd is not supported everywhere; it's available at least on Linux, *BSD, Solaris and Cygwin.)
