Trouble with error handling in my first bash script - linux

OK, so I am a total beginner with bash scripts and I am aware that the question is probably phrased a bit awkwardly, but I'll be as clear as I can!
I have written the following script to create a backup of repositories in a folder. The script is as follows:
#!/bin/bash
SVNREPO="/var/svn"
TEMP="/var/tmp"
BACKUP="/home/helix/backups"
cd $SVNREPO
if [ $# -eq 0 ]; then
for REPO in *; do
ARRAY+=($REPO)
done
else
for REPO in $@; do
ARRAY+=($REPO)
done
fi
for REPO in ${ARRAY[@]}; do
svnadmin dump $SVNREPO/$REPO -r HEAD | gzip > $TEMP/$REPO.svn.gzip
cp $TEMP/$REPO.svn.gzip $BACKUP/$REPO.svn.gzip
rm $TEMP/$REPO.svn.gzip
done
This script successfully produces .gzip backups of all of the repositories in '/var/svn' when the script is called with no arguments, and creates .gzip backups of the specific repositories that are passed as arguments. Great!
However, if the script is run with an argument that does not correspond to a repository that exists, then the program crashes with the error message: svnadmin: E000002: Can't open file '/var/svn/ada/format': No such file or directory. What I am trying to achieve is to catch this error and print a more user-friendly message to the console. I have been trying to do this using 'trap'.
First I added the following line:
trap 'echo ERROR! The repository or repositories that you are trying to backup do not exist!' ERR
...and then I pushed the error to /dev/null at this point in the final for loop:
svnadmin dump $SVNREPO/$REPO -r HEAD 2>/dev/null | gzip > $TEMP/$REPO.svn.gzip
I pushed to the /dev/null file at the place I did because this is where the program errors out. However, the script no longer seems to work. What am I doing wrong here? Is it an issue to do with having the 2>/dev/null in the middle of a line? If so, how could I refactor this code so that it doesn't require the pipe in the middle of the line?
Many thanks for any help, I hope my question is reasonably clear! To confirm, the final non-working code is as follows:
#!/bin/bash
SVNREPO="/var/svn"
TEMP="/var/tmp"
BACKUP="/home/helix/backups"
cd $SVNREPO
if [ $# -eq 0 ]; then
for REPO in *; do
ARRAY+=($REPO)
done
else
for REPO in $@; do
ARRAY+=($REPO)
done
fi
trap 'echo ERROR! The repository or repositories that you are trying to backup do not exist!' ERR
for REPO in ${ARRAY[@]}; do
svnadmin dump $SVNREPO/$REPO -r HEAD 2>/dev/null | gzip > $TEMP/$REPO.svn.gzip
cp $TEMP/$REPO.svn.gzip $BACKUP/$REPO.svn.gzip
rm $TEMP/$REPO.svn.gzip
done

I do not know exactly how the trap command works, but I will suggest another approach that might solve your problem:
First, before your for loop, add this line:
set -o pipefail
This means that when any command in a pipeline fails, the pipeline's exit code ($?) will reflect that failure.
On the line right after your svnadmin call, I would suggest adding this:
STATUS=$?
if [ $STATUS -ne 0 ]; then
echo "ERROR! Received error code $STATUS for repository '$REPO'."
continue
fi
You can of course alter the error message to your taste. The functionality should be clear: if svnadmin or gzip fails, it will print an error message and continue to the next item in the for loop. (The exit status is captured into STATUS immediately, because $? itself is overwritten by the [ ... ] test.)
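Putting the two pieces together, a minimal sketch of the rewritten loop (reusing the variables from the question) might look like this:
set -o pipefail
for REPO in ${ARRAY[@]}; do
# with pipefail, a failure of svnadmin propagates to the pipeline's exit code
svnadmin dump $SVNREPO/$REPO -r HEAD 2>/dev/null | gzip > $TEMP/$REPO.svn.gzip
STATUS=$?
if [ $STATUS -ne 0 ]; then
echo "ERROR! Received error code $STATUS for repository '$REPO'."
continue
fi
cp $TEMP/$REPO.svn.gzip $BACKUP/$REPO.svn.gzip
rm $TEMP/$REPO.svn.gzip
done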
Hope this helps.

Using @cdarke's suggestion of checking whether a file exists, I now have it working with the following code:
#!/bin/bash
SVNREPO="/var/svn"
TEMP="/var/tmp"
BACKUP="/home/helix/backups"
cd $SVNREPO
if [ $# -eq 0 ]; then
for REPO in *; do
ARRAY+=($REPO)
done
else
for REPO in $@; do
ARRAY+=($REPO)
done
fi
for REPO in ${ARRAY[@]}; do
if [ -f $SVNREPO/$REPO/format ]; then
svnadmin dump $SVNREPO/$REPO -r HEAD 2>/dev/null | gzip > $TEMP/$REPO.svn.gzip
cp $TEMP/$REPO.svn.gzip $BACKUP/$REPO.svn.gzip
rm $TEMP/$REPO.svn.gzip
else
echo ERROR! The repository $REPO does not exist. No backup has been made for this argument.
fi
done

Related

Silent while loop in bash

I am looking to create a bash script that keeps checking for a file in a directory and performs a certain operation on it. I am using a while loop; if the file does not exist, I want the while loop to stay quiet and keep checking the condition. Here is what I created, but it keeps throwing an error that the file is not found if the file is not there.
while [ ! -f /home/master/applications/tmp/mydata.txt ]
do
cat mydata.txt;
rm mydata.txt;
sleep 1; done
There are two issues in your implementation:
You should use the same (absolute or relative) path in your while loop test statement [ ! -f $file ] and in your cat and rm commands.
The cat command looks for the file in the current working directory (pwd), while your while statement checks an absolute path; hence your implementation is buggy and won't work as expected unless your pwd is /home/master/applications/tmp.
You need to move your cat command and rm command after the while block. It doesn't make sense to cat a file that doesn't exist. I think you misplaced those commands.
Try this:
file="/home/master/applications/tmp/mydata.txt"
while [ ! -f "$file" ]
do
sleep 1
done
cat "$file"
rm "$file"
EDIT
As per the suggestion from @Ivan, you could use until instead of while, as it suits your requirements better.
file="/home/master/applications/tmp/mydata.txt"
until [ -f "$file" ]; do sleep 1; done
cat "$file"
rm "$file"
Making a different assumption than abhiarora, I'll guess maybe you meant for the file to reappear, and you want it shown each time.
file=/home/master/applications/tmp/mydata.txt
while :
do if [[ -f "$file" ]]
then echo "$(<"$file")"
rm "$file"
fi
sleep 1
done
This creates an infinite loop. If that's NOT what you wanted, use abhiarora's solution.

Bash script to iterate contents of directory moving only the files not currently open by other process

I have people uploading files to a directory on my Ubuntu Server.
I need to move those files to the final location (another directory) only when I know these files are fully uploaded.
Here's my script so far:
#!/bin/bash
cd /var/uploaded_by_users
for filename in *; do
lsof $filename
if [ -z $? ]; then
# file has been closed, move it
else
echo "*** File is open. Skipping..."
fi
done
cd -
However, it's not working: it says some files are open when that's not true. I supposed $? would be 0 if the file was closed and 1 if it wasn't, but I think that's wrong.
I'm not a Linux expert, so I'm looking to learn how to implement this simple script, which will run as a cron job every minute.
[ -z $? ] checks whether $? is of zero length. Since $? will never be a null string, your check will always fail, and the else part will be executed.
You need to test for numeric zero, as below:
lsof "$filename" >/dev/null; lsof_status=$?
if [ "$lsof_status" -eq 0 ]; then
# file is open, skipping
else
# move it
fi
Or more simply (as Benjamin pointed out):
if lsof "$filename" >/dev/null; then
# file is open, skip
else
# move it
fi
Using negation, we can shorten the if statement (as dimo414 pointed out):
if ! lsof "$filename" >/dev/null; then
# move it
fi
You can shorten it even further, using &&:
for filename in *; do
lsof "$filename" >/dev/null && continue # skip if the file is open
# move the file
done
You may not need to worry about when the write is complete, if you are moving the file to a different location in the same file system. As long as the client is using the same file descriptor to write to the file, you can simply create a new hard link for the upload file, then remove the original link. The client's file descriptor won't be affected by one of the links being removed.
cd /var/uploaded_by_users
for f in *; do
ln "$f" /somewhere/else/"$f"
rm "$f"
done

Check if rsync command ran successful

The following bash-script is doing a rsync of a folder every hour:
#!/bin/bash
rsync -r -z -c /home/pi/queue root@server.mine.com:/home/foobar
rm -rf /home/pi/queue/*
echo "Done"
But I found out that my Pi had disconnected from the internet, so the rsync failed. The script ran the rm command anyway, deleting the folder's contents.
How can I determine whether the rsync command was successful, so that the folder is only removed if it was?
Usually, a Unix command returns 0 if it ran successfully, and non-zero otherwise.
Look at man rsync for exit codes that may be relevant to your situation, but I'd do it this way:
#!/bin/bash
rsync -r -z -c /home/pi/queue root@server.mine.com:/home/foobar && rm -rf /home/pi/queue/* && echo "Done"
This will rm and echo "Done" only if everything went fine.
Another way to do it would be to use the $? variable, which always holds the return code of the previous command:
#!/bin/bash
rsync -r -z -c /home/pi/queue root@server.mine.com:/home/foobar
if [ "$?" -eq "0" ]
then
rm -rf /home/pi/queue/*
echo "Done"
else
echo "Error while running rsync"
fi
see man rsync, section EXIT VALUES
Old question but I am surprised nobody has given the simple answer:
Use the --remove-source-files rsync option.
I think it is exactly what you need.
From the man page:
--remove-source-files sender removes synchronized files (non-dir)
Only files that rsync has fully successfully transferred are removed.
When you're unfamiliar with rsync, it is easy to confuse the --delete options with the --remove-source-files option. The --delete options remove files on the destination side. More info here:
https://superuser.com/questions/156664/what-are-the-differences-between-the-rsync-delete-options
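Applied to the command from the question, that would look something like this (each source file is removed only after rsync has successfully transferred it, so the separate rm is no longer needed):
rsync -r -z -c --remove-source-files /home/pi/queue root@server.mine.com:/home/foobar && echo "Done"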
You need to check the exit value of rsync:
#!/bin/bash
rsync -r -z -c /home/pi/queue root@server.mine.com:/home/foobar
if [[ $? -gt 0 ]]
then
# take failure action here
else
rm -rf /home/pi/queue/*
echo "Done"
fi
Set of result codes here:
http://linux.die.net/man/1/rsync

scp: how to find out that copying was finished

I'm using scp command to copy file from one Linux host to another.
I run the scp command on host1 to copy a file from host1 to host2. The file is quite big and it takes some time to copy.
On host2 the file appears as soon as copying starts, and I can do anything with it even while copying is still in progress.
Is there any reliable way to find out on host2 whether the copying has finished?
Off the top of my head, you could do something like:
touch tinyfile
scp bigfile tinyfile user@host:
Then when tinyfile appears you know that the transfer of bigfile is complete.
As pointed out in the comments, this assumes that scp will copy the files one by one, in the order specified. If you don't trust it, you could do them one by one explicitly:
scp bigfile user@host:
scp tinyfile user@host:
The disadvantage of this approach is that you would potentially have to authenticate twice. If this were an issue you could use something like ssh-agent.
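On host2, a minimal sketch of the corresponding wait (assuming both files land in the login directory) could be:
# block until the marker file shows up; bigfile is then safe to use
while [ ! -e tinyfile ]; do
sleep 1
done
echo "bigfile transfer is complete"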
On the sending side (host1), use a script like this:
#!/bin/bash
echo 'starting transfer'
scp FILE USER@DST_SERVER:DST_PATH
OUT=$?
if [ $OUT = 0 ]; then
echo 'transfer successful'
touch successful
scp successful USER@DST_SERVER:DST_PATH
else
echo 'transfer failed'
fi
On the receiving side (host2), make a script like this:
#!/bin/bash
SLEEP_TIME=30
MAX_CNT=10
CNT=0
while [[ ! -e successful && $CNT -lt $MAX_CNT ]]; do
((CNT++))
sleep $SLEEP_TIME
done;
if [[ -e successful ]]; then
echo 'successful'
rm successful
# do something with FILE
fi
With CNT and MAX_CNT you avoid an endless loop (in case the file successful is never transferred). The product of MAX_CNT and SLEEP_TIME should be equal to or greater than the expected transfer time; in this example it allows for up to 10 × 30 = 300 seconds.
A checksum (md5sum, sha256sum, sha512sum) of the local and remote files would tell you if they're identical.
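If you do have SSH access, a minimal sketch of that comparison (the paths here are hypothetical) could be:
# compare local and remote md5 checksums; the first 32 characters are the hash
MD5LOCAL=$(md5sum /path/to/bigfile | cut -c 1-32)
MD5REMOTE=$(ssh user@host2 "md5sum /path/to/bigfile" | cut -c 1-32)
if [ "$MD5LOCAL" == "$MD5REMOTE" ]; then
echo "Local and remote files match"
fi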
For the situation where you don't have SSH access to the remote system - like an FTP server - you can download the file after it's uploaded and compare the checksums. I do this for files I send from production scripts at work. Below is a snippet from the script in which I do this.
MD5SRC=$(md5sum $LOCALFILE | cut -c 1-32)
MD5TESTFILE=$(mktemp -p /ramdisk)
curl \
-o $MD5TESTFILE \
-sS \
-u $FTPUSER:$FTPPASS \
ftp://$FTPHOST/$REMOTEFILE
MD5DST=$(md5sum $MD5TESTFILE | cut -c 1-32)
if [ "$MD5SRC" == "$MD5DST" ]
then
echo "+Local and Remote files match!"
else
echo "-Local and Remote files don't match"
fi
If you use inotify-tools, the solution will look like this:
while ! inotifywait -e close $(dirname ${bigfile_fullname}) 2>/dev/null | \
grep -Eo "CLOSE $(basename ${bigfile_fullname})$">/dev/null
do true
done
echo "File ${bigfile_fullname} closed"
After some investigation and discussion of the problem on other forums, I have found one more solution. Maybe it can help somebody.
There is a command "lsof". It lists open files. During copying the file will be opened, so the command
lsof | grep filename
will return a non-empty result.
So you might want to make a while loop to wait until lsof returns nothing and proceed with your task.
Example:
# provide your file name here
f=<nameOfYourFile>
lsofresult=`lsof | grep $f | wc -l`
while [ $lsofresult != 0 ]; do
echo still copying file $f...
sleep 5
lsofresult=`lsof | grep $f | wc -l`
done; echo copying file $f is finished: `ls $f`
For the duplicate question, How to check if file has been scp 100% to the remote location, which was about an expect script: to know whether a file has been transferred completely, we can add expect 100%, i.e. something like this:
expect -c "
set timeout 1
spawn scp user@$REMOTE_IP:/tmp/my.file user@$HOST_IP:/home/.
expect yes/no { send yes\r ; exp_continue }
expect password: { send $SCP_PASSWORD\r }
expect 100%
sleep 1
exit
"
if [ -f "/home/my.file" ]; then
echo "Success"
fi
If avoiding a second SSH handshake is important, you can use something like the following:
ssh host cat \> bigfile \&\& touch complete < bigfile
Then wait for the "complete" file to get created on the remote end.
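A minimal sketch of that wait on the remote host (using the file names from the example) could be:
# poll until the sender has created the marker file
until [ -e complete ]; do
sleep 1
done
echo "bigfile is fully written"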

wget with errorlevel bash output

I want to create a bash file (.sh) which does the following:
I call the script like ./download.sh www.blabla.com/bla.jpg
The script then has to echo whether the file was downloaded or not...
How can I do this? I know I can use errorlevel but I'm new to linux so...
Thanks in advance!
Typically, applications on Linux set an exit status when they finish, which the shell stores in the variable $?. You can examine this return code and see whether wget reported an error.
#!/bin/bash
wget $1 2>/dev/null
export RC=$?
if [ "$RC" = "0" ]; then
echo $1 OK
else
echo $1 FAILED
fi
You could name this script download.sh. Change the permissions to 755 with chmod 755. Call it with the name of the file you wish to download. ./download.sh www.google.com
You could try something like:
#!/bin/sh
[ -n "$1" ] || {
echo "Usage: $0 [url to file to get]" >&2
exit 1
}
wget $1
[ $? -ne 0 ] && {
echo "Could not download $1" | mail -s "Uh Oh" you#yourdomain.com
echo "Aww snap ..." >&2
exit 1
}
# If we're here, it downloaded successfully, and will exit with a normal status
When making a script that will (likely) be called by other scripts, it is important to do the following:
Ensure argument sanity
Send e-mail, write to a log, or do something else so someone knows what went wrong
The >&2 simply redirects error messages to stderr, which allows a calling script to do something like this:
foo-downloader >/dev/null 2>/some/log/file.txt
Since it is a short wrapper, no reason to forsake a bit of sanity :)
This also allows you to selectively direct the output of wget to /dev/null, you might actually want to see it when testing, especially if you get an e-mail saying it failed :)
wget executes in a non-interactive way. This means that wget works in the background and you can't catch the return code with $?.
One solution is to handle the "--server-response" option, searching for the HTTP 200 status code.
Example:
wget --server-response -q -o wgetOut http://www.someurl.com
sleep 5
_wgetHttpCode=`cat wgetOut | gawk '/HTTP/{ print $2 }'`
if [ "$_wgetHttpCode" != "200" ]; then
echo "[Error] `cat wgetOut`"
fi
Note: wget needs some time to finish its work; for that reason I put the "sleep 5". This is not the best way to do it, but it worked OK for testing the solution.
