Rsync script does not continue after sync - linux

Problem: I have a simple custom backup script that is set to run, via udev, whenever my backup drive is detected. All is well until about halfway through the script, where it seems to hang after the rsync command. My code is below:
#!/bin/bash
#Mount the Backup Drive
wall "backup is starting"
mount -U f91b8373-6349-4de3-86e1-6a2557f2c3f7 /media/backupdrive
#Get updated package-list
mv /media/backupdrive/package-selections /media/backupdrive/package-selections.old
dpkg --get-selections >/media/backupdrive/package-selections
wall "pacakge list updated"
#Run Backup
mv /home/user/backup/rsync.log /home/user/backup/rsync.log.old
rsync --log-file=/media/backupdrive/backup/rsync.log -ravzX --delete --exclude /var/tmp --exclude /var/lock --exclude /var/run /home /etc /var /usr /media/backupdrive/backup
wall "rsync complete"
#Sync changes to disk and unmount
sync
cp /media/backupdrive/backup/rsync.log /home/user/backup/rsync.log
umount /media/backupdrive
wall "Backup is complete, the logfile can be viewed at /home/user/backup/rsync.log"
Question: What am I doing wrong here? Why does the script not continue after the rsync?
PS - The wall commands are not important to the script; I placed them at various points to troubleshoot. Yes, I'm new to this :)
Edit - I have tried removing the "z" option, as was suggested on a similar question, but it has made no difference.

This looks like the timeout on udev's RUN command.
Instead of running the backup script (which normally takes a long time to complete) directly from udev, you can run it from a separate process that udev merely activates.
For example, you can use the at command:
ACTION=="add", KERNEL=="sd*", ENV{ID_FS_UUID_ENC}=="f91b8373-6349-4de3-86e1-6a2557f2c3f7", RUN+="/home/steve/backup/backup_at.sh"
backup_at.sh:
#!/bin/sh
echo /home/steve/backup/backup.sh | at now
Or you can try running it in the background:
ACTION=="add", KERNEL=="sd*", ENV{ID_FS_UUID_ENC}=="f91b8373-6349-4de3-86e1-6a2557f2c3f7", RUN+="/home/steve/backup/backup.sh &"
but I have not tested this method.
From http://lists.freedesktop.org/archives/systemd-devel/2012-November/007390.html:
It's completely wrong to launch any long running task from a udev rule
and you should expect that it will be killed. If you need to launch a
process from a udev rule, use ENV{SYSTEMD_WANTS} to activate a
service.
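A sketch of that approach, on a systemd-based system: the udev rule only tags the device and requests a service, and the service runs the backup. The rule file and unit names here are illustrative:
/etc/udev/rules.d/99-backup.rules:
ACTION=="add", ENV{ID_FS_UUID_ENC}=="f91b8373-6349-4de3-86e1-6a2557f2c3f7", TAG+="systemd", ENV{SYSTEMD_WANTS}="backup.service"
/etc/systemd/system/backup.service:
[Unit]
Description=Backup to external drive

[Service]
Type=oneshot
ExecStart=/home/steve/backup/backup.sh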

Related

Change location of /etc/fstab

I have written a script which needs to read a few entries in /etc/fstab. I have tested the script by manually adding some entries in /etc/fstab and then restoring the file to its original contents, also manually. Now I would like to automate those tests and run them as a separate script. I do, however, not feel comfortable with the idea of having /etc/fstab altered by a script. I was thinking of making a backup copy of /etc/fstab, then altering it and finally restoring the original file after the tests are done. I would prefer it if I could temporarily alter the location of fstab.
Is there a way to alter the location of fstab to, say, /usr/local/etc/fstab so that when mount -a is run from within a script only the entries in /usr/local/etc/fstab are processed?
UPDATE:
I used bishop's solution by setting LIBMOUNT_FSTAB=/usr/local/etc/fstab. I have skimmed the man page of mount on several occasions in the past but I never noticed this variable. I am not sure if this variable has always been there and I simply overlooked it or if it had been added at some point. I am using mount from util-linux 2.27.1 and at least in this version LIBMOUNT_FSTAB is available and documented in the man-page. It is in the ENVIRONMENT section at the end. This will make my automated tests a lot safer in the future.
UPDATE2:
Since there has been some discussion whether this is an appropriate programming question or not, I have decided to write a small script which demonstrates the usage of LIBMOUNT_FSTAB.
#!/bin/bash
libmount=libmount_fstab
tmpdir="/tmp/test_${libmount}_folder" # temporary test folder
mntdir="$tmpdir/test_${libmount}_mountfolder" # mount folder for loop device
img="$tmpdir/loop.img" # dummy image for loop device
faketab="$tmpdir/alternate_fstab" # temporary, alternative fstab
# get first free loop device
loopdev=$(losetup -f)
# verify there is a free loop device
if [[ -z "$loopdev" ]]; then
    echo "Error: No free loop device" >&2
    exit 1
fi
# check that loop device is not managed by default /etc/fstab
if grep "^$loopdev" /etc/fstab ;then
echo "Error: $loopdev already managed by /etc/fstab" >&2
exit 1
fi
# make temp folders
mkdir -p "$tmpdir"
mkdir -p "$mntdir"
# create temporary, alternative fstab
echo "$loopdev $mntdir ext2 errors=remount-ro 0 1" > "$faketab"
# create dummy image for loop device
dd if=/dev/zero of="$img" bs=1M count=5 &>/dev/null
# setup loop device with dummy image
losetup "$loopdev" "$img" &>/dev/null
# format loop device so it can be mounted
mke2fs "$loopdev" &>/dev/null
# alter location for fstab
export LIBMOUNT_FSTAB="$faketab"
# mount loop device by using alternative fstab
mount "$loopdev" &>/dev/null
# verify loop device was successfully mounted
if mount | grep "^$loopdev" &>/dev/null; then
    echo "Successfully used alternative fstab: $faketab"
else
    echo "Failed to use alternative fstab: $faketab"
fi
# clean up
umount "$loopdev" &>/dev/null
losetup -d "$loopdev"
rm -rf "$tmpdir"
exit 0
My script primarily manages external devices which are not attached most of the time. I use loop devices to simulate external devices and test the functionality of my script, which saves a lot of time since I do not have to attach/reattach several physical devices. I think this proves that being able to use an alternative fstab is a very useful feature: it allows for scripting safe test scenarios whenever parsing/altering of fstab is required. In fact, I have decided to partially rewrite my script so that it can also use an alternative fstab, since most of the external devices are hardly ever attached to the system and their corresponding entries just clutter up /etc/fstab.
Refactor your code that modifies fstab contents into a single function, then test that function correctly modifies the dummy fstab files you provide it. Then you can confidently use that function as part of your mount pipeline.
function change_fstab {
    local fstab_path=${1:?Supply a path to the fstab file}
    # ... etc
}
change_fstab /etc/fstab && mount ...
Alternatively, set LIBMOUNT_FSTAB per the libmount docs:
LIBMOUNT_FSTAB=/path/to/fake/fstab mount ...
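For instance, a test run could look like this (a sketch; /tmp/fstab.test is an illustrative path, and change_fstab is the function stubbed above):
cp /etc/fstab /tmp/fstab.test               # work on a throwaway copy
change_fstab /tmp/fstab.test                # exercise the refactored function
LIBMOUNT_FSTAB=/tmp/fstab.test mount -a     # libmount reads the copy, not /etc/fstab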

Linux umount a device from a script running in the device itself

I have a mounted ISO image at the path:
/mnt/iso
Inside this ISO I have an install script, install.sh.
I run the installation script from the ISO, and at the end the script asks the user whether they want to unmount the ISO itself.
If the user presses "y", the script executes the following code:
cd /
umount /mnt/iso
echo "Installation completed!"
Unfortunately, when the script tries to execute the umount, there is an error:
umount: /mnt/iso: device is busy
I suppose it is because the virtual device is kept busy by the script itself.
How can I make it work?
Thanks
Use the -l (--lazy) switch to umount, which performs a lazy unmount: the filesystem is only fully unmounted once it is no longer in use. The full description in the manual page (this is a Linux-specific option) is:
Lazy unmount. Detach the filesystem from the filesystem hierarchy
now, and cleanup all references to the filesystem as soon as it is not
busy anymore. (Requires kernel 2.4.11 or later.)
TomH's solution will resolve the issue if your kernel is recent enough (2.4.11 or later, per the man page). Otherwise, the comment by Simone Palazzo is your best bet: you are unmounting something from a script located inside the area you are unmounting, so if you run the script from the root directory instead, the umount will succeed.
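For instance, assuming the installer is the only thing keeping /mnt/iso busy, the end of install.sh could look like this (a minimal sketch based on the lazy-unmount suggestion above):
cd /                 # leave the ISO so the shell's working directory no longer pins it
umount -l /mnt/iso   # lazy unmount: detach now, clean up once nothing uses it anymore
echo "Installation completed!"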

Detecting USB Thumb Drive when Ready in Linux Shell Script

I am a Windows admin and dev; I do not generally work with Linux, so forgive me if this is in some way obvious.
I have a not-so-good Linux box running some older version of openSUSE, and a script that unmounts the USB thumb drive, formats it, and then waits for the device to become ready again before running another script that does a copy/MD5 checksum verification on the source and destination files to ensure the copy was valid. The problem is that on one box the USB thumb drive does not become ready after the format in a consistent way; it takes anywhere from 1 to 2+ minutes before I can access the drive via /media/LABELNAME.
The direct path is /dev/sdb, but of course I cannot copy the files via that path directly. Here is my shell script as it stands:
#!/bin/bash
set -e
echo "Starting LABELNAME.\n\nUnmounting /dev/sdb/"
umount /dev/sdb
echo "Formatting /dev/sdb/"
mkfs.vfat -I -F32 -n "LABELNAME" /dev/sdb
echo "Waiting on remount..."
sleep 30
echo "Format complete. Running make master."
perl /home/labelname_master.20120830.pl
Any suggestions? How might I wait for the drive to become ready and detect it? I have seen Detecting and Writing to a USB Key / Thumb Drive Automatically, but quite frankly I don't even know what that answer means.
It seems that you have some automatic mounting service running which detects the flash disk and mounts the partition. However, you already know what the partition is, so I recommend that you simply mount the disk in your script, choosing a suitable mount point yourself.
mkfs.vfat -I -F32 -n "LABELNAME" /dev/sdb
echo "Format complete, remounting"
mount /dev/sdb $mountpoint #<-- you would choose $mountpoint
echo "Running make master."
perl /home/labelname_master.20120830.pl
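A fuller sketch of that approach might look like this; /mnt/labelname is an illustrative mount point the script creates for itself:
#!/bin/bash
set -e
mountpoint=/mnt/labelname        # our own mount point instead of /media/LABELNAME
mkdir -p "$mountpoint"
umount /dev/sdb || true          # ignore the error if it was not mounted
mkfs.vfat -I -F32 -n "LABELNAME" /dev/sdb
mount /dev/sdb "$mountpoint"     # mount it ourselves instead of waiting for the automounter
perl /home/labelname_master.20120830.pl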

Linux mount fails with error Transport endpoint not connected

From time to time, for reasons unknown, the Amazon S3 FUSE mount on a Linux server fails throughout the day. The only resolution is to unmount and then mount the directory again. I tried writing the following shell script: it worked when I unmounted the directory manually, but I learned there must be some other "state" in which the link fails without actually being unmounted.
Original error:
[root@app3 mnt]# cd s3fs
[root@app3 s3fs]# ls
ls: cannot access amazon: Transport endpoint is not connected
amazon
[root@app3 s3fs]# umount amazon
[root@app3 s3fs]# mount amazon/
Shell script attempt to check the mount and remount it if it has failed (it worked in manual tests, but not when the failure actually occurred):
#!/bin/bash
cat /etc/mtab | grep /mnt/$1 >/dev/null
if [ "$?" -eq "0" ]; then
echo /mnt/$1 is mounted.
else
echo /mnt/$1 is not mounted at this time.
echo remounting now...
umount /mnt/$1
mount /mnt/$1
fi
Why does the shell script work when I manually unmount the directory and run the test, yet when the transport endpoint fails the test still reports the directory as mounted and the remount never happens?
What is the best way to solve this?
I know this is old but it might help others facing this issue.
We had a similar problem with our bucket being unmounted randomly and getting the 'Transport endpoint is not connected' error.
Instead of using "cat /etc/mtab", I use "df -hT", and that works with my script. The problem is that the mount gets stuck in a weird, half-unmounted state in which mtab still sees it as mounted; I still don't know why.
This is the code I'm using:
#!/bin/bash
if [ $(df -hT | grep -c s3fs) != 1 ]
then
    # unmount it first
    umount /path/to/mounted/bucket;
    # remount it
    /usr/local/bin/s3fs bucket-name /path/to/mount/bucket -o noatime -o allow_other;
    echo "s3fs is down";
    # maybe send email here to let you know it went down
fi
Also, make sure you run your script as root; otherwise it won't be able to unmount/remount.
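For example, saved as a script and run from root's crontab every five minutes (the path, schedule and log file are illustrative):
*/5 * * * * /usr/local/bin/check_s3fs.sh >> /var/log/check_s3fs.log 2>&1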

Rsync cronjob that will only run if rsync isn't already running

I have checked for a solution here but cannot seem to find one. I am dealing with a very slow WAN connection, about 300 kB/sec. For my downloads I am using a remote box and then downloading the files to my house. I am trying to run a cronjob that rsyncs two directories between my remote and local servers every hour. I got everything working, but if there is a lot of data to transfer, the rsyncs overlap and end up creating two instances of the same file, so duplicate data gets sent.
Instead, I want to call a script that runs my rsync command, but only if rsync isn't already running.
The problem with creating a "lock" file, as suggested in a previous solution, is that the lock file might already exist if the script responsible for removing it terminates abnormally.
This could happen, for example, if the user terminates the rsync process, or due to a power outage. Instead, one should use flock, which does not suffer from this problem.
As it happens, flock is also easy to use, so the solution would simply look like this:
flock -n lock_file -c "rsync ..."
The command after the -c option is only executed if no other process holds a lock on lock_file. If the locking process terminates for any reason, the lock on lock_file is released. The -n option tells flock to be non-blocking, so if another process already holds the lock, nothing will happen.
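For the hourly cronjob in the question, that could look something like this (a sketch; the lock file path, remote host and directories are illustrative):
0 * * * * flock -n /home/myhomedir/rsyncjob.lock -c "rsync -avz remotehost:/remote/dir/ /local/dir/" >> /home/myhomedir/rsyncjob.log 2>&1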
In the script you can create a "lock" file. If the file exists, the cronjob should skip the run; otherwise it should proceed. Once the script completes, it should delete the lock file.
if [ -e /home/myhomedir/rsyncjob.lock ]
then
    echo "Rsync job already running...exiting"
    exit
fi
touch /home/myhomedir/rsyncjob.lock
#your code in here
#delete lock file at end of your job
rm /home/myhomedir/rsyncjob.lock
To use the lock file example given by @User above, a trap should be used to ensure that the lock file is removed when the script exits for any reason.
if [ -e /home/myhomedir/rsyncjob.lock ]
then
    echo "Rsync job already running...exiting"
    exit
fi
touch /home/myhomedir/rsyncjob.lock
#delete lock file at end of your job
trap 'rm /home/myhomedir/rsyncjob.lock' EXIT
#your code in here
This way the lock file will be removed even if the script exits before reaching its end.
A simple solution without using a lock file is to just do this:
pgrep rsync > /dev/null || rsync -avz ...
This will work as long as it is the only rsync job you run on the server, and you can then run this directly in cron, but you will need to redirect the output to a log file.
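For example, as a cron entry with the output redirected (the host, paths, and log file are illustrative):
0 * * * * pgrep rsync > /dev/null || rsync -avz remotehost:/data/ /local/data/ >> /var/log/rsync-cron.log 2>&1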
If you do run multiple rsync jobs, you can get pgrep to match against the full command line with a pattern like this:
pgrep -f 'rsync.*/data' > /dev/null || rsync -avz --delete /data/ otherhost:/data/
pgrep -f 'rsync.*/www' > /dev/null || rsync -avz --delete /var/www/ otherhost:/var/www/
As a definitive solution, kill any running rsync processes from the crontab before the new one starts.
