Bash Script - umount a device, but don't fail if it's not mounted? - linux

I'm writing a bash script and I have errexit set, so that the script will die if any command doesn't return a 0 exit code, i.e. if any command doesn't complete successfully. This is to make sure that my bash script is robust.
I have to mount some filesystems, copy some files over, then umount them. I'm putting a umount /mnt/temp at the start so that anything left mounted gets unmounted before the script does anything else. However, if it's not mounted then umount will fail and stop my script.
Is it possible to do a umount --dont-fail-if-not-mounted /mnt/temp? So that it will return 0 if the device isn't mounted? Like rm -f?

The standard trick to ignore the return code is to wrap the command in a boolean expression that always evaluates to success:
umount .... || /bin/true
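For instance, in a script running under errexit, a minimal sketch (assuming the /mnt/temp mount point from the question) would be:
#!/bin/bash
set -e
# '|| true' swallows any umount failure, so errexit never triggers here
umount /mnt/temp || true
Note that true is a shell builtin, so a plain || true works just as well as || /bin/true.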

Assuming that your umount returns 1 when the device isn't mounted, you can do it like this:
umount … || [ $? -eq 1 ]
Then bash will treat it as success if umount returns 0 or 1 (i.e. it unmounts successfully or the device isn't mounted), but will stop the script if any other code is returned (e.g. you have no permission to unmount the device).
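Under errexit that looks like this (a minimal sketch, assuming exit code 1 really does mean "not mounted" on your system):
#!/bin/bash
set -e
# succeed on exit code 0 (unmounted) or 1 (was not mounted);
# any other code (e.g. permission denied) still aborts the script
umount /mnt/temp || [ $? -eq 1 ]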

Ignoring exit codes isn't really safe as it won't distinguish between something that is already unmounted and a failure to unmount a mounted resource.
I'd recommend testing that the path is mounted with mountpoint which returns 0 if and only if the given path is a mounted resource.
This script will exit with 0 if the given path was not mounted; otherwise it gives the exit code from umount.
#!/bin/sh
if mountpoint -q "$1"; then
    umount "$1"
fi
You can also do it as a one-liner.
! mountpoint -q "$mymount" || umount "$mymount"
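Wrapped up as a small helper for reuse (a sketch; the function name is just illustrative):
# unmount a path only if it is currently mounted; succeed either way
umount_if_mounted() {
    ! mountpoint -q "$1" || umount "$1"
}
umount_if_mounted /mnt/temp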

I just found the ":" (the shell's null command, which always succeeds) more useful, and wanted a similar solution that lets the script know what's happening:
umount ...... || { echo "umount failed but not to worry" ; : ; }
This returns true and prints the message, even though the umount failed.

Related

Correct way to switch_root

I'm having a ton (and a half) of problems setting up booting for my embedded target; the situation is:
I have an embedded target with a rather small, but reliable, Flash (16 MB) and a possibly large (currently 8 GB), but rather unreliable, SD card.
Unreliability of the SD card is mainly due to a hardware setup I cannot change (direct connection to the power supply, so NO WAY to hardware-reset it if it wakes up "badly").
This is mainly affecting u-boot (Linux handling seems much more stable).
Since I need relatively high reliability, even during/after software updates, I opted for a dual system (update the dormant one, then reboot) with a "recovery" fallback.
Unfortunately I do not have enough space in Flash for a proper initramfs plus a full recovery system, so I'm trying to couple the startup system with recovery.
I thus have a full flash system with a small /init script choosing startup mode.
/init script is something like:
#!/bin/ash
set -x
export PATH=/bin:/sbin:/usr/bin:/usr/sbin
boot_prod() {
    mount -t proc none /proc
    mount -t sysfs none /sys
    mount $1 /mnt && [ -x /mnt/sbin/init ] || return 1
    echo "switching to $1"
    cd /mnt
    mount --move /sys sys
    umount /proc # /proc is remounted by BusyBox /etc/inittab
    mount --move /dev dev
    config_set sys $2
    pivot_root . mnt
    umount mnt
    exec chroot . sbin/init <dev/console >dev/console 2>&1
}
case $tryboot in
    A)
        boot_prod /dev/mmcblk0p6 A
        ;;
    B)
        boot_prod /dev/mmcblk0p7 B
        ;;
    R)
        [ -x /mnt/sbin/init ] && exec /mnt/sbin/init
        ;;
    *)
        ;;
esac
echo "Could not boot cleanly; running a shell"
mount -t proc none /proc
mount -t sysfs none /sys
exec setsid cttyhack sh
This is clearly started with bootargs containing tryboot=A/B/R.
I am not using Busybox switch_root because this is not a true initramfs, but a SquashFS living on Flash; my current u-Boot environment includes:
BOOT_A_GOOD=y
BOOT_CURRENT=B
SYSTEM_R=/dev/mtdblock5
boot_a=echo "Loading System A";part=A;run boot_x
boot_b=echo "Loading System B";part=B;run boot_x
boot_now=if test "${BOOT_CURRENT}" = A; then run boot_a; elif test "${BOOT_CURRENT}" = B; then run boot_b; fi; if env exists BOOT_A_GOOD; then run boot_a; fi; if env exists BOOT_B_GOOD; then run boot_b; fi; run boot_r
boot_r=echo "Loading Recovery";part=R;run boot_x
boot_x=setenv bootargs "${default_bootargs} mtdparts=${mtdparts} root=${part}" && bootm bc050000
bootcmd=run boot_now
bootdelay=2
default_bootargs=earlyprintk rootwait console=ttyS2,115200
mtdids=nor0=spi0.0
mtdparts=spi0.0:312k(u-boot),4k(env),4k(factory),2368k(kernel),-(filesystem)
The current problem is that the system dies at exec chroot . sbin/init ... with a "Kernel panic - not syncing: Attempted to kill init!" error, without even printing the culprit "+ exec chroot . sbin/init <dev/console >dev/console 2>&1" trace line.
Note 1: I made sure sbin/init exists on the new root fs (it is a symlink to bin/busybox).
Note 2: The error persists even if I comment out the preceding umount mnt.
What am I doing wrong?

Change location of /etc/fstab

I have written a script which requires reading a few entries in /etc/fstab. I have tested the script by manually adding some entries to /etc/fstab and then restoring the file to its original contents, also manually. Now I would like to automate those tests and run them as a separate script. I do not, however, feel comfortable with the idea of having /etc/fstab altered by a script. I was thinking of making a backup copy of /etc/fstab, then altering it and finally restoring the original file after the tests are done, but I would prefer it if I could temporarily alter the location of fstab.
Is there a way to alter the location of fstab to, say, /usr/local/etc/fstab so that when mount -a is run from within a script only the entries in /usr/local/etc/fstab are processed?
UPDATE:
I used bishop's solution by setting LIBMOUNT_FSTAB=/usr/local/etc/fstab. I have skimmed the man page of mount on several occasions in the past but I never noticed this variable. I am not sure if this variable has always been there and I simply overlooked it or if it had been added at some point. I am using mount from util-linux 2.27.1 and at least in this version LIBMOUNT_FSTAB is available and documented in the man-page. It is in the ENVIRONMENT section at the end. This will make my automated tests a lot safer in the future.
UPDATE2:
Since there has been some discussion whether this is an appropriate programming question or not, I have decided to write a small script which demonstrates the usage of LIBMOUNT_FSTAB.
#!/bin/bash
libmount=libmount_fstab
tmpdir="/tmp/test_${libmount}_folder"         # temporary test folder
mntdir="$tmpdir/test_${libmount}_mountfolder" # mount folder for loop device
img="$tmpdir/loop.img"                        # dummy image for loop device
faketab="$tmpdir/alternate_fstab"             # temporary, alternative fstab
# get first free loop device
loopdev=$(losetup -f)
# verify there is a free loop device
if [[ -z "$loopdev" ]]; then
    echo "Error: No free loop device" >&2
    exit 1
fi
# check that loop device is not managed by default /etc/fstab
if grep "^$loopdev" /etc/fstab; then
    echo "Error: $loopdev already managed by /etc/fstab" >&2
    exit 1
fi
# make temp folders
mkdir -p "$tmpdir"
mkdir -p "$mntdir"
# create temporary, alternative fstab
echo "$loopdev $mntdir ext2 errors=remount-ro 0 1" > "$faketab"
# create dummy image for loop device
dd if=/dev/zero of="$img" bs=1M count=5 &>/dev/null
# setup loop device with dummy image
losetup "$loopdev" "$img" &>/dev/null
# format loop device so it can be mounted
mke2fs "$loopdev" &>/dev/null
# alter location for fstab
export LIBMOUNT_FSTAB="$faketab"
# mount loop device by using alternative fstab
mount "$loopdev" &>/dev/null
# verify loop device was successfully mounted
if mount | grep "^$loopdev" &>/dev/null; then
    echo "Successfully used alternative fstab: $faketab"
else
    echo "Failed to use alternative fstab: $faketab"
fi
# clean up
umount "$loopdev" &>/dev/null
losetup -d "$loopdev"
rm -rf "$tmpdir"
exit 0
My script primarily manages external devices which are not attached most of the time. I use loop devices to simulate external devices to test the functionality of my script. This saves a lot of time since I do not have to attach/reattach several physical devices. I think this proves that being able to use an alternative fstab is a very useful feature and allows for scripting safe test scenarios whenever parsing/altering of fstab is required. In fact, I have decided to partially rewrite my script so that it can also use an alternative fstab. Since most of the external devices are hardly ever attached to the system, their corresponding entries would just clutter up /etc/fstab.
Refactor your code that modifies fstab contents into a single function, then test that function correctly modifies the dummy fstab files you provide it. Then you can confidently use that function as part of your mount pipeline.
function change_fstab {
    local fstab_path=${1:?Supply a path to the fstab file}
    # ... etc
}
change_fstab /etc/fstab && mount ...
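For instance, the tests can point the function at a throwaway copy instead of the real file (a sketch; the temporary path and the asserted entry are placeholders):
# exercise change_fstab against a dummy fstab, never the real one
cp /etc/fstab /tmp/fstab.test
change_fstab /tmp/fstab.test
grep -q "/dev/sdb1" /tmp/fstab.test || echo "change_fstab did not add the entry" >&2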
Alternatively, set LIBMOUNT_FSTAB per the libmount docs:
LIBMOUNT_FSTAB=/path/to/fake/fstab mount ...
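Since the question is specifically about mount -a, the same variable applies there too, as mount(8) documents in its ENVIRONMENT section:
LIBMOUNT_FSTAB=/usr/local/etc/fstab mount -a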

fusermount: umount failed: Invalid argument

When I try to umount the FUSE file system, I get an error:
root@ubuntu:/home/fufs/src# fusermount -u /tmp/kpfss
fusermount: failed to unmount /tmp/kpfss: Invalid argument
root@ubuntu:/home/fufs/src# fusermount -z -u /tmp/kpfss
fusermount: failed to unmount /tmp/kpfss: Invalid argument
How can I umount the file system? Thanks.
Had this problem before and you may find it helps to use umount thus:
umount -f /tmp/kpfss # or whatever the mount point is
When I've seen this issue, there was a dropped connection to the remote server, and trying to access the mount point locked the shell up. The shell process couldn't even be killed.
Using umount seemed to help sort that.
Often while developing FUSE filesystems, I've experienced this when the FUSE filesystem locks up in an infinite while loop or has segfaulted somehow. The only way I know how to free it is to ps -ef | grep name_of_fuse_filesystem_process and kill the corresponding pid.
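A minimal sketch of that recovery (name_of_fuse_filesystem_process is a placeholder for your FUSE binary):
# kill the stuck FUSE process, then lazily detach its mount point
pkill -9 -f name_of_fuse_filesystem_process
fusermount -u -z /tmp/kpfss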

Linux mount fails with error Transport endpoint not connected

From time to time, for reasons unknown, the Amazon S3 FUSE mount on a Linux server fails throughout the day. The only resolution is to umount and then mount the directory again. I tried writing the following shell script; it worked and remounted when I unmounted manually, but I learned there must be some other "state" when a link fails but is not actually unmounted.
Original error:
[root@app3 mnt]# cd s3fs
[root@app3 s3fs]# ls
ls: cannot access amazon: Transport endpoint is not connected
amazon
[root@app3 s3fs]# umount amazon
[root@app3 s3fs]# mount amazon/
Shell script attempt to check mount and remount if failed (worked in manual tests but failed):
#!/bin/bash
cat /etc/mtab | grep /mnt/$1 >/dev/null
if [ "$?" -eq "0" ]; then
    echo /mnt/$1 is mounted.
else
    echo /mnt/$1 is not mounted at this time.
    echo remounting now...
    umount /mnt/$1
    mount /mnt/$1
fi
Why would the shell script work if I manually unmount the directory and run test, but when transport endpoint fails the test returns true and remount doesn't happen?
What is the best way to solve this?
I know this is old but it might help others facing this issue.
We had a similar problem with our bucket being unmounted randomly and getting the 'Transport endpoint is not connected' error.
Instead of using "cat /etc/mtab", I use "df -hT", and it works with my script. The problem is that the mount gets stuck in a weird state of being half unmounted, while "mtab" still sees it as mounted; I still don't know why.
This is the code I'm using:
#!/bin/bash
if [ $(df -hT | grep -c s3fs) != 1 ]
then
    # unmount it first
    umount /path/to/mounted/bucket;
    # remount it
    /usr/local/bin/s3fs bucket-name /path/to/mount/bucket -o noatime -o allow_other;
    echo "s3fs is down";
    # maybe send email here to let you know it went down
fi
Also make sure you run your script as root, otherwise it won't be able to unmount/remount.
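To run the check automatically, a root cron entry along these lines would do (a sketch; the script path and the five-minute interval are assumptions):
# /etc/crontab: check the s3fs mount every 5 minutes as root
*/5 * * * * root /usr/local/bin/check_s3fs_mount.sh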

How can I check a file exists and execute a command if not?

I have a daemon written in Python. When it is running, it has a PID file located at /tmp/filename.pid. If the daemon isn't running, the PID file doesn't exist.
On Linux, how can I check to ensure that the PID file exists and if not, execute a command to restart it?
The command would be
python daemon.py restart
which has to be executed from a specific directory.
[ -f /tmp/filename.pid ] || python daemon.py restart
-f checks if the given path exists and is a regular file (just -e checks if the path exists)
the [ ] performs the test and returns 0 on success, 1 otherwise
the || is a C-like "or": if the command on the left fails, the command on the right is executed.
So the final statement says, if /tmp/filename.pid does NOT exist then start the daemon.
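Since the restart has to run from a specific directory, a subshell keeps the caller's working directory untouched (a sketch; /path/to/daemon is a placeholder):
[ -f /tmp/filename.pid ] || (cd /path/to/daemon && python daemon.py restart)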
test -f filename && daemon.py restart || echo "File doesn't exist"
If it is bash scripting you are wondering about, something like this would work:
if [ ! -f "$FILENAME" ]; then
    python daemon.py restart
fi
A better option may be to look into lockfile.
The other answers are fine for detecting the existence of the file. However for a complete solution you probably should check that the PID in the pidfile is still running, and that it's your program.
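A hedged sketch of that fuller check, assuming a Linux /proc filesystem (the daemon.py match is illustrative):
PIDFILE=/tmp/filename.pid
pid=$(cat "$PIDFILE" 2>/dev/null)
# kill -0 sends no signal; it only tests whether the PID is still alive
if [ -z "$pid" ] || ! kill -0 "$pid" 2>/dev/null \
   || ! grep -q daemon.py "/proc/$pid/cmdline"; then
    python daemon.py restart
fi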
Another approach to solving the problem is a script that ensures that your daemon "stays" alive...
Something like this (note: signal handling should be added for proper startup/shutdown):
PIDFILE="/path/to/pidfile"
if [ -f "$PIDFILE" ]; then
    echo "Pid file exists!"
    exit 1
fi
while true; do
    # the server writes its own pid file
    python your-server.py
    # force removal of pid in case of unexpected death
    rm -f "$PIDFILE"
    # sleep for 2 seconds
    sleep 2
done
In this way, the server will stay alive even if it dies unexpectedly.
You can also use a ready-made solution like Monit.
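For reference, a minimal Monit stanza for this kind of daemon might look like the following (the paths are assumptions):
check process mydaemon with pidfile /tmp/filename.pid
    start program = "/usr/bin/python /path/to/daemon.py restart"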
ls /tmp/filename.pid
It returns true (exit status 0) if the file exists, and false (non-zero) if it does not.
