Bash script doesn't wait until commands have been properly executed - linux

I am working on a very simple script but for some reason parts of it seem to run asynchronously.
singlePartDevice() {
    # http://www.linuxquestions.org/questions/linux-software-2/removing-all-partition-from-disk-690256/
    # http://serverfault.com/questions/257356/mdadm-on-ubuntu-10-04-raid5-of-4-disks-one-disk-missing-after-reboot
    # Create single partition
    parted -s "$1" mklabel msdos
    # Find size of disk
    v_disk=$(parted -s "$1" print|awk '/^Disk/ {print $3}'|sed 's/[Mm][Bb]//')
    parted -s "$1" mkpart primary ext3 4096 ${v_disk}
    parted -s "$1" set 1 raid on
    return 0
}
singlePartDevice "/dev/sdc"
singlePartDevice "/dev/sdd"
#/dev/sdc1 exists but /dev/sdd1 does not exist
sleep 5s
#/dev/sdc1 exists AND /dev/sdd1 does also exist
As you can see, before the call to sleep the script has only partially finished its job. How do I make my script wait until parted has successfully done its job?

(I am assuming that you are working on Linux due to the links in your question)
I am not very familiar with parted, but I believe that the partition device nodes are not created directly by it - they are created by udev, which is by nature an asynchronous procedure:
parted creates a partition
the kernel updates its internal state
the kernel notifies the udev daemon (udevd)
udevd checks its rule files (usually under /etc/udev/rules.d/) and creates the appropriate device nodes
This procedure allows for clear separation of the device node handling policy from the kernel, which is a Good Thing (TM). Unfortunately, it also introduces relatively unpredictable delays.
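If you want to watch this chain of events as it happens, udevadm can print both the kernel and udev events as they arrive (assuming a reasonably modern udev):
udevadm monitor --kernel --udev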
A possible way to handle this is to have your script wait for the device nodes to appear:
while [ ! -e "/dev/sdd1" ]; do sleep 1; done
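If waiting forever is a concern, a bounded variant might look like this (a sketch; the 30-second ceiling and the function name are arbitrary choices):
waitForDevice() {
    local dev=$1 tries=30
    until [ -b "$dev" ]; do
        tries=$((tries - 1))
        if [ "$tries" -le 0 ]; then
            echo "Timed out waiting for $dev" >&2
            return 1
        fi
        sleep 1
    done
}
waitForDevice /dev/sdd1 || exit 1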

Assuming all you want to do is ensure that the partitions are created before proceeding, there are a couple of different approaches:
Check whether the parted process has completed before moving to the next step.
Check if the partition device nodes exist before moving to the next step. E.g.:
until [ -b /dev/sdc1 ] && [ -b /dev/sdd1 ]; do
    sleep 5
done

Related

Non-persistent system wide environment variables

I have been looking for a way to set up an environment variable that does not persist across reboots but is still accessible system-wide. Anyone with a solution (bash)?
The reason is that I want crontab etc. to use these variables (after they have been set once in a single session). But if the power is cut or the disk is physically removed, the variable (a password) must not be found in the file system. (It's a Raspberry Pi backup server.)
So far I have considered using Screen (with no real luck), and am considering a script in /etc/profile.d.
So store the password in memory, which is cleared after power loss; for example, use shared memory or a tmpfs filesystem. First, a script to set up the password:
#!/bin/bash
# setup_password.sh
if ! findmnt /srv/mypassword >/dev/null; then
    mkdir -p /srv/mypassword                 # mount point must exist
    mount -t tmpfs tmpfs /srv/mypassword     # -t tmpfs: an in-memory filesystem
fi
printf "%s\n" "$1" > /srv/mypassword/password.txt
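Usage might look like this (a hypothetical invocation; the script needs root because it mounts a filesystem):
sudo ./setup_password.sh 'S3cr3t!'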
and the script you execute from cron:
#!/bin/bash
if ! password=$(cat /srv/mypassword/password.txt); then
    echo "ERROR: password not setup" >&2
    exit 2
fi
: continue and use the password "$password"
If your /tmp is a tmpfs, as it probably is, just store the password in /tmp...
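To verify that /tmp really is a tmpfs before relying on that, a quick check could be:
[ "$(findmnt -n -o FSTYPE /tmp)" = "tmpfs" ] && echo "/tmp is tmpfs"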

Change location of /etc/fstab

I have written a script which needs to read a few entries in /etc/fstab. I have tested the script by manually adding some entries to /etc/fstab and then restoring the file to its original contents, also manually. Now I would like to automate those tests and run them as a separate script. I do, however, not feel comfortable with the idea of having my tests alter /etc/fstab. I was thinking of making a backup copy of /etc/fstab, then altering it and finally restoring the original file after the tests are done, but I would prefer it if I could temporarily alter the location of fstab.
Is there a way to alter the location of fstab to, say, /usr/local/etc/fstab so that when mount -a is run from within a script only the entries in /usr/local/etc/fstab are processed?
UPDATE:
I used bishop's solution by setting LIBMOUNT_FSTAB=/usr/local/etc/fstab. I have skimmed the man page of mount on several occasions in the past, but I never noticed this variable. I am not sure if it has always been there and I simply overlooked it, or if it was added at some point. I am using mount from util-linux 2.27.1, and at least in this version LIBMOUNT_FSTAB is available and documented in the man page, in the ENVIRONMENT section at the end. This will make my automated tests a lot safer in the future.
UPDATE2:
Since there has been some discussion whether this is an appropriate programming question or not, I have decided to write a small script which demonstrates the usage of LIBMOUNT_FSTAB.
#!/bin/bash
libmount=libmount_fstab
tmpdir="/tmp/test_${libmount}_folder" # temporary test folder
mntdir="$tmpdir/test_${libmount}_mountfolder" # mount folder for loop device
img="$tmpdir/loop.img" # dummy image for loop device
faketab="$tmpdir/alternate_fstab" # temporary, alternative fstab
# get first free loop device
loopdev=$(losetup -f)
# verify there is a free loop device
if [[ -z "$loopdev" ]]; then
    echo "Error: No free loop device" >&2
    exit 1
fi
# check that loop device is not managed by default /etc/fstab
if grep -q "^$loopdev" /etc/fstab; then
    echo "Error: $loopdev already managed by /etc/fstab" >&2
    exit 1
fi
# make temp folders
mkdir -p "$tmpdir"
mkdir -p "$mntdir"
# create temporary, alternative fstab
echo "$loopdev $mntdir ext2 errors=remount-ro 0 1" > "$faketab"
# create dummy image for loop device
dd if=/dev/zero of="$img" bs=1M count=5 &>/dev/null
# setup loop device with dummy image
losetup "$loopdev" "$img" &>/dev/null
# format loop device so it can be mounted
mke2fs "$loopdev" &>/dev/null
# alter location for fstab
export LIBMOUNT_FSTAB="$faketab"
# mount loop device by using alternative fstab
mount "$loopdev" &>/dev/null
# verify loop device was successfully mounted
if mount | grep -q "^$loopdev "; then
    echo "Successfully used alternative fstab: $faketab"
else
    echo "Failed to use alternative fstab: $faketab"
fi
# clean up
umount "$loopdev" &>/dev/null
losetup -d "$loopdev"
rm -rf "$tmpdir"
exit 0
My script primarily manages external devices which are not attached most of the time. I use loop devices to simulate external devices to test the functionality of my script; this saves a lot of time since I do not have to attach and reattach several physical devices. I think this proves that being able to use an alternative fstab is a very useful feature, and it allows for scripting safe test scenarios whenever parsing or altering of fstab is required. In fact, I have decided to partially rewrite my script so that it can also use an alternative fstab, since most of the external devices are hardly ever attached to the system and their corresponding entries are just cluttering up /etc/fstab.
Refactor your code that modifies fstab contents into a single function, then test that function correctly modifies the dummy fstab files you provide it. Then you can confidently use that function as part of your mount pipeline.
function change_fstab {
    local fstab_path=${1:?Supply a path to the fstab file}
    # ... etc
}
change_fstab /etc/fstab && mount ...
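A minimal smoke test of that function against a throwaway copy (a sketch; assumes change_fstab is the skeleton above) could be:
cp /etc/fstab /tmp/fstab.test
change_fstab /tmp/fstab.test
diff /etc/fstab /tmp/fstab.test   # inspect exactly what the function changed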
Alternatively, set LIBMOUNT_FSTAB per the libmount docs:
LIBMOUNT_FSTAB=/path/to/fake/fstab mount ...
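To double-check which file libmount actually reads, findmnt (also built on libmount) honors the same variable:
LIBMOUNT_FSTAB=/path/to/fake/fstab findmnt --fstab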

Waiting for /dev/disk/by-label to be populated after SD insertion

I have a bash script doing some initialization on a removable SD card (problem would be the same for any removable storage, I think). Specific behavior depends on card formatting, specifically on fs labels available.
In order to do so I need to request SD insertion and then wait for udev to pick up and populate /dev/*
I can try to speed things up by explicitly calling partprobe, but I still have to wait (sometimes up to 10 s!) for the /dev/disk/by-label/ subdirectory to be populated.
How can I speed up this?
Is there some way to explicitly trigger udev and wait for completion?
A very crude bash script may be as follows:
sudo partprobe
count=0
while [ ! -L /dev/disk/by-label/root ]
do
    if ((count > 10))
    then
        echo "ERROR: unable to find root's label!" >&2
        exit 1
    fi
    sleep 1
    count=$((count + 1))
done
Feel free to improve.
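To address the "explicitly trigger udev and wait for completion" part of the question: udevadm settle blocks until the udev event queue is empty, which should remove the need for polling (a sketch; the timeout value is an assumption):
sudo partprobe
sudo udevadm settle --timeout=10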

Detecting USB Thumb Drive when Ready in Linux Shell Script

I am a Windows admin and dev, I do not generally work with Linux so forgive me if this is in some way obvious.
I have a not-so-good Linux box running some older version of openSUSE. I have a script that unmounts the USB thumb drive, formats it, and then waits for the device to become ready again before it runs a script that does a copy and MD5 checksum verification on the source and destination files to ensure the copy was valid. The problem is that on one box the USB thumb drive does not become ready again in a consistent way: it takes anywhere from 1 to 2+ minutes after the format before I can access the drive via /media/LABELNAME.
The direct path is /dev/sdb but, of course, I cannot access it directly via this path to copy the files. Here is my shell script as it stands:
#!/bin/bash
set -e
echo "Starting LABELNAME.\n\nUnmounting /dev/sdb/"
umount /dev/sdb
echo "Formatting /dev/sdb/"
mkfs.vfat -I -F32 -n "LABELNAME" /dev/sdb
echo "Waiting on remount..."
sleep 30
echo "Format complete. Running make master."
perl /home/labelname_master.20120830.pl
Any suggestions? How might I wait for the drive to become ready and detect it? I have seen Detecting and Writing to a USB Key / Thumb Drive Automatically but quite frankly I don't even know what that answer means.
It seems that you have some automatic mounting service running which detects the flash disk and mounts the partition. However, you already know what the partition is, so I recommend that you simply mount the disk in your script, choosing a suitable mount point yourself.
mkfs.vfat -I -F32 -n "LABELNAME" /dev/sdb
echo "Format complete, remounting"
mount /dev/sdb $mountpoint #<-- you would choose $mountpoint
echo "Running make master."
perl /home/labelname_master.20120830.pl
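If the device needs a moment to become ready again after the format, a retry loop around the mount avoids a fixed sleep (a sketch; the mount point path is a hypothetical choice):
mountpoint=/mnt/labelname    # hypothetical mount point
mkdir -p "$mountpoint"
until mount /dev/sdb "$mountpoint" 2>/dev/null; do
    sleep 1
done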

Simulate a faulty block device with read errors?

I'm looking for an easier way to test my application against faulty block devices that generate i/o read errors when certain blocks are read. Trying to use a physical hard drive with known bad blocks is a pain and I would like to find a software solution if one exists.
I did find the Linux Disk Failure Simulation Driver which allows creating an interface that can be configured to generate errors when certain ranges of blocks are read, but it is for the 2.4 Linux Kernel and hasn't been updated for 2.6.
What would be perfect would be a losetup and loop driver that also allowed you to configure it to return read errors when attempting to read from a given set of blocks.
It's not a loopback device you're looking for, but rather device-mapper.
Use dmsetup to create a device backed by the "error" target. It will show up in /dev/mapper/<name>.
Page 7 of the Device mapper presentation (PDF) has exactly what you're looking for:
dmsetup create bad_disk << EOF
0 8 linear /dev/sdb1 0
8 1 error
9 204791 linear /dev/sdb1 9
EOF
Or leave out the sdb1 parts and put the "error" target as the device for blocks 0-8 (instead of sdb1) to make a pure error disk.
See also the Device Mapper appendix from "RHEL 5 Logical Volume Manager Administration".
There's also a flakey target - a combination of linear and error that sometimes succeeds. There's also a delay target to introduce intentional delays for testing.
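A minimal flakey table might look like this (a sketch, assuming a 204800-sector /dev/sdb1; the device behaves normally for 9 seconds, then fails all I/O for 1 second, repeating):
echo "0 204800 flakey /dev/sdb1 0 9 1" | dmsetup create flaky_disk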
It seems like Linux's built-in fault-injection capabilities would also be a good fit here.
Blog: http://blog.wpkg.org/2007/11/08/using-fault-injection/
Reference: https://www.kernel.org/doc/Documentation/fault-injection/fault-injection.txt
The easiest way to play with block devices is using nbd.
Download the userland sources from git://github.com/yoe/nbd.git and modify nbd-server.c to fail at reading or writing on whichever areas you want it to fail on, or to fail in a controllably random pattern, or basically anything you want.
I would like to elaborate on Peter Cordes' answer.
In bash, set up an image on a loopback device with ext4, then write a file to it named binary.bin.
imageName=faulty.img
mountDir=$(pwd)/mount
sudo umount $mountDir 2>/dev/null ## make sure nothing is mounted here
dd if=/dev/zero of=$imageName bs=1M count=10
mkfs.ext4 $imageName
loopdev=$(sudo losetup -P -f --show $imageName); echo $loopdev
mkdir -p $mountDir
sudo mount $loopdev $mountDir
sudo chown -R $USER:$USER mount
echo "2ed99f0039724cd194858869e9debac4" | xxd -r -p > $mountDir/binary.bin
sudo umount $mountDir
In Python 3 (since bash struggles to deal with binary data), search for the magic binary data in binary.bin:
import binascii
with open("faulty.img", "rb") as fd:
s = fd.read()
search = binascii.unhexlify("2ed99f0039724cd194858869e9debac4")
beg=0
find = s.find(search, beg); beg = find+1; print(find)
start_sector = find//512; print(start_sector)
Then, back in bash, create and mount the faulty block device:
start_sector=## copy value from variable start_sector in python
next_sector=$(($start_sector+1))
size=$(($(wc -c $imageName|cut -d ' ' -f1)/512))
len=$(($size-$next_sector))
echo -e "0\t$start_sector\tlinear\t$loopdev\t0" > fault_config
echo -e "$start_sector\t1\terror" >> fault_config
echo -e "$next_sector\t$len\tlinear\t$loopdev\t$next_sector" >> fault_config
cat fault_config | sudo dmsetup create bad_drive
sudo mount /dev/mapper/bad_drive $mountDir
Finally, we can test the faulty block device by reading the file:
cat $mountDir/binary.bin
which produces the error:
cat: /path/to/your/mount/binary.bin: Input/output error
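You can also hit the failing sector directly, bypassing the filesystem (this uses the start_sector value from the Python step and should fail with the same I/O error):
sudo dd if=/dev/mapper/bad_drive of=/dev/null bs=512 skip=$start_sector count=1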
Clean up when you're done with testing:
sudo umount $mountDir
sudo dmsetup remove bad_drive
sudo losetup -d $loopdev
rm fault_config $imageName
