Script to create swap partition fails when running automatically - linux

I am creating a cluster of machines in AWS (Amazon Linux 2) using Terraform, via the user_data argument of the aws_instance resource. Part of the script creates a swap partition. The commands in my script work perfectly if I execute them manually on the instance.
I have tried the following in my script. It creates the partition successfully, but it does not seem to finish setting up the swap, as confirmed by more /proc/swaps. It successfully executes the lines of code that come after everything shown here (which I omitted from my post), so it must be failing at partprobe, mkswap /dev/nvme1n1p2, or swapon /dev/nvme1n1p2. It does run the echo "/dev/nvme1n1p2 swap swap defaults 0 0" >> /etc/fstab line. I'm not sure how to tell where it stops executing.
# Create SWAP partition
fdisk /dev/nvme1n1 <<EOF
n
p
2
+48G
w
EOF
partprobe
mkswap /dev/nvme1n1p2
swapon /dev/nvme1n1p2
echo "/dev/nvme1n1p2 swap swap defaults 0 0" >> /etc/fstab
The intended result is a working swap partition, as confirmed by running more /proc/swaps. The partition is created, but the swap itself is not.
UPDATE: Here is the contents of the log file:
mkswap: cannot open /dev/nvme1n1p2: No such file or directory
swapon: cannot open /dev/nvme1n1p2: No such file or directory
However, that device is listed when running lsblk, and the commands work if I run them manually.

Solution from the comments by @thatotherguy:
I'm guessing it's a race condition with partprobe. If you run the commands manually, there are several seconds between each one, so udev has plenty of time to create the device node asynchronously. When you run them in a script, it doesn't. You could try adding sleep 5 after partprobe.
This solved the OP's error, and I can also report success using this fix.
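For illustration, here is a minimal sketch of the corrected tail of the user_data script, assuming the same device names as above; the sleep 5 is the fix from the comments, while the udevadm settle line is an alternative assumption on my part, not something the OP tested:
partprobe
# give udev time to create /dev/nvme1n1p2 asynchronously (the fix from the comments)
sleep 5
# alternative assumption: explicitly wait for udev to finish processing its event queue
# udevadm settle --timeout=30
mkswap /dev/nvme1n1p2
swapon /dev/nvme1n1p2
echo "/dev/nvme1n1p2 swap swap defaults 0 0" >> /etc/fstab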

Related

Swap file returns to original size after reboot

I am trying to increase the swapfile size on my Raspberry Pi 3.
I'm following this guide here on DigitalOcean.
After successfully increasing the file and setting it up using the mkswap and swapon commands, everything works fine. I even tried filling my RAM with random data to see if it would use the new swap space, and it works perfectly.
However, after I reboot my Raspberry Pi, the swap file returns to the previous (default) size of 100MB.
Is there any way to make this change permanent?
I'm running a Raspberry Pi 3 on Raspbian Jessie.
I figured it out.
Modifying /etc/dphys-swapfile solves all problems.
I just changed CONF_SWAPSIZE=100 to CONF_SWAPSIZE=2000
dphys-swapfile is responsible for setting up, mounting/unmounting and deleting swap files.
In the configuration file you can also specify the location of the swap file, as well as a few other parameters.
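For illustration, a minimal sketch of that change on Raspbian, assuming the stock dphys-swapfile service is installed; the 2000 MB value is the one from the answer:
# /etc/dphys-swapfile
CONF_SWAPSIZE=2000

# rebuild the swap file at the new size and turn it back on
sudo dphys-swapfile swapoff
sudo dphys-swapfile setup
sudo dphys-swapfile swapon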
Suppose you have got to the point where swapon -s returns
# sudo swapon -s
Filename      Type   Size     Used  Priority
/swapfile     file   4194300  0     -1
Now, to make this change permanent, you need to add a record for your new swapfile to fstab.
You need to add the following line:
/swapfile none swap sw 0 0
The meaning of the fstab fields is as follows:
# 1. source  2. mountpoint  3. fstype  4. options  5. freq  6. order
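As a sketch of the whole procedure, assuming the /swapfile path from the answer (the tee invocation is just one way to append as root):
# append the entry and activate everything listed in fstab
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
sudo swapon -a
# /swapfile should now appear here, and will be re-enabled on every boot
sudo swapon -s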

Change location of /etc/fstab

I have written a script which needs to read a few entries in /etc/fstab. I have tested the script by manually adding some entries to /etc/fstab and then restoring the file to its original contents, also manually. Now I would like to automate those tests and run them as a separate script. I do, however, not feel comfortable with the idea of having /etc/fstab altered by an automated test. I was thinking of making a backup copy of /etc/fstab, then altering it and finally restoring the original file after the tests are done, but I would prefer it if I could temporarily alter the location of fstab instead.
Is there a way to alter the location of fstab to, say, /usr/local/etc/fstab so that when mount -a is run from within a script only the entries in /usr/local/etc/fstab are processed?
UPDATE:
I used bishop's solution by setting LIBMOUNT_FSTAB=/usr/local/etc/fstab. I have skimmed the man page of mount on several occasions in the past, but I never noticed this variable. I am not sure if it has always been there and I simply overlooked it, or if it was added at some point. I am using mount from util-linux 2.27.1, and at least in this version LIBMOUNT_FSTAB is available and documented in the man page, in the ENVIRONMENT section at the end. This will make my automated tests a lot safer in the future.
UPDATE2:
Since there has been some discussion whether this is an appropriate programming question or not, I have decided to write a small script which demonstrates the usage of LIBMOUNT_FSTAB.
#!/bin/bash
libmount=libmount_fstab
tmpdir="/tmp/test_${libmount}_folder" # temporary test folder
mntdir="$tmpdir/test_${libmount}_mountfolder" # mount folder for loop device
img="$tmpdir/loop.img" # dummy image for loop device
faketab="$tmpdir/alternate_fstab" # temporary, alternative fstab
# get first free loop device
loopdev=$(losetup -f)
# verify there is a free loop device
if [[ -z "$loopdev" ]]; then
    echo "Error: No free loop device" >&2
    exit 1
fi
# check that loop device is not managed by default /etc/fstab
if grep "^$loopdev" /etc/fstab; then
    echo "Error: $loopdev already managed by /etc/fstab" >&2
    exit 1
fi
# make temp folders
mkdir -p "$tmpdir"
mkdir -p "$mntdir"
# create temporary, alternative fstab
echo "$loopdev $mntdir ext2 errors=remount-ro 0 1" > "$faketab"
# create dummy image for loop device
dd if=/dev/zero of="$img" bs=1M count=5 &>/dev/null
# setup loop device with dummy image
losetup "$loopdev" "$img" &>/dev/null
# format loop device so it can be mounted
mke2fs "$loopdev" &>/dev/null
# alter location for fstab
export LIBMOUNT_FSTAB="$faketab"
# mount loop device by using alternative fstab
mount "$loopdev" &>/dev/null
# verify loop device was successfully mounted
if mount | grep "^$loopdev" &>/dev/null; then
    echo "Successfully used alternative fstab: $faketab"
else
    echo "Failed to use alternative fstab: $faketab"
fi
# clean up
umount "$loopdev" &>/dev/null
losetup -d "$loopdev"
rm -rf "$tmpdir"
exit 0
My script primarily manages external devices which are not attached most of the time. I use loop devices to simulate external devices to test the functionality of my script, which saves a lot of time since I do not have to attach/reattach several physical devices. I think this proves that being able to use an alternative fstab is a very useful feature and allows for scripting safe test scenarios whenever parsing/altering of fstab is required. In fact, I have decided to partially rewrite my script so that it can also use an alternative fstab, since most of the external devices are hardly ever attached to the system and their corresponding entries are just cluttering up /etc/fstab.
Refactor your code that modifies fstab contents into a single function, then test that function correctly modifies the dummy fstab files you provide it. Then you can confidently use that function as part of your mount pipeline.
function change_fstab {
    local fstab_path=${1:?Supply a path to the fstab file}
    # ... etc
}
change_fstab /etc/fstab && mount ...
Alternatively, set LIBMOUNT_FSTAB per the libmount docs:
LIBMOUNT_FSTAB=/path/to/fake/fstab mount ...
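A fuller sketch of that refactoring idea, for illustration only; the helper's arguments, the entry format, and the paths below are assumptions, not part of the original answer:
# hypothetical helper: takes the fstab path as a parameter so tests can point it at a copy
function change_fstab {
    local fstab_path=${1:?Supply a path to the fstab file}
    local device=${2:?Supply a device}
    local mountpoint=${3:?Supply a mountpoint}
    # append an entry only if the device is not already listed
    grep -q "^$device " "$fstab_path" || \
        echo "$device $mountpoint ext2 defaults 0 2" >> "$fstab_path"
}

# exercise the helper against a throwaway copy instead of the real file
cp /etc/fstab /tmp/fstab.test
change_fstab /tmp/fstab.test /dev/loop0 /mnt/test
# and point mount at the copy, as above
LIBMOUNT_FSTAB=/tmp/fstab.test mount /dev/loop0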

Ubuntu bash script spiking CPU usage and not dropping, when run via crontab

I'm pretty new to bash scripting, but have constructed something that works. It copies new/changed files from a folder on my web server to another directory. This directory is then compressed and the compressed folder is uploaded to my Dropbox account.
This works perfectly when I run it manually with:
sudo run-parts /path/to/bash/scripts
I wanted to automate this, so I edited my crontab file using sudo crontab -e to include the following:
0 2 * * * sudo run-parts /path/to/bash/scripts
This works, but with one issue. It spikes my CPU usage to 60%, and the usage doesn't drop until I open htop and kill the final process (the script that does the uploading). When it runs the next day, CPU usage spikes to 100% and stays there, because the previous day's run is still going. This issue doesn't occur when I run the scripts manually.
Thoughts?

mdadm: array disappearing on reboot, despite correct mdadm.conf

I'm using Ubuntu 13.10 and trying to create a RAID 5 array across 3 identical disks connected to SATA ports on the motherboard. I've followed every guide and used both the built-in Disks GUI app and mdadm at the command line, and despite everything I cannot get the array to persist after a reboot.
I create the array with the following command:
root@zapp:~# mdadm --create /dev/md/array --chunk=512 --level=5 \
--raid-devices=3 /dev/sda /dev/sdb /dev/sdd
Then I watch /proc/mdstat for a while while it syncs, until I get this:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid5 sda1[0] sdd1[3] sdb1[1]
1953262592 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
To update the mdadm config file, I run the following:
root@zapp:~# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
This adds the essential line to my config file:
ARRAY /dev/md/array metadata=1.2 UUID=0ad3753e:f0177930:8362f527:285d76e7 name=zapp:array
Everything seems correct, but when I reboot, the array is gone!
The key to fixing this was to partition the drives first, and create the array from the partitions instead of the raw devices.
Basically, the create command just needed to change to:
root@zapp:~# mdadm --create /dev/md/array --chunk=512 --level=5 \
--raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdd1
The rest of the steps were correct, and created everything properly once this change was made. Any further info as to why this was necessary would be helpful. It was certainly not obvious in any of the documentation I found.
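For illustration, a sketch of the partitioning step that made the array persist; parted with a GPT label is one way to do it (the poster did not say which tool they used), and the final initramfs rebuild is a common extra precaution on Ubuntu rather than something stated above:
# partition each member disk, marking the partition for RAID use
for disk in /dev/sda /dev/sdb /dev/sdd; do
    parted -s "$disk" mklabel gpt
    parted -s "$disk" mkpart primary 0% 100%
    parted -s "$disk" set 1 raid on
done
# build the array from the partitions, not the raw devices
mdadm --create /dev/md/array --chunk=512 --level=5 \
    --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdd1
# regenerate the config; rebuilding the initramfs (not mentioned in the answer) helps the
# array assemble at boot on Ubuntu
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
update-initramfs -u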

Bash script doesn't wait until commands have been properly executed

I am working on a very simple script but for some reason parts of it seem to run asynchronously.
singlePartDevice() {
    # http://www.linuxquestions.org/questions/linux-software-2/removing-all-partition-from-disk-690256/
    # http://serverfault.com/questions/257356/mdadm-on-ubuntu-10-04-raid5-of-4-disks-one-disk-missing-after-reboot
    # Create single partition
    parted -s "$1" mklabel msdos
    # Find size of disk
    v_disk=$(parted -s "$1" print | awk '/^Disk/ {print $3}' | sed 's/[Mm][Bb]//')
    parted -s "$1" mkpart primary ext3 4096 ${v_disk}
    parted -s "$1" set 1 raid on
    return 0
}
singlePartDevice "/dev/sdc"
singlePartDevice "/dev/sdd"
#/dev/sdc1 exists but /dev/sdd1 does not exist
sleep 5s
#/dev/sdc1 exists AND /dev/sdd1 does also exist
As you can see, before the call to sleep the script has only partially finished its job. How do I make my script wait until parted has done its job successfully?
(I am assuming that you are working on Linux due to the links in your question)
I am not very familiar with parted, but I believe that the partition device nodes are not created directly by it - they are created by udev, which is by nature an asynchronous procedure:
parted creates a partition
the kernel updates its internal state
the kernel notifies the udev daemon (udevd)
udevd checks its rule files (usually under /etc/udev/) and creates the appropriate device nodes
This procedure allows for clear separation of the device node handling policy from the kernel, which is a Good Thing (TM). Unfortunately, it also introduces relatively unpredictable delays.
A possible way to handle this is to have your script wait for the device nodes to appear:
while [ ! -e "/dev/sdd1" ]; do sleep 1; done
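Another option, an assumption on my part rather than part of the answer above (it requires udevadm, which ships with udev/systemd on most current distributions), is to block until udev has drained its event queue:
singlePartDevice "/dev/sdc"
singlePartDevice "/dev/sdd"
# wait for udev to finish creating the partition device nodes
udevadm settle --timeout=30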
Assuming all you want to do is ensure that the partitions are created before proceeding, there are a couple of different approaches:
Check whether the parted process has completed before moving to the next step.
Check whether the devices are ready before moving to the next step, e.g.:
until [ -e /dev/sdc1 ] && [ -e /dev/sdd1 ]; do
    sleep 5
done
