Bind mount not visible when created from a CGI script in Apache - linux

My application allows the user to bind mount a source directory to a target mount point. This all works correctly, except that the mount does not exist outside the process that created it.
I have boiled down the issue to a very simple script.
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo ""
echo "<p>Hello</p>"
echo "<p>Results from pid #{$$}:</p>"
echo "<ul>"
c="sudo mkdir /shares/target"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
c="sudo mount --bind /root/source /shares/target"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
c="sudo mount | grep shares"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
c="sudo cat /proc/mounts | grep shares"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
echo "</ul>"
The first two commands create a mount point and execute the mount. The last two commands verify the result. The script executes without issue. However, the mount is not visible or available in a separate shell process: executing the last two commands in a separate shell does not show the mount. If I attempt to execute "rm -rf /shares/target" I get "rm: cannot remove '/shares/target/': Device or resource busy". Executing "lsof | grep /shares/target" generates no output. In a separate shell I have switched to the apache user, but the mount is still not available. I have verified the apache process is not in a chroot by logging the output of "ls -l /proc/$$/root"; it points to "/".
I am running:
Apache 2.4.6
CentOS 7
httpd-2.4.6-31.el7.centos.1.x86_64
httpd-tools-2.4.6-31.el7.centos.1.x86_64
I turned logging up to debug, but the error_log indicates nothing.
Thanks in advance.

This behavior is due to the following line in the httpd.service systemd unit:
PrivateTmp=true
From the systemd.exec(5) man page:
PrivateTmp=
Takes a boolean argument. If true, sets up a new file
system namespace for the executed processes and mounts
private /tmp and /var/tmp directories inside it that is not
shared by processes outside of the namespace.
[...]
Note that using this setting will disconnect propagation of
mounts from the service to the host (propagation in the
opposite direction continues to work). This means that this
setting may not be used for services which shall be able to
install mount points in the main mount namespace.
In other words, mounts made by httpd and child processes will not be
visible to other processes on your host.
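You can confirm this by comparing mount-namespace identifiers. A minimal sketch (run the same command from inside the CGI script and from a login shell; differing values mean different mount namespaces, which is what PrivateTmp=true causes):

```shell
# Each mount namespace has a unique identifier exposed under /proc.
# Log this from the CGI script, then run it in a normal shell and
# compare: with PrivateTmp=true the two values will differ.
readlink /proc/$$/ns/mnt
```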
The PrivateTmp directive is useful from a security perspective, as described here:
/tmp traditionally has been a shared space for all local services and
users. Over the years it has been a major source of security problems
for a multitude of services. Symlink attacks and DoS vulnerabilities
due to guessable /tmp temporary files are common. By isolating the
service's /tmp from the rest of the host, such vulnerabilities become
moot.
You can safely remove the PrivateTmp directive from the unit file (well, don't modify the packaged unit file directly -- copy it to /etc/systemd/system/httpd.service, remove the directive there, then run systemctl daemon-reload and systemctl restart httpd).
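Alternatively, a drop-in override can flip just this one directive without copying the whole unit file. A sketch using standard systemd drop-in conventions (the file name is arbitrary):

```
# /etc/systemd/system/httpd.service.d/override.conf
[Service]
PrivateTmp=false
```

followed by systemctl daemon-reload and systemctl restart httpd as before.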

Related

Problems running shell script from within .Net Core service daemon on Linux

I'm trying to execute a .sh script from within a .Net Core service daemon and getting weird behavior. The purpose of the script is to create an encrypted container, format it, set some settings, then mount it.
I'm using .Net Core version 3.1.4 on Raspbian on a Raspberry Pi 4.
The problem: I have the below script which creates the container, formats it, sets the settings, then attempts to mount it. It all seems to work fine, but the last command, the mount call, never actually works. The mount point is not valid.
The kicker: After the script is run via the service, if I open a terminal and issue the mount command there manually, it mounts correctly. I can then go to that mount point and it shows ~10GB of space available, meaning it's using the container.
Note: Make sure the script is chmod +x when testing. You'll also need cryptsetup installed for it to work.
Thoughts:
I'm not sure if some environment or PATH variables are missing for the shell script to function properly. Since this is a service, I could edit the unit to include this information, if I knew what it was.
In previous attempts at issuing bash commands, I've had to set the DISPLAY variable as below for things to work correctly (because of needing to work with the desktop). For this issue that doesn't seem to matter, but if I need to set the script as executable, this command is used as an example:
string chmodArgs = string.Format("DISPLAY=:0.0; export DISPLAY && chmod +x {0}", scriptPath);
chmodArgs = string.Format("-c \"{0}\"", chmodArgs);
I'd like to see if someone can take the below and test on their end to confirm and possibly help come up with a solution. Thanks!
#!/bin/bash
# variables
# s0f4e7n4r4h8x4j4
# /usr/sbin/content1
# content1
# /mnt/content1
# 10240
# change the size of M to what the size of container should be
echo "Allocating 10240MB..."
fallocate -l 10240M /usr/sbin/content1
sleep 1
# using echo with -n passes in the password required for cryptsetup command. The dash at the end tells cryptsetup to read in from console
echo "Formatting..."
echo -n s0f4e7n4r4h8x4j4 | cryptsetup luksFormat /usr/sbin/content1 -
sleep 1
echo "Opening..."
echo -n s0f4e7n4r4h8x4j4 | cryptsetup luksOpen /usr/sbin/content1 content1 -
sleep 1
# create without journaling
echo "Creating filesystem..."
mkfs.ext4 -O ^has_journal /dev/mapper/content1
sleep 1
# enable writeback mode
echo "Tuning..."
tune2fs -o journal_data_writeback /dev/mapper/content1
sleep 1
if [ ! -d "/mnt/content1" ]; then
echo "Creating directory..."
mkdir -p /mnt/content1
sleep 1
fi
# mount with no access time to stop unnecessary writes to disk for just access
echo "Mounting..."
mount /dev/mapper/content1 /mnt/content1 -o noatime
sleep 1
This is how I'm executing the script in .Net:
var proc = new System.Diagnostics.Process {
    StartInfo =
    {
        FileName = pathToScript,
        WorkingDirectory = workingDir,
        Arguments = args,
        UseShellExecute = false
    }
};
if (proc.Start())
{
    while (!proc.HasExited)
    {
        System.Threading.Thread.Sleep(33);
    }
}
The unit file used for the service daemon:
[Unit]
Description=Service name
[Service]
ExecStart=/bin/bash -c 'PATH=/sbin/dotnet:$PATH exec dotnet myservice.dll'
WorkingDirectory=/sbin/myservice/
User=root
Group=root
Restart=on-failure
SyslogIdentifier=my-service
PrivateTmp=true
[Install]
WantedBy=multi-user.target
The problem was not being able to run the mount command from within a service directly. From extensive trial and error, even running the mount command in verbose mode showed no errors, yet nothing would be mounted. Very misleading of it not to provide some failure message for users.
The solution is to create a unit file "service" to handle the mount/umount. Below is an explanation, with a link to the article that inspired it.
Step 1: Create the Unit File
The key is that the .mount file needs to be named in a pattern that matches the Where= in the unit file. So if you're mounting /mnt/content1, your file would be:
sudo nano /etc/systemd/system/mnt-content1.mount
Here are the unit file details I used.
[Unit]
Description=Mount Content (/mnt/content1)
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=/dev/mapper/content1
Where=/mnt/content1
Type=ext4
Options=noatime
[Install]
WantedBy=multi-user.target
Step 2: Reload systemctl
systemctl daemon-reload
Final steps:
You can now issue start/stop on the new "service" that is dedicated just to mounting and unmounting. This will not auto-mount on reboot; if you need that, you'll need to enable the service.
systemctl start mnt-content1.mount
systemctl stop mnt-content1.mount
Article: https://www.golinuxcloud.com/mount-filesystem-without-fstab-systemd-rhel-8/

Change location of /etc/fstab

I have written a script which requires reading a few entries in /etc/fstab. I have tested the script by manually adding some entries to /etc/fstab and then restoring the file to its original contents, also manually. Now I would like to automate those tests and run them as a separate script. However, I do not feel comfortable with the idea of a script altering /etc/fstab. I was thinking of making a backup copy of /etc/fstab, then altering it and finally restoring the original file after the tests are done. But I would prefer it if I could temporarily alter the location of fstab.
Is there a way to alter the location of fstab to, say, /usr/local/etc/fstab so that when mount -a is run from within a script only the entries in /usr/local/etc/fstab are processed?
UPDATE:
I used bishop's solution by setting LIBMOUNT_FSTAB=/usr/local/etc/fstab. I have skimmed the man page of mount on several occasions in the past but I never noticed this variable. I am not sure if this variable has always been there and I simply overlooked it or if it had been added at some point. I am using mount from util-linux 2.27.1 and at least in this version LIBMOUNT_FSTAB is available and documented in the man-page. It is in the ENVIRONMENT section at the end. This will make my automated tests a lot safer in the future.
UPDATE2:
Since there has been some discussion whether this is an appropriate programming question or not, I have decided to write a small script which demonstrates the usage of LIBMOUNT_FSTAB.
#!/bin/bash
libmount=libmount_fstab
tmpdir="/tmp/test_${libmount}_folder" # temporary test folder
mntdir="$tmpdir/test_${libmount}_mountfolder" # mount folder for loop device
img="$tmpdir/loop.img" # dummy image for loop device
faketab="$tmpdir/alternate_fstab" # temporary, alternative fstab
# get first free loop device
loopdev=$(losetup -f)
# verify there is a free loop device
if [[ -z "$loopdev" ]];then
echo "Error: No free loop device" >&2
exit 1
fi
# check that loop device is not managed by default /etc/fstab
if grep "^$loopdev" /etc/fstab ;then
echo "Error: $loopdev already managed by /etc/fstab" >&2
exit 1
fi
# make temp folders
mkdir -p "$tmpdir"
mkdir -p "$mntdir"
# create temporary, alternative fstab
echo "$loopdev $mntdir ext2 errors=remount-ro 0 1" > "$faketab"
# create dummy image for loop device
dd if=/dev/zero of="$img" bs=1M count=5 &>/dev/null
# setup loop device with dummy image
losetup "$loopdev" "$img" &>/dev/null
# format loop device so it can be mounted
mke2fs "$loopdev" &>/dev/null
# alter location for fstab
export LIBMOUNT_FSTAB="$faketab"
# mount loop device by using alternative fstab
mount "$loopdev" &>/dev/null
# verify loop device was successfully mounted
if mount | grep "^$loopdev" &>/dev/null;then
echo "Successfully used alternative fstab: $faketab"
else
echo "Failed to use alternative fstab: $faketab"
fi
# clean up
umount "$loopdev" &>/dev/null
losetup -d "$loopdev"
rm -rf "$tmpdir"
exit 0
My script primarily manages external devices which are not attached most of the time. I use loop-devices to simulate external devices to test the functionality of my script. This saves a lot of time since I do not have to attach/reattach several physical devices. I think this proves that being able to use an alternative fstab is a very useful feature and allows for scripting safe test scenarios whenever parsing/altering of fstab is required. In fact, I have decided to partially rewrite my script so that it can also use an alternative fstab. Since most of the external devices are hardly ever attached to the system their corresponding entries are just cluttering up /etc/fstab.
Refactor your code that modifies fstab contents into a single function, then test that function correctly modifies the dummy fstab files you provide it. Then you can confidently use that function as part of your mount pipeline.
function change_fstab {
local fstab_path=${1:?Supply a path to the fstab file}
# ... etc
}
change_fstab /etc/fstab && mount ...
Alternatively, set LIBMOUNT_FSTAB per the libmount docs:
LIBMOUNT_FSTAB=/path/to/fake/fstab mount ...
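For the first approach, a minimal sketch of what change_fstab might look like is below. The entry-appending behavior and second argument are assumptions for illustration, not the asker's actual logic:

```shell
#!/bin/bash
# Hypothetical fleshed-out change_fstab: appends a given entry to
# whichever fstab file it is pointed at, so tests can target a
# throwaway copy instead of the real /etc/fstab.
change_fstab() {
    local fstab_path=${1:?Supply a path to the fstab file}
    local entry=${2:?Supply the fstab entry to append}
    printf '%s\n' "$entry" >> "$fstab_path"
}

# Exercise it against a dummy fstab rather than the real one
tmp=$(mktemp)
change_fstab "$tmp" "/dev/loop0 /mnt/test ext2 defaults 0 0"
grep -q '^/dev/loop0' "$tmp" && echo "entry added"
rm -f "$tmp"
```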

Wifi disconnected before init.d script is run

I've set up a simple init.d script, "S3logrotate", to run on shutdown. The script works fine when run manually from the command line, but it does not function correctly on shutdown.
The script uploads logs from my PC to an Amazon S3 bucket and requires wifi to run correctly.
Debugging proved that the script is actually run but the upload process fails.
I found that the problem seems to be that the script runs after wifi is terminated.
These are the blocks I used to test my internet connection in the script.
if ping -q -c 1 -W 1 8.8.8.8 >/dev/null; then
    echo "IPv4 is up" >> x.txt
else
    echo "IPv4 is down" >> x.txt
fi
if ping -q -c 1 -W 1 google.com >/dev/null; then
    echo "The network is up" >> x.txt
else
    echo "The network is down" >> x.txt
fi
The output for this block is:
IPv4 is down
The network is down
Is there any way to set the priority of an init.d script? As in, can I make my script run before the network connection is terminated? If not, is there any alternative to init.d?
I use Ubuntu 16.04 and have dual booted with Windows 10 if that's significant.
Thanks,
sganesan7
You should place your script in:
/etc/NetworkManager/dispatcher.d/pre-down.d
change its group and owner to root:
chown root:root S3logrotate
and it should work. If you need to do this for a specific interface instead, create a script inside
/etc/NetworkManager/dispatcher.d/
and name it (for example):
wlan0-down
and that should work too.
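A minimal sketch of such a dispatcher script, assuming the standard NetworkManager dispatcher interface ($1 = interface name, $2 = action); the echo is a stand-in for the actual S3logrotate upload call:

```shell
#!/bin/bash
# Sketch for /etc/NetworkManager/dispatcher.d/pre-down.d/.
# NetworkManager invokes dispatcher scripts as: script <interface> <action>.
handle_event() {
    local iface="$1" action="$2"
    case "$action" in
        pre-down|vpn-pre-down)
            # the link is still up at this point, so the upload can run;
            # replace this echo with the real call to S3logrotate
            echo "uploading logs before $iface goes down"
            ;;
    esac
}
handle_event "$@"
```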

LDAP - SSH script across multiple VM's

So I'm ssh'ing into a router that has several VM's. It is set up using LDAP so that each VM has the same files, settings, etc. However, they have different cores allocated and different libraries and packages installed. Instead of logging into each VM individually and running the command, I want to automate it by putting the script in .bashrc.
So what I have so far:
export LD_LIBRARY_PATH=/lhome/username
# .so files are in ~/ to avoid permission denied problems
output=$(cat /proc/cpuinfo | grep "^cpu cores" | uniq | tail -c 2)
current=server_name
if [[ `hostname -s` != $current ]]; then
ssh $current
fi
/path/to/program --hostname $(hostname -s) --threads $((output*2))
Each VM, upon logging in, will execute this script, so I have to check if the current VM has the hostname to avoid an SSH loop. The idea is to run the program, then exit back out to the origin to resume the script. The problem is of course that the process will die upon logging out.
It's been suggested to me to use TMUX on an array of the hostnames, but I would have no idea on how to approach this.
You could install clusterSSH, set up a list of hostnames, and execute things from the terminal windows opened. You may use screen/tmux/nohup to allow processes started to keep running, even after logout.
Yet, if you still want to play around with scripting, you may install tmux, and use:
while read host; do
scp "script_to_run_remotely" ${host}:~/
ssh ${host} tmux new-session -d '~/script_to_run_remotely'\; detach
done < hostlist
Note: hostlist should be a list of hostnames, one per line.

Linux mount fails with error Transport endpoint not connected

From time to time, for reasons unknown, the Amazon S3 fuse mount on a Linux server fails throughout the day. The only resolution is to umount and then mount the directory again. I tried writing the following shell script, which worked when I unmounted manually, but I learned there must be some other "state" when a link fails but is not actually unmounted.
Original error:
[root@app3 mnt]# cd s3fs
[root@app3 s3fs]# ls
ls: cannot access amazon: Transport endpoint is not connected
amazon
[root@app3 s3fs]# umount amazon
[root@app3 s3fs]# mount amazon/
Shell script attempt to check mount and remount if failed (worked in manual tests but failed):
#!/bin/bash
cat /etc/mtab | grep /mnt/$1 >/dev/null
if [ "$?" -eq "0" ]; then
echo /mnt/$1 is mounted.
else
echo /mnt/$1 is not mounted at this time.
echo remounting now...
umount /mnt/$1
mount /mnt/$1
fi
Why would the shell script work when I manually unmount the directory and run the test, but when the transport endpoint fails, the test returns true and the remount doesn't happen?
What is the best way to solve this?
I know this is old but it might help others facing this issue.
We had a similar problem with our bucket being unmounted randomly and getting the 'Transport endpoint is not connected' error.
Instead of using "cat /etc/mtab", I use "df -hT" and it works with my script. The problem is that the mount gets stuck in this weird state of being half unmounted, where "mtab" still sees it as mounted; I still don't know why.
This is the code I'm using:
#!/bin/bash
if [ $(df -hT | grep -c s3fs) != 1 ]
then
# unmount it first
umount /path/to/mounted/bucket;
# remount it
/usr/local/bin/s3fs bucket-name /path/to/mount/bucket -o noatime -o allow_other;
echo "s3fs is down";
# maybe send email here to let you know it went down
fi
Also make sure you run your script as root, otherwise it won't be able to unmount/remount.
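Another way to catch the half-dead state is to stat the mount point itself: on "Transport endpoint is not connected", stat fails even while /etc/mtab still lists the mount. A sketch (the bucket name, mount path, and options are placeholders, not known values):

```shell
#!/bin/bash
# Detect a broken fuse mount by stat-ing the mount point directly;
# a healthy mount answers, a disconnected one returns an error even
# though mtab still lists it as mounted.
remount_if_broken() {
    local mnt="$1"
    if stat "$mnt" >/dev/null 2>&1; then
        echo "$mnt looks healthy"
        return 0
    fi
    echo "$mnt is broken, remounting..."
    umount -l "$mnt" 2>/dev/null   # lazy unmount clears the stuck endpoint
    /usr/local/bin/s3fs bucket-name "$mnt" -o noatime -o allow_other
}

remount_if_broken /tmp
```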
