Problems running shell script from within .NET Core service daemon on Linux

I'm trying to execute a .sh script from within a .NET Core service daemon and getting weird behavior. The purpose of the script is to create an encrypted container, format it, set some settings, then mount it.
I'm using .NET Core version 3.1.4 on Raspbian on a Raspberry Pi 4.
The problem: I have the below script which creates the container, formats it, sets the settings, then attempts to mount it. Everything appears to work except the last command: the mount call never actually takes effect, and the mount point is not valid.
The kicker: after the script is run via the service, if I open a terminal and issue the mount command there manually, it mounts correctly. I can then go to that mount point and it shows ~10GB of space available, meaning it's using the container.
Note: make sure the script is chmod +x when testing. You'll also need cryptsetup installed for it to work.
Thoughts:
I'm not sure if some environment or PATH variables are missing for the shell script to function properly. Since this is a service, I could edit the Unit to include this information, if I knew what it was.
In previous attempts at issuing bash commands, I've had to set the DISPLAY variable as below for them to work correctly (because of needing to work with the desktop). For this issue that doesn't seem to matter, but if I need to set the script as executable, this command is used as an example:
string chmodArgs = string.Format("DISPLAY=:0.0; export DISPLAY && chmod +x {0}", scriptPath);
chmodArgs = string.Format("-c \"{0}\"", chmodArgs);
I'd like to see if someone can take the below and test on their end to confirm and possibly help come up with a solution. Thanks!
#!/bin/bash
# variables
# s0f4e7n4r4h8x4j4
# /usr/sbin/content1
# content1
# /mnt/content1
# 10240
# change the 10240M below to whatever size the container should be
echo "Allocating 10240MB..."
fallocate -l 10240M /usr/sbin/content1
sleep 1
# using echo with -n passes in the password required for the cryptsetup command. The dash at the end tells cryptsetup to read the key from stdin
echo "Formatting..."
echo -n s0f4e7n4r4h8x4j4 | cryptsetup luksFormat /usr/sbin/content1 -
sleep 1
echo "Opening..."
echo -n s0f4e7n4r4h8x4j4 | cryptsetup luksOpen /usr/sbin/content1 content1 -
sleep 1
# create without journaling
echo "Creating filesystem..."
mkfs.ext4 -O ^has_journal /dev/mapper/content1
sleep 1
# enable writeback mode
echo "Tuning..."
tune2fs -o journal_data_writeback /dev/mapper/content1
sleep 1
if [ ! -d "/mnt/content1" ]; then
echo "Creating directory..."
mkdir -p /mnt/content1
sleep 1
fi
# mount with no access time to stop unnecessary writes to disk for just access
echo "Mounting..."
mount /dev/mapper/content1 /mnt/content1 -o noatime
sleep 1
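To verify the result from a separate shell, the mount can be checked with:
mount | grep content1
df -h /mnt/content1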
This is how I'm executing the script in .NET:
var proc = new System.Diagnostics.Process {
    StartInfo =
    {
        FileName = pathToScript,
        WorkingDirectory = workingDir,
        Arguments = args,
        UseShellExecute = false
    }
};
if (proc.Start())
{
    while (!proc.HasExited)
    {
        System.Threading.Thread.Sleep(33);
    }
}
The Unit file used for the service daemon:
[Unit]
Description=Service name
[Service]
ExecStart=/bin/bash -c 'PATH=/sbin/dotnet:$PATH exec dotnet myservice.dll'
WorkingDirectory=/sbin/myservice/
User=root
Group=root
Restart=on-failure
SyslogIdentifier=my-service
PrivateTmp=true
[Install]
WantedBy=multi-user.target
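For reference, environment variables can be supplied to a service through Environment= directives in the unit's [Service] section; if missing variables were the cause, something like this (values are illustrative) would set them:
[Service]
Environment=DISPLAY=:0.0
Environment=PATH=/usr/sbin:/usr/bin:/sbin:/bin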

The problem was that the mount command cannot be run directly from within the service. Through extensive trial and error, even running mount with verbose output showed no errors, yet nothing was actually mounted. It's very misleading that no failure message is given to the user. (The PrivateTmp=true line in the unit above is a likely culprit: it puts the service in its own mount namespace, so mounts it creates are not propagated to the host; see the Apache bind-mount answer further down.)
The solution is to create a unit file (a .mount "service") to handle the mount/umount. The steps are below, with a link to the article that inspired the solution.
Step 1: Create the Unit File
The key is that the .mount file needs to be named to match the Where= path in the unit file. So if you're mounting /mnt/content1, your file would be:
sudo nano /etc/systemd/system/mnt-content1.mount
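If you're unsure how a path translates to a unit name, systemd-escape will generate it for you:
systemd-escape -p --suffix=mount /mnt/content1
# prints: mnt-content1.mount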
Here are the unit file details I used.
[Unit]
Description=Mount Content (/mnt/content1)
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=/dev/mapper/content1
Where=/mnt/content1
Type=ext4
Options=noatime
[Install]
WantedBy=multi-user.target
Step 2: Reload systemd
systemctl daemon-reload
Final steps:
You can now issue start/stop on the new "service" that is dedicated just to mounting and unmounting. This will not auto-mount on reboot; if you need that, you'll have to enable the unit, as shown after the start/stop commands below.
systemctl start mnt-content1.mount
systemctl stop mnt-content1.mount
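To make the mount happen automatically at boot, enable the unit (this is what the [Install] section above is for):
sudo systemctl enable mnt-content1.mount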
Article: https://www.golinuxcloud.com/mount-filesystem-without-fstab-systemd-rhel-8/

Related

Linux systemd service file to start and stop a minecraft server

I am trying to run a minecraft server on a remote linux instance.
I would like the instance to start up the server on a screen named serverscreen, owned by the user named minecraft, once the system boots, and send a stop command to the serverscreen when the instance shuts down. Then it needs to wait until the server has stopped before actually shutting down.
I am quite new to Linux, but I have managed to come up with a few commands that work; I just have issues trying to start and stop the server automatically.
I have tried quite a few things, like creating a .sh script to run on startup with crontab -e (@reboot script.sh), or creating a file in /etc/rc.local with #!/bin/sh sh script.sh, but those methods didn't seem to work properly for me. They also don't run on shutdown, unfortunately. Therefore, I thought it would be best to create a service file named minecraft.service with the following commands:
[Unit]
Description=Minecraft Server
After=network.target
[Service]
User=minecraft
Nice=5
KillMode=none
SuccessExitStatus=0 1
InaccessibleDirectories=/root /sys /srv /media -/lost+found
NoNewPrivileges=true
WorkingDirectory=/opt/minecraft/server
ReadWriteDirectories=/opt/minecraft/server
#### Command to start the server.
ExecStart=sudo -u minecraft screen -dmS serverscreen java -Xms6G -Xmx6G -jar /opt/minecraft/server/forgeserver.jar nogui
#### Command to stop the server.
ExecStop=sudo -u minecraft screen -S serverscreen -p 0 -X eval "stuff stop^M"
##### Try to wait until the server has stopped. I am not sure about this line since I haven't been able to test it properly.
ExecStop=/bin/bash -c "while ps -p $MAINPID > /dev/null; do /bin/sleep 1; done"
[Install]
WantedBy=multi-user.target
but when running this, it gives me an error saying that I did not provide an absolute path for something.
Could someone help me setup a service file that will boot up the server on a screen named serverscreen for the user minecraft, and run command stop when the instance shuts down after the server has been stopped?
Thanks to @Riz, the service now works as intended by using a bash script to run the commands.
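For reference, the "absolute path" error comes from systemd requiring ExecStart= and ExecStop= to begin with an absolute executable path, which the sudo prefix here violates (sudo is also redundant, since the unit already runs as User=minecraft). A minimal sketch of the bash-script approach, with an illustrative script path:
#!/bin/bash
# /opt/minecraft/server/start.sh - launch the server inside a detached screen
/usr/bin/screen -dmS serverscreen /usr/bin/java -Xms6G -Xmx6G -jar /opt/minecraft/server/forgeserver.jar nogui
The unit would then use ExecStart=/opt/minecraft/server/start.sh (with the script marked executable); since screen detaches immediately, Type=forking or RemainAfterExit=yes is also needed so systemd doesn't treat the service as dead the moment the script exits.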

Run script after all udev rules are through and device is completely initialized

I am attempting to read information from a USB device after it is attached.
The information I require is accessed through two APIs: v4l2 and libusb.
Both are used through a script that is called correctly, as the v4l2 part executes as expected. The udev rules:
SUBSYSTEM=="usb", ATTRS{idVendor}=="199a", GROUP="video", MODE="0666", TAG+="uaccess", TAG+="udev-acl"
ACTION=="add", SUBSYSTEM=="video4linux", \
ATTRS{idVendor}=="199a", \
RUN+="/usr/bin/camera-infos-wrapper %s{serial}"
When I run the script manually all steps are executed correctly.
I have a wrapper around the script to set additional environment variables.
#!/usr/bin/env bash
export DISPLAY=":0"
export XAUTHORITY=/home/user/.Xauthority
# sleep 3 <- does not work
# sleep 4 <- works
# ensure debug output is logged
exec 1> >(logger -s -t $(basename $0)) 2>&1
/usr/bin/tcam-index-camera $1
When I sleep for 3 seconds libusb is unable to correctly open the device.
Sleeping for 4 seconds allows correct access.
Since this has to run on more than one PC, I would prefer a more robust solution.
Is there any way to run the script after all udev rules are through and the device is completely initialized?
The way to go seems to be systemd.
The systemd unit camera-index@.service:
[Unit]
Description=My service
After=dev-ident%i.device
Wants=dev-ident%i.device
[Service]
Type=forking
ExecStart=/usr/bin/script %i
Note the '@' in the file name. It is important, as it is what makes the unit a template that can take an argument.
The udev rule looks like:
ACTION=="add", SUBSYSTEM=="video4linux", \
ATTRS{idVendor}=="<vendor id>", \
TAG+="systemd", \
SYMLINK+="ident%s{serial}", \
ENV{SYSTEMD_WANTS}="camera-index@%s{serial}.service"
The systemd unit waits until the symlink is created and executes the script after that.
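For a camera with serial 12345678 (an illustrative value), the rule creates /dev/ident12345678 and pulls in the instantiated unit, in which %i expands back to the serial:
systemctl status camera-index@12345678.service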

OpenSUSE udev device does not start systemd service when replugged in

I'm trying to create an automatic backup system for when an external hard drive is plugged in:
# /etc/udev/rules.d/300-backup-projects.rules
SUBSYSTEMS=="usb", ACTION=="add", ATTRS{idVendor}=="0480", ATTRS{idProduct}=="b202", ENV{SYSTEMD_WANTS}="backup_projects.service"
# /usr/lib/systemd/system/backup_projects.service
[Unit]
Description=Backup Projects Folder
BindsTo=run-media-user-Backup\x20EXT.mount
After=run-media-user-Backup\x20EXT.mount
[Service]
ExecStart=/home/user/backup.sh
[Install]
WantedBy=run-media-user-Backup\x20EXT.mount
And the backup.sh file (just testing):
#!/bin/bash
sleep 2;
echo $(date) > /run/media/user/Backup\x20EXT/data.txt
When I run sudo systemctl start backup_projects.service, the data.txt file gets created. But when I replug the external hard drive, nothing happens.
It turned out that the service had stopped once the file was created.
I don't know if I'm missing something?
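One thing worth checking: \x20 is systemd's unit-name escaping for a space, not bash syntax, so the redirect in backup.sh writes to a literally-named Backupx20EXT path. Assuming the mount point really is /run/media/user/Backup EXT, the script would need a quoted literal space:
#!/bin/bash
sleep 2
echo "$(date)" > "/run/media/user/Backup EXT/data.txt"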

Bind mount not visible when created from a CGI script in Apache

My application allows the user to bind mount a source directory to a target mount point. This is all working correctly except the mount does not exist outside the process that created it.
I have boiled down the issue to a very simple script.
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo ""
echo "<p>Hello</p>"
echo "<p>Results from pid #{$$}:</p>"
echo "<ul>"
c="sudo mkdir /shares/target"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
c="sudo mount --bind /root/source /shares/target"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
c="sudo mount | grep shares"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
c="sudo cat /proc/mounts | grep shares"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
echo "</ul>"
The first two commands create a mount point and execute the mount. The last two commands verify the result. The script executes without issue. However, the mount is not visible or available in a separate shell process; executing the last two commands in a separate shell does not show the mount. If I attempt to execute "rm -rf /shares/target" I get "rm: cannot remove '/shares/target/': Device or resource busy". Executing "lsof | grep /shares/target" generates no output. In a separate shell I switched to the apache user, but the mount is still not available. I have verified the apache process is not in a chroot by logging the output of "ls -l /proc/$$/root": it points to "/".
I am running:
Apache 2.4.6
CentOS 7
httpd-2.4.6-31.el7.centos.1.x86_64
httpd-tools-2.4.6-31.el7.centos.1.x86_64
I turned logging to debug but the error_log indicates nothing.
Thanks in advance.
This behavior is due to the following line in the httpd.service systemd unit:
PrivateTmp=true
From the systemd.exec(5) man page:
PrivateTmp=
Takes a boolean argument. If true, sets up a new file
system namespace for the executed processes and mounts
private /tmp and /var/tmp directories inside it that are not
shared by processes outside of the namespace.
[...]
Note that using this setting will disconnect propagation of
mounts from the service to the host (propagation in the
opposite direction continues to work). This means that this
setting may not be used for services which shall be able to
install mount points in the main mount namespace.
In other words, mounts made by httpd and child processes will not be
visible to other processes on your host.
The PrivateTmp directive is useful from a security perspective, as described here:
/tmp traditionally has been a shared space for all local services and
users. Over the years it has been a major source of security problems
for a multitude of services. Symlink attacks and DoS vulnerabilities
due to guessable /tmp temporary files are common. By isolating the
service's /tmp from the rest of the host, such vulnerabilities become
moot.
You can safely remove the PrivateTmp directive from the unit file (well, don't actually modify the unit file -- create a new one at /etc/systemd/system/httpd.service, then systemctl daemon-reload, then systemctl restart httpd).
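A sketch of those steps, using the stock unit path from the CentOS 7 httpd package:
cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service
# edit the copy to remove the PrivateTmp=true line, then:
systemctl daemon-reload
systemctl restart httpd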

Arch Linux / systemd - prevent any kind of shutdown/reboot

I'm running Arch-based Manjaro Linux and wrote myself a little update program that starts every 7 hours and runs completely in the background. This update program is started by systemd.
What I want to know is: how can I prevent any system shutdown/reboot while this program runs, no matter whether the user just wants to turn the machine off or some program requests it?
The best would be if any shutdown/reboot action weren't cancelled but delayed instead, so that when the update program has finished its run, the shutdown/reboot continues.
My systemd parts are:
uupgrades.timer
[Unit]
Description=UU Upgrades Timer
[Timer]
OnBootSec=23min
OnUnitActiveSec=7h
Unit=uupgrades.target
[Install]
WantedBy=basic.target
uupgrades.target
[Unit]
Description=UU Upgrades Timer Target
StopWhenUnneeded=yes
and in the folder uupgrades.target.wants
uupgrades.service
[Unit]
Description=UU Update Program
[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
ExecStart=/usr/bin/uupgrades
How can I achieve this?
If a user with sufficient permissions to reboot the server or manipulate processes wants to stop or reboot the machine, you can't stop them. That's just how Linux works. You should set up permissions and accounts such that no other users have root permissions, or permissions sufficient to manipulate the process or the user it runs as.
When I want to block myself from rebooting or shutting down, I point my usual shutdown and reboot aliases at beep;beep;beep;.
In multi-user environments you could move the reboot, shutdown, etc. binaries aside and move them back when shutdown should be allowed again.
You could also temporarily put an executable shell script in place of those binaries that prints information about the postponed shutdown. This script could set a flag if a shutdown was requested.
Q&D example script:
#!/usr/bin/env bash
echo "preventing reboot"
BACKUPBINARY_REBOOT=$(mktemp);
mv /bin/reboot $BACKUPBINARY_REBOOT;
FLAGFILE=$(mktemp);
echo '#!/usr/bin/env bash' > /bin/reboot;
echo '# original reboot binary was moved to '"$BACKUPBINARY_REBOOT" >> /bin/reboot;
echo 'echo request-reboot > '"$FLAGFILE" >> /bin/reboot;
echo 'echo reboot is prevented, your request will trigger later' >> /bin/reboot;
chmod 666 "$FLAGFILE";
chmod +x /bin/reboot;
echo "postponed reboot - press enter to allow it again and make up for requested reboot";
read;
mv "$BACKUPBINARY_REBOOT" /bin/reboot;
if grep -q request-reboot "$FLAGFILE"; then
rm $FLAGFILE;
/bin/reboot;
fi
You can also add a script under /usr/lib/systemd/system-shutdown/ (executables there are run by systemd late in the shutdown sequence) and have it check whether your update script is running and, if so, delay the shutdown until it finishes.
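Another mechanism worth knowing here is systemd's inhibitor locks: wrapping the updater in systemd-inhibit makes logind refuse shutdown and reboot requests for as long as the program runs (requests are refused rather than queued, so they have to be retried afterwards). In uupgrades.service that could look like:
ExecStart=/usr/bin/systemd-inhibit --what=shutdown --why="update in progress" /usr/bin/uupgrades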
