Linux mount fails with error Transport endpoint not connected

From time to time, for reasons unknown, the Amazon S3 FUSE mount on a Linux server fails throughout the day. The only resolution is to umount and then mount the directory again. I tried writing the shell script below, which worked when I manually unmounted and remounted, but I have since learned there must be some other "state" when the link fails without actually being unmounted.
Original error:
[root@app3 mnt]# cd s3fs
[root@app3 s3fs]# ls
ls: cannot access amazon: Transport endpoint is not connected
amazon
[root@app3 s3fs]# umount amazon
[root@app3 s3fs]# mount amazon/
Here is my shell script attempt to check the mount and remount it if the check fails (it worked in manual tests but fails in practice):
#!/bin/bash
if grep -q "/mnt/$1" /etc/mtab; then
    echo "/mnt/$1 is mounted."
else
    echo "/mnt/$1 is not mounted at this time."
    echo "remounting now..."
    umount "/mnt/$1"
    mount "/mnt/$1"
fi
Why does the shell script work when I manually unmount the directory and run the test, but when the transport endpoint fails the test returns true and the remount doesn't happen?
What is the best way to solve this?

I know this is old but it might help others facing this issue.
We had a similar problem with our bucket being unmounted randomly and getting the 'Transport endpoint is not connected' error.
Instead of checking /etc/mtab with cat, I use "df -hT", and that works with my script. The problem is that the mount gets stuck in a weird, half-unmounted state in which mtab still sees it as mounted; I still don't know why.
This is the code I'm using:
#!/bin/bash
if [ "$(df -hT | grep -c s3fs)" -ne 1 ]
then
    # unmount it first
    umount /path/to/mounted/bucket
    # remount it
    /usr/local/bin/s3fs bucket-name /path/to/mount/bucket -o noatime -o allow_other
    echo "s3fs is down"
    # maybe send an email here to let you know it went down
fi
Also make sure you run your script as root, otherwise it won't be able to unmount/remount.
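A variation worth considering (my own sketch, not part of the answer above): the half-dead state can also be caught by probing the mount point directly, since stat() fails with "Transport endpoint is not connected" on a stale FUSE mount even while mtab still lists it. The mount point path here is hypothetical, and the bare mount "$MOUNTPOINT" assumes a matching fstab entry:
#!/bin/bash
# Hypothetical mount point; adjust to your setup.
MOUNTPOINT=/mnt/s3fs/amazon
# stat fails on a disconnected FUSE mount even while mtab lists it as mounted.
if ! stat "$MOUNTPOINT" >/dev/null 2>&1; then
    echo "$MOUNTPOINT looks stale, remounting..."
    # fusermount -u is the FUSE unmount helper; fall back to a lazy umount.
    fusermount -u "$MOUNTPOINT" 2>/dev/null || umount -l "$MOUNTPOINT"
    mount "$MOUNTPOINT"
fi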

Related

How to determine in bash if / mountpoint was mounted from other OS?

I'm writing a shell script to check whether a user may be doing some nasty things in a Linux environment. One check I would like to do is determine whether the / filesystem was mounted by an external OS (for example a live OS) during a previous mount.
My first idea was to run a script at boot that gets the mount time of the previous boot using journalctl and the last mount time using tune2fs, and compare the two. But tune2fs reports the current mount, not the previous one, because the filesystem is already mounted when the script checks it.
Any ideas how to solve this?
Thanks!
dmesg's output includes information about the mounting of / (and other things as well). If your current OS's dmesg output has that information, / was mounted by the current system.
You can use the output of dmesg in your script like this:
#!/bin/bash
number=$(dmesg | grep -c "sdaN")
if [ "$number" -eq 0 ]; then
    echo "It was not mounted by the current system"
else
    echo "It was mounted by the current system"
fi
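A small variation on that script (my addition, not the answerer's): instead of hard-coding "sdaN", you can let findmnt resolve the device backing / before searching dmesg for it:
#!/bin/bash
# Resolve the block device backing / (e.g. /dev/sda2), then search dmesg for it.
root_dev=$(findmnt -n -o SOURCE /)
if dmesg | grep -q "$(basename "$root_dev")"; then
    echo "It was mounted by the current system"
else
    echo "It was not mounted by the current system"
fi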

Shell Script Disk Image Analysis

I'm a beginner programmer and I'm trying to learn how to mount a disk image and analyse it, but I can't find any guides online or any mention on web pages.
I've set myself this task because I'm thinking of joining a computer forensics course next year and believe these skills will give me a head start.
This is the code I've written so far, but I've become stuck. I want the script to extract command history data for all users, and also to log successful and unsuccessful login attempts from log files such as /var/log/wtmp.
I’m not exactly looking for someone to complete the code (as that would be counterproductive) but to point me towards hints and tips, guides and tutorials to get over these early stage of programming.
#!/bin/bash
mount="/myfilesystem"
if grep -qs "$mount" /proc/mounts; then
    echo "It's mounted."
else
    echo "It's not mounted."
    if mount "$mount"; then
        echo "Mount success!"
    else
        echo "Something went wrong with the mount..."
    fi
fi
sudo fdisk -l | grep /bin /sbin
To mount a filesystem, you need at least two arguments:
the image file or block device to be mounted, and
the place in your filesystem where you want to mount it.
So, if you want to mount an external USB drive that shows up as, e.g., /dev/sda and has a single partition (sda1), do the following:
Find or create a directory to use as the mount point (easiest as root); say you created /root/mountpoint.
Execute the mount command: mount /dev/sda1 /root/mountpoint
You can then step into the mounted filesystem with cd /root/mountpoint and look around.
Just as a side note: for forensics, you should always take an image of the device (e.g. dd if=/dev/sda1 of=/root/disk.img) to avoid destroying any evidence, and then mount the image through the loop driver (losetup /dev/loop1 /root/disk.img; mount /dev/loop1 /root/mountpoint).
Hope this gives you a hint to start over...
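Putting those steps together, here is a minimal sketch of the image-then-mount workflow (the device name /dev/sda1 and the paths are assumptions; run it as root). It adds a read-only flag, which the commands above omit but which is good practice when handling evidence:
#!/bin/bash
# Image the partition first so the original is never written to.
dd if=/dev/sda1 of=/root/disk.img bs=4M status=progress
mkdir -p /root/mountpoint
# -o ro,loop mounts the image read-only via an automatically allocated loop device.
mount -o ro,loop /root/disk.img /root/mountpoint
ls /root/mountpoint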

Bind mount not visible when created from a CGI script in Apache

My application allows the user to bind mount a source directory to a target mount point. This all works correctly, except that the mount does not exist outside the process that created it.
I have boiled down the issue to a very simple script.
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo ""
echo "<p>Hello</p>"
echo "<p>Results from pid #{$$}:</p>"
echo "<ul>"
c="sudo mkdir /shares/target"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
c="sudo mount --bind /root/source /shares/target"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
c="sudo mount | grep shares"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
c="sudo cat /proc/mounts | grep shares"
echo "<li>Executed '$c', Results: " $(eval $c) "</li>"
echo "</ul>"
The first two commands create a mount point and execute the mount. The last two commands verify the result. The script executes without issue. However, the mount is not visible or available in a separate shell process: executing the last two commands in a separate shell does not show the mount.
If I attempt to execute "rm -rf /shares/target" I get "rm: cannot remove '/shares/target/': Device or resource busy". Executing "lsof | grep /shares/target" generates no output. In a separate shell I switched to the apache user, but the mount is still not available. I have verified the apache process is not in a chroot by logging the output of "ls /proc/$$/root"; it points to "/".
I am running:
Apache 2.4.6
CentOS 7
httpd-2.4.6-31.el7.centos.1.x86_64
httpd-tools-2.4.6-31.el7.centos.1.x86_64
I turned logging up to debug, but the error_log shows nothing.
Thanks in advance.
This behavior is due to the following line in the httpd.service systemd unit:
PrivateTmp=true
From the systemd.exec(5) man page:
PrivateTmp=
Takes a boolean argument. If true, sets up a new file
system namespace for the executed processes and mounts
private /tmp and /var/tmp directories inside it that is not
shared by processes outside of the namespace.
[...]
Note that using this setting will disconnect propagation of
mounts from the service to the host (propagation in the
opposite direction continues to work). This means that this
setting may not be used for services which shall be able to
install mount points in the main mount namespace.
In other words, mounts made by httpd and child processes will not be
visible to other processes on your host.
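You can verify this yourself by entering the service's mount namespace; a hypothetical check, where 1234 stands in for a real httpd PID taken from ps:
# Run findmnt inside the httpd process's private mount namespace.
sudo nsenter --target 1234 --mount -- findmnt /shares/target
Inside the namespace the bind mount shows up; from a normal shell it doesn't.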
The PrivateTmp directive is useful from a security perspective, as described here:
/tmp traditionally has been a shared space for all local services and
users. Over the years it has been a major source of security problems
for a multitude of services. Symlink attacks and DoS vulnerabilities
due to guessable /tmp temporary files are common. By isolating the
service's /tmp from the rest of the host, such vulnerabilities become
moot.
You can safely remove the PrivateTmp directive from the unit (well, don't actually modify the packaged unit file -- create a new one at /etc/systemd/system/httpd.service, then systemctl daemon-reload, then systemctl restart httpd).
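A lighter-weight variant of the same fix (my suggestion, not the answerer's) is a systemd drop-in, e.g. created with systemctl edit httpd, which overrides just the one directive instead of copying the whole unit file:
# /etc/systemd/system/httpd.service.d/override.conf
[Service]
PrivateTmp=false
Then reload and restart as above: systemctl daemon-reload && systemctl restart httpd.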

Commands in a bash script don't work properly

I have this script :
#!/bin/bash
./process-list "$1"
det=$?
echo "$det"
if [ "$det" -eq 1 ]
then
    echo "!!!"
    ssh -n -f 192.0.2.1 "/usr/local/bin/sshfs -r 192.0.2.2:/home/sth/rootcheck_redhat /home/ossl7/r"
    rk=$(ssh -n -f 192.0.2.1 'cd /home/s/r/rootcheck-2.4; ./ossec-rootcheck >&2; echo $?' 2>res)
    if [ "$rk" -eq 0 ]
    then
        echo "not!"
    fi
fi
exit
I ssh to system 192.0.2.1 and run an sshfs command on it. Actually, I want to mount a directory of system 192.0.2.2 on system 192.0.2.1 and then run a program (located in that directory) on system 192.0.2.1. All these ssh and sshfs commands work properly when I run them manually, and the output of ossec-rootcheck is written to the file res; but when I run this script, the mount happens yet no output is written to res. I guess ossec-rootcheck is run, but I don't know why the output isn't written!
This script used to work properly; I don't know what suddenly happened!
As far as I understand the program, the remote command redirects its stdout to stderr (>&2), but how does that get back to the local machine where the redirection is evaluated?
The closing ' on the rk= line means the 2>res happens locally (and ssh itself reports no error; the remote error output, if any, is lost when ssh completes successfully). You could try >res instead; it will capture whatever ssh prints out, though unfortunately that includes non-errors.
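As a sketch of an alternative (my suggestion, not the answerer's): ssh exits with the remote command's exit status, so you can read the status from $? and send all remote output to res in one go. Note that -f is dropped, since the local script has to wait for the remote command to finish before it can read the status:
# Capture the remote exit status directly; remote stdout and stderr both go to res.
ssh -n 192.0.2.1 'cd /home/s/r/rootcheck-2.4 && ./ossec-rootcheck' >res 2>&1
rk=$?
if [ "$rk" -eq 0 ]; then
    echo "not!"
fi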

Bash Script - umount a device, but don't fail if it's not mounted?

I'm writing a bash script and I have errexit set, so that the script will die if any command doesn't return a 0 exit code, i.e. if any command doesn't complete successfully. This is to make sure that my bash script is robust.
I have to mount some filesystems, copy some files over, then umount them. I'm putting a umount /mnt/temp at the start so that it'll unmount the directory before doing anything. However, if it's not mounted, umount will fail and stop my script.
Is it possible to do a umount --dont-fail-if-not-mounted /mnt/temp? So that it will return 0 if the device isn't mounted? Like rm -f?
The standard trick to ignore the return code is to wrap the command in a boolean expression that always evaluates to success:
umount .... || /bin/true
Assuming that your umount returns 1 when device isn't mounted, you can do it like that:
umount … || [ $? -eq 1 ]
Then bash will assume no error if umount returns 0 or 1 (i.e. unmounts successfully or device isn't mounted) but will stop the script if any other code is returned (e.g. you have no permissions to unmount the device).
Ignoring exit codes isn't really safe as it won't distinguish between something that is already unmounted and a failure to unmount a mounted resource.
I'd recommend testing that the path is mounted with mountpoint which returns 0 if and only if the given path is a mounted resource.
This script will exit with 0 if the given path was not mounted; otherwise it gives the exit code from umount.
#!/bin/sh
if mountpoint -q "$1"; then
    umount "$1"
fi
You can also do it as a one-liner.
! mountpoint -q "$mymount" || umount "$mymount"
I just found ":" more useful and wanted a similar solution that lets the script know what's happening:
umount ...... || { echo "umount failed but not to worry" ; : ; }
This returns true, with the message, even though the umount failed.
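Tying this back to the errexit setting in the question, a minimal sketch of how the mountpoint test composes with set -e (the path is the question's; the helper name is my own):
#!/bin/bash
set -e  # errexit, as in the question

# Only unmount if the path is actually a mount point, so set -e never
# trips over an already-unmounted directory.
safe_umount() {
    if mountpoint -q "$1"; then
        umount "$1"
    fi
}

safe_umount /mnt/temp
# ... mount, copy files over, unmount again ...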
