test -x in Mounted Filesystem - linux

I'm using QEMU to test a Raspberry Pi image before putting it onto an SD card. I'm setting up an automated script to put some files onto the Pi, among other things, so that when I put the SD card into the Pi, it is immediately usable. I think I've run into a quirk in how permissions work, but I'm not sure.
When you run test -x on a file, it is supposed to succeed only if the file is executable by your user, i.e. the relevant x bit is set for you. However, this doesn't seem to apply to files inside mounted filesystems.
The host is Ubuntu, and the guest backing image is Raspberry Pi Buster. I created the mountpoint with guestmount, because I was mounting a snapshot, not the original, and this seems to be the only/best way to do that. The basic flow was:
qemu-img convert -O qcow2 raspberry-pi.img raspberry-pi.qcow
qemu-img create -f qcow2 -b raspberry-pi.qcow snapshot.qcow
sudo guestmount -a 'snapshot.qcow' -i 'mountpoint/'
For comparison, I have a file outside the mountpoint. The file I'm testing inside the mountpoint was created by root, so I chowned the outside file to root as well:
$ sudo ls -l --author ~/test/file
-rw-r--r-- 1 root root root 1133 Oct 8 21:43 /home/me/test/file
$ sudo test -x ~/test/file && echo 'exists' || echo "doesn't exist"
doesn't exist
However, for a file inside the mountpoint, with the same permissions, the test is successful:
$ sudo ls -l --author mountpoint/home/pi/test/file
-rw-r--r-- 1 root root root 0 Oct 8 22:41 mountpoint/home/pi/test/file
$ sudo test -x mountpoint/home/pi/test/file && echo 'exists' || echo "doesn't exist"
exists
Why is the file inside the mountpoint executable, whereas the one outside is not? Is this because the mounted filesystem is a different architecture (x86 vs. ARM)? Is it because I'm using guestmount, and the filesystem isn't the real filesystem, but an amalgamation of the snapshot & the original file? Or is this just the way mounting works? Where can I find more resources on this peculiar behavior, like other permission quirks I might encounter?
If you need any more information about the host or guest, please ask.

This is a bug in libguestfs, which guestmount uses. You can see it in the source:
/* Root user should be able to access everything, so only bother
 * with these fine-grained tests for non-root.  (RHBZ#1106548).
 */
if (fuse->uid != 0) {
  [...]
  if (mask & X_OK)
    ok = ok &&
      (fuse->uid == statbuf.st_uid ? statbuf.st_mode & S_IXUSR
       : fuse->gid == statbuf.st_gid ? statbuf.st_mode & S_IXGRP
       : statbuf.st_mode & S_IXOTH);
}
The filesystem takes a shortcut: since you're root, it assumes you have full access, so it doesn't bother checking the permissions at all.
As you've demonstrated, this is not true. Root should only get execute permission on directories, and on files where at least one of the execute bits is set.
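You can check the kernel's actual rule on an ordinary (non-guestmount) filesystem; here is a quick demonstration using a throwaway file named plain (the name is arbitrary):
$ touch plain && chmod 644 plain
$ sudo test -x plain && echo 'executable' || echo 'not executable'
not executable
$ chmod o+x plain
$ sudo test -x plain && echo 'executable' || echo 'not executable'
executable
As soon as any one of the three execute bits is set, root gets X_OK.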
I was unable to build the project to submit a patch, but you can file a bug.

Related

Allow user to run binary as root

I have a script, written by someone else, which mounts a filesystem, and I would like to reproduce it.
The script has been compiled with shc, and is used to mount a filesystem for a particular user, but runs with root privileges. My guess is that it does something like this (mount_script.sh):
#!/bin/bash
mount -t cifs -o username=$USER,domain=my_domain //hostname.com/Files /mnt/${USER}-drive
I have compiled the script with shc and then applied
chmod u+s mount_script.sh.x
so that
-rwsr-xr-x. 1 root root 11088 Feb 15 14:11 mount_script.sh.x
matches the original compiled bash script's permissions. The original is wrapped by the following mount_drive.sh:
#!/bin/bash
if [ "$(mountpoint -q /mnt/${USER}-drive/ && echo "mounted" || echo "not mounted")" = "not mounted" ]; then
echo
echo "Not mounted, running mount script..."
echo
mount_script.sh.x
else
echo
echo "The drive is already mounted at /mnt/${USER}-drive..."
echo
fi
With permissions:
-rwxr-xr-x 1 root root 335 Sep 20 10:58 /usr/local/bin/mount_drive.sh
When I try to run it as my normal user, I get:
Not mounted, running mount script...
mount: only root can use "--options" option
What should the script contain to avoid this problem and allow $USER to run it successfully?
Is there any reason this would be a stupid idea from a security perspective?
Thanks!
If this is NOT intended for multiple users, then the simplest (and most secure) method is for the partition to be mounted via fstab with a specified user as owner of the partition and restricted privileges. Namely, for a filesystem type without native Unix permissions (the uid/gid/fmask/dmask options only exist for those, e.g. vfat or ntfs):
UUID={something} /mnt/user-drive vfat defaults,nosuid,uid=1000,gid=1000,fmask=0077,dmask=0077 0 2
That would have the partition mounted every time the system boots, but only the specified user (or anyone with sudo privileges able to assume that identity) could access it. If that leaves it too open, you could consider whether to encrypt that partition as well. Implementing that is beyond my experience, but it would allow only that user to mount/use/access it, on the basis of the password required to mount. You also have to control who can change the partition encryption password, and how.
If you pursue the encryption option, you can avoid the fstab approach, and allow the user to mount/unmount at will, since he would be the only one with the password.
The danger with encryption is that when the password is set, it needs to be stored securely, so that administrators can use it to recover data when (not if) the organization loses the person who had the "master key".
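If you want the user to be able to mount and unmount at will without going down the encryption route, the stock mechanism for that is the user option in fstab; a sketch, reusing the hypothetical entry above (user also implies nosuid, nodev and noexec unless overridden):
UUID={something} /mnt/user-drive vfat noauto,user,uid=1000,gid=1000,fmask=0077,dmask=0077 0 2
With that entry, the ordinary user can run mount /mnt/user-drive and umount /mnt/user-drive without sudo.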

mount cifs too long due to chown for each file

I need to run an application on a VM, where I can do my setup in a script that will be run as root when the machine is built.
In this script I would like to mount a windows FS, so using CIFS.
So I am writing the following in the fstab:
//win/dir /my/dir cifs noserverino,ro,uid=1002,gid=1002,credentials=/root/.secret 0 0
After this, still in the same script, I try to mount it:
mount /my/dir
That results in two lines of output for each file, along the lines of:
chown: changing ownership of `/my/dir/afile': Read-only file system
Because I have a lot of files, this takes forever...
With the same fstab, I asked an admin to mount the same directory manually:
sudo mount /my/dir
-> this is very quick with NO extra output.
I assume the difference in behavior is due to the fact that the script is run as root.
Any idea how to avoid the issue while keeping the script running as root (this is not under my control)?
Cheers.
Renaud

zsh compinit: insecure directories. Compaudit shows /tmp directory

I'm running zsh on a Raspberry Pi 2 (Raspbian Jessie). zsh compinit is complaining about the /tmp directory being insecure. So, I checked the permissions on the directory:
$ compaudit
There are insecure directories:
/tmp
$ ls -ld /tmp
drwxrwxrwt 13 root root 16384 Apr 10 11:17 /tmp
Apparently anyone can do anything in the /tmp directory, which makes sense, given its purpose. So I tried the suggestions in this Stack Overflow question, as well as similar suggestions on other sites. Specifically, it suggests turning off group write permissions on that directory. Because of how the permissions looked according to ls -ld, I had to turn off the 'all' write permissions as well. So:
$ sudo su
% chmod g-w /tmp
% chmod a-w /tmp
% exit
$ compaudit
# nothing shows up, zsh is happy
This shut zsh up. However, other programs started to break. For example, gnome-terminal would crash whenever I typed the letter 'l'. Because of this, I had to turn the write permissions back on, and just run compinit -u in my .zshrc.
What I want to know: is there any better way to fix this? I'm not sure that it's a great idea to let compinit use an insecure directory. My dotfiles repo is hosted here, and the file where I now run compinit -u is here.
First, the original permissions on /tmp were correct. Make sure you've restored them correctly: ls -ld /tmp must start with drwxrwxrwt. You can use sudo chmod 1777 /tmp to set the correct permissions. /tmp is supposed to be writable by everyone, and any other permissions are highly likely to break stuff.
compaudit complains about directories in fpath, so one of the directories in your fpath is of the form /tmp/… (not necessarily /tmp itself). Check how fpath is being set. Normally the directories in fpath should be only subdirectories of the zsh installation directory, and places in your home directory. A subdirectory of /tmp wouldn't get in there without something unusual on your part.
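To inspect it, you can print the array one entry per line (print -l is a zsh builtin):
% print -l $fpath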
If you can't find out where the stray directory is added to fpath, run zsh -x 2>zsh-x.log, and look for fpath in the trace file zsh-x.log.
It can be safe to use a directory under /tmp, but only if you created it securely. The permissions on /tmp allow anybody to create files, but users can only remove or rename their own files (that's what the t at the end of the permissions means). So if a directory is created safely (e.g. with mktemp -d), it's safe to use it in fpath. compaudit isn't sophisticated enough to recognize this case, and in any case it wouldn't have enough information since whether the directory is safe depends on how it was created.
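As a minimal sketch of the safe pattern (the actual directory name is whatever mktemp generates under /tmp):
% d=$(mktemp -d)     # created with mode 0700, owned by you
% fpath=($d $fpath)  # safe: no other user can write into $d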

pivot_root device or resource busy

I run the following commands on 64-bit Ubuntu on VMware:
mount /dev/sda1 /newroot
cd /newroot
mkdir old-root
pivot_root . old-root
I get an error that I do not understand:
pivot_root: device or resource busy
Any ideas?
I saw the same error when the new root was a plain directory. When the new root is a mount point, it works; a bind mount of a directory is fine too. You also need to make sure the new root directory's permissions are 0755 and that it is owned by root.
A related answer states that you need to umount /proc first; I did not see the same.
My host Ubuntu is 16.04 and it pivots into 18.04. I used unshare -m -p -f /bin/bash, followed by pivot_root . old_root. The -f is necessary to avoid a memory allocation error.
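Putting that together, a minimal sketch for the plain-directory case, assuming /newroot already holds a usable root tree: bind-mount the directory over itself so that it becomes a mount point, then pivot:
mount --bind /newroot /newroot
cd /newroot
mkdir -p old-root
pivot_root . old-root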

Linux: 'transferring'/mirroring read-only permissions for symlinks (for webserver)

Please let me explain what I mean by the question:
This is the context: I'm a user on a webserver, where I have phpicalendar installed; then, I choose a directory, say /webroot/mylogin/phpicalendar/mycals to host my .ics calendar text files.
EDIT: Previously, instead of '/webroot', I had used '/root', but I really didn't mean the Linux '/root' directory. I just wanted it as a stand-in for the real location on the webserver, so it serves as a common point of reference; concretely, /webroot = /media/some/path.
Then, I can enter this directory in the phpicalendar's config.inc.php:
$configs = array(
'calendar_path' => '/webroot/mylogin/phpicalendar/mycals',
...
Then, phpicalendar will run through this directory, grab the .ics files there (say, mycal.ics and mycal2.ics) and render them - so far, so good.
The thing is, I would now like to add a second calendar directory, located on the same webserver, but where I have only read permissions, say /webroot/protected/cals. I know that I have read permission, because in the shell I can do, say,
$ less /webroot/protected/cals/maincal.ics
and I can read the contents fine.. So now:
If I enter /webroot/protected/cals as a 'calendar_path', phpicalendar can read and render the files there (say, 'maincal.ics', 'maincal2.ics') without a problem
However, phpicalendar can have only one 'calendar_path', so I can either use the protected calendars, or my customized calendars - but not both
So, I thought, I could symlink the protected calendars in my customized directory - and get the best of both worlds :)
So, here is a shell snippet of what I would do
$ cd /webroot/mylogin/phpicalendar/mycals
$ ls -la
drwxrwxrwx 2 myself myself 4096 2011-03-03 12:50 .
-rw-r--r-- 1 myself myself 1234 2011-01-20 07:32 mycal.ics
-rw-r--r-- 1 myself myself 1234 2011-01-20 07:32 mycal2.ics
...
$ ln /webroot/protected/cals/maincal.ics . # try a hard link first
ln: creating hard link `./maincal.ics' => `/webroot/protected/cals/maincal.ics': Invalid cross-device link
$ ln -s /webroot/protected/cals/maincal.ics . # symlink - works
$ ln -s ../../../protected/cals/maincal.ics relmaincal.ics # symlink via relative
$ ln -s mycal.ics testcal.ics # try a symlink to a local file
$ ls -la # check contents of dir now
drwxrwxrwx 2 myself myself 4096 .
-rw-r--r-- 1 myself myself 1234 mycal.ics
-rw-r--r-- 1 myself myself 1234 mycal2.ics
lrwxrwxrwx 1 myself myself 21 testcal.ics -> mycal.ics
lrwxrwxrwx 1 myself myself 56 maincal.ics -> /webroot/protected/cals/maincal.ics
lrwxrwxrwx 1 myself myself 66 relmaincal.ics -> ../../../protected/cals/maincal.ics
Ok, so here's what happens:
less maincal.ics works in the shell
less relmaincal.ics fails with 'relmaincal.ics: No such file or directory' (even though shell autocompletion for the relative path worked while I was creating the symlink!)
When you open phpicalendar now, it will render mycal.ics, mycal2.ics and testcal.ics (and they will work)
however, maincal.ics and relmaincal.ics will not be parsed or displayed
Now, this could be that PHP cannot resolve symlinks; however, I speculate that the situation is this:
When I do less maincal.ics, it is my own user doing the reading, and I have read permission for /webroot/protected/cals
phpicalendar (that is, the Apache webserver user) can also access /webroot/protected/cals read-only when given the 'hardcoded' path
phpicalendar is also capable of reading local symlinks fine
Thus, I suspect the problem is: when reading the symlinks to the protected cals, the user performing the access is the Apache web user, which doesn't get permission to follow a symlink to the protected/cals location!
The thing now is, I can easily copy the .ics files locally; however, they are being changed by someone else, which is why I'd have preferred a symlink.
And my question is: can I do some sort of trickery, so that when phpicalendar/Apache tries to access a symlink to protected/cals, it 'thinks' it is a local file, while the contents of the protected/cals file are 'piped' back to phpicalendar/Apache? I guess I'm thinking of something in terms of:
$ mkfifo mypipe
$ ln -s mypipe testpipe.ics
$ cat ./testpipe.ics # in one terminal
$ cat /webroot/protected/cals/maincal.ics > mypipe # in other terminal
... which would otherwise (I think) handle the permissions problem, except that I don't want to run cat manually; that would have to happen in the background each time an application asks to read testpipe.ics :)
Well, thanks in advance for any comments on this - looking forward to hearing some,
Cheers!
Umm, I really doubt that the account the web server runs under can read anything under /root. That directory is usually mode 0700, user root, group root, or something very similar to that - meaning no non-root access is allowed. If you're running the web server as root, file read permissions are the least of your problems...
Your best bet then would be to place the read-only calendar files somewhere publicly available, and symlink to that location from wherever under /root you want to be able to access them.
Start by checking whether the Apache user can view your calendars:
you#host $ sudo -u <apache-user> /bin/bash
apache#host $ less /root/protected/cals/maincal.ics
