Setcap over SSHFS - linux

I am running a VM on my machine and have mounted a host folder inside the VM using sshfs (auto-mounted via fstab).
abc#xyz:/home/machine/test on /home/vm/test type fuse.sshfs (rw,relatime,user_id=0,group_id=0,allow_other)
That folder contains an executable which I want to run inside the VM, but I also need to set some capabilities on it before running it. So my script looks like:
#!/bin/bash
# Some preprocessing.
sudo setcap CAP_DAC_OVERRIDE+ep /home/vm/test/my_exec
/home/vm/test/my_exec
But I am getting the error below:
Failed to set capabilities on file `/home/vm/test/my_exec' (Operation not supported)
The value of the capability argument is not permitted for a file. Or the file is not a regular (non-symlink) file
But if I copy the executable inside the VM (say to /tmp/), then it works perfectly fine. Is this a known limitation of sshfs, or am I missing something here?

File capabilities are implemented on Linux with extended attributes (specifically the security.capability attribute), and not all filesystems implement extended attributes.
sshfs in particular does not.
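For illustration, a quick local comparison (the paths here are placeholders, assuming a local filesystem such as ext4 that supports extended attributes):
# on a local filesystem, setcap stores the capability in the
# security.capability extended attribute:
sudo setcap CAP_DAC_OVERRIDE+ep /tmp/my_exec
getcap /tmp/my_exec                          # prints the granted capability
getfattr -n security.capability /tmp/my_exec # shows the raw xattr
# over the sshfs mount, the same setcap fails with "Operation not supported"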

sshfs can only perform operations which the remote user is authorized to perform. You're logged into the remote host as abc, so you can only perform actions over sshfs which abc can perform -- which doesn't include setcap, since that operation can only be performed by root. Using sudo on your local machine doesn't change that.
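If the goal is just to get the capability set on the file, one sketch (assuming abc has sudo rights on the host xyz, and using the host-side path from the mount output above) is to perform the privileged operation on the host itself rather than through the sshfs mount:
# run setcap as root on the host, not via the sshfs mount
ssh -t abc@xyz 'sudo setcap CAP_DAC_OVERRIDE+ep /home/machine/test/my_exec'
Whether the resulting file capability is then honored when the binary is executed from inside the VM over sshfs is a separate question (see the extended-attribute answer above).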

Related

setcap cap_net_admin in linux containers prevents user access to every file

I have a tcpdump application in a CentOS container. I was trying to run tcpdump as nonroot. Following this forum post: https://askubuntu.com/questions/530920/tcpdump-permissions-problem (and some other documentation that reinforced this), I tried to use setcap cap_net_admin+eip /path/to/tcpdump in the container.
After running this, I tried to run tcpdump as a different user (with permissions to tcpdump) and I got "Operation Not Permitted". I then tried to run it as root which had previously been working and also got, "Operation Not Permitted". After running getcap, I verified that the permissions were what they should be. I thought it may be my specific use case so I tried running the setcap command against several other executables. Every single executable returned "Operation Not Permitted" until I ran setcap -r /filepath.
Any ideas on how I can address this issue, or even work around it without using root to run tcpdump?
The NET_ADMIN capability is not included in containers by default because it could allow a container process to modify and escape any network isolation settings applied to the container. Therefore, explicitly setting this capability on a binary with setcap is going to fail, since root and every other user in the container is blocked from that capability. To run a container with it, you would need to add the capability with the command used to start your container, e.g.
docker run --cap-add NET_ADMIN ...
However, I believe all you need is NET_RAW (setcap cap_net_raw), which is included in the default capabilities. From man capabilities:
CAP_NET_RAW
* Use RAW and PACKET sockets;
* bind to any address for transparent proxying.
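A minimal sketch of that suggestion (the tcpdump path is an assumption; check yours with which tcpdump):
sudo setcap cap_net_raw+ep /usr/sbin/tcpdump
getcap /usr/sbin/tcpdump   # should list cap_net_raw
# then, as the unprivileged user:
tcpdump -i eth0 -c 5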

Can I create a chroot environment with /proc without administrator privileges?

I am trying to make a chroot'ed, sandboxed build environment which creates itself from a Git checkout before proceeding with building the application. One of the requirements is that the developers doing the git checkout and invoking the build should not need admin privileges on the host machine.
unshare -r chroot
works fine - except there is no /proc, which again means a lot of standard stuff won't work.
The various methods I have found to create /proc with mount require sudo rights.
Docker does this, but the developers would have to be in the "docker" group, which effectively gives them uncontrolled root access - in that case I might as well just give them sudo rights.
I have found "proot", which does some kind of emulation to achieve this; this, however, has some performance penalties.
You also need a mount namespace, which gives you the ability to perform recursive bind mounts (and plain bind mounts where there are no child mounts), pivot_root, and the ability to mount tmpfs, so use unshare -rm.
With a pid namespace you can also mount fresh instances of procfs.
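A rough sketch of that combination (the new-root path is a placeholder):
# new user, mount and pid namespaces; -f forks so the child is PID 1 in the
# new pid namespace, which allows mounting a fresh procfs instance
unshare -r -m -p -f sh -c '
  mount -t proc proc /path/to/newroot/proc &&
  exec chroot /path/to/newroot /bin/sh
'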
I ended up using bubblewrap (bwrap). For a few things that use ttys, I had to let it run with a pseudo uid of 0 to work.
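An illustrative bwrap invocation along those lines (the rootfs path is an assumption):
# unprivileged sandbox with a pseudo uid 0, a fresh /proc and a minimal /dev
bwrap --unshare-user --unshare-pid --uid 0 --gid 0 \
      --bind ./rootfs / --proc /proc --dev /dev \
      /bin/sh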
If I were doing it now, I think I would use podman.

Ansible doesn't work when the user home directory is mounted on an NFS volume

Here is the situation: I have a number of hosts that I'd like to maintain via Ansible. The baseline configuration of the hosts (logins/users/etc.) is controlled by corporate IT overlords, so I can only change things related to the application, not the general host setup. Some of the application-related tasks require running as 'root' or some other privileged user.
I do have password-less sudo access on all the hosts; however, all user home directories are located on an NFS-mounted volume. From my understanding of how Ansible works, it first logs into the target host as a regular user and places some files into the $HOME/.ansible directory, then it switches to the root user using sudo and tries to run the stuff from that directory.
But here is the problem: as I mentioned above, the home directories are on an NFS volume, so after the Ansible process on the target machine becomes root, it can no longer access the $HOME/.ansible directory due to NFS restrictions. Is there a way to tell Ansible to put these work files outside of the home directory, on some non-NFS volume?
Ansible 2.1 introduced two parameters for the ansible.cfg configuration file which allow specifying the location of the temporary directory on the target and control machines:
remote_tmp
Ansible works by transferring modules to your remote machines, running them, and then cleaning up after itself. In some cases, you may not wish to use the default location and would like to change the path. You can do so by altering this setting:
remote_tmp = ~/.ansible/tmp
local_tmp
When Ansible gets ready to send a module to a remote machine it usually has to add a few things to the module: Some boilerplate code, the module’s parameters, and a few constants from the config file. This combination of things gets stored in a temporary file until ansible exits and cleans up after itself. The default location is a subdirectory of the user’s home directory. If you’d like to change that, you can do so by altering this setting:
local_tmp = $HOME/.ansible/tmp
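Putting the two together, a sketch of an ansible.cfg that points both at non-NFS locations (the paths below are examples only; pick any local filesystem):
[defaults]
# local (non-NFS) temp dirs on the target and control machines
remote_tmp = /var/tmp/ansible-remote
local_tmp = /var/tmp/ansible-local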

You don't have permission to access / on this server ubuntu 14.04

Agenda: To have a common project folder between Linux and Windows
I have changed my document root from /var/www/html to /media/mithun/Projects/test on my Ubuntu 14.04 machine.
I get the following error:
Forbidden
You don't have permission to access / on this server.
Apache/2.4.7 (Ubuntu) Server at localhost Port 80
So I edited /etc/apache2/sites-available/000-default.conf (via sudo gedit /etc/apache2/sites-available/000-default.conf):
# DocumentRoot /var/www/html
DocumentRoot /media/mithun/Projects/test
A DocumentRoot of /var/www/test works, but not one on the Windows NTFS partition drive.
Even after referring to:
Error message "Forbidden You don't have permission to access / on this server"
Issue with my Ubuntu Apache Conf file. (Forbidden You don't have permission to access / on this server.)
No success :( So kindly assist me with it...
Note: Projects is a new volume (an internal drive; in Windows it is the E:/ drive)
#Lmwangi - Please check my updates for your reference below:
Output of : ls /etc/apparmor.d/
abstractions lightdm-guest-session usr.bin.evince usr.sbin.cupsd
cache local usr.bin.firefox usr.sbin.mysqld
disable sbin.dhclient usr.lib.telepathy usr.sbin.rsyslogd
force-complain tunables usr.sbin.cups-browsed usr.sbin.tcpdump
I tried killing apparmor:
sudo /etc/init.d/apparmor kill
I received the output: Usage: /etc/init.d/apparmor
{start|stop|restart|reload|force-reload|status|recache}
After this, I was also able to restart Apache successfully.
Maybe the problem is simple: is your new root directory accessible to the www-data user?
Try:
$ chown -R www-data:www-data /media/mithun/Projects
As you have discovered by now, you cannot just manipulate permissions on an NTFS partition (using tools like chmod).
However, you can try forcing a given owner/permissions for the entire partition when you mount it.
Now, the way to do this depends on the NTFS utilities you are actually using (which I don't know, so I'm assuming you are using ntfs-3g).
E.g. mount the partition with the following parameters (replace /dev/sdX with your actual partition and /path/where/the/drive/is/mounted with your target path):
mount -o gid=www-data /dev/sdX /path/where/the/drive/is/mounted
This should make all the files on the partition belong to the www-data group.
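For a persistent setup, a possible /etc/fstab entry along the same lines (device, mount point and masks are assumptions; dmask/fmask additionally keep directories and files group-readable):
/dev/sdX  /media/mithun/Projects  ntfs-3g  gid=www-data,dmask=027,fmask=137  0  0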
If the filesystem sets the group ownership explicitly, this still might not work.
In this case, you might need to set up a usermap that maps your Windows users/groups (as found on the partition) to your Linux users/groups.
The ntfs-3g.usermap utility will help you generate an initial usermap file, which you can then edit to your needs:
ntfs-3g.usermap /dev/sdX
Then pass the usermap to the mount options:
mount -o usermapping=/path/to/usermap.file /dev/sdX /path/where/the/drive/is/mounted
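For reference, entries in such a usermap file look roughly like this (the names and SIDs below are made-up placeholders; use the values ntfs-3g.usermap actually generates for your partition):
# format: linux-user:linux-group:windows-SID
mithun:mithun:S-1-5-21-1111111111-2222222222-3333333333-1001
:www-data:S-1-5-21-1111111111-2222222222-3333333333-513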
I suspect that you have apparmor enforcing rules that prevent Apache from reading non-whitelisted directory paths. I suggest that you:
1. Edit the apparmor config for Apache to allow access to your custom path. You'll need to hunt around /etc/apparmor.d/. You may also find running apparmor in non-enforcing (complain) mode helpful:
$ sudo aa-complain /etc/apparmor.d/*
2. Use mod_apparmor? See this
3. Or disable apparmor completely. See this
My order of preference would be 1, 3, 2. That should fix this for you :)
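For option 1, the added rules might look roughly like this (the profile file name is an assumption; check what actually exists under /etc/apparmor.d/ on your system):
# in the relevant Apache profile, e.g. /etc/apparmor.d/usr.sbin.apache2:
/media/mithun/Projects/test/ r,
/media/mithun/Projects/test/** r,
# then reload the profile:
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.apache2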
While using Ubuntu alongside Windows I faced the same issue, and it was resolved by remounting the drive with read and write access. The command below will help you do that:
sudo mount -o remount,rw /disk/location /disk/new_location
If it is still not working, then in Windows go to the power options and disable Fast Startup.
When you shut down a computer with Fast Startup enabled, Windows locks down the Windows hard disk. You won’t be able to access it from other operating systems if you have your computer configured to dual-boot. Even worse, if you boot into another OS and then access or change anything on the hard disk (or partition) that the hibernating Windows installation uses, it can cause corruption. If you’re dual booting, it’s best not to use Fast Startup or Hibernation at all.
Original article: https://www.howtogeek.com/243901/the-pros-and-cons-of-windows-10s-fast-startup-mode/

Mounting a folder from another machine in Linux

I want to mount a folder which is on some other machine onto my Linux server. To do that I am using the following command:
mount -t nfs 192.xxx.x.xx:/opt/oracle /
which fails with the following error:
mount.nfs: access denied by server while mounting 192.xxx.x.xx:/opt/oracle
Does anyone know what's going on? I am new to Linux.
Depending on what distro you're using, you simply edit the /etc/exports file on the remote machine to export the directories you want, then start your NFS daemon.
Then on the local PC, you mount it using the following command:
mount -t nfs {remote_pc_address}:/remote/dir /some/local/dir
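A hedged example of the server-side half (the client network range and export options are assumptions; adjust them to your environment):
# on the remote machine, in /etc/exports:
/opt/oracle  192.168.1.0/24(rw,sync,no_subtree_check)
# then re-read the exports table:
exportfs -ra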
Please try with your home directory; as far as I know, you can't mount anything directly onto root (/) like that.
For more reference, find full configuration steps here.
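In other words, mount onto a dedicated mount point rather than / itself; a minimal sketch (the local path is just an example):
mkdir -p /mnt/oracle
mount -t nfs 192.xxx.x.xx:/opt/oracle /mnt/oracle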
