How to work with files that belong to a subuser namespace under Linux?

I am using Docker on Ubuntu 16.04 with user ID mapping (user namespaces) enabled. I have the following settings:
/etc/passwd
myusername:x:1000:1000:,,,:/home/myusername:/bin/bash
/etc/subuid
myusername:100000:65536
/etc/subgid
myusername:100000:65536
When I start a container the files are being correctly mapped from 0 (root) to my subuid 100000.
host
-rw-r--r-- 1 100000 100000 0 Mar 30 13:05 testfile
container
-rw-r--r-- 1 root root 0 Mar 30 13:05 testfile
I can read the file on the host machine, but I cannot edit it. My assumption was that 100000 is "my" subuid, so I would be able to edit those files. How can I make those files accessible to myusername without sudo?

I'm not sure how to fix this with user namespace mapping, but you can work around it with ACLs.
If you don't mind leaking some UID information into the container, you can add an ACL to the directory for your host user. ACLs sit on top of the standard POSIX permissions.
To set a default ACL on the parent directory, which new entries inherit:
setfacl -d -m u:1000:rwx volume_dir/
To set the ACL on all existing files and directories in a directory:
setfacl -R -m u:1000:rwX volume_dir/
The capital X makes directories executable (searchable) but skips making regular files executable. Unfortunately, this kind of differentiation is not available in a default ACL.
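Putting both together for the volume from the question (a minimal sketch; volume_dir/ and UID 1000 are taken from the question above):
setfacl -d -m u:1000:rwx volume_dir/   # default entry, inherited by files created later
setfacl -R -m u:1000:rwX volume_dir/   # fix up files and directories that already exist
getfacl volume_dir/testfile            # should now list an entry like user:1000:rw-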

Related

Unable to write to a file with group permissions

We are getting the error "permission denied" when trying to write to a file that is owned by a service user and a shared group. In particular, the file is owned by www-data:www-data, and the user trying to write to it is in the group www-data.
There is no ACL on any of the parent folders, and the permissions on the file and folders are correct.
Here some details:
$ sudo -u deploy id -Gn
www-data
$ ls -lah /tmp
drwxrwxrwt 17 root root 4.0K Jul 11 11:22 .
drwxr-xr-x 23 root root 4.0K Jul 8 10:08 ..
...
-rw-rw-r-- 1 www-data www-data 0 Jul 11 10:50 test
...
$ echo 'hello world' | sudo -u deploy tee -a /tmp/test
tee: /tmp/test: Permission denied
hello world
We tried that in different folders and made sure there is no ACL on any of the folders, parents, or files...
Unfortunately, this is not described in the link stark posted in the comment, nor in any other page I found, until I came across an answer here on Stack Overflow that clarified it.
In 2018, two new filesystem settings were added to sysctl (fs.protected_regular and fs.protected_fifos) that prevent regular files and FIFOs from being opened with the O_CREAT flag (which append mode uses) in directories with the sticky bit set, unless the user is the owner of the file. This commit added the settings: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=30aba6656f61ed44cba445a3c0d38b296fa9e8f5
To change that behaviour you have to set fs.protected_regular to 0:
sudo sysctl fs.protected_regular=0
Or, to persist the change, add fs.protected_regular=0 to your sysctl.conf.
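For example (a sketch; the drop-in file name is illustrative):
echo 'fs.protected_regular = 0' | sudo tee /etc/sysctl.d/99-protected-regular.conf
sudo sysctl --system   # reload all sysctl configuration files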
Side note: since O_CREAT neither deletes nor renames the file, I wondered why it is connected to the sticky bit. It really is still possible to create a new file in a directory with the sticky bit set.
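You can see the distinction with the setup from the question (a sketch; deploy and /tmp/test come from the question above):
sudo -u deploy sh -c 'echo hi > /tmp/newfile'   # creating a brand-new file works
sudo -u deploy sh -c 'echo hi >> /tmp/test'     # O_CREAT on an existing file owned by another user is refused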

How to use rsync properly to keep all file permissions and ownership?

I am trying to use rsync to back up some data from one computer (PopOS! 21.04) to another (Rocky 8.4). But no matter which flags I use with rsync, file permissions and ownership never seem to be preserved.
What I do, is run this command locally on PopOS:
sudo rsync -avz /home/user1/test/ root@192.168.10.11:/root/ttt/
And as the result I get something like this:
[root@rocky_clone0 ~]# ls -ld ttt/
drwxrwxr-x. 2 user23 user23 32 Dec 17 2021 ttt/
[root@rocky_clone0 ~]# ls -l ttt/
total 8
-rw-rw-r--. 1 user23 user23 57 Dec 17 2021 test1
-rw-rw-r--. 1 user23 user23 29 Dec 17 2021 test2
So all the file ownership changes to user23, which is the only regular user on Rocky. I don't understand how this happens: with rsync I am connecting as root on the remote host, yet the files end up owned by user23. Why doesn't the -a flag work properly in this case?
I have also tried these flags:
sudo rsync -avz --rsync-path="sudo -u user23 rsync -a" /home/user1/test root@192.168.10.11:/home/user23/rrr
This command couldn't copy to the root directory, so I had to change the remote destination to user23's home folder. But the result is the same.
If someone could explain what I am doing wrong, and how to back up files with rsync so that permissions and ownership stay the same as on the local computer, I would very much appreciate it.
Have a look at how the target filesystem is mounted on the Rocky (target) system.
Some mounted filesystems (such as many FUSE mounts) do not support the classical Unix permissions and simply report the name of the user who mounted the filesystem as owner/group.
Any attempt to chown/chmod/etc. (either by you or by rsync) will just be silently ignored, yet appear to "succeed" (no errors reported).
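A quick way to check (a sketch; the target path comes from the question):
findmnt -T /root/ttt   # an FSTYPE such as fuse.* suggests ownership cannot be preserved there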

Create a user on the docker host from inside a container

I am trying to create a user "foo" on the docker host from within a container, but it fails.
The following files are volume-mounted read-write in the container:
/etc/group:/etc/group:rw
/etc/gshadow:/etc/gshadow:rw
/etc/passwd:/etc/passwd:rw
/etc/shadow:/etc/shadow:rw
When running the following command as root inside the container:
adduser --debug --system --shell /bin/bash --group foo
Then the output is:
Selecting UID from range 100 to 999 ...
Selecting GID from range 100 to 999 ...
Adding system user `foo' (UID 130) ...
Adding new group `foo' (GID 139) ...
/sbin/groupadd -g 139 foo
groupadd: failure while writing changes to /etc/group
adduser: `/sbin/groupadd -g 139 foo' returned error code 10. Exiting.
The permissions of these files look okay to me; they are the same both on the Docker host and inside the container.
-rw-r--r-- 1 root root 1167 Apr 14 12:51 /etc/group
-rw-r----- 1 root shadow 969 Apr 14 12:51 /etc/gshadow
-rw-r--r-- 1 root root 3072 Apr 14 12:51 /etc/passwd
-rw-r----- 1 root shadow 1609 Apr 14 12:51 /etc/shadow
I have also tried chattr -i on these files, but it still fails.
Is there some other file that I have overlooked and needs to be mounted? Is it even possible what I am trying to achieve?
When you bind-mount an individual file, you end up mounting the inode of that file. When such a file is updated, many tools write a new file with a new inode and rename it over the existing one; this avoids partial reads and other corruption risks that come with modifying the file in place.
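You can watch this happen on the host (a sketch; the shadow-utils tools typically write a new file and rename it into place, and demogroup is just an illustrative name):
ls -i /etc/group          # note the inode number
sudo groupadd demogroup
ls -i /etc/group          # the inode number has changed, so a bind mount still points at the old file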
What you are attempting is likely a very bad idea: it is the very definition of a container escape, allowing the container to set up credentials on the host. If you really need host access, I'd mount the folder at a different location, because containers have other files that are automatically mounted into /etc. So you could use /etc:/host/etc and access the files in the container under /host/etc. Just realize that this is an even larger security hole.
Note, if the entire goal is to avoid permission issues between the host and the container, there are much better ways to do this, but that would be an X-Y problem.
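For example (a sketch of the approach described above; the image name is illustrative):
docker run --rm -it -v /etc:/host/etc debian bash
# inside the container, tools that replace files via rename now operate on the
# mounted directory, so the new inode still ends up in the host's /etc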

Run each Docker container in a specific user namespace configuration

Problem:
I am trying to mount a directory as a Docker volume in such a way that a user created inside a container can write to a file in that volume, and at the same time the file is at least readable to my user lape outside the container.
Essentially, I need to remap a user UID from the container's user namespace to a specific UID in the host's user namespace.
How can I do that?
I would prefer answers that:
do not involve changing the way the Docker daemon is run;
allow configuring the container's user namespace for each container separately;
do not require rebuilding the image.
I would also accept an answer that shows a nice solution using Access Control Lists.
Setup:
This is how the situation can be replicated.
I have my Linux user lape, assigned to the docker group, so I can run Docker containers without being root.
lape@localhost ~ $ id
uid=1000(lape) gid=1000(lape) groups=1000(lape),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),121(lpadmin),131(sambashare),999(docker)
Dockerfile:
FROM alpine
RUN apk add --update su-exec && rm -rf /var/cache/apk/*
# I create a user inside the image which I want to be mapped to my `lape`
RUN adduser -D -u 800 -g 801 insider
VOLUME /data
COPY ./entrypoint.sh /entrypoint.sh
ENTRYPOINT ["sh", "/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
chmod 755 /data
chown insider:insider /data
# This will run as `insider` and touch a file in the shared volume
# (the name of the file will be the current timestamp)
su-exec insider:insider sh -c 'touch /data/$(date +%s)'
# Show permissions of created files
ls -las /data
Once the image is built with:
docker build -t nstest .
I run the container:
docker run --rm -v $(pwd)/data:/data nstest
The output looks like:
total 8
4 drwxr-xr-x 2 insider insider 4096 Aug 26 08:44 .
4 drwxr-xr-x 31 root root 4096 Aug 26 08:44 ..
0 -rw-r--r-- 1 insider insider 0 Aug 26 08:44 1503737079
So the file seems to be created as user insider.
From my host the permissions look like this:
lape@localhost ~ $ ls -las ./data
total 8
4 drwxr-xr-x 2 800 800 4096 Aug 26 09:44 .
4 drwxrwxr-x 3 lape lape 4096 Aug 26 09:43 ..
0 -rw-r--r-- 1 800 800 0 Aug 26 09:44 1503737079
Which indicates that the file belongs to uid=800 (that is, the insider user, which does not even exist outside the container's user namespace).
Things I tried already:
I tried specifying the --user parameter to docker run, but it seems it can only control which host user is mapped to uid=0 (root) inside the Docker namespace; in my case insider is not root, so it did not really help.
The only way I achieved having insider (uid=800) from within the container be seen as lape (uid=1000) from the host was by adding --userns-remap="default" to the dockerd startup script and adding dockremap:200:100000 to the files /etc/subuid and /etc/subgid, as suggested in the documentation for --userns-remap. Coincidentally this worked for me, but it is not a sufficient solution, because:
it requires reconfiguring the way the Docker daemon runs;
it requires doing some arithmetic on user IDs: 200 = 1000 - 800, where 1000 is the UID of my user on the host and 800 is the UID of the insider user;
it would not even work if the insider user needed a higher UID than my host user;
it can only configure how user namespaces are mapped globally, with no way to have a unique configuration per container;
the solution kind of works, but it is a bit too ugly for practical usage.
If you just need read access for your user, the simplest approach is to add read permissions for all files and subdirectories in /data with ACLs, outside of Docker.
Add a default ACL: setfacl -d -m u:lape:rx /data.
You will also need to give access to the directory itself: setfacl -m u:lape:rx /data.
Are there any obstacles to such a solution?
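Applied to the setup from the question, that looks something like this (a sketch; the paths and names come from the question above):
setfacl -m u:lape:rx $(pwd)/data       # access to the directory itself
setfacl -d -m u:lape:rx $(pwd)/data    # default entry, inherited by new files
docker run --rm -v $(pwd)/data:/data nstest
getfacl ./data/*                       # new files should now carry a user:lape entry, readable from the host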

How can I allow my apache user to upload to user directory when using Apache on EC2?

I have set up an Amazon EC2 instance. By default my Apache is running as user apache of the group apache.
I created a new FTP user test and uploaded my files to /home/test/public_html. I then configured the virtual host DocumentRoot to point to my project, residing in /home/test/public_html.
When the directory is owned by user test, uploading into the project is denied with a permission error. If I change the owner of the directory to apache, it works.
How can I give the apache user superuser rights, to permit it to upload without changing the directory's owner?
It's always the same problem: uploads are done by user ftp, and user apache has no access.
I solved that problem using the filesystem's extended ACLs, which make it possible to assign a 'default' user and/or group to newly created files.
What you have to do:
add 'acl' to the mount options of your desired filesystem (please check that your kernel is configured for POSIX ACLs before doing so!)
use the command 'setfacl' to set permissions (depending on your distribution, you may first need to install a package that provides 'setfacl')
Example:
First, set the owner to user ftp so uploads can be made:
# chown ftp:ftp /var/www/server/htdocs
# ls -la /var/www/server/htdocs/
total 0
drwxr-xr-x 2 ftp ftp 40 26. Nov 12:40 .
drwxrwxrwt 15 root root 360 26. Nov 12:40 ..
Next, set the defaults for user apache:
# setfacl -d -m u:apache:rwx /var/www/server/htdocs
# setfacl -d -m g:apache:rwx /var/www/server/htdocs
# getfacl /var/www/server/htdocs
# file: /var/www/server/htdocs
# owner: ftp
# group: ftp
user::rwx
group::r-x
other::r-x
default:user::rwx
default:user:apache:rwx
default:group::r-x
default:group:apache:rwx
default:mask::rwx
default:other::r-x
After putting files or directories into this directory, you will see that you have multiple access rights on them. But keep in mind that those rights are only applied to new files, not to existing ones.
# getfacl /var/www/server/htdocs/test.txt
# file: /var/www/server/htdocs/test.txt
# owner: ftp
# group: ftp
user::rw-
user:apache:rwx #effective:rw-
group::r-x #effective:r--
group:apache:rwx #effective:rw-
mask::rw-
other::r--
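The #effective comments show the mask entry capping the named entries; if you need the full rights on an existing file right away, you can raise its mask (a sketch, not part of the original answer):
setfacl -m m::rwx /var/www/server/htdocs/test.txt
getfacl /var/www/server/htdocs/test.txt   # user:apache:rwx should now be fully effective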
When using 'ls -l', you will see a '+' after the permissions, indicating that ACL entries are present:
# ls -la /var/www/server/htdocs
total 0
drwxr-xr-x+ 3 ftp ftp 80 26. Nov 12:43 .
drwxrwxrwt 15 root root 360 26. Nov 12:40 ..
drwxrwxr-x+ 2 ftp ftp 40 26. Nov 12:43 test
-rw-rw-r--+ 1 ftp ftp 0 26. Nov 12:43 test.txt
I would avoid at all costs letting the apache user have root privileges.
This would be quite a serious security issue: exactly because the server is potentially (more) vulnerable, you normally create a specific user for it (here the 'apache' user) and restrict its privileges to only those that the server really needs.
If there are problems with the apache user not having all the right permissions, you should solve them by changing the ownership of the corresponding files to the apache user,
chown apache:apache <filename>
or by making them readable/writable/executable for more users, e.g. using
chmod 777 <filename>
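For the setup in the question, a more targeted variant of the same idea would be group-based permissions rather than a world-writable 777 (a sketch; the paths come from the question):
chown -R test:apache /home/test/public_html   # files stay owned by the FTP user
chmod -R g+rwX /home/test/public_html         # the apache group may read/write files and enter directories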
