Vulnerability when running Docker as non-root? - security

I'm kind of fighting with privileges (no troll) for my Docker project: I'm trying to let one of my users inside Docker read/write a volume shared with the host, while the host user should also be able to read/write alongside the Docker user in that directory.
In my case, neither the Docker user nor the host user should be root. This means that, on the shared volume, the user running Docker shouldn't be able to touch files in the volume that don't belong to him. However, I discovered that mounting a volume as a user without root privileges does not protect root's files.
Example
For instance, in the following situation
A directory with two files, one owned by root and one not. The user's name is user; he has no root privileges, but he is part of the docker group:
C:/.../directory :
-rwxr-x--x root file1
-rwxr-x--x user file2
The user runs Docker with the command:
docker run -v /c/.../directory:/volume:rw -e USER_ID=$(id -u) -e GROUP_ID=$(id -g)
And the container's entrypoint is the following script.sh:
#!/bin/bash
usermod -u "${USER_ID}" dockeruser
groupmod -g "${GROUP_ID}" dockeruser
chown -R dockeruser:dockeruser /volume
exit
The permissions end up changed on the host's directory, even for root's file, which I shouldn't have been able to write to:
C:/.../directory :
-rwxr-x--x user file1
-rwxr-x--x user file2
Is it normal that a user who isn't root can do anything with files that don't belong to him?
I'm pretty much a beginner, so I don't know whether this is a misleading vulnerability, in the sense that forcing the user not to be root or sudo doesn't actually change anything, or whether I'm just getting it wrong ^^. Feel free to tell me if this isn't the way I should handle it.
Regards,
Waldo

Related

Linux: How can I SSH connect using Apache user?

As a web developer I always have this problem when updating PHP (and other) files from an SSH client, because I am logged in as my own user or simply as root.
After that update I always have to run 'chown -R apache:apache *' manually from a terminal to make the files accessible.
I tried to create a user ID, add it to the group 'apache', and add the apache user to my user's group. That works only for existing files on the server file system, because newly created files get permissions rwxr--r--, which does not allow writing by my user even though it is in the 'apache' group.
I'd like to create a login (no shell needed) for the Apache user, so I can use an SSH-based file browser like Forklift to log in as Apache, or use sshfs to mount as the Apache user.
Another way would be a umask so that files newly created by my user ID from the sshfs mount or a file browser (mounted with my user ID, not root) get permission rwxrwxr-- (i.e. 0775) by default.
Is there a way I can upload files to the server (updating existing ones or creating new ones) without having to worry about Apache's permissions?
You have to set the setgid bit.
For example, do the following steps:
adduser hugo
addgroup apache
usermod -a -G apache hugo
mkdir /tmp/example
chown hugo:apache /tmp/example
chmod g+s /tmp/example
su hugo
cd /tmp/example
touch my_file
ls -l
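If the setgid bit took effect, the new file inherits the directory's group, so the ls -l output should look roughly like this (size and date are just placeholders):
-rw-r--r-- 1 hugo apache 0 Oct 11 12:00 my_file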

Normal user touching a file in /var/run failed

I have a program called HelloWorld that belongs to the user test.
HelloWorld creates a file HelloWorld.pid in /var/run to keep a single instance.
I am using the following command to try to give test access to /var/run:
usermod -a -G root test
However, when I run it, it failed.
Could someone help me?
What are the permissions on /var/run? On my system, /var/run is rwxr-xr-x, which means only the user root can write to it. The permissions do not allow write access by members of the root group.
The normal way of handling this is by creating a subdirectory of /var/run that is owned by the user under which you'll be running your service. E.g.,
sudo mkdir /var/run/helloworld
sudo chown myusername /var/run/helloworld
Note that /var/run is often an ephemeral filesystem that disappears when your system reboots. If you would like your target directory to be created automatically when the system boots you can do that using the systemd tmpfiles service.
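A minimal sketch of such a tmpfiles.d entry (the path and user name are placeholders carried over from the example above; on modern systems /run and /var/run are the same directory):
# /etc/tmpfiles.d/helloworld.conf
# create /run/helloworld at boot, owned by myusername
d /run/helloworld 0755 myusername myusername -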
Some Linux systems store per-user runtime files in /var/run/user/UID/.
In this case you can create your pid file in /var/run/user/$(id -u test)/HelloWorld.pid.
Alternatively just use /tmp.
You may want to use the user's name as a prefix to the pid filename to avoid collision with other users, for instance /tmp/test-HelloWorld.pid.
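For instance, if HelloWorld were a shell script, the single-instance pid-file logic could look roughly like this (paths as discussed above; adapt to whatever language your program uses):
#!/bin/sh
PIDFILE="/tmp/test-HelloWorld.pid"   # or /var/run/user/$(id -u)/HelloWorld.pid
# refuse to start if another instance still holds the pid file
if [ -e "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "HelloWorld is already running" >&2
    exit 1
fi
echo $$ > "$PIDFILE"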

docker.sock permission denied

When I try to run simple docker commands like:
$ docker ps -a
I get an error message:
Got permission denied ... /var/run/docker.sock: connect: permission denied
When I check permissions with
$ ls -al /var/run/
I see this line:
srw-rw---- root docker docker.sock
So, I follow the advice from many forums and add my local user to the docker group:
$ sudo usermod -aG docker $USER
But it does not help. I still get the very same error message. How can I fix it?
For those new to the shell, the command:
$ sudo usermod -aG docker $USER
needs to have $USER defined in your shell. This is often there by default, but you may need to set the value to your login id in some shells.
Changing the groups of a user does not change existing logins, terminals, and shells that a user has open. To avoid performing a login again, you can simply run:
$ newgrp docker
to get access to that group in your current shell.
Once you have done this, the user effectively has root access on the server, so only do this for users that are trusted with unrestricted sudo access.
Reason: The error message means that the current user can't access the Docker engine, because the user doesn't have enough permissions to access the UNIX socket used to communicate with the engine.
Quick Fix:
Run the command as root using sudo.
sudo docker ps
Change the permissions of /var/run/docker.sock for the current user.
sudo chown $USER /var/run/docker.sock
Caution: Running sudo chmod 777 /var/run/docker.sock will solve your problem, but it will open the Docker socket for everyone, which is a security vulnerability, as pointed out by @AaylaSecura. Hence it shouldn't be used, except for testing purposes on a local system.
Permanent Solution:
Add the current user to the docker group.
sudo usermod -a -G docker $USER
Note: You have to log out and log in again for the changes to take effect.
Refer to this blog to know more about managing Docker as a non-root user.
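After logging back in, a quick sanity check (nothing more than that) could be:
$ groups        # 'docker' should now appear in the list
$ docker ps     # should work without sudo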
Make sure your $USER variable is set
$ echo $USER
$ sudo usermod -aG docker $USER
logout
Upon login, restart the docker service
$ sudo systemctl restart docker
$ docker ps
Enter the command below and you can use docker without sudo:
sudo chmod 666 /var/run/docker.sock
As mentioned earlier in the comments, the changes won't apply until you re-login. If you had connected over SSH and opened a new terminal, it would have worked in the new terminal.
But since you were using the GUI and opening the new terminal there, the changes were not applied. That is the reason the error didn't go away.
So the command below did do its job; only the re-login was missed:
sudo usermod -aG docker $USER
You need to manage docker as a non-root user.
To create the docker group and add your user:
Create the docker group.
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
If testing on a virtual machine, it may be necessary to restart the virtual machine for changes to take effect.
On a desktop Linux environment such as X Windows, log out of your session completely and then log back in.
On Linux, you can also run the following command to activate the changes to groups:
$ newgrp docker
Verify that you can run docker commands without sudo.
$ docker run hello-world
As my user is an AD user, I have to add the AD user to the local group by manually editing the /etc/group file. Unfortunately the adduser commands do not seem to be nsswitch-aware and do not recognize a user that isn't locally defined when adding someone to a group.
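For illustration only (the GID and the account name here are made up, and how the AD account name appears depends on your sssd/winbind setup), the edited line in /etc/group would look something like:
docker:x:998:ad_username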
Then reboot or refresh /etc/group. Now, you can use docker without sudo.
Regards.
***Important note on these answers: the docker group is not always "docker"; sometimes it is "dockerroot", for example in the case of a CentOS 7 installation via
sudo yum install -y docker
The first thing you should do, after installing Docker, is
sudo tail /etc/group
it should say something like
......
sshd:x:74:
postdrop:x:90:
postfix:x:89:
yourusername:x:1000:yourusername
cgred:x:996:
dockerroot:x:995:
In this case, it is "dockerroot" not "docker". So,
sudo usermod -aG dockerroot yourusername
logout
When I try to run simple docker commands like: $ docker ps -a
I get an error message: Got permission denied ... /var/run/docker.sock: connect: permission denied.
[…] How can I fix it?
TL;DR: There are two ways (the first one, also mentioned in the question itself, was extensively addressed by other answers, but comes with security concerns; so I'll elaborate on this issue, and develop the second solution, which can also be applicable for this fairly sensitive use case).
Just to recall the context, the Docker daemon socket is owned by root:docker:
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 janv. 28 14:23 /var/run/docker.sock
so with this default setup, one needs to prefix all docker CLI commands with sudo.
To avoid this, one can either:
add one's user account ($USER) to the docker group − but it is quite risky to do this on one's personal workstation, as this amounts to providing all programs run by the user with root permissions, without any sudo password prompt or auditing (a concrete illustration is sketched after this answer).
See also:
this page in the official Docker documentation:
https://docs.docker.com/engine/security/#docker-daemon-attack-surface
this page that documents the related exploit:
https://fosterelli.co/privilege-escalation-via-docker.html
otherwise, prepend sudo automatically without typing sudo docker manually: to this end, one solution consists of adding the following alias to ~/.bashrc (see e.g. this thread for details):
__docker() {
    if [[ "${BASH_SOURCE[*]}" =~ "bash-completion" ]]; then
        docker "$@"
    else
        sudo docker "$@"
    fi
}
alias docker=__docker
Then one can test this by opening a new terminal and typing:
docker run --pul〈TAB〉 # → docker run --pull
# autocompletion works
docker run --pull always --rm -it debian:11 # asks for one's password
\docker run --help # bypasses the alias (thanks to the \) and asks for no password
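To make the risk mentioned in the first bullet concrete, here is the commonly cited illustration (a sketch only; the image name is arbitrary): any account in the docker group can bind-mount the host's root filesystem and chroot into it, without sudo and without a password:
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
# the resulting shell is root on the host's filesystem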
With the help of the command below I was able to execute docker commands without sudo:
sudo setfacl -m user:$USER:rw /var/run/docker.sock
bash into container as root user
docker exec -it --user root <dc5> bash
create docker group if it's not already created
groupadd -g 999 docker
add user to docker group
usermod -aG docker jenkins
change permissions
chmod 777 /var/run/docker.sock
You have to use the pns executor instead of docker.
Run the following patch, which modifies the configmap, and you are all set:
kubectl -n argo patch cm workflow-controller-configmap -p '{"data": {"containerRuntimeExecutor": "pns"}}' ;
ref: https://www.youtube.com/watch?v=XySJb-WmL3Q&list=PLGHfqDpnXFXLHfeapfvtt9URtUF1geuBo&index=2&t=3996s
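If it helps, one way to confirm the configmap change took effect (the namespace and key are the ones used in the patch above):
kubectl -n argo get cm workflow-controller-configmap -o jsonpath='{.data.containerRuntimeExecutor}'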

Can't access mounted host directory in Docker container

I'm trying to get my Docker container to read and write to a host directory.
I run the container with:
docker run -it -v $(pwd):/file logstash-5.1.2
Inside the container, I can see that /file has the uid of my (non-root) user on the host, and the same permissions as that on the host:
drwxrwxrwx. 2 1156 1156 4096 Jul 21 05:00 file
and that root can't access /file.
root@c642b0c37e09:~# ls /file
ls: cannot open directory /file: Permission denied
I've read posts about creating a user in the container with the same uid as the host, but that seems to be frowned upon.
Why can't root access the directory? I thought it could do everything.
What's the best way to have the container read and write to the mounted directory, which is not owned by root, in Docker?
We're also using Rancher. Does that make it easier? I haven't yet come across something different there, mainly as I'm trying to see if I can do it purely within Docker.
You should change the SELinux context to svirt_sandbox_file_t to let the container access this folder.
If you are sure about the folder permissions, then just try:
chcon -R -t svirt_sandbox_file_t /your/host/path
If you are not sure, try:
chown -R userId:groupId /your/host/path
chcon -R -t svirt_sandbox_file_t /your/host/path
Here the chcon command applies the SELinux context, changing the context of "/your/host/path" to svirt_sandbox_file_t.
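To verify the relabeling took effect, you can inspect the context before and after (the output below is only indicative; the user and role fields vary by system):
$ ls -dZ /your/host/path
unconfined_u:object_r:svirt_sandbox_file_t:s0 /your/host/path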

Docker with '--user' can not write to volume with different ownership

I've played a lot with all kinds of rights combinations to make Docker work, but... First, my environment:
Ubuntu Linux 15.04 and Docker version 1.5.0, build a8a31ef.
I have a directory '/test/dockervolume' and two users, user1 and user2, in the group users:
chown user1.users /test/dockervolume
chmod 775 /test/dockervolume
ls -la
drwxrwxr-x 2 user1 users 4096 Oct 11 11:57 dockervolume
Both user1 and user2 can write and delete files in this directory.
I use the standard docker ubuntu:15.04 image. user1 has uid 1000 and user2 has uid 1002.
I run docker with the following command:
docker run -it --volume=/test/dockervolume:/tmp/job_output --user=1000 --workdir=/tmp/job_output ubuntu:15.04
Inside the container I just do a simple 'touch test' and it works for user1 with uid 1000. When I run docker with --user 1002 I can't write to that directory:
I have no name!@6c5e03f4b3a3:/tmp/job_output$ touch test2
touch: cannot touch 'test2': Permission denied
I have no name!@6c5e03f4b3a3:/tmp/job_output$
Just to be clear, both users can write to that directory when not in Docker.
So my question is: is this behavior by Docker design, is it a bug, or did I miss something in the manual?
Docker's --user parameter changes just the user id, not the group id, inside the container. So, within the container I have:
id
uid=1002 gid=0(root) groups=0(root)
and it is not like on the original system, where I have groups=1000(users).
So, one workaround might be mapping the passwd and group files into the container:
-v /etc/docker/passwd:/etc/passwd:ro -v /etc/docker/group:/etc/group:ro
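Putting that together with the original command, the run could look roughly like this (a sketch: the /etc/docker/passwd and /etc/docker/group copies come from the snippet above, and the explicit uid:gid is passed so the group permission on the volume applies):
docker run -it --volume=/test/dockervolume:/tmp/job_output \
    -v /etc/docker/passwd:/etc/passwd:ro -v /etc/docker/group:/etc/group:ro \
    --user=$(id -u user2):$(id -g user2) \
    --workdir=/tmp/job_output ubuntu:15.04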
The other idea is to mount a temporary directory owned by the user passed with --user, and, when the container's work is complete, copy the files to their final location:
TMPDIR=$(mktemp -d); docker run -v "$TMPDIR":/working_dir/ --user=$(id -u) …; cp -r "$TMPDIR"/. "$NEWDIR"
This discussion, Understanding user file ownership in docker: how to avoid changing permissions of linked volumes, sheds some light on my question.
For both correct uid and gid mapping try: docker run --user=$(id -u):$(id -g)
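Applied to the setup in the question, that would look something like the following (image and paths taken from the question; touch is only there to show that the write now succeeds for whichever user runs the command):
docker run -it --volume=/test/dockervolume:/tmp/job_output \
    --user=$(id -u):$(id -g) --workdir=/tmp/job_output \
    ubuntu:15.04 touch test2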
Avoid using another user, because the UID will be different and you can't be sure about the user name. You can use root inside the container without problems.
