I've set up Docker to run as a non-root user. Now I can start my containers as an ordinary user and I feel more comfortable.
me@machine:~$ docker run -it -v ~/test:/test alpine:3.6 sh
/ # touch /test/test1
Meanwhile on the host:
me@machine:~$ ls -l ~/test/
total 0
-rw-r--r-- 1 root root 0 Jul 31 15:50 test1
Why do the files belong to root? How can I make them and all created files in the container belong to me?
Interesting fact: this happens on Debian Linux. In contrast, doing the same on a Mac, the created files belong to me.
Docker on macOS and Docker on Linux differ quite a bit in behavior, so set that aside and focus on the Linux side.
What you did by following https://docs.docker.com/engine/installation/linux/linux-postinstall/#manage-docker-as-a-non-root-user simply adds your non-root user to the docker group. Membership in that group lets you execute the docker command, but the Docker daemon itself still runs as root.
You can confirm that by running:
ps aux | grep dockerd
And when you do a volume mapping, the directory is created by the daemon, so it ends up owned by root. What you are looking for was introduced fairly recently as Docker user namespaces. Please read the details at the URL below:
https://success.docker.com/KBase/Introduction_to_User_Namespaces_in_Docker_Engine
It explains how to run your Docker containers with a remapped user instead of root. In short, create or update the /etc/docker/daemon.json file with the content below:
/etc/docker/daemon.json
{
"userns-remap": "<a non root user>"
}
Then restart the Docker service. Processes inside your containers will still think they are root, but on the host they run as a non-root user.
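A minimal sketch of how to apply and verify that, assuming a systemd-based host (the subordinate UID you will actually see depends on your /etc/subuid configuration):
# restart the daemon so it picks up the new daemon.json (systemd hosts)
sudo systemctl restart docker
# processes inside a container now show up on the host under the remapped
# (subordinate) UID instead of root
docker run -d --rm alpine:3.6 sleep 30
ps -o user:12,pid,cmd -C sleep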
Related
I have a Docker container running which, after startup, launches a few daemon processes under a normal (non-root) user ID. One of those processes has to create some files and directories under /dev inside the container by calling a Python function that executes os.system('mkdir -p /dev/some_dir'). However, when run, these calls fail and the directory is not created. Yet I can run those commands from the container's bash prompt, where my id is uid=0(root) gid=0(root) groups=0(root).
Even prefixing the command with sudo, as in os.system('sudo mkdir -p /dev/some_dir'), does not work.
Is there any way I can make this work? I cannot run the process as root due to security implications, but I still need to create this directory.
Thanks for your pointers.
You need to give your non-root user write permission on the /dev directory (or on the specific subdirectory it needs) inside the container.
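Since /dev inside a container is typically a tmpfs set up at container start, that preparation has to happen at runtime rather than at image build time. One way is an entrypoint that runs as root, prepares the directory, and then drops privileges; the user name appuser and the script path below are hypothetical placeholders, not from the question:
#!/bin/sh
# entrypoint.sh (runs as root before privileges are dropped)
mkdir -p /dev/some_dir                  # create the directory while still root
chown appuser:appuser /dev/some_dir     # hand it to the non-root user so it can write there
exec su -s /bin/sh -c "python /path/to/your_daemon.py" appuser   # drop to the non-root user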
I was recently told that running docker or docker-compose with sudo is a big no-no, and that I had to create/add my user to the docker group in order to run docker and docker-compose commands without sudo. Which I did, as per the documentation here.
Now, docker runs normally as my user, e.g.:
~$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:df5f5184104426b65967e016ff2ac0bfcd44ad7899ca3bbcf8e44e4461491a9e
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
But when I try to run docker-compose, I get a Permission Denied
~$ docker-compose --help
-bash: /usr/local/bin/docker-compose: Permission denied
Could you please explain how this works? I thought the docker group enabled the use of these commands because the binaries belong to this group, but actually they don't; they belong only to root...
~$ ls -al /usr/bin/docker*
-rwxr-xr-x 1 root root 71706288 Jul 23 19:36 /usr/bin/docker
-rwxr-xr-x 1 root root 804408 Jul 23 19:36 /usr/bin/docker-init
-rwxr-xr-x 1 root root 2944247 Jul 23 19:36 /usr/bin/docker-proxy
-rwxr-xr-x 1 root root 116375640 Jul 23 19:36 /usr/bin/dockerd
~$ ls -al /usr/local/bin/
total 12448
drwxr-xr-x 2 root root 4096 May 26 11:08 .
drwxr-xr-x 10 root root 4096 May 14 19:36 ..
-rwxr--r-- 1 root root 12737304 May 26 11:08 docker-compose
So, how does this work?
And how do I enable docker-compose to run for users that belong to the docker group?
sudo chmod a+x /usr/local/bin/docker-compose
will make the binary executable for every user, not just root, which clears the Permission denied error.
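To double-check that the fix took effect, you can re-list the file and ask the tool for its version:
ls -l /usr/local/bin/docker-compose    # should now show -rwxr-xr-x
docker-compose --version               # should print a version instead of Permission denied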
docker-compose is just a wrapper and relies on an external Docker daemon, in the same way that the docker command doesn't actually run anything itself but sends orders to a Docker daemon.
You can change which Docker daemon you communicate with using the DOCKER_HOST variable. By default it is empty; when it is empty, both docker and docker-compose assume the daemon's socket is located at /var/run/docker.sock.
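For example (the TCP address below is purely illustrative; the unix:// form is the implicit default):
# point the client at a remote daemon for a single command
DOCKER_HOST=tcp://192.168.1.10:2375 docker info
# or export it for the whole session; docker-compose honours it too
export DOCKER_HOST=unix:///var/run/docker.sock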
According to the dockerd documentation:
By default, a unix domain socket (or IPC socket) is created at /var/run/docker.sock, requiring either root permission, or docker group membership.
And this is enforced by giving the docker group read and write access to the socket:
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 nov. 15 19:54 /var/run/docker.sock
As described in https://docs.docker.com/engine/install/linux-postinstall/, to add a user to the docker group you can do it like this:
sudo usermod -aG docker $USER # this adds the permissions
newgrp docker # this refreshes the permissions in the current session
That being said, using docker with sudo is effectively the same as using it through the docker group, because giving access to /var/run/docker.sock is equivalent to giving full root access:
From https://docs.docker.com/engine/install/linux-postinstall/
The docker group grants privileges equivalent to the root user. For details on how this impacts security in your system, see Docker Daemon Attack Surface.
If root permission is a security issue for your system, another page is mentioned:
To run Docker without root privileges, see Run the Docker daemon as a non-root user (Rootless mode).
docker is composed of multiple elements: https://docs.docker.com/get-started/overview/
First, there are clients:
$ type docker
docker is /usr/bin/docker
$ dpkg -S /usr/bin/docker
docker-ce-cli: /usr/bin/docker
You can see that the docker command is installed when you install the docker-ce-cli package.
Here, ce stands for community edition.
The docker cli communicates with the docker daemon, also known as dockerd.
dockerd is a daemon (a server) and by default exposes the Unix socket /var/run/docker.sock, whose default ownership is root:docker.
There are other components involved; for instance dockerd uses containerd: https://containerd.io/
The rest is basic Linux permission management:
operating the docker daemon is the same as having root permission on that machine.
to operate the docker daemon, you need to be able to read from and write to the socket it listens on; in your case that is /var/run/docker.sock. Whether or not you are a sudoer changes nothing about that.
to be able to read and write to /var/run/docker.sock, you must either be root or be in the docker group (see the quick check below).
docker-compose is another CLI; it has the same requirements as docker.
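A quick check of those two conditions on any machine:
ls -l /var/run/docker.sock   # expect srw-rw---- root docker
id -nG                       # your user should list "docker" among its groups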
What worked for me was adding myself to the 'docker' group
by running (as root, via sudo):
# usermod -a -G docker myUserName
You may need to re-login, since current shells may not yet "know" about being added to the docker group.
If you don't want to re-login, you can run the following command instead:
newgrp docker
https://docs.docker.com/engine/install/linux-postinstall/
I am very new to Unix/Docker.
I have the following two outputs on the console:
admin@ansible:~/nachiket/workspace/docker-nachi-sample-app$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
admin@ansible:~/nachiket/workspace/docker-nachi-sample-app$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nachiketjoshi/python-log-generator latest ca675b7439ab About an hour ago 908MB
python 2.7 4ee4ea2f0113 3 weeks ago 908MB
Can someone explain how the Unix user level affects my visibility of the docker images?
It is because
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
So, after installing Docker, to give a user other than root the same access level you need to run:
sudo groupadd docker
sudo usermod -aG docker $USER
Then verify that it worked by running:
docker run hello-world
If everything goes right, try executing docker images and see if you have the same access level. I've tested this on CentOS and it worked.
Reference: https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user
I have a script that communicates over a serial port (/dev/ttyUSB0). I want to run it from within a Docker container. However, I don't seem to have the permissions to do it from within the container. I follow these steps:
On my host, if I run ls -l /dev/ttyUSB0 I get:
crw-rw---- 1 root dialout 188, 0 jul 2 14:34 /dev/ttyUSB0
Good, it means that in order to read/write to it, I need to be either root, or part of the dialout group.
I become a member of this group on my host:
$ sudo usermod -aG dialout $(whoami)
Then I log out and log in again to make this effective.
After that, I verify that I can communicate perfectly with /dev/ttyUSB0 from my host. However if I run the docker image:
docker run --user=1000:1000 --rm=true --tty=true --privileged=true --device=/dev/ttyUSB0 --volume=<my_dir>:<my_dir> --workdir=<my_dir> <my_docker_image> <my_script>
Then it complains:
can't open device "/dev/ttyUSB0": Permission denied
However if I use: --user=1000:20, then it works fine. The group 20 is the dialout group.
Now my question:
Why does Docker not understand that my user (1000) and group (1000) are part of the dialout group?
This was working when I used the old docker (apt-get install docker-io, docker-engine), but after updating to the new Docker CE this stopped working.
Setup:
Ubuntu 16.04.2 LTS Kernel 4.4.0-83-generic.
Docker version: Docker version 17.06.0-ce, build 02c1d87.
Thanks!
As stated in a comment, the solution was to pass --group-add=dialout to the docker run call. However, be aware that when using Docker images that provide a way to specify the user and group via environment variables (usually -e PUID=<UID> -e PGID=<GID>), that mechanism overrides this setting.
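Applied to the run command from the question, that would look like this (only --group-add=dialout is new; if the dialout group does not exist inside the image, pass the numeric GID, 20 here, instead):
docker run --user=1000:1000 --group-add=dialout --rm=true --tty=true --privileged=true \
    --device=/dev/ttyUSB0 --volume=<my_dir>:<my_dir> --workdir=<my_dir> \
    <my_docker_image> <my_script>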
I use the following command to run a docker container and map a directory from the host (/root/database) to the container (/tmp/install/database):
# docker run -it --name oracle_install -v /root/database:/tmp/install/database bofm/oracle12c:preinstall bash
But inside the container, I find I can't use ls to list the contents of /tmp/install/database/, although I am root and have all privileges:
[root@77eb235aceac /]# cd /tmp/install/database/
[root@77eb235aceac database]# ls
ls: cannot open directory .: Permission denied
[root@77eb235aceac database]# id
uid=0(root) gid=0(root) groups=0(root)
[root@77eb235aceac database]# cd ..
[root@77eb235aceac install]# ls -alt
......
drwxr-xr-x. 7 root root 4096 Jul 7 2014 database
I check /root/database on the host, and everything seems OK:
[root@localhost ~]# ls -lt
......
drwxr-xr-x. 7 root root 4096 Jul 7 2014 database
Why does docker container prompt "Permission denied"?
Update:
The root cause turned out to be related to SELinux. Actually, I ran into a similar issue last year.
A permission denied within a container for a shared directory could be due to the fact that this shared directory is stored on a device. By default, containers cannot access any devices. Adding the --privileged option to docker run allows the container to access all devices and perform kernel calls. This is not considered secure.
A cleaner way to share a device is to use the option docker run --device=/dev/sdb (if /dev/sdb is the device you want to share).
From the man page:
--device=[]
Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)
--privileged=true|false
Give extended privileges to this container. The default is false.
By default, Docker containers are “unprivileged” (=false) and cannot, for example, run a Docker daemon inside the Docker container. This is because by default a container is not allowed to access any devices. A “privileged” container is given access to all devices.
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor to allow the container nearly all the same access to the host as processes running outside of a container on the host.
I had a similar issue when sharing an nfs mount point as a volume using docker-compose. I was able to resolve the issue with:
docker-compose up --force-recreate
Even though you found the issue, this may help someone else.
Another reason is a mismatch of UID/GID. This often shows up as being able to modify a mount as root but not as the container's user.
You can set the UID, so for an Ubuntu container running as the ubuntu user you may need to append :uid=1000 (check with id -u), or set the UID locally, depending on your use case.
uid=value and gid=value
Set the owner and group of the files in the filesystem (default: uid=gid=0)
There is a good blog post about it here, with this tmpfs example:
docker run \
--rm \
--read-only \
--tmpfs=/var/run/prosody:uid=100 \
-it learning/tmpfs
http://www.dendeer.com/post/docker-tmpfs/
I got the answer from a comment under: Why does docker container prompt Permission denied?
man docker-run gives the proper answer:
Labeling systems like SELinux require that proper labels are placed on volume content mounted into a container. Without a label, the security system might prevent the processes running inside the container from using the content. By default, Docker does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option tells Docker that two containers share the volume content. As a result, Docker labels the content with a shared content label. Shared volume labels allow all containers to read/write content. The Z option tells Docker to label the content with a private unshared label. Only the current container can use a private volume.
For example:
docker run -it --name oracle_install -v /root/database:/tmp/install/database:z ...
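Put together with the full command from the question, that becomes:
docker run -it --name oracle_install -v /root/database:/tmp/install/database:z bofm/oracle12c:preinstall bash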
I was trying to run a C program via Python's os.system inside the container and was getting the same error. My fix was to add this line while creating the image: RUN chmod -R 777 app. It worked for me.
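If rebuilding the image is not an option, a similar (equally blunt) permission change can be applied to a running container; the container name and path here are hypothetical:
docker exec -u 0 my_container chmod -R 777 /app   # run the chmod as root inside the container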