I use the following command to run a Docker container and map a directory from the host (/root/database) to the container (/tmp/install/database):
# docker run -it --name oracle_install -v /root/database:/tmp/install/database bofm/oracle12c:preinstall bash
But in the container, I find I can't use ls to list the contents of /tmp/install/database/, even though I am root and have all privileges:
[root@77eb235aceac /]# cd /tmp/install/database/
[root@77eb235aceac database]# ls
ls: cannot open directory .: Permission denied
[root@77eb235aceac database]# id
uid=0(root) gid=0(root) groups=0(root)
[root@77eb235aceac database]# cd ..
[root@77eb235aceac install]# ls -alt
......
drwxr-xr-x. 7 root root 4096 Jul 7 2014 database
I checked /root/database on the host, and everything seems OK:
[root@localhost ~]# ls -lt
......
drwxr-xr-x. 7 root root 4096 Jul 7 2014 database
Why does docker container prompt "Permission denied"?
Update:
The root cause is related to SELinux. Actually, I ran into a similar issue last year.
A "permission denied" within a container for a shared directory could be due to the fact that this shared directory is stored on a device. By default, containers cannot access any devices. Adding the --privileged option to docker run allows the container to access all devices and perform kernel calls. This is not considered secure.
A cleaner way to share a device is to use the option docker run --device=/dev/sdb (if /dev/sdb is the device you want to share).
From the man page:
--device=[]
Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)
--privileged=true|false
Give extended privileges to this container. The default is false.
By default, Docker containers are “unprivileged” (=false) and cannot, for example, run a Docker daemon inside the Docker container. This is because by default a container is not allowed to access any devices. A “privileged” container is given access to all devices.
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor to allow the container nearly all the same access to the host as processes running outside of a container on the host.
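For example, minimal sketches of both approaches (my-image is a placeholder image name; /dev/sdb follows the example above):
docker run -it --device=/dev/sdb:/dev/sdb my-image bash    # expose only /dev/sdb to the container
docker run -it --privileged my-image bash                  # expose all host devices (less secure)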
I had a similar issue when sharing an NFS mount point as a volume using docker-compose. I was able to resolve the issue with:
docker-compose up --force-recreate
Even though you found the issue, this may help someone else.
Another reason is a mismatch with the UID/GID. This often shows up as being able to modify a mount as root but not as the container's user.
You can set the UID, so for an Ubuntu container running as ubuntu you may need to append :uid=1000 (check with id -u), or set the UID locally, depending on your use case.
uid=value and gid=value
Set the owner and group of the files in the filesystem (default: uid=gid=0)
There is a good blog post about it here, with this tmpfs example:
docker run \
--rm \
--read-only \
--tmpfs=/var/run/prosody:uid=100 \
-it learning/tmpfs
http://www.dendeer.com/post/docker-tmpfs/
I got the answer from a comment under: Why does docker container prompt Permission denied?
man docker-run gives the proper answer:
Labeling systems like SELinux require that proper labels are placed on volume content mounted into a container. Without a label, the security system might prevent the processes running inside the container from using the content. By default, Docker does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option tells Docker that two containers share the volume content. As a result, Docker labels the content with a shared content label. Shared volume labels allow all containers to read/write content. The Z option tells Docker to label the content with a private unshared label. Only the current container can use a private volume.
For example:
docker run -it --name oracle_install -v /root/database:/tmp/install/database:z ...
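Sketches of the two variants, reusing the image from the question (the remaining options are elided, as above):
docker run -it -v /root/database:/tmp/install/database:z bofm/oracle12c:preinstall bash    # shared label
docker run -it -v /root/database:/tmp/install/database:Z bofm/oracle12c:preinstall bash    # private label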
I was trying to run a C file using Python's os.system in the container, but I was getting the same error. My fix was to add this line while creating the image: RUN chmod -R 777 app. It worked for me.
Related
I have a container that's based on the matspfeiffer/flutter image. I'm trying to forward some of the devices present on my host to the container so that eventually I can run an Android emulator from inside it.
I'm providing the following options to the docker run command:
--device /dev/kvm
--device /dev/dri:/dev/dri
-v /tmp/.X11-unix:/tmp/.X11-unix
-e DISPLAY
This renders the /dev/kvm device accessible from within the container.
However, the permissions for the /dev/kvm device on my host are the following:
crw-rw----+ 1 root kvm 10, 232 oct. 5 19:12 /dev/kvm
So from within the container I'm unable to interact with the device properly because of insufficient permissions.
My best shot at fixing the issue so far has been to alter the permissions of the device on my host machine like so:
sudo chmod 777 /dev/kvm
It fixes the issue, but it goes without saying that this is not an appropriate solution.
I was wondering if there was a way to grant the container permission to interact with that specific device without altering the permissions on my host.
I am open to giving my container --privileged access to my host.
I also wish to be able to create files from within the container without the permissions being messed up (I was once root inside a Docker container, which made every file I created in a shared volume from within the container inaccessible from my host).
For reference, I'm using VS Code remote containers to build and run the container so the complete docker run command as provided by VS Code is the following
docker run --sig-proxy=false -a STDOUT -a STDERR --mount type=bind,source=/home/diego/Code/Epitech/B5/redditech,target=/workspaces/redditech --mount type=volume,src=vscode,dst=/vscode -l vsch.local.folder=/home/diego/Code/Epitech/B5/redditech -l vsch.quality=stable -l vsch.remote.devPort=0 --device /dev/kvm --device /dev/dri:/dev/dri -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY --fifheri --entrypoint /bin/sh vsc-redditech-850ec704cd6ff6a7a247e31da931a3fb-uid -c echo Container started
I was recently told that running docker or docker-compose with sudo is a big no-no, and that I had to create/add my user to the docker group in order to run docker and docker-compose commands without sudo. Which I did, as per the documentation here.
Now, docker runs normally via my user, e.g.:
~$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:df5f5184104426b65967e016ff2ac0bfcd44ad7899ca3bbcf8e44e4461491a9e
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
But when I try to run docker-compose, I get a Permission Denied
~$ docker-compose --help
-bash: /usr/local/bin/docker-compose: Permission denied
Could you please explain how this works? I thought having a docker group enabled the usage of these commands because the binaries belong to this group, but actually they don't; they only belong to root...
~$ ls -al /usr/bin/docker*
-rwxr-xr-x 1 root root 71706288 Jul 23 19:36 /usr/bin/docker
-rwxr-xr-x 1 root root 804408 Jul 23 19:36 /usr/bin/docker-init
-rwxr-xr-x 1 root root 2944247 Jul 23 19:36 /usr/bin/docker-proxy
-rwxr-xr-x 1 root root 116375640 Jul 23 19:36 /usr/bin/dockerd
~$ ls -al /usr/local/bin/
total 12448
drwxr-xr-x 2 root root 4096 May 26 11:08 .
drwxr-xr-x 10 root root 4096 May 14 19:36 ..
-rwxr--r-- 1 root root 12737304 May 26 11:08 docker-compose
So, how does this work?
And how do I enable docker-compose to run for users that belong to the docker group?
sudo chmod a+x /usr/local/bin/docker-compose
This turns on execute permission for all users (as shown above, the docker-compose binary is only executable by root).
docker-compose is just a wrapper, and it uses an external docker daemon, in the same way that the docker command doesn't actually run anything itself but sends orders to a docker daemon.
You can change the docker daemon you communicate with using the DOCKER_HOST variable. By default, it is empty; and when it is empty, both docker and docker-compose assume the daemon is located at /var/run/docker.sock.
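For example, a minimal sketch of pointing both clients at a daemon explicitly (the value shown is simply the default socket):
export DOCKER_HOST=unix:///var/run/docker.sock
docker ps
docker-compose ps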
According to the dockerd documentation:
By default, a unix domain socket (or IPC socket) is created at /var/run/docker.sock, requiring either root permission, or docker group membership.
And this is enforced by giving read and write access to the docker group to the socket.
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 nov. 15 19:54 /var/run/docker.sock
As described in https://docs.docker.com/engine/install/linux-postinstall/, to add a user to the docker group, you can do it like this:
sudo usermod -aG docker $USER # this adds the permissions
newgrp docker # this refreshes the permissions in the current session
That being said, using docker with sudo is the same as using it with the docker group, because giving access to /var/run/docker.sock is equivalent to giving full root access:
From https://docs.docker.com/engine/install/linux-postinstall/
The docker group grants privileges equivalent to the root user. For details on how this impacts security in your system, see Docker Daemon Attack Surface.
If root permission is a security issue for your system, another page is mentioned:
To run Docker without root privileges, see Run the Docker daemon as a non-root user (Rootless mode).
docker is composed of multiple elements: https://docs.docker.com/get-started/overview/
First, there are clients:
$ type docker
docker is /usr/bin/docker
$ dpkg -S /usr/bin/docker
docker-ce-cli: /usr/bin/docker
You can see that the docker command is installed when you install the docker-ce-cli package.
Here, ce stands for community edition.
The docker cli communicates with the docker daemon, also known as dockerd.
dockerd is a daemon (a server) and by default exposes the unix socket /var/run/docker.sock, whose default permissions are root:docker.
There are other components involved, for instance dockerd uses containerd : https://containerd.io/
The rest is basic Linux permission management:
operating the docker daemon is the same as having root permission on that machine.
to operate the docker daemon, you need to be able to read from and write to the socket it listens on; in your case it is /var/run/docker.sock. Whether or not you are a sudoer does not change anything about that.
to be able to read from and write to /var/run/docker.sock, you must either be root or be in the docker group.
docker-compose is another CLI; it has the same requirements as docker.
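A quick sketch for checking whether your user can actually reach the daemon socket (the paths are the defaults discussed above):
ls -l /var/run/docker.sock      # should show root:docker and srw-rw----
id -nG | grep -qw docker && echo "in docker group" || echo "not in docker group"
docker version                  # contacts the daemon; fails with "permission denied" otherwise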
What worked for me was adding myself to the 'docker' group by running (as root, via sudo):
# usermod -a -G docker myUserName
You may need to re-login, since current shells may not yet "know" about being added to the docker group.
But you can run the following command if you don't want to re-login:
newgrp docker
https://docs.docker.com/engine/install/linux-postinstall/
I have performed the following experiment on two Docker hosts, "Host A" and "Host B": pulled a certain JupyterHub image, started it with /var/run/docker.sock mounted, then exec-ed into the running container and checked the ownership/permissions of /var/run/docker.sock inside the container. Details:
docker pull jupyterhub/jupyterhub:1.3
docker run -d --name jhub -v /var/run/docker.sock:/var/run/docker.sock jupyterhub/jupyterhub:1.3
docker exec -it jhub /bin/bash
Now in the container: ls -l /var/run/docker.sock
On "Host A" I get something unexpected:
srw-rw---- 1 nobody nogroup 0 Jun 24 08:22 /var/run/docker.sock
whereas on "Host B" I get what I should:
srw-rw---- 1 root 998 0 May 27 12:30 /var/run/docker.sock
(note that the GID 998 is the docker group ID on the host, so this is OK). It does not matter whether I explicitly mount /var/run/docker.sock read-write or read-only.
Both "Host A" and "Host B"...
...run Ubuntu 20.04.2 LTS,
...have Docker version 20.10.6, build 370c289 installed,
...the /var/run/docker.sock socket is owned by root:docker on both hosts, as it should be,
...the JupyterHub image is exactly the same, ID=c9d26511309a,
...the containers' users are root so there's no reason to map docker.sock to the nobody:nogroup user in one of them.
The only difference is that "Host A" is an Azure VM and "Host B" is a physical machine. I set up both and installed Docker on them exactly the same way (or so I think), carefully following the instructions on the Docker website.
Why does this matter? Because I get "Permission denied" errors if I try to spawn a notebook container from the JupyterHub container on "Host A" (the Azure VM). The DockerSpawner class needs to access /var/run/docker.sock, and if it's not owned by root it can't perform its job.
Diligent Googling turned up several discussions on having a similar problem in a Jenkins container but the solutions offered usually revolve around adding a user to the docker group which does not apply to my case. Help is therefore desperately needed :-) Thanks.
Update:
After a complete uninstall/purge and reinstall cycle the problem disappeared, as it so often happens.... :-(
I don't know if this solves your problem, but in my case I found that docker is running "rootless". You can check with docker info, under Security Options. Therefore, instead of mounting /var/run/docker.sock, I apparently need to mount /run/user/$USERID/docker.sock:
docker run --rm -it -v /run/user/1118/docker.sock:/var/run/docker.sock docker sh
So in your case,
docker pull jupyterhub/jupyterhub:1.3
docker run -d --name jhub -v /run/user/"$(id -u)"/docker.sock:/var/run/docker.sock jupyterhub/jupyterhub:1.3
docker exec -it jhub /bin/bash
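A quick way to confirm whether the daemon you are talking to is rootless (a sketch; the exact output format may vary between Docker versions):
docker info --format '{{.SecurityOptions}}'
# a rootless engine lists "name=rootless" among its security options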
I've set up Docker to run as a non-root user. Now I can start my containers as an ordinary user and I feel more comfortable.
me@machine:~$ docker run -it -v ~/test:/test alpine:3.6 sh
/ # touch /test/test1
Meanwhile on the host:
me@machine:~$ ls -l ~/test/
total 0
-rw-r--r-- 1 root root 0 Jul 31 15:50 test1
Why do the files belong to root? How can I make them and all created files in the container belong to me?
Interesting fact: this happens on Debian Linux. In contrast, when doing the same on a Mac, the created files belong to me.
Docker on macOS and Docker on Linux differ in behavior in many ways, so ignore that part and focus on the Linux side.
What you did using https://docs.docker.com/engine/installation/linux/linux-postinstall/#manage-docker-as-a-non-root-user basically just means that a non-root user is given access to the docker group. Through that docker group you are able to execute docker commands, but the docker daemon is still running as the root user.
You can confirm that by running:
ps aux | grep dockerd
And when you do a volume mapping, the directory gets created by docker, which runs with root user permissions. What you are looking for was launched fairly recently as Docker user namespaces. Please read the details at the URL below:
https://success.docker.com/KBase/Introduction_to_User_Namespaces_in_Docker_Engine
This will guide you on how to run your docker containers with a mapped user instead of root. In short, create/update the /etc/docker/daemon.json file to have the content below:
/etc/docker/daemon.json
{
"userns-remap": "<a non root user>"
}
Then restart the docker service. Processes inside your docker containers will still think they have root privileges, but they will run as a non-root user on the host.
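A minimal sketch of restarting and checking the effect, assuming a systemd-based host (the scratch directory and file names are just examples):
sudo systemctl restart docker
mkdir -p /tmp/userns-test && chmod 777 /tmp/userns-test    # world-writable scratch dir for the test
docker run --rm -v /tmp/userns-test:/data alpine touch /data/created-by-container-root
ls -ln /tmp/userns-test    # the file is owned by a remapped (subordinate) UID on the host, not 0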
I have a web application running in a Docker container. This application needs to access some files on our corporate file server (Windows Server with an Active Directory domain controller). The files I'm trying to access are image files created for our clients and the web application displays them as part of the client's portfolio.
On my development machine I have the appropriate folders mounted via entries in /etc/fstab and the host mount points are mounted in the Docker container via the --volume argument. This works perfectly.
Now I'm trying to put together a production container which will be run on a different server and which doesn't rely on the CIFS share being mounted on the host. So I tried to add the appropriate entries to the /etc/fstab file in the container & mounting them with mount -a. I get mount error(13): Permission denied.
A little research online led me to this article about Docker security. If I'm reading this correctly, it appears that Docker explicitly denies the ability to mount filesystems within a container. I tried mounting the shares read-only, but this (unsurprisingly) also failed.
So, I have two questions:
Am I correct in understanding that Docker prevents any use of mount inside containers?
Can anyone think of another way to accomplish this without mounting a CIFS share on the host and then mounting the host folder in the Docker container?
Yes, Docker is preventing you from mounting a remote volume inside the container as a security measure. If you trust your images and the people who run them, then you can use the --privileged flag with docker run to disable these security measures.
Further, you can combine --cap-add and --cap-drop to give the container only the capabilities that it actually needs. (See documentation) The SYS_ADMIN capability is the one that grants mount privileges.
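For example, a sketch under the assumption that mounting is the only extra privilege the container needs (my-image is a placeholder):
docker run -it --cap-add SYS_ADMIN my-image bash    # grant just the mount-related capability
# or, more restrictively, drop everything and add back only what is needed:
docker run -it --cap-drop ALL --cap-add SYS_ADMIN my-image bash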
Yes.
There is a closed issue, "mount.cifs within a container":
https://github.com/docker/docker/issues/22197
according to which adding
--cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH
to the run options will make mount -t cifs operational.
I tried it out and:
mount -t cifs //<host>/<path> /<localpath> -o user=<user>,password=<user>
within the container then works
You could use the smbclient command (part of the Samba package) to access the SMB/CIFS server from within the Docker container without mounting it, in the same way that you might use curl to download or upload a file.
There is a question on StackExchange Unix that deals with this, but in short:
smbclient //server/share -c 'cd /path/to/file; put myfile'
For multiple files there is the -T option, which can create or extract .tar archives; however, this looks like it would be a two-step process (one to create the .tar and then another to extract it locally). I'm not sure whether you could use a pipe to do it in one step.
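A rough sketch of what the tar mode might look like (server, share, credentials, and paths are all placeholders; check the smbclient man page for the exact flags in your version):
smbclient //server/share -U user%password -Tc files.tar path/on/share    # create a tar of remote files
tar -xf files.tar    # then unpack it locally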
You can use the Netshare docker volume plugin, which allows you to mount remote CIFS/Samba shares as volumes.
Do not make your containers less secure by exposing many ports just to mount a share, or by running them with --privileged.
Here is how I solved this issue:
First mount the volume on the server that runs docker.
sudo mount -t cifs -o username=YourUserName,uid=$(id -u),gid=$(id -g) //SERVER/share ~/WinShare
Change the username, SERVER, and WinShare here. This will ask for your sudo password, then it will ask for the password for the remote share.
Let's assume you created the WinShare folder inside your home folder. After running this command you should be able to see all the shared folders and files in the WinShare folder. In addition, since you use the uid and gid options, you will have write access without using sudo all the time.
Now you can run your container by using the -v flag and share a volume between the server and the container.
Let's say you run it like the following:
docker run -d --name mycontainer -v /home/WinShare:/home 2d244422164
You should be able to access the windows share and modify it from your container now.
To test it just do:
docker exec -it yourRunningContainer /bin/bash
cd /home
touch testdocfromcontainer.txt
You should see testdocfromcontainer.txt in the windows share.