Docker "permission denied" in container - linux

I am trying to run a Docker image with
docker run -it -v $PWD/examples:/home/user/examples image
which should make $PWD/examples on the host accessible in the container. However, when I ls in the container, it keeps giving me
ls: cannot access 'examples': Permission denied
I have tried the answers to similar questions: the z/Z option, chcon -Rt svirt_sandbox_file_t /host/path/, and run --privileged, but none of them has any effect in my case.
In fact, the z option appears to work for the first ls, but when I issue ls a second time it is denied again.

In the comments it turned out that there is probably a USER instruction in the Dockerfile of the image. This user is not allowed to access examples because of the file access permissions of examples.
It is possible to override USER with the docker run option --user.
A quick and dirty solution is to run with --user=root to allow arbitrary access.
Be aware that files written as root in the container to the folder examples will be owned by root.
A better solution is to look up the owner of examples, say foo, and specify foo's user id and group id so the container runs as exactly the same user:
docker run --user $(id -u foo):$(id -g foo) imagename
Another possible solution is to open up access with chmod 777 examples or chmod 755 examples (the execute bit is needed to traverse a directory, so 666/644 would not work here), but most probably you don't want that.
The best way is to look at the Dockerfile and check the purpose of its USER instruction.
If it only serves to avoid running as root in the container, the cleanest approach is --user=foo, or more precisely --user=$(id -u foo):$(id -g foo).
If something in the Dockerfile/image relies on the specific USER, it may be best to change the access permissions of examples instead.
If you have access to the Dockerfile, you may adjust it to fit your host user/the owner of examples.
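For instance, a minimal sketch of such a Dockerfile change (baseimage and appuser are placeholder names, and this assumes the image creates its user with useradd):
FROM baseimage
# Accept the host user's uid/gid at build time so the container user
# matches the owner of the mounted folder.
ARG UID=1000
ARG GID=1000
RUN groupadd -g $GID appuser && useradd -m -u $UID -g $GID appuser
USER appuser
which you would build with something like docker build --build-arg UID=$(id -u foo) --build-arg GID=$(id -g foo) -t image .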

Try running the container as privileged:
sudo docker run --privileged=true -itd -v /***/***:/*** ubuntu bash
For example:
sudo docker run --privileged=true -itd -v /home/willie:/willie ubuntu bash

Related

Docker Access to Raspberry Pi GPIO Pins --privileged does not work

I know similar questions have already been answered, and I studied them diligently.
I believe I have tried nearly all possible combinations, without success:
sudo docker run --device /dev/ttyAMA0:/dev/ttyAMA0 --device /dev/mem:/dev/mem --device /dev/gpiomem:/dev/gpiomem --privileged my_image_name /bin/bash
I have also referred to the Docker manual and tried with --cap-add=SYS_ADMIN as well:
sudo docker run --cap-add=SYS_ADMIN --device /dev/ttyAMA0:/dev/ttyAMA0 --device /dev/mem:/dev/mem --device /dev/gpiomem:/dev/gpiomem --privileged my_image_name /bin/bash
I also tried combinations with volumes: -v /sys:/sys
But I still get failed access to the devices, due to Permission denied.
I have checked that the devices that could possibly be needed exist, and I can read them.
I am exhausted. What am I still doing wrong? Is it that I must run my app inside the container as root? How in the world? :D
You're running commands in the container as appuser, while the device files are owned by root with various group permissions and no world access (crw-rw---- and crw-r-----). Those groups may look off because /etc/group inside the container won't match the host's, and what passes through to the container is the uid/gid, not the user/group name. The app itself appears to expect you are running as root and even suggests sudo. That sudo does not belong on the docker command itself (though you may need it there if your host user is not a member of the docker group) but on the process started inside the container:
docker run --user root --privileged my_image_name /bin/bash
Realize that this is very insecure, so make sure you trust the process inside the container as if it was running as root on the host outside of the container, because it has all the same access.
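A less drastic variant (my sketch, not part of the original answer; it builds on the gid pass-through described above): keep running as appuser, but add the host group that owns the device, matched by numeric gid since names don't cross the container boundary:
docker run \
  --device /dev/gpiomem:/dev/gpiomem \
  --group-add "$(stat -c '%g' /dev/gpiomem)" \
  my_image_name /bin/bash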

default user not added to docker group, have to do su $USER?

I have Ubuntu 18.04, and after installing Docker I added my user to the docker group with the command
sudo usermod -aG docker ${USER}
and logged in with
su - ${USER}
and if I check id, my user is added to the docker group.
But when I reopen the terminal I can't run docker commands without sudo unless I explicitly do su ${USER} again.
Also, I can't find the docker group with the default user.
What am I missing here?
@larsks already replied to the main question in a comment; however, I would like to elaborate on the implications of that change (adding your default user to the docker group).
Basically, the Docker daemon socket is owned by root:docker, so in order to use the Docker CLI commands, you need either to be in the docker group, or to prepend all docker commands with sudo.
As indicated in the Docker documentation, the first solution is risky on a personal workstation, because it amounts to giving the default user root permissions without the password-prompt protection of sudo. Indeed, users in the docker group are de facto root on the host. See for example this article and that one.
Instead, you may want to follow the second solution, which can be somewhat simplified by adding to your ~/.bashrc file an alias such as:
alias docker="sudo /usr/bin/docker"
Thus, docker run --rm -it debian will be automatically expanded to sudo /usr/bin/docker run --rm -it debian, thereby preserving sudo’s protection for your default user.
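As a side note (my addition, not part of the original answer): the reason the group change only shows up after su ${USER} is that group membership is read at login time, so you need a new login session, or a one-off newgrp:
newgrp docker   # start a shell with the docker group active
id -nG          # verify that 'docker' now appears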

How to give non-root user in Docker container access to a volume mounted on the host

I am running my application in a Docker container as a non-root user, since that is one of the best practices. However, when running the container I mount a host volume into it with -v /some/folder:/some/folder, because my application running inside the container needs to write files to the mounted host folder. But since I am running as a non-root user, it doesn't have permission to write to that folder.
Question
Is it possible to give a non-root user in a Docker container access to the host-mounted volume?
If not, is my only option to run the process in the Docker container as root?
There's no magic solution here: permissions inside Docker are managed the same as permissions outside Docker. You need to run the appropriate chown and chmod commands to change the permissions of the directory.
One solution is to have your container run as root, use an ENTRYPOINT script to make the appropriate permission changes, and then run your CMD as an unprivileged user. For example, put the following in entrypoint.sh:
#!/bin/sh
# Fix ownership of the mounted volume, then drop privileges and
# run the requested command as appuser.
chown -R appuser:appgroup /path/to/volume
exec runuser -u appuser -- "$@"
This assumes you have the runuser command available. You can accomplish pretty much the same thing using sudo instead.
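A rough sudo-based equivalent of the last line might be (assuming sudo is installed in the image):
exec sudo -u appuser -- "$@"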
Use the above script by including an ENTRYPOINT directive in your Dockerfile:
FROM baseimage
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]
CMD ["/usr/bin/myapp"]
This will start the container with:
/bin/sh /entrypoint.sh /usr/bin/myapp
The entrypoint script will make the required permissions changes, then run /usr/bin/myapp as appuser.
The above will throw an error if the environment doesn't have an appuser or appgroup, so it is better to use a numeric user ID instead of the user name.
Inside your container, run:
appuser$ id
This will show:
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
From the host environment, run:
mkdir -p /some/folder
chown -R 1000:1000 /some/folder
docker run -v /some/folder:/some/folder [your_container]
Inside your container, check
ls -lh
to see the user and group names; if they are not root, it should work.
In the specific situation of using an image built from a custom Dockerfile, you can do the following (using example commands for a Debian-based image):
FROM baseimage
...
RUN useradd --create-home appuser
USER appuser
RUN mkdir /home/appuser/my_volume
...
Then mount the volume using
-v /some/folder:/home/appuser/my_volume
Now appuser has write permissions to the volume as it's in their home directory. If the volume has to be mounted outside of their home directory, you can create it and assign appuser write permissions as an extra step within the Dockerfile.
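A minimal sketch of that extra step (reusing /some/folder from the question; note that with bind mounts the host directory's ownership still wins at run time, so the host side may need matching permissions too):
FROM baseimage
RUN useradd --create-home appuser
# Create the mount point outside the home directory and hand it to
# appuser before dropping privileges.
RUN mkdir -p /some/folder && chown appuser:appuser /some/folder
USER appuser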
I found it easiest to recursively apply Linux ACL (Access Control List) permissions on the host directory so that the non-root host user can access the volume contents:
sudo setfacl -m u:$(id -u):rwx -R /some/folder
To check who has access to the folder:
getfacl /some/folder
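The output will look something like this (names and ids depend on your system; user:1000:rwx is the entry added above):
# file: some/folder
# owner: root
# group: root
user::rwx
user:1000:rwx
group::r-x
mask::rwx
other::r-x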
Writing to the volume will create files and directories with the host user id, which might not be desirable for host -> container transfer. Writing can be disabled by giving just the :rx permission instead of :rwx.
To enable writing, add a mirror ACL policy in a container allowing container user id full access to volume parent path.
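One way to read that last step (my interpretation; 1000 stands in for the container user's uid, mirroring the host-side command above):
# On the host: give the container's uid the same recursive ACL on the
# volume parent path, so files created in the container stay accessible.
sudo setfacl -m u:1000:rwx -R /some/folder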

Why does docker container prompt "Permission denied"?

I use the following command to run a Docker container and map a directory from the host (/root/database) into the container (/tmp/install/database):
# docker run -it --name oracle_install -v /root/database:/tmp/install/database bofm/oracle12c:preinstall bash
But in the container, I find I can't use ls to list the contents of /tmp/install/database/, even though I am root and have all privileges:
[root@77eb235aceac /]# cd /tmp/install/database/
[root@77eb235aceac database]# ls
ls: cannot open directory .: Permission denied
[root@77eb235aceac database]# id
uid=0(root) gid=0(root) groups=0(root)
[root@77eb235aceac database]# cd ..
[root@77eb235aceac install]# ls -alt
......
drwxr-xr-x. 7 root root 4096 Jul 7 2014 database
I checked /root/database on the host, and everything seems OK:
[root@localhost ~]# ls -lt
......
drwxr-xr-x. 7 root root 4096 Jul 7 2014 database
Why does docker container prompt "Permission denied"?
Update:
The root cause is related to SELinux. I actually ran into a similar issue last year.
A permission denied within a container for a shared directory can occur when that shared directory is stored on a device, because by default containers cannot access any devices. Adding the option --privileged to docker run allows the container to access all devices and perform kernel calls. This is not considered secure.
A cleaner way to share a device is to use the option docker run --device=/dev/sdb (if /dev/sdb is the device you want to share).
From the man page:
--device=[]
Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)
--privileged=true|false
Give extended privileges to this container. The default is false.
By default, Docker containers are “unprivileged” (=false) and cannot, for example, run a Docker daemon inside the Docker container. This is because by default a container is not allowed to access any devices. A “privileged” container is given access to all devices.
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor to allow the container nearly all the same access to the host as processes running outside of a container on the host.
I had a similar issue when sharing an NFS mount point as a volume using docker-compose. I was able to resolve the issue with:
docker-compose up --force-recreate
Even though you already found the issue, this may help someone else.
Another reason is a mismatch of UID/GID. This often shows up as being able to modify a mount as root but not as the container's user.
You can set the UID, so for an Ubuntu container running as the ubuntu user you may need to append :uid=1000 (check with id -u), or set the UID locally, depending on your use case.
uid=value and gid=value
Set the owner and group of the files in the filesystem (default: uid=gid=0)
There is a good blog post about it here, with this tmpfs example:
docker run \
--rm \
--read-only \
--tmpfs=/var/run/prosody:uid=100 \
-it learning/tmpfs
http://www.dendeer.com/post/docker-tmpfs/
I got the answer from a comment under: Why does docker container prompt Permission denied?
man docker-run gives the proper answer:
Labeling systems like SELinux require that proper labels are placed on volume content mounted into a container. Without a label, the security system might prevent the processes running inside the container from using the content. By default, Docker does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option tells Docker that two containers share the volume content. As a result, Docker labels the content with a shared content label. Shared volume labels allow all containers to read/write content. The Z option tells Docker to label the content with a private unshared label. Only the current container can use a private volume.
For example:
docker run -it --name oracle_install -v /root/database:/tmp/install/database:z ...
So I was trying to run a C file using Python's os.system in the container, but I was getting the same error. My fix was to add the line RUN chmod -R 777 app while creating the image. It worked for me.

Mount data volume to docker with read&write permission

I want to mount a host data volume into Docker. The container should have read and write permission to it; at the same time, any changes in the container should not affect the data on the host.
I can imagine a solution that mounts several data volumes to a single folder: one read-only, another read-write. But only the second '-v' works in my command:
docker run -ti --name build_cent1 -v /codebase/:/code:ro -v /temp:/code:rw centos6:1.0 bash
only this second '-v' works in my command,
That might be because both -v options attempt to mount host folders on the same container destination folder /code.
-v /codebase/:/code:ro
^^^^^
-v /temp:/code:rw
^^^^^
You could mount those host folders into two separate subfolders within /code.
As in:
-v /codebase/:/code/base:ro -v /temp:/code/temp:rw.
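Applied to the command from the question, that would be something like:
docker run -ti --name build_cent1 \
  -v /codebase/:/code/base:ro \
  -v /temp:/code/temp:rw \
  centos6:1.0 bash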
Normally in this case I think you would ADD the folder to the Docker image, so that any container running it will have it in its (writable) filesystem, with writes going to a different layer.
You need to write a Dockerfile in the folder above the one you wish to use, which should look something like this:
FROM my/image
ADD codebase /codebase
Then you build the image using docker build -t some-name <path>. These steps could be added to the build scripts of your app (maybe you will find a plugin to help there). Then you can docker run some-name.
The downside is that there is a copy to make, plus the image creation; but should you launch many containers, they will all share the same read-only copy of the layer and write their own modifications to independent layers above.
Got one answer from nixun on GitHub:
you can simply use overlayfs to fix this:
mount -t overlay overlay \
  -o lowerdir=/codebase,upperdir=/temp,workdir=/workdir /codebase_new
docker run -ti --name build_cent1 -v /codebase_new:/code:rw centos6:1.0 bash
(Note that workdir must be an empty directory on the same filesystem as upperdir.)
This solution has good flexibility. Creating an image with the shared folder would also work, but it cannot update the folder data easily.
This answer is not for Docker users, but it will help anyone who uses Lima to manage their containers.
I was stuck trying to solve the issue with limactl and lima nerdctl. I thought it worth sharing the fix so that it may help anyone in the community who's using Lima instead of Docker:
By default, Lima mounts volumes as read-only. To make them writable by default, edit the file and set writable: true under the mounts section:
$ vim ~/.lima/default/lima.yaml
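The relevant section of lima.yaml would then look something like this (a sketch; the location values depend on your setup):
mounts:
- location: "~"
  writable: true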
then restart Lima:
limactl list #this lists all running vms
limactl stop default #or name of the machine
limactl start default #or name of the machine
You would still need to specify mount options exactly as you would with Docker:
lima nerdctl run -ti --name build_cent1 \
-v /codebase/:/code/base:ro \
-v /temp:/code/temp:rw \
centos6:1.0 bash
For more information about Lima, please check this out.
