I am trying to mount a volume into docker on a compute cluster running ubuntu 18.04. This volume is on a mounted filesystem to which my user has access, but sudo does not. I do have sudo permissions on this cluster. I use this command:
docker run -it --mount type=bind,source="$(pwd)"/logs,target=/workspace/logs tmp:latest bash
The result is this error:
docker: Error response from daemon: invalid mount config for type "bind": stat /home/logs: permission denied.
See 'docker run --help'.
Mounting the volume works fine on my local machine where both sudo and I have access to the drive I want to mount, which makes me believe that the problem is indeed that on the server sudo does not have permissions to the drive I want to mount into docker.
What I have tried:
running the post-install steps $ sudo groupadd docker && sudo usermod -aG docker $USER
running docker with sudo
running docker with --privileged
running docker with --user $(id -u):$(id -g)
setting the user inside the dockerfile with USER $(id -u):$(id -g) (plugging in the actual values)
Is there a way to mount the volume in this setup or to change the dockerfile to correctly access the drive with my personal user? Any help would be much appreciated.
On a side note, within Docker I would only require read access to the volume, in case that changes anything.
The container is created by the Docker daemon, which runs as root. That's why it still doesn't work even if you run the container or the docker command as your own user.
You might be able to run the daemon as your own user (rootless mode).
You could also look at changing the mount options (on the host system) so that the root user on the host does have access. How to do this depends on the type of filesystem.
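If the share is an NFS export (an assumption; the question doesn't say what the filesystem is), the usual culprit is root_squash on the server, which maps the client's root user, and therefore the Docker daemon, to an unprivileged user. A hypothetical /etc/exports line on the NFS server that lifts this, with illustrative paths and addresses:

```
# /etc/exports on the NFS server (path and subnet are placeholders)
# no_root_squash lets the client's root access the export; consider
# the security implications before enabling it.
/export/home  192.168.1.0/24(rw,sync,no_root_squash)
```

Rootless mode sidesteps the problem entirely, since the daemon then runs with your own uid, which already has access.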
Related
I have an Azure Container Instance that has a non-root user as default. For debugging and experimentation, I'd like to exec into the container the way you would with a normal Docker container: docker exec -u root ..., so that I have root permissions in the container. As detailed in Interacting with a container in Azure Container Instances, you can run exec commands through az container exec ..., but as mentioned in Christian's answer, https://stackoverflow.com/a/50334426/17129046, there doesn't seem to be a way to pass extra parameters: not just for the program being run, but also none of the additional options you'd have with docker exec, including the -u option to change the user that logs in to the container when running docker exec -u root ... '/bin/bash'.
I have tried using su in the container, but it prompts for a password, and I don't know what that password would be, since the Dockerfile that created the image this ACI uses doesn't set a password as far as I know (the image is created via bentoml). The default user is called bentoml. Result from running id:
uid=1034(bentoml) gid=1034(bentoml) groups=1034(bentoml)
Is there a workaround for this? Maybe a way to ssh into the container as root?
I tried to reproduce the issue and got the output below.
I pulled the image from Docker Hub using the command below:
docker pull <image_name>
While pulling the image, provide credentials if prompted.
I ran the image using the command below:
docker run -it <image_id> /bin/bash
Here the container is running, but I am not able to use root user commands.
To access the container as root, use the command below:
docker run -u 0 -it <image_id> /bin/bash
Now I am able to install all root packages.
You can also use this Dockerfile to set up passwordless sudo, so the container won't ask for a password:
# Example base image; any Debian-based image works here
FROM ubuntu:22.04
RUN apt-get update \
 && apt-get install -y sudo
RUN adduser --disabled-password --gecos '' docker
RUN adduser docker sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
After mounting /var/run/docker.sock into a running Docker container, I would like to explore the possibilities. Can I issue docker commands from inside the container, like docker stop? Why is it considered a security risk: what exact commands could I run as a root user in the container that could possibly compromise the host?
It's trivial to escalate access to the docker socket to a root shell on the host.
docker run -it --rm --privileged --pid host debian nsenter -t 1 -m -u -n -i bash
I can't give you exact commands to execute since I haven't tested this, but I assume you could:
Execute docker commands, including mounting host volumes to newly spawned docker containers, allowing you to write to the host
Overwrite the socket to somehow inject arbitrary code into the host
Escalate privileges to other docker containers running on the same machine
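As a concrete sketch of the first point, checking who can write to the socket already tells you who effectively has root (the escalation command itself is shown as a comment, not executed):

```shell
# The daemon runs as root, so write access to its socket is root-equivalent.
# Typically the socket is owned root:docker with mode 660:
stat -c '%U:%G %a' /var/run/docker.sock 2>/dev/null || echo "no docker socket on this host"

# From a container with the socket mounted, one escalation path is to ask
# the daemon to start a sibling container that bind-mounts the host's root
# filesystem and chroot into it:
# docker run --rm -v /:/host alpine chroot /host sh
```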
I have a container that's based on the matspfeiffer/flutter image. I'm trying to forward some of my devices present on my host to the container so eventually I can run an android emulator from inside it.
I'm providing the following options to the docker run command:
--device /dev/kvm
--device /dev/dri:/dev/dri
-v /tmp/.X11-unix:/tmp/.X11-unix
-e DISPLAY
This renders the /dev/kvm device accessible from within the container.
However, the permissions for the /dev/kvm device on my host are the following:
crw-rw----+ 1 root kvm 10, 232 oct. 5 19:12 /dev/kvm
So from within the container I'm unable to interact with the device properly because of insufficient permissions.
My best shot at fixing the issue so far has been to alter the permissions of the device on my host machine like so:
sudo chmod 777 /dev/kvm
It fixes the issue, but it goes without saying that this is not an appropriate solution.
I was wondering if there was a way to grant the container permission to interact with that specific device without altering the permissions on my host.
I am open to giving --privileged access to my host to my container.
I also wish to be able to create files from within the container without the permissions being messed up (I was once root inside a Docker container, which made every file I created in a shared volume inaccessible from my host).
For reference, I'm using VS Code remote containers to build and run the container so the complete docker run command as provided by VS Code is the following
docker run --sig-proxy=false -a STDOUT -a STDERR --mount type=bind,source=/home/diego/Code/Epitech/B5/redditech,target=/workspaces/redditech --mount type=volume,src=vscode,dst=/vscode -l vsch.local.folder=/home/diego/Code/Epitech/B5/redditech -l vsch.quality=stable -l vsch.remote.devPort=0 --device /dev/kvm --device /dev/dri:/dev/dri -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY --fifheri --entrypoint /bin/sh vsc-redditech-850ec704cd6ff6a7a247e31da931a3fb-uid -c echo Container started
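One approach that usually avoids chmod on the host (a sketch; it assumes the device's group is kvm, as in the listing above) is to pass the host's numeric gid for that group into the container with --group-add:

```shell
# Look up the numeric gid of the host's "kvm" group; group *names* differ
# between host and container, but the kernel only checks the numbers.
KVM_GID=$(getent group kvm | cut -d: -f3) || KVM_GID=""

# Then run with that gid as a supplementary group for the container user
# (printed here as a sketch rather than executed):
echo "docker run --device /dev/kvm --group-add ${KVM_GID:-<kvm-gid>} ..."
```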
I know similar questions have already been answered, and I studied them diligently.
I believe I have tried nearly all possible combinations, without success:
sudo docker run --device /dev/ttyAMA0:/dev/ttyAMA0 --device /dev/mem:/dev/mem --device /dev/gpiomem:/dev/gpiomem --privileged my_image_name /bin/bash
I have also referred to the Docker manual and tried with --cap-add=SYS_ADMIN as well:
sudo docker run --cap-add=SYS_ADMIN --device /dev/ttyAMA0:/dev/ttyAMA0 --device /dev/mem:/dev/mem --device /dev/gpiomem:/dev/gpiomem --privileged my_image_name /bin/bash
I also tried combinations with volumes: -v /sys:/sys
But I still get failed access to devices, due to Permission denied:
I have checked that those devices possibly needed exist and I can read them:
I am exhausted. What am I still doing wrong? Must I run my app inside the container as root? How in the world? :D
You're running commands in the container as appuser, while the device files are owned by root with various group permissions and no world access (crw-rw---- and crw-r-----). Those groups may look off because /etc/group inside the container won't match the host: what passes through to the container is the numeric uid/gid, not the user/group name. The app itself appears to expect you to run as root and even suggests sudo. That sudo is not on the docker command itself (though you may need that if your user on the host is not a member of the docker group) but on the process started inside the container:
docker run --user root --privileged my_image_name /bin/bash
Realize that this is very insecure, so make sure you trust the process inside the container as if it was running as root on the host outside of the container, because it has all the same access.
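The name/number distinction in the answer above can be seen on any Linux box (a quick illustration, no container needed):

```shell
# The kernel enforces numeric ids; names are cosmetic lookups in the
# local /etc/passwd and /etc/group. Compare the two listings:
ls -ln /dev/null   # numeric uid/gid -- what permission checks actually use
ls -l  /dev/null   # the same ids resolved to names
```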
I have this image in which I mount a volume from the host
-v /Users/john/workspace:/data/workspace
Inside the container I'm using a user other than root. The problem is that it cannot create or modify files inside /data/workspace (permission denied). For now I've worked around it by running chmod -R 777 workspace on the host. What would be the Docker way to solve this?
This might be solved with user mapping (issue 7198), but that same thread includes:
Managed to solve this using the new dockerfile args. It doesn't require doing anything special after the container is built, so I thought I'd share. (Requires Docker 1.9)
In the Dockerfile:
# Setup User to match Host User, and give superuser permissions
ARG USER_ID=0
RUN useradd code_executor -u ${USER_ID} -g sudo
RUN echo 'code_executor ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER ${USER_ID}
Then to build:
docker build --build-arg USER_ID=$(id -u) .
That way, the user in the container can write in the mounted host volume (no chown/chmod required)
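The reason this works is that Linux stores file ownership as numeric uids, so a container user that shares your uid owns the same files. A quick host-side check of the principle (no Docker involved):

```shell
# Files you create carry your numeric uid; any process with that uid
# (including one inside a container) passes the same ownership checks.
tmp=$(mktemp)
stat -c '%u' "$tmp"        # prints your uid, same value as `id -u`
[ "$(stat -c '%u' "$tmp")" = "$(id -u)" ] && echo "uids match"
rm -f "$tmp"
```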