I know a similar question has already been answered, and I studied it diligently.
I believe I have tried nearly all possible combinations, without success:
sudo docker run --device /dev/ttyAMA0:/dev/ttyAMA0 --device /dev/mem:/dev/mem --device /dev/gpiomem:/dev/gpiomem --privileged my_image_name /bin/bash
I have also referred to the Docker manual and additionally tried with --cap-add=SYS_ADMIN:
sudo docker run --cap-add=SYS_ADMIN --device /dev/ttyAMA0:/dev/ttyAMA0 --device /dev/mem:/dev/mem --device /dev/gpiomem:/dev/gpiomem --privileged my_image_name /bin/bash
I also tried combinations with volumes: -v /sys:/sys
But I still get failed access to the devices due to "Permission denied" errors.
I have checked that the devices I might need exist and that I can read them on the host.
I am exhausted. What am I still doing wrong? Do I have to run my app inside the container as root? How in the world? :D
You're running commands in the container as appuser, while the device files are owned by root, with group permissions but no world access (crw-rw---- and crw-r-----). The group names may look off because /etc/group inside the container won't match the host; what passes through to the container is the numeric uid/gid, not the user/group name. The app itself appears to expect to run as root and even suggests sudo. That sudo does not belong on the docker command itself (though you may need it there if your host user is not a member of the docker group) but on the process started inside the container:
docker run --user root --privileged my_image_name /bin/bash
Realize that this is very insecure, so make sure you trust the process inside the container as if it were running as root on the host outside of the container, because it has all the same access.
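If you'd rather not run the process as root, a less drastic option may be to keep your user and add the devices' numeric group IDs with --group-add (a sketch, untested with your image; stat simply reads each device's owning GID on the host, and direct /dev/mem access will generally still require root):
docker run \
  --device /dev/ttyAMA0 \
  --device /dev/gpiomem \
  --group-add "$(stat -c '%g' /dev/ttyAMA0)" \
  --group-add "$(stat -c '%g' /dev/gpiomem)" \
  my_image_name /bin/bash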
Related
After mounting /var/run/docker.sock into a running Docker container, I would like to explore the possibilities. Can I issue docker commands from inside the container, like docker stop? Why is it considered a security risk: what exact commands could I run as a root user in Docker that could possibly compromise the host?
It's trivial to escalate access to the docker socket to a root shell on the host.
# joins the namespaces of PID 1 on the host, i.e. drops you into a root shell on the host
docker run -it --rm --privileged --pid host debian nsenter -t 1 -m -u -n -i bash
I couldn't give you exact commands to execute since I haven't tested this, but I assume you could:
Execute docker commands, including mounting host volumes into newly spawned containers, allowing you to write to the host (see the sketch after this list)
Overwrite the socket to somehow inject arbitrary code into the host
Escalate privileges to other docker containers running on the same machine
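For instance, the first point is a one-liner once a docker CLI exists inside the container with the socket mounted (a sketch; the alpine image and target file are arbitrary choices):
# talks to the host daemon through the mounted socket and bind-mounts the host's root filesystem
docker run --rm -v /:/host alpine sh -c 'echo compromised >> /host/tmp/proof'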
I have a container that's based on the matspfeiffer/flutter image. I'm trying to forward some of my devices present on my host to the container so eventually I can run an android emulator from inside it.
I'm providing the following options to the docker run command:
--device /dev/kvm
--device /dev/dri:/dev/dri
-v /tmp/.X11-unix:/tmp/.X11-unix
-e DISPLAY
This renders the /dev/kvm device accessible from within the container.
However, the permissions for the /dev/kvm device on my host are the following:
crw-rw----+ 1 root kvm 10, 232 oct. 5 19:12 /dev/kvm
So from within the container I'm unable to interact with the device properly because of insufficient permissions.
My best shot at fixing the issue so far has been to alter the permissions of the device on my host machine like so:
sudo chmod 777 /dev/kvm
It fixes the issue, but it goes without saying that this is not an appropriate solution.
I was wondering if there was a way to grant the container permission to interact with that specific device without altering the permissions on my host.
I am open to giving --privileged access to my host to my container.
I also wish to be able to create files from within the container without the permissions getting messed up (I was once root inside a Docker container, which made every file I created in a shared volume from within the container inaccessible from my host).
For reference, I'm using VS Code remote containers to build and run the container, so the complete docker run command as provided by VS Code is the following:
docker run --sig-proxy=false -a STDOUT -a STDERR --mount type=bind,source=/home/diego/Code/Epitech/B5/redditech,target=/workspaces/redditech --mount type=volume,src=vscode,dst=/vscode -l vsch.local.folder=/home/diego/Code/Epitech/B5/redditech -l vsch.quality=stable -l vsch.remote.devPort=0 --device /dev/kvm --device /dev/dri:/dev/dri -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY --fifheri --entrypoint /bin/sh vsc-redditech-850ec704cd6ff6a7a247e31da931a3fb-uid -c echo Container started
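One way to avoid the chmod on the host may be to keep --device /dev/kvm but also pass the host's kvm group ID into the container with --group-add, so the container user picks up the group's rw permission (a sketch, assuming getent resolves the kvm group on the host; not tested with the VS Code-generated command):
docker run --device /dev/kvm --group-add "$(getent group kvm | cut -d: -f3)" \
  --device /dev/dri:/dev/dri -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY \
  matspfeiffer/flutter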
I have performed the following experiment on two Docker hosts, "Host A" and "Host B": pulled a certain JupyterHub image, started it with /var/run/docker.sock mounted, then exec-ed into the running container and checked the ownership/permissions of /var/run/docker.sock inside the container. Details:
docker pull jupyterhub/jupyterhub:1.3
docker run -d --name jhub -v /var/run/docker.sock:/var/run/docker.sock jupyterhub/jupyterhub:1.3
docker exec -it jhub /bin/bash
Now in the container: ls -l /var/run/docker.sock
On "Host A" I get something unexpected:
srw-rw---- 1 nobody nogroup 0 Jun 24 08:22 /var/run/docker.sock
whereas on "Host B" I get what I should:
srw-rw---- 1 root 998 0 May 27 12:30 /var/run/docker.sock
(note that the GID 998 is the docker group ID on the host, so this is OK). It does not matter whether I explicitly mount /var/run/docker.sock read-write or read-only.
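For reference, the host-side check for that GID would be something like:
getent group docker   # prints e.g. docker:x:998:...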
Both "Host A" and "Host B"...
...run Ubuntu 20.04.2 LTS,
...have Docker version 20.10.6, build 370c289 installed,
...the /var/run/docker.sock socket is owned by root:docker on both hosts, as it should be,
...the JupyterHub image is exactly the same, ID=c9d26511309a,
...the containers' users are root so there's no reason to map docker.sock to the nobody:nogroup user in one of them.
The only difference is that "Host A" is an Azure VM and "Host B" is a physical machine. I set up both and installed Docker on them exactly the same way (or so I think), carefully following the instructions on the Docker website.
Why does this matter? Because I get "Permission denied" errors if I try to spawn a notebook container from the JupyterHub container on "Host A" (the Azure VM). The DockerSpawner class needs to access /var/run/docker.sock, and if the socket is not owned by root it can't do its job.
Diligent Googling turned up several discussions of a similar problem in a Jenkins container, but the solutions offered usually revolve around adding a user to the docker group, which does not apply to my case. Help is therefore desperately needed :-) Thanks.
Update:
After a complete uninstall/purge and reinstall cycle the problem disappeared, as it so often happens.... :-(
I don't know if this solves your problem, but in my case I found that Docker is running "rootless". You can check with docker info, under Security Options. Therefore, instead of mounting /var/run/docker.sock, apparently I need to mount /run/user/$USERID/docker.sock:
docker run --rm -it -v /run/user/1118/docker.sock:/var/run/docker.sock docker sh
So in your case,
docker pull jupyterhub/jupyterhub:1.3
docker run -d --name jhub -v /run/user/"$(id -u)"/docker.sock:/var/run/docker.sock jupyterhub/jupyterhub:1.3
docker exec -it jhub /bin/bash
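To confirm whether the daemon on "Host A" is rootless in the first place, something like this should do (on a rootless install the security options include name=rootless):
docker info --format '{{.SecurityOptions}}'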
I am trying to mount a volume into docker on a compute cluster running ubuntu 18.04. This volume is on a mounted filesystem to which my user has access, but sudo does not. I do have sudo permissions on this cluster. I use this command:
docker run -it --mount type=bind,source="$(pwd)"/logs,target=/workspace/logs tmp:latest bash
The result is this error:
docker: Error response from daemon: invalid mount config for type "bind": stat /home/logs: permission denied.
See 'docker run --help'.
Mounting the volume works fine on my local machine where both sudo and I have access to the drive I want to mount, which makes me believe that the problem is indeed that on the server sudo does not have permissions to the drive I want to mount into docker.
What I have tried:
running the post-install steps $ sudo groupadd docker && sudo usermod -aG docker $USER
running docker with sudo
running docker with --privileged
running docker with --user $(id -u):$(id -g)
setting the user inside the Dockerfile with USER $(id -u):$(id -g) (plugging in the actual values)
Is there a way to mount the volume in this setup or to change the dockerfile to correctly access the drive with my personal user? Any help would be much appreciated.
On a side note, within Docker I would only require read access to the volume, in case that changes anything.
The container is created by the Docker daemon, which runs as root. That's why it still doesn't work even if you run the container or the docker command as your own user.
You might be able to run the daemon as your own user (rootless mode).
You could also look at changing the mount options (on the host system) so that the root user on the host does have access. How to do this depends on the type of filesystem.
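For example, if the share is a FUSE mount (sshfs and the like), FUSE by default denies access to everyone except the mounting user, including root; remounting with allow_other usually lets the daemon stat the path (a sketch, the remote host and export path being placeholders):
# requires user_allow_other to be enabled in /etc/fuse.conf
sshfs -o allow_other user@fileserver:/export/logs /home/logs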
Using the latest Docker engine, I want to create a container that mounts a volume over the network. But when I try to execute the mount command, I get the error Unable to apply new capability set. I found out that Docker restricts permissions for operations like mounting. Different sources say that it's necessary to add the SYS_ADMIN capability.
I did this, but it still does not work with the following command:
docker run --cap-add=SYS_ADMIN --cap-add=DAC_READ_SEARCH --privileged --memory=2g -d --name $containerName $imageName
This seems to work:
docker run ... \
--cap-add SYS_ADMIN \
--cap-add DAC_READ_SEARCH \
my_container
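With those capabilities in place, the CIFS mount inside the container should then work along these lines (a sketch; server name, share and credentials are placeholders):
mount -t cifs -o username=myuser,password=mypassword //fileserver/efbo_share /mnt/efbo_share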
Currently you will probably need to make sure to unmount your volume before you stop the container. Otherwise the host will not allow restarting any containers, due to an untidy work queue or something. I created a script to stop my container by first unmounting and then killing the CMD process. I run this inside the container when I need to kill it.
# lazy-unmount the CIFS share first, then stop the application processes
umount -t cifs -l /mnt/efbo_share
sleep 1
pkill npm
pkill node
You can read about the unmount issues at these links:
https://github.com/moby/moby/issues/22197
https://github.com/moby/moby/issues/5618