I'd like to use Docker-in-Docker, but the --privileged flag gives blanket access to host devices. Is there a way to run this using a combination of volumes, --cap-add, etc. instead?
Unfortunately no, you must use the --privileged flag to run Docker in Docker. You can take a look at the official announcement, where they state that this is one of the intended purposes of the --privileged flag.
Basically, running the Docker daemon requires more access to the host system's devices than a container gets without --privileged.
Yes, you can run Docker in Docker without the --privileged flag. It involves mounting the Docker socket into the container, like so:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/bin/docker \
alpine docker ps -a
That is going to mount the Docker socket and executable into the container and run docker ps -a within the alpine container. Jérôme Petazzoni, who authored the dind example and did a lot of the work on the --privileged flag, had this to say about Docker in Docker:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
I have been using this approach for a while now and it works pretty well.
The caveat with this approach is that storage gets funky. You're better off using data volume containers or named data volumes rather than mounting host directories. Since you're using the Docker socket from the host, any directories you want to mount into a child container need to be specified in the context of the host, not the parent container. It gets weird. I have had better luck with data volume containers.
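For illustration (an untested sketch that needs a running Docker daemon; the volume name "builddata" is made up), a named volume created through the host's daemon sidesteps the host-path confusion entirely:

```shell
# The named volume lives on the host daemon, so every child container
# sees the same data no matter which "parent" container launched it.
docker volume create builddata
docker run --rm -v builddata:/data alpine sh -c 'echo hello > /data/file.txt'
docker run --rm -v builddata:/data alpine cat /data/file.txt
```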
Yes. There are rootless (dind-rootless) versions of the docker image on Docker Hub.
https://hub.docker.com/_/docker
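One caveat worth noting (based on the image documentation): the rootless variant currently still wants --privileged for its user-namespace setup, but the daemon inside runs as a non-root user, which shrinks the blast radius:

```shell
# Start a rootless Docker-in-Docker daemon; despite --privileged, the
# inner dockerd runs as an unprivileged user inside the container.
docker run -d --name dind-rootless --privileged docker:dind-rootless
```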
Related
After mounting /var/run/docker.sock into a running Docker container, I would like to explore the possibilities. Can I issue docker commands from inside the container, like docker stop? Why is it considered a security risk: what exact commands could I run as a root user in Docker that could possibly compromise the host?
It's trivial to escalate access to the docker socket to a root shell on the host.
docker run -it --rm --privileged --pid host debian nsenter -t 1 -m -u -n -i bash
I couldn't give you exact commands to execute since I haven't tested this, but I'm assuming you could:
Execute docker commands, including mounting host volumes to newly spawned docker containers, allowing you to write to the host
Overwrite the socket to somehow inject arbitrary code into the host
Escalate privileges to other docker containers running on the same machine
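As a concrete sketch of the first point (untested; the file path is arbitrary), anyone who can reach the socket can start a sibling container that bind-mounts the host's root filesystem and modify it as root:

```shell
# The container's root user writes straight through to the host filesystem.
docker run --rm -v /:/host alpine sh -c 'echo compromised >> /host/tmp/proof.txt'
```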
docker cp is used to copy files from a container to the host, but you have to run docker cp from outside the container. My question is, how can I do it when I am inside the container, having entered it using the command
docker exec -it container_id /bin/bash
Do we have something like scp here?
I think the easiest way of doing this is to mount part of the host filesystem to the container and just spit out the file to the mount point.
The whole idea of containers is that they're not able to do stuff with the host system unless the docker admin explicitly allows it.
You may also try mounting the Docker socket into the container (if it's a Docker-in-Docker container) and running docker cp from within the container itself, but the first solution is way cleaner IMHO.
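A minimal sketch of the bind-mount approach (the paths and image name are made up):

```shell
# On the host: start the container with a host directory mounted in.
docker run -it -v /tmp/out:/out my_image /bin/bash

# Inside the container: anything copied to the mount point appears on
# the host at /tmp/out.
cp /path/to/result.txt /out/
```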
Now we can run docker containers like docker run --device /dev/fuse $IMAGE.
But Kubernetes doesn't support host devices yet; refer to https://github.com/kubernetes/kubernetes/issues/5607 .
Is it possible to mount devices like volumes? We have tried -v /dev/fuse:/dev/fuse, but the container didn't have permission to open that char device. Can we add more capabilities to do that?
We have tried docker run --cap-add=ALL -v /dev/fuse:/dev/fuse and it didn't work. I think --device or --privileged is needed for this scenario.
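In my experience the device node alone is not enough for FUSE, because mounting a filesystem also requires CAP_SYS_ADMIN (and sometimes a relaxed AppArmor profile); the image name below is a placeholder:

```shell
# Grant only the device and the single capability FUSE mounts need,
# rather than full --privileged.
docker run --device /dev/fuse --cap-add SYS_ADMIN \
    --security-opt apparmor:unconfined my_fuse_image
```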
I'm using a non-root user in a secured environment to run a stock DB Docker container (elasticsearch). Of course, I want the data to be mounted so I won't lose it when the container is destroyed.
The problem is that this container writes to that volume with root ownership, and then the host user doesn't have permission to move or remove the files.
I know that most docker images use root user from inside, but how can I control the file ownership of the hosting machine?
You can create a data container docker create -v /usr/share/elasticsearch/data --name esdata elasticsearch /bin/true, then use it in your container docker run -d --volumes-from esdata --name some-elasticsearch elasticsearch.
This is the preferred data pattern for Docker; you can find out more on this Docker documentation page.
To answer your question, use docker run --user "$(id -u)" ...; it will run the program within the container with the current user's id. Then you might have the same question as I did.
I answered it in some way I hope it might be useful.
Docker with '--user' can not write to volume with different ownership
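A sketch of the --user approach for this elasticsearch case (the host path is an assumption, and the image may still require the data directory to be writable by that uid):

```shell
# Pre-create the data directory so it is owned by the invoking user,
# then run the container under that same uid/gid; files it writes to
# the bind mount will belong to the host user, not root.
mkdir -p "$PWD/esdata"
docker run -d --user "$(id -u):$(id -g)" \
    -v "$PWD/esdata:/usr/share/elasticsearch/data" elasticsearch
```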
I would like to run a docker container that hosts a simple web application, however I do not understand how to design/run the image as a server. For example:
docker run -d -p 80:80 ubuntu:14.04 /bin/bash
This will start and immediately shut down the container. Instead we can start it interactively:
docker run -i -p 80:80 ubuntu:14.04 /bin/bash
This works, but now I have to keep the interactive shell open for every container that is running? I would rather just start it and have it running in the background. A hack would be to use a command that never returns:
docker run -d -p 80:80 {image} tail -F /var/log/kern.log
But now I cannot connect to a shell anymore to inspect what is going on if the application acts up.
Is there a way to start the container in the background (as we would do for a vm), in a way that allows for attaching/detaching a shell from the host? Or am I completely missing the point?
The final argument to docker run is the command to run within the container. When you run docker run -d -p 80:80 ubuntu:14.04 /bin/bash, you're running bash in the container and nothing more. You actually want to run your web application in a container and to keep that container alive, so you should do docker run -d -p 80:80 ubuntu:14.04 /path/to/yourapp.
But your application probably depends on some configuration in order to run. If it reads its configuration from environment variables, you can use the -e key=value arguments with docker run. If your application needs a configuration file to be in place, you should probably use a Dockerfile to set up the configuration first.
This article provides a nice complete example of running a node application in a container.
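As a minimal sketch (the base image and file names are assumptions), a Dockerfile that bakes the configuration in and runs the app as the container's main process might look like:

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 80
# The app is the container's main process; the container stays alive
# exactly as long as the app does.
CMD ["node", "server.js"]
```

Then docker run -d -p 80:80 yourimage starts it detached, and docker exec -it <container> sh gets you a shell when you need to poke around.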
Users of Docker tend to assume a container is a complete VM, while the Docker design concept is more focused on optimal containerization than on mimicking a VM inside a container.
Both views are valid, however some implementation details are not easy to get familiar with in the beginning. I'll try to summarize some of the implementation differences in a way that is easier to understand.
SSH
SSH would be the most straightforward way to get inside a Linux VM (or container); however, many Docker images do not have an SSH server installed. I believe this is for optimization and security reasons.
docker attach
docker attach can be handy if it works out of the box. However, as of writing it is not stable (see https://github.com/docker/docker/issues/8521). It might be associated with the SSH setup, but I'm not sure when it will be completely fixed.
docker recommended practices (nsenter and etc)
Some alternatives (or best practices in some sense) are recommended by Docker at https://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/
This practice basically separates the mutable elements out of a container and maps them to places on the Docker host so they can be manipulated from outside the container and/or persisted. It could be a good practice in a production environment, but not right now, when most Docker-related projects are still around dev and staging environments.
bash command line
"docker exec -it {container id} bash" could be a very handy and practical tool to get into the machine.
Some basics
"docker run" creates a new container, so previous changes will not be saved.
"docker start" will start an existing container, so previous changes will still be in the container; however, you need to find the correct container id among many with the same image id. You need "docker commit" to save a container's state as a new image version if wanted.
Ctrl-C will stop the container when you exit. You will want to append "&" at the end so the container can run in the background and give you back the prompt when you hit the enter key.
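The run/start/commit lifecycle above can be sketched as (container and image names are placeholders):

```shell
docker run -it ubuntu:14.04 /bin/bash     # creates a brand-new container
docker ps -a                              # find its container id after exit
docker start -ai <container_id>           # resume that same container, changes intact
docker commit <container_id> myimage:v2   # freeze its state as a new image
```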
To answer the original question: you can tail some file, as you mentioned, to keep the process running.
To reach the shell, instead of "attach", you have two options:
docker exec -it <container_id> /bin/bash
Or
run an SSH daemon in the container, map the SSH port, and then SSH into the container.