Escaping Docker Container - security

I have installed Docker on my Ubuntu 16.04 machine.
Is there any way to escape from a Docker container to the host (RCE, privilege escalation, etc.)? In other words, is there any way to access the host machine from inside the Docker container?
Below is the command I am using to launch the container.
docker run --rm -ti ubuntu:16.04
I am going to give people at my college access to Docker containers for testing purposes, and I have hosted everything on my personal cloud. Is it possible to compromise the host machine from the container?
Please let me know. Before I start giving out access at my college, I need to be sure about this.
PS: I have configured macvlan, and the containers cannot talk to each other.
Thanks!!

Related

Use of Docker and Linux containers (LXC)

Given that using Docker alongside LXC containers on the same host can create problems with iptables (if I understood correctly; source: https://github.com/docker/for-linux/issues/103), does the same apply when using Docker INSIDE an LXC container?
In other words, could we create an LXC container on host A, install Docker inside that container, and use it without being affected by this issue?
Context: why am I asking this? Because I want to create and run a gitlab-runner inside a Docker container (together with other things such as Docker, Maven, etc.) on a host that already has LXC containers running, and I do not want to touch those containers.

Starting and stopping docker containers from another container

I need to start, stop and restart containers from inside another container.
For Example:
Container A -> start Container B
Container A -> stop Container C
My Dockerfile:
FROM node:7.2.0-slim
WORKDIR /docker
COPY . /docker
CMD [ "npm", "start" ]
Docker Version 1.12.3
I want to avoid using an SSH connection. Any ideas?
A container runs in an isolated environment (e.g. with its own file system and network stack) and thus has no direct way to interact with the host it is running on. This is of course intentional, to provide real isolation.
But there is a way to run containers with more privileges. To talk to the Docker daemon on the host, you can for example mount the Docker socket of the host system into the container. This works the same way you would mount any host folder into the container.
docker run -v /var/run/docker.sock:/var/run/docker.sock yourimage
For an example, please see the docker-compose file of the traefik proxy, which is a process that listens for containers starting and stopping on the host in order to activate proxy routes to them. You can find the example in the traefik proxy repository.
To be able to talk to the Docker daemon on the host, you also need a Docker client installed in the container, or you can use a Docker API client library for your programming language. There is an official list of such libraries for different programming languages in the Docker docs.
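If you only need a few simple actions, you can also talk to the Docker Engine API directly through the mounted socket. A minimal sketch, assuming curl is available in the container; the container names container_b and container_c are placeholders:
# start one container and stop another via the host's Docker daemon
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/container_b/start
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/container_c/stop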
Of course you should be aware of what privileges you give to the container. Someone who manages to exploit your application could possibly shut down your other containers or, even worse, start their own containers on your system, which can easily be used to gain control over it. Keep that in mind when you build your application.

Docker containers as Linux services?

I just created a secure Docker Registry and ran it on a remote VM (using docker run ...). I then ran docker ps and saw that it is in fact running. I exited the machine and then SSHed back in. Again, I ran docker ps and verified it "survived" me exiting the SSH session.
This has me wondering: do Docker containers actually run as Linux services? If not, is there any way of getting them to run as traditional (upstart- or systemd-based) services? Is there even any reason/merit to do so?
The docker engine runs as a daemon.
That is mentioned in "Host integration":
As of Docker 1.2, restart policies are the built-in Docker mechanism for restarting containers when they exit. If set, restart policies will be used when the Docker daemon starts up, as typically happens after a system boot. Restart policies will ensure that linked containers are started in the correct order.
If restart policies don’t suit your needs (i.e., you have non-Docker processes that depend on Docker containers), you can use a process manager like upstart, systemd or supervisor instead.
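For example, a minimal sketch of relying on the built-in restart policy (the registry:2 image and port 5000 are assumptions based on the registry scenario in the question):
docker run -d --restart=always --name registry -p 5000:5000 registry:2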
By the way, running a container with certain options involves some security risks: see issue 14767 and issue 6401:
The container (with the --net host option) is the host when it comes to the network stack, so any services running on the host are accessible from the container. It just so happens that you communicate with upstart (and others) this way.
This feature is a runtime only option, just like the --privileged flag, therefore an image cannot request this, it must be explicitly set at runtime.
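If you do want a traditional systemd service instead, one possible minimal unit is sketched below. It assumes a container named registry has already been created (e.g. with a docker run command like the one above, minus --restart) and that the docker binary lives at /usr/bin/docker:
[Unit]
Description=Docker registry container
Requires=docker.service
After=docker.service

[Service]
# attach to the existing container on start, stop it gracefully on stop
ExecStart=/usr/bin/docker start -a registry
ExecStop=/usr/bin/docker stop -t 10 registry
Restart=always

[Install]
WantedBy=multi-user.target
You would then enable and start it with systemctl enable and systemctl start as usual.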

connecting to services on docker host from docker container

Apologies for asking two unrelated questions.
What is the best way of accessing the host machine of a Docker container? (I.e. I am trying to access a Kafka instance running on the host from my Docker container so that I can publish some messages.)
When I run docker run ..... on an image which I've modified and which may have an issue/syntax error, it will naturally not start. Is there a log file anywhere that I could look at to debug the issue? (This question is somewhat related to the first one, since I did what was suggested in another post, but the image is still not starting.)
There is an ongoing discussion about what to use and what not to use, and I don't really know what is best. Using docker run --net="host" is pretty easy but can be dangerous. See From inside of a Docker container, how do I connect to the localhost of the machine?
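As a rough sketch of the --net="host" approach (the Kafka port 9092 is an assumption; it is simply the default):
docker run --rm -ti --net=host ubuntu:16.04 bash
# the container now shares the host's network stack,
# so a Kafka broker listening on the host is reachable at localhost:9092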
Use docker logs containerid or look up the raw data in /var/lib/docker/containers/containerid/ on Ubuntu.
You should have no problem connecting to the host using the local LAN interface IP address. Suppose you have a host with IP 192.168.0.1:
docker run --rm -ti ubuntu bash
ping 192.168.0.1
should give you a response.
You can use docker logs to see the standard output of your container.

communication between containers in docker

Is there any way to communicate among Docker containers other than via sockets/the network? I have read the Docker documentation, which says we can link Docker containers using the --link option, but it doesn't specify how to transfer data/messages from one container to another. I have already created a container named checkram.
Now I want to link a new container with this container, so I run
docker run -i -t --privileged --link=checkram:linkcheck --name linkcont topimg command.
Then I checked the environment variable LINKCHECK_PORT in the linkcont container, which contains tcp://172.17.0.14:22.
I don't know what to do with this IP and port, or how to communicate with the checkram container from the linkcont container. Can anyone help me out with this? Thanks in advance.
There are several tools you can use to run multiple Docker containers and have them interact with each other. Docker has its own tool, Docker Compose, with which you can build and connect multiple containers.
Another tool that works as well is decking. You can also use Fig, but I found decking very straightforward and easy to configure. At the time I was using decking, Docker Compose had not been released yet; Docker Compose is a newer tool, and it is developed by Docker.
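For illustration, a minimal docker-compose.yml sketch (the topimg image name is taken from the question; everything else is a placeholder). Containers on the same Compose network can reach each other by service name instead of relying on the --link environment variables:
version: "2"
services:
  checkram:
    image: topimg
  linkcont:
    image: topimg
    depends_on:
      - checkram
# from inside linkcont, the checkram service is reachable by hostname,
# e.g. checkram:22 for the port shown in the env variable above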

Resources