Running docker container through mitmproxy - security

I'm trying to route all traffic of a docker container through mitmproxy running in another docker container. In order for mitmproxy to work, I have to change the gateway IP of the original docker container.
Here is an example of what I want to do, but I want to restrict this to be entirely inside docker containers.
Any thoughts on how I might be able to do this? Also, I want to avoid running either of the two docker containers in privileged mode.

The default capability set granted to containers does not allow a container to modify network settings. By running in privileged mode, you grant all capabilities to the container -- but there is also an option to grant individual capabilities as needed. In this case, the one you require is called CAP_NET_ADMIN (full list here: http://man7.org/linux/man-pages/man7/capabilities.7.html), so you could add --cap-add NET_ADMIN to your docker run command.
Make sure to use that option when starting both containers, since they both require some network adjustments to enable transparent packet interception.
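For example (the image and container names here are placeholders, not something from the question):

# only NET_ADMIN is added on top of the default capability set; everything else stays restricted
docker run -d --name proxy --cap-add NET_ADMIN my-mitmproxy-image
# the link lets the client's entry point discover the proxy's IP (see the client sketch further down)
docker run -d --name client --cap-add NET_ADMIN --link proxy:proxy my-client-image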
In the "proxy" container, configure the iptables pre-routing NAT rule according to the mitmproxy transparent mode instructions, then start mitmproxy (with the -T flag to enable transparent mode). I use a small start script as the proxy image's entry point for this since network settings changes occur at container runtime only and cannot be specified in a Dockerfile or otherwise persisted.
In the "client" container, just use ip route commands to change the default gateway to the proxy container's IP address on the docker bridge. If this is a setup you'll be repeating regularly, consider using an entry point script on the client image that will set this up for you automatically when the container starts. Container linking makes that easier: you can start the proxy container, and link it when starting the client container. Then the client entry point script has access to the proxy container's IP via an environment variable.
By the way, if you can get away with using mitmproxy in non-transparent mode (configure the client explicitly to use an HTTP proxy), I'd highly recommend it. It's much less of a headache to set up.
Good luck, have fun!

Related

Best practice for Docker inter-container communication

I have two Docker containers, A and B. On container A a Django application is running. On container B a WebDAV source is mounted.
Now I want to check from container A whether a folder exists in container B (in the WebDAV mount destination).
What is the best solution for something like that? Currently I have solved it by mounting the Docker socket into container A to execute commands from A inside B. I am aware that mounting the Docker socket into a container is a security risk for the host and the whole application stack.
Other possible solutions would be to use SSH, or to share and mount the directory which should be checked. Of course there are further possible solutions, like doing it with HTTP requests.
Because there are so many ways to solve a problem like this, I want to know if there is a best practice (considering security, effort to implement, and performance) to execute commands from container A in container B.
Thanks in advance
WebDAV provides a file-system-like interface on top of HTTP. I'd just directly use this. This requires almost no setup other than providing the other container's name in configuration (and if you're using plain docker run putting both containers on the same network), and it's the same setup in basically all container environments (including Docker Swarm, Kubernetes, Nomad, AWS ECS, ...) and a non-Docker development environment.
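For example, "does this folder exist?" is a single PROPFIND request; a rough sketch with curl (host name, path, and credentials are placeholders):

# HTTP 207 (Multi-Status) means the collection exists, 404 means it does not
curl -s -o /dev/null -w '%{http_code}\n' \
  -X PROPFIND -H 'Depth: 0' \
  -u user:password \
  http://webdav-container/some/folder/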
Of the other options you suggest:
Sharing a filesystem is possible. It leads to potential permission problems which can be tricky to iron out. There are potential security issues if the client container isn't supposed to be able to write the files. It may not work well in clustered environments like Kubernetes.
ssh is very hard to set up securely in a Docker environment. You don't want to hard-code a plain-text password that can be easily recovered from docker history; a best-practice setup would require generating host and user keys outside of Docker and bind-mounting them into both containers (I've never seen a setup like this in an SO question). This also brings the complexity of running multiple processes inside a container.
Mounting the Docker socket is complicated, non-portable across environments, and a massive security risk (you can very easily use the Docker socket to root the entire host). You'd need to rewrite that code for each different container environment you might run in. This should be a last resort; I'd consider it only if creating and destroying containers would need to be a key part of this one container's operation.
Is there a best practice to execute commands from container A in container B?
"Don't." Rearchitect your application to have some other way to communicate between the two containers, often over HTTP or using a message queue like RabbitMQ.
One solution would be to mount the filesystem read-only in one container and read-write in the other.
See this answer: Docker, mount volumes as readonly

Obtaining Linux host metrics from within a docker container

Looking for some recommendations on how to report Linux host metrics, such as CPU and memory utilization and disk usage stats, from within a Docker container. The host will run a number of Docker containers. One thought was to run top and other basic Linux commands from outside the container and push the output into a container folder that has the appropriate authorization so it can be consumed. Another thought was to use the Docker API to run docker stats for the containers, but I'm not sure this is the best approach, as it may not report on other processes running on the host that are not containerized. A third option would be to somehow execute something like top and other commands on the host from within the container; this option would be the most ideal for my situation. I was just looking for some proven design patterns that others have used. Also, I don't have the ability to install a bunch of tools on the host, as this is a customer host and I have no control over what is already installed.
You may run your container in privileged mode, but be aware that this could compromise host security, as your container will no longer be in a sandboxed environment.
docker run -dit --privileged --pid=host alpine:3.8 sh
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with --privileged is available on the Docker Blog.
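As a rough usage sketch (busybox top and free ship with the alpine image; --pid=host is what exposes the host's process table, while --privileged is only needed if you also want device-level access):

# host-wide process and memory snapshot from inside a container
docker run --rm --privileged --pid=host alpine:3.8 sh -c 'top -bn1 | head -n 15; free -m'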
https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
Good reference: https://security.stackexchange.com/a/218379

How to scale an application with multiple exposed ports and multiple mounted volumes using Docker Swarm?

I have one Java-based application (JBoss 6.1 Community) with heavy traffic on it. Now I want to migrate this application's deployment to Docker, using Docker Swarm for clustering.
Scenario
My application needs two ports exposed from the Docker container: a web port (9080) and a database connection port (1521). There are also a few directories, such as the logs directory, that each container mounts from the host system.
Simple Docker example
docker run -it -d --name web1 -h "web1" -p 9080:9080 -p 1521:1521 -v /home/web1/log:/opt/web1/jboss/server/log/ -v /home/web1/license:/opt/web1/jboss/server/license/ MYIMAGE
Docker with Swarm example
docker service create --name jboss_service --mount type=bind,source=/home/web1/license,destination=/opt/web1/jboss/server/license/ --mount type=bind,source=/home/web1/log,destination=/opt/web1/jboss/server/log/ MYIMAGE
Now if I scale/replicate the above service to 2 or 3 replicas, which host port will each new container bind to, and which host directories will it mount?
Can anyone help me understand how scaling and replication will work in this type of scenario?
I have also gone through --publish and --name global, but nothing helped in my case.
Thank you!
Supporting stateful containers is still immature in the Docker universe.
I'm not sure this is possible with Docker Swarm (if it is I'd like to know) and it's not a simple problem to solve.
I would suggest you review the Statefulset feature that comes in the latest version of Kubernetes:
https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
It supports the creation of a unique volume for each container in a scale-up event. As for port handling, that is part of Kubernetes' normal Service feature, which implements container load balancing.
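For a feel of the workflow (the manifest file and StatefulSet name are placeholders; the StatefulSet itself would be defined in a YAML manifest as in the tutorial linked above):

# each replica of the StatefulSet gets its own PersistentVolumeClaim;
# a Service in front of it handles the load balancing
kubectl apply -f jboss-statefulset.yaml
kubectl scale statefulset jboss --replicas=3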
I would suggest building your stack into a docker-compose v3 file, which can be run on a Swarm cluster.
Instead of publishing those ports, you should expose them. That means the ports are NOT available on the host system directly, but only inside the Docker network. Every Compose file gets its own network by default, e.g. 172.18.0.0/24. Each container gets its own IP and makes the service available on the specified port.
If you scale up to 3 containers you will get:
172.18.0.1:9080,1521
172.18.0.2:9080,1521
172.18.0.3:9080,1521
You would need a load balancer to access those services. I use jwilder/nginx-proxy if you prefer a container approach. I can also recommend Rancher, which comes with an internal load balancer.
In Swarm mode you have to use the overlay network driver and create the network yourself, otherwise it would only be accessible from the local host.
Regarding logging, you should redirect your log output to stdout and catch it with a logging driver (fluentd, syslog, graylog2).
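A rough sketch tying the last two points together (network name and fluentd address are placeholders; --attachable is only needed if standalone containers must join the network too):

# overlay network so the replicas can reach each other across Swarm nodes
docker network create --driver overlay --attachable app-net
# replicas join the overlay network and ship their stdout/stderr to a fluentd collector
docker service create --name jboss_service --replicas 3 --network app-net \
  --log-driver fluentd --log-opt fluentd-address=logs.example.com:24224 \
  MYIMAGE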
For persistent storage you should have a look at Flocker. However, databases might not support those storage implementations; for example, MySQL does not support them, while MongoDB does work with a Flocker volume.
It seems like you have a lot to read. :)
https://docs.docker.com/

NSolid Apps In Docker Containers register wrong IP

I'm deploying a bunch of Node apps in Docker containers and trying to use N|Solid to monitor them. However, the process in the container is using the internal IP address of the container (172.17.0.1), which makes sense technically, but those IPs are not resolvable and the UI never picks them up.
Is there a way to tell the process which IP address to use? An environment variable or something?
Will with NodeSource here.
Yes. This is a bit of a problem. We have a set of N|Solid Docker Images baking in the oven that address this.
For now, the best way to get N|Solid to work with Docker is to create a network using docker network create nsolid, and run the N|Solid proxy, console, and etcd all in docker containers on that network using docker run --net nsolid.
When you add a container to the network, it will grab the ip address and register it with etcd. Since everything is on the same network, the proxy will be able to use that ip address to reach the N|Solid agent.
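A rough sketch of that layout (the image names are placeholders; the key point is that every container joins the user-defined "nsolid" network):

docker network create nsolid
docker run -d --net nsolid --name etcd my-etcd-image
docker run -d --net nsolid --name nsolid-proxy my-nsolid-proxy-image
docker run -d --net nsolid --name nsolid-console my-nsolid-console-image
docker run -d --net nsolid --name my-app my-nsolid-app-image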
If you want to try out the N|Solid Docker Images we are baking, shoot me an email at wblankenship#nodesource.com

Internal infrastructure with docker

I have a small company network with the following services/servers:
Jenkins
Stash (Atlassian)
Confluence (Atlassian)
LDAP
Owncloud
zabbix (monitoring)
puppet
and some Java web apps
all running in separate KVM (libvirt) VMs in separate virtual subnets on 2 machines (1 internal, 1 Hetzner root server), with Shorewall in between. I'm thinking about switching to Docker.
But I have two questions:
How can I achieve network security between Docker containers (i.e. I want to prevent OwnCloud from accessing any host in the network except the LDAP host's SSL port)?
Just by using Docker linking? If so, does Docker really allow access only to linked containers and no others?
By using kubernetes?
By adding multiple bridging-network-interfaces for each container?
Would you switch all of my infrastructure services/servers to Docker, or go for a hybrid solution with just OwnCloud and the Java web apps on Docker?
Regarding the multi-host networking: you're right that Docker links won't work across hosts. With Docker 1.9+ you can use "Docker Networking" as described in their blog post http://blog.docker.com/2015/11/docker-multi-host-networking-ga/
They don't explain how to secure the connections, though. I strongly suggest enabling TLS on your Docker daemons, which should also secure your multi-host network (that's an assumption, I haven't tried it).
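A minimal sketch of the daemon side, assuming you have already generated a CA and server certificate as described in Docker's "Protect the Docker daemon socket" guide (the paths are placeholders):

# the daemon only accepts remote connections from clients presenting a certificate signed by ca.pem
dockerd --tlsverify \
  --tlscacert=/etc/docker/certs/ca.pem \
  --tlscert=/etc/docker/certs/server-cert.pem \
  --tlskey=/etc/docker/certs/server-key.pem \
  -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock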
With Kubernetes you're going to add another layer of abstraction, so you'll need to learn to work with the concepts of pods and services. That's fine, but might be a bit too much. Keep in mind that you can still decide to use Kubernetes (or an alternative) later, so the first step should be to learn how to wrap your services in Docker containers.
You won't necessarily have to switch everything to Docker. You should start with Jenkins, the Java apps, or OwnCloud and then get a bit more used to the Docker universe. Jenkins and OwnCloud will give you enough challenges to gain some experience in maintaining containers. Then you can evaluate much better if Docker makes sense in your setup and with your needs to be applied to the other services.
I personally tend to wrap everything in Docker, but only due to one reason: keeping the host clean. If you get to the point where everything runs in Docker you'll have much more freedom to choose where a service can run and you can move containers to other hosts much more easily.
You should also explore the Docker Hub, where you can find ready to run solutions, e.g. Atlassian Stash: https://hub.docker.com/r/atlassian/stash/
If you need inspiration for special applications and how to wrap them in Docker, I recommend having a look at https://github.com/jfrazelle/dockerfiles - you'll find a bunch of good examples there.
You can give containers their own IP from your subnet by creating a network like so:
docker network create \
  --driver=bridge \
  --subnet=135.181.x.y/28 \
  --gateway=135.181.x.y+1 \
  network
Your gateway is the IP of your subnet + 1, so if my subnet address was 123.123.123.123 then my gateway would be 123.123.123.124.
Unfortunately I have not yet figured out how to make the containers appear to the public under their own IPs; at the moment they appear under the dedicated server's IP. Let me know if you know how I can fix that. I am able to access the containers by their IPs, though.
