Exposing dynamically opened ports inside a Docker container - Linux

Assuming an application that dynamically opens UDP ports is running inside a Docker container, how would one expose/bind such ports to the outside (host) ports?
This is perhaps the same as the question raised here, but the answer (using --net=host) limits the scalability of running multiple container instances that expose the same ports to the host.
Is there any way to configure a one-to-one mapping of dynamically opened container ports to host ports?
e.g. port 45199/udp is opened inside the container and is exposed as port 45199/udp on the host?

You can probably find some way to automagically forward ports from the container to the host, but then you will have the same problems as with host networking (possible port conflicts when running multiple container instances).
In your scenario, the best approach will probably be to expose a port range, e.g.:
docker run --expose=7000-8000 ...
Then refer to containers by IP address in the case of default bridge networking (you will have to find the container IP using docker inspect), or by name in the case of a user-defined network (https://docs.docker.com/engine/userguide/networking/configure-dns/).
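For example, a rough sketch of the bridge-network case (the image and container names are illustrative):
docker run -d --name app1 --expose=7000-8000 myimage
docker inspect --format '{{ .NetworkSettings.IPAddress }}' app1   # prints e.g. 172.17.0.2
Other containers can then reach the dynamically opened ports at that IP directly; on a user-defined network you would address the container by name instead.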

I also find it extremely annoying that you are not allowed to dynamically expose a port in Docker.
With Kubernetes you apparently can:
kubectl expose deployment first-deployment --port=80 --type=NodePort
See also the Katacoda tutorial at https://www.katacoda.com/courses/kubernetes/launch-single-node-cluster
and the kubectl manual at https://www.mankier.com/1/kubectl-expose
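To find out which host port the NodePort service was actually assigned, something like this should work (service name taken from the command above):
kubectl get service first-deployment -o jsonpath='{.spec.ports[0].nodePort}'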

Related

How to scale an application with multiple exposed ports and multiple mounted volumes using Docker Swarm?

I have a Java-based application (JBoss 6.1 Community) with heavy traffic on it. Now I want to migrate this application's deployment to Docker and Docker Swarm for clustering.
Scenario
My application needs two ports exposed from the Docker container: one is the web port (i.e. 9080) and the other is the database connection port (i.e. 1521), and there are a few things like a logs directory for each container mounted on the host system.
Simple Docker example
docker run -it -d --name web1 -h "My Hostname" -p 9080:9080 -p 1521:1521 -v /home/web1/log:/opt/web1/jboss/server/log/ -v /home/web1/license:/opt/web1/jboss/server/license/ MYIMAGE
Docker with Swarm example
docker service create --name jboss_service --mount type=bind,source=/home/web1/license,destination=/opt/web1/jboss/server/license/ --mount type=bind,source=/home/web1/log,destination=/opt/web1/jboss/server/log/ MYIMAGE
Now if I scale/replicate the above service to 2 or 3, which host port will it bind to, and which mount directory will it bind for the newly created containers?
Can anyone help me understand how scaling and service replication will work in this type of scenario?
I have also gone through --publish and --name global, but nothing helped in my case.
Thank you!
Supporting stateful containers is still immature in the Docker universe.
I'm not sure this is possible with Docker Swarm (if it is I'd like to know) and it's not a simple problem to solve.
I would suggest you review the StatefulSet feature that comes in the latest version of Kubernetes:
https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
It supports the creation of a unique volume for each container in a scale-up event. As for port handling, that is part of Kubernetes' normal Service feature, which implements container load balancing.
I would suggest building your stack into a docker-compose v3 file, which can be run on a swarm cluster.
Instead of publishing those ports, you should expose them. That means the ports are NOT available on the host system directly, but only on the Docker network. Every Compose file gets its own network by default, e.g. 172.18.0.0/24. Each container gets its own IP and makes its service available on the specified port.
If you scale up to 3 containers you will get:
172.18.0.1:9080,1521
172.18.0.2:9080,1521
172.18.0.3:9080,1521
You would need a load balancer to access those services. I use jwilder/nginx-proxy if you prefer a container approach. I can also recommend Rancher, which comes with an internal load balancer.
In swarm mode you have to use the overlay network driver and create the network yourself, otherwise it will only be accessible from the local host itself.
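For example, a minimal sketch of the swarm-mode variant, reusing the service and image names from the question (the network name is illustrative):
docker network create --driver overlay jboss_net
docker service create --name jboss_service --network jboss_net --replicas 3 MYIMAGE
Each replica gets its own IP on jboss_net, and other services on that network can reach them through the service name (swarm's built-in DNS and load balancing).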
Regarding logging, you should redirect your log files to stdout and catch them with a logging driver (fluentd, syslog, graylog2).
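For example (a sketch; the fluentd address is an assumption):
docker run -d --log-driver=fluentd --log-opt fluentd-address=localhost:24224 MYIMAGE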
For persistent storage you should have a look at Flocker! However, databases might not support those storage implementations. E.g. MySQL does not support them, while MongoDB does work with a Flocker volume.
It seems like you have a lot of reading to do... :)
https://docs.docker.com/

Docker instance port management

I have different Docker instances and I need to start Node.js processes in each of them. For this to happen, do they each need to start on different port numbers? How does the container manage that, and is there a Docker management system for it? I want the client to know on which port each instance has started its Node.js process. How can this be automated?
This problem is called "orchestration". I kind of think Docker is a bit overblown because it doesn't actually solve this problem.
Kubernetes is an open source tool. Tutum is an online service. Docker has started its own tool, but it's not done yet.
Honestly, it's a bit of a cluster-show at the moment. If you're not hosting 20+ instances, I'd recommend building bash scripts.
Currently, I use a bespoke solution made from DigitalOcean, Dokku, and bash scripting. This gives me the flexibility of a self-hosted Heroku like environment that is very dev friendly.
Dokku lets you deploy Docker apps using a 'git push'. It reads files in your repo to build the image.
You don't have to start the applications inside Docker on different ports. You can map any port (for example, port 80) inside the Docker container to any port on your host machine.
There is no rule about how to use this to your benefit.
If your clients all have ids say in the 1-10000 range, you can map the docker container's port 80 to "client_id + 20000".
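A rough sketch of that idea (CLIENT_ID and the image name are illustrative):
CLIENT_ID=42
docker run -d --name "client_${CLIENT_ID}" -p $((20000 + CLIENT_ID)):80 my-node-image   # host port 20042 -> container port 80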

N|Solid apps in Docker containers register the wrong IP

I'm deploying a bunch of Node apps in Docker containers and trying to use N|Solid to monitor them. However, the process in the container is using the internal IP address of the container (172.17.0.1), which makes sense technically, but those IPs are not resolvable and the UI never picks them up.
Is there a way to tell the process which IP address to use? An environment variable or something?
Will with NodeSource here.
Yes. This is a bit of a problem. We have a set of N|Solid Docker Images baking in the oven that address this.
For now, the best way to get N|Solid to work with Docker is to create a network using docker network create nsolid, and run the N|Solid proxy, console, and etcd all in docker containers on that network using docker run --net nsolid.
When you add a container to the network, it will grab the ip address and register it with etcd. Since everything is on the same network, the proxy will be able to use that ip address to reach the N|Solid agent.
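A rough sketch of that setup (the image names are placeholders, not official N|Solid images):
docker network create nsolid
docker run -d --net nsolid --name etcd <etcd-image>
docker run -d --net nsolid --name nsolid-proxy <proxy-image>
docker run -d --net nsolid --name nsolid-console <console-image>
docker run -d --net nsolid --name myapp <your-instrumented-app-image>
Every container on the nsolid network can reach the others by container name or by the IP registered in etcd.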
If you want to try out the N|Solid Docker Images we are baking, shoot me an email at wblankenship#nodesource.com

Running a Docker container through mitmproxy

I'm trying to route all traffic of a docker container through mitmproxy running in another docker container. In order for mitmproxy to work, I have to change the gateway IP of the original docker container.
Here is an example of what I want to do, but I want to restrict this to be entirely inside docker containers.
Any thoughts on how I might be able to do this? Also, I want to avoid running either of the two docker containers in privileged mode.
The default capability set granted to containers does not allow a container to modify network settings. By running in privileged mode, you grant all capabilities to the container -- but there is also an option to grant individual capabilities as needed. In this case, the one you require is called CAP_NET_ADMIN (full list here: http://man7.org/linux/man-pages/man7/capabilities.7.html), so you could add --cap-add NET_ADMIN to your docker run command.
Make sure to use that option when starting both containers, since they both require some network adjustments to enable transparent packet interception.
In the "proxy" container, configure the iptables pre-routing NAT rule according to the mitmproxy transparent mode instructions, then start mitmproxy (with the -T flag to enable transparent mode). I use a small start script as the proxy image's entry point for this since network settings changes occur at container runtime only and cannot be specified in a Dockerfile or otherwise persisted.
In the "client" container, just use ip route commands to change the default gateway to the proxy container's IP address on the docker bridge. If this is a setup you'll be repeating regularly, consider using an entry point script on the client image that will set this up for you automatically when the container starts. Container linking makes that easier: you can start the proxy container, and link it when starting the client container. Then the client entry point script has access to the proxy container's IP via an environment variable.
By the way, if you can get away with using mitmproxy in non-transparent mode (configure the client explicitly to use an HTTP proxy), I'd highly recommend it. It's much less of a headache to set up.
Good luck, have fun!

Docker: Linking containers on different host machines

How can I connect two containers on different host machines in Docker? I need to use data from MongoDB on one host from a Node.js application on another host. Can anyone give me an example of this?
You could use the ambassador pattern for container linking:
http://docs.docker.com/articles/ambassador_pattern_linking/
Flocker is also addressing this issue, but needs more time for infrastructure setup:
https://docs.clusterhq.com/en/0.3.2/gettingstarted/
You might also want to check out Kontena (http://www.kontena.io). Kontena supports multicast (provided by Weave) and DNS service discovery. Because of DNS discovery, you can predict before the deploy what addresses each container will get.
Like Flocker, Kontena also needs some time for infrastructure setup: https://github.com/kontena/kontena/tree/master/docs#getting-started
But you will get service scaling and deploy automation as a bonus.
You can connect containers on different hosts by creating an overlay network.
Docker Engine supports multi-host networking out-of-the-box through the overlay network driver.
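For example, a rough sketch using swarm mode and an attachable overlay network (Docker 1.13+; names are illustrative):
# on host A
docker swarm init
docker network create -d overlay --attachable app_net
# on host B (the join token comes from the swarm init output)
docker swarm join --token <token> <hostA-ip>:2377
# run MongoDB on one host and the Node.js app on the other, both attached to app_net
docker run -d --name mongo --network app_net mongo
docker run -d --name web --network app_net my-node-app   # can reach the database at mongo:27017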
It doesn't matter what machine the other container is on; all you need to do is ensure that the port is published on that machine, and then point the container on the second machine at the first machine's IP.
Machine 1: Postgres on port 5432, host IP 172.25.8.10 (from ifconfig)
Machine 2: Web server on port 80, host IP 172.25.8.11 -> point its DB connection to 172.25.8.10:5432
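A rough sketch of that approach, reusing the IPs from the example above (the image, password, and environment variable names are illustrative):
# machine 1 (172.25.8.10): publish the database port on the host
docker run -d --name db -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres
# machine 2 (172.25.8.11): point the web container at machine 1's host IP
docker run -d --name web -p 80:80 -e DB_HOST=172.25.8.10 -e DB_PORT=5432 my-web-image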
