Use multiple specific network interfaces with docker swarm - linux

I am using docker swarm to deploy an on-premise third-party application. The machine I am deploying on is running RHEL 7.6 and has two network interfaces. Users interact with the application via eth0, but internal communication with their system must use eth1 or the connection will be blocked by their firewalls. My application requires some of my services to establish connections to internal services in their network.
I created my swarm using:
$ docker swarm init --advertise-addr x.x.x.x
Where x.x.x.x is the eth0 inet address. This works for incoming user traffic to the service. However, when I try to establish connections to another service, the connection times out, blocked by the firewall.
Outside of docker, on the machine, I can run:
ssh -b y.y.y.y user@server
Where y.y.y.y is the eth1 inet address, and it works. When I run the same in my docker swarm container I get this error:
bind: y.y.y.y: Cannot assign requested address
Is there some way I can use multiple network interfaces with docker swarm and specify which one is used within containers? I couldn't find much documentation on this. Do I need to set up some sort of proxy?

By default, Docker creates its own bridge network, separate from the physical interfaces on your machine, and attaches each container to it internally via its own virtual interface. So you cannot bind to eth1 directly, because your Docker container only sees its own, different interface.
See https://docs.docker.com/v17.09/engine/userguide/networking/ for a general guide and https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/ for custom bridged networks with multiple physical interfaces.
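One possible direction (a rough sketch, not from the linked docs; the subnet, gateway, and network name below are placeholders) is a macvlan network attached to eth1, so containers get addresses directly on eth1's subnet and their traffic leaves through that interface:
$ docker network create -d macvlan --subnet=y.y.y.0/24 --gateway=y.y.y.1 -o parent=eth1 eth1net
$ docker run --rm --network eth1net alpine ping -c 1 y.y.y.1
Note this works as-is for standalone containers; swarm services need extra setup (per-node config-only networks) before they can use macvlan.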

Related

What is the significance of docker bridge address in kubernetes cluster hosted on AKS?

I would like to understand the purpose of the Docker bridge address in a Kubernetes cluster. Is it related to container-to-container communication, or is it something else?
The Docker bridge network address represents the default docker0 bridge network address present in all Docker installations.
While the docker0 bridge is not used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as docker build within the AKS cluster.
It is required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically which could conflict with other CIDRs.
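For example (a hedged sketch; the resource group and cluster names are placeholders), the bridge CIDR can be passed explicitly when creating the cluster:
$ az aks create --resource-group myRG --name myAKSCluster --docker-bridge-address 172.17.0.1/16 --service-cidr 10.0.0.0/16 --dns-service-ip 10.0.0.10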
The 172.17.0.0/16 network is the default when Docker is installed; see https://github.com/docker/for-aws/issues/117 for more explanation. Docker chooses the first available network and most of the time ends up choosing 172.17.0.0/16, hence this route is added locally towards the docker0 bridge.
Regards,
Vinoth

Linking containers across networks

I have two containers: a webapp and a database.
The webapp is on network 10.10.1.10/24, using the macvlan driver for this user-defined network; the name of this network is mnet:
link/ether 02:42:0a:84:01:05
inet 10.10.1.10/24 scope global eth0
The db container is on inet 10.10.11.2/24 scope global eth0, using the bridge driver for this user-defined network; the name of this network is int.
From the db container I can ping the gateway of mnet (10.10.10.1) and all other machines on the local network, but not the webapp (10.10.10.5) hosted in Docker.
I want to link the webapp with the db on this network, but I have not been able to establish a connection between them in either direction.
I can link the two if I put the db container on the same network as the webapp.
The reason I don't want to put them on the same network is that mnet is integrated with our LAN, and I want to prevent users from accessing the db container.
Environment : Production
Docker version: 17.03.1-ce
OS: RancherOS
Is there any workaround, reason, or solution for this problem?
A container can be on multiple networks. Make a new user-defined network and put both the DB and the web app on it. Then, when they want to talk to each other, they will use this new network, while other traffic will use your existing networks.
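A minimal sketch of that approach (the network name backend is an example; the container names come from the question):
$ docker network create backend
$ docker network connect backend webapp
$ docker network connect backend db
Both containers keep their existing networks (mnet and int) and can additionally reach each other over backend by container name.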

Networking setup in docker

If I have a network setup in Docker as follows, where MQTT and Node.js are two separate Docker containers:
Do I also have to use TLS to secure channel A? Do I even have to secure channel A? I guess not, as it is just a container-to-container channel. Am I correct?
I am a newbie as far as Docker is concerned, but I've read that docker0 will allow containers to intercommunicate while preventing the outside world from connecting to containers until a port is mapped from the host to a container.
It's not clear from the question whether the Node.js app and the MQTT broker are in the same Docker container or in two separate containers, but...
The link between two Docker containers on a port that is not mapped to the host runs over the internal virtual network. The only way to sniff that traffic is from the host machine, so as long as the host machine is secure, that traffic should be secure without the need to run an SSL/TLS listener.
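For instance (a hedged illustration; 1883 is just the usual MQTT port), capturing that inter-container traffic is only possible from the host itself, typically as root:
$ tcpdump -i docker0 port 1883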
If both the Node.js app and the broker are within the same Docker container, then they can communicate over localhost, not even crossing the virtual network, so again there is no need to add SSL/TLS.
I wouldn't worry about the traffic unless the host system has potentially hostile containers or users... but sometimes my imagination fails me.
You can use the deprecated --link flag to keep the port from being mapped to the host (IIUC):
docker run -d --name mqtt mqtt:latest
docker run -d --name nodejs --link mqtt nodejs:latest
Again, IIUC, this creates a private network between the two containers...
Evidence of that is that netstat -an | grep EST does not show connections between containers that are --link'd this way, even if the target port is open to the world.
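Since --link is deprecated, the usual replacement is a user-defined bridge network, which likewise avoids publishing any port to the host (a hedged sketch; the network name is an example, image names as above):
$ docker network create mqttnet
$ docker run -d --name mqtt --network mqttnet mqtt:latest
$ docker run -d --name nodejs --network mqttnet nodejs:latest
Containers on the same user-defined network can also resolve each other by name via Docker's embedded DNS.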

Connect docker containers directly to host subnet

I'm facing some problems trying to directly connect docker containers to the network of the host.
The configuration is as follows
One host has one interface (eth0) in the subnet, say, 10.0.15.0/24. The IP on eth0 is 10.0.15.5/24.
I customized the docker0 bridge to use a subnet within the subnet available from eth0, namely 10.0.15.64/26. So docker can use IPs from this /26 to give to containers, and I want the containers to be directly accessible from the rest of the network. The docker bridge also has an IP set, namely 10.0.15.65/26.
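For reference, a docker0 customization like this would typically be done via the bip key in /etc/docker/daemon.json (mirroring the values above), followed by a restart of the Docker daemon:
{
  "bip": "10.0.15.65/26"
}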
When a container is created, it gets an IP, say 10.0.15.66/26. Now, I did some tests with pinging:
anything on the network can ping 10.0.15.5 (eth0 of host)
anything on the network can ping 10.0.15.65 (docker0 bridge of host)
host can ping 10.0.15.66 (ip of container)
container can ping anything on the network
anything other than the host cannot ping the container at 10.0.15.66
IP forwarding is turned on
[root@HOSTNAME ~]# cat /proc/sys/net/ipv4/ip_forward
1
What am I missing here?
The containers connected to the docker0 bridge should be reachable from the network, I think.
Expected behaviour
Containers should be pingable from anywhere on the network, just like the docker0 bridge etc.
Any thoughts or help would be greatly appreciated!
Finally figured out why it wasn't working for us.
The machine I was running the Docker container on was a VM on a hypervisor. The hypervisor only accepts one MAC address from the NIC attached to the VM; in other words, the NIC in the VM was not set to promiscuous mode.
What I did to work around this issue was simply use a bare-metal machine. Another solution would be to manually set the NIC to promiscuous mode, so it accepts all packets instead of just the packets for its own MAC.
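On Linux, putting a NIC into promiscuous mode manually is a one-liner (the interface name is an example; the hypervisor's virtual switch must also be configured to allow it):
$ ip link set eth0 promisc on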

Simulate VPN connection in docker

I want to simulate VPN traffic on my machine. I've set up a VPN server which runs inside a Docker container, and I can successfully log in. The problem is that the container is running on my machine on Docker's default bridge, docker0.
There is no difference whether I connect to the machine through the VPN or not; it is still reachable because of the bridge. I'm wondering whether the machine should be on a different (simulated) LAN. Is there some way to simulate a VPN connection in Docker?
The client needs to be on a different subnet from that of docker0, otherwise you will always connect directly.
Think about the basics of setting up a VPN tunnel: you run a VPN so that you can connect two endpoints and make it so that those two endpoints, on the same subnet, can talk to each other across a public network.
When both your client and your VPN server are running on the same subnet then, well, there's no need to set up a VPN!
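One hedged way to simulate that on a single machine (all names, images, subnets, and ports below are examples): run the client in its own container on a separate user-defined network, publish the VPN port on the host, and let the tunnel be the only path between the two subnets.
$ docker network create --subnet 10.99.0.0/24 clientlan
$ docker run -d --name vpnserver -p 1194:1194/udp --cap-add NET_ADMIN myvpn-server:latest
$ docker run -it --name vpnclient --network clientlan --cap-add NET_ADMIN myvpn-client:latest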
Hope it helps.
