Connect Docker containers directly to host subnet - Linux

I'm facing some problems trying to connect Docker containers directly to the host's network.
The configuration is as follows:
One host has one interface (eth0) in the subnet, say, 10.0.15.0/24. The IP on eth0 is 10.0.15.5/24.
I customized the docker0 bridge to use a subnet within the subnet available from eth0, namely 10.0.15.64/26, so Docker can assign containers IPs from this /26, and I want the containers to be directly accessible from the rest of the network. The docker0 bridge also has an IP set, namely 10.0.15.65/26.
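A docker0 customization like this is typically expressed in /etc/docker/daemon.json; a sketch with the addresses above, where "bip" sets the bridge's own address and "fixed-cidr" restricts the range handed out to containers:
{
  "bip": "10.0.15.65/26",
  "fixed-cidr": "10.0.15.64/26"
}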
When a container is created, it gets an IP, say 10.0.15.66/26. Now, I did some tests with pinging:
anything on the network can ping 10.0.15.5 (eth0 of host)
anything on the network can ping 10.0.15.65 (docker0 bridge of host)
host can ping 10.0.15.66 (ip of container)
container can ping anything on the network
anything other than the host cannot ping the container at 10.0.15.66
IP forwarding is turned on
[root@HOSTNAME ~]# cat /proc/sys/net/ipv4/ip_forward
1
What am I missing here?
I think the containers connected to the docker0 bridge should be reachable from the rest of the network.
Expected behaviour
Containers should be pingable from anywhere on the network, just like the docker0 bridge etc.
Any thoughts or help would be greatly appreciated!

Finally figured out why it wasn't working for us.
The machine I was running the Docker containers on was a VM on a hypervisor. The hypervisor only accepts one MAC address from the NIC attached to the VM; in other words, the NIC in the VM was not set to promiscuous mode.
What I did to work around this issue was simply to use a bare-metal machine. Another solution would be to manually set the NIC to promiscuous mode, so it accepts all frames instead of just the frames addressed to its own MAC.
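A minimal sketch of that second option, assuming the interface inside the VM is named eth0 (the hypervisor side must usually also be configured to permit promiscuous mode, which is product-specific):
[root@HOSTNAME ~]# ip link set dev eth0 promisc on
[root@HOSTNAME ~]# ip link show eth0    # flags should now include PROMISC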

Related

Open Docker to the internet from an Azure Red Hat server without IP forwarding

I have 5 Docker containers running on an Azure Red Hat server. With IP forwarding enabled, everything works over the internet.
For security reasons, we need to disable IP forwarding.
Is there any solution, like switching from docker0 to eth0?
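One reading of "switching from docker0 to eth0" is host networking, where a container shares the host's network stack and binds to eth0 directly, so no forwarding between docker0 and eth0 is involved. A sketch only, not a verified fix for this setup (nginx stands in for one of the containers):
$ docker run -d --network host nginx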

Use multiple specific network interfaces with Docker Swarm

I am using Docker Swarm to deploy an on-premises third-party application. The machine I am deploying on runs RHEL 7.6 and has two network interfaces. The users will interact with the application via eth0, but internal communication with their systems must use eth1 or the connection will be blocked by their firewalls. My application requires some of my services to establish connections to internal services in their network.
I created my swarm using:
$ docker swarm init --advertise-addr x.x.x.x
Where x.x.x.x is the eth0 inet address. This works for incoming user traffic to the service. However, when I try to establish connections to another service the connection times out, blocked by the firewall.
Outside of docker, on the machine, I can run:
ssh -b y.y.y.y user@server
Where y.y.y.y is the eth1 inet address, and it works. When I run the same in my Docker Swarm container, I get this error:
bind: y.y.y.y: Cannot assign requested address
Is there some way I can use multiple network interfaces with docker swarm and specify which one is used within containers? I couldn't find much documentation on this. Do I need to set up some sort of proxy?
By default, Docker creates a bridge network on your machine and attaches each container instance to it via a separate internal virtual interface. So you would not be able to bind to eth1 directly, as your Docker container only sees its own interface.
See https://docs.docker.com/v17.09/engine/userguide/networking/ for a general guide and https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/ for custom bridged networks with multiple physical interfaces.
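If the requirement is only that outbound connections to the internal network leave via eth1, one option is to steer them at the host's routing layer rather than binding inside the container, since Docker masquerades bridge traffic to the address of whichever interface the route selects. A sketch, with 10.1.0.0/16 as a hypothetical internal subnet reachable over eth1:
$ ip route add 10.1.0.0/16 dev eth1 src y.y.y.y
$ ip route get 10.1.2.3    # should now show eth1 as the egress interface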

What is the significance of the Docker bridge address in a Kubernetes cluster hosted on AKS?

I would like to understand the purpose of the Docker bridge address in a Kubernetes cluster. Is it related to container-to-container communication, or is it something else?
The Docker bridge network address represents the default docker0 bridge network address present in all Docker installations.
While the docker0 bridge is not used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as docker build within the AKS cluster.
It is required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically, which could conflict with other CIDRs.
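For AKS specifically, a sketch of setting the value explicitly at cluster creation time with the Azure CLI (resource names hypothetical; the CIDRs are examples chosen not to overlap the other ranges):
$ az aks create --resource-group myRG --name myAKS \
    --docker-bridge-address 172.17.0.1/16 \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10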
The network 172.17.0.0/16 is the default when you install Docker. Because Docker picks the first available network and most of the time ends up choosing 172.17.0.0/16, a route for it is added locally towards the docker0 bridge. See https://github.com/docker/for-aws/issues/117 for more explanation.

VM Networking Dilemma

Some background:
I'm attempting to set up a pentesting network with a handful of virtual machines for the SANS 560 (Network Penetration Testing and Ethical Hacking) course, but I'm having an issue with the network configuration.
To paint a picture of the network (at least how it's intended to be):
My home router (connected to the internet, also the gateway for all other machines on the network) has IP 192.168.0.1/24, with all other machines on the network in the 255.255.255.0 subnet
As per the course notes, I should be setting up all my virtual machines with bridged adapters on the 10.10.X.X/16 subnet - with Linux machines on 10.10.75.X/16, Windows guest machines on 10.10.76.X/16, and my "host" machine (also a VM running Windows) on 10.10.78.1/16
My question:
How (assuming it's possible) do I configure my host machine (with its new IP 10.10.78.1/16) to be able to talk to the other guest machines (virtual machines) while also being able to connect to the internet?
I've tried setting up a static route to use the new IP as the gateway (seeing as the router is on a different subnet):
route ADD 192.168.0.0 MASK 255.255.255.0 10.10.78.1 (192.168.0.0 is the destination, the mask is 255.255.255.0, and the gateway is 10.10.78.1) - it didn't work (all I get is Destination Host Unreachable)
Do I need to have two interfaces on this Windows machine (i.e. one configured as 10.10.78.1/16 to talk to the other VMs, and another configured as 192.168.0.X/24 to access the internet) to make this configuration possible?
I understand it's not how a network would be set up typically, so please let me know if you need me to clarify or provide more information.
I found a solution that seems to work.
Again, for context, below is a list of the machines on the network:
Name        | Adapter type | IP              | Static routes?
============|==============|=================|===============
Windows VM1 | Bridged      | 192.168.0.11/24 | Nil
            | Bridged      | 10.10.78.1/16   | Yes, see below
------------|--------------|-----------------|---------------
Windows VM2 | Bridged      | 10.10.76.1/16   | Yes, see below
------------|--------------|-----------------|---------------
Linux VM3   | Bridged      | 10.10.75.1/16   | Nil
Static routes:
Static routes for VM1:
Note: In the adapter settings for 192.168.0.11/24, I set the default gateway to the IP of my internet router (192.168.0.1) and the netmask to 255.255.255.0
Note: In the adapter settings for 10.10.78.1/16, I left the default gateway blank (as it gets set when adding the static route) and set the netmask to 255.255.0.0
route -P ADD 10.10.0.0 MASK 255.255.0.0 192.168.0.11 (must use -P so that the route persists between reboots)
Static routes for VM2:
route -P ADD 10.10.0.0 MASK 255.255.0.0 10.10.76.1 (must use -P so that the route persists between reboots)
Note: you must run netsh advfirewall set allprofiles state off to allow the other VMs (including other Windows machines) on the 10.10.0.0/16 subnet to talk to this machine.
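If disabling the firewall entirely feels too broad, a narrower sketch (rule name arbitrary) would be to allow only inbound traffic from the lab subnet:
netsh advfirewall firewall add rule name="Allow 10.10.0.0/16 lab" dir=in action=allow remoteip=10.10.0.0/16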
This configuration allows the following behaviour:
VM1 can initiate a connection with VM2 and VM3
Neither VM2 nor VM3 can initiate a connection with VM1
VM2 and VM3 can inter-communicate (i.e. can initiate connections with each other, in either direction)
Furthermore, this configuration should allow all of the VMs to communicate with the VPN that is set up for labs later in the course, since they all have an adapter configured on the 10.10.x.x/16 network.
There are two solutions:
Add 10.10.0.0/16 to your router as a secondary IP subnet - if possible - or change the 192.168.0.0/24 range to 10.10.0.0/16.
Use another router to create the 10.10.0.0/16 subnet and connect it to 192.168.0.0/24 through one of its interfaces. On your internet router, add a static route to 10.10.0.0/16. The router can be anything: a hardware router, a layer-3 switch, or a Windows/Linux machine with routing enabled.
A third approach - running both subnets in the same layer-2 segment connected by a router-on-a-stick - doesn't really cut it for the purpose.
Edit: The route in your question points the wrong way. Assuming your inter-subnet router uses 192.168.0.99 and 10.10.78.1: on your internet router, add a route 10.10.0.0/16 -> 192.168.0.99, and on the new subnet use 10.10.78.1 as the default gateway.
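Expressed as a Windows route command with the assumed addresses above (-P persists it across reboots), the entry on the internet router would be:
route -P ADD 10.10.0.0 MASK 255.255.0.0 192.168.0.99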

Bridged linux networking and virtualization

I have a Linux host with libvirt/KVM virtualization. The VMs need "real" static IP addresses, so I decided to set up bridged networking. I created br0 on the host, and in the VM properties I set the source device to: Host device vnet0 (Bridge 'br0').
For example, my br0 have ip 192.168.1.1 and one of the VM have 192.168.1.5
Everything works pretty well, but when I connect to the virtual machine, the client address is detected as 192.168.1.1. Also, all the HTTP requests come from 192.168.1.1.
Q: Is it my mistake, some sort of misconfiguration? How can the VM get the real IPs of the clients?
Let me try to answer based on what I infer from your question.
Since you want to assign routable IP addresses to the VMs:
Option 1: Add the host's physical ethernet interface to the vswitch (aka the vswitch uplink). Then, for each VM ethernet interface, assign an IP address in the same subnet as the physical ethernet interface's IP. Alternatively, if a DHCP server is running in the same broadcast domain (subnet), the VMs will get IPs from it, provided their interfaces are configured to use DHCP. (A sketch of this option follows after Option 2.)
Option 2: Create the vswitch and assign an X.Y.Z.1 IP to the vswitch (br0). Also enable IP forwarding on the host. Now you can assign IPs from the same subnet to the VM ethernet interfaces. Alternatively, you can run a DHCP server (e.g. dnsmasq) on br0 to assign IPs to the VM interfaces.
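A sketch of Option 1 with iproute2, assuming the host's physical interface is eth0 with the address 192.168.1.1/24 from above (careful: running this over an SSH session on eth0 will drop the connection partway through):
# create the bridge and enslave the physical NIC to it
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev br0 up
# move the host's IP from the physical NIC to the bridge
ip addr del 192.168.1.1/24 dev eth0
ip addr add 192.168.1.1/24 dev br0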
Is it my mistake, some sort of misconfiguration? How can the VM get the real IPs of the clients?
If you are connecting from the host on which your VMs are running, then they are getting the real IP address. Your host shares a network (192.168.1.0/24 or similar, apparently) with the virtual machines, and when you connect to your virtual machines from your host, the source address is 192.168.1.1.
If you are connecting from elsewhere on your network, you would need to provide us with more details of your configuration.
