Bridged Xen domU with gateway in different subnet - linux

I have a Xen dom0 running Debian Wheezy (7.8) and Xen 4.1, set up with bridged networking.
199.XXX.161.64 is the dom0 gateway.
199.XXX.161.65 is the dom0 address.
192.XXX.13.128/28 is the subnet for the domUs.
dom0 configuration:
root@dom0:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto xenbr0
iface xenbr0 inet static
address 199.XXX.161.65
netmask 255.255.255.254
network 199.XXX.161.64
broadcast 199.XXX.161.65
gateway 199.XXX.161.64
dns-nameservers 199.XXX.162.41 199.XXX.162.141
bridge_ports eth0
bridge_stp off # disable Spanning Tree Protocol
bridge_fd 0 # no forwarding delay
bridge_maxwait 0 # no delay before a port becomes available
allow-hotplug xenbr0 # start interface on hotplug event
root@dom0:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master xenbr0 state UP qlen 1000
link/ether 00:25:90:d5:06:1a brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 00:25:90:d5:06:1b brd ff:ff:ff:ff:ff:ff
4: xenbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 00:25:90:d5:06:1a brd ff:ff:ff:ff:ff:ff
inet 199.XXX.161.65/31 brd 199.XXX.161.65 scope global xenbr0
inet6 fe80::XXXX:90ff:fed5:61a/64 scope link
valid_lft forever preferred_lft forever
8: vif1.0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master xenbr0 state UP qlen 32
link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
inet6 fe80::fcff:ffff:feff:ffff/64 scope link
valid_lft forever preferred_lft forever
root@dom0:~# brctl show
bridge name bridge id STP enabled interfaces
xenbr0 8000.002590d5061a no eth0
vif1.0
root@dom0:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 199.XXX.161.64 0.0.0.0 UG 0 0 0 xenbr0
192.XXX.13.128 0.0.0.0 255.255.255.240 U 0 0 0 xenbr0
199.XXX.161.64 0.0.0.0 255.255.255.254 U 0 0 0 xenbr0
root@dom0:~# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-out vif1.0 --physdev-is-bridged
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vif1.0 --physdev-is-bridged udp spt:68 dpt:67
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-out vif1.0 --physdev-is-bridged
ACCEPT all -- 192.XXX.13.129 0.0.0.0/0 PHYSDEV match --physdev-in vif1.0 --physdev-is-bridged
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
This host can reach its gateway and thus the internet.
root@dom0:~# ping -c 1 199.XXX.161.64
PING 199.XXX.161.64 (199.XXX.161.64) 56(84) bytes of data.
64 bytes from 199.XXX.161.64: icmp_req=1 ttl=64 time=0.459 ms
--- 199.XXX.161.64 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms
I also have a domU (running the same OS) which needs a primary IP address in a different subnet. There is no gateway on the network in this subnet. I want to keep my network setup bridged (no dom0 routing or NAT), so I added the dom0 gateway as the gateway for the domU, as described in this blog post.
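For reference, the domU's /etc/network/interfaces presumably ends up looking roughly like the sketch below (reconstructed from the routing table shown further down, not copied from the blog post): a /32 host route to the off-subnet gateway is installed first, and the default route then points at it.
auto eth0
iface eth0 inet static
address 192.XXX.13.129
netmask 255.255.255.240
# the gateway lives outside our /28, so add an explicit host route to it first...
post-up ip route add 199.XXX.161.64/32 dev eth0
# ...then make it the default gateway
post-up ip route add default via 199.XXX.161.64 dev eth0
pre-down ip route del default via 199.XXX.161.64 dev eth0
pre-down ip route del 199.XXX.161.64/32 dev eth0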
domU configuration:
root@domU:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:16:3e:b7:7e:cc brd ff:ff:ff:ff:ff:ff
inet 192.XXX.13.129/28 brd 192.XXX.13.143 scope global eth0
inet6 fe80::XXXX:3eff:feb7:7ecc/64 scope link
valid_lft forever preferred_lft forever
root@domU:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 199.XXX.161.64 0.0.0.0 UG 0 0 0 eth0
192.XXX.13.128 0.0.0.0 255.255.255.240 U 0 0 0 eth0
199.XXX.161.64 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
With this configuration the domU still has no network access. To test if the bridge was working I manually added a route to dom0.
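The route was added with something like:
root@domU:~# ip route add 199.XXX.161.65/32 dev eth0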
root@domU:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 199.XXX.161.64 0.0.0.0 UG 0 0 0 eth0
192.XXX.13.128 0.0.0.0 255.255.255.240 U 0 0 0 eth0
199.XXX.161.64 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
199.XXX.161.65 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
Now the dom0 and domU can communicate through the bridge.
root@domU:~# ping -c 1 199.XXX.161.65
PING 199.XXX.161.65 (199.XXX.161.65) 56(84) bytes of data.
64 bytes from 199.XXX.161.65: icmp_req=1 ttl=64 time=0.037 ms
--- 199.XXX.161.65 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
root@dom0:~# ping -c 1 192.XXX.13.129
PING 192.XXX.13.129 (192.XXX.13.129) 56(84) bytes of data.
64 bytes from 192.XXX.13.129: icmp_req=1 ttl=64 time=0.100 ms
--- 192.XXX.13.129 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
However, the domU still can't reach the gateway.
root@domU:~# ping -c 1 199.XXX.161.64
PING 199.XXX.161.64 (199.XXX.161.64) 56(84) bytes of data.
From 192.XXX.13.129 icmp_seq=1 Destination Host Unreachable
--- 199.XXX.161.64 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
To establish whether the traffic is actually being sent through the bridge, I inserted a -j LOG rule at the top of the INPUT, OUTPUT and FORWARD iptables chains. When the domU attempts to ping the gateway, the dom0 does not log a single packet. I also tried manually adding an entry for the gateway to the domU's ARP table, but the results were the same. The domU can't reach the gateway and thus has no network access, other than being able to communicate with dom0 via the static route.
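For reference, the logging rules on dom0 and the static ARP entry on the domU were along these lines (reconstructed, not the exact commands from the original session; the log prefixes and the <gateway-mac> placeholder are mine):
root@dom0:~# iptables -I INPUT 1 -j LOG --log-prefix "dom0-INPUT: "
root@dom0:~# iptables -I OUTPUT 1 -j LOG --log-prefix "dom0-OUTPUT: "
root@dom0:~# iptables -I FORWARD 1 -j LOG --log-prefix "dom0-FWD: "
root@domU:~# arp -s 199.XXX.161.64 <gateway-mac>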

So if I am understanding this correctly, the following is the network configuration for your DomU:
192.XXX.13.129/28 - DomU IP Address
199.XXX.161.64 - DomU GW Address
The problem is that your DomU does not have a route (layer 3) to allow it to talk to the GW address, since the GW address is in a different subnet. So even though the router is on the same layer 2 network, the router (if it is processing your packets) does not know about your layer 3 network and is sending its responses to its default gateway.
That you are able to ping the Dom0 from the DomU is odd and probably the result of both the Dom0 and the DomU using the same Linux bridge (which is not a true Ethernet switch, more like a dumb hub).
The simple fix is to add an address from your DomU network to the LAN interface on your router.
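If the gateway happens to be a Linux box (an assumption; the post does not say what kind of router it is), that could be as simple as picking a free address from the 192.XXX.13.128/28 block and adding it to the router's LAN interface, for example:
ip addr add 192.XXX.13.142/28 dev eth0   # interface name and the .142 address are placeholders
The DomU would then use that new address as its default gateway instead of 199.XXX.161.64.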
The better fix would be to use VLANs to segment the different networks at layer 2 and replace the Linux bridges with Open vSwitch. This would completely isolate the Dom0 and DomU traffic so that they would be required to communicate via a router and possibly a firewall.
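A rough sketch of that approach with Open vSwitch (bridge, port and VLAN tag values are illustrative, not taken from the setup above):
ovs-vsctl add-br xenbr0
ovs-vsctl add-port xenbr0 eth0
# put the domU's vif on its own VLAN (tag 20 chosen arbitrarily)
ovs-vsctl add-port xenbr0 vif1.0 tag=20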

Related

Strongswan: packets received and decrypted correctly but not forwarded

I have a LAN-to-LAN VPN tunnel between a Cisco CSR router and Strongswan. On Strongswan I see:
[root@ip-172-31-20-224 log]# strongswan status
Security Associations (1 up, 0 connecting):
tenant-13[2]: ESTABLISHED 66 minutes ago, 172.31.20.224[local_public_ip]...remote_public_ip[remote_public_ip]
tenant-13{3}: INSTALLED, TRANSPORT, reqid 1, ESP in UDP SPIs: cdf35340_i cb506e65_o
tenant-13{3}: 172.31.20.224/32 === remote_public_ip/32
tenant-13{147}: INSTALLED, TUNNEL, reqid 3, ESP in UDP SPIs: ca2c0328_i 0295d7bf_o
tenant-13{147}: 0.0.0.0/0 === 0.0.0.0/0
My crypto SAs allow 0/0 -> 0/0, so all looks good.
I do receive encrypted packets on Strongswan and they are decrypted correctly. For example, we can see that the UDP packets arrive (already decrypted) on the virtual vti interface:
[root@ip-172-31-20-224 log]# tcpdump -i vti13 -n udp port 3000
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vti13, link-type RAW (Raw IP), capture size 262144 bytes
11:19:57.834374 IP 192.168.1.116.54545 > X.X.X.X.hbci: UDP, length 340
Now X.X.X.X is a public IP address and those packets should be forwarded (out via eth0 using the default route), but I do not see them when looking with tcpdump:
[root@ip-172-31-20-224 log]# tcpdump -i eth0 -n host X.X.X.X
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
I have only one physical interface (eth0, the transport for IPsec and the default route) plus one virtual interface (for decrypted traffic), so after decryption the traffic should be sent back out via the same eth0 interface:
[root@ip-172-31-20-224 log]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:ab:39:97:b0:7e brd ff:ff:ff:ff:ff:ff
inet 172.31.20.224/20 brd 172.31.31.255 scope global dynamic eth0
valid_lft 2673sec preferred_lft 2673sec
inet6 fe80::ab:39ff:fe97:b07e/64 scope link
valid_lft forever preferred_lft forever
3: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
9: vti13@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 172.31.20.224 peer 89.68.162.135
inet 1.0.0.2/30 scope global vti13
valid_lft forever preferred_lft forever
inet6 fe80::5efe:ac1f:14e0/64 scope link
valid_lft forever preferred_lft forever
I have confirmed that:
routing is enabled
policy checks are disabled (sysctl -w net.ipv4.conf.default.rp_filter=0 and sysctl -w net.ipv4.conf.vti13.disable_policy=1)
the iptables INPUT, OUTPUT and FORWARD chains were empty with an ACCEPT policy, but I have also added specific rules and see 0 hits:
[root@ip-172-31-20-224 log]# iptables -I INPUT -i vti13 -j ACCEPT
[root@ip-172-31-20-224 log]# iptables -I FORWARD -i vti13 -j ACCEPT
[root@ip-172-31-20-224 log]# iptables -L -v -n
Chain INPUT (policy ACCEPT 9 packets, 1164 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- vti13 * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- vti13 * 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 6 packets, 776 bytes)
pkts bytes target prot opt in out source destination
I have added entries to PREROUTING and POSTROUTING, just to check whether I see those packets there, and I can confirm I see them only in PREROUTING (so indeed the packet is not routed):
[root@ip-172-31-20-224 log]# iptables -L -v -n -t nat
Chain PREROUTING (policy ACCEPT 2 packets, 184 bytes)
pkts bytes target prot opt in out source destination
19192 25M DNAT udp -- vti13 * 0.0.0.0/0 0.0.0.0/0 udp dpt:3000 to:X.X.X.X:3000
I've tried looking via syslog (with kernel logging enabled) but did not spot anything interesting.
What is the problem? Why is my Linux box not forwarding those packets?
Thanks,
OK, found the solution: as per https://docs.strongswan.org/docs/5.9/features/routeBasedVpn.html, I had to disable charon.install_routes.
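That corresponds to a strongswan.conf fragment along these lines (the exact file is typically /etc/strongswan.conf or /etc/strongswan.d/charon.conf, depending on the packaging):
charon {
# do not let charon install routes for the IPsec policies; rely on the VTI routing instead
install_routes = no
}
The daemon needs a restart or reload for the change to take effect.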

Why I can send packets through `eth0` but not `eth1` in Docker container?

I created a docker container and connected it to two bridge networks as:
# network 1
docker network create --driver=bridge network1 --subnet=172.56.0.0/24
#network 2
docker network create --driver=bridge network2 --subnet=172.56.1.0/24
docker run \
--name container \
--privileged \
--cap-add=ALL -d \
-v /dev:/dev \
--network network1 \
-v /lib/modules:/lib/modules \
container-image tail -f /dev/null
docker network connect network2 container
Now, if I run ip addr inside the container, I have two Ethernet network interfaces:
6551: eth0@if6552: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:38:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.0.2/24 brd 172.56.0.255 scope global eth0
valid_lft forever preferred_lft forever
6553: eth1@if6554: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:38:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.1.2/24 brd 172.56.1.255 scope global eth1
valid_lft forever preferred_lft forever
I'm using scapy to send/receive through IP and ICMP protocol, with something like this:
from scapy.all import *
import sys
src = sys.argv[1]
dst = sys.argv[2]
msg = "Hello World!"
packet = IP(src = src, dst = dst)/ICMP()/msg
data = sr1(packet).load.decode('utf-8')
print(f"Received {data!r}")
I'm able to run it when src = 172.56.0.2 and dst = www.google.com or when I'm using eth0 as a source, but if I change it to src = 172.56.1.2, it won't work at all. Is there anything wrong with my eth1 interface here? Any help would be appreciated.
The problem is the routing table. Take a look at conf.route:
>>> from scapy.all import *
>>> conf.route
Network Netmask Gateway Iface Output IP Metric
0.0.0.0 0.0.0.0 172.56.0.1 eth0 172.56.0.2 0
127.0.0.0 255.0.0.0 0.0.0.0 lo 127.0.0.1 1
172.56.0.0 255.255.255.0 0.0.0.0 eth0 172.56.0.2 0
172.56.1.0 255.255.255.0 0.0.0.0 eth1 172.56.1.2 0
In the above route table, the default route is via 172.56.0.1. Any attempt to reach an address that isn't on a directly connected network will be sent via the default gateway, which is only reachable via eth0. If you want your request to go out eth1, you need to modify your routing table. For example, we can replace the default route:
>>> conf.route.delt(net='0.0.0.0/0', gw='172.56.0.1', metric=0)
>>> conf.route.add(net='0.0.0.0/0', gw='172.56.1.1', metric=0)
>>> conf.route
Network Netmask Gateway Iface Output IP Metric
0.0.0.0 0.0.0.0 172.56.1.1 eth1 172.56.1.2 0
127.0.0.0 255.0.0.0 0.0.0.0 lo 127.0.0.1 1
172.56.0.0 255.255.255.0 0.0.0.0 eth0 172.56.0.2 0
172.56.1.0 255.255.255.0 0.0.0.0 eth1 172.56.1.2 0
With this modified routing table, our requests will be sent out eth1.
If we assume that the appropriate gateway will always be the .1 address associated with the source interface, we can rewrite your code like this to automatically apply the correct route:
from scapy.all import *
import sys
import ipaddress
src = ipaddress.ip_interface(sys.argv[1])
dst = sys.argv[2]
gw = src.network[1]
conf.route.routes = [route for route in conf.route.routes if route[1] != 0]
conf.route.add(net='0.0.0.0/0', gw=f'{gw}', metric=0)
msg = "Hello World!"
packet = IP(src = f'{src.ip}', dst=dst)/ICMP()/msg
data = sr1(packet).load.decode('utf-8')
print(f"Received {data!r}")
With the modified code, the first argument (sys.argv[1]) needs to be an address/mask expression. This now works with both interface addresses:
root@bacb8598b801:~# python sendpacket.py 172.56.0.2/24 8.8.8.8
Begin emission:
Finished sending 1 packets.
.*
Received 2 packets, got 1 answers, remaining 0 packets
Received 'Hello World!'
root@bacb8598b801:~# python sendpacket.py 172.56.1.2/24 8.8.8.8
Begin emission:
Finished sending 1 packets.
.*
Received 2 packets, got 1 answers, remaining 0 packets
Received 'Hello World!'
Watching tcpdump on the two bridge interfaces, you can see that traffic is being routed via the expected interface for each source address.
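For anyone reproducing that check: the Linux bridge devices Docker creates for user-defined networks are named br-<first 12 characters of the network ID>, so on the host the capture might look like this (commands assumed, not taken from the original answer):
# find the bridge device for network2, then watch ICMP on it
BR2=br-$(docker network inspect -f '{{.Id}}' network2 | cut -c1-12)
tcpdump -ni "$BR2" icmp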

How to send/receive raw ethernet packets from `eth0` to `eth1`?

I created a docker container and connected it to two bridge networks as:
# network 1
docker network create --driver=bridge network1 --subnet=172.56.0.0/24
#network 2
docker network create --driver=bridge network2 --subnet=172.56.1.0/24
docker run \
--name container \
--privileged \
--cap-add=ALL -d \
-v /dev:/dev \
--network network1 \
-v /lib/modules:/lib/modules \
container-image tail -f /dev/null
docker network connect network2 container
Then, I add a bridge NIC to connect eth0 to eth1 and add an IP address to it:
# creating bridge NIC
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig br0 up
# adding IP address to br0
ip addr add 172.56.0.3/24 brd + dev br0
route add default gw 172.56.0.1 dev br0
Now, if I run ip addr, this is my output:
2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 02:42:ac:38:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.56.0.3/24 brd 172.56.0.255 scope global br0
valid_lft forever preferred_lft forever
6773: eth0@if6774: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default
link/ether 02:42:ac:38:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.0.2/24 brd 172.56.0.255 scope global eth0
valid_lft forever preferred_lft forever
6775: eth1@if6776: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default
link/ether 02:42:ac:38:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.1.2/24 brd 172.56.1.255 scope global eth1
valid_lft forever preferred_lft forever
Now, I'm trying to send a raw Ethernet packet using scapy:
from scapy.all import *
import sys
src_mac_address = sys.argv[1]
dst_mac_address = sys.argv[2]
packet = Ether(src = src_mac_address, dst = dst_mac_address)
answer = srp(packet, iface='eth0')
It sends the packet, but it is not received by the other side. I checked the traffic using bmon, so I see the packet coming out of eth0, but it is never received by eth1. Is something wrong here? I appreciate your help. By the way, I chose the src and dst MAC addresses as: src_mac_address = 02:42:ac:38:00:02 for eth0 and dst_mac_address = 02:42:ac:38:00:02 for eth1.

Set eth0 as DHCP in Raspberry Pi

I want to set eth0 of my RPi (Raspbian Stretch) to be DHCP; my goal is that when I connect any device that communicates using the TCP/IP protocol, that device will receive an IP address.
I have found many guides that all lead to making eth0 ip address static, which is not my intention.
Currently a device is connected through eth0; ifconfig says it has some IP, but pinging the hostname of the device gets no reply. wlan0 is connected via WiFi.
Here is some info:
pi@raspberrypi:~ $ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 169.254.31.197 netmask 255.255.0.0 broadcast 169.254.255.255
inet6 fe80::ad5a:8219:4c27:b59b prefixlen 64 scopeid 0x20<link>
ether b8:27:eb:f3:f2:87 txqueuelen 1000 (Ethernet)
RX packets 81 bytes 26568 (25.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 46 bytes 10544 (10.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 31 bytes 3472 (3.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 31 bytes 3472 (3.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.71 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fd01::60ff:8818:5965:dc58 prefixlen 64 scopeid 0x0<global>
inet6 fe80::473d:110e:5474:8000 prefixlen 64 scopeid 0x20<link>
ether b8:27:eb:a6:a7:d2 txqueuelen 1000 (Ethernet)
RX packets 955 bytes 74427 (72.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 184 bytes 22096 (21.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
pi@raspberrypi:~ $ cat /etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
# Please note that this file is written to be used with dhcpcd
# For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d
Any advice on how to achieve my goal will be happily received.
Have you tried this: https://wiki.debian.org/NetworkConfiguration#Using_DHCP_to_automatically_configure_the_interface
i.e. just add this to /etc/network/interfaces:
auto eth0
allow-hotplug eth0
iface eth0 inet dhcp
If you're really pedantic, you could create a file like /etc/network/interfaces.d/eth0-dhcp and paste these lines into it - the end result will be the same.
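Either way, the change only takes effect once the interface is brought up again (or after a reboot), e.g.:
sudo ifdown eth0 && sudo ifup eth0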
If you simply want to connect two devices (without routing/forwarding/etc), i.e. for a standalone test network, this should work:
1. Figure out what IP range you want to use, and what IP you want your DHCP server to have (let's call it IP_DHCP).
2. Install isc-dhcp-server.
3. Set a static IP address for eth0 (e.g. set it to the IP_DHCP you chose earlier).
4. Configure /etc/dhcp/dhcpd.conf for the desired network range (man dhcpd has a decent reference), and make it authoritative (unless there are other DHCP servers).
5. Run service isc-dhcp-server start.
Example of a configuration:
sudo apt-get install isc-dhcp-server
sudo nano /etc/dhcp/dhcpd.conf
Uncomment #authoritative to make it authoritative
Add something along the following lines (customize based on what you decided in step 1), where routers is IP_DHCP:
subnet 10.0.0.0 netmask 255.255.255.0 {
range 10.0.0.1 10.0.0.100;
option subnet-mask 255.255.255.0;
option broadcast-address 10.0.0.255;
option routers 10.0.0.1;
}
Save the file and close the editor
sudo service isc-dhcp-server start
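One extra detail not mentioned above (so treat it as an assumption about your setup): on Debian/Raspbian the DHCP daemon also needs to be told which interface to serve, via /etc/default/isc-dhcp-server:
# /etc/default/isc-dhcp-server
INTERFACESv4="eth0"   # older releases use INTERFACES instead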
Good luck!
P.S. If you are getting connect: network not found issues, then you likely have an issue in your routing table (route) setup.
Something along the lines of
sudo route add default gw 10.0.0.1
where the IP address is your eth0's address will likely allow communication.
NetworkManager (nmcli) might be running on the device and may have been configured with a static IP.
The following commands can help you check if your device is managed by nmcli:
nmcli d
nmcli con show
Run the following command and replace 'Wired connection 1' with the name of your connection.
nmcli con show 'Wired connection 1' | grep 'ipv4.method'
ipv4.method will show "auto" if set to DHCP or "manual" if set to static.
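If it is set to "manual" and you want DHCP on that connection instead, switching it back would look something like this (again substituting your own connection name):
nmcli con mod 'Wired connection 1' ipv4.method auto
nmcli con up 'Wired connection 1'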
http://www.intellamech.com/RaspberryPi-projects/rpi_nmcli.html

Docker swarm, listening in container but not outside

We have a number of docker images running in swarm mode and are having trouble getting one of them to listen externally.
If I exec into the container I can curl the URL on 0.0.0.0:8080.
When I look at networking on the host I see 1 packet stuck in Recv-Q for this listening port (but not for the others that are working correctly).
Looking at the NAT rules I actually can curl 172.19.0.2:8084 on the docker host (docker_gwbridge) but not on the actual docker-host IP (172.31.105.59).
I've tried a number of different ports (7080, 8084, 8085) and also stopped docker, did a rm -rf /var/lib/docker, and then tried running only this container, but no luck. Any ideas on why this wouldn't be working for this one container image when 5 others work fine?
Docker Service
docker service create --with-registry-auth --replicas 1 --network myoverlay \
--publish 8084:8080 \
--name containerimage \
docker.repo.net/containerimage
ss -ltn
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 172.31.105.59:7946 *:*
LISTEN 0 128 *:ssh *:*
LISTEN 0 128 127.0.0.1:smux *:*
LISTEN 0 128 172.31.105.59:2377 *:*
LISTEN 0 128 :::webcache :::*
LISTEN 0 128 :::tproxy :::*
LISTEN 0 128 :::us-cli :::*
LISTEN 0 128 :::us-srv :::*
LISTEN 0 128 :::4243 :::*
LISTEN 1 128 :::8084 :::*
LISTEN 0 128 :::ssh :::*
LISTEN 0 128 :::cslistener :::*
iptables -n -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER-INGRESS all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER-INGRESS all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.19.0.0/16 0.0.0.0/0
MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match src-type LOCAL
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE all -- 172.18.0.0/16 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-INGRESS (2 references)
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8084 to:172.19.0.2:8084
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9000 to:172.19.0.2:9000
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8083 to:172.19.0.2:8083
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:172.19.0.2:8080
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8081 to:172.19.0.2:8081
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8082 to:172.19.0.2:8082
RETURN all -- 0.0.0.0/0 0.0.0.0/0
ip a | grep 172.19
inet 172.19.0.1/16 scope global docker_gwbridge
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP qlen 1000
link/ether 12:d1:da:a7:1d:1a brd ff:ff:ff:ff:ff:ff
inet 172.31.105.59/24 brd 172.31.105.255 scope global dynamic eth0
valid_lft 3088sec preferred_lft 3088sec
inet6 fe80::10d1:daff:fea7:1d1a/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:55:ae:ff:f5 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
4: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ce:b5:27:49 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 scope global docker_gwbridge
valid_lft forever preferred_lft forever
inet6 fe80::42:ceff:feb5:2749/64 scope link
valid_lft forever preferred_lft forever
23: vethe2712d7@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether 92:58:81:03:25:20 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::9058:81ff:fe03:2520/64 scope link
valid_lft forever preferred_lft forever
34: vethc446bc2@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether e2:a7:0f:d4:aa:1d brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::e0a7:fff:fed4:aa1d/64 scope link
valid_lft forever preferred_lft forever
40: vethf1238ff@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether e6:1a:87:a4:18:2a brd ff:ff:ff:ff:ff:ff link-netnsid 5
inet6 fe80::e41a:87ff:fea4:182a/64 scope link
valid_lft forever preferred_lft forever
46: vethe334e2d@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether a2:5f:2c:98:10:42 brd ff:ff:ff:ff:ff:ff link-netnsid 6
inet6 fe80::a05f:2cff:fe98:1042/64 scope link
valid_lft forever preferred_lft forever
58: vethda32f8d@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether ea:40:a2:68:d3:89 brd ff:ff:ff:ff:ff:ff link-netnsid 7
inet6 fe80::e840:a2ff:fe68:d389/64 scope link
valid_lft forever preferred_lft forever
41596: veth9eddb38@if41595: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether fa:99:eb:48:be:b0 brd ff:ff:ff:ff:ff:ff link-netnsid 9
inet6 fe80::f899:ebff:fe48:beb0/64 scope link
valid_lft forever preferred_lft forever
41612: veth161a89a@if41611: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether b6:33:62:08:da:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::b433:62ff:fe08:dac4/64 scope link
valid_lft forever preferred_lft forever
OK, so that's the normal behavior of your container: the port mapping is usable only with the host IP.
So if you use the container IP, you have to reach port 8080 (the real port of your application).
Because of the --publish you used, port 8080 of your container is mapped to port 8084 on your host IP.
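To illustrate the mapping the answer describes, using the addresses from the outputs above:
# externally, the published port on the host IP is what should be used
curl http://172.31.105.59:8084/
# from inside the container (docker exec), the service listens on its real port
curl http://127.0.0.1:8080/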
