How to send/receive raw ethernet packets from `eth0` to `eth1`? - python-3.x

I created a docker container and connected it to two bridge networks as:
# network 1
docker network create --driver=bridge network1 --subnet=172.56.0.0/24
#network 2
docker network create --driver=bridge network2 --subnet=172.56.1.0/24
docker run \
--name container \
--privileged \
--cap-add=ALL -d \
-v /dev:/dev \
--network network1 \
-v /lib/modules:/lib/modules \
container-image tail -f /dev/null
docker network connect network2 container
Then I added a bridge interface to connect eth0 to eth1 and assigned an IP address to it:
# creating bridge NIC
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig br0 up
# adding IP address to br0
ip addr add 172.56.0.3/24 brd + dev br0
route add default gw 172.56.0.1 dev br0
Now, if I run ip addr, this is my output:
2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 02:42:ac:38:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.56.0.3/24 brd 172.56.0.255 scope global br0
valid_lft forever preferred_lft forever
6773: eth0@if6774: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default
link/ether 02:42:ac:38:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.0.2/24 brd 172.56.0.255 scope global eth0
valid_lft forever preferred_lft forever
6775: eth1@if6776: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default
link/ether 02:42:ac:38:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.1.2/24 brd 172.56.1.255 scope global eth1
valid_lft forever preferred_lft forever
Now I'm trying to send a raw Ethernet packet using scapy:
from scapy.all import *
import sys

src_mac_address = sys.argv[1]
dst_mac_address = sys.argv[2]
packet = Ether(src=src_mac_address, dst=dst_mac_address)
# srp() sends at layer 2 and returns a tuple of (answered, unanswered) packets
answered, unanswered = srp(packet, iface='eth0')
The packet is sent, but it is never received on the other side. I checked the traffic with bmon: I can see the packet leaving eth0, but it never arrives on eth1. Is something wrong here? I appreciate your help. By the way, I chose the MAC addresses as src_mac_address = 02:42:ac:38:00:02 (eth0) and dst_mac_address = 02:42:ac:38:01:02 (eth1).
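A quick way to check whether the frame crosses the bridge at all is to sniff on eth1 while sending on eth0, rather than waiting on srp(), which blocks for a reply that a bare Ether frame will never trigger. A minimal sketch, assuming the interface names and MAC addresses shown above:
from scapy.all import AsyncSniffer, Ether, Raw, sendp
import time

SRC_MAC = "02:42:ac:38:00:02"  # eth0 (from the ip addr output above)
DST_MAC = "02:42:ac:38:01:02"  # eth1

# Start capturing on eth1 before sending anything.
sniffer = AsyncSniffer(iface="eth1",
                       lfilter=lambda p: p.haslayer(Ether) and p[Ether].src == SRC_MAC)
sniffer.start()
time.sleep(1)  # give the sniffer a moment to attach

# sendp() transmits at layer 2 and does not wait for an answer.
frame = Ether(src=SRC_MAC, dst=DST_MAC) / Raw(b"hello")
sendp(frame, iface="eth0", count=3)

time.sleep(1)
captured = sniffer.stop()
print(f"Frames from {SRC_MAC} seen on eth1: {len(captured)}")
If nothing shows up here, the frames are being dropped somewhere between the two interfaces rather than by the receiving script.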

Related

Why can I send packets through `eth0` but not `eth1` in a Docker container?

I created a docker container and connected it to two bridge networks as:
# network 1
docker network create --driver=bridge network1 --subnet=172.56.0.0/24
#network 2
docker network create --driver=bridge network2 --subnet=172.56.1.0/24
docker run \
--name container \
--privileged \
--cap-add=ALL -d \
-v /dev:/dev \
--network network1 \
-v /lib/modules:/lib/modules \
container-image tail -f /dev/null
docker network connect network2 container
Now, if I run ip addr inside the container, I have two Ethernet interfaces:
6551: eth0@if6552: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:38:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.0.2/24 brd 172.56.0.255 scope global eth0
valid_lft forever preferred_lft forever
6553: eth1@if6554: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:38:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.1.2/24 brd 172.56.1.255 scope global eth1
valid_lft forever preferred_lft forever
I'm using scapy to send/receive over IP and ICMP, with something like this:
from scapy.all import *
import sys

src = sys.argv[1]
dst = sys.argv[2]
msg = "Hello World!"
packet = IP(src=src, dst=dst)/ICMP()/msg
# sr1() sends at layer 3 and returns the first reply
data = sr1(packet).load.decode('utf-8')
print(f"Received {data!r}")
It works when src = 172.56.0.2 (eth0) and dst = www.google.com, but if I change the source to src = 172.56.1.2 (eth1), it doesn't work at all. Is there anything wrong with my eth1 interface here? Any help would be appreciated.
The problem is the routing table. Take a look at conf.route:
>>> from scapy.all import *
>>> conf.route
Network Netmask Gateway Iface Output IP Metric
0.0.0.0 0.0.0.0 172.56.0.1 eth0 172.56.0.2 0
127.0.0.0 255.0.0.0 0.0.0.0 lo 127.0.0.1 1
172.56.0.0 255.255.255.0 0.0.0.0 eth0 172.56.0.2 0
172.56.1.0 255.255.255.0 0.0.0.0 eth1 172.56.1.2 0
In the above route table, the default route is via 172.56.0.1. Any attempt to reach an address that isn't on a directly connected network will be sent via the default gateway, which is only reachable via eth0. If you want your request to go out eth1, you need to modify your routing table. For example, we can replace the default route:
>>> conf.route.delt(net='0.0.0.0/0', gw='172.56.0.1', metric=0)
>>> conf.route.add(net='0.0.0.0/0', gw='172.56.1.1', metric=0)
>>> conf.route
Network Netmask Gateway Iface Output IP Metric
0.0.0.0 0.0.0.0 172.56.1.1 eth1 172.56.1.2 0
127.0.0.0 255.0.0.0 0.0.0.0 lo 127.0.0.1 1
172.56.0.0 255.255.255.0 0.0.0.0 eth0 172.56.0.2 0
172.56.1.0 255.255.255.0 0.0.0.0 eth1 172.56.1.2 0
With this modified routing table, our requests will be sent out eth1.
If we assume that the appropriate gateway will always be the .1 address associated with the source interface, we can rewrite your code like this to automatically apply the correct route:
from scapy.all import *
import sys
import ipaddress

src = ipaddress.ip_interface(sys.argv[1])  # e.g. 172.56.1.2/24
dst = sys.argv[2]
gw = src.network[1]  # assume the gateway is the .1 address of the source network
# Drop any existing default routes (netmask 0), then install one via the chosen gateway.
conf.route.routes = [route for route in conf.route.routes if route[1] != 0]
conf.route.add(net='0.0.0.0/0', gw=f'{gw}', metric=0)
msg = "Hello World!"
packet = IP(src=f'{src.ip}', dst=dst)/ICMP()/msg
data = sr1(packet).load.decode('utf-8')
print(f"Received {data!r}")
With the modified code, the first argument (sys.argv[1]) needs to be an address/mask expression. This now works with both interface addresses:
root@bacb8598b801:~# python sendpacket.py 172.56.0.2/24 8.8.8.8
Begin emission:
Finished sending 1 packets.
.*
Received 2 packets, got 1 answers, remaining 0 packets
Received 'Hello World!'
root@bacb8598b801:~# python sendpacket.py 172.56.1.2/24 8.8.8.8
Begin emission:
Finished sending 1 packets.
.*
Received 2 packets, got 1 answers, remaining 0 packets
Received 'Hello World!'
Watching tcpdump on the two bridge interfaces, you can see that traffic is being routed via the expected interface for each source address.
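As a further sanity check (a sketch using scapy's own routing lookup and the gateway addresses shown above), conf.route.route() reports which interface, source address, and gateway scapy would pick for a destination, before and after the default route is swapped:
from scapy.all import conf

# Ask scapy which (iface, output_ip, gateway) it would use for a destination.
print(conf.route.route("8.8.8.8"))   # expected: eth0 / 172.56.0.2 / 172.56.0.1

# Swap the default route as in the answer above, then ask again.
conf.route.delt(net="0.0.0.0/0", gw="172.56.0.1", metric=0)
conf.route.add(net="0.0.0.0/0", gw="172.56.1.1", metric=0)
print(conf.route.route("8.8.8.8"))   # expected: eth1 / 172.56.1.2 / 172.56.1.1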

Which IP is the server IP to use in the hosts file on CentOS 7 and to register a domain nameserver

It's my first time creating a server (CentOS 7) to host my websites, and I'm trying to figure out which IP address to use in the hosts file on CentOS 7 and to register a domain nameserver. I ran ip addr on the command line but I'm not sure which IP should be used. Here is the output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:af:b0:21 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 85832sec preferred_lft 85832sec
inet6 fe80::a00:27ff:feaf:b021/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
link/ether 52:54:00:87:c4:e3 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
link/ether 52:54:00:87:c4:e3 brd ff:ff:ff:ff:ff:ff
Thanks
You can get the public IP for DNS (domain nameserver) registration by using this command:
curl -s checkip.dyndns.org | sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
The output of this command is the public IP of your server.
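For reference, roughly the same lookup can be done from Python (a sketch, assuming outbound HTTP access to checkip.dyndns.org):
import re
import urllib.request

# Fetch the checkip page and pull out the address, mirroring the curl | sed pipeline.
with urllib.request.urlopen("http://checkip.dyndns.org", timeout=10) as resp:
    body = resp.read().decode("utf-8", errors="replace")

match = re.search(r"Current IP Address:\s*([0-9.]+)", body)
print(match.group(1) if match else "Could not parse public IP")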

Docker swarm, listening in container but not outside

We have a number of docker images running in swarm mode and are having trouble getting one of them to listen externally.
If I exec into the container I can curl the URL on 0.0.0.0:8080.
When I look at networking on the host I see one packet stuck in Recv-Q for this listening port (but not for the others, which are working correctly).
Looking at the NAT rules, I can actually curl 172.19.0.2:8084 on the docker host (docker_gwbridge), but not on the actual docker-host IP (172.31.105.59).
I've tried a number of different ports (7080, 8084, 8085) and also stopped docker, did a rm -rf /var/lib/docker, and then tried running only this container, but no luck. Any ideas on why this wouldn't work for this one container image when five others work fine?
Docker Service
docker service create --with-registry-auth --replicas 1 --network myoverlay \
--publish 8084:8080 \
--name containerimage \
docker.repo.net/containerimage
ss -ltn
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 172.31.105.59:7946 *:*
LISTEN 0 128 *:ssh *:*
LISTEN 0 128 127.0.0.1:smux *:*
LISTEN 0 128 172.31.105.59:2377 *:*
LISTEN 0 128 :::webcache :::*
LISTEN 0 128 :::tproxy :::*
LISTEN 0 128 :::us-cli :::*
LISTEN 0 128 :::us-srv :::*
LISTEN 0 128 :::4243 :::*
LISTEN 1 128 :::8084 :::*
LISTEN 0 128 :::ssh :::*
LISTEN 0 128 :::cslistener :::*
iptables -n -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER-INGRESS all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER-INGRESS all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.19.0.0/16 0.0.0.0/0
MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match src-type LOCAL
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE all -- 172.18.0.0/16 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-INGRESS (2 references)
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8084 to:172.19.0.2:8084
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9000 to:172.19.0.2:9000
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8083 to:172.19.0.2:8083
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:172.19.0.2:8080
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8081 to:172.19.0.2:8081
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8082 to:172.19.0.2:8082
RETURN all -- 0.0.0.0/0 0.0.0.0/0
ip a | grep 172.19
inet 172.19.0.1/16 scope global docker_gwbridge
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP qlen 1000
link/ether 12:d1:da:a7:1d:1a brd ff:ff:ff:ff:ff:ff
inet 172.31.105.59/24 brd 172.31.105.255 scope global dynamic eth0
valid_lft 3088sec preferred_lft 3088sec
inet6 fe80::10d1:daff:fea7:1d1a/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:55:ae:ff:f5 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
4: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ce:b5:27:49 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 scope global docker_gwbridge
valid_lft forever preferred_lft forever
inet6 fe80::42:ceff:feb5:2749/64 scope link
valid_lft forever preferred_lft forever
23: vethe2712d7@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether 92:58:81:03:25:20 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::9058:81ff:fe03:2520/64 scope link
valid_lft forever preferred_lft forever
34: vethc446bc2@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether e2:a7:0f:d4:aa:1d brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::e0a7:fff:fed4:aa1d/64 scope link
valid_lft forever preferred_lft forever
40: vethf1238ff@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether e6:1a:87:a4:18:2a brd ff:ff:ff:ff:ff:ff link-netnsid 5
inet6 fe80::e41a:87ff:fea4:182a/64 scope link
valid_lft forever preferred_lft forever
46: vethe334e2d@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether a2:5f:2c:98:10:42 brd ff:ff:ff:ff:ff:ff link-netnsid 6
inet6 fe80::a05f:2cff:fe98:1042/64 scope link
valid_lft forever preferred_lft forever
58: vethda32f8d@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether ea:40:a2:68:d3:89 brd ff:ff:ff:ff:ff:ff link-netnsid 7
inet6 fe80::e840:a2ff:fe68:d389/64 scope link
valid_lft forever preferred_lft forever
41596: veth9eddb38@if41595: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether fa:99:eb:48:be:b0 brd ff:ff:ff:ff:ff:ff link-netnsid 9
inet6 fe80::f899:ebff:fe48:beb0/64 scope link
valid_lft forever preferred_lft forever
41612: veth161a89a@if41611: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether b6:33:62:08:da:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::b433:62ff:fe08:dac4/64 scope link
valid_lft forever preferred_lft forever
OK, so that's the normal behavior of your container: the port mapping is usable only with the host IP.
If you use the container IP, you have to use port 8080 (the real port of your application).
Because of the --publish flag you used, port 8080 of your container is mapped to port 8084 on your host IP.
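To illustrate the distinction, a small connectivity check (a sketch using the addresses from the question; adjust them to your setup) can probe both endpoints:
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    # Try to open a TCP connection; True means something is accepting on that endpoint.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("host 172.31.105.59:8084 ->", can_connect("172.31.105.59", 8084))  # published port on the host IP
print("gwbridge 172.19.0.2:8080 ->", can_connect("172.19.0.2", 8080))    # application port on the bridge IP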

How to bring bond0/eth0 interface UP

My cluster's nodes are mostly tied to the eth0 & bond0 interfaces:
[root@machine]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
link/ether 00:25:90:68:79:4a brd ff:ff:ff:ff:ff:ff
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 00:25:90:68:79:4b brd ff:ff:ff:ff:ff:ff
5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 00:25:90:68:79:4a brd ff:ff:ff:ff:ff:ff
8: gre0: <NOARP> mtu 1476 qdisc noop state DOWN
link/gre 0.0.0.0 brd 0.0.0.0
9: brffef350: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
10: ffef350: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master brffef350 state UP qlen 32
link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
If I bring this interface down (via ip link set ... down), connectivity to that node is lost and we can no longer SSH to it.
Is there a way to restore connectivity to the nodes? Since the interface is down, SSH is impossible. Is there a way I can bring these two interfaces back UP?
The reason I brought it down is that I was contemplating (though not sure) that an interface state transition (up -> down -> up) might change the interface index, a scenario I wanted to simulate.
Using ip:
# ip link set dev <interface> up
# ip link set dev <interface> down
Using ifconfig:
# /sbin/ifconfig <interface> up
# /sbin/ifconfig <interface> down
If that does not work, try # ifconfig -a; the output from that might help.
Have you tried pinging?
Bring up all interfaces defined with auto in /etc/network/interfaces, or just eth0:
ifup -a or ifup eth0
Bring down all interfaces that are currently up, or just eth0:
ifdown -a or ifdown eth0
Print the names of the interfaces defined in /etc/network/interfaces:
ifquery -l
Show the parsed configuration of a single interface, e.g. eth0:
ifquery eth0
You can set up a cron job to bring the interface back up. Put a line like this in root's crontab (this works for Debian and derivatives; the command differs for RedHat, etc.):
5 10 * * * /sbin/ifup bond0
You just need to change the 5 and 10 above to a time a minute or two after you plan to bring the interface down; here 10 is the hour and 5 is the minute.
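As an alternative to a one-shot crontab entry, a small watchdog along these lines could be run from cron every minute (a sketch, assuming iproute2 is available and the script runs as root; bond0 is the interface from the question):
import subprocess
import sys

IFACE = "bond0"  # adjust to the interface you are testing

def is_admin_up(iface: str) -> bool:
    # Parse the flags between <...> in `ip -o link show` output, e.g. BROADCAST,MULTICAST,UP,LOWER_UP
    out = subprocess.run(["ip", "-o", "link", "show", iface],
                         capture_output=True, text=True, check=True).stdout
    flags = out.split("<", 1)[1].split(">", 1)[0].split(",")
    return "UP" in flags

if not is_admin_up(IFACE):
    subprocess.run(["ip", "link", "set", "dev", IFACE, "up"], check=True)
    print(f"brought {IFACE} back up", file=sys.stderr)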

Bridged Xen domU with gateway in different subnet

I have a Xen dom0 running Debian Wheezy (7.8) and Xen 4.1, set up with bridged networking.
199.XXX.161.64 is the dom0 gateway.
199.XXX.161.65 is the dom0 address.
192.XXX.13.128/28 is the subnet for the domU's.
Configuration dom0:
root@dom0:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto xenbr0
iface xenbr0 inet static
address 199.XXX.161.65
netmask 255.255.255.254
network 199.XXX.161.64
broadcast 199.XXX.161.65
gateway 199.XXX.161.64
dns-nameservers 199.XXX.162.41 199.XXX.162.141
bridge_ports eth0
bridge_stp off # disable Spanning Tree Protocol
bridge_fd 0 # no forwarding delay
bridge_maxwait 0 # no delay before a port becomes available
allow-hotplug xenbr0 # start interface on hotplug event
root@dom0:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master xenbr0 state UP qlen 1000
link/ether 00:25:90:d5:06:1a brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 00:25:90:d5:06:1b brd ff:ff:ff:ff:ff:ff
4: xenbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 00:25:90:d5:06:1a brd ff:ff:ff:ff:ff:ff
inet 199.XXX.161.65/31 brd 199.XXX.161.65 scope global xenbr0
inet6 fe80::XXXX:90ff:fed5:61a/64 scope link
valid_lft forever preferred_lft forever
8: vif1.0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master xenbr0 state UP qlen 32
link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
inet6 fe80::fcff:ffff:feff:ffff/64 scope link
valid_lft forever preferred_lft forever
root@dom0:~# brctl show
bridge name bridge id STP enabled interfaces
xenbr0 8000.002590d5061a no eth0
vif1.0
root@dom0:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 199.XXX.161.64 0.0.0.0 UG 0 0 0 xenbr0
192.XXX.13.128 0.0.0.0 255.255.255.240 U 0 0 0 xenbr0
199.XXX.161.64 0.0.0.0 255.255.255.254 U 0 0 0 xenbr0
root@dom0:~# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-out vif1.0 --physdev-is-bridged
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vif1.0 --physdev-is-bridged udp spt:68 dpt:67
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-out vif1.0 --physdev-is-bridged
ACCEPT all -- 192.XXX.13.129 0.0.0.0/0 PHYSDEV match --physdev-in vif1.0 --physdev-is-bridged
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
This host can reach its gateway and thus the internet.
root@dom0:~# ping -c 1 199.XXX.161.64
PING 199.XXX.161.64 (199.XXX.161.64) 56(84) bytes of data.
64 bytes from 199.XXX.161.64: icmp_req=1 ttl=64 time=0.459 ms
--- 199.XXX.161.64 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms
I also have a domU (with the same OS) which needs a primary IP address in a different subnet. There is no gateway on the network in this subnet. I want to keep my network setup bridged (no dom0 routing or NAT) so I added the dom0 gateway as the gateway for the domU as described in this blogpost.
Configuration domU:
root@domU:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:16:3e:b7:7e:cc brd ff:ff:ff:ff:ff:ff
inet 192.XXX.13.129/28 brd 192.XXX.13.143 scope global eth0
inet6 fe80::XXXX:3eff:feb7:7ecc/64 scope link
valid_lft forever preferred_lft forever
root@domU:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 199.XXX.161.64 0.0.0.0 UG 0 0 0 eth0
192.XXX.13.128 0.0.0.0 255.255.255.240 U 0 0 0 eth0
199.XXX.161.64 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
With this configuration the domU still has no network access. To test whether the bridge was working, I manually added a host route on the domU pointing at dom0.
root@domU:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 199.XXX.161.64 0.0.0.0 UG 0 0 0 eth0
192.XXX.13.128 0.0.0.0 255.255.255.240 U 0 0 0 eth0
199.XXX.161.64 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
199.XXX.161.65 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
Now the dom0 and domU can communicate through the bridge.
root@domU:~# ping -c 1 199.XXX.161.65
PING 199.XXX.161.65 (199.XXX.161.65) 56(84) bytes of data.
64 bytes from 199.XXX.161.65: icmp_req=1 ttl=64 time=0.037 ms
--- 199.XXX.161.65 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
root@dom0:~# ping -c 1 192.XXX.13.129
PING 192.XXX.13.129 (192.XXX.13.129) 56(84) bytes of data.
64 bytes from 192.XXX.13.129: icmp_req=1 ttl=64 time=0.100 ms
--- 192.XXX.13.129 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
However, the domU still can't reach the gateway.
root@domU:~# ping -c 1 199.XXX.161.64
PING 199.XXX.161.64 (199.XXX.161.64) 56(84) bytes of data.
From 192.XXX.13.129 icmp_seq=1 Destination Host Unreachable
--- 199.XXX.161.64 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
I have attempted to establish whether the traffic is actually being sent through the bridge by inserting a -j LOG rule at the top of the INPUT, OUTPUT, and FORWARD iptables chains. When the domU attempts to ping the gateway, dom0 does not log a single packet. I also tried manually adding an entry for the gateway to the domU's ARP table, but the results were the same. The domU can't reach the gateway and thus has no network access, other than being able to communicate with dom0 via the static route.
So if I am understanding this correctly, the following is the network configuration for your DomU:
192.XXX.13.129/28 - DomU IP Address
199.XXX.161.64 - DomU GW Address
The problem is that your DomU does not have a (layer 3) route that allows it to talk to the GW address, since the GW address is in a different subnet. So even though the router is on the same layer 2 network, the router (if it is processing your packets) does not know about your layer 3 network and is sending its responses to its own default gateway.
That you are able to ping the Dom0 from the DomU is odd and probably the result of both the Dom0 and the DomU using the same Linux bridge (which is not a true Ethernet switch, more like a dumb hub).
The simple fix is to add an address from your DomU network to the LAN interface on your router.
The better fix would be to use VLANs to segment the different networks at layer 2 and replace the Linux bridges with Open vSwitch. This would completely isolate the Dom0 and DomU traffic so that they would be required to communicate via a router and possibly a firewall.
