connection between 2 namespaces - linux

I wrote the following code to create two namespaces, ns1 and ns2, and connect them using a bridge (br), tap0 and tap1. But at the end, ping reports that the network is unreachable. Could you please guide me on what the problem is?
ip netns add ns1
ip netns add ns2
ip link add name br type bridge
ip tuntap add dev tap0 mode tap
ip tuntap add dev tap1 mode tap
ip link set dev tap0 master br
ip link set tap0 up
ip link set dev tap1 master br
ip link set tap1 up
ip link set tap0 netns ns1
ip link set tap1 netns ns2
ip netns exec ns1 ip addr add 10.0.0.1/24 dev tap0
ip netns exec ns2 ip addr add 10.0.0.2/24 dev tap1
ip netns exec ns1 ip link set dev tap0 up
ip netns exec ns2 ip link set dev tap1 up
ip netns exec ns1 ip link set dev lo up
ip netns exec ns2 ip link set dev lo up
ip link set br up
ip netns exec ns1 ping 10.0.0.2

The problem would probably become more obvious if you were to inspect the state of your bridge periodically in your script. Before you set the namespace of one of your tap devices, it looks like this:
# ip link show tap0
10: tap0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast master br state DOWN mode DEFAULT qlen 500
link/ether b2:e6:85:8a:43:61 brd ff:ff:ff:ff:ff:ff
After the setns operation, it looks like this:
10: tap0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 500
link/ether b2:e6:85:8a:43:61 brd ff:ff:ff:ff:ff:ff
Notice that you no longer see master br in this output; when you
moved the interface out of the global namespace it was removed from
the bridge (because it is no longer visible).
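You can watch this happen by listing the bridge's ports at each step of the script; after the setns, tap0 no longer shows up (same br/tap names as above):
# list interfaces currently enslaved to the bridge
ip link show master br
# or, with the bridge utility
bridge link show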
Generally, to connect namespaces to a bridge on your host you would
use veth devices, rather than tap devices. A veth device is a
connected pair of interfaces (think of it like a virtual patch cable).
You add one side of the pair to your bridge, and the other end goes
into the network namespace. Something like this:
ip netns add ns1
ip netns add ns2
ip link add name br0 type bridge
for ns in ns1 ns2; do
    # create a veth pair named $ns-inside and $ns-outside
    # (e.g., ns1-inside and ns1-outside)
    ip link add $ns-inside type veth peer name $ns-outside
    # add the -outside half of the veth pair to the bridge
    ip link set dev $ns-outside master br0
    ip link set $ns-outside up
    # add the -inside half to the network namespace
    ip link set $ns-inside netns $ns
done
ip netns exec ns1 ip addr add 10.0.0.1/24 dev ns1-inside
ip netns exec ns2 ip addr add 10.0.0.2/24 dev ns2-inside
ip netns exec ns1 ip link set dev ns1-inside up
ip netns exec ns2 ip link set dev ns2-inside up
ip link set br0 up
After the above:
# ip netns exec ns1 ping -c2 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.044 ms
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.034/0.039/0.044/0.005 ms
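If you want to tear this down again afterwards, a minimal cleanup is to delete the namespaces and the bridge; each veth pair disappears as soon as either half is deleted, so the -outside halves go away together with their namespaced peers:
ip netns del ns1
ip netns del ns2
ip link del br0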

Related

Why I can send packets through `eth0` but not `eth1` in Docker container?

I created a docker container and connected it to two bridge networks as:
# network 1
docker network create --driver=bridge network1 --subnet=172.56.0.0/24
# network 2
docker network create --driver=bridge network2 --subnet=172.56.1.0/24
docker run \
--name container \
--privileged \
--cap-add=ALL -d \
-v /dev:/dev \
--network network1 \
-v /lib/modules:/lib/modules \
container-image tail -f /dev/null
docker network connect network2 container
Now, if I run ip addr inside container, I have two ethernet network interfaces:
6551: eth0@if6552: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:38:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.0.2/24 brd 172.56.0.255 scope global eth0
valid_lft forever preferred_lft forever
6553: eth1@if6554: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:38:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.1.2/24 brd 172.56.1.255 scope global eth1
valid_lft forever preferred_lft forever
I'm using scapy to send/receive through IP and ICMP protocol, with something like this:
from scapy.all import *
import sys
src = sys.argv[1]
dst = sys.argv[2]
msg = "Hello World!"
packet = IP(src = src, dst = dst)/ICMP()/msg
data = sr1(packet).load.decode('utf-8')
print(f"Received {data!r}")
I'm able to run it when src = 172.56.0.2 and dst = www.google.com, i.e. when eth0 is the source, but if I change it to src = 172.56.1.2 it doesn't work at all. Is there anything wrong with my eth1 interface here? Any help would be appreciated.
The problem is the routing table. Take a look at conf.route:
>>> from scapy.all import *
>>> conf.route
Network Netmask Gateway Iface Output IP Metric
0.0.0.0 0.0.0.0 172.56.0.1 eth0 172.56.0.2 0
127.0.0.0 255.0.0.0 0.0.0.0 lo 127.0.0.1 1
172.56.0.0 255.255.255.0 0.0.0.0 eth0 172.56.0.2 0
172.56.1.0 255.255.255.0 0.0.0.0 eth1 172.56.1.2 0
In the above route table, the default route is via 172.56.0.1. Any attempt to reach an address that isn't on a directly connected network will be sent via the default gateway, which is only reachable via eth0. If you want your request to go out eth1, you need to modify your routing table. For example, we can replace the default route:
>>> conf.route.delt(net='0.0.0.0/0', gw='172.56.0.1', metric=0)
>>> conf.route.add(net='0.0.0.0/0', gw='172.56.1.1', metric=0)
>>> conf.route
Network Netmask Gateway Iface Output IP Metric
0.0.0.0 0.0.0.0 172.56.1.1 eth1 172.56.1.2 0
127.0.0.0 255.0.0.0 0.0.0.0 lo 127.0.0.1 1
172.56.0.0 255.255.255.0 0.0.0.0 eth0 172.56.0.2 0
172.56.1.0 255.255.255.0 0.0.0.0 eth1 172.56.1.2 0
With this modified routing table, our requests will be sent out eth1.
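Note that conf.route is scapy's own copy of the routing table; changing it does not affect the kernel. If you wanted ordinary (non-scapy) traffic from the container to behave the same way, the kernel-level equivalent would be a source-based rule, roughly like this (assuming the same addresses and gateway as above; table 100 is arbitrary):
ip route add default via 172.56.1.1 dev eth1 table 100
ip rule add from 172.56.1.2 table 100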
If we assume that the appropriate gateway will always be the .1 address associated with the source interface, we can rewrite your code like this to automatically apply the correct route:
from scapy.all import *
import sys
import ipaddress
src = ipaddress.ip_interface(sys.argv[1])
dst = sys.argv[2]
gw = src.network[1]  # assume the gateway is the .1 address of the source network
# remove any existing default routes from scapy's routing table, then add ours
conf.route.routes = [route for route in conf.route.routes if route[1] != 0]
conf.route.add(net='0.0.0.0/0', gw=f'{gw}', metric=0)
msg = "Hello World!"
packet = IP(src = f'{src.ip}', dst=dst)/ICMP()/msg
data = sr1(packet).load.decode('utf-8')
print(f"Received {data!r}")
With the modified code, the first argument (sys.argv[1]) needs to be an address/mask expression. This now works with both interface addresses:
root@bacb8598b801:~# python sendpacket.py 172.56.0.2/24 8.8.8.8
Begin emission:
Finished sending 1 packets.
.*
Received 2 packets, got 1 answers, remaining 0 packets
Received 'Hello World!'
root@bacb8598b801:~# python sendpacket.py 172.56.1.2/24 8.8.8.8
Begin emission:
Finished sending 1 packets.
.*
Received 2 packets, got 1 answers, remaining 0 packets
Received 'Hello World!'
Watching tcpdump on the two bridge interfaces, you can see that traffic is being routed via the expected interface for each source address.
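The bridge devices backing the two user-defined networks are normally named br-<id>, where <id> is the first 12 characters of the network ID (unless an explicit bridge name was configured); a rough way to find and watch them, with the ID left as a placeholder:
# look up the network IDs
docker network inspect -f '{{.Id}}' network1
docker network inspect -f '{{.Id}}' network2
# then watch ICMP on the corresponding bridge device on the host
tcpdump -ni br-<first-12-chars-of-id> icmp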

Odroidh2 Debian - Unable to ping network gateway / no network connectivity

I have an OdroidH2 with docker setup.
It was working fine for a few months and then suddenly, out of nowhere, it lost all internet/intranet connectivity.
Its connectivity goes through an Ethernet cable, not WiFi, and the interface that is supposed to carry the connection is enp3s0, with an IP address of 192.168.1.100.
I have performed the following troubleshooting steps:
Restart (of course, always the first step)
Checked interface settings via ifconfig and also in /etc/network/interfaces
Checked the routing via route -n
Checked iptables (iptables was populated with the docker configuration, I've flushed the iptables including nat and mangle and set the default policy to ACCEPT for input, forward and output. Restarted the networking service afterwards)
Checked if it was able to ping itself and the default gateway (it is able to ping itself but not the gateway, or any other devices)
Checked if another device was able to ping the OdroidH2 (host unreachable)
Checked dmesg; for some reason, two firmware files failed to load (I installed them and rebooted afterwards):
rtl_nic/rtl8168g-2.fw (after checking, this is the firmware for the network interfaces)
i915/glk_dmc_ver1_04.bin (didn't research much about this one, something to do with runtime power management??)
After all of these steps, I still am unable to get the network connectivity going.
Below you can find information regarding my current configuration:
dmesg output
Stack Overflow does not allow me to include all of my dmesg output, so I had to put it on Google Drive: dmesg_output
/etc/hosts
127.0.0.1 localhost
192.168.1.100 dc1 dc1.samdom.andrewoliverhome.local samdom.andrewoliverhome.local
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
iptables -nvL output (after clearing and reloading the networking service)
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
/etc/resolv.conf
#nameserver 127.0.0.1
#nameserver 8.8.8.8
#nameserver 8.8.4.4
search samdom.andrewoliverhome.local
#domain samdom.andrewoliverhome.local
nameserver 192.168.1.100
nameserver 8.8.8.8
route -n output
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.254 0.0.0.0 UG 0 0 0 enp3s0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker_gwbridge
172.19.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-debc10cb5b21
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 enp3s0
/etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo enp2s0 enp3s0
#auto lo br0
iface lo inet loopback
# The primary network interface
iface enp2s0 inet dhcp
allow-hotplug enp2s0 enp3s0
#iface enp2s0 inet manual
# post-up iptables-restore < /etc/iptables.up.rules
# This is an autoconfigured IPv6 interface
#iface enp2s0 inet dhcp
iface enp3s0 inet static
address 192.168.1.100
netmask 255.255.255.0
# broadcast 169.254.99.255
network 192.168.1.0
gateway 192.168.1.254
#iface enp2s0 inet manual
#iface enp3s0 inet manual
#iface br0 inet static
# bridge_ports enp2s0 enp3s0
# address 192.168.1.100
# broadcast 192.168.1.255
# netmask 255.255.255.0
# gateway 192.168.1.254
#
In /etc/resolv.conf, the reason the primary nameserver is the host itself is that I am running a Docker container serving as a Samba AD DC.
In order for the OdroidH2 to find all of my devices in the domain, it needs to send DNS queries to the Samba DC; if Samba cannot find a DNS record, it forwards the query to 8.8.8.8.
Any help would be greatly appreciated (:
After all the troubleshooting, it turned out the issue was not with the OdroidH2 itself; it was with my router.
The LAN port I was using had malfunctioned. I switched the Ethernet cable to a different LAN port and it worked.
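For anyone who lands here with similar symptoms, a quick way to tell a dead switch port or cable apart from a host-side problem is to check whether the kernel sees a carrier on the NIC at all (interface name as in the question):
ip link show enp3s0                    # NO-CARRIER in the flags means no link at the physical layer
ethtool enp3s0 | grep 'Link detected'  # a healthy port reports "Link detected: yes"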

Docker container can't ping outside of network

I'm studying Docker containers and now I'm trying to set up a network for them.
Docker basically uses a 'docker0' bridge with the 172.17.0.0/16 network, and this network is not the same as the host network.
I want the Docker container and the host to use the same network.
So now I am trying to set up the container's network without the 'bridge' driver.
I am using the 192.168.77.0/24 network by way of a sharer.
I want the host and the Docker container to use this network.
I referenced here - https://docs.docker.com/v1.7/articles/networking/#how-docker-networks-a-container.
Below are the commands that I tried to make up the network for containers.
All the commands were run as the root user.
# ifconfig eth0 promisc
# docker run -i -t --net=none --name net_test ubuntu_net /bin/bash
# docker inspect -f '{{.State.Pid}}' net_test
# 14673
# pid=14673
# mkdir -p /var/run/netns
# ln -s /proc/$pid/ns/net /var/run/netns/$pid
# brctl addbr my_bridge
# brctl addif my_bridge eth0
# ifconfig my_bridge 192.168.77.250 up
# route del default gw 192.168.77.1
# route add default gw 192.168.77.1
# ifconfig eth0 0.0.0.0 up
# ip link add veth0 type veth peer name veth1
# brctl addif my_bridge veth0
# ip link set veth0 up
# ip link set veth1 netns $pid
# ip netns exec $pid ip link set dev veth1 name eth0
# ip netns exec $pid ip link set eth0 address 12:34:56:78:9a:bc
# ip netns exec $pid ip link set eth0 up
# ip netns exec $pid ip addr add 192.168.77.251/24 broadcast 192.168.77.255 dev eth0
# ip netns exec $pid ip route add default via 192.168.77.1
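A couple of generic checks are worth running at this point to confirm that the plumbing itself looks right (nothing here is specific to this host):
brctl show my_bridge       # veth0 and eth0 should both be listed as bridge ports
iptables -nvL FORWARD      # look for rules or a DROP policy that could affect forwarded traffic
cat /proc/sys/net/bridge/bridge-nf-call-iptables   # if this is 1, bridged frames also traverse iptables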
After these commands the Docker container can ping the host, and the host can ping the container, but the container can't ping the outside network.
Below is the routing table on the host after executing the commands.
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.77.1 0.0.0.0 UG 0 0 0 my_bridge
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
192.168.77.0 * 255.255.255.0 U 0 0 0 my_bridge
Below is the routing table in the Docker container after executing the commands.
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.77.1 0.0.0.0 UG 0 0 0 eth0
192.168.77.0 * 255.255.255.0 U 0 0 0 eth0
But the Docker container can't ping the gateway (192.168.77.1) either.
I'm using Ubuntu 14.04 LTS desktop version, and I didn't set up any firewalls.
Am I doing anything wrong? Thanks.

Pass IP-address to cloud-init metadata

I am searching for a way to pass an IP address to the cloud-init metadata, so that when my qcow boots it does not have to wait 120-180 seconds during boot.
Currently I've created a workaround by adding the IP address information to the userdata part of cloud-init. The downside is that it still takes some time, because cloud-init userdata is only executed after the VM has booted.
echo -e "
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
\taddress $address
\tnetmask $netmask
\tgateway $gateway
\tdns-nameservers $dnsnameservers
" > /etc/network/interfaces
ifdown eth0; ifup eth0
To set the hostname in the cloud-init metadata, I currently have this part:
cat > $config_dir/meta-data <<-EOF
instance-id: $uuid
hostname: $hostname
local-hostname: $hostname
EOF
But I need something more solid and concrete, because the cloud-init documentation is very vague.
EDIT: I've seen that it must be possible, because OpenStack can do it.
It is possible by using the following format:
network-interfaces: |
  auto lo
  iface lo inet loopback
  iface eth0 inet static
    address xxx.xxx.xxx.xxx (static ip)
    netmask xxx.xxx.xxx.xxx
    gateway xxx.xxx.xxx.xxx (gateway ip, usually ip address of router)
You basically write out the configuration you want your interfaces to have, just as it would appear in an interfaces file, nested under the 'network-interfaces: |' key.
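Combined with the meta-data file from the question, the whole thing might look roughly like this (a sketch; $uuid, $hostname, $address and the other variables are the same placeholders used in the question):
cat > $config_dir/meta-data <<EOF
instance-id: $uuid
hostname: $hostname
local-hostname: $hostname
network-interfaces: |
  auto lo
  iface lo inet loopback
  auto eth0
  iface eth0 inet static
    address $address
    netmask $netmask
    gateway $gateway
    dns-nameservers $dnsnameservers
EOF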
Hope that answers your question!

how to route 2 NICs with 2 public IPs on the same subnet with the same gateway

I'm a newbie in the networking field. I have trouble with my web server's network configuration (the OS is CentOS).
I have 2 NICs (eth0 + eth2, physical) running 2 public IPs which are on the same subnet, with the same gateway.
When I configure nginx to listen on these 2 NICs, everything works just fine. But when I monitor the traffic, all of it is on eth0 and nothing is on eth2.
My question is: how can I configure things so that traffic that comes in on a NIC also goes out on that NIC?
This is my ethernet card config:
DEVICE="eth0"
ONBOOT=yes
BOOTPROTO=static
IPADDR=x.x.x.38
PREFIX=27
GATEWAY=x.x.x.33
DNS1=8.8.8.8
DNS2=8.8.4.4
NAME="System eth0"
DEVICE="eth2"
ONBOOT=yes
BOOTPROTO=static
IPADDR=x.x.x.39
PREFIX=27
GATEWAY=x.x.x.33
DNS1=8.8.8.8
DNS2=8.8.4.4
NAME="System eth2"
This is my route -n result
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.14.8.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
y.z.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
y.z.0.0 0.0.0.0 255.255.0.0 U 1003 0 0 eth1
y.z.0.0 0.0.0.0 255.255.0.0 U 1004 0 0 eth2
0.0.0.0 x.x.x.33 0.0.0.0 UG 0 0 0 eth0
Hope you can help, thanks in advance!
In Linux, routing is performed by looking at the destination address only, so a packet will follow whichever route can be used to reach the packet's destination, with no regard to the source address.
The behaviour you want requires choosing a route depending not only on the destination address, but also on the source address — this is sometimes called source-sensitive routing or SADR (source-address dependent routing). The most portable way of implementing source-sensitive routing under Linux is to define routing rules across multiple routing tables using the ip rule and ip route ... table ... commands.
This is described in detail in Section 4 of the Linux Advanced Routing and Traffic Control HOWTO
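For the addresses in the question (x.x.x.38 on eth0, x.x.x.39 on eth2, both behind gateway x.x.x.33), a minimal sketch of that approach would be something like this (the table numbers 101 and 102 are arbitrary):
# one routing table per NIC, each with its own default route
ip route add default via x.x.x.33 dev eth0 table 101
ip route add default via x.x.x.33 dev eth2 table 102
# select the table based on the source address of outgoing packets
ip rule add from x.x.x.38 table 101
ip rule add from x.x.x.39 table 102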
The problem can probably also be solved with NAT. First, create two tap devices:
ip tuntap add dev tap0 mode tap
ip tuntap add dev tap1 mode tap
Then you can assign separate ip addresses to these devices:
ifconfig tap0 10.10.10.1 netmask 255.255.255.255
ifconfig tap1 10.10.10.2 netmask 255.255.255.255
And finally, redirect incoming traffic to the corresponding virtual device:
iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination 10.10.10.1
iptables -t nat -A PREROUTING -i eth2 -j DNAT --to-destination 10.10.10.2
In this case, all traffic will definitely be routed via the interface it came in on.
