Docker container can't ping outside of the network - Linux

I'm studying docker containers and now I'm trying to set up a network for them.
By default, Docker uses a 'docker0' bridge on the 172.17.0.0/16 network, which is not the same network as the host.
I want the docker container and the host to use the same network.
So now I am trying to set up the docker container's network without the 'bridge' driver.
I am using the 192.168.77.0/24 network, which is provided by my router (a network sharer).
I want the host and the docker container to use this network.
I used this as a reference: https://docs.docker.com/v1.7/articles/networking/#how-docker-networks-a-container.
Below are the commands I used to set up the network for the container.
All the commands were run as the root user.
# ifconfig eth0 promisc
# docker run -i -t --net=none --name net_test ubuntu_net /bin/bash
# docker inspect -f '{{.State.Pid}}' net_test
# 14673
# pid=14673
# mkdir -p /var/run/netns
# ln -s /proc/$pid/ns/net /var/run/netns/$pid
# brctl addbr my_bridge
# brctl addif my_bridge eth0
# ifconfig my_bridge 192.168.77.250 up
# route del default gw 192.168.77.1
# route add default gw 192.168.77.1
# ifconfig eth0 0.0.0.0 up
# ip link add veth0 type veth peer name veth1
# brctl addif my_bridge veth0
# ip link set veth0 up
# ip link set veth1 netns $pid
# ip netns exec $pid ip link set dev veth1 name eth0
# ip netns exec $pid ip link set eth0 address 12:34:56:78:9a:bc
# ip netns exec $pid ip link set eth0 up
# ip netns exec $pid ip addr add 192.168.77.251/24 broadcast 192.168.77.255 dev eth0
# ip netns exec $pid ip route add default via 192.168.77.1
After these commands, the docker container can ping the host and the host can ping the docker container, but the container can't ping the outside network.
Below is the routing table on the host after executing the commands.
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.77.1 0.0.0.0 UG 0 0 0 my_bridge
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
192.168.77.0 * 255.255.255.0 U 0 0 0 my_bridge
Below is the routing table in the docker container after executing the commands.
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.77.1 0.0.0.0 UG 0 0 0 eth0
192.168.77.0 * 255.255.255.0 U 0 0 0 eth0
However, the docker container can't ping the gateway (192.168.77.1) either.
I'm using Ubuntu 14.04 LTS Desktop.
I haven't set up any firewall rules.
Have I configured something wrong? Thanks.
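For anyone debugging a similar setup, a few checks that narrow this down (a sketch only; the interface and bridge names match the commands above, and the bridge-netfilter sysctl is only present when the br_netfilter code is loaded):
# confirm eth0 carries no IP address and is enslaved to my_bridge
ip addr show eth0
brctl show my_bridge
# bridged frames can be filtered by iptables when bridge-netfilter is on;
# a value of 0 means iptables is bypassed for bridged traffic
sysctl net.bridge.bridge-nf-call-iptables
# watch whether ARP/ICMP from the container actually leaves eth0
tcpdump -ni eth0 icmp or arp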

Related

OdroidH2 Debian - Unable to ping network gateway / no network connectivity

I have an OdroidH2 with Docker set up.
It was working fine for a few months and then suddenly, out of nowhere, it stopped having any internet/intranet connectivity.
Its connectivity goes through an Ethernet cable, not WiFi, and the interface that is supposed to have the connection is enp3s0 with an IP address of 192.168.1.100.
I have performed the following troubleshooting steps:
Restart (of course, always the first step)
Checked interface settings via ifconfig and also in /etc/network/interfaces
Checked the routing via route -n
Checked iptables (iptables was populated with the Docker configuration; I flushed iptables, including the nat and mangle tables, set the default policy to ACCEPT for INPUT, FORWARD and OUTPUT, and restarted the networking service afterwards; the exact commands are sketched after this list)
Checked if it was able to ping itself and the default gateway (it is able to ping itself but not the gateway, or any other devices)
Checked if another device was able to ping the OdroidH2 (host unreachable)
Checked dmesg and, for some reason, there were 2 firmware files that could not be loaded (I have since installed them and rebooted):
rtl_nic/rtl8168g-2.fw (after checking, this is the firmware for the network interfaces)
i915/glk_dmc_ver1_04.bin (didn't research much about this one, something to do with runtime power management??)
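The iptables flush mentioned above was roughly the following (a sketch of the standard commands, not a verbatim copy of what was run):
# set default policies to ACCEPT
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
# flush all rules in the filter, nat and mangle tables
iptables -F
iptables -t nat -F
iptables -t mangle -F
# remove user-defined chains (e.g. the DOCKER chains)
iptables -X
iptables -t nat -X
iptables -t mangle -X
# restart networking afterwards
systemctl restart networking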
After all of these steps, I am still unable to get network connectivity.
Below you can find information regarding my current configuration:
dmesg output
Stack Overflow does not allow me to include all of the information from my dmesg output, so I had to put it on Google Drive: dmesg_output
/etc/hosts
127.0.0.1 localhost
192.168.1.100 dc1 dc1.samdom.andrewoliverhome.local samdom.andrewoliverhome.local
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
iptables -nvL output (after clearing and reloading the networking service)
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
/etc/resolv.conf
#nameserver 127.0.0.1
#nameserver 8.8.8.8
#nameserver 8.8.4.4
search samdom.andrewoliverhome.local
#domain samdom.andrewoliverhome.local
nameserver 192.168.1.100
nameserver 8.8.8.8
route -n output
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.254 0.0.0.0 UG 0 0 0 enp3s0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker_gwbridge
172.19.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-debc10cb5b21
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 enp3s0
/etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo enp2s0 enp3s0
#auto lo br0
iface lo inet loopback
# The primary network interface
iface enp2s0 inet dhcp
allow-hotplug enp2s0 enp3s0
#iface enp2s0 inet manual
# post-up iptables-restore < /etc/iptables.up.rules
# This is an autoconfigured IPv6 interface
#iface enp2s0 inet dhcp
iface enp3s0 inet static
address 192.168.1.100
netmask 255.255.255.0
# broadcast 169.254.99.255
network 192.168.1.0
gateway 192.168.1.254
#iface enp2s0 inet manual
#iface enp3s0 inet manual
#iface br0 inet static
# bridge_ports enp2s0 enp3s0
# address 192.168.1.100
# broadcast 192.168.1.255
# netmask 255.255.255.0
# gateway 192.168.1.254
#
In /etc/resolv.conf, the reason the primary nameserver is the host itself is that I am running a docker container that serves as a Samba AD DC.
In order for the OdroidH2 to find all of my devices in the domain, it needs to make DNS queries to the Samba DC; if Samba cannot find a DNS record, it auto-forwards the query to 8.8.8.8.
Any help would be greatly appreciated (:
After all the troubleshooting, it turned out the issue was not with the OdroidH2 itself; it was with my router.
The LAN port that I'm using malfunctioned. I switched the Ethernet cable to a different LAN port and it worked.

Linux Bridge over ethernet

I have eth0 (DHCP running). I want to create a bridge over eth0 without losing the network on eth0.
I have tried the following:
brctl addbr br0
brctl addif br0 eth1
Is it possible to create a bridge (br0 here) without losing the network on the interface (eth0)?
You can set up NAT forwarding with iptables:
su -
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface eth1 -j ACCEPT
echo 1 > /proc/sys/net/ipv4/ip_forward
How to Set Up a Linux Bridge
The standard configuration should consist of:
Create the bridge interface.
root#ubuntu-1:~ # brctl addbr br0
Add existing interfaces to the bridge.
root#ubuntu-1:~ # brctl addif br0 eth0
root#ubuntu-1:~ # brctl addif br0 eth1
Zero IP the interfaces.
root#ubuntu-1:~ # ifconfig eth0 0.0.0.0
root#ubuntu-1:~ # ifconfig eth1 0.0.0.0
Bring up the bridge.
root#ubuntu-1:~ # ifconfig br0 up
Optionally, you can configure the virtual interface br0 to take part in your network; it then behaves like a single interface (i.e. a normal network card).
root#ubuntu-1:~ # ifconfig br0 192.168.0.1 netmask 255.255.255.0 up
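The same setup with the newer iproute2 tools looks roughly like this (a sketch; the interface and address choices mirror the brctl/ifconfig steps above):
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set eth1 master br0
# clear any addresses from the enslaved interfaces
ip addr flush dev eth0
ip addr flush dev eth1
ip link set br0 up
# optionally give the bridge itself an address
ip addr add 192.168.0.1/24 dev br0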
To learn more, visit How to Setup bridge network for KVM.

connection between 2 namespaces

I wrote the following commands to create two namespaces, ns1 and ns2, and to connect them using a bridge (br), tap0 and tap1. But at the end, ping reports that the network is unreachable. Could you please tell me what the problem is?
ip netns add ns1
ip netns add ns2
ip link add name br type bridge
ip tuntap add dev tap0 mode tap
ip tuntap add dev tap1 mode tap
ip link set dev tap0 master br
ip link set tap0 up
ip link set dev tap1 master br
ip link set tap1 up
ip link set tap0 netns ns1
ip link set tap1 netns ns2
ip netns exec ns1 ip addr add 10.0.0.1/24 dev tap0
ip netns exec ns2 ip addr add 10.0.0.2/24 dev tap1
ip netns exec ns1 ip link set dev tap0 up
ip netns exec ns2 ip link set dev tap1 up
ip netns exec ns1 ip link set dev lo up
ip netns exec ns2 ip link set dev lo up
ip link set br up
ip netns exec ns1 ping 10.0.0.2
The problem would probably become more obvious if you were to inspect the state of your bridge periodically in your script. Before you set the namespace of one of your tap devices, it looks like this:
# ip link show tap0
10: tap0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast master br state DOWN mode DEFAULT qlen 500
link/ether b2:e6:85:8a:43:61 brd ff:ff:ff:ff:ff:ff
After the setns operation, it looks like this:
10: tap0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 500
link/ether b2:e6:85:8a:43:61 brd ff:ff:ff:ff:ff:ff
Notice that you no longer see master br in this output; when you
moved the interface out of the global namespace it was removed from
the bridge (because it is no longer visible).
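One quick way to see this as the script runs (a small sketch, using the bridge name br from the question) is to list the bridge's ports before and after the setns:
# with bridge-utils
brctl show br
# or with iproute2
ip link show master br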
Generally, to connect namespaces to a bridge on your host you would
use veth devices, rather than tap devices. A veth device is a
connected pair of interfaces (think of it like a virtual patch cable).
You add one side of the pair to your bridge, and the other end goes
into the network namespace. Something like this:
ip netns add ns1
ip netns add ns2
ip link add name br0 type bridge
for ns in ns1 ns2; do
# create a veth pair named $ns-inside and $ns-outside
# (e.g., ns1-inside and ns1-outside)
ip link add $ns-inside type veth peer name $ns-outside
# add the -outside half of the veth pair to the bridge
ip link set dev $ns-outside master br0
ip link set $ns-outside up
# add the -inside half to the network namespace
ip link set $ns-inside netns $ns
done
ip netns exec ns1 ip addr add 10.0.0.1/24 dev ns1-inside
ip netns exec ns2 ip addr add 10.0.0.2/24 dev ns2-inside
ip netns exec ns1 ip link set dev ns1-inside up
ip netns exec ns2 ip link set dev ns2-inside up
ip link set br0 up
After the above:
# ip netns exec ns1 ping -c2 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.044 ms
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.034/0.039/0.044/0.005 ms

LXC Container as VPS

I've been looking into LXC containers and I was wondering whether it is possible to use an LXC container like an ordinary VPS.
What I mean is:
How do I assign an external IP address to an LXC container?
How do I ssh into an LXC container directly?
I'm quite new to LXC containers so please let me know if there are any other differences I should be aware of.
lxc-create -t download -n cn_name
lxc-start -n cn_name -d
lxc-attach -n cn_name
Then, in the container cn_name, install an OpenSSH server so you can use ssh, and reboot the container or restart the ssh service.
To make any container "services" available to the outside world, configure port forwarding from the host to the container.
For instance, if you had a web server in a container, to forward port 80 from the host IP 192.168.1.1 to a container with IP 10.0.3.1 you can use the iptables rule below.
iptables -t nat -I PREROUTING -i eth0 -p TCP -d 192.168.1.1/32 --dport 80 -j DNAT --to-destination 10.0.3.1:80
Now the web server on port 80 of the container will be available via port 80 of the host OS.
It sounds like what you want is to bridge the host NIC to the container. In that case, the first thing you need to do is create a bridge. Do this by first ensuring bridge-utils is installed on the system, then open /etc/network/interfaces for editing and change this:
auto eth0
iface eth0 inet dhcp
to this:
auto br0
iface br0 inet dhcp
bridge-interfaces eth0
bridge-ports eth0
up ifconfig eth0 up
iface eth0 inet manual
If your NIC is not named eth0, you should replace eth0 with whatever your NIC is named (mine is named enp5s0). Once you've made the change, you can start the bridge by issuing the command
sudo ifup br0
Assuming all went well, you should maintain internet access and even your ssh session should stay online during the process. I recommend you have physical access to the host because messing up the above steps could block the host from internet access. You can verify your setup is correct by running ifconfig and checking that br0 has an assigned IP address while eth0 does not.
Once that's all set up, open up /etc/lxc/default.conf and change
lxc.network.link = lxcbr0
to
lxc.network.link = br0
And that's it. Any containers that you launch will automatically bridge to eth0 and will effectively exist on the same LAN as the host. At this point, you can install ssh if it's not already installed and ssh into the container using its newly assigned IP address.
"Converting eth0 to br0 and getting all your LXC or LXD onto your LAN"

How to route 2 NICs with 2 public IPs on the same subnet with the same gateway

I'm a newbie in the networking field. I'm having trouble with my web server's network configuration (the OS is CentOS).
I have 2 physical NICs (eth0 + eth2) with 2 public IPs on the same subnet and the same gateway.
When I configure nginx to listen on these 2 NICs, everything works just fine. But when I monitor the traffic, all traffic is on eth0 only, nothing on eth2.
My question is: how can I configure things so that traffic that comes in on a NIC also goes out on that same NIC?
This is my ethernet card config:
DEVICE="eth0"
ONBOOT=yes
BOOTPROTO=static
IPADDR=x.x.x.38
PREFIX=27
GATEWAY=x.x.x.33
DNS1=8.8.8.8
DNS2=8.8.4.4
NAME="System eth0"
DEVICE="eth2"
ONBOOT=yes
BOOTPROTO=static
IPADDR=x.x.x.39
PREFIX=27
GATEWAY=x.x.x.33
DNS1=8.8.8.8
DNS2=8.8.4.4
NAME="System eth2"
This is my route -n result
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.14.8.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
y.z.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
y.z.0.0 0.0.0.0 255.255.0.0 U 1003 0 0 eth1
y.z.0.0 0.0.0.0 255.255.0.0 U 1004 0 0 eth2
0.0.0.0 x.x.x.33 0.0.0.0 UG 0 0 0 eth0
Hope you can help, thanks in advance!
In Linux, routing is performed by looking at the destination address only, so a packet will follow whichever route can be used to reach the packet's destination, with no regard to the source address.
The behaviour you want requires choosing a route depending not only on the destination address, but also on the source address — this is sometimes called source-sensitive routing or SADR (source-address dependent routing). The most portable way of implementing source-sensitive routing under Linux is to define routing rules across multiple routing tables using the ip rule and ip route ... table ... commands.
This is described in detail in Section 4 of the Linux Advanced Routing and Traffic Control HOWTO
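A minimal sketch of that approach, reusing the addresses from the question (x.x.x.38 on eth0, x.x.x.39 on eth2, gateway x.x.x.33; the table numbers 100 and 200 are arbitrary):
# one routing table per NIC, each with its own default route
ip route add default via x.x.x.33 dev eth0 table 100
ip route add default via x.x.x.33 dev eth2 table 200
# choose the table based on the packet's source address
ip rule add from x.x.x.38 table 100
ip rule add from x.x.x.39 table 200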
The problem can probably also be solved with NAT. First, create two tap devices:
ip tuntap add dev tap0 mode tap
ip tuntap add dev tap1 mode tap
Then you can assign separate ip addresses to these devices:
ifconfig tap0 10.10.10.1 netmask 255.255.255.255
ifconfig tap1 10.10.10.2 netmask 255.255.255.255
Finally, redirect incoming traffic to the corresponding virtual device:
iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination 10.10.10.1
iptables -t nat -A PREROUTING -i eth2 -j DNAT --to-destination 10.10.10.2
In this case, all traffic will definitely be routed back to the interface it came from.
