Multiple Linux network namespaces for a single application

I'm trying to use network namespaces to achieve VRF (virtual routing and forwarding) behavior for network isolation. Essentially, I have a server application (C/C++) listening on a TCP port in the default namespace. What I'd like to do is use network namespaces to create isolated VRFs with VLANs, and then have that application, still running in the default namespace, spawn one thread per namespace to listen on that same port in each namespace.
I have the network side figured out; I just can't see how I can spawn a thread (I'd prefer pthreads over clone() if possible), call setns() on one of those namespaces, and then bind to the same port inside the namespace. Here's what I'm doing to create the namespaces and bridges (limited to one namespace here for simplicity):
# ip netns add ns_vlan100
# ip link add link eno1 eno1.100 type vlan id 100
# ip link add veth0 type veth peer name veth_vlan100
# ip link set veth0 netns ns_vlan100
# ip netns exec ns_vlan100 ip link set dev veth0 up
# ip link set dev veth_vlan100 up
# brctl addbr bridge_vlan100
# brctl addif bridge_vlan100 eno1.100
# brctl addif bridge_vlan100 veth_vlan100
# ip link set dev bridge_vlan100 up
# ip link set dev eno1.100 up
# ip netns exec ns_vlan100 ifconfig veth0 10.10.10.1 netmask 255.255.255.0 up
# ip netns exec ns_vlan100 ip route add default via 10.10.10.1
With this, I can create the VLAN on a peer machine (no containers) and ping 10.10.10.1 without issue, so I know the links are good. What I want is for my existing application to spawn a thread in C or C++ (pthreads are heavily preferred), have that thread call setns() with something that puts it into the namespace ns_vlan100, and then bind to the same port for my application, just inside that namespace.
I can't seem to figure out how to do this. Any help is much appreciated.
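For reference, here is the shape of what I'm imagining, as a minimal, untested sketch: one pthread enters the namespace and binds there. It assumes the namespace was created with ip netns add (so it is visible at /var/run/netns/ns_vlan100), uses a placeholder port 8080, and needs CAP_SYS_ADMIN for setns(). Compile with gcc -pthread.

/* Hypothetical sketch: one listener thread per network namespace. */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#define PORT 8080   /* placeholder */

static void *ns_listener(void *arg)
{
    const char *ns_path = arg;   /* e.g. "/var/run/netns/ns_vlan100" */

    int ns_fd = open(ns_path, O_RDONLY | O_CLOEXEC);
    if (ns_fd < 0) { perror("open netns"); return NULL; }

    /* setns() moves only the calling thread into the namespace;
     * the rest of the process stays in the default namespace. */
    if (setns(ns_fd, CLONE_NEWNET) < 0) {
        perror("setns");
        close(ns_fd);
        return NULL;
    }
    close(ns_fd);

    /* Sockets created from here on belong to the new namespace, and a
     * socket stays in the namespace it was created in for its lifetime.
     * Each namespace has its own port space, so the same port number can
     * be bound once per namespace without SO_REUSEPORT tricks. */
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return NULL; }

    struct sockaddr_in addr = {
        .sin_family      = AF_INET,
        .sin_port        = htons(PORT),
        .sin_addr.s_addr = htonl(INADDR_ANY),
    };
    if (bind(sock, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(sock, 16) < 0) {
        perror("bind/listen");
        close(sock);
        return NULL;
    }
    /* ... accept() loop for this namespace ... */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, ns_listener, "/var/run/netns/ns_vlan100");
    pthread_join(tid, NULL);
    return 0;
}

From the man pages, my understanding is that a thread can also return to the default namespace later by opening /proc/self/ns/net before the switch and calling setns() on that saved descriptor. Does this approach look sound?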

Related

Raspbian: Multiple IP addresses are created on eth0

When trying to initiate Docker swarm, I get the following error response from the daemon:
could not choose an IP address to advertise since this system has multiple addresses on interface eth0 (aa11:a111:1a1a::111 and ba11:a111:1a1a::111).
When I execute:
sudo ip addr flush dev eth0
I successfully remove all ip addresses, however, they are recreated once I reboot. What might be the reason for this?

Using ip link To Connect Multiple Chroot Jails

I've done some research online, and I'm not really finding any info about how to establish a communication link between two or more chroot jails. The "standard" seems to be just using a single chroot jail for sandboxing.
To put things into context, I'm trying to perform the basic functions of docker containers, but with chroot jails. So, in the same way that two docker containers can ping each other via IP and/or be connected on the same user-defined docker network, can this exist for two chroot jails?
I understand that containers and chroot jails are different things, and I'm familiar with both. I just need to know if there is a way to link two chroot jails in a similar way to linking two containers, or if this is nonexistent and I'm just wasting my time.
There's nothing magic about Docker: it's just using the facilities provided by the Linux kernel to enable various sorts of isolation. You can take advantage of the same features.
A Docker "network" is nothing more than a bridge device on your host. You can create those trivially using either brctl or the ip link command, as in:
ip link add mynetwork type bridge
You'll want to activate the interface and assign an IP address:
ip addr add 192.168.23.1/24 dev mynetwork
ip link set mynetwork up
A Docker container has a separate network environment from the host. This is called a network namespace, and you can manipulate network namespaces by hand with the ip netns command.
You can create a network namespace with the ip netns add command. For example, here we create two namespaces named chroot1 and chroot2:
ip netns add chroot1
ip netns add chroot2
Next, you'll create two pairs of veth network interfaces. One end of each pair will be attached to one of the above network namespaces, and the other will be attached to the mynetwork bridge:
# create a veth-pair named chroot1-inside and chroot1-outside
ip link add chroot1-inside type veth peer name chroot1-outside
ip link set master mynetwork dev chroot1-outside
ip link set chroot1-outside up
ip link set netns chroot1 chroot1-inside
# do the same for chroot2
ip link add chroot2-inside type veth peer name chroot2-outside
ip link set netns chroot2 chroot2-inside
ip link set chroot2-outside up
ip link set master mynetwork dev chroot2-outside
And now configure the interfaces inside the network namespaces. We can do this using the -n option to the ip command, which causes the command to run inside the specified network namespace:
ip -n chroot1 addr add 192.168.23.11/24 dev chroot1-inside
ip -n chroot1 link set chroot1-inside up
ip -n chroot2 addr add 192.168.23.12/24 dev chroot2-inside
ip -n chroot2 link set chroot2-inside up
Note that in the above, ip -n <namespace> is just a shortcut for:
ip netns exec <namespace> ip ...
So that:
ip -n chroot1 link set chroot1-inside up
Is equivalent to:
ip netns exec chroot1 ip link set chroot1-inside up
(Apparently, older versions of the iproute package don't include the -n option.)
And finally, you can start your chroot environments inside these two namespaces. Assuming that your chroot filesystem is mounted on /mnt, in one terminal, run:
ip netns exec chroot1 chroot /mnt bash
And in another terminal:
ip netns exec chroot2 chroot /mnt bash
These two chroot environments can now reach each other over the bridge (subject to the host firewall; see the note below). For example, inside the chroot1 environment:
# ip addr show
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
36: chroot1-inside@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 12:1c:9c:39:22:fa brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.23.11/24 scope global chroot1-inside
valid_lft forever preferred_lft forever
inet6 fe80::101c:9cff:fe39:22fa/64 scope link
valid_lft forever preferred_lft forever
# ping -c1 192.168.23.12
PING 192.168.23.12 (192.168.23.12) 56(84) bytes of data.
From 192.168.23.1 icmp_seq=1 Destination Host Prohibited
--- 192.168.23.12 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
(The "Destination Host Prohibited" reply above is generated by the host at 192.168.23.1: a firewall REJECT rule on the host is blocking forwarding between the bridge ports, which is the default on some distributions. Once forwarding is permitted, the ping succeeds.) The namespaces can also ping the host:
# ping -c1 192.168.23.1
PING 192.168.23.1 (192.168.23.1) 56(84) bytes of data.
64 bytes from 192.168.23.1: icmp_seq=1 ttl=64 time=0.115 ms
--- 192.168.23.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
Of course, for this chroot environment to be useful, there's a bunch of stuff missing, such as the virtual filesystems on /proc, /sys, and /dev (a sketch of that setup follows below). Setting all of this up, and tearing it back down cleanly, is why people use tools like Docker or systemd-nspawn: there's a lot to manage by hand.
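As an illustration of what the setup half involves, here is a minimal C sketch (illustrative only; it assumes the chroot filesystem is at /mnt as above, and it omits error recovery and teardown):

/* Hypothetical sketch: mount the usual virtual filesystems
 * into a chroot at /mnt, then enter it and start a shell. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
    /* Kernel-backed filesystems the jailed programs will expect. */
    if (mount("proc", "/mnt/proc", "proc", 0, NULL) < 0)
        perror("mount /mnt/proc");
    if (mount("sysfs", "/mnt/sys", "sysfs", 0, NULL) < 0)
        perror("mount /mnt/sys");
    if (mount("devtmpfs", "/mnt/dev", "devtmpfs", 0, NULL) < 0)
        perror("mount /mnt/dev");

    /* Enter the jail and hand control to a shell. */
    if (chroot("/mnt") < 0 || chdir("/") < 0) {
        perror("chroot");
        return 1;
    }
    execl("/bin/bash", "bash", (char *)NULL);
    perror("execl");
    return 1;
}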

How to connect the host to its virtual bridge?

A bridge brOnline is connected to eth0, which provides access to the LAN / Internet. The setup is achieved by modifying /etc/network/interfaces as shown below.
Why? The aim of this adventure is to establish a virtual network between several virtual machines and the system hosting the bridge and the virtual machines (the host).
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto brOnline
iface brOnline inet dhcp
bridge_ports eth0
bridge_stp on
bridge_fd 0
How can I connect to the bridge from my host?
One important thing: adding eth0 to the bridge makes it somehow unavailable to the host!
So, before eth0 was added to the bridge, what "magic" was attached to eth0 that gave my browser access to the local network? How can I attach that magic to the bridge instead, so that I have access to the LAN and can talk to the other clients connected to the bridge?
My earlier attempts were wrong. The host does not need to connect to the bridge through a tap device; it can use the bridge directly. In other words, if you point your default route at the gateway through the bridge, you can reach the LAN interface too.
# see current settings
# the address after "via" is the default gateway, which may be provided by your DHCP server
ip route
default via 42.69.42.69 dev eth0
...
# delete the old default route first, otherwise adding the new one fails with "File exists"
sudo ip route del default via 42.69.42.69 dev eth0
# Add your bridge as default route
sudo ip route add default via 42.69.42.69 dev brOnline
# check
ip route
default via 42.69.42.69 dev brOnline
ping/ssh to the outside now work, and Firefox also works with these settings.
Hint:
These changes are not permanent. To make them permanent, you need to edit /etc/network/interfaces, e.g. as sketched below.
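For example, something along these lines (my guess at the permanent form: it moves the DHCP configuration off eth0 and onto the bridge; stanza details vary by distribution):

auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto brOnline
iface brOnline inet dhcp
bridge_ports eth0
bridge_stp off
bridge_fd 0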
I'm still not able to ping the other VMs, and vice versa, but that might be another topic.

How to use Linux network namespaces for per-process routing?

I want to crawl webpages through a browser and store the network traffic per URL (not only HTTP but also UDP, RTMP, etc.). I came across this solution of using Linux network namespaces for per-process routing. Below are the steps I followed; however, I am unable to browse a webpage.
ip netns add test
create a pair of virtual network interfaces (veth-a and veth-b):
ip link add veth-a type veth peer name veth-b
change the active namespace of the veth-a interface:
ip link set veth-a netns test
configure the IP addresses of the virtual interfaces:
ip netns exec test ifconfig veth-a up 192.168.163.1 netmask 255.255.255.0
ifconfig veth-b up 192.168.163.254 netmask 255.255.255.0
configure the routing in the test namespace:
ip netns exec test route add default gw 192.168.163.254 dev veth-a
sudo bash -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo iptables -t nat -A POSTROUTING -s 192.168.163.0/24 -o wlan0 -j MASQUERADE
Opening the browser in the namespace gives the following:
sudo ip netns exec test /usr/bin/firefox http://google.com
(firefox:15861): GConf-WARNING **: Client failed to connect to the D-BUS daemon:
Failed to connect to socket /tmp/dbus-xE8M4KnMPn: Connection refused
(firefox:15861): LIBDBUSMENU-GLIB-WARNING **: Unable to get session bus: Could not connect: Connection refused
In Wireshark (sudo ip netns exec test wireshark) I can see only outgoing DNS requests from 192.168.163 to 127.0.1.1.
What am I missing here?
Instead of modifying the host's /etc/resolv.conf, a cleaner way is to create a namespace-specific resolv.conf under /etc/netns/<namespace-name>/. The ip netns utility will bind-mount any resolv.conf on that path over /etc/resolv.conf in a mount namespace for the process launched in the new network namespace.
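For example, for the test namespace used above:
sudo mkdir -p /etc/netns/test
echo "nameserver 8.8.8.8" | sudo tee /etc/netns/test/resolv.conf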
Got it. I am able to ping 8.8.8.8 now; the problem was DNS resolution.
Update the DNS resolver:
Put nameserver 8.8.8.8 in /etc/resolvconf/resolv.conf.d/base and in /etc/resolvconf/resolv.conf.d/head.
Restart the network:
sudo service network-manager restart
Now /etc/resolv.conf looks like:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 8.8.8.8
nameserver 127.0.1.1
Finally:
sudo ip netns exec test /opt/google/chrome/google-chrome --user-data-dir=/tmp/chrome2/ http://yahoo.com

Private and public IP addresses on Azure cloud

I'm trying to run G-WAN on an Azure cloud machine, but I run into issues with the network interfaces, or I simply cannot reach the machine with a browser.
I believe the issue has to do with the internal IP address assigned by the Azure router, but I could also be missing some critical security configuration (or something else).
The machine is running CentOS.
Here is my configuration:
/etc/sysconfig/iptables-config
added a rule for accepting traffic via port 80
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
/etc/hosts
added the public IP address and mapped that to a subdomain of cloudapp.net
192.12.45.23 myappname.cloudapp.net
gwan_linux64-bit
changed the directories to suit the public IP.
mv 0.0.0.0_8080/#0.0.0.0 192.12.45.23_80/#192.12.45.23
run gwan
sudo ./gwan
can't listen on 168.62.8.160:80 (Cannot assign requested address)
Available network interfaces (2):
127.0.0.1 12.109.24.35
Then I tried both the 12.109.24.35 and 127.0.0.1 interfaces; G-WAN ran without an error, but I couldn't browse the machine using the public IP 168.62.8.160:80.
Further info:
/etc/sysconfig/network doesn't use the FQDN myappname.cloudapp.net but
HOSTNAME=myappname
NETWORKING=yes
as well, /etc/sysconfig/network-scripts/ifcfg-eth0
DHCP_HOSTNAME=myappname
DEVICE=eth0
I don't know Azure and its specificities, but it seems that you are missing some system configuration for your IP address (a problem that has little to do with G-WAN).
Your error is:
"can't listen on 168.62.8.160:80
Available network interfaces (2): 127.0.0.1 12.109.24.35"
On a Linux machine you would have to assign the 168.62.8.160 IP address to one of your network adapters in order for the system to be able to use it.
For temporary changes: ifconfig eth0:1 168.62.8.160
For permanent changes:
vim /etc/network/interfaces
--------------------------------------------------------------
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 12.109.24.35
network ...   # replace ... with the relevant data
netmask ...   # replace ... with the relevant data
auto eth0:1
iface eth0:1 inet static
address 168.62.8.160
network ...   # replace ... with the relevant data
netmask ...   # replace ... with the relevant data
--------------------------------------------------------------
...and then run: /etc/init.d/networking restart
That's what would work if you were running plain Linux; hopefully it helps you understand what you are missing on Microsoft Azure.
