Using ip link To Connect Multiple Chroot Jails - linux

I've done some research online, and I'm not really finding any info about how to establish a communication link between two or more chroot jails. The "standard" seems to be just using a single chroot jail for sandboxing.
To put things into context, I'm trying to perform the basic functions of docker containers, but with chroot jails. So, in the same way that two docker containers can ping each other via IP and/or be connected on the same user-defined docker network, can this exist for two chroot jails?
I understand that containers and chroot jails are different things, and I'm familiar with both. I just need to know if there is a way to link two chroot jails in a similar way to linking two containers, or if this is nonexistent and I'm just wasting my time.

There's nothing magic about Docker: it's just using the facilities provided by the Linux kernel to enable various sorts of isolation. You can take advantage of the same features.
A Docker "network" is nothing more than a bridge device on your host. You can create those trivially using either brctl or the ip link command, as in:
ip link add mynetwork type bridge
You'll want to assign an IP address to the bridge and activate the interface:
ip addr add 192.168.23.1/24 dev mynetwork
ip link set mynetwork up
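If you prefer the older brctl tool mentioned above (from the bridge-utils package), the bridge itself can be created equivalently with:
brctl addbr mynetwork
You would still assign the address and bring the interface up with the ip commands shown above.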
A Docker container has a separate network environment from the host.
This is called a network namespace, and you can manipulate them by
hand with the ip netns command.
You can create a network namespace with the ip netns add command. For example, here we create two namespaces named chroot1 and chroot2:
ip netns add chroot1
ip netns add chroot2
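You can confirm that both namespaces exist by listing them:
ip netns list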
Next, you'll create two pairs of veth network interfaces. One end of each pair will be attached to one of the above network namespaces, and the other will be attached to the mynetwork bridge:
# create a veth-pair named chroot1-inside and chroot1-outside
ip link add chroot1-inside type veth peer name chroot1-outside
ip link set master mynetwork dev chroot1-outside
ip link set chroot1-outside up
ip link set netns chroot1 chroot1-inside
# do the same for chroot2
ip link add chroot2-inside type veth peer name chroot2-outside
ip link set netns chroot2 chroot2-inside
ip link set chroot2-outside up
ip link set master mynetwork dev chroot2-outside
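At this point the two -outside interfaces should show up as ports of the bridge; you can verify this with, for example:
ip link show master mynetwork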
And now configure the interfaces inside of the network namespaces.
We can do this using the -n option to the ip command, which causes
the command to run inside of the specified network namespace:
ip -n chroot1 addr add 192.168.23.11/24 dev chroot1-inside
ip -n chroot1 link set chroot1-inside up
ip -n chroot2 addr add 192.168.23.12/24 dev chroot2-inside
ip -n chroot2 link set chroot2-inside up
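New network namespaces start with their loopback interface down, so you will usually also want to bring lo up inside each one:
ip -n chroot1 link set lo up
ip -n chroot2 link set lo up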
Note that in the above, ip -n <namespace> is just a shortcut for:
ip netns exec <namespace> ip ...
So that:
ip -n chroot1 link set chroot1-inside up
Is equivalent to:
ip netns exec chroot1 ip link set chroot1-inside up
(Apparently older versions of the iproute package don't include the
-n option.)
And finally, you can start your chroot environments inside of these
two namespaces. Assuming that your chroot filesystem is mounted on
/mnt, in one terminal, run:
ip netns exec chroot1 chroot /mnt bash
And in another terminal:
ip netns exec chroot2 chroot /mnt bash
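From the host, you can check which processes ended up in each namespace, e.g.:
ip netns pids chroot1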
You will find that these two chroot environments can ping each other, provided the host firewall allows traffic to be forwarded across the bridge (see the note after the output below). For example, inside the chroot1 environment:
# ip addr show
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
36: chroot1-inside@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 12:1c:9c:39:22:fa brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.23.11/24 scope global chroot1-inside
valid_lft forever preferred_lft forever
inet6 fe80::101c:9cff:fe39:22fa/64 scope link
valid_lft forever preferred_lft forever
# ping -c1 192.168.23.12
PING 192.168.23.12 (192.168.23.12) 56(84) bytes of data.
From 192.168.23.1 icmp_seq=1 Destination Host Prohibited
--- 192.168.23.12 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
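The rejected ping in the output above ("Destination Host Prohibited") most likely comes from the host's firewall: with the br_netfilter module loaded, bridged traffic traverses the iptables FORWARD chain, and a REJECT rule there produces exactly this ICMP error. If you see the same thing, allowing forwarding across the bridge should let the ping succeed; a minimal example, to be adapted to your own firewall setup:
iptables -I FORWARD -i mynetwork -o mynetwork -j ACCEPT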
And they can ping the host:
# ping -c1 192.168.23.1
PING 192.168.23.1 (192.168.23.1) 56(84) bytes of data.
64 bytes from 192.168.23.1: icmp_seq=1 ttl=64 time=0.115 ms
--- 192.168.23.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
Of course, for this chroot environment to be useful, there's a bunch
of stuff missing, such as the virtual filesystems on /proc, /sys,
and /dev. Setting all of this up and tearing it back down cleanly
is why people use tools like Docker or systemd-nspawn: there's a lot to manage by hand.
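As a rough sketch of what that manual setup involves (assuming, as above, that the chroot filesystem is mounted on /mnt and that the usual mount points exist inside it), you might bind-mount the virtual filesystems before entering the jail and unmount them afterwards:
# before entering the chroot
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount --bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts
# ... use the chroot as shown above ...
# tear it down again afterwards
umount /mnt/dev/pts /mnt/dev /mnt/sys /mnt/proc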

Related

Docker macvlan network, unable to access internet

I have a dedicated server with multiple IP addresses; some IPs have a MAC address associated with them, while others (in a subnetwork) do not.
I have created docker macvlan network using:
docker network create -d macvlan -o macvlan_mode=bridge --subnet=188.40.76.0/26 --gateway=188.40.76.1 -o parent=eth0 macvlan_bridge
I have IP 88.99.102.115 with MAC 00:50:56:00:60:42, and I created a container using:
docker run --name cont1 --net=macvlan_bridge --ip=88.99.102.115 --mac-address 00:50:56:00:60:42 -itd nginx
This works; I can access the nginx hosted at that IP address from outside.
Now the case of an IP that has no MAC address, where the gateway is outside the subnet:
subnet: 88.99.114.16/28, gateway: 88.99.102.103
Unable to create network using:
docker network create -d macvlan -o macvlan_mode=bridge --subnet=88.99.114.16/28 --gateway=88.99.102.103 -o parent=eth0 mynetwork
Throws error:
no matching subnet for gateway 88.99.102.103
I tried increasing the subnet scope to include the gateway:
docker network create -d macvlan -o macvlan_mode=bridge --subnet=88.99.0.0/16 --gateway=88.99.102.103 -o parent=eth0 mynetwork
The network got created, and I then started an nginx container using 'mynetwork'; since I don't have a MAC address for 88.99.114.18, I used a random MAC address, 40:1c:0f:bd:a1:d2.
docker run --name cont1 --net=mynetwork --ip=88.99.114.18 --mac-address 40:1c:0f:bd:a1:d2 -itd nginx
I can't reach nginx (88.99.102.115).
How do I create a macvlan docker network if my gateway is out of my subnet?
How do I run a container using a macvlan network when I have only an IP address but no MAC address?
I don't have much knowledge of networking, so it would be really helpful if you could explain in detail.
My /etc/network/interfaces file:
### Hetzner Online GmbH - installimage
# Loopback device:
auto lo
iface lo inet loopback
iface lo inet6 loopback
# device: eth0
auto eth0
iface eth0 inet static
address 88.99.102.103
netmask 255.255.255.192
gateway 88.99.102.65
# default route to access subnet
up route add -net 88.99.102.64 netmask 255.255.255.192 gw 88.99.102.65 eth0
iface eth0 inet6 static
address 2a01:4f8:221:1266::2
netmask 64
gateway fe80::1
You might want to start by reading up on simple routing concepts or subnets and routing.
How do I create a macvlan docker network if my gateway is out of my subnet?
A gateway address must be on the same subnet as an interface. To use this new subnet you will need to use up one of the IP addresses and assign it somewhere on the host as a gateway.
Subnet routing to a bridge network.
From the hosting screenshot, the 88.99.114.16/28 subnet has been set up to route via your host 88.99.102.103. You need to create an interface somewhere on your host to use as the gateway if you want Docker to use the rest of the IP addresses in the subnet.
Create a bridge network for Docker to use; the bridge will be assigned the gateway address 88.99.114.17:
docker network create \
--driver=bridge \
--subnet 88.99.114.16/28 \
--gateway=88.99.114.17 \
name0
You may also need to enable IP forwarding for routing to work. Configure IP forwarding in /etc/sysctl.conf:
net.ipv4.ip_forward = 1
and apply the new setting:
sysctl -p /etc/sysctl.conf
Then a container run on the new bridge with your routed network should be able to access the gateway and the internet:
docker run --net=name0 --rm busybox \
sh -c "ip ad sh && ping -c 4 88.99.114.17 && wget api.ipify.org"
You may need to allow access into the subnet in iptables, depending on your default FORWARD policy:
iptables -I DOCKER -d 88.99.114.16/28 -j ACCEPT
Services on the subnet will be accessible from the outside world:
docker run --net=name0 busybox \
nc -lp 80 -e echo -e "HTTP/1.0 200 OK\nContent-Length: 3\n\nHi\n"
Then from outside:
○→ ping -c 2 88.99.114.18
PING 88.99.114.18 (88.99.114.18): 56 data bytes
64 bytes from 88.99.114.18: icmp_seq=0 ttl=63 time=0.527 ms
64 bytes from 88.99.114.18: icmp_seq=1 ttl=63 time=0.417 ms
--- 88.99.114.18 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.417/0.472/0.527/0.055 ms
○→ curl 88.99.114.18
Hi
No need for macvlan interface mapping.
How do I run a container using a macvlan network when I have only an IP address but no MAC address?
macvlan is used to map a physical/host interface into a container. As you don't have a physical interface for these addresses, it will be hard to map one into a container.

Multiple Linux network namespaces for single application

I'm trying to use network namespaces to achieve VRF behavior (virtual routing and forwarding) for network isolation. Essentially I have a server application (C/C++) running on a TCP port in the default namespace. What I'd like to do is use network namespaces to create isolated VRF's using VLANs, and then have that application running in the default namespace be able to spawn a thread to each namespace to listen on that same port per namespace.
I have the network side figured out; I just can't see how I can spawn a thread (I'd prefer to use pthreads instead of clone() if possible), call setns() on one of those namespaces, and then bind to the same port inside the namespace. Here's what I'm doing to create the namespaces and bridges (limiting to one namespace here for simplicity):
# ip netns add ns_vlan100
# ip link add link eno1 eno1.100 type vlan id 100
# ip link add veth0 type veth peer name veth_vlan100
# ip link set veth0 netns ns_vlan100
# ip netns exec ns_vlan100 ip link set dev veth0 up
# ip link set dev veth_vlan100 up
# brctl addbr bridge_vlan100
# brctl addif bridge_vlan100 eno1.100
# brctl addif bridge_vlan100 veth_vlan100
# ip link set dev bridge_vlan100 up
# ip link set dev eno1.100 up
# ip netns exec ns_vlan100 ifconfig veth0 10.10.10.1 netmask 255.255.255.0 up
# ip netns exec ns_vlan100 ip route add default via 10.10.10.1
With this, I can create a VLAN on a peer machine (no containers) and ping 10.10.10.1 without issue. So I know the links are good. What I want to do is then have my existing application be able to spawn a thread in C or C++ (pthreads are heavily preferred), and that thread call setns() with something to put it into namespace ns_vlan100, so I can then bind to the same port for my application, just inside that namespace.
I can't seem to figure out how to do this. Any help is much appreciated.

docker networking - Could not discover any bindable network interfaces

I am trying to execute a jar from within a Docker container; it tries to start a service, detect the IP and port of a remote service, and connect to it. But as soon as I do that, I get this exception:
Exception occured: org.teleal.cling.transport.spi.InitializationException: Could not discover any bindable network interfaces and/or addresses
org.teleal.cling.transport.spi.InitializationException: Could not discover any bindable network interfaces and/or addresses
at org.teleal.cling.transport.impl.NetworkAddressFactoryImpl.<init>(NetworkAddressFactoryImpl.java:99)
at org.teleal.cling.DefaultUpnpServiceConfiguration.createNetworkAddressFactory(DefaultUpnpServiceConfiguration.java:231)
at org.teleal.cling.DefaultUpnpServiceConfiguration.createNetworkAddressFactory(DefaultUpnpServiceConfiguration.java:220)
at org.teleal.cling.transport.RouterImpl.<init>(RouterImpl.java:93)
at org.teleal.cling.UpnpServiceImpl.createRouter(UpnpServiceImpl.java:97)
at org.teleal.cling.UpnpServiceImpl.<init>(UpnpServiceImpl.java:81)
at org.teleal.cling.UpnpServiceImpl.<init>(UpnpServiceImpl.java:58)
I tried to run the container with the --privileged flag as well, so that I could get all the possible networking capabilities, but it still threw the same exception.
I am running a boot2docker VM on my Mac OS X machine, and that is where I am running this Docker container.
Running ip addr within my Docker container, I get this:
eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether abcd brd ff:ff:ff:ff:ff:ff
inet 172.17.0.90/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:5a/64 scope link
valid_lft forever preferred_lft forever
Would anyone have suggestions for getting this service up and running? I think the issue is with the Docker networking here, since I am able to run the same service on a native Linux VM.
I got this working by using the --net host flag, which essentially tells the container to use the host's networking; in this case, ip addr displays the host machine's networking details (eth0, eth1).
docker run -it --net host <image name> /bin/bash

How to use Linux Network Namespaces for per processes routing?

I want to crawl webpages through a browser and store the network traffic per URL (not only HTTP but also UDP, RTMP, etc.). I came across this solution of using Linux network namespaces for per-process routing. These are the steps I followed; however, I am unable to browse the webpage.
ip netns add test
create a pair of virtual network interfaces (veth-a and veth-b):
ip link add veth-a type veth peer name veth-b
change the active namespace of the veth-a interface:
ip link set veth-a netns test
configure the IP addresses of the virtual interfaces:
ip netns exec test ifconfig veth-a up 192.168.163.1 netmask 255.255.255.0
ifconfig veth-b up 192.168.163.254 netmask 255.255.255.0
configure the routing in the test namespace:
ip netns exec test route add default gw 192.168.163.254 dev veth-a
sudo bash -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo iptables -t nat -A POSTROUTING -s 192.168.163.0/24 -o wlan0 -j MASQUERADE
Open the browser in the namespace and get the following:
sudo ip netns exec test /usr/bin/firefox http://google.com
(firefox:15861): GConf-WARNING **: Client failed to connect to the D-BUS daemon:
Failed to connect to socket /tmp/dbus-xE8M4KnMPn: Connection refused
(firefox:15861): LIBDBUSMENU-GLIB-WARNING **: Unable to get session bus: Could not connect: Connection refused
In wireshark: sudo ip netns exec test wireshark
I can see only outgoing DNS requests from 192.168.163 to 127.0.1.1.
Kindly let me know what I am missing here.
Instead of modifying the host's /etc/resolv.conf, a cleaner way is to create a namespace-specific resolv.conf under /etc/netns/<namespace-name>/. The ip netns utility will bind-mount any resolv.conf found on that path over /etc/resolv.conf in a mount namespace for processes launched in the new network namespace.
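For example, for the test namespace used above (the nameserver value is just an example):
mkdir -p /etc/netns/test
echo "nameserver 8.8.8.8" > /etc/netns/test/resolv.conf
Any process started with ip netns exec test ... will then see that file as its /etc/resolv.conf.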
Got it. I am able to ping 8.8.8.8. The problem was in DNS resolving.
Update the DNS resolver:
Put nameserver 8.8.8.8 in /etc/resolvconf/resolv.conf.d/base and in /etc/resolvconf/resolv.conf.d/head.
Restart the network:
sudo service network-manager restart
Now /etc/resolv.conf looks like this:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 8.8.8.8
nameserver 127.0.1.1
Finally:
sudo ip netns exec test /opt/google/chrome/google-chrome --user-data-dir=/tmp/chrome2/ http://yahoo.com

Docker container cannot resolve dns

I'm trying to configure my Docker container to see my local private RPM repository over HTTP. It cannot resolve the DNS name, and I'm probably not setting up DNS correctly on the host CentOS 6.5 VM.
http://172.17.42.1/repository/CENTOS/6/nginx/x86_64/repodata/repomd.xml:
[Errno 14] PYCURL ERROR 7 - "couldn't connect to host"
bash-4.1# more /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://172.17.42.1/repository/CENTOS/$releasever/nginx/$basearch/   (can't connect to host)
gpgcheck=0
enabled=1
The container /etc/resolv.conf contains this
bash-4.1# cat /etc/resolv.conf
nameserver 192.168.64.2
nameserver 192.168.64.129
nameserver 127.0.0.1
search localdomain eadis.local
When I try to add the domain name to the IP address, it does not resolve.
Docker container IP address
eth0 Link encap:Ethernet HWaddr 7E:EB:4C:25:F4:DA
inet addr:172.17.0.7 Bcast:0.0.0.0 Mask:255.255.0.0
Host VM Docker server
docker0 Link encap:Ethernet HWaddr FE:EF:63:A8:65:5C
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
eth3 Link encap:Ethernet HWaddr 00:0C:29:10:0A:77
inet addr:192.168.64.129 Bcast:192.168.64.255 Mask:255.255.255.0
[root@centos named]# cat /etc/resolv.conf
domain localdomain
search localdomain eadis.local
nameserver 192.168.64.129
nameserver 192.168.64.2
When you run your container with --net or --network on a user-defined network, Docker uses its own embedded DNS server to discover the services running on that network; all DNS queries are sent to the Docker engine.
Normally (when using the default network) you can use the --dns option on your run command. But when you run your images on a self-defined network, you must add your own customized resolv.conf file to the container. Make one on your system and name it custom-resolv.conf.
In this file, add your custom DNS addresses, which will then be used inside the container:
nameserver 1.0.0.1
nameserver 4.2.2.4
Then you must add this small part to your run command:
-v /path/to/file/custom-resolv.conf:/etc/resolv.conf:ro
Use ro for a read-only volume mapping. Once your container has started, you can exec into it and check the /etc/resolv.conf file, which should now be identical to custom-resolv.conf.
Now all of your DNS requests are sent to your DNS server.
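Putting it together, the run command might look something like this (the network name here is a placeholder):
docker run --rm --network my-network \
  -v /path/to/file/custom-resolv.conf:/etc/resolv.conf:ro \
  busybox cat /etc/resolv.conf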
