Error starting FreeIPA server as docker container - linux

I am getting an error when I run the following command:
docker run --name freeipa-server-container -ti \
-h ipa.example.test \
--read-only \
-v /var/lib/ipa-data:/data:Z freeipa-server [ opts ]
ERROR:
systemd 239 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy)
Detected virtualization container-other.
Detected architecture x86-64.
Set hostname to <ipa.example.test>.
Initializing machine ID from random generator.
Couldn't move remaining userspace processes, ignoring: Input/output error
Sun Mar 22 16:47:43 UTC 2020 /usr/sbin/ipa-server-configure-first
IPv6 stack is enabled in the kernel but there is no interface that has ::1 address assigned.
Add ::1 address resolution to 'lo' interface. You might need to enable IPv6 on the interface 'lo' in sysctl.conf.
The ipa-server-install command failed. See /var/log/ipaserver-install.log for more information.
The last part says I need to enable IPv6 on the interface 'lo' in sysctl.conf.
Here is the output of ifconfig. It is already enabled, isn't it?
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 661 bytes 56283 (56.2 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 661 bytes 56283 (56.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I also couldn't find much about this message:
Couldn't move remaining userspace processes, ignoring: Input/output error
Any pointers??
I am following this resource: https://github.com/freeipa/freeipa-container

I was able to resolve the same issue following this other answer, basically by adding --sysctl net.ipv6.conf.lo.disable_ipv6=0 to my docker run ... command. I don't actually know why it needs to be there, but my symptoms were the same as yours and this did the trick. Here is my full command for testing:
$ docker run -it --rm \
--sysctl net.ipv6.conf.lo.disable_ipv6=0 \
--name freeipa-server-container \
-h idm.example.test \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v /var/lib/ipa-data:/data \
--tmpfs /run \
--tmpfs /tmp \
freeipa/freeipa-server:latest
Sorry this isn't a great answer, but it might at least get you further down the road if you're still stuck.
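If you want to see what the flag actually changes, a quick check (a sketch, assuming the container from the command above is running and has the ip tool available) is to look at the loopback IPv6 state inside the container:
# 0 means IPv6 is enabled on lo inside the container's network namespace
docker exec freeipa-server-container cat /proc/sys/net/ipv6/conf/lo/disable_ipv6
# ipa-server-install wants ::1 on lo; this should show an inet6 ::1/128 entry
docker exec freeipa-server-container ip -6 addr show dev lo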

Related

Why is the wlan0 interface visible from xterm but not from a running program?

I'm running on an embedded linux buildroot OS and have installed a custom WiFi driver which starts in "wireless extensions mode". I can see that driver from an xterm window (whether I'm running as root or a normal user) with the following commands:
cat /proc/net/dev
cat /proc/net/wireless
ifconfig -a
The wlan0 interface looks like this when printed via ifconfig -a:
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 169.254.185.103 netmask 255.255.0.0 broadcast 169.254.255.255
inet6 fe80::24b:5bfc:8a9e:b148 prefixlen 64 scopeid 0x20<link>
ether 00:23:a7:40:22:f1 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 40 overruns 0 carrier 0 collisions 0
But when I run the same checks from my application, I can see all the interfaces on my system EXCEPT wlan0; it is completely filtered out when viewed by my application. I've tried these things from code within my application:
system("cat /proc/net/dev");
system("cat /proc/net/wireless");
system("ifconfig -a");
FILE* fp = fopen("/proc/net/dev", "r"); fgets(...
I've run my application as both root and as a normal user and see the same results. I've changed the ownership of my program to root:root and installed it as root with the s bit and that didn't help. So, it doesn't seem to be a security issue...?
Any ideas?
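One way to narrow this down (a diagnostic sketch, not from the original post; my_application is a placeholder for the actual program name) is to compare the network namespace of the xterm shell with that of the running application. If the two readlink outputs differ, the application is reading a different /proc/net/dev:
# network namespace of the current shell
readlink /proc/$$/ns/net
# network namespace of the running application (my_application is hypothetical)
readlink /proc/$(pidof my_application)/ns/net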

How to raise the buffer value with ethtool on the network interface

I noticed dropped packets when checking with the ifconfig command; the dropped counter keeps increasing by a large amount.
$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 4c:72:b9:f6:27:a8
          inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::4e72:b9ff:fef6:27a8/64  Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2745254558 errors:0 **dropped:1003363** overruns:0 frame:0
          TX packets:7633337281 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1583378766375 (1.5 TB)  TX bytes:10394167206386 (10.3 TB)
So I want to use ethtool to raise the ring buffer size for the interface.
$ sudo ethtool -g eth0
Ring parameters for eth0:
Cannot get device ring settings: Operation not supported
I can't check the ring settings for eth0, and I don't understand what the ring buffer is.
Is this a virtual machine?
From the output you pasted, I can see the drops are on the RX side.
You need to raise the RX ring buffer with ethtool -G eth0 rx 4096.
Show more info with ethtool -i eth0 and netstat -s.
There is a lot more tuning to eth0 than just ring buffers.
Try raising net.core.netdev_max_backlog.
Check it with sysctl net.core.netdev_max_backlog and set the new value with sysctl -w net.core.netdev_max_backlog=numberhere.
EDIT:
Please also show the card hardware info:
sudo lshw -C network
Also try googling for "Cannot get device ring settings r8169".
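Putting the suggestions above together, a minimal tuning sketch (assuming the driver supports ring resizing at all; if ethtool -g keeps reporting "Operation not supported", as it does above, the NIC driver simply may not expose ring settings):
# show the current and maximum supported ring sizes
sudo ethtool -g eth0
# raise the RX ring toward the reported "Pre-set maximum" (4096 is just an example value)
sudo ethtool -G eth0 rx 4096
# check and raise the kernel backlog queue for incoming packets (example value)
sysctl net.core.netdev_max_backlog
sudo sysctl -w net.core.netdev_max_backlog=4096
Note that sysctl -w does not persist across reboots; to keep the setting, add it to /etc/sysctl.conf or a file under /etc/sysctl.d/.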

How to add a VXLAN tag to isolate different groups of Docker containers

First, I am aware that I can create a VXLAN interface with a tag using the ip command:
ip link add vxlan-br0 type vxlan id <tag-id> group <multicast-ip> local <host-ip> dstport 0
But that does not meet my actual need, which is to isolate multiple Docker containers using different tags, something like:
brctl addif br1 veth111111 tag=10 # veth111111 is the netdev used by docker container 1
brctl addif br1 veth222222 tag=20 # veth222222 is the netdev used by docker container 2
brctl addif br1 veth333333 tag=10 # veth333333 is the netdev used by docker container 3
I want to isolate container 2 from containers 1 and 3, without isolating communication between containers 1 and 3. How can I achieve this?
Adding two bridge networks will provide isolation.
docker network create net1
docker network create net2
Then start some containers
docker run -d --name one --net net1 busybox sleep 600
docker run -d --name two --net net2 busybox sleep 600
docker run -d --name three --net net1 busybox sleep 600
one and three will communicate as they are attached to the same bridge
docker exec one ping three
docker exec three ping one
Others will fail as they cross networks/bridges
docker exec one ping two
docker exec two ping one
docker exec three ping two
You'll notice Docker provides host name resolution inside a network, so it's actually the name resolution that is failing above. IPs are not routed between bridges either.
$ docker exec three ip ad sh dev eth0
17: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:14:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe14:3/64 scope link
valid_lft forever preferred_lft forever
Ping two
$ docker exec three ping -c 1 -w 1 172.21.0.2
PING 172.21.0.2 (172.21.0.2): 56 data bytes
--- 172.21.0.2 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
Ping one
docker exec three ping -c 1 -w 1 172.20.0.2
PING 172.20.0.2 (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.044 ms
This setup will work with the overlay networking driver as well, but that is more complex to set up.
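If you later need selective connectivity rather than full isolation, a container can be attached to more than one bridge network; a sketch using the containers above (standard docker network connect usage, not part of the original answer):
# attach "two" to net1 as a second network so it can reach "one"
docker network connect net1 two
docker exec two ping -c 1 one
# detach again to restore the isolation
docker network disconnect net1 two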

How do I make my Docker container communicate with another node through a 2nd interface?

I am struggling to perform a pathetic test that involves communication between a server in the sandbox01 network and a Docker container running on my "Docker Host" server. The host is in the same subnet as the other nodes in the sandbox01 network: it has an interface called ens34 on the 10.* range. It also has an eth0 interface on the 9.* network, which allows it to access the outside world (downloading packages, Docker images, etc.).
Anyway, here is a little diagram to illustrate what I have:
The problem:
Cannot communicate between a node in sandbox01 subnet (10.* network) and the container.
e.g., someserver.sandbox01 → mydocker2 : ens34 :: docker0 :: vethXXX → container
The communication only works when I stop iptables, which makes things really mysterious! Just wondering if you have faced any similar issues; any ideas would be extremely appreciated.
The mystery:
After many tests, it was confirmed that the container can't communicate with any other node in the 10.* network – it doesn't behave as expected: it was supposed to produce a response through its gateway, docker0 (172.17.0.1), and find its way through the routing table in the docker host to communicate with "someserver.sandbox01" (10.1.21.59).
It only works when we let it process the MASQUERADE rule in iptables. However, Docker automatically adds this rule: -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -c 0 0 -j MASQUERADE
Note the "! -o docker0" there: Docker doesn't want us to masquerade the IP addresses that are sending requests? This is messing up the communication somehow.
The container responds fine to any communication coming through the 9.* address (eth0) -- i.e., I can send requests from my laptop -- but never through the 10.* address (ens34). If I run a terminal within the container, the container can ping ALL the IP addresses reachable through the mapped routes EXCEPT the IP addresses in the 10.* range. Why?
[root@mydocker2 my-nc-server]# docker run -it -p 8080:8080 --name nc-server nc-server /bin/sh
sh-4.2# ping 9.83.90.55
PING 9.83.92.20 (9.83.90.55) 56(84) bytes of data.
64 bytes from 9.83.90.55: icmp_seq=1 ttl=117 time=124 ms
64 bytes from 9.83.90.55: icmp_seq=2 ttl=117 time=170 ms
^C
--- 9.83.90.55 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 124.422/147.465/170.509/23.046 ms
sh-4.2# ping 9.32.145.98
PING 9.32.148.67 (9.32.145.98) 56(84) bytes of data.
64 bytes from 9.32.145.98: icmp_seq=1 ttl=63 time=1.37 ms
64 bytes from 9.32.145.98: icmp_seq=2 ttl=63 time=0.837 ms
^C
--- 9.32.145.98 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.837/1.104/1.372/0.269 ms
sh-4.2# ping 10.1.21.5
PING 10.1.21.5 (10.1.21.5) 56(84) bytes of data.
^C
--- 10.1.21.5 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 2999ms
sh-4.2# ping 10.1.21.60
PING 10.1.21.60 (10.1.21.60) 56(84) bytes of data.
^C
--- 10.1.21.60 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 2999ms
For some reason, this interface here doesn't play well with Docker:
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.21.18 netmask 255.255.255.0 broadcast 10.1.21.255
Could this be related to the fact that the eth0 is the primary NIC for this Docker host?
The workaround:
In mydocker2 we need to stop iptables and add a new sub-interface under ens34 →
service iptables stop
ifconfig ens34:0 10.171.171.171 netmask 255.255.255.0
And in someserver.sandbox01 we need to add a new route →
route add -net 10.171.171.0 netmask 255.255.255.0 gw 10.1.21.18
Then the communication between them works. I know... bizarre, right?
In case any of you wants to ask, no, I don't want to use the " --net=host " option to replicate the interfaces from the docker host to my container.
So, thoughts? Suggestions? Ideas?
SOLVED!!!
Inside /etc/sysconfig/network-scripts, there were 2 files: route-ens34 and rule-ens34.
If you remove those and restart the network, it should start working.
Cheers!
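For reference, a minimal sketch of that fix (assuming a RHEL/CentOS-style host where those files live; moving them aside is safer than deleting them):
# move the interface-specific route/rule files out of the way
mkdir -p /root/network-scripts-backup
mv /etc/sysconfig/network-scripts/route-ens34 /root/network-scripts-backup/
mv /etc/sysconfig/network-scripts/rule-ens34 /root/network-scripts-backup/
# restart networking so the change takes effect
systemctl restart network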

docker networking namespace not visible in ip netns list

When I create a new Docker container, e.g. with
docker run -it -m 560m --cpuset-cpus=1,2 ubuntu sleep 120
and check its namespaces, I can see that new namespaces have been created (example for pid 7047).
root@dude2:~# ls /proc/7047/ns -la
total 0
dr-x--x--x 2 root root 0 Jul 7 12:17 .
dr-xr-xr-x 9 root root 0 Jul 7 12:16 ..
lrwxrwxrwx 1 root root 0 Jul 7 12:17 ipc -> ipc:[4026532465]
lrwxrwxrwx 1 root root 0 Jul 7 12:17 mnt -> mnt:[4026532463]
lrwxrwxrwx 1 root root 0 Jul 7 12:17 net -> net:[4026532299]
lrwxrwxrwx 1 root root 0 Jul 7 12:17 pid -> pid:[4026532466]
lrwxrwxrwx 1 root root 0 Jul 7 12:17 user -> user:[4026531837]
lrwxrwxrwx 1 root root 0 Jul 7 12:17 uts -> uts:[4026532464]
root@dude2:~# ls /proc/self/ns -la
When I check with ip netns list I cannot see the new net namespace.
dude@dude2:~/docker/testroot$ ip netns list
dude@dude2:~/docker/testroot$
Any idea why?
That's because Docker is not creating the required symlink:
# (as root)
pid=$(docker inspect -f '{{.State.Pid}}' ${container_id})
mkdir -p /var/run/netns/
ln -sfT /proc/$pid/ns/net /var/run/netns/$container_id
Then, the container's network namespace can be examined with ip netns exec ${container_id} <command>, e.g.:
# e.g. show stats about eth0 inside the container
ip netns exec "${container_id}" ip -s link show eth0
As @jary indicates, the ip netns command only works with namespace symlinks in /var/run/netns. However, if you have the nsenter command available (part of the util-linux package), you can accomplish the same thing using the PID of your Docker container.
To get the PID of a docker container, you can run:
docker inspect --format '{{.State.Pid}}' <container_name_or_Id>
To get a command inside the network namespace of a container:
nsenter -t <container_pid> -n <command>
E.g:
$ docker inspect --format '{{.State.Pid}}' weechat
4432
$ sudo nsenter -t 4432 -n ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
75: eth0@if76: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:1b brd ff:ff:ff:ff:ff:ff
inet 172.17.0.27/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:1b/64 scope link
valid_lft forever preferred_lft forever
The above was equivalent to running ip netns exec <some_namespace> ip addr show.
As you can see here, you will need to run nsenter with root privileges.
Similar to, but different from, @jary's answer. There is no need to involve /proc/<pid>/ or nsenter; a single step below achieves what you want. You can then operate on containers' network namespaces just as if they had been created manually on the host machine.
One Move:
ln -s /var/run/docker/netns /var/run/netns
Result:
Start a container:
docker run -tid ubuntu:18.04
List container:
root@Light-G:/var/run# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
972909a27ea1 ubuntu:18.04 "/bin/bash" 19 seconds ago Up 18 seconds peaceful_easley
List network namespace of this container:
root@Light-G:/var/run# ip netns list
733443afef58 (id: 0)
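Once the namespace shows up in ip netns list, it can be used like any named namespace while the container is running; for example, with the namespace name listed above:
# run a command inside that container's network namespace
ip netns exec 733443afef58 ip -s link show eth0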
Delete container:
root@Light-G:/var/run# docker rm -f 972909a27ea1
972909a27ea1
List network namespace again:
root@Light-G:/var/run# ip netns list
root@Light-G:/var/run#
NOTE: This seems much more secure than the symlink practice, but it may still lead to serious security problems. It seems to me we may be violating what containers are supposed to be: contained environments.
That's because Docker is not creating the required entry under /var/run/netns. You can create it yourself with a read-only bind mount instead of a symlink:
# (as root)
pid=$(docker inspect -f '{{.State.Pid}}' "${container_id}")
mkdir -p /var/run/netns/
touch "/var/run/netns/$container_id"
mount -o ro,bind "/proc/$pid/ns/net" "/var/run/netns/$container_id"
Then, the container's network namespace can be examined with ip netns exec "${container_id}" <command>, e.g.:
# e.g. show stats about eth0 inside the container
ip netns exec "${container_id}" ip -s link show eth0
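When you are finished, the bind mount can be cleaned up (a sketch, reusing the same $container_id variable as above):
# remove the namespace reference created above
umount "/var/run/netns/$container_id"
rm "/var/run/netns/$container_id"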
Thank you @TheDiveO for notifying me.
