JGroups does not join any group on RHEL with a link-local IPv6 address (works on Ubuntu 14.04)

I am seeing this exception on RHEL 7.3.1 when running my code to form a JGroups cluster. The following appears in the log:
[DEBUG] 2017-10-03 20:23:01.339 [pool-10-thread-1] client.jgroups - Creating new Channel
[WARN ] 2017-10-03 20:23:01.342 [pool-10-thread-1] stack.Configurator - JGRP000014: TP.loopback has been deprecated: enabled by default
[DEBUG] 2017-10-03 20:23:01.343 [pool-10-thread-1] stack.Configurator - set property UDP.bind_addr to default value /fe80:0:0:0:2d57:389e:e4fe:9520%eth0
[DEBUG] 2017-10-03 20:23:01.345 [pool-10-thread-1] stack.Configurator - set property UDP.diagnostics_addr to default value /ff0e:0:0:0:0:0:75:75
[DEBUG] 2017-10-03 20:23:01.346 [pool-10-thread-1] client.jgroups - STATE OPEN
[DEBUG] 2017-10-03 20:23:01.347 [pool-10-thread-1] protocols.UDP - sockets will use interface fe80:0:0:0:2d57:389e:e4fe:9520%eth0
[ERROR] 2017-10-03 20:23:01.374 [pool-10-thread-1] client.jgroups - Catching
java.lang.Exception: failed to open a port in range 40000-40255
at org.jgroups.protocols.UDP.createDatagramSocketWithBindPort(UDP.java:500) ~[xxx-xxx.jar:2.0.1]
at org.jgroups.protocols.UDP.createSockets(UDP.java:361) ~[xxx-xxx.jar:2.0.1]
at org.jgroups.protocols.UDP.start(UDP.java:270) ~[xxx-xxx.jar:2.0.1]
at org.jgroups.stack.ProtocolStack.startStack(ProtocolStack.java:965) ~[xxx-xxx.jar:2.0.1]
at org.jgroups.JChannel.startStack(JChannel.java:891) ~[xxx-xxx.jar:2.0.1]
at org.jgroups.JChannel._preConnect(JChannel.java:553) ~[xxx-xxx.jar:2.0.1]
at org.jgroups.JChannel.connect(JChannel.java:288) ~[xxx-xxx.jar:2.0.1]
at org.jgroups.JChannel.connect(JChannel.java:279) ~[xxx-xxx.jar:2.0.1]
Now the same client code runs perfectly on an Ubuntu 14.04 machine. Another thing to note is that the following flag is not provided in either case:
-Djava.net.preferIPv4Stack=true
Also, in both cases link-local IPv6 addresses are being used.
How can I make the same code work on RHEL?
Adding the following info for the questions asked by @bela-ban:
Trying options in the config XML
I tried both LINK_LOCAL and NON_LOOPBACK, but I still get the same error.
JGroups version?
I am using JGroups version 3.6.3.Final.
Omitting IPv4 flag
We have omitted -Djava.net.preferIPv4Stack=true because we want to test our client in an IPv6 environment.
Running ifconfig -a
Running the command ifconfig -a gives the following output:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.66.194.103 netmask 255.255.252.0 broadcast 10.66.195.255
inet6 fe80::4b16:4a66:2bc3:c505 prefixlen 64 scopeid 0x20<link>
inet6 fe80::30cb:2f41:5e04:51c2 prefixlen 64 scopeid 0x20<link>
inet6 fe80::2d57:389e:e4fe:9520 prefixlen 64 scopeid 0x20<link>
ether 00:15:5d:b8:65:47 txqueuelen 1000 (Ethernet)
RX packets 8485475 bytes 1961303302 (1.8 GiB)
RX errors 0 dropped 109087 overruns 0 frame 0
TX packets 49088 bytes 4169469 (3.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 154252 bytes 11261136 (10.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 154252 bytes 11261136 (10.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Which version of JGroups do you use? (java -cp jgroups.jar org.jgroups.Version prints the version to stdout).
Using the system property -Djava.net.preferIPv4Stack=true will force the use of IPv4 addresses. In your case, on RHEL, you seem to have omitted this property, therefore IPv6 addresses are used.
Make sure you do have the address fe80:0:0:0:2d57:389e:e4fe:9520%eth0 (ifconfig -a). Note that you can use bind_addr=link_local to pick any link-local address [1].
[1] http://www.jgroups.org/manual4/index.html#Transport
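For reference, a minimal sketch of where bind_addr goes in the 3.x protocol stack XML. The attribute values here are illustrative, not a complete working stack (bind_port/port_range are chosen to match the 40000-40255 range from the log); see [1] for the full attribute list:

```xml
<config xmlns="urn:org:jgroups">
    <!-- bind_addr accepts keywords such as link_local, site_local,
         non_loopback, global, or a literal address. -->
    <UDP bind_addr="link_local"
         bind_port="40000"
         port_range="255" />
    <!-- ...discovery (PING), failure detection, NAKACK2, GMS etc.
         would follow here... -->
</config>
```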

So this was all failing due to bad link-local addresses.
The first mistake was using the now-obsolete ifconfig command: it gives no indication of whether the assigned link-local addresses are valid.
The correct command to use is ip address. In my case it returns the following:
# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:15:5d:b8:65:47 brd ff:ff:ff:ff:ff:ff
inet 10.66.194.103/22 brd 10.66.195.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::2d57:389e:e4fe:9520/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::30cb:2f41:5e04:51c2/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::4b16:4a66:2bc3:c505/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
As you can see, the IPv6 link-local addresses listed here are marked tentative dadfailed: Duplicate Address Detection (DAD) failed for them, so these addresses cannot be used for anything. The next step was to weed out these bad addresses and add our own unique local address. I did the following to accomplish this:
# Add the new unique local address. This one can also collide, so choose wisely. A reboot may be required after this.
$ nmcli c mod eth0 ipv6.addresses fc00::10:8:8:71/7 ipv6.method manual
# Remove the old link-local addresses
$ ip address delete fe80::4b16:4a66:2bc3:c505/64 dev eth0
$ ip address delete fe80::30cb:2f41:5e04:51c2/64 dev eth0
$ ip address delete fe80::2d57:389e:e4fe:9520/64 dev eth0
After this, we can verify whether the above steps have worked:
ip address show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:15:5d:b8:65:47 brd ff:ff:ff:ff:ff:ff
inet 10.66.194.103/22 brd 10.66.195.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fc00::10:8:8:71/7 scope global
valid_lft forever preferred_lft forever
As you can see, there is no more tentative dadfailed.
To conclude: this was not related to JGroups at all; it was caused purely by bad link-local addresses.
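The manual cleanup above can also be scripted. The helper below is a sketch: it parses ip -6 address show output for addresses flagged dadfailed (the field positions match the output shown earlier), and the commented usage needs root and iproute2.

```shell
# list_dadfailed: print each inet6 address/prefix marked "dadfailed"
# from "ip -6 address show" output read on stdin.
list_dadfailed() {
    awk '/inet6/ && /dadfailed/ { print $2 }'
}

# Usage on the affected box (as root), mirroring the manual deletes above:
#   ip -6 address show dev eth0 | list_dadfailed | while read -r a; do
#       ip address delete "$a" dev eth0
#   done
```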

Related

How does Linux forward packets to a k8s service? [closed]

I have installed k8s on a VM.
Now I have access to some k8s services from this VM, like:
[root@vm ~]# netcat -vz 10.96.0.10 9153
kube-dns.kube-system.svc.cluster.local [10.96.0.10] 9153 open
10.96.0.10 is the ClusterIP of the kube-dns service.
My question is: how does Linux forward requests to 10.96.0.10 to the right destination?
I don't see any interface with the IP 10.96.0.10, nor any routing rules for it, on the VM.
[root@vm ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:00:00:10 brd ff:ff:ff:ff:ff:ff
inet 192.168.4.104/24 brd 192.168.4.255 scope global dynamic noprefixroute ens3
valid_lft 33899sec preferred_lft 28499sec
inet6 fe80::ca7d:cdfe:42a3:75f/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:cd:1d:8a:77 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 10.0.219.64/32 brd 10.0.219.64 scope global tunl0
valid_lft forever preferred_lft forever
7: calib9d0c90540c@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
8: cali81206f5bf92@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
[root@vm ~]# ip route
default via 192.168.4.1 dev ens3 proto dhcp src 192.168.4.104 metric 202
10.0.189.64/26 via 192.168.4.107 dev tunl0 proto bird onlink
blackhole 10.0.219.64/26 proto bird
10.0.219.107 dev calib9d0c90540c scope link
10.0.219.108 dev cali81206f5bf92 scope link
10.0.235.128/26 via 192.168.4.105 dev tunl0 proto bird onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.4.0/24 dev ens3 proto dhcp scope link src 192.168.4.104 metric 202
kube-proxy (not kubelet) manages iptables NAT rules that route the traffic to the actual endpoints of a service. So the service IP is purely virtual and gets rewritten, in a round-robin fashion, across all endpoints of the service.
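You can see that mapping on the node itself by looking for the service's chain in the nat table. The helper below is a sketch: KUBE-SVC-* / KUBE-SEP-* are the chain-name prefixes that iptables-mode kube-proxy generates, and the commented usage assumes root on the node.

```shell
# svc_chain_for: given a ClusterIP ($1) and "iptables-save -t nat" output
# on stdin, print the KUBE-SVC-* chain that handles it. That chain in turn
# jumps to KUBE-SEP-* endpoint chains via statistic-mode random rules,
# which is where the per-endpoint DNAT happens.
svc_chain_for() {
    awk -v ip="$1" '$0 ~ ("-d " ip "/32") && /-j KUBE-SVC-/ {
        for (i = 1; i <= NF; i++) if ($i == "-j") print $(i + 1)
    }'
}

# Usage on the node (as root):
#   iptables-save -t nat | svc_chain_for 10.96.0.10
```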

On a BBBW, how to change the primary Ethernet interface, without adding secondary interfaces, from code or script

The setup is a BBBW with a USB-to-Ethernet dongle. The aim is to have both a wireless interface and a wired interface. For compatibility with other hardware code, we cannot use connman services, as they are not available on all platforms. These changes will be applied from configuration files sent to the device, so the solution must be done programmatically. We are using Go to interpret the configuration files. At this time, we chose to use the /etc/network/interfaces file to effect the changes. Since the device needs to be active at all times, no reboot is permitted.
We can get the changes applied by a reboot (not what we want though), but a network restart does not work. The main problem we encounter is setting a static address for an interface without rebooting the device.
Example: Changing the wlan0 interface static IP
Initially we have one IP on wlan0
output from "ip a s":
4: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 50:65:83:dc:43:cc brd ff:ff:ff:ff:ff:ff
inet 192.168.0.90/24 brd 192.168.0.255 scope global wlan0
valid_lft forever preferred_lft forever
inet6 fe80::5265:83ff:fedc:43cc/64 scope link
valid_lft forever preferred_lft forever
ifconfig shows:
wlan0 Link encap:Ethernet HWaddr 50:65:83:dc:43:cc
inet addr:192.168.0.90 Bcast:192.168.0.255 Mask:255.255.255.0
We edit /etc/network/interfaces with :
iface wlan0 inet static
    address 192.168.0.91
    netmask 255.255.0.0
    gateway 192.168.0.1
    wpa-ssid SOME_SSID
    wpa-psk SOME_KEY
Then we issue:
sudo ifdown wlan0
sudo ifup wlan0
sudo service networking restart
And we see:
ip a s:
4: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 50:65:83:dc:43:cc brd ff:ff:ff:ff:ff:ff
inet 192.168.0.90/24 brd 192.168.0.255 scope global wlan0
valid_lft forever preferred_lft forever
inet 192.168.0.91/24 brd 192.168.0.255 scope global secondary wlan0
valid_lft forever preferred_lft forever
inet6 fe80::5265:83ff:fedc:43cc/64 scope link
valid_lft forever preferred_lft forever
ifconfig:
wlan0 Link encap:Ethernet HWaddr 50:65:83:dc:43:cc
inet addr:192.168.0.90 Bcast:192.168.0.255 Mask:255.255.255.0
As you can see, the new IP address is taken as a secondary IP. ifconfig only shows the primary IP and not the secondary one. We can connect to the device with both addresses, but the presence of the old IP address is not acceptable.
Is there a correct way to do this via system file edits or even system calls (bash commands)?
Is this secondary-IP behaviour only a BBB thing, or is it a Linux one?
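(No answer is recorded here, but the secondary-IP behaviour is generic Linux/iproute2, not BBB-specific: adding an address to an interface that already has one in the same subnet makes it secondary. One hedged workaround is to flush the interface before ifup so the old primary cannot linger. The helper below is a sketch; it needs root and will briefly drop connectivity on that interface.)

```shell
# reapply_wlan0: tear down wlan0, flush every address on it, then let
# ifup re-read /etc/network/interfaces and install the new static
# address as primary. Run as root; wlan0 connections break momentarily.
reapply_wlan0() {
    ifdown wlan0 2>/dev/null
    ip address flush dev wlan0
    ifup wlan0
}
# Usage (as root): reapply_wlan0
```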

Network namespace and bridging

Hello everyone, I am an occasional Linux user, but I have a project to do and I need some help with bridging :)
I have tried Google, but didn't solve the problem.
My task is to create a network namespace, so it can be used to perform some other tasks from it.
Debian 8.2 is used in a VMware virtual machine on Windows 7. I have also tried the same things on a Raspberry Pi 2, but the same problems appear.
First, I followed the tutorial https://lwn.net/Articles/580893/ to create a pair of virtual Ethernet interfaces.
So now I have veth0 in the global namespace with IP address 10.1.1.2/24, and veth1 in the netns1 namespace with IP address 10.1.1.1/24.
Next, I followed the tutorial http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge to bridge eth0 and veth0, so I can access the internet from the netns1 namespace.
First, I deleted the IP addresses of both the eth0 and veth0 interfaces and set them to the DOWN state.
A new bridge (br0) is created and both interfaces (eth0 and veth0) are added to it.
Then both interfaces are set to the UP state, and I run "dhclient br0" to assign an IP address to br0.
From the global namespace it is now possible to run "ping google.com", but from the netns1 namespace I get the error "Network is unreachable".
(I suppose there is a problem with routes; I have tried adding some default routes to the netns1 namespace, but no luck. My networking knowledge is modest, so I'm asking for help.)
$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
link/ether 00:0c:29:45:b6:1d brd ff:ff:ff:ff:ff:ff
4: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
link/ether 86:e4:6c:02:b6:79 brd ff:ff:ff:ff:ff:ff
inet6 fe80::84e4:6cff:fe02:b679/64 scope link
valid_lft forever preferred_lft forever
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 00:0c:29:45:b6:1d brd ff:ff:ff:ff:ff:ff
inet 192.168.178.135/24 brd 192.168.178.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe45:b61d/64 scope link
valid_lft forever preferred_lft forever
$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.178.2 0.0.0.0 UG 0 0 0 br0
default 192.168.178.2 0.0.0.0 UG 1024 0 0 br0
192.168.178.0 * 255.255.255.0 U 0 0 0 br0
$ ip netns exec netns1 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether ee:b8:f3:47:f7:0c brd ff:ff:ff:ff:ff:ff
inet 10.1.1.1/24 brd 10.1.1.255 scope global veth1
valid_lft forever preferred_lft forever
inet6 fe80::ecb8:f3ff:fe47:f70c/64 scope link
valid_lft forever preferred_lft forever
$ ip netns exec netns1 route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.1.1.0 * 255.255.255.0 U 0 0 0 veth1
$ brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000c2945b61d no eth0
veth0
Thanks in advance for help :)
I have found the solution.
Basically, IP forwarding was missing, along with two more steps (I had tried them before, but because IP forwarding wasn't enabled, they didn't work).
Here are the steps for future readers (after getting the bridge to work in the global namespace):
Assign an IP address to veth0 in the global namespace (10.1.1.2), because the IP address was deleted before creating the bridge (the bridge tutorial says: "The IP address needs to be set after the bridge has been configured.")
Assign the default gateway in the netns1 namespace to be veth0 in the global namespace: "ip netns exec netns1 route add default gw 10.1.1.2"
Enable IP forwarding: "echo 1 > /proc/sys/net/ipv4/ip_forward"
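Wrapped up as one function for future readers, the three steps above look like this. A sketch: the interface/namespace names and addresses are the ones from this question, and applying it needs root.

```shell
# setup_netns_net: apply the three fixes from the answer above, in order.
setup_netns_net() {
    # 1. Re-add veth0's address (it was deleted while building the bridge).
    ip address add 10.1.1.2/24 dev veth0
    # 2. Default-route the namespace via veth0's address.
    ip netns exec netns1 ip route add default via 10.1.1.2
    # 3. Enable IPv4 forwarding.
    sysctl -w net.ipv4.ip_forward=1
}
# Usage (as root): setup_netns_net
```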

Problems with vagrant, centos, dhclient

I have a problem where the CentOS dhclient script keeps overwriting my /etc/resolv.conf file.
I'm using a CentOS 7 image with Vagrant 1.7.2.
I have a pretty simple setup; here is my Vagrantfile:
config.vm.define "puppetmaster" do |pm|
  ## Map the local puppet configuration to the puppetmaster
  pm.vm.synced_folder "./puppetmaster", "/etc/puppet"
  pm.vm.provision "puppet" do |puppet|
    puppet.manifests_path = ["vm", "/etc/puppet/manifests"]
    puppet.manifest_file = "site.pp"
  end
  pm.vm.box = "puppetlabs/centos-7.0-64-puppet"
  pm.vm.network "private_network", ip: "192.168.2.2"
  ## Enable the GUI
  pm.vm.provider :virtualbox do |v|
    v.gui = true
    v.name = "mattlab-puppetmaster"
    v.customize ["modifyvm", :id, "--memory", 2048]
    v.customize ["modifyvm", :id, "--cpus", 4]
  end
end
As you can see, I only have a single network interface configured, and it has some static settings.
I know Vagrant also has an internal interface, which it uses for communication. This seems to pick an IP using DHCP (although I'm not certain where this comes from).
I know that having a single DHCP interface will trigger the dhclient script to overwrite resolv.conf.
As this box will be a puppetmaster and DNS server, I need to find a way to disable the Vagrant DHCP interface in a way that will:
My interfaces look like
[root@puppetmaster dhcp]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:19:cd:16 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 81995sec preferred_lft 81995sec
inet6 fe80::a00:27ff:fe19:cd16/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:85:18:83 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.2/24 brd 192.168.2.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe85:1883/64 scope link
valid_lft forever preferred_lft forever
Still allow me to connect via the command line with vagrant ssh
Allow Vagrant to use my static 192.168.2.2 address instead of the DHCP address
Does this sound possible?
Many thanks in advance.
OK, answered my own question.
When dhclient-script generates a new /etc/resolv.conf, it calls a function named make_resolv_conf(). By putting the following override in /etc/dhcp/dhclient-enter-hooks and making that file executable, dhclient stops overwriting the file:
make_resolv_conf() {
    # Do not overwrite /etc/resolv.conf.
    return 0
}
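Installing that hook can itself be scripted. A sketch: the hook path defaults to the one from the answer above, and the parameter lets you write somewhere else when testing without root.

```shell
# install_dhclient_hook: write a make_resolv_conf() override into a
# dhclient enter-hooks file and mark it executable, so dhclient-script
# sources it and leaves /etc/resolv.conf alone.
install_dhclient_hook() {
    hook=${1:-/etc/dhcp/dhclient-enter-hooks}
    cat > "$hook" <<'EOF'
make_resolv_conf() {
    # Do not overwrite /etc/resolv.conf.
    return 0
}
EOF
    chmod +x "$hook"
}
# Usage (as root): install_dhclient_hook
```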
Thanks
Matt
I resolved this on CentOS 7; I hope it helps you.
In ifcfg-eth0 (in my case) PEERDNS="yes" was set. Changing it to PEERDNS="no" alone did not resolve the problem, but
PEERDNS="no"
export PEERDNS
ifup ifcfg-eth0
did work! I suspect it is a bug in dhclient-script or its caller.

How to check what route (interface) a destination IP address belongs to, without sending anything?

How do I look up an IP address in the routing table (and retrieve the network interface index and MAC) without sending any packets there?
You could use the ip command:
# ip route get to 74.125.228.197
74.125.228.197 via 192.168.1.1 dev eth0 src 192.168.1.100
And once you know the interface you can get its index and link/ether(MAC) address:
# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 100
link/ether 88:00:00:00:f2:4c brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/24 brd 192.168.1.255 scope global eth0
inet6 fe80::3039:34ff:fe2c:f24c/64 scope link
valid_lft forever preferred_lft forever
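To finish answering the question (interface index and MAC, still without transmitting anything), the dev name from ip route get can be combined with sysfs. A sketch; the parsing helper matches the ip route get output format shown above, and route lookups only consult the kernel's FIB, so no packets are sent.

```shell
# dev_for_route: extract the "dev" value from "ip route get" output on stdin.
dev_for_route() {
    awk '{ for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1) }'
}

# Usage:
#   dev=$(ip route get 74.125.228.197 | dev_for_route)
#   idx=$(cat /sys/class/net/"$dev"/ifindex)    # interface index
#   mac=$(cat /sys/class/net/"$dev"/address)    # link/ether (MAC)
```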
