Why can't Docker containers communicate with each other? - linux

I have created a small project to test Docker clustering. Basically, the cluster.sh script launches three identical containers and uses pipework to configure a bridge (bridge1) on the host and add a NIC (eth1) to each container.
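For context, here is a rough sketch of what the relevant part of cluster.sh does (the image name and loop are illustrative, not the actual script):
# launch three identical containers and attach a second NIC to each via pipework
for i in 1 2 3; do
  cid=$(docker run -d my-test-image)   # hypothetical image name
  # add eth1 on the host bridge "bridge1" with an address in 172.17.99.0/24
  sudo pipework bridge1 -i eth1 "$cid" 172.17.99.$i/24
done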
If I log into one of the containers, I can arping other containers:
# 172.17.99.1
root@d01eb56fce52:/# arping 172.17.99.2
ARPING 172.17.99.2
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=0 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=1 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=2 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=3 time=1.001 sec
^C
--- 172.17.99.2 statistics ---
5 packets transmitted, 4 packets received, 20% unanswered (0 extra)
So it seems packets can go through bridge1.
But the problem is that I can't ping the other containers, nor can I send any IP packets through with tools like telnet or netcat.
In contrast, the bridge docker0 and NIC eth0 work correctly in all containers.
Here's my route table
# 172.17.99.1
root@d01eb56fce52:/# ip route
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.17
172.17.99.0/24 dev eth1 proto kernel scope link src 172.17.99.1
and bridge config
# host
$ brctl show
bridge name bridge id STP enabled interfaces
bridge1 8000.8a6b21e27ae6 no veth1pl25432
veth1pl25587
veth1pl25753
docker0 8000.56847afe9799 no veth7c87801
veth953a086
vethe575fe2
# host
$ brctl showmacs bridge1
port no mac addr is local? ageing timer
1 8a:6b:21:e2:7a:e6 yes 0.00
2 8a:a3:b8:90:f3:52 yes 0.00
3 f6:0c:c4:3d:f5:b2 yes 0.00
# host
$ ifconfig
bridge1 Link encap:Ethernet HWaddr 8a:6b:21:e2:7a:e6
inet6 addr: fe80::48e9:e3ff:fedb:a1b6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:163 errors:0 dropped:0 overruns:0 frame:0
TX packets:68 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:8844 (8.8 KB) TX bytes:12833 (12.8 KB)
# I'm showing only one veth here for simplicity
veth1pl25432 Link encap:Ethernet HWaddr 8a:6b:21:e2:7a:e6
inet6 addr: fe80::886b:21ff:fee2:7ae6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:155 errors:0 dropped:0 overruns:0 frame:0
TX packets:162 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12366 (12.3 KB) TX bytes:23180 (23.1 KB)
...
and the iptables FORWARD chain
# host
$ sudo iptables -x -v --line-numbers -L FORWARD
Chain FORWARD (policy ACCEPT 10675 packets, 640500 bytes)
num pkts bytes target prot opt in out source destination
1 15018 22400195 DOCKER all -- any docker0 anywhere anywhere
2 15007 22399271 ACCEPT all -- any docker0 anywhere anywhere ctstate RELATED,ESTABLISHED
3 8160 445331 ACCEPT all -- docker0 !docker0 anywhere anywhere
4 11 924 ACCEPT all -- docker0 docker0 anywhere anywhere
5 56 4704 ACCEPT all -- bridge1 bridge1 anywhere anywhere
Note that the pkts count for rule 5 isn't 0, which means the ping was routed correctly (the FORWARD chain is traversed after routing, right?), but somehow the packets didn't reach the destination.
I'm out of ideas as to why docker0 and bridge1 behave differently. Any suggestions?
Update 1
Here's the tcpdump output on the target container when pinged from another.
$ tcpdump -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
22:11:17.754261 IP 192.168.1.65 > 172.17.99.1: ICMP echo request, id 26443, seq 1, length 6
Note that the source IP is 192.168.1.65, which is the host's eth0 address, so some SNAT seems to be happening on the bridge.
Finally, printing the nat iptables table revealed the cause of the problem:
$ sudo iptables -L -t nat
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere
...
Because my container's eth1 IP (172.17.99.1) falls within 172.17.0.0/16, packets sent out over bridge1 also match this rule and have their source IP rewritten. This is why the ping responses can't make it back to the source.
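One quick way to confirm this is to watch the counters on the MASQUERADE rule while pinging over eth1 (a simple check; exact counters will differ):
$ sudo iptables -t nat -L POSTROUTING -v -n
# the pkts counter on the MASQUERADE rule increases for every ping sent via eth1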
Conclusion
The solution is to put the containers' eth1 addresses on a network that does not overlap with that of the default docker0 bridge (172.17.0.0/16).
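For example, a minimal sketch of two possible fixes (the container ID and the 192.168.99.0/24 subnet are illustrative):
# Option 1: re-add eth1 with pipework on a subnet that does not overlap docker0's 172.17.0.0/16
$ sudo pipework bridge1 -i eth1 <container-id> 192.168.99.1/24
# Option 2: keep 172.17.99.0/24 but exempt container-to-container traffic from masquerading
$ sudo iptables -t nat -I POSTROUTING -s 172.17.99.0/24 -d 172.17.99.0/24 -j ACCEPT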


Related

Strongswan: packets received and decrypted correctly but not forwarded

I have a LAN-to-LAN VPN tunnel between a Cisco CSR router and Strongswan. On Strongswan I see:
[root@ip-172-31-20-224 log]# strongswan status
Security Associations (1 up, 0 connecting):
tenant-13[2]: ESTABLISHED 66 minutes ago, 172.31.20.224[local_public_ip]...remote_public_ip[remote_public_ip]
tenant-13{3}: INSTALLED, TRANSPORT, reqid 1, ESP in UDP SPIs: cdf35340_i cb506e65_o
tenant-13{3}: 172.31.20.224/32 === remote_public_ip/32
tenant-13{147}: INSTALLED, TUNNEL, reqid 3, ESP in UDP SPIs: ca2c0328_i 0295d7bf_o
tenant-13{147}: 0.0.0.0/0 === 0.0.0.0/0
My crypto SAs allow 0/0 -> 0/0, so all looks good.
I do receive encrypted packets on Strongswan and they are decrypted correctly. For example, we can see that the UDP packets arrive (already decrypted) on the virtual VTI interface:
[root@ip-172-31-20-224 log]# tcpdump -i vti13 -n udp port 3000
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vti13, link-type RAW (Raw IP), capture size 262144 bytes
11:19:57.834374 IP 192.168.1.116.54545 > X.X.X.X.hbci: UDP, length 340
Now, X.X.X.X is a public IP address, and those packets should be forwarded (out via eth0 using the default route), but I do not see them when looking with tcpdump:
[root@ip-172-31-20-224 log]# tcpdump -i eth0 -n host X.X.X.X
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
I have only one physical interface (eth0, the IPsec transport and default route) plus one virtual interface (for decrypted traffic), so after decryption the traffic should be sent back out via the same eth0 interface:
[root@ip-172-31-20-224 log]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:ab:39:97:b0:7e brd ff:ff:ff:ff:ff:ff
inet 172.31.20.224/20 brd 172.31.31.255 scope global dynamic eth0
valid_lft 2673sec preferred_lft 2673sec
inet6 fe80::ab:39ff:fe97:b07e/64 scope link
valid_lft forever preferred_lft forever
3: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
9: vti13@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 172.31.20.224 peer 89.68.162.135
inet 1.0.0.2/30 scope global vti13
valid_lft forever preferred_lft forever
inet6 fe80::5efe:ac1f:14e0/64 scope link
valid_lft forever preferred_lft forever
I have confirmed that:
routing is enabled
policy checks are disabled (sysctl -w net.ipv4.conf.default.rp_filter=0 and sysctl -w net.ipv4.conf.vti13.disable_policy=1)
the iptables INPUT, OUTPUT, and FORWARD chains were empty with an ACCEPT policy, but I have also added specific rules and they show 0 hits:
[root@ip-172-31-20-224 log]# iptables -I INPUT -i vti13 -j ACCEPT
[root@ip-172-31-20-224 log]# iptables -I FORWARD -i vti13 -j ACCEPT
[root@ip-172-31-20-224 log]# iptables -L -v -n
Chain INPUT (policy ACCEPT 9 packets, 1164 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- vti13 * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- vti13 * 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 6 packets, 776 bytes)
pkts bytes target prot opt in out source destination
I have added entries to PREROUTING and POSTROUTING just to check whether I see the packets there, and I can confirm that I see them only in PREROUTING (so the packets are indeed not being routed):
[root@ip-172-31-20-224 log]# iptables -L -v -n -t nat
Chain PREROUTING (policy ACCEPT 2 packets, 184 bytes)
pkts bytes target prot opt in out source destination
19192 25M DNAT udp -- vti13 * 0.0.0.0/0 0.0.0.0/0 udp dpt:3000 to:X.X.X.X:3000
I've tried to look via syslog (enabled kernel logging), but did not spot anything interesting.
What is the problem? Why is my Linux not forwarding those packets?
Thanks,
OK, I found the solution. As per https://docs.strongswan.org/docs/5.9/features/routeBasedVpn.html, I had to disable charon.install_routes.
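For reference, that setting lives in strongswan.conf (the exact path varies by packaging; commonly /etc/strongswan.conf or a drop-in under /etc/strongswan.d/). A minimal snippet:
charon {
    # don't install kernel routes for negotiated policies; with a route-based
    # (VTI) setup the routes over vti13 are managed manually instead
    install_routes = no
}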

Iptables NAT one-to-one

I use a Linux server, Fedora 4.14.33-51.37.amzn1.x86_64, and I want to use 1-to-1 NAT.
My setup is as follows:
My server has two network interfaces.
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:8a:59:b9:2d:b8 brd ff:ff:ff:ff:ff:ff
inet 172.10.1.72/25 brd 172.10.1.127 scope global eth0
valid_lft forever preferred_lft forever
inet 172.10.1.32/25 brd 172.10.1.127 scope global secondary eth0
valid_lft forever preferred_lft forever
inet 172.10.1.39/25 brd 172.10.1.127 scope global secondary eth0
valid_lft forever preferred_lft forever
inet 172.10.1.101/25 brd 172.10.1.127 scope global secondary eth0
eth1:
inet 172.10.1.246/28 brd 172.10.1.255 scope global eth1
net.ipv4.ip_forward = 1
How can I make NAT work like this:
(eth0)172.10.1.101 - (server1)192.168.1.10
(eth0)172.10.1.32 - (server2)192.168.1.11
and so on.
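A minimal sketch of the 1-to-1 rule pairs for those mappings (same pattern as the rules shown further down, with the private hosts reachable via eth1 as per the route table below):
iptables -t nat -A PREROUTING  -d 172.10.1.101 -i eth0 -j DNAT --to-destination 192.168.1.10
iptables -t nat -A POSTROUTING -s 192.168.1.10 -o eth0 -j SNAT --to-source 172.10.1.101
iptables -t nat -A PREROUTING  -d 172.10.1.32  -i eth0 -j DNAT --to-destination 192.168.1.11
iptables -t nat -A POSTROUTING -s 192.168.1.11 -o eth0 -j SNAT --to-source 172.10.1.32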
The route table on the NAT server:
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.10.1.1 0.0.0.0 UG 0 0 0 eth0
0.0.0.0 172.10.1.241 0.0.0.0 UG 10001 0 0 eth1
10.0.0.0 172.10.1.1 255.0.0.0 UG 0 0 0 eth0
169.254.169.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
172.10.1.0 0.0.0.0 255.255.255.128 U 0 0 0 eth0
172.10.1.240 0.0.0.0 255.255.255.240 U 0 0 0 eth1
192.168.1.0 172.10.1.241 255.255.255.240 UG 0 0 0 eth1
My current iptables settings:
iptables -nvL
Chain INPUT (policy ACCEPT 1726 packets, 115K bytes)
pkts bytes target prot opt in out source destination
1827 121K LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
664 55128 ACCEPT all -- eth0 eth1 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- eth1 eth0 0.0.0.0/0 0.0.0.0/0
0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
Chain OUTPUT (policy ACCEPT 2123 packets, 668K bytes)
pkts bytes target prot opt in out source destination
2123 668K LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
and
iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 36 packets, 2476 bytes)
pkts bytes target prot opt in out source destination
8 528 DNAT all -- eth0 * 0.0.0.0/0 172.10.1.101 to:192.168.1.10
Chain INPUT (policy ACCEPT 36 packets, 2476 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 195 packets, 14344 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 202 packets, 14788 bytes)
pkts bytes target prot opt in out source destination
0 0 SNAT all -- * eth0 192.168.1.10 0.0.0.0/0 to:172.10.1.101
When I try to check my NAT server with telnet 172.10.1.101 4016, I get an error:
telnet: connect to address 172.10.1.101: Connection timed out
My server 192.168.1.10 is listening on port 4016.
On my NAT server I don't see anything in the logs.
But when I try to connect to another IP on my eth0 interface, I see this in the log:
Jun 18 15:04:39 ip-172-10-1-72 kernel: [ 1245.059113] IN= OUT=eth0 SRC=172.10.1.39 DST=10.68.72.90 LEN=40 TOS=0x10 PREC=0x00 TTL=255 ID=57691 DF PROTO=TCP SPT=4016 DPT=47952 WINDOW=0 RES=0x00 ACK RST URGP=0
10.68.72.90 is my machine in front of the NAT. After adding the rules in iptables, I am not able to ping my IP 172.10.1.101.
It turned out I had the right rules:
iptables -t nat -A POSTROUTING -s 192.168.1.11 -o eth0 -j SNAT --to-source 172.10.1.32
iptables -t nat -A PREROUTING -d 172.10.1.32 -i eth0 -j DNAT --to-destination 192.168.1.11
The problem was in AWS. I had to turn off the "source/destination check" for the NAT server's network interface. Also, all static routes must be in the route table of the subnet where my server is located, not on the servers 192.168.1.10 and 192.168.1.11.
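If you do this from the CLI rather than the console, disabling the source/destination check on the NAT instance's interface looks roughly like this (the eni ID is a placeholder):
aws ec2 modify-network-interface-attribute \
    --network-interface-id eni-0123456789abcdef0 \
    --no-source-dest-check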

Send raw IP packet with tun device

I'm trying to programmatically construct and send IP packets through a TUN device.
I've set up the TUN device and proper routes:
# ip tuntap add mode tun tun0
# ip link set tun0 up
# ip addr add 10.0.0.2/24 dev tun0
which results in:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 600 0 0 wlp3s0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 tun0
192.168.0.0 0.0.0.0 255.255.255.0 U 600 0 0 wlp3s0
$ ifconfig tun0
tun0: flags=4241<UP,POINTOPOINT,NOARP,MULTICAST> mtu 1500
inet 10.0.0.2 netmask 255.255.255.0 destination 10.0.0.2
inet6 fe80::f834:5267:3a1:5d1d prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
IP forwarding is ON: # echo 1 > /proc/sys/net/ipv4/ip_forward
I've set up NAT for tun0 packets:
# iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o wlp3s0 -j MASQUERADE
# iptables -t nat -L -v
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- any wlp3s0 10.0.0.0/24 anywhere
Then I have a Python script that produces ICMP packets:
import os
from fcntl import ioctl
import struct
import time
import random

# pip install pypacker==4.0
from pypacker.layer3.ip import IP
from pypacker.layer3.icmp import ICMP

TUNSETIFF = 0x400454ca
IFF_TUN = 0x0001
IFF_NO_PI = 0x1000

ftun = os.open("/dev/net/tun", os.O_RDWR)
ioctl(ftun, TUNSETIFF, struct.pack("16sH", b"tun0", IFF_TUN | IFF_NO_PI))

req_nr = 1
req_id = random.randint(1, 65000)
while True:
    icmp_req = IP(src_s="10.0.0.2", dst_s="8.8.8.8", p=1) +\
        ICMP(type=8) +\
        ICMP.Echo(id=req_id, seq=req_nr, body_bytes=b"povilas-test")
    os.write(ftun, icmp_req.bin())
    time.sleep(1)
    req_nr += 1
I can see packets originating from tun0 interface:
# tshark -i tun0
1 0.000000000 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=1/256, ttl=64
2 1.001695939 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=2/512, ttl=64
3 2.003375319 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=3/768, ttl=6
But the wlp3s0 interface is silent, so it seems the packets don't get NATed and routed to the wlp3s0 interface, which is my WLAN card.
Any ideas what I am missing?
I'm running Debian 9.
It turns out that packet forwarding was disabled; the default policy for the FORWARD chain is DROP:
# iptables -L -v
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
So I changed the policy:
# iptables -P FORWARD ACCEPT
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
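A narrower alternative, instead of opening the whole chain, would be to allow just the tun0/wlp3s0 path (a sketch, untested here):
# forward packets arriving on the tun device out through the WLAN interface
iptables -A FORWARD -i tun0 -o wlp3s0 -j ACCEPT
# and let the replies back in
iptables -A FORWARD -i wlp3s0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT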
Also, I had to change the IP packet source address to something other than 10.0.0.2, which is the preferred source address of the tun0 interface:
$ ip route
10.0.0.0/24 dev tun0 proto kernel scope link src 10.0.0.2
So I changed the packet source address to 10.0.0.4:
icmp_req = IP(src_s="10.0.0.4", dst_s="8.8.8.8", p=1) +\
    ICMP(type=8) +\
    ICMP.Echo(id=req_id, seq=req_nr, body_bytes=b"povilas-test")
Then the kernel started forwarding packets coming from the tun0 interface to the gateway interface:
# tshark -i wlp3s0
5 0.008428567 192.168.0.103 → 8.8.8.8 ICMP 62 Echo (ping) request id=0xb5c7, seq=9/2304, ttl=63
6 0.041114028 8.8.8.8 → 192.168.0.103 ICMP 62 Echo (ping) reply id=0xb5c7, seq=9/2304, ttl=48 (request in 5)
Also ping responses were sent back to tun0:
# tshark -i tun0
1 0.000000000 10.0.0.4 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb5c7, seq=113/28928, ttl=64
2 0.035470191 8.8.8.8 → 10.0.0.4 ICMP 48 Echo (ping) reply id=0xb5c7, seq=113/28928, ttl=47 (request in 1)

Why can't I disable multicast requests?

I use the following command to disable the eth0 interface's multicast mode, but it doesn't work:
sudo ifconfig eth0 -multicast
After doing this, eth0's configuration looks like this:
ifconfig -v eth0
eth0 Link encap:Ethernet HWaddr 00:16:3E:E8:43:01
inet addr:10.232.67.1 Bcast:10.232.67.255 Mask:255.255.255.0
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:46728751 errors:0 dropped:0 overruns:0 frame:0
TX packets:15981372 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8005709841 (7.4 GiB) TX bytes:3372203819 (3.1 GiB)
Then I send ICMP echo requests from host 10.232.67.2:
ping 224.0.0.1
and capture the packets with tcpdump on host 10.232.67.1:
tcpdump -i eth0 host 224.0.0.1 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
21:11:03.182813 IP 10.232.67.2 > 224.0.0.1: ICMP echo request, id 3639, seq 324, length 64
21:11:04.184667 IP 10.232.67.2 > 224.0.0.1: ICMP echo request, id 3639, seq 325, length 64
21:11:05.186781 IP 10.232.67.2 > 224.0.0.1: ICMP echo request, id 3639, seq 326, length 64
So, how do I disable multicast mode?
By the way, when I disable broadcast:
sudo ifconfig eth0 -broadcast
the error message is:
Warning: Interface eth0 still in BROADCAST mode.
So why can't I stop broadcast mode?
To disable multicast on an interface (you had it right):
ifconfig eth0 -multicast
tcpdump without the -p flag puts the interface in PROMISCUOUS mode, so it captures any traffic on the wire. If you're on a hub, you can see traffic sent to/from everyone else. When an interface isn't in promiscuous mode, it ignores all traffic not sent to it. If you try the tcpdump and ping without promiscuous mode, and multicast disabled, you shouldn't see the traffic:
tcpdump -p -i eth0 host 224.0.0.1 -n
You don't want to disable BROADCAST mode. It just designates the broadcast address for your subnet.
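If you prefer iproute2 over the legacy ifconfig, the equivalent should be (a sketch of the same operation):
ip link set dev eth0 multicast off
# check that the MULTICAST flag is gone
ip link show dev eth0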

Linux/RHEL5: UDP on IPv6 does not work on the same pc

I'm trying to set up a netcat server/client with UDP and IPv6 on the same PC.
Here are the interfaces on my PC:
[root@rh55hp360g7ss7 trunk_dir]# ifconfig
eth0 Link encap:Ethernet HWaddr xxx
inet addr:192.168.255.166 Bcast:192.168.255.255 Mask:255.255.255.0
inet6 addr: fe80::1ec1:deff:fef3:4870/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:21948499 errors:0 dropped:0 overruns:0 frame:0
TX packets:24300265 errors:0 dropped:0 overruns:0 carrier:0
collisions:360733 txqueuelen:1000
RX bytes:3645218404 (3.3 GiB) TX bytes:1672728274 (1.5 GiB)
Interrupt:162 Memory:f4000000-f4012800
Then I start the netcat server:
nc -6ul fe80::1ec1:deff:fef3:4870%eth0 5678
And the netcat client (still on the same pc)
nc -6u fe80::1ec1:deff:fef3:4870%eth0 5678
But when I type something into the netcat client, nothing is transferred to the server.
The same example works if:
I start the netcat client on another PC
I'm using TCP instead of UDP (i.e. when I remove the -u option)
I'm using IPv4 instead of IPv6 (i.e. when I remove the -6 option and use the IPv4 address).
Any ideas?
TSohr.
Here is the routing table, in case it might help:
[root@rh55hp360g7ss7 trunk_dir]# route -A inet6
Kernel IPv6 routing table
Destination Next Hop Flags Metric Ref Use Iface
fe80::/64 * U 256 0 0 eth0
::1/128 * U 0 265 5 lo
fe80::1ec1:deff:fef3:4870/128 * U 0 10551 1 lo
ff00::/8 * U 256 0 0 eth0
[root@rh55hp360g7ss7 trunk_dir]#
## Added 2012-03-13
With ::1, it is working.
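For reference, the working loopback variant is simply the same commands with ::1 in place of the link-local address:
# server
nc -6ul ::1 5678
# client, on the same PC
nc -6u ::1 5678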
I have the same problem when trying to run a SIP stack on the pc.
It's only a problem with Red Hat and with link-local scope. When using an address with global scope, it's working.
I tried it out with Ubuntu 10.04; there it also works with link-local addresses.
This is my Red Hat Distribution:
[root@BETESIP02 sipp]# uname -a
Linux BETESIP02 2.6.18-194.el5PAE #1 SMP Tue Mar 16 22:00:21 EDT 2010 i686 i686 i386 GNU/Linux
