Why can't I disable multicast requests - Linux

I use the following command to disable multicast mode on the eth0 interface, but it doesn't work:
sudo ifconfig eth0 -multicast
After doing this, eth0's configuration looks like this:
ifconfig -v eth0
eth0 Link encap:Ethernet HWaddr 00:16:3E:E8:43:01
inet addr:10.232.67.1 Bcast:10.232.67.255 Mask:255.255.255.0
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:46728751 errors:0 dropped:0 overruns:0 frame:0
TX packets:15981372 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8005709841 (7.4 GiB) TX bytes:3372203819 (3.1 GiB)
Then I send ICMP echo requests from host 10.232.67.2:
ping 224.0.0.1
and capture the packets with tcpdump on host 10.232.67.1:
tcpdump -i eth0 host 224.0.0.1 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
21:11:03.182813 IP 10.232.67.2 > 224.0.0.1: ICMP echo request, id 3639, seq 324, length 64
21:11:04.184667 IP 10.232.67.2 > 224.0.0.1: ICMP echo request, id 3639, seq 325, length 64
21:11:05.186781 IP 10.232.67.2 > 224.0.0.1: ICMP echo request, id 3639, seq 326, length 64
So, how do I disable multicast mode?
By the way, when I disable broadcast:
sudo ifconfig eth0 -broadcast
the error message is:
Warning: Interface eth0 still in BROADCAST mode.
So why can't I stop broadcast mode?

To disable multicast on an interface (you had it right):
ifconfig eth0 -multicast
tcpdump without the -p flag puts the interface in PROMISCUOUS mode, so it captures any traffic on the wire. If you're on a hub, you can see traffic sent to/from everyone else. When an interface isn't in promiscuous mode, it ignores all traffic not sent to it. If you repeat the tcpdump and ping with promiscuous mode off and multicast disabled, you shouldn't see the traffic:
tcpdump -p -i eth0 host 224.0.0.1 -n
You don't want to disable BROADCAST mode. It just designates the broadcast address for your subnet.
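If the ifconfig flag doesn't seem to take effect, here is a minimal sketch of the same operation with iproute2 (assuming a reasonably modern distribution that ships the ip tool):
# turn the MULTICAST flag off and confirm it is gone from the flags line
sudo ip link set dev eth0 multicast off
ip link show dev eth0
# capture with -p so the NIC stays out of promiscuous mode; multicast
# frames should no longer show up
sudo tcpdump -p -i eth0 host 224.0.0.1 -n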

Related

Why can't Docker containers communicate with each other?

I have created a small project to test Docker clustering. Basically, the cluster.sh script launches three identical containers and uses pipework to configure a bridge (bridge1) on the host and add a NIC (eth1) to each container.
If I log into one of the containers, I can arping other containers:
# 172.17.99.1
root@d01eb56fce52:/# arping 172.17.99.2
ARPING 172.17.99.2
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=0 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=1 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=2 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=3 time=1.001 sec
^C
--- 172.17.99.2 statistics ---
5 packets transmitted, 4 packets received, 20% unanswered (0 extra)
So it seems packets can go through bridge1.
But the problem is that I can't ping the other containers, nor can I send any IP packets through with tools like telnet or netcat.
In contrast, the bridge docker0 and NIC eth0 work correctly in all containers.
Here's my routing table:
# 172.17.99.1
root@d01eb56fce52:/# ip route
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.17
172.17.99.0/24 dev eth1 proto kernel scope link src 172.17.99.1
and the bridge config:
# host
$ brctl show
bridge name bridge id STP enabled interfaces
bridge1 8000.8a6b21e27ae6 no veth1pl25432
veth1pl25587
veth1pl25753
docker0 8000.56847afe9799 no veth7c87801
veth953a086
vethe575fe2
# host
$ brctl showmacs bridge1
port no mac addr is local? ageing timer
1 8a:6b:21:e2:7a:e6 yes 0.00
2 8a:a3:b8:90:f3:52 yes 0.00
3 f6:0c:c4:3d:f5:b2 yes 0.00
# host
$ ifconfig
bridge1 Link encap:Ethernet HWaddr 8a:6b:21:e2:7a:e6
inet6 addr: fe80::48e9:e3ff:fedb:a1b6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:163 errors:0 dropped:0 overruns:0 frame:0
TX packets:68 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:8844 (8.8 KB) TX bytes:12833 (12.8 KB)
# I'm showing only one veth here for simplicity
veth1pl25432 Link encap:Ethernet HWaddr 8a:6b:21:e2:7a:e6
inet6 addr: fe80::886b:21ff:fee2:7ae6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:155 errors:0 dropped:0 overruns:0 frame:0
TX packets:162 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12366 (12.3 KB) TX bytes:23180 (23.1 KB)
...
and the iptables FORWARD chain:
# host
$ sudo iptables -x -v --line-numbers -L FORWARD
Chain FORWARD (policy ACCEPT 10675 packets, 640500 bytes)
num pkts bytes target prot opt in out source destination
1 15018 22400195 DOCKER all -- any docker0 anywhere anywhere
2 15007 22399271 ACCEPT all -- any docker0 anywhere anywhere ctstate RELATED,ESTABLISHED
3 8160 445331 ACCEPT all -- docker0 !docker0 anywhere anywhere
4 11 924 ACCEPT all -- docker0 docker0 anywhere anywhere
5 56 4704 ACCEPT all -- bridge1 bridge1 anywhere anywhere
Note the pkts count for rule 5 isn't 0, which means the ping has been routed correctly (the FORWARD chain is evaluated after routing, right?), but somehow didn't reach the destination.
I'm out of ideas as to why docker0 and bridge1 behave differently. Any suggestions?
Update 1
Here's the tcpdump output on the target container when pinged from another.
$ tcpdump -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
22:11:17.754261 IP 192.168.1.65 > 172.17.99.1: ICMP echo request, id 26443, seq 1, length 6
Note the source IP is 192.168.1.65, which is the host's eth0 address, so there seems to be some SNAT going on at the bridge.
Finally, printing the NAT table revealed the cause of the problem:
$ sudo iptables -L -t nat
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere
...
Because my container's eth1 address (172.17.99.1) falls inside 172.17.0.0/16, packets sent over bridge1 also match the MASQUERADE rule and get their source IP rewritten. This is why the ping replies can't make it back to the source.
Conclusion
The solution is to put the eth1 (bridge1) network on a subnet that doesn't overlap with the default docker0 network that eth0 is on.
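As an alternative to renumbering, a sketch of exempting bridge1's subnet from Docker's MASQUERADE rule; this assumes the subnets from the question, and note Docker may re-add its NAT rules on restart:
# host: insert a RETURN ahead of Docker's MASQUERADE so container-to-container
# traffic on bridge1 keeps its original source address
sudo iptables -t nat -I POSTROUTING 1 -s 172.17.99.0/24 -d 172.17.99.0/24 -j RETURN
# verify the rule order
sudo iptables -t nat -L POSTROUTING -n --line-numbers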

Why can't I connect to eth0 with PuTTY?

eth0 Link encap:Ethernet HWaddr 06:20:08:46:CD:C7
inet addr:172.26.26.60 Bcast:172.26.26.63 Mask:255.255.255.192
inet6 addr: fe80::420:8ff:fe46:cdc7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:1577 errors:0 dropped:0 overruns:0 frame:0
TX packets:1340 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:141738 (138.4 KiB) TX bytes:215770 (210.7 KiB)
Interrupt:165
eth1 Link encap:Ethernet HWaddr 06:81:8C:00:F4:F2
inet addr:172.26.26.37 Bcast:172.26.26.63 Mask:255.255.255.192
inet6 addr: fe80::481:8cff:fe00:f4f2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:269 errors:0 dropped:0 overruns:0 frame:0
TX packets:56 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:16216 (15.8 KiB) TX bytes:8921 (8.7 KiB)
Interrupt:164
The server is a LAMP environment. I used DHCP to add eth1; ifconfig shows the output above. I can't connect to eth0 at IP 172.26.26.60 with PuTTY; I can only connect to eth1 at 172.26.26.37. eth0 is configured but not reachable.
I want to know why I can't connect to 172.26.26.60.
Edit:
route -n
outputs the following:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.26.26.0 0.0.0.0 255.255.255.192 U 0 0 0 eth0
172.26.26.0 0.0.0.0 255.255.255.192 U 0 0 0 eth1
0.0.0.0 172.26.26.1 0.0.0.0 UG 0 0 0 eth1
Check your SSH server config. It might only be listening on one address. netstat -an | grep 22 | grep LISTEN should show you which addresses it's listening on.
Step 0 - ping, check cables, etc.
Step 1 - check if sshd is really listening on 0.0.0.0. Otherwise it's only available on the corresponding interface.
Step 2 - check iptables -nvL
Step 3 - check route -n
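A minimal consolidated sketch of steps 1-3 (assuming sshd and standard tools; also note that your route -n output shows both NICs on the same /26, so replies to eth0's address may leave via eth1 and be dropped by reverse-path filtering):
# which addresses is sshd bound to? 0.0.0.0:22 means all interfaces
netstat -an | grep ':22' | grep LISTEN
# a ListenAddress line in sshd_config restricts sshd to one address
grep -i '^ListenAddress' /etc/ssh/sshd_config
# strict reverse-path filtering (rp_filter=1) drops asymmetric replies;
# loose mode (2) is one way to test whether that is the culprit
sysctl net.ipv4.conf.all.rp_filter
sudo sysctl -w net.ipv4.conf.eth0.rp_filter=2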

Zolertia Z1 connection with tunslip for webserver

I want to install a webserver on my Zolertia Z1 sensor. I followed the steps here: http://wismote.org/doku.php?id=development:sample_code
When I run the tunslip program like this:
sudo ./tunslip -B 115200 -s /dev/ttyUSB0 192.168.1.1 255.255.255.0
the output is:
slip started on ``/dev/ttyUSB0''
opened device ``/dev/tun0''
ifconfig tun0 inet `hostname` up
route add -net 192.168.1.0 netmask 255.255.255.0 dev tun0
ifconfig tun0
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.1.1 P-t-P:127.0.1.1 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
The route on tun0 is created, but it doesn't detect my sensor connected over the serial line. There is no "route add -net 192.168.1.2 netmask 255.255.255.255 dev tun0" at the end, and I don't know why. I also don't know whether I need to change the flag for a TAP or a TUN device.
If I log in to my sensor with "make login", it works fine, so the program is correctly installed on it.
I tried this on a virtual image with Contiki and on Ubuntu 12.04.4 LTS x86_64, with the same result on both OSes.
Maybe you have to change your baud rate; it is usually around 38400 baud on Zolertia Z1 motes:
sudo ./tunslip -B 38400 -s /dev/ttyUSB0 192.168.1.1 255.255.255.0
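Before blaming tunslip, you can also check whether the mote is actually emitting SLIP at that rate; a quick sketch with stty, assuming the sensor really is on /dev/ttyUSB0:
# set the port to 38400 baud, raw mode, and dump whatever the mote sends;
# SLIP traffic shows up as binary frames delimited by 0xc0 (END) bytes
sudo stty -F /dev/ttyUSB0 38400 raw -echo
sudo cat /dev/ttyUSB0 | hexdump -C | head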

ip route src not working

I need to use several IPs on an interface in Linux and switch between them, but it's not working.
For example:
# ifconfig
eth0 Link encap:Ethernet HWaddr 90:2b:34:33:80:65
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::922b:34ff:fe33:8065/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13220652 errors:0 dropped:32864 overruns:0 frame:0
TX packets:8296620 errors:0 dropped:0 overruns:0 carrier:1
collisions:0 txqueuelen:1000
RX bytes:16166162509 (16.1 GB) TX bytes:2186645852 (2.1 GB)
Interrupt:48
eth0:1 Link encap:Ethernet HWaddr 90:2b:34:33:80:65
inet addr:192.168.1.99 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:48
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:276685 errors:0 dropped:0 overruns:0 frame:0
TX packets:276685 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:21387439 (21.3 MB) TX bytes:21387439 (21.3 MB)
# ip route
169.254.0.0/16 dev eth0 scope link metric 1000
192.168.1.0/24 dev eth0 scope link src 192.168.1.99
# ping 192.168.1.50
PING 192.168.1.50 (192.168.1.50) 56(84) bytes of data.
64 bytes from 192.168.1.50: icmp_req=1 ttl=255 time=1.21 ms
64 bytes from 192.168.1.50: icmp_req=2 ttl=255 time=1.07 ms
64 bytes from 192.168.1.50: icmp_req=3 ttl=255 time=1.05 ms
and then I use curl:
# curl 192.168.1.50
and tcpdump:
13:24:02.009094 IP 192.168.1.1 > 192.168.1.50: ICMP echo request, id 8919, seq 347, length 64
E..T..#.#..%.......2...s"..[...P....q#...................... !"#$%&'()*+,-./01234567
13:24:02.010087 IP 192.168.1.50 > 192.168.1.1: ICMP echo reply, id 8919, seq 347, length 64
E..T...........2.......s"..[...P....q#...................... !"#$%&'()*+,-./01234567
13:43:40.006264 IP 192.168.1.1.48275 > 192.168.1.50.80: Flags [S], seq 3496592766, win 14600, options [mss 1460,sackOK,TS val 15698502 ecr 0,nop,wscale 7], length 0
E..<O8#.#.h........2...P.i.~......9............
...F........
13:43:40.007663 IP 192.168.1.50.80 > 192.168.1.1.48275: Flags [S.], seq 3006420619, ack 3496592767, win 5792, options [mss 1460,sackOK,TS val 151247914 ecr 15698502,nop,wscale 0], length 0
E..<..#.#..8...2.....P...2V..i.................
..*...F....
and the source IP is still 192.168.1.1. What am I doing wrong?
UPD: BTW, I just tried the same thing on Ubuntu 10.04 with a 2.6.32 kernel, and everything works fine; you don't even need to add "-I" to the ping command. It seems this feature is broken in my kernel (3.2.0).
The src option only affects packets whose source address is chosen by the operating system. The ping program chooses its own source address. If you want to influence its choice, use the -I or -B options. If you want to see if your src option is working, make a TCP connection or send a UDP datagram.
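For example, to pin the source address per tool (both flags are standard; curl's --interface also accepts a plain IP address):
# force ping's source address explicitly
ping -I 192.168.1.99 192.168.1.50
# bind curl to a specific local address
curl --interface 192.168.1.99 http://192.168.1.50/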
Try adding a specific route like the one below:
ip route add 192.168.1.50/32 dev eth0 proto static src 192.168.1.99
This should result in any traffic to 192.168.1.50 being sent with a source IP of 192.168.1.99 instead of the default 192.168.1.1.
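You can verify which source address the kernel would pick for that destination without sending any traffic:
# prints the matching route, including the chosen src address
ip route get 192.168.1.50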

Linux/RHEL5: UDP on IPv6 does not work on the same PC

I'm trying to set up a netcat server/client with UDP and IPv6 on the same PC.
Here are the interfaces on my PC:
[root@rh55hp360g7ss7 trunk_dir]# ifconfig
eth0 Link encap:Ethernet HWaddr xxx
inet addr:192.168.255.166 Bcast:192.168.255.255 Mask:255.255.255.0
inet6 addr: fe80::1ec1:deff:fef3:4870/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:21948499 errors:0 dropped:0 overruns:0 frame:0
TX packets:24300265 errors:0 dropped:0 overruns:0 carrier:0
collisions:360733 txqueuelen:1000
RX bytes:3645218404 (3.3 GiB) TX bytes:1672728274 (1.5 GiB)
Interrupt:162 Memory:f4000000-f4012800
Then I start the netcat server:
nc -6ul fe80::1ec1:deff:fef3:4870%eth0 5678
And the netcat client (still on the same PC):
nc -6u fe80::1ec1:deff:fef3:4870%eth0 5678
But when I type something into the netcat client, nothing is transferred to the server.
The same example works if:
I start the netcat client on another PC
I use TCP instead of UDP (i.e. when I remove the -u option)
I use IPv4 instead of IPv6 (i.e. when I remove the -6 option and use the IPv4 address)
Any ideas?
TSohr.
Here is the routing table, in case it might help:
[root@rh55hp360g7ss7 trunk_dir]# route -A inet6
Kernel IPv6 routing table
Destination Next Hop Flags Metric Ref Use Iface
fe80::/64 * U 256 0 0 eth0
::1/128 * U 0 265 5 lo
fe80::1ec1:deff:fef3:4870/128 * U 0 10551 1 lo
ff00::/8 * U 256 0 0 eth0
[root@rh55hp360g7ss7 trunk_dir]#
## Added 2012-03-13
With ::1, it is working.
I have the same problem when trying to run a SIP stack on the PC.
It's only a problem with Red Hat and with link-local scope. When using an address with global scope, it's working.
I tried it with Ubuntu 10.04; there it also works with link-local addresses.
This is my Red Hat distribution:
[root@BETESIP02 sipp]# uname -a
Linux BETESIP02 2.6.18-194.el5PAE #1 SMP Tue Mar 16 22:00:21 EDT 2010 i686 i686 i386 GNU/Linux
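Since ::1 works on this kernel, a minimal sketch of the loopback workaround for same-host testing (same nc syntax as above; only the address changes):
# terminal 1: UDP server on the IPv6 loopback address
nc -6ul ::1 5678
# terminal 2: client on the same host
nc -6u ::1 5678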
