UDP packets received on veth, caught by tcpdump, accepted by iptables, but not forwarded to netcat - linux

I have two namespaces, srv1 and srv2, interconnected via a softswitch (P4 bmv2) with veth pairs. The softswitch does just simple forwarding. The veth interfaces inside the namespaces have IP addresses assigned to them (192.168.1.1 and 192.168.1.2 respectively). I can ping between the two namespaces using those IP addresses:
sudo ip netns exec srv1 ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=1.03 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=1.04 ms
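For reference, the namespace side of such a setup is typically created along these lines (a sketch; only srv1p/srv2p appear in the transcripts below, so the switch-side peer names and the bmv2 wiring are my assumption):
sudo ip netns add srv1
sudo ip link add srv1p type veth peer name sw1p   # sw1p is a hypothetical name for the switch-side end
sudo ip link set srv1p netns srv1
sudo ip netns exec srv1 ip addr add 192.168.1.1/24 dev srv1p
sudo ip netns exec srv1 ip link set srv1p up
(and the same for srv2/srv2p with 192.168.1.2/24)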
But when I try netcat, I don't receive messages on the server side:
client:
sudo ip netns exec srv1 netcat 192.168.1.2 80 -u
hello!
server:
sudo ip netns exec srv2 netcat -l 80 -u
The interface receives the packets in the proper format. I verified with tcpdump on both namespaces and saw the packets being sent and received properly:
client:
sudo ip netns exec srv1 tcpdump -XXvv -i srv1p
tcpdump: listening on srv1p, link-type EN10MB (Ethernet), capture size 262144 bytes
^C06:09:41.088601 IP (tos 0x0, ttl 64, id 14169, offset 0, flags [DF], proto UDP (17), length 35)
192.168.1.1.55080 > 192.168.1.2.http: [bad udp cksum 0x8374 -> 0x5710!] UDP, length 7
0x0000: 00aa bbcc dd02 00aa bbcc dd01 0800 4500 ..............E.
0x0010: 0023 3759 4000 4011 801d c0a8 0101 c0a8 .#7Y#.#.........
0x0020: 0102 d728 0050 000f 8374 6865 6c6c 6f21 ...(.P...thello!
0x0030: 0a .
1 packet captured
1 packet received by filter
0 packets dropped by kernel
server:
sudo ip netns exec srv2 tcpdump -XXvv -i srv2p
tcpdump: listening on srv2p, link-type EN10MB (Ethernet), capture size 262144 bytes
^C06:09:41.089232 IP (tos 0x0, ttl 64, id 14169, offset 0, flags [DF], proto UDP (17), length 35)
192.168.1.1.55080 > 192.168.1.2.http: [bad udp cksum 0x8374 -> 0x5710!] UDP, length 7
0x0000: 00aa bbcc dd02 00aa bbcc dd01 0800 4500 ..............E.
0x0010: 0023 3759 4000 4011 801d c0a8 0101 c0a8 .#7Y#.#.........
0x0020: 0102 d728 0050 000f 8374 6865 6c6c 6f21 ...(.P...thello!
0x0030: 0a .
1 packet captured
1 packet received by filter
0 packets dropped by kernel
I added iptables rules on srv2 to ACCEPT UDP packets on port 80 and to LOG them:
sudo ip netns exec srv2 iptables -t filter -A INPUT -p udp --dport 80 -j ACCEPT
sudo ip netns exec srv2 iptables -I INPUT -p udp --dport 80 -j LOG --log-prefix " IPTABLES " --log-level=debug
I can see the counters increasing on those entries and the packets being logged to /var/log/kern.log, but the message never reaches netcat's listening socket.
sudo ip netns exec srv2 iptables -L -n -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
1 33 LOG udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:80 LOG flags 0 level 7 prefix " IPTABLES "
4 133 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:80
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
kernel logs:
kernel: [581970.306032] IPTABLES IN=srv2p OUT= MAC=00:aa:bb:cc:dd:02:00:aa:bb:cc:dd:01:08:00 SRC=192.168.1.1 DST=192.168.1.2 LEN=33 TOS=0x00 PREC=0x00 TTL=64 ID=51034 DF PROTO=UDP SPT=48784 DPT=80 LEN=13
When I replace the softswitch with a Linux bridge, netcat works. I thought maybe the softswitch corrupts the packets, but tcpdump shows the right format. The UDP checksum is not correct, but it is generated like that by the source server, and it is the same when using the Linux bridge anyway, yet it works in that case. Is there a way to know why those packets do not reach the netcat server?
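A note on diagnosing this (my suggestion, not from the original post): the capture reports a bad UDP checksum, and Linux silently drops UDP datagrams whose checksum fails verification, after the iptables INPUT chain but before socket delivery, which would match these symptoms exactly. One way to confirm is to watch the UDP checksum-error counter in srv2 while sending:
sudo ip netns exec srv2 nstat -az UdpInCsumErrors
If that counter increments with each message, a plausible fix is to disable TX checksum offload on the client veth so the checksum is actually computed before the packet reaches the userspace switch; with an in-kernel bridge the offloaded checksum is never validated, which would explain why that case works:
sudo ip netns exec srv1 ethtool -K srv1p tx off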

Packets not getting forwarded on Centos7 between GRE tunnels

I have configured GRE tunnels between CentOS machines, with corresponding routing tables on the individual machines, as shown in the image:
I'm able to ping from Router-1 to the gre1 tunnel's other end:
worker]# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.43 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.472 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.291 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.319 ms
The traffic reaches the Transit Router over the GRE tunnel (verified by tcpdump proto gre).
Ping from Router-2 to the gre2 tunnel's other end:
worker]# ping 11.0.0.2
PING 11.0.0.2 (11.0.0.2) 56(84) bytes of data.
64 bytes from 11.0.0.2: icmp_seq=1 ttl=64 time=1.10 ms
64 bytes from 11.0.0.2: icmp_seq=2 ttl=64 time=0.392 ms
64 bytes from 11.0.0.2: icmp_seq=3 ttl=64 time=0.369 ms
64 bytes from 11.0.0.2: icmp_seq=4 ttl=64 time=0.258 ms
This traffic too flows over the tunnel,
and on the Transit Router I'm able to ping the private address of both Router-1 and Router-2 after adding the routing entries:
Transit Router:
[root@vmc-centos conf]# ping 10.2.32.1
PING 10.2.32.1 (10.2.32.1) 56(84) bytes of data.
64 bytes from 10.2.32.1: icmp_seq=1 ttl=64 time=0.589 ms
64 bytes from 10.2.32.1: icmp_seq=2 ttl=64 time=0.380 ms
64 bytes from 10.2.32.1: icmp_seq=3 ttl=64 time=0.383 ms
Router-1:
worker]# tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
04:54:36.684864 IP 10.206.83.3 > 10.206.90.103: GREv0, length 88: IP 10.0.0.2 > 10.2.32.1: ICMP echo request, id 20445, seq 34, length 64
04:54:36.684951 IP 10.206.90.103 > 10.206.83.3: GREv0, length 88: IP 10.2.32.1 > 10.0.0.2: ICMP echo reply, id 20445, seq 34, length 64
04:54:37.684776 IP 10.206.83.3 > 10.206.90.103: GREv0, length 88: IP 10.0.0.2 > 10.2.32.1: ICMP echo request, id 20445, seq 35, length 64
Transit Router:
[root@vmc-centos conf]# ping 10.4.32.1
PING 10.4.32.1 (10.4.32.1) 56(84) bytes of data.
64 bytes from 10.4.32.1: icmp_seq=1 ttl=64 time=0.553 ms
64 bytes from 10.4.32.1: icmp_seq=2 ttl=64 time=0.325 ms
64 bytes from 10.4.32.1: icmp_seq=3 ttl=64 time=0.354 ms
Router-2:
worker]# sudo tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
04:56:57.549823 IP 10.206.83.3 > 10.206.86.199: GREv0, length 88: IP 11.0.0.2 > 10.4.32.1: ICMP echo request, id 20690, seq 24, length 64
04:56:57.549896 IP 10.206.86.199 > 10.206.83.3: GREv0, length 88: IP 10.4.32.1 > 11.0.0.2: ICMP echo reply, id 20690, seq 24, length 64
But now, when I try to reach the private network of Router-2 (10.4.32.1) from Router-1, the packets reach the Transit Router but are not forwarded from there to Router-2:
Router-1:
worker]# ping 10.4.32.1
PING 10.4.32.1 (10.4.32.1) 56(84) bytes of data.
Transit Router:
[root@vmc-centos conf]# tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
04:59:06.382024 IP 10.206.90.103 > 10.206.83.3: GREv0, length 88: IP 10.0.0.1 > 10.4.32.1: ICMP echo request, id 36131, seq 40, length 64
04:59:07.382007 IP 10.206.90.103 > 10.206.83.3: GREv0, length 88: IP 10.0.0.1 > 10.4.32.1: ICMP echo request, id 36131, seq 41, length 64
Router-2:
[root@wdc-10-206-86-199 worker]# sudo tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
IP forwarding is enabled on all the machines:
[root@vmc-centos conf]# sudo sysctl -p
net.ipv4.ip_forward = 1
iptables on the transit router:
[root@vmc-centos ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT gre -- anywhere anywhere
ACCEPT gre -- anywhere anywhere
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT gre -- anywhere anywhere
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Note: I have tried this before and the packets were reaching the other private network. Now I'm trying on another setup; there's some config I'm missing.
Got the answer here:
https://serverfault.com/questions/1010565/packets-not-getting-forwarded-on-centos7-between-gre-tunnels
The Docker daemon seems to be running on the forwarding machine. By default, to isolate containers on different bridges from each other and from the host, Docker installs a DROP policy on the FORWARD chain in iptables. There is a setting to tell the Docker daemon not to do this: set iptables to false in /etc/docker/daemon.json. See "Docker and iptables" in the Docker documentation.
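That setting looks like this; a minimal /etc/docker/daemon.json (restart the Docker daemon after changing it):
{
  "iptables": false
}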
If you change the default policy to ACCEPT, that will work:
iptables --policy FORWARD ACCEPT
BUT when the Docker daemon is restarted (by you, by a package upgrade of Docker, or by a reboot), the default policy will change back to DROP if you didn't change the daemon setting.
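A middle ground that survives Docker restarts, per that same documentation, is to leave the policy alone and put your own rules in the DOCKER-USER chain, which Docker consults before its own rules and does not flush. For example (a sketch; gre+ matches any interface name starting with gre, adjust to your tunnel names):
iptables -I DOCKER-USER -i gre+ -j ACCEPT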

PXEBOOT, TFTPD-HPA and Firewall

I have set up a PXE boot which basically works fine. I can run any configured Linux image.
Then I enabled the firewall and opened UDP port 69 for TFTP:
~# iptables -L |grep tftp
ACCEPT udp -- anywhere anywhere udp dpt:tftp
ACCEPT udp -- anywhere anywhere udp dpt:tftp
~# netstat -tulp|grep tftp
udp 0 0 0.0.0.0:tftp 0.0.0.0:* 15869/in.tftpd
udp6 0 0 [::]:tftp [::]:* 15869/in.tftpd
~# cat /etc/services|grep tftp
tftp 69/udp
Now I get a timeout when the PXE boot is pulling tftp://192.168.0.220/images/pxelinux.0 (rc = 4c126035).
"anywhere" is OK here for now, as there is another firewall running between the PXE server and the router which blocks everything unwanted from/to the WAN.
The funny part is that tcpdump shows the request arriving on the PXE boot server:
~# tcpdump port 69
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp5s0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:00:47.062723 IP 192.168.0.136.1024 > mittelerde.tftp: 47 RRQ "images/pxelinux.0" octet blksize 1432 tsize 0
14:00:47.415412 IP 192.168.0.136.1024 > mittelerde.tftp: 47 RRQ "images/pxelinux.0" octet blksize 1432 tsize 0
14:00:48.184506 IP 192.168.0.136.1024 > mittelerde.tftp: 47 RRQ "images/pxelinux.0" octet blksize 1432 tsize 0
14:00:49.722630 IP 192.168.0.136.1024 > mittelerde.tftp: 47 RRQ "images/pxelinux.0" octet blksize 1432 tsize 0
14:00:52.798136 IP 192.168.0.136.1024 > mittelerde.tftp: 47 RRQ "images/pxelinux.0" octet blksize 1432 tsize 0
Once I stop the firewall service, PXE boot works fine again. Of course the conntrack modules are loaded:
~# lsmod|grep conntrack
nf_conntrack_tftp 16384 0
nf_conntrack_ftp 20480 0
xt_conntrack 16384 4
nf_conntrack_ipv4 16384 20
nf_defrag_ipv4 16384 1 nf_conntrack_ipv4
nf_conntrack 131072 9 xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_ipv4,nf_nat,nf_conntrack_tftp,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,nf_conntrack_ftp
libcrc32c 16384 2 nf_conntrack,nf_nat
x_tables 40960 8 xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_MASQUERADE,xt_nat,xt_comment,ip_tables
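One possibility worth ruling out (my assumption, not part of the original question): on kernels 4.7 and later, conntrack helpers are no longer assigned to flows automatically, so having nf_conntrack_tftp loaded is not enough by itself; the helper has to be attached explicitly in the raw table:
iptables -t raw -A PREROUTING -p udp --dport 69 -j CT --helper tftp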
What am I missing here?
Problem solved. For tftpd-hpa, the following UDP ports must be open as well:
1024
49152:49182
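In iptables terms that would be something like (a sketch, following the same open-to-anywhere style as the existing tftp rules):
iptables -A INPUT -p udp --dport 1024 -j ACCEPT
iptables -A INPUT -p udp --dport 49152:49182 -j ACCEPT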

Iptables NAT one-to-one

I use a Linux server, Fedora (kernel 4.14.33-51.37.amzn1.x86_64), and I want to use one-to-one NAT.
My scheme is:
My server has two network interfaces.
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:8a:59:b9:2d:b8 brd ff:ff:ff:ff:ff:ff
inet 172.10.1.72/25 brd 172.10.1.127 scope global eth0
valid_lft forever preferred_lft forever
inet 172.10.1.32/25 brd 172.10.1.127 scope global secondary eth0
valid_lft forever preferred_lft forever
inet 172.10.1.39/25 brd 172.10.1.127 scope global secondary eth0
valid_lft forever preferred_lft forever
inet 172.10.1.101/25 brd 172.10.1.127 scope global secondary eth0
eth1:
inet 172.10.1.246/28 brd 172.10.1.255 scope global eth1
net.ipv4.ip_forward = 1
How can I make NAT work like this:
(eth0) 172.10.1.101 - (server1) 192.168.1.10
(eth0) 172.10.1.32 - (server2) 192.168.1.11
and so on?
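The usual pattern for one-to-one NAT is a DNAT/SNAT rule pair per address pair; e.g. for the first mapping (same form as the rules shown further down):
iptables -t nat -A PREROUTING -d 172.10.1.101 -i eth0 -j DNAT --to-destination 192.168.1.10
iptables -t nat -A POSTROUTING -s 192.168.1.10 -o eth0 -j SNAT --to-source 172.10.1.101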
Route table on the NAT server:
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.10.1.1 0.0.0.0 UG 0 0 0 eth0
0.0.0.0 172.10.1.241 0.0.0.0 UG 10001 0 0 eth1
10.0.0.0 172.10.1.1 255.0.0.0 UG 0 0 0 eth0
169.254.169.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
172.10.1.0 0.0.0.0 255.255.255.128 U 0 0 0 eth0
172.10.1.240 0.0.0.0 255.255.255.240 U 0 0 0 eth1
192.168.1.0 172.10.1.241 255.255.255.240 UG 0 0 0 eth1
My current iptables settings:
iptables -nvL
Chain INPUT (policy ACCEPT 1726 packets, 115K bytes)
pkts bytes target prot opt in out source destination
1827 121K LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
664 55128 ACCEPT all -- eth0 eth1 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- eth1 eth0 0.0.0.0/0 0.0.0.0/0
0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
Chain OUTPUT (policy ACCEPT 2123 packets, 668K bytes)
pkts bytes target prot opt in out source destination
2123 668K LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
and
iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 36 packets, 2476 bytes)
pkts bytes target prot opt in out source destination
8 528 DNAT all -- eth0 * 0.0.0.0/0 172.10.1.101 to:192.168.1.10
Chain INPUT (policy ACCEPT 36 packets, 2476 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 195 packets, 14344 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 202 packets, 14788 bytes)
pkts bytes target prot opt in out source destination
0 0 SNAT all -- * eth0 192.168.1.10 0.0.0.0/0 to:172.10.1.101
When I try to check my NAT server with telnet 172.10.1.101 4016, I get an error:
telnet: connect to address 172.10.1.101: Connection timed out
My server 192.168.1.10 is listening on port 4016.
On my NAT server I see no logs for this connection.
But when I try connecting to another IP on my eth0 interface, I see this in the log:
Jun 18 15:04:39 ip-172-10-1-72 kernel: [ 1245.059113] IN= OUT=eth0 SRC=172.10.1.39 DST=10.68.72.90 LEN=40 TOS=0x10 PREC=0x00 TTL=255 ID=57691 DF PROTO=TCP SPT=4016 DPT=47952 WINDOW=0 RES=0x00 ACK RST URGP=0
10.68.72.90 is my server before NAT. After adding the rules to iptables, I was not able to ping my IP 172.10.1.101.
I had the right rules:
iptables -t nat -A POSTROUTING -s 192.168.1.11 -o eth0 -j SNAT --to-source 172.10.1.32
iptables -t nat -A PREROUTING -d 172.10.1.32 -i eth0 -j DNAT --to-destination 192.168.1.11
The problem was in AWS. I had to turn off the "source/dest. check" for the NAT server in the network interface menu. Also, all the static routes must be in the route table of the subnet where my server is located, not on the servers 192.168.1.10 and 192.168.1.11.
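For reference, the same AWS-side change can be made from the CLI (hypothetical instance ID; the attribute is the EC2 source/destination check):
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check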

Send raw IP packet with tun device

I'm trying to programmatically construct and send an IP packet through a TUN device.
I've set up the TUN device and the proper routes:
# ip tuntap add mode tun tun0
# ip link set tun0 up
# ip addr add 10.0.0.2/24 dev tun0
which results in:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 600 0 0 wlp3s0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 tun0
192.168.0.0 0.0.0.0 255.255.255.0 U 600 0 0 wlp3s0
$ ifconfig tun0
tun0: flags=4241<UP,POINTOPOINT,NOARP,MULTICAST> mtu 1500
inet 10.0.0.2 netmask 255.255.255.0 destination 10.0.0.2
inet6 fe80::f834:5267:3a1:5d1d prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
IP forwarding is ON: # echo 1 > /proc/sys/net/ipv4/ip_forward
I've set up NAT for tun0 packets:
# iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o wlp3s0 -j MASQUERADE
# iptables -t nat -L -v
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- any wlp3s0 10.0.0.0/24 anywhere
Then I have a Python script that produces ICMP echo requests:
import os
from fcntl import ioctl
import struct
import time
import random

# pip install pypacker==4.0
from pypacker.layer3.ip import IP
from pypacker.layer3.icmp import ICMP

TUNSETIFF = 0x400454ca
IFF_TUN = 0x0001
IFF_NO_PI = 0x1000

# Attach to the existing tun0 device; IFF_NO_PI yields raw IP packets
# without the extra packet-information header.
ftun = os.open("/dev/net/tun", os.O_RDWR)
ioctl(ftun, TUNSETIFF, struct.pack("16sH", b"tun0", IFF_TUN | IFF_NO_PI))

req_nr = 1
req_id = random.randint(1, 65000)
while True:
    # Build an ICMP echo request (p=1 is the IP protocol number for ICMP).
    icmp_req = IP(src_s="10.0.0.2", dst_s="8.8.8.8", p=1) + \
               ICMP(type=8) + \
               ICMP.Echo(id=req_id, seq=req_nr, body_bytes=b"povilas-test")
    os.write(ftun, icmp_req.bin())
    time.sleep(1)
    req_nr += 1
I can see packets originating from tun0 interface:
# tshark -i tun0
1 0.000000000 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=1/256, ttl=64
2 1.001695939 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=2/512, ttl=64
3 2.003375319 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=3/768, ttl=64
But the wlp3s0 interface is silent, so it seems the packets don't get NATed and routed to wlp3s0, which is my WLAN card.
Any ideas what I am missing?
I'm running Debian 9.
It turns out that packet forwarding was effectively disabled: the default policy for the FORWARD chain is DROP:
# iptables -L -v
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
So I changed the policy:
# iptables -P FORWARD ACCEPT
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Also, I had to change the IP packet source address to something other than 10.0.0.2, which is the preferred source address for the tun0 interface; by default Linux refuses packets that arrive on an interface with one of the host's own addresses as the source (see the accept_local sysctl):
$ ip route
10.0.0.0/24 dev tun0 proto kernel scope link src 10.0.0.2
So I changed the packet source address to 10.0.0.4:
icmp_req = IP(src_s="10.0.0.4", dst_s="8.8.8.8", p=1) +\
ICMP(type=8) +\
ICMP.Echo(id=req_id, seq=req_nr, body_bytes=b"povilas-test")
Then the kernel started forwarding packets coming from the tun0 interface to the gateway interface:
# tshark -i wlp3s0
5 0.008428567 192.168.0.103 → 8.8.8.8 ICMP 62 Echo (ping) request id=0xb5c7, seq=9/2304, ttl=63
6 0.041114028 8.8.8.8 → 192.168.0.103 ICMP 62 Echo (ping) reply id=0xb5c7, seq=9/2304, ttl=48 (request in 5)
Also, ping responses were sent back to tun0:
# tshark -i tun0
1 0.000000000 10.0.0.4 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb5c7, seq=113/28928, ttl=64
2 0.035470191 8.8.8.8 → 10.0.0.4 ICMP 48 Echo (ping) reply id=0xb5c7, seq=113/28928, ttl=47 (request in 1)

Where could multicast packets be filtered?

I installed the OpenWrt distro on my router and enabled Avahi support in it. My goal is to discover network services on my network.
I plugged my PC into a LAN port with announced services. On the router I ran tcpdump on the bridge interface: tcpdump -i br0 -vvn udp port 5353
While avahi-browse executes, I receive this output:
root@localhost:~# avahi-browse -art
21:55:22.995004 IP (tos 0x0, ttl 255, id 0, offset 0, flags [DF], proto UDP (17), length 74)
192.168.1.1.5353 > 224.0.0.251.5353: [udp sum ok] 0 PTR (QM)? _services._dns-sd._udp.local. (46)
But on my PC, Wireshark didn't show any multicast queries during that call, hence no services were found.
Does this mean the router filters multicast packets somehow?
The only filtering I know of is ebtables, which shows nothing about filtering mDNS addresses:
root@localhost:~# ebtables -L
Bridge table: filter
Bridge chain: INPUT, entries: 1, policy: ACCEPT
-j RO_INPUT
Bridge chain: FORWARD, entries: 1, policy: ACCEPT
-j RO
Bridge chain: OUTPUT, entries: 1, policy: ACCEPT
-j IGMPPROXY
Bridge chain: RO, entries: 0, policy: RETURN
Bridge chain: RO_INPUT, entries: 0, policy: RETURN
Bridge chain: IGMPPROXY, entries: 4, policy: RETURN
-p IPv4 -o wl0.1 --ip-dst 239.0.0.0/8 -j DROP
-p IPv4 -o wl0.2 --ip-dst 239.0.0.0/8 -j DROP
-p IPv4 -o wl0.3 --ip-dst 239.0.0.0/8 -j DROP
-p IPv4 -o br0 --ip-dst 239.0.0.0/8 -j DROP
Where could these multicast packets be filtered/dropped?
I found that snooping was enabled on my router, which I think corresponds to IGMP snooping.
After disabling it, the multicast DNS queries reached their destination and were shown by Wireshark.
Here is what I did (of course the path may vary across hardware and distros):
echo "0" > /proc/hwswitch/default/snooping