I am writing an application that receives multicast data on a new Red Hat Enterprise Linux 6 server. The support team gave me an application that is used to test whether the server can receive the multicast data flow.
Once I start the test application, with tcpdump also running,
I can see the multicast data coming in, e.g.:
12:58:21.645968 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 729
12:58:21.648369 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 969
12:58:21.649406 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 893
12:58:21.651823 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 604
12:58:21.654079 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 913
12:58:21.656724 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 1320
12:58:21.658194 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 124
12:58:21.658226 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 217
12:58:21.658348 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 182
12:58:21.658625 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 1014
12:58:21.659592 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 135
12:58:21.659842 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 242
12:58:21.660674 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 242
12:58:21.660743 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 84
12:58:21.662327 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 84
12:58:21.669154 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 161
12:58:21.669365 IP 10.26.12.22.50002 > 238.6.6.36.50002: UDP, length 166
12:58:21.670792 IP 10.26.12.22.60002 > 238.230.230.100.60002: UDP, length 49
12:58:21.670796 IP 10.26.12.22.60002 > 238.230.230.100.60002: UDP, length 49
12:58:21.670798 IP 10.26.12.22.60002 > 238.230.230.100.60002: UDP, length 49
12:58:21.670799 IP 10.26.12.22.60002 > 238.230.230.100.60002: UDP, length 49
But the application is not able to pick up any data; it runs as if the multicast subscription was unsuccessful.
The support team assures me that there is no problem with the test application, because it runs fine on other servers. Since this is a new server, it is possible that some settings on it are not right.
I am wondering what Linux settings I should look for that could stop the application from receiving the multicast data, even though tcpdump can see it. Missing libraries or packages?
Thanks.
First off, it's worth checking that RHEL 6 has multicast support enabled at the kernel level. (It probably does, but I don't have RHEL 6 available to check.)
Make sure that the /proc/net/igmp file exists.
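For example (the /proc path is standard; the kernel config file location is typical for RHEL but may vary by build):
ls -l /proc/net/igmp                                # should exist if IGMP support is compiled in
grep CONFIG_IP_MULTICAST /boot/config-$(uname -r)   # expect CONFIG_IP_MULTICAST=y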
Also check that the multicast address range is routed to the interface that you're expecting. If this is incorrect you can have some interesting symptoms where you receive multicast only while tcpdump is (promiscuously) sniffing packets. This can also be the case if your NIC doesn't properly support multicast. Some older NICs may also need to be set to promiscuous mode to receive any multicast, regardless of the multicast setting shown in ifconfig.
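For example, checking and (if needed) adding a route for the IPv4 multicast range, assuming eth0 is the interface that should receive the traffic:
ip route show | grep 224.0.0.0      # is a multicast route already present?
ip route add 224.0.0.0/4 dev eth0   # route the multicast range to the NIC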
Another thing to do is to check the contents of the /proc/net/igmp file while your test application is running.
The /proc/net/igmp file will contain a list of all of the multicast group addresses that the server is actively receiving. If there is an entry in the "Group" column that corresponds to the multicast group address that the test application is meant to be receiving (in your case 238.6.6.36 and 238.230.230.100) then the IP_ADD_MEMBERSHIP (or IP_ADD_SOURCE_MEMBERSHIP) socket options have probably been called correctly, and on the correct NIC. Note that the Group column lists the multicast group addresses in hex and backwards - so 238.6.6.36 will be listed as 240606EE.
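For example, while the test application is running:
cat /proc/net/igmp
# Convert a group address to the reversed-hex form by printing its octets in reverse order:
printf '%02X%02X%02X%02X\n' 36 6 6 238    # 238.6.6.36 -> 240606EE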
Your situation may be more complicated if you have a multicast router (e.g. Xorp or igmpproxy) running on the same machine on which you're running the test application.
If this is the case you should also investigate the /proc/net/ip_mr_vif and /proc/net/ip_mr_cache files to ensure that there are appropriate entries.
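For example:
cat /proc/net/ip_mr_vif      # multicast virtual interfaces
cat /proc/net/ip_mr_cache    # multicast forwarding cache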
Please also check at the switch level. In my case I was stuck with clustering: my cluster only works over multicast, and I was seeing packet loss on the multicast traffic, which seemed very strange. Eventually I got the solution from one of my best friends (Google): I just disabled IGMP snooping at the switch level and it's working fine now.
I had a similar problem on a RHEL 6 machine. I resolved it by adding the required UDP port to the allowed ports in the firewall. Try adding UDP port 50002.
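On RHEL 6 that would look something like the following (assuming iptables is the active firewall; adjust the rule position to suit your chain):
iptables -I INPUT -p udp --dport 50002 -j ACCEPT
service iptables save    # persist the rule across reboots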
I currently have two different 5G routers. My PC's Wi-Fi doesn't work, so I can only connect to either of them through a wired connection. The old one, which I want to get rid of, uses USB and works. The new one uses Ethernet and fails DNS lookups (ping www.google.com fails, but ping 8.8.8.8 succeeds).
resolv.conf looks like:
# Generated by NetworkManager
search lan
nameserver 192.168.1.1
nameserver fe80::d4bb:5cff:fe4e:6313
nameserver fe80::38:40ff:fe30:419e
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver fe80::34e1:ffff:fe65:f72b
nameserver fe80::d43f:21ff:fe59:462a
nameserver 192.168.12.1
nameserver fe80::ca99:b2ff:fee7:71f5%enp5s0
nameserver fe80::d43f:21ff:fe59:462a%enp6s0f1u1
nmcli yields:
enp5s0: connected to Wired connection 1
"Intel I211"
ethernet (igb), 3C:7C:3F:1E:C6:01, hw, mtu 1500
ip4 default, ip6 default
inet4 192.168.12.232/24
route4 192.168.12.0/24 metric 100
route4 default via 192.168.12.1 metric 100
inet6 2607:fb90:3307:6e5d:91df:e64b:949:c78c/128
inet6 2607:fb90:3307:6e5d:bcb3:1b35:1589:bd17/64
inet6 fe80::48f9:3fb0:7e83:d1a7/64
route6 2607:fb90:3307:6e5d:91df:e64b:949:c78c/128 metric 100
route6 2607:fb90:3307:6e5d::/64 metric 100
route6 fe80::/64 metric 1024
route6 default via fe80::ca99:b2ff:fee7:71f5 metric 100
enp6s0f1u1: connected to Wired connection 2
"Novatel Wireless M2000"
ethernet (rndis_host), 00:15:FF:30:51:72, hw, mtu 1428
inet4 192.168.1.5/24
route4 192.168.1.0/24 metric 101
route4 default via 192.168.1.1 metric 101
inet6 2607:fb90:3395:673a:5552:7f52:abd9:488e/64
inet6 fe80::20b0:4b16:c9f:e9d0/64
route6 fe80::/64 metric 1024
route6 2607:fb90:3395:673a::/64 metric 101
route6 default via fe80::d43f:21ff:fe59:462a metric 101
"Wired connection 2" is the one that works (the USB one.)
So I'm pretty clear that my resolv.conf is specifically telling the usb interface to use one DNS server (fe80::d43f:21ff:fe59:462a) that works, and telling the ethernet interface to use another (fe80::ca99:b2ff:fee7:71f5) that fails. I just don't know why it's doing that, or how to make it stop (given that I think NetworkManager generates that file, and will presumably re-generate it if I just edit it myself.)
What happen? What do?
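I suspect the fix involves per-connection DNS overrides in NetworkManager, perhaps something like the following (connection names taken from the nmcli output above; whether this is the right approach is exactly what I'm unsure about):
nmcli connection modify "Wired connection 1" ipv4.ignore-auto-dns yes ipv6.ignore-auto-dns yes
nmcli connection modify "Wired connection 1" ipv4.dns 8.8.8.8    # pin a resolver my ping test shows is reachable
nmcli connection up "Wired connection 1"                         # re-activate so resolv.conf is regenerated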
I'm trying to set up a Raspberry Pi running nftables as a "router". It's running Raspberry Pi OS 64-bit with kernel 5.15.32-v8+ and nftables v0.9.8 (E.D.S.). I would like it to allow traffic between the LAN it's connected to through its WiFi interface and an Android phone sharing its cellular data connection through USB.
The RPi is connected to a LAN through its WiFi interface. There is a DHCP server running on that LAN. It serves clients class C private addresses like 192.168.80.X/24. The Raspberry Pi is served as below. The network configuration allows for a range of IP addresses to be assigned manually, should that be needed.
- Raspberry Pi: interface wlan0
- IP address: 192.168.80.157
- Subnet: 255.255.255.0
- Gateway: 192.168.80.2
- DNS: 192.168.200.1 (firewall living on another VLAN) / 8.8.8.8
An Android phone is connected to the Raspberry Pi through a USB cable and acts as a USB modem, sharing its cellular data. It assigns the RPi a class C private address like 192.168.X.Y/24, which changes each time I plug the phone in or reboot, and I can't tune the range of addresses served by the phone.
- Raspberry Pi: interface usb0 (Android phone)
- IP address: 192.168.42.48 (at the moment)
- Subnet: 255.255.255.0
- Gateway: 192.168.42.105
- DNS: 192.168.42.105 (same as gateway)
I have enabled IP forwarding in sysctl.conf:
net.ipv4.ip_forward=1
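To apply and verify the setting without rebooting (standard sysctl usage):
sudo sysctl -w net.ipv4.ip_forward=1    # apply immediately
sysctl net.ipv4.ip_forward              # should print: net.ipv4.ip_forward = 1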
nftables is configured as follows (example from here):
table ip nat_antoine {
    chain prerouting {
        type nat hook prerouting priority dstnat; policy accept;
        ip protocol icmp counter meta nftrace set 1
    }
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        oifname "usb0" masquerade
    }
}
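I also wonder whether a forward filter chain is needed alongside the NAT table. A minimal sketch of what I have in mind is below; the table name, chain, and rules are my own illustration, not something I have verified as a fix:
table inet filter {
    chain forward {
        type filter hook forward priority filter; policy drop;
        iifname "wlan0" oifname "usb0" accept
        iifname "usb0" oifname "wlan0" ct state established,related accept
    }
}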
I ran the following tests. I disabled nftables and SSH'd into the RPi through the wlan0 interface. If I configure the firewall at 192.168.200.1 to allow internet access to the RPi (and disconnect the Android phone "usb0"), it works: the RPi can ping / curl / ssh into the "outside" world.
Then I revoked the internet access through wlan0 and plugged the Android phone "usb0" in. Same result: I can ping / curl / ssh into the "outside" world. I have made sure that the traffic goes through usb0 and not wlan0.
Issues arise when I enable nftables and try to access the internet via the RPi from another client. I took another Android phone, connected it to the 192.168.80.0/24 LAN, and manually configured its gateway address to be the RPi's wlan0 interface address (DNS from Quad9). The best I can get is pinging the wlan0 interface on the RPi.
"nft monitor" output when pinging 192.168.80.157 from an Android client (the client gets an answer):
trace id 98bebc6f ip nat_antoine prerouting packet: iif "wlan0" ether saddr d0:1b:49:f1:01:29 ether daddr dc:a6:32:65:a2:f4 ip saddr 192.168.80.20 ip daddr 192.168.80.157 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 40158 ip length 84 icmp code net-unreachable icmp id 30 icmp sequence 1 #th,64,96 820786881435526521749372928
trace id 98bebc6f ip nat_antoine prerouting rule ip protocol icmp counter packets 1 bytes 84 meta nftrace set 1 (verdict continue)
trace id 98bebc6f ip nat_antoine prerouting verdict continue
trace id 98bebc6f ip nat_antoine prerouting policy accept
"nft monitor" output when pinging 192.168.42.48 from the same Android client (client gets no answer) :
trace id c59a6b89 ip nat_antoine prerouting packet: iif "wlan0" ether saddr d0:1b:49:f1:01:29 ether daddr dc:a6:32:65:a2:f4 ip saddr 192.168.80.20 ip daddr 192.168.42.48 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 29620 ip length 84 icmp code net-unreachable icmp id 31 icmp sequence 1 #th,64,96 56218603639456293821148039936
trace id c59a6b89 ip nat_antoine prerouting rule ip protocol icmp counter packets 1 bytes 84 meta nftrace set 1 (verdict continue)
trace id c59a6b89 ip nat_antoine prerouting verdict continue
trace id c59a6b89 ip nat_antoine prerouting policy accept
Can someone please help me or point me in the right direction for fixing this?
Thank you very much.
I have a piece of equipment that sends a stream of unicast UDP packets to a network interface on my Linux machine. I'm receiving them in my application using a normal UDP socket bound to INADDR_ANY and the expected destination port. The destination port and IP address in the datagrams are correct, but the source IP address in the IPv4 header is populated as 0.0.0.0 by the device.
I've found that when the source address is 0.0.0.0, my calls to recv() on my socket never return any packets; they appear to be getting filtered somewhere in the OS UDP processing. I confirmed via tcpdump that the packets are being received (and this is what told me the source address was all zeros):
11:32:41.239706 50:12:34:80:07:2d > 09:22:45:ab:e8:90, ethertype IPv4 (0x0800), length 4194: 0.0.0.0.84 > 192.168.137.144.10000: UDP, length 4152
Is there a way for me to configure my UDP socket so that the datagrams with this source address are not discarded? I have rp_filter configured to disable reverse-path filtering on all of my network interfaces already.
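For what it's worth, my working theory (an assumption I have not yet confirmed) is that the kernel rejects the 0.0.0.0 source as a "martian" address during input route validation; martian logging should show whether that's the drop point:
sudo sysctl -w net.ipv4.conf.all.log_martians=1
sudo sysctl -w net.ipv4.conf.default.log_martians=1
dmesg | grep -i martian    # look for "martian source" lines while the packets arrive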
I am trying to figure out why the "default" Proxmox network configuration is not behaving as I expect.
I have:
a Proxmox server (10.0.40.10)
the network bridge (vmbr0) created by Proxmox by default
a VM (10.0.40.20) connected to vmbr0 (let's call it VM1)
a VM (10.0.40.25) connected to vmbr0 (let's call it VM2)
a gateway (10.0.40.254) configured on vmbr0
When I performed an HTTP transfer (GET) against VM2 from VM1, the speed I observed indicated that traffic was exiting the Proxmox host, going to the gateway, and returning to the Proxmox host.
Both VM1 and VM2 are connected to vmbr0, so my expectation was that vmbr0 would "switch" between the two VMs, based on the MAC/ARP, and that the traffic would remain entirely local (and be one or two orders of magnitude faster).
When I run a ping from VM2 to VM1, I observe this:
[root@vm2 ~]# ping -c 5 10.0.40.20
PING 10.0.40.20 (10.0.40.20) 56(84) bytes of data.
64 bytes from 10.0.40.20: icmp_seq=1 ttl=64 time=0.485 ms
From 10.0.40.254 icmp_seq=1 Redirect Host(New nexthop: 10.0.40.20)
64 bytes from 10.0.40.20: icmp_seq=2 ttl=64 time=0.609 ms
From 10.0.40.254 icmp_seq=2 Redirect Host(New nexthop: 10.0.40.20)
64 bytes from 10.0.40.20: icmp_seq=3 ttl=64 time=0.598 ms
--- 10.0.40.20 ping statistics ---
3 packets transmitted, 3 received, +2 errors, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.485/0.564/0.609/0.056 ms
Running a traceroute shows:
[root@vm2 ~]# traceroute -I 10.0.40.20
traceroute to 10.0.40.20 (10.0.40.20), 30 hops max, 60 byte packets
1 gateway (10.0.40.254) 0.328 ms 0.308 ms 0.393 ms
2 10.0.40.20 (10.0.40.20) 0.472 ms 0.481 ms 0.533 ms
(The vmbr0 configuration and the network definitions for VM1 and VM2 are omitted here.)
This is such a fundamental use case that I feel I must be missing something.
Does anyone know if my expectations are correct, or is this correct behavior for vmbr0? Does something look misconfigured? Do I need something like Proxy ARP or an on-box virtual router to solve such a simple use-case?
Solved: the IP addresses of some of the virtual machines were configured as /32 rather than /24, e.g. 10.0.40.25/32 instead of 10.0.40.25/24. Those VMs therefore did not consider themselves to be in the same subnet as their peers, and traffic was sent out via the default route.
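For anyone checking their own VMs for this, the configured mask and the resulting path can be inspected with iproute2 (the interface name eth0 is illustrative):
ip -4 addr show eth0       # a /32 here instead of /24 reproduces the problem
ip route get 10.0.40.20    # "via 10.0.40.254" means the gateway path; no "via" means direct delivery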
Scenario:
Clients on VLANs X
DHCP server on VLAN Y
WDS server on VLAN Z
We have an ip helper-address command on our layer 3 device for DHCP. I would like to avoid using DHCP options and instead add another ip helper-address command to point clients to WDS as well, as illustrated below. Is this possible? I know that having two ip helper-address commands will direct traffic to both IPs, but will this work correctly if the WDS server is not also hosting DHCP services?
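For context, the kind of configuration I mean is the following (Cisco IOS; the VLAN interface and addresses are illustrative, not our real ones):
interface Vlan10
 ip helper-address 10.1.1.10
 ip helper-address 10.2.2.20
Here the first helper would be the existing DHCP server on VLAN Y and the second the WDS server on VLAN Z.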
In your case, use the ip helper-address to point at the DHCP server, and have the DHCP server direct clients to the WDS server using DHCP options 66 and 67.
Example DHCP server configuration for Cisco IOS:
ip dhcp pool NETWORK10.10.10.0/24
 network 10.10.10.0 255.255.255.0
 default-router 10.10.10.1
 dns-server 10.10.10.10
 option 66 ascii WDS-server.domainname.local
 option 67 ascii Boot\x86\wdsnbp.com
 lease 0 24
Good luck!