Losing multicast datagrams on join and leave

I've been experiencing an issue with my server software: when one thread joins a multicast group, another thread may fail to receive an incoming datagram on a different multicast group at that same instant. I'm not sure whether this can be dismissed as expected loss due to the "unreliable nature" of UDP multicast, or whether it is a serious driver/NIC defect. A packet capture also shows a gap at that moment.
I've observed this problem on multiple NIC models and manufacturers, including Intel and HP. The reason I suspect a NIC or driver issue is that the problem doesn't occur at all if I run a packet sniffer that puts the interface into promiscuous mode.
Is it possible that while an IGMP join or leave is updating the multicast filter tables in the NIC, the card simply stops delivering all multicast traffic for that moment? Is this acceptable behavior?
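If the filter-reprogramming theory is right, one workaround sketch is to put the interface into all-multicast mode, so the NIC accepts every multicast frame and its hash filter no longer matters. This is narrower than promiscuous mode but should mask the problem the same way. A minimal sketch, assuming Linux, with "eth0" as a placeholder (requires CAP_NET_ADMIN):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* placeholder interface */

    /* Read the current flags, add IFF_ALLMULTI, write them back. */
    if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0) { perror("SIOCGIFFLAGS"); return 1; }
    ifr.ifr_flags |= IFF_ALLMULTI;
    if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0) { perror("SIOCSIFFLAGS"); return 1; }

    close(fd);
    return 0;
}

The same thing can be done from the shell with "ip link set dev eth0 allmulticast on".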

Related

How to enable IP fragmentation in embedded Linux?

I have a 3.8 Linux kernel. I have created a network device (visible under /sys/class/net) to receive control packets, i.e. protocol-related packets from other devices. However, sometimes these protocol messages are too big to fit in a single frame, so my device receives fragmented data. When I did a packet capture, I could see some frame-check-sequence errors, and my guess is that some packets were lost because of the fragmentation. My protocol relies on the IP layer to handle fragmentation rather than handling it itself.
My question is: how do I enable IP fragmentation in the Linux kernel, or check whether support for it is enabled? The MTU of my network device is 1500 and I am sending 1590 bytes from the other host.
AFAIK, there is no way to explicitly disable IP fragmentation in Linux, so it should already be enabled. For more information, you can refer to http://lxr.free-electrons.com/source/Documentation/networking/ip-sysctl.txt
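If you want to verify this, here is a minimal sketch (the destination address and port are placeholders) that sends a 1590-byte UDP datagram; with a 1500-byte MTU the kernel should emit it as two IP fragments, which you can confirm in a packet capture:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    char payload[1590];
    struct sockaddr_in dst;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Make sure the DF bit is not set, so the kernel may fragment. */
    int pmtu = IP_PMTUDISC_DONT;
    setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu, sizeof(pmtu));

    memset(payload, 'x', sizeof(payload));
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                     /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder host */

    if (sendto(fd, payload, sizeof(payload), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");

    close(fd);
    return 0;
}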

Ethernet multicast without sending membership reports

This is a homework question.
I have spent almost a week trying to find a solution to this problem. The problem is as follows:
Consider doing a multicast in an extended Ethernet LAN (multiple Ethernet LAN segments connected via bridges). Assume that hosts do not send Ethernet membership reports (which we discussed in class). However, the bridges (not the hosts) can have their software configured as we please. Assume we want to implement IPv4 multicasting on that LAN. How would you modify the bridges to allow efficient multicasting? I.e., bridges should forward IP multicast packets only to the LAN segments where there are receivers, and the solution should involve the least amount of processing at the bridges.
Thanks in advance
Are you saying hosts do not send IGMP membership reports? I've never heard of Ethernet membership reports.
And I'm confused by the requirements. If the hosts cannot send membership reports (assuming IGMP reports are meant), how could the bridges know whether there is any receiver on a given LAN segment? IP multicast is a receiver-driven design: the receivers must indicate their interest in a multicast group.
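For comparison, the real-world mechanism on a Linux bridge is IGMP snooping, optionally with the bridge itself acting as querier so that membership reports get solicited even when no multicast router is present; note this still relies on hosts answering the queries with reports, which the assignment appears to rule out. A sketch, with br0 as a placeholder bridge:

echo 1 > /sys/class/net/br0/bridge/multicast_snooping
echo 1 > /sys/class/net/br0/bridge/multicast_querier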

Can't send raw packets to local MAC with PF_PACKET?

I've been playing around with an ethernet protocol (not IP) constructed using
socket(PF_PACKET, SOCK_RAW, ether_type)
I have a small problem. I've constructed a packet whose source and destination MAC are both set to my local card's MAC, which is also the address I've bound the socket to with bind.
I can receive packets fine from the network.
I'm able to send packets, to the degree that I see them appear in Wireshark.
However, my listening app doesn't see those packets, although it does see packets from other sources on the network.
I should point out that my MAC addresses do appear to be sent in the correct byte order.
Can you send packets to yourself?
Do network cards not loop back?
Does the Linux kernel do something special for loopback at the IP level and, because I'm operating below that layer, ignore my packets?
Yes, IP "loopback" packets, as you put it, are treated specially: they're looped back internally, not sent out through the interface. So Ethernet-level loopback, in this sense, is a special case that doesn't normally need to be supported. Some old 10 Mbit Ethernet cards were even half-duplex, so it couldn't have worked on that hardware :).
On the other hand, you can buy or make loopback adapter cables to test network cards, so it must be possible on (hopefully all) modern hardware. And people have used them under Linux with AF_PACKET (there is evidence of this, though no further details).
I guess the next question is whether your switch supports this. A dumb hub would have to support it, but there's room for a modern switch to get confused, or to disallow it for fear of an infinite loop of packets.
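For reference, a minimal send-side sketch of the socket setup being discussed, assuming Linux; the interface name, MAC address, and EtherType (0x88b5, an IEEE experimental value) are placeholders:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netpacket/packet.h>
#include <linux/if_ether.h>
#include <net/if.h>
#include <arpa/inet.h>

int main(void)
{
    const unsigned short ether_type = 0x88b5;   /* placeholder EtherType */
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ether_type));
    if (fd < 0) { perror("socket"); return 1; }

    /* Placeholder MAC: set this to the local card's address to test
       self-addressed frames. */
    unsigned char mac[ETH_ALEN] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};

    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof(addr));
    addr.sll_family = AF_PACKET;
    addr.sll_protocol = htons(ether_type);
    addr.sll_ifindex = if_nametoindex("eth0");  /* placeholder interface */
    addr.sll_halen = ETH_ALEN;
    memcpy(addr.sll_addr, mac, ETH_ALEN);

    /* Build a minimum-size frame: dst MAC, src MAC, EtherType, zero payload. */
    unsigned char frame[ETH_ZLEN] = {0};
    memcpy(frame, mac, ETH_ALEN);               /* destination = own card */
    memcpy(frame + ETH_ALEN, mac, ETH_ALEN);    /* source = own card */
    frame[12] = ether_type >> 8;
    frame[13] = ether_type & 0xff;

    if (sendto(fd, frame, sizeof(frame), 0,
               (struct sockaddr *)&addr, sizeof(addr)) < 0)
        perror("sendto");

    close(fd);
    return 0;
}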

Is multicasting inherent to all ethernet systems?

Is multicasting inherent to every ethernet system?
What I am trying to do is send codes via Ethernet to many devices (without having to send the same 'message' to each device individually). I am not familiar with the design of multicast systems, so forgive me if this is a lame question. I do know there are IP ranges reserved for multicasting, but does that mean that if I set the receiving devices to those IPs, they will all receive the same 'messages'?
The IGMP Wikipedia page has a lot of good information. Your question is a bit out of scope for Stack Overflow.
Multicast uses IP, but I wouldn't say it's inherent to every Ethernet system, because all the network infrastructure needs to be properly configured to allow IGMP subscription.
You do not set a client to the multicast IP. Your client subscribes to the multicast IP; your router sees this subscription and passes the traffic along to that device. The Wikipedia page will point you in the right direction, but as I said earlier, this is a bit outside the scope of Stack Overflow.
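To make "subscribes" concrete, here is a minimal receiver sketch (group address and port are placeholders); the setsockopt call below is what triggers the IGMP membership report that the network sees as the subscription:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(5000);                        /* placeholder port */
    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind"); return 1;
    }

    /* Join the group: the kernel emits an IGMP membership report. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.2.3");  /* placeholder group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   &mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_MEMBERSHIP"); return 1;
    }

    char buf[1500];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);  /* now receives group traffic */
    if (n >= 0) printf("received %zd bytes\n", n);

    close(fd);
    return 0;
}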

Linux: packet reordering simulation

I would like to simulate packets reordering for UDP packets on Linux to measure the performance and fault tolerance of my application.
Is there a simple way to do this?
Look into WANem:
WANem thus allows the application development team to set up a transparent application gateway which can be used to simulate WAN characteristics like network delay, packet loss, packet corruption, disconnections, packet re-ordering, jitter, etc.
You can use the netem feature built into the Linux kernel, shipped with most modern distributions. netem is a traffic-control queueing discipline which deliberately delays, drops, and reorders packets, and it is highly configurable.
This only works for outgoing packets (because the queueing disciplines are outbound only), so you may wish to place a router host with netem between the two test machines and run netem on both of its interfaces (with different parameters if you like).
The easiest way to achieve this is to run netem in a VM to route between two VM networks. I've found this to be quite handy.
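As an illustrative example (eth0 is a placeholder and the numbers are arbitrary), a reordering setup with netem looks like this:

# Delay most packets by 10ms, but send 25% of them immediately
# (with 50% correlation), so they arrive out of order.
tc qdisc add dev eth0 root netem delay 10ms reorder 25% 50%
# Remove the qdisc when finished.
tc qdisc del dev eth0 root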
You could try Scapy, a Python library for manipulating packets. You could capture a pcap session with tcpdump, Wireshark, or whatever, and then replay the captured packets in an arbitrary order with Scapy:
from scapy.all import rdpcap, sendp

# Replay a captured session in reverse order.
a = rdpcap("/spare/captures/isakmp.cap")
for pkt in reversed(a):  # note: list.reverse() returns None, so use reversed()
    sendp(pkt)
Depending on how you captured the packets, you may need send() (layer 3) rather than sendp() (layer 2).
