I am running Linux kernel 3.8. I have created a network device on my board (it shows up under /sys/class/net) to receive control packets, i.e. protocol-related packets from other devices. However, these protocol messages are sometimes too big to fit in one frame, so my device receives fragmented data. When I did a packet capture, I saw some frame check sequence (FCS) errors, and my guess is that some packets were lost because of the fragmentation. My protocol relies on the IP layer to handle fragmentation rather than handling it itself.
My question is: how do I enable IP fragmentation support in the Linux kernel, or check whether it is enabled? The MTU of my network device is 1500 and I am sending 1590 bytes from the other host.
AFAIK, there is no way to disable Linux IP fragmentation explicitly. For more information you can refer to http://lxr.free-electrons.com/source/Documentation/networking/ip-sysctl.txt
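If reassembly support is present (it is on any standard kernel), the related knobs show up as sysctls; their names are documented in that file. A quick sanity check, assuming a stock /proc layout:

sysctl net.ipv4.ipfrag_high_thresh   # max memory used for reassembling fragments
sysctl net.ipv4.ipfrag_time          # seconds an incomplete datagram is kept around
netstat -s | grep -i reass           # reassembly counters; failures hint at lost fragments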
Can some expert please shed some light on what the knet interface is and what it is used for?
One of my container images shows knet2 as an interface in the output of 'ifconfig'.
I have no idea what it is. Can someone please explain, or point me to documents / web pages where I can find out more about it?
knet is a kernel network interface for efficient exchange of packets between the switch and the kernel (Linux operating system) network protocol stack.
There are other methods that could be used instead, such as implementing a software connector module over the Open NSL Rx/Tx APIs.
The intent of the knet interface is to provide a network interface that then delivers packets to the NetIO framework from the kernel.
This is nicely explained in user-networking.pdf.
I hope this is what you were expecting. Feel free to comment for any clarification.
This is from the knet reference:
This module implements a Linux network driver for Broadcom
XGS switch devices. The driver simultaneously serves a
number of virtual Linux network devices and a Tx/Rx API
implemented in user space.
Packets received from the switch device are sent to either
a virtual Linux network device or the user mode Rx API
based on a set of packet filters.
Packets from the virtual Linux network devices and the user
mode Tx API are multiplexed, with priority given to the Tx API.
I've been playing around with an ethernet protocol (not IP) constructed using
socket(PF_PACKET, SOCK_RAW, ether_type)
I have a small problem, though. I've constructed a packet that has both the source and destination MAC set to my local card's MAC, which is also the address I've bound the socket to with bind().
I can receive packets fine from the network.
I'm able to send packets to the degree where I see them appear in wireshark.
However, my listening app doesn't see those packets. It can see packets from other sources on the network, though.
I should point out that my MAC addresses do appear to be sent in the correct byte order.
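A minimal sketch of what I'm doing, in Python for brevity (the interface name, MAC address, and ethertype are placeholders):

import socket, struct

ETH_P_EXPERIMENTAL = 0x88B5  # local-experimental ethertype (placeholder)

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                  socket.htons(ETH_P_EXPERIMENTAL))
s.bind(("eth0", 0))  # bind to the local card; protocol comes from socket()

mac = bytes.fromhex("001122334455")  # placeholder for the local card's MAC
# dst == src == the local card's MAC
frame = mac + mac + struct.pack("!H", ETH_P_EXPERIMENTAL) + b"hello"
s.send(frame)       # this shows up in wireshark...
pkt = s.recv(2048)  # ...but is never received back here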
Can you send packets to yourself?
Do network cards not loopback?
Does the Linux kernel do something special at the IP level for loopback, and because I'm below that, ignore me?
Yes, IP "loopback" packets, as you put it, are treated specially. They're looped back internally, not sent out through the interface. So ethernet-level loopback, in this sense, is a special case that doesn't normally need to be supported. Some old 10Mbit ethernet cards were even half-duplex, so it couldn't have worked on that hardware :).
On the other hand, you can buy/make loopback adaptor cables to test network cards. So it must be possible on (hopefully all) modern hardware. And people have used them under linux with AF_PACKET (evidence, though no more details, here).
I guess the next question would be whether your switch supports this. A dumb hub would have to support it, but there's room for a modern switch to get confused, or to deliberately disallow it for fear of an infinite loop of packets.
I've been experiencing an issue with my server software: if one thread joins a multicast group, another thread may fail to receive an incoming datagram on a different multicast group at that same instant. I'm not sure if this can be dismissed as expected loss due to the "unreliable nature" of UDP multicast, or if it is a serious driver/NIC defect. A packet capture also shows a gap at that moment.
I've observed this problem with multiple NIC models and manufacturers, including Intel and HP. The reason I feel this is a NIC or driver issue is that the problem doesn't occur at all if I run a packet sniffer that puts the interface into promiscuous mode.
Is it possible that while an IGMP join or leave is updating the forwarding tables in the NIC, it simply stops forwarding all multicast traffic for that moment? Is this acceptable behavior?
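For reference, the join in question is the standard IP_ADD_MEMBERSHIP setsockopt; a minimal sketch in Python (the group address and port are placeholders):

import socket, struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", 5000))  # placeholder port

# Joining reprograms the NIC's hardware multicast filter as a side effect;
# promiscuous mode bypasses that filter entirely, which would be consistent
# with the problem disappearing under a sniffer.
mreq = struct.pack("4s4s", socket.inet_aton("239.1.1.1"),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)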
I would like to simulate packets reordering for UDP packets on Linux to measure the performance and fault tolerance of my application.
Is there a simple way to do this?
Look into WANem:
WANem thus allows the application development team to setup a transparent application gateway which can be used to simulate WAN characteristics like Network delay, Packet loss, Packet corruption, Disconnections, Packet re-ordering, Jitter, etc.
You can use the "netem" feature built into the Linux kernel; it is shipped with most modern distributions. netem is a traffic control (tc) queueing discipline module which deliberately delays, drops and reorders packets, and it is highly configurable.
This only works for sending packets (because the queues are outbound only), so you may wish to place a router host with netem between the two test machines, and run netem on both interfaces (with differing parameters if you like).
The easiest way to achieve this is to run netem in a VM to route between two VM networks. I've found this to be quite handy.
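For example, to reorder on one interface (the interface name and numbers are placeholders; reordering needs a delay to reorder against):

tc qdisc add dev eth0 root netem delay 10ms reorder 25% 50%
tc qdisc del dev eth0 root

Here 25% of packets (with 50% correlation) are sent immediately while the rest are delayed by 10ms, which reorders them; the second command removes the qdisc again.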
You could try scapy. It's a Python library for manipulating packets. You could capture a pcap session with tcpdump, Wireshark, whatever, and then replay the captured packets in arbitrary order with scapy:
from scapy.all import rdpcap, sendp

a = rdpcap("/spare/captures/isakmp.cap")  # load the captured session
for pkt in reversed(a):                   # replay in reverse order
    sendp(pkt)                            # send at layer 2
Depending on how you captured the packets, you may need send() (layer 3) rather than sendp() (layer 2).
Is there some way to get the Frame Check Sequence (FCS) from an ethernet frame when using Wireshark to capture packets under Linux?
The FCS is generated by the sender's ethernet device and checked by the recipient's ethernet device. If the FCS is correct, the payload is passed up to the higher layers; if it's not, the would-be frame is discarded.
Because it's added for the exclusive use of the ethernet layer, there's no reason to pass the data up to the operating system, and therefore no reason for the hardware to include the additional capability of moving that data out of the card. All modern ethernet devices do onboard FCS processing (and often much more). The only exception would be devices intended to analyze ethernet performance and function.
The Ethernet wiki page on Wireshark states:
Most Ethernet interfaces also either don't supply the FCS to Wireshark or other applications […]
so I assume the answer is no.
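One possible exception, assuming a reasonably recent kernel and a driver that supports it: some NICs can be told to pass the FCS up to the host via the rx-fcs offload flag, after which capture tools will see it (the interface name is a placeholder):

ethtool -k eth0 | grep fcs    # "rx-fcs: off [fixed]" means the driver can't do it
ethtool -K eth0 rx-fcs on     # ask the driver to hand the FCS up to the host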