iperf UDP over IPv6 - linux

I'm doing some UDP bandwidth tests using iperf (https://iperf.fr/) over IPv6. I have very bad results when using a Linux UDP client with the following command line:
iperf -u -V -c fe80::5910:d4ff:fe31:5b10%usb0 -b 100m
Investigating the issue with Wireshark, I saw that fragmentation occurs while the client is sending data. To be more precise, the UDP client's outgoing packets alternate between 1510 bytes and 92 bytes:
1510, 92, 1510, 92, 1510, 92, ..., 1510, 92, ...
The iperf2 documentation says the following about the -l option:
The length of buffers to read or write. iPerf works by writing an array of len bytes a number of times. Default is 8 KB for TCP, 1470 bytes for UDP. Note for UDP, this is the datagram size and needs to be lowered when using IPv6 addressing to 1450 or less to avoid fragmentation. See also the -n and -t options.
I have tried to do the same bandwidth test by replacing the Linux iperf UDP client command line with the following:
iperf -u -V -c fe80::5910:d4ff:fe31:5b10%usb0 -b 100m -l1450
and I see good results. Looking at the Wireshark capture I see no fragmentation anymore.
Doing the same test over IPv4 I don't need to change the default UDP datagram size (I don't need to use '-l' option) to get good results.
So my conclusion is that fragmentation (over IPv6) is responsible for the poor bandwidth performance.
Anyway, I'm wondering what really happens when setting UDP datagram size to 1450 over IPv6. Why do I have fragmentation over IPv6 and not over IPv4 with default value for UDP datagram size? Moreover, why do I have no fragmentation when reducing the UDP datagram size to 1450?
Thank you.

The base IPv4 header is 20 bytes, the base IPv6 header is 40 bytes and the UDP header is 8 bytes.
With IPv4 the total packet size is 1470+8+20=1498, which is less than the default ethernet MTU of 1500.
With IPv6 the total packet size is 1470+8+40=1518, which is more than 1500 and has to be fragmented.
Now let's look into your observations. You see packets of size 1510 and 92. Those include the ethernet header, which is 14 bytes. Your IPv6 packets are therefore 1496 and 78 bytes. The contents of the big packet are: IPv6 header (40 bytes), a fragmentation header (8), the UDP header (8) and 1440 bytes of data. The smaller packet contains the IPv6 header (40), a fragmentation header (8) and the remaining 30 bytes of data.
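The arithmetic above can be sketched in a few lines of Python (the constant names are mine):

```python
# Sizes from the answer above: a minimal sketch of how a 1470-byte UDP
# payload fragments over IPv6 on a 1500-byte-MTU link.
IPV6_HDR = 40   # base IPv6 header
FRAG_HDR = 8    # IPv6 fragmentation extension header
UDP_HDR = 8
ETH_HDR = 14    # Ethernet header, visible in Wireshark frame sizes
MTU = 1500
PAYLOAD = 1470  # iperf's default UDP datagram size

# Fragment payloads must be multiples of 8 bytes; the UDP header only
# appears in the first fragment.
room = MTU - IPV6_HDR - FRAG_HDR          # 1452 bytes for UDP hdr + data
first_data = (room - UDP_HDR) // 8 * 8    # 1440 bytes of data
second_data = PAYLOAD - first_data        # 30 bytes left over

first_frame = ETH_HDR + IPV6_HDR + FRAG_HDR + UDP_HDR + first_data
second_frame = ETH_HDR + IPV6_HDR + FRAG_HDR + second_data
print(first_frame, second_frame)  # 1510 92
```

These are exactly the 1510-byte and 92-byte frames seen in the capture.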

The most common MTU for Ethernet is 1500 bytes, not including the Ethernet frame headers. This means you can send 1500 bytes in one packet over the wire, including the IP headers. IPv6 headers are larger than IPv4 headers for several reasons, the most important being that IPv6 addresses are larger than IPv4 addresses. So when you run with the default value over IPv6, your packet size exceeds the MTU and the packet needs to be split in two, a procedure known as fragmentation.

How many bytes of G.711 encoding are sent in each datagram?

I am learning about codecs, and I came across this question, but I didn't understand the answers.
Assuming CODEC G.711 where each datagram carries 20ms of voice, indicate:
3) [E] How many bytes of G.711 encoding does each datagram carry?
A- 64000 bit/s ÷ 8 × 0.02 s = 160 bytes
4) What is the byte size of each frame carrying G.711 on an Ethernet network?
Note: The dimensions (in bytes) of the base headers of some of the protocols that might be involved in the communication: Ethernet = 18, IP = 20, TCP = 20, UDP = 8, ICMP = 8, RTP = 12
A- 18 + 20 + 8 + 12 + 160 = 218 bytes
I don't get this math.
G.711's pure codec bandwidth (codec only) is exactly 64 kbit/s.
A G.711 packet can carry 10, 20 (the default), 30, ... up to 150 ms of audio.
So with the default settings you have a 20 ms packet (50 packets/sec) at 64 kbit/s = 160 bytes of payload, before any headers.
The full length of a (default, 20 ms) G.711 packet is
TPS = 18 bytes + 20 bytes + 8 bytes + 12 bytes + 160 bytes
You have 160 bytes of raw audio; first you wrap it in an RTP packet (mostly for the timestamp). See the packet layout at https://en.wikipedia.org/wiki/Real-time_Transport_Protocol
RTP is needed to reorder packets that arrive out of order (which sometimes happens).
Now you have RTP, but it is not yet suitable for sending: you need to know where to send it, i.e. an address and a port. The port comes from the UDP header: https://en.wikipedia.org/wiki/User_Datagram_Protocol
For the address you use the IP packet header; without an address the packet cannot reach the destination machine:
https://en.wikipedia.org/wiki/Internet_Protocol
Okay, now you have a packet, but you still need to actually send it. For that you use a link-layer protocol, in this case Ethernet. Ethernet has MAC addresses, which allow fast switching without parsing the IP header. That is the last 18 bytes.
https://scialert.net/fulltext/?doi=ajsr.2017.110.115
In some cases you may prefer TCP (when you have packet loss or complex networking); in that case you use TCP instead of UDP, swapping the 8-byte UDP header for a 20-byte TCP header.
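The size arithmetic above can be sketched as follows (the function name and layout are mine; the header sizes are those listed in the question):

```python
G711_BPS = 64_000  # G.711 codec bandwidth, bits per second

def g711_frame_size(ms: int, transport_hdr: int = 8) -> int:
    """On-wire Ethernet frame size for `ms` milliseconds of G.711 audio.

    transport_hdr: 8 for UDP (default) or 20 for TCP.
    Header sizes: Ethernet 18, IP 20, RTP 12, as listed in the question.
    """
    payload = G711_BPS // 8 * ms // 1000   # bytes of raw audio
    return 18 + 20 + transport_hdr + 12 + payload

print(g711_frame_size(20))      # 218, the answer from the question
print(g711_frame_size(20, 20))  # 230 when using TCP instead of UDP
```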

In the traceroute source code, why is the size hard-coded to 512: u_char packet[512];

While reviewing the traceroute source code, I saw that the inbound ICMP packet buffer is hard-coded to 512 bytes.
I don't know why the size is limited to 512 bytes. What happens if an inbound ICMP packet is larger than 512 bytes?
In general, there are three ways to implement traceroute (I am not familiar with implementations using the GRE protocol): sending ICMP Echo Requests, UDP packets, or TCP SYN packets with a gradually increasing TTL value, starting at 1.
If it sends ICMP Echo Requests, it expects an ICMP Time Exceeded message (8 bytes + IP header (20 bytes) + first 8 bytes of the original datagram's data), or the destination is reached and returns an ICMP Echo Reply, which is 20 + 8 bytes long. According to RFC 792 an Echo Request or Reply may carry an arbitrary amount of data, but traceroute doesn't need that.
If it sends UDP packets, it expects an ICMP Time Exceeded message, or the destination is reached and returns a Port Unreachable message, which is 20 + 8 + 20 bytes long. Some implementations may add some data, but not much.
If it sends TCP SYN packets, the inbound packets should be ICMP Time Exceeded messages, TCP SYN+ACK packets, or TCP RST packets, all of which are much smaller than 512 bytes.
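As a rough sanity check (assuming IPv4 without options; the constant names and the exact quoted-data assumptions are mine), all of those replies fit comfortably in 512 bytes:

```python
# Rough upper bounds for the replies described above.
IP_HDR, ICMP_HDR, TCP_HDR = 20, 8, 20

# ICMP Time Exceeded / Port Unreachable: outer IP + ICMP header, quoting
# the original IP header plus its first 8 data bytes.
time_exceeded    = IP_HDR + ICMP_HDR + IP_HDR + 8   # 56 bytes
port_unreachable = IP_HDR + ICMP_HDR + IP_HDR + 8   # 56 bytes
echo_reply       = IP_HDR + ICMP_HDR                # 28 bytes
tcp_reply        = IP_HDR + TCP_HDR                 # 40 bytes (SYN+ACK or RST)

# Every expected reply is far below the 512-byte buffer.
assert all(s < 512 for s in
           (time_exceeded, port_unreachable, echo_reply, tcp_reply))
```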

UDP packet drop - InErrors vs. RcvbufErrors

I wrote a simple UDP Server program to understand more about possible network bottlenecks.
UDP Server: Creates a UDP socket, binds it to a specified port and address, and adds the socket file descriptor to an epoll interest list. It then epoll-waits for incoming packets. On reception of a packet (EPOLLIN), it reads the packet and just prints the received packet length. Pretty simple, right? :)
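A minimal sketch of such a server in Python using select.epoll (Linux-only; the function name, loopback address, and 5-second idle timeout are my assumptions, and the original program was presumably written in C):

```python
import select
import socket

def udp_epoll_server(port: int, max_packets: int) -> list:
    """Receive up to `max_packets` UDP datagrams on `port` and return
    their lengths, mirroring the server described above."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.setblocking(False)

    ep = select.epoll()
    ep.register(sock.fileno(), select.EPOLLIN)

    lengths = []
    try:
        while len(lengths) < max_packets:
            events = ep.poll(timeout=5)
            if not events:                     # idle too long: give up
                break
            for fd, mask in events:
                if mask & select.EPOLLIN:
                    data, addr = sock.recvfrom(65535)
                    lengths.append(len(data))  # "just prints the length"
    finally:
        ep.close()
        sock.close()
    return lengths
```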
UDP Client: I used hping as shown below:
hping3 192.168.1.2 --udp -p 9996 --flood -d 100
When I send UDP packets at 100 packets per second, I don't see any UDP packet loss. But when I flood UDP packets (as shown in the command above), I see significant packet loss.
Test1:
When 26356 packets are flooded from the UDP client, my sample program receives ONLY 12127 packets; the remaining 14230 packets are dropped by the kernel, as shown in the /proc/net/snmp output.
cat /proc/net/snmp | grep Udp:
Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors
Udp: 12372 0 14230 218 14230 0
For Test1 packet loss percentage is ~53%.
I verified with "ethtool -S ethX" on both the client and the server that there is not much loss at the hardware level, while at the application level I see a loss of 53% as said above.
Hence to reduce packet loss I tried these:
- Increased the priority of my sample program using renice command.
- Increased Receive Buffer size (both at system level and process level)
Bump up the priority to -20:
renice -20 2022
2022 (process ID) old priority 0, new priority -20
Bump up the receive buf size to 16MB:
At Process Level:
int sockbufsize = 16777216;
setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &sockbufsize, sizeof(sockbufsize));
At Kernel Level:
cat /proc/sys/net/core/rmem_default
16777216
cat /proc/sys/net/core/rmem_max
16777216
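One detail worth knowing about the process-level tuning: Linux doubles the SO_RCVBUF value you request (to account for bookkeeping overhead) and silently caps it at net.core.rmem_max, so it is worth reading the value back after setting it. A small sketch:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Request a 16 MiB receive buffer, as in the C snippet above.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16 * 1024 * 1024)
# Read back what the kernel actually granted: on Linux this is twice the
# requested value, bounded by net.core.rmem_max.
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(actual)  # e.g. 33554432 if rmem_max is at least 16 MiB
sock.close()
```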
After these changes, I performed Test2.
Test2:
When 1985076 packets are flooded from the UDP client, my sample program receives 1848791 packets; the remaining 136286 packets are dropped by the kernel, as shown in the /proc/net/snmp output.
cat /proc/net/snmp | grep Udp:
Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors
Udp: 1849064 0 136286 236 0 0
For Test2 the packet loss percentage is about 7%.
Packet loss is reduced significantly. But I have the following questions:
Can the packet loss be reduced further? I know I am greedy here :) but I am just trying to find out whether it's possible.
Unlike Test1, in Test2 InErrors doesn't match RcvbufErrors, and RcvbufErrors is always zero. Can someone explain the reason behind this? What exactly is the difference between InErrors and RcvbufErrors? I understand RcvbufErrors but NOT InErrors.
Thanks for your help and time!
Tuning the Linux kernel's networking stack to reduce packet drops is a bit involved as there are a lot of tuning options from the driver all the way up through the networking stack.
I wrote a long blog post explaining all the tuning parameters from top to bottom and explaining what each of the fields in /proc/net/snmp mean so you can figure out why those errors are happening. Take a look, I think it should help you get your network drops down to 0.
If there aren't drops at the hardware level, then it should mostly be a question of memory; you should be able to tweak the kernel configuration parameters to reach 0 drops (obviously you need hardware reasonably balanced for the network traffic you're receiving).
I think you're missing netdev_max_backlog which is important for incoming packets:
Maximum number of packets, queued on the INPUT side, when the interface receives packets faster than kernel can process them.
InErrors is composed of:
corrupted packets (incorrect headers or checksum)
packets dropped because the receive buffer was full
So my guess is that you have fixed the buffer-overflow problem (RcvbufErrors is 0) and what is left are packets with incorrect checksums.

Do MTU modifications impact both directions?

ifconfig 1.2.3.4 mtu 1492
Will this set the MTU to 1492 for incoming packets, outgoing packets, or both? I think it is only for incoming packets.
TLDR: Both. The interface will only transmit packets with a payload length less than or equal to that size. Similarly, it will only accept packets with a payload length within your MTU; if a device sends a larger packet, the receiver should respond with an ICMP Destination Unreachable (Fragmentation Needed) message.
The nitty gritty:
Tuning the MTU for your device is useful because other hops between you and your destination may encapsulate your packet in another form (for example, a VPN or PPPoE.) This layer around your packet results in a bigger packet being sent along the wire. If this new, larger packet exceeds the maximum size of the layer, then the packet will be split into multiple packets (in a perfect world) or will be dropped entirely (in the real world.)
As a practical example, consider having a computer connected over ethernet to an ADSL modem that speaks PPPoE to an ISP. Ethernet allows for a 1500 byte payload, of which 8 bytes will be used by PPPoE. Now we're down to 1492 bytes that can be delivered in a single packet to your ISP. If you were to send a full-size ethernet payload of 1500 bytes, it would get "fragmented" by your router and split into two packets (one with a 1492 byte payload, the other with an 8 byte payload.)
The problem comes when you want to send more data over this connection - let's say you want to send 3000 bytes: your computer splits this up based on its MTU - in this case, two packets of 1500 bytes each - and sends them to your ADSL modem, which then splits them up to satisfy its own MTU. Now your 3000 bytes of data have been fragmented into four packets: two with a payload of 1492 bytes and two with a payload of 8 bytes. This is obviously inefficient; we really only need three packets to send this data. Had your computer been configured with the correct MTU for the network, it would have sent three packets in the first place (two 1492-byte packets and one 16-byte packet).
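The packet counts above can be reproduced with a toy fragmentation function (ignoring per-packet headers, as the example does; the function name is mine):

```python
def fragment(payload: int, mtu: int) -> list:
    """Split `payload` bytes into chunks of at most `mtu` bytes each."""
    chunks, remaining = [], payload
    while remaining > 0:
        chunks.append(min(mtu, remaining))
        remaining -= chunks[-1]
    return chunks

# Host fragments at MTU 1500 first, then the ADSL hop re-fragments each
# piece at 1492:
first_pass = fragment(3000, 1500)                        # [1500, 1500]
second_pass = [c for p in first_pass for c in fragment(p, 1492)]
print(second_pass)           # [1492, 8, 1492, 8] -> four packets

# With the correct MTU configured from the start:
print(fragment(3000, 1492))  # [1492, 1492, 16]  -> three packets
```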
To avoid this inefficiency, many IP stacks set a bit in the IP header called "Don't Fragment." In this case, we would send our first 1500-byte packet to the ADSL modem, and it would reject the packet, replying with an Internet Control Message Protocol (ICMP) message informing us that our packet is too large. We would then retry the transmission with a smaller packet. This is called Path MTU Discovery. Similarly, one layer up at the TCP layer, another factor in avoiding fragmentation is the MSS (Maximum Segment Size) option, where each host advertises the largest segment it can receive without fragmenting. This is typically computed from the MTU.
The problem arises when misconfigured firewalls drop all ICMP traffic. When you connect to (say) a web server, you build a TCP session and advertise an MSS based on your 1500-byte MTU (since you're connected over Ethernet to your router). If the foreign web server wants to send you a lot of data, it splits the data into chunks that (combined with the TCP and IP headers) come out to 1500-byte payloads and sends them to you. Your ISP receives one of these and tries to wrap it into a PPPoE packet for your ADSL modem, but it is too large to send. So the ISP replies with an ICMP unreachable, which would (in a perfect world) cause the remote computer to lower its MSS for the connection and retransmit. If there is a broken firewall in the way, however, that ICMP message never reaches the foreign web server, and the packet never makes it to you.
Ultimately setting your MTU on your ethernet device is desirable to send the right size frames to your ADSL modem (to avoid it asking you to retransmit with a smaller frame), but it's critical to influence the MSS size you send to remote hosts when building TCP connections.
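For reference, the usual MTU-to-MSS computation for IPv4 TCP is just the MTU minus the 20-byte IP header and the 20-byte TCP header, assuming no options (the function name is mine):

```python
def mss_from_mtu(mtu: int) -> int:
    """Typical IPv4 TCP MSS: MTU minus 20-byte IP and 20-byte TCP headers."""
    return mtu - 20 - 20

print(mss_from_mtu(1500))  # 1460 for plain Ethernet
print(mss_from_mtu(1492))  # 1452 for Ethernet behind PPPoE
```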
ifconfig ... mtu <value> sets the MTU for layer2 payloads sent out the interface, and will reject larger layer2 payloads received on this interface. You must ensure your MTU matches on both sides of an ethernet link; you should not have mismatched mtu values anywhere in the same ethernet broadcast domain. Note that the ethernet headers are not included in the MTU you are setting.
Also, ifconfig has not been maintained in linux for ages and is old and deprecated; sadly linux distributions still include it because they're afraid of breaking old scripts. This has the very negative effect of encouraging people to continue using it. You should be using the iproute2 family of commands:
[mpenning@hotcoffee ~]$ sudo ip link set mtu 1492 eth0
[mpenning@hotcoffee ~]$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1492 qdisc mq state UP qlen 1000
    link/ether 00:1e:c9:cd:46:c8 brd ff:ff:ff:ff:ff:ff
[mpenning@hotcoffee ~]$
Large incoming packets may be dropped based on the interface MTU size. For example, with the default MTU of 1500, Linux 2.6 CentOS (tested with an Intel 80003ES2LAN Gigabit Ethernet Controller (Copper), rev 01) drops jumbo packets larger than 1504 bytes. The errors appear in ifconfig, and ethtool -S shows rx_long_length_errors for them.
Increasing the MTU indicates that jumbo packets should be supported.
The threshold for dropping packets as too large appears to depend on the MTU (-4096, -8192, etc.).
Oren
It's the Maximum Transmission Unit, so it definitely sets the outgoing maximum packet size. I'm not sure if will reject incoming packets larger than the MTU.
There is no doubt that the MTU configured by ifconfig affects Tx IP fragmentation; I have no further comments on that.
But for the Rx direction, I found that whether the parameter affects incoming IP packets depends on the device: different manufacturers behave differently.
I tested all the devices on hand and found the three cases below.
Test case:
Device0 eth0 (192.168.225.1, mtu 2000)<--ETH cable-->Device1 eth0
(192.168.225.34, mtu MTU_SIZE)
On Device0, run ping 192.168.225.34 -s ICMP_SIZE and check how MTU_SIZE affects Rx on Device1.
case 1:
Device1 = Linux 4.4.0 with Intel I218-LM:
When MTU_SIZE=1500, ping succeeds at ICMP_SIZE=1476 and fails at ICMP_SIZE=1477 and above. It seems there is a PRACTICAL MTU of 1504 (20B IP header + 8B ICMP header + 1476B ICMP data).
When MTU_SIZE=1490, ping succeeds at ICMP_SIZE=1476 and fails at ICMP_SIZE=1477 and above, behaving the same as MTU_SIZE=1500.
When MTU_SIZE=1501, ping succeeds at ICMP_SIZE=1476, 1478, 1600, 1900. It seems that jumbo frames are switched on once MTU_SIZE is set above 1500, and the 1504 restriction no longer applies.
case 2:
Device1 = Linux 3.18.31 with Qualcomm Atheros AR8151 v2.0 Gigabit Ethernet:
When MTU_SIZE=1500, ping succeeds at ICMP_SIZE=1476, fails at ICMP_SIZE=1477 and above.
When MTU_SIZE=1490, ping succeeds at ICMP_SIZE=1466, fails at ICMP_SIZE=1467 and above.
When MTU_SIZE=1501, ping succeeds at ICMP_SIZE=1477, fails at ICMP_SIZE=1478 and above.
When MTU_SIZE=500, ping succeeds at ICMP_SIZE=476, fails at ICMP_SIZE=477 and above.
When MTU_SIZE=1900, ping succeeds at ICMP_SIZE=1876, fails at ICMP_SIZE=1877 and above.
This case behaves exactly as Edward Thomson said, except that in my test the PRACTICAL MTU=MTU_SIZE+4.
case 3:
Device1 = Linux 4.4.50 with Raspberry Pi 2 Module B ETH:
When MTU_SIZE=1500, ping succeeds at ICMP_SIZE=1472, fails at ICMP_SIZE=1473 and above. So there is a PRACTICAL MTU=1500 (20B(IP header)+8B(ICMP header)+1472B(ICMP data)) working there.
When MTU_SIZE=1490, it behaves the same as MTU_SIZE=1500.
When MTU_SIZE=1501, it behaves the same as MTU_SIZE=1500.
When MTU_SIZE=2000, it behaves the same as MTU_SIZE=1500.
When MTU_SIZE=500, it behaves the same as MTU_SIZE=1500.
This case behaves exactly as Ron Maupin said in Why MTU configuration doesn't take effect on receiving direction?.
To sum up: in the real world, after you set the MTU with ifconfig,
sometimes the Rx IP packets get dropped when they exceed 1504, no matter what MTU value you set (unless jumbo frames are enabled);
sometimes the Rx IP packets get dropped when they exceed the MTU+4 you set on the receiving device;
sometimes the Rx IP packets get dropped when they exceed 1500, no matter what MTU value you set;
and so on.
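The pattern running through the ping tests above is that an IPv4 ICMP echo carries 20 bytes of IP header plus 8 bytes of ICMP header, so the largest ping -s payload that fits a given practical MTU is that MTU minus 28 (the function name is mine):

```python
def max_ping_payload(practical_mtu: int) -> int:
    """Largest `ping -s` size fitting an IPv4 practical MTU:
    MTU minus 20-byte IP header and 8-byte ICMP header."""
    return practical_mtu - 20 - 8

print(max_ping_payload(1504))  # 1476, as observed in cases 1 and 2
print(max_ping_payload(1500))  # 1472, as observed in case 3
```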

UDP IP Fragmentation and MTU

I'm trying to understand some behavior I'm seeing in the context of sending UDP packets.
I have two little Java programs: one that transmits UDP packets, and the other that receives them. I'm running them locally on my network between two computers that are connected via a single switch.
The MTU setting (reported by /sbin/ifconfig) is 1500 on both network adapters.
If I send packets with a size < 1500, I receive them. Expected.
If I send packets with 1500 < size < 24258 I receive them. Expected. I have confirmed via wireshark that the IP layer is fragmenting them.
If I send packets with size > 24258, they are lost. Not Expected. When I run wireshark on the receiving side, I don't see any of these packets.
I was able to see similar behavior with ping -s.
ping -s 24258 hostA works but
ping -s 24259 hostA fails.
Does anyone understand what may be happening, or have ideas of what I should be looking for?
Both computers are running CentOS 5 64-bit. I'm using a 1.6 JDK, but I don't really think it's a programming problem; it's a networking or maybe OS problem.
Implementations of the IP protocol are not required to be capable of handling arbitrarily large packets. In theory, the maximum possible IP packet size is 65,535 octets, but the standard only requires that implementations support at least 576 octets.
It would appear that your host's implementation supports a maximum size much greater than 576, but still significantly smaller than the theoretical maximum of 65,535. (I don't think the switch is the problem: it doesn't need to do any reassembly, since it isn't even operating at the IP layer.)
The IP standard further recommends that hosts not send packets larger than 576 bytes unless they are certain that the receiving host can handle the larger size. You should consider whether it would be better for your program to send smaller packets; 24,258 bytes seems awfully large to me, and there is a good chance that many hosts won't handle packets that large.
Note that these packet size limits are entirely separate from MTU (the maximum frame size supported by the data link layer protocol).
I found the following which may be of interest:
Determine the maximum size of a UDP datagram packet on Linux
Set the DF bit in the IP header and send progressively larger packets to determine at what point a packet would be fragmented, as in Path MTU Discovery. An oversized packet should then trigger an ICMP type 3, code 4 message (fragmentation needed and DF set), indicating that the packet was too large to be sent without fragmentation.
Dan's answer is useful, but note that after headers you're really limited to 65507 bytes of payload.
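Where that 65507 comes from: the IPv4 total-length field is 16 bits, so a packet can be at most 65535 bytes, and that total includes the 20-byte IP header and the 8-byte UDP header:

```python
# Maximum IPv4 packet size, set by the 16-bit total-length field.
MAX_IP_PACKET = 65535
# Subtract the 20-byte IPv4 header and 8-byte UDP header.
MAX_UDP_PAYLOAD = MAX_IP_PACKET - 20 - 8
print(MAX_UDP_PAYLOAD)  # 65507
```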
