I have a C application which transmits a UDP stream. It works well on most servers, but it goes crazy on a few servers.
I have a 100 Mbps network connection on the server, say eth1. Over this connection I usually transmit (TX) UDP streams of around 10-30 Mbps, and the same connection carries around 100-300 Kbps RX back to the server. I have another network connection on the server, say eth0, from which the C application receives UDP streams and forwards them out over the 100 Mbps connection, eth1.
My application uses the blocking sendto() function to transmit UDP packets on eth1. Packets are of variable length, from 17 bytes up to a maximum of 1333 bytes, but most of the time more than 1000 bytes.
The problem is: sometimes sendto() blocks on eth1 for a very long time, around 1 second. This happens once every 30 seconds to 3 minutes. While sendto() is blocked, the kernel buffers a lot of UDP packets from eth0 in the socket receive buffer that the C application reads from. Once sendto() returns from the long blocking call on eth1, the C application has a backlog of packets from eth0 to transmit, and it sends them all with the next sendto() calls. This creates a spike in the rate seen at the other endpoint that receives the UDP stream from eth1, producing a Z-like rate graph there. This Z-like spike in the rate is my problem.
I have tried increasing wmem_default from around 131 KB to 5 MB in the kernel settings to overcome the spike, and doing so resolves the spike: I no longer get the Z-like spike in the rate at the other endpoint. But I got a new issue: I now get a lot of packet losses in place of the spike. I think this may be because the send buffer of eth1 accumulates a lot of packets while sending the current packet from eth1 takes a long time (which may be why sendto() was blocking for so long). At the next instant, when the NIC sends all the accumulated packets from the send buffer in a short time, this may cause network congestion, and I may be getting the packet losses instead of the spike.
So that is the second problem. But I think the root cause is: why does the NIC sometimes pause for a long time while sending traffic, once every 30 seconds to 3 minutes?
Maybe I need to look into the TX ring buffer of the eth1 driver? When the socket send buffer gets full because the NIC is not transmitting everything in time (due to the random long TX pauses), the next call to sendto() blocks waiting for room in the socket send buffer; does it also block waiting for room in the driver's TX ring buffer?
Please don't tell me that UDP is unreliable and that we can't control packet losses. I know it is unreliable and that UDP packets can be lost. But I am sure we can still do something to minimize the packet losses.
EDIT
I have tried increasing wmem_default from around 131 KB to 5 MB in the kernel settings to overcome the spike. I have also removed the blocking sendto() call; now I use sendto(sockfd, buf, len, MSG_DONTWAIT, dest_addr, addrlen); together with the large send buffer from wmem_default. I am not getting any EAGAIN or EWOULDBLOCK errors from sendto() thanks to the large send buffer, but packets are still being lost in place of the spike.
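Roughly, the send path now looks like this (a simplified sketch; everything except the sendto() call and flags is illustrative):

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Illustrative wrapper around the non-blocking send path. */
    ssize_t send_packet(int sockfd, const void *buf, size_t len,
                        const struct sockaddr *dest_addr, socklen_t addrlen)
    {
        ssize_t n = sendto(sockfd, buf, len, MSG_DONTWAIT, dest_addr, addrlen);
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* Socket send buffer full. With the enlarged wmem_default this
             * never happens in my tests, but it still has to be handled. */
            return 0;
        }
        return n;
    }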
EDIT 2
With the non-blocking sendto() call and the huge wmem_default, and with no EAGAIN or EWOULDBLOCK errors from sendto(), the spikes are gone because not many packets accumulate in the receive buffer of eth0. I think this is a possible solution from the application side. But the main question remains: why does the NIC slow down every few moments? What could the possible reasons be? While it recovers from a long TX pause it will have a lot of packets accumulated in the send buffer, which are then sent as a burst the next moment, congesting the network and causing a lot of packet losses.
Another update
I use this same C application to transmit locally on the machine (127.0.0.1), and I never get any spike or packet loss problems locally.
The problem is: sometimes sendto() blocks on eth1 for a very long time, around 1 second.
Blocking sendto may block, surprisingly.
The problem is: sometimes sendto() blocks on eth1 for a very long time, around 1 second.
It could be that the IP stack is performing path MTU discovery:
While MTU discovery is in progress, initial packets from datagram sockets may be dropped. Applications using UDP should be aware of this and not take it into account for their packet retransmit strategy.
I have tried increasing wmem_default from around 131 KB to 5 MB in the kernel settings to overcome the spike.
Be careful with increasing buffer sizes. After a certain limit increasing buffer sizes only increases the amount of queuing and hence delay, leading to the infamous bufferbloat.
You may also play around with NIC Queuing Disciplines, they are responsible for dropping outgoing packets.
Related
I was wondering: imagine there is no data to read from a TCP socket, and then a whole (full) frame of 1492 bytes arrives. In your code (C, or any language supporting TCP) you then recv, say, 4096 bytes. Will the OS guarantee that recv reads the whole 1492 bytes, or is it possible that the loading of the frame into memory and the recv are "interleaved", so that recv may get less?
TCP is a stream-oriented protocol. Data are received in order, but you must not make any assumptions about how many times you have to call recv until you have received all your data.
It is up to your application to repeat the calls to recv until you know you have received what you need.
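A minimal sketch of that loop, assuming you know (for example from a length prefix in your own protocol) how many bytes you expect:

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Read exactly `want` bytes from a connected TCP socket.
     * Returns 0 on success, -1 on error or if the peer closes early. */
    int recv_all(int fd, char *buf, size_t want)
    {
        size_t got = 0;
        while (got < want) {
            ssize_t n = recv(fd, buf + got, want - got, 0);
            if (n <= 0)        /* 0 = connection closed, < 0 = error */
                return -1;
            got += (size_t)n;
        }
        return 0;
    }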
(1) TCP is a stream-oriented protocol. This means that it accepts a stream of data from the upper layer on the sender and returns a stream of data to the upper layer on the receiver. TCP itself receives packets from the IP layer and then reconstructs the stream; that is, at some point the packets cease to exist. In theory it is possible that somewhere during this reconstruction only half of an incoming packet is copied into the buffer, but it seems to me pretty unlikely that this would happen.
Now, the Linux man page states:
The receive calls normally return any data available up to the requested amount,
I would interpret that as "if one packet has arrived (correctly, in order, etc.), you will get the whole packet's worth of data". But there is no guarantee.
On the other hand, the Windows docs state:
recv will return as much data as is currently available—up to the size of the buffer specified.
Which sounds more like a guarantee.
Note, however, that the data will only be returned if the packet was received correctly and it is the next in-order packet (i.e. it has the next expected sequence numbers).
(2) Now, the TCP layer works on complete packets; it is actually impossible for it to do any interleaving. Ethernet has a checksum, which cannot be verified unless the packet was received completely, and packets with an incorrect Ethernet checksum should be filtered out by the network card. TCP also has a checksum, which requires all of the packet data to compute. So, if the network card has passed the packet to your OS, the data should be available.
(3) I don't think you can assume that if a packet has been received it is immediately available. A pretty common feature of network cards is large receive offload (the receive-side counterpart of TCP segmentation offload), which reconstructs part of the stream and results in the network card passing up one TCP packet that was assembled from multiple TCP segments. There are other mechanisms as well for reducing the number of interrupts, which more or less result in several packets arriving at once. So the more likely situation is that you will see some delay and then receive the data from several packets at once.
The point is, the opposite of what you described is what is likely to happen. However, I still would not write an application that makes any assumptions about how large a chunk of data is available at a time; that negates the concept of a stream.
When working with WinSock or POSIX TCP sockets (in C/C++, so no extra Java/Python/etc. wrapping), are there any efficiency pros/cons to building up a larger buffer (say, up to 4 KB) in user space and then making as few calls to send as possible, versus making multiple smaller calls directly with the bits of data (say 1-1000 bytes), other than the fact that for non-blocking/asynchronous sockets the single buffer is potentially easier for me to manage?
I know that small buffers are not recommended with recv, but I couldn't find anything about sending.
For example, does each send call on common platforms go into kernel mode? Could a 1-byte send actually result in a 1-byte packet being transmitted under normal conditions?
As explained in TCP/IP Illustrated Vol. I by Richard Stevens, TCP divides the send buffer into near-optimum segments that fit within the maximum packet size along the path to the other TCP peer. That means it will never try to send segments that would be fragmented by IP along the route to the destination (since TCP sets the DF bit, a router that would need to fragment a packet instead sends back an ICMP "fragmentation needed" message, and TCP takes it into account to reduce the MSS for this connection). That said, there is no need for a buffer larger than the maximum packet size of the link-level interfaces you'll have along the path. Having one that is, say, two or three times that size just ensures that TCP will not stop sending as soon as it receives an acknowledgement from the remote peer merely because its buffer has run out of data.
Consider that the normal interface type is Ethernet, with a maximum packet size of 1500 bytes, so TCP normally doesn't send a segment larger than that. And it typically has an internal buffer of 8 KB per connection, so there's little point in adding buffer space in kernel space beyond that (if that is the only reason to have a buffer in kernel space).
Of course, there are other factors that may force you to use a buffer in user space (for example, you want to store the data to send to your peer somewhere, since there is only about 8 KB of buffer space in kernel space and you may need more room while doing other processing). An example: ircd (the Internet Relay Chat daemon) uses write buffers of up to 100 KB before dropping a connection because the other side is not receiving/acknowledging that data. If you only write(2) to the connection, you'll be put on wait once the kernel buffer is full, and perhaps that's not what you want.
The reason to have buffers in user space is that TCP also does flow control, so when it is not able to send data, that data has to be put somewhere. You'll have to decide whether your process should save that data up to some limit, or whether it can simply block on sending until the receiver is able to receive again. The buffer size in kernel space is limited and normally out of the user's/developer's control; the buffer size in user space is limited only by the resources available to it.
Receiving/sending small chunks of data over a TCP connection is not recommended because of the per-packet overhead that the TCP and IP headers impose. Consider a telnet connection in which, for each character sent, a TCP header and an IP header are added (20 bytes minimum for TCP, 20 bytes minimum for IP, 14 bytes for the Ethernet frame and 4 for the Ethernet CRC): that makes 60+ bytes on the wire to transmit a single character. And normally each TCP segment is acknowledged individually, so it costs a full round-trip time to send a segment and get the acknowledgement (just to be able to free the buffer resources and consider that character transmitted).
So, finally, what's the limit? It depends on your application. If you can cope with the kernel resources available and don't need more buffering, you can get by without buffers in user space. If you need more, you'll need to implement buffering and feed the kernel buffer from your own buffer as space becomes available.
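As an illustration of such user-space buffering (a rough sketch; the buffer size and helper names are made up for the example):

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    #define OUTBUF_SIZE 4096            /* illustrative size */

    struct outbuf {
        char   data[OUTBUF_SIZE];
        size_t used;
    };

    /* Flush everything buffered so far with a single send() call. */
    static int outbuf_flush(int fd, struct outbuf *ob)
    {
        size_t off = 0;
        while (off < ob->used) {
            ssize_t n = send(fd, ob->data + off, ob->used - off, 0);
            if (n < 0)
                return -1;              /* caller decides: retry, block, or drop */
            off += (size_t)n;
        }
        ob->used = 0;
        return 0;
    }

    /* Append a small piece of data, flushing first if it would not fit. */
    static int outbuf_append(int fd, struct outbuf *ob, const void *p, size_t len)
    {
        if (len > OUTBUF_SIZE)
            return -1;                  /* too large to coalesce; send it directly */
        if (ob->used + len > OUTBUF_SIZE && outbuf_flush(fd, ob) < 0)
            return -1;
        memcpy(ob->data + ob->used, p, len);
        ob->used += len;
        return 0;
    }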
Yes, a one-byte send can, under perfectly normal conditions, result in a TCP packet with only a single byte of payload being transmitted. Send coalescing in TCP is normally done by means of Nagle's algorithm. With Nagle's algorithm, sending data is delayed if and only if there is data that has already been sent but not yet acknowledged.
Conversely, data will be sent immediately if there is no unacknowledged data, which is usually the case in the following situations:
The connection has just been opened
The connection has been idle for some time
The connection only received data but nothing was sent for some time
In that case the first send call that your application performs will cause a packet to be sent immediately, no matter how small. So starting communication with two or more small sends is usually a bad idea because it increases overhead and delay.
The infamous "send send recv" pattern can also cause really large delays (e.g. on Windows typically 200ms). This happens if the local TCP stack uses Nagle's algorithm (which will usually delay the second send) and the remote stack uses delayed acknowledgment (which can delay the acknowledgment of the first packet).
Since most TCP stack implementations use both, Nagle's algorithm and delayed acknowledgment, this pattern should best be avoided.
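If your protocol really does require small back-to-back sends, Nagle can be disabled per socket with TCP_NODELAY (a minimal sketch; weigh the extra packet overhead before doing this):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* Disable Nagle's algorithm so small writes are sent immediately. */
    int disable_nagle(int fd)
    {
        int one = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }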
I'm trying to understand the correct way to increase the socket buffer size on Linux for our streaming network application. The application receives variable bitrate data streamed to it on a number of UDP sockets. The volume of data is substantially higher at the start of the stream and I've used:
# sar -n UDP 1 200
to show that the UDP stack is discarding packets and
# ss -un -pa
to show that each socket's Recv-Q length grows to nearly the limit (124928, from sysctl net.core.rmem_default) before packets are discarded. This implies that the application simply can't keep up with the start of the stream. After discarding enough initial packets the data rate slows down and the application catches up; Recv-Q trends towards 0 and remains there for the duration.
I'm able to address the packet loss by substantially increasing the rmem_default value which increases the socket buffer size and gives the application time to recover from the large initial bursts. My understanding is that this changes the default allocation for all sockets on the system. I'd rather just increase the allocation for the specific UDP sockets and not modify the global default.
My initial strategy was to modify rmem_max and to use setsockopt(SO_RCVBUF) on each individual socket. However, this question makes me concerned about disabling Linux autotuning for all sockets and not just UDP.
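That strategy would look roughly like this (a sketch; the value shown is just an example):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Request a larger receive buffer on one UDP socket. The kernel caps
     * the request at net.core.rmem_max (and roughly doubles the value it
     * actually books for bookkeeping overhead), so rmem_max would still
     * have to be raised first. */
    int grow_rcvbuf(int fd, int bytes)        /* e.g. bytes = 4 * 1024 * 1024 */
    {
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0) {
            perror("setsockopt(SO_RCVBUF)");
            return -1;
        }
        return 0;
    }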
udp(7) describes the udp_mem setting, but I'm confused about how those values interact with rmem_default and rmem_max. The language it uses is "all sockets", so my suspicion is that these settings apply to the UDP stack as a whole and not to individual UDP sockets.
Is udp_rmem_min the setting I'm looking for? It seems to apply to individual sockets but global to all UDP sockets on the system.
Is there a way to safely increase the socket buffer length for the specific UDP ports used in my application without modifying any global settings?
Thanks.
Jim Gettys is armed and coming for you. Don't go to sleep.
The solution to network packet floods is almost never to increase buffering. Why is your protocol's queueing strategy not backing off? Why can't you just use TCP if you're trying to send so much data in a stream (which is what TCP was designed for)?
I would like to have UDP packets copied directly from the ethernet adapter into my userspace buffer
Some details on my setup:
I am receiving data from a pair of gigabit ethernet cameras. Combined I am receiving 28800 UDP packets per second (1 packet per line * 30FPS * 2 cameras * 480 lines). There is no way for me to switch to jumbo frames, and I am already looking into tuning driver level interrupts for reduced CPU utilization. What I am after here is reducing the number of times I am copying this ~40MB/s data stream.
This is the best source I have found on this, but I was hoping there was a more complete reference or proof that such an approach worked out in practice.
This article may be useful:
http://yusufonlinux.blogspot.com/2010/11/data-link-access-and-zero-copy.html
Your best avenues are recvmmsg and increasing RX interrupt coalescing.
http://lwn.net/Articles/334532/
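A rough sketch of draining a socket in batches with recvmmsg (Linux-specific; the batch and packet sizes here are just illustrative):

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/socket.h>

    #define BATCH   64                  /* datagrams per system call, illustrative */
    #define PKT_MAX 2048                /* larger than any expected datagram */

    /* Receive up to BATCH datagrams with one system call.
     * Returns the number of datagrams received, or -1 on error.
     * The length of datagram i is left in msgs[i].msg_len by the kernel. */
    int drain_socket(int fd, char bufs[BATCH][PKT_MAX], struct mmsghdr msgs[BATCH])
    {
        struct iovec iovs[BATCH];

        memset(msgs, 0, BATCH * sizeof(msgs[0]));
        for (int i = 0; i < BATCH; i++) {
            iovs[i].iov_base           = bufs[i];
            iovs[i].iov_len            = PKT_MAX;
            msgs[i].msg_hdr.msg_iov    = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }
        /* MSG_DONTWAIT: return at once with whatever is already queued. */
        return recvmmsg(fd, msgs, BATCH, MSG_DONTWAIT, NULL);
    }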
You can go lower and mirror how Wireshark/tcpdump operate, but it becomes futile to attempt any serious processing on top of that, since you would have to decode everything yourself.
At only 30,000 packets per second I wouldn't worry too much about copying packets; those problems arise when dealing with 3,000,000 messages per second.
I'm trying to understand some behavior I'm seeing in the context of sending UDP packets.
I have two little Java programs: one that transmits UDP packets, and the other that receives them. I'm running them locally on my network between two computers that are connected via a single switch.
The MTU setting (reported by /sbin/ifconfig) is 1500 on both network adapters.
If I send packets with a size < 1500, I receive them. Expected.
If I send packets with 1500 < size < 24258 I receive them. Expected. I have confirmed via wireshark that the IP layer is fragmenting them.
If I send packets with size > 24258, they are lost. Not Expected. When I run wireshark on the receiving side, I don't see any of these packets.
I was able to see similar behavior with ping -s.
ping -s 24258 hostA works but
ping -s 24259 hostA fails.
Does anyone understand what may be happening, or have ideas of what I should be looking for?
Both computers are running CentOS 5 64-bit. I'm using a 1.6 JDK, but I don't really think it's a programming problem, it's a networking or maybe OS problem.
Implementations of the IP protocol are not required to be capable of handling arbitrarily large packets. In theory, the maximum possible IP packet size is 65,535 octets, but the standard only requires that implementations support at least 576 octets.
It would appear that your host's implementation supports a maximum size much greater than 576, but still significantly smaller than the theoretical maximum of 65,535. (I don't think the switch is the problem, because it shouldn't need to do any reassembly; it isn't even operating at the IP layer.)
The IP standard further recommends that hosts not send packets larger than 576 bytes unless they are certain that the receiving host can handle the larger packet size. You should maybe consider whether it would be better for your program to send a smaller packet size. 24,258 bytes seems awfully large to me; I suspect that a lot of hosts won't handle packets that large.
Note that these packet size limits are entirely separate from MTU (the maximum frame size supported by the data link layer protocol).
I found the following which may be of interest:
Determine the maximum size of a UDP datagram packet on Linux
Set the DF bit in the IP header and send progressively larger packets to determine at what point a packet would be fragmented, as per Path MTU Discovery. A packet that would need fragmentation should then result in an ICMP type 3, code 4 message indicating that the packet was too large to be sent without being fragmented.
Dan's answer is useful, but note that after headers you're really limited to 65,507 bytes of UDP payload.
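On Linux, setting the DF bit on a UDP socket can be done with the IP_MTU_DISCOVER socket option (a sketch under that assumption; oversized sends then fail instead of being fragmented):

    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Force path MTU discovery (DF bit set) on this UDP socket.
     * A sendto() with a datagram larger than the current path MTU then
     * fails with EMSGSIZE rather than being fragmented, which lets you
     * probe the usable size. The UDP payload can never exceed 65,507
     * bytes in any case. */
    int enable_df(int fd)
    {
        int val = IP_PMTUDISC_DO;
        return setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val));
    }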