Disable TCP Delayed ACKs - linux

I have an application that receives relatively sparse traffic over TCP with no application-level responses. I believe the TCP stack is sending delayed ACKs (based on glancing at a network packet capture). What is the recommended way to disable delayed-ACK in the network stack for a single socket? I've looked at TCP_QUICKACK, but it seems that the stack will change it under my feet anyways.
This is running on a Linux 2.6 kernel, and I am not worried about portability.

You could setsockopt(sockfd, IPPROTO_TCP, TCP_QUICKACK, (int[]){1}, sizeof(int)) after every recv you perform. It appears that TCP_QUICKACK is only reset when there is data being sent or received; if you're not sending any data, then it will only get reset when you receive data, in which case you can simply set it again.
You can check this in the 14th field of /proc/net/tcp; if it is not 1, ACKs should be sent immediately... if I'm reading the TCP code correctly. (I'm not an expert at this either.)
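For illustration, a minimal sketch of that pattern (the helper name is mine and error handling is omitted; TCP_QUICKACK is Linux-specific):
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
/* Sketch: re-arm TCP_QUICKACK after every receive, since the stack may
 * clear it again once data is sent or received. */
static ssize_t recv_quickack(int sockfd, void *buf, size_t len)
{
    ssize_t n = recv(sockfd, buf, len, 0);
    int one = 1;
    (void) setsockopt(sockfd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
    return n;
}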

I believe you can use setsockopt() with the TCP_NODELAY option, which disables the Nagle algorithm.
Edit: Found a link: http://www.ibm.com/developerworks/linux/library/l-hisock.html
Edit 2: Tom is correct. Nagle does not affect delayed ACKs.

The accepted answer has a bug; it should be
int flags = 1;
socklen_t flglen = sizeof(flags);
setsockopt(sfd, SOL_TCP, TCP_QUICKACK, &flags, flglen);
(SOL_TCP, not IPPROTO_TCP)
Posting here in case someone needs it.
As @damolp suggested, it is not a bug. Keeping it here for the record.
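For reference, SOL_TCP and IPPROTO_TCP are interchangeable here on Linux/glibc, which is why it is not a bug; a quick sanity check (my own, not from the thread):
#include <stdio.h>
#include <netinet/in.h>   /* IPPROTO_TCP */
#include <netinet/tcp.h>  /* SOL_TCP (Linux/glibc) */
int main(void)
{
    /* Both constants evaluate to 6, so either works as the level argument. */
    printf("SOL_TCP=%d IPPROTO_TCP=%d\n", SOL_TCP, IPPROTO_TCP);
    return 0;
}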

Related

TCP_NODELAY parameter implication on MSS and CPU utilisation

While I was trying to compose a test suite using netperf, I had to go through the manual, where I came across the line below for the -D option:
If setting TCP_NODELAY with -D affects throughput and/or service demand for tests where the send size (-m) is larger than the MSS it suggests the TCP/IP stack’s implementation of the Nagle Algorithm may be broken, perhaps interpreting the Nagle Algorithm on a segment by segment basis rather than the proper user send by user send basis
My question is: how can setting TCP_NODELAY, and its effect on throughput, determine whether the Nagle implementation is broken? Can someone help me with a logical explanation?
In broad terms Nagle goes something like:
Is the amount of data in this send, added to any data queued waiting to be sent, larger than the Maximum Segment Size (MSS) of this connection? If the answer is yes, then send the data now, so long as there are no other constraints on sending such as the congestion window or the receiver’s advertised window. If the answer is no, then consider the next step:
Is the connection “idle”? In other words, is there no un-ACKnowledged data out on the network? If so, then send the data in the user's send now. If the connection is not idle, queue the data and wait. We know that one or more of three things will happen to cause the data to be sent:
The application will continue making send calls and we get to an MSS-worth of data to send.
An ACKnowledgement will arrive from the remote for the currently un-ACKed data, making the connection “idle” again.
The retransmission timer for the un-ACKed data will expire and we might piggy-back this data with the retransmission.
The main point is that Nagle is supposed to be evaluated on a user-send by user-send basis. If disabling it results in higher throughput with a send size greater than the MSS, it implies the Nagle implementation of that stack is going TCP segment by TCP segment instead.
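As a toy illustration of that per-user-send decision (the function and values here are my own sketch of the rules above, not code from any real stack):
#include <stdbool.h>
#include <stdio.h>
/* Hypothetical sketch of the Nagle decision described above:
 * evaluated once per user send, not per TCP segment. */
static bool nagle_send_now(size_t send_len, size_t queued_len,
                           size_t mss, bool connection_idle)
{
    if (send_len + queued_len >= mss)
        return true;   /* an MSS-worth of data is available: send now */
    if (connection_idle)
        return true;   /* nothing un-ACKed in flight: send now */
    return false;      /* queue and wait for an ACK, more data, or the
                          retransmission timer */
}
int main(void)
{
    /* A 2000-byte user send with a 1460-byte MSS goes out immediately,
     * even on a non-idle connection; a 100-byte send gets queued. */
    printf("%d\n", nagle_send_now(2000, 0, 1460, false)); /* 1 */
    printf("%d\n", nagle_send_now(100, 0, 1460, false));  /* 0 */
    return 0;
}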

Send TCP SYN packet with payload

Is it possible to send a SYN packet with a self-defined payload when initiating TCP connections? My gut feeling is that it is doable in theory. I'm looking for an easy way to achieve this goal in Linux (with C or perhaps Go), but because it is not standard behavior, I haven't found helpful information yet. (This post is quite similar, though it is not very helpful.)
Please help me, thanks!
EDIT: Sorry for the ambiguity. Beyond whether such a thing is possible at all, I'm also looking for a way, or even sample code, to achieve it.
As far as I understand (and as written in a comment by Jeff Bencteux in another answer), TCP Fast Open addresses this for TCP.
See this LWN article:
Eliminating a round trip
Theoretically, the initial SYN segment could contain data sent by the initiator of the connection: RFC 793, the specification for TCP, does permit data to be included in a SYN segment. However, TCP is prohibited from delivering that data to the application until the three-way handshake completes.
...
The aim of TFO is to eliminate one round trip time from a TCP conversation by allowing data to be included as part of the SYN segment that initiates the connection.
Obviously if you write your own software on both sides, it is possible to make it work however you want. But if you are relying on standard software on either end (such as, for example, a standard linux or Windows kernel), then no, it isn't possible, because according to TCP, you cannot send data until the session is established, and the session isn't established until you get an acknowledgment to your SYN from the other peer.
So, for example, if you send a SYN packet that also includes additional payload to a linux kernel (caveat: this is speculation to some extent since I haven't actually tried it), it will simply ignore the payload and proceed to acknowledge (SYN/ACK) or reject (with RST) the SYN depending on whether there's a listener.
In any case, you could try this, but since you're going "off the reservation" so to speak, you would need to craft your own raw packets; you won't be able to convince your local OS to create them for you.
The python scapy package could construct it:
#!/usr/bin/env python2
from scapy.all import *
sport = 3377
dport = 2222
src = "192.168.40.2"
dst = "192.168.40.135"
ether = Ether(type=0x800, dst="00:0c:29:60:57:04", src="00:0c:29:78:b0:ff")
ip = IP(src=src, dst=dst)
SYN = TCP(sport=sport, dport=dport, flags='S', seq=1000)
xsyn = ether / ip / SYN / "Some Data"
packet = xsyn.build()
print(repr(packet))
TCP Fast Open does that, but both ends have to support TCP Fast Open. QUIC, a newer protocol, was designed to solve this same problem (a.k.a. 0-RTT).
I had previously stated it was not possible. In the general sense, I stand by that assessment.
However, for the client, it is actually just not possible using the connect() API. There is an alternative connect API when using TCP Fast Open. Example:
sfd = socket(AF_INET, SOCK_STREAM, 0);
sendto(sfd, data, data_len, MSG_FASTOPEN,
       (struct sockaddr *) &server_addr, addr_len);
       // Replaces connect() + send()/write()
// read and write further data on connected socket sfd
close(sfd);
There is no API to allow the server to attach data to the SYN-ACK sent to the client.
Even so, enabling TCP Fast Open on both the client and server may allow you to achieve your desired result, if you only mean data from the client, but it has its own issues.
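For completeness, a hedged sketch of enabling Fast Open on the listening side (the queue length of 5 is arbitrary, the kernel must also permit TFO via /proc/sys/net/ipv4/tcp_fastopen, and error handling is minimal):
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
/* Sketch: enable TCP Fast Open on an already-created, already-bound
 * listening socket. Data carried in the client's SYN becomes readable
 * on the accepted socket once the handshake completes. */
static int enable_tfo(int listen_fd)
{
    int qlen = 5;   /* max pending TFO requests; the value is illustrative */
    if (setsockopt(listen_fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen)) < 0)
        return -1;
    return listen(listen_fd, SOMAXCONN);
}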
If you want the same reliability and data stream semantics of TCP, you will need a new reliable protocol that has the initial data segment in addition to the rest of what TCP provides, such as congestion control and window scaling.
Luckily, you don't have to implement it from scratch. The UDP protocol is a good starting point and can serve as the layer underneath your new transport protocol.
Other projects have done similar things, so it may be possible to use those instead of implementing your own. Consider QUIC or UDT. These protocols were implemented over the existing UDP protocol, and thus avoid the issues faced with deploying TCP Fast Open.

Discard incoming UDP packet without reading

In some cases, I'd like to explicitly discard packets waiting on the socket with as little overhead as possible. It seems there's no explicit "drop udp buffer" system call, but maybe I'm wrong?
The next best way would be probably to recv the packet to a temporary buffer and just drop it. It seems I can't receive 0 bytes, since man says about recv: The return value will be 0 when the peer has performed an orderly shutdown. So 1 is the minimum in this case.
Is there any other way to handle this?
Just in case - this is not a premature optimisation. The only thing this server is doing is forwarding / dispatching the UDP packets in a specific way - although recv with len=1 won't kill me, I'd rather just discard the whole queue in one go with some more specific function (hopefully lowering the latency).
You can have the kernel discard your UDP packets by setting the UDP receive buffer to 0.
int UdpBufSize = 0;
socklen_t optlen = sizeof(UdpBufSize);
setsockopt(socket, SOL_SOCKET, SO_RCVBUF, &UdpBufSize, optlen);
Whenever you see fit to receive packets, you can then set the buffer to, for example, 4096 bytes.
I'd rather just discard the whole queue in one go
Since this is UDP we are talking about here: close(udp_server_socket) and then socket()/bind() again?
To my understanding, that should work.
man says about recv: the return value will be 0 when the peer has performed an orderly shutdown.
That doesn't apply to UDP. There is no "connection" to shut down in UDP. A return value of 0 is perfectly valid; it just means a datagram with no payload was received (i.e. the IP and UDP headers only).
Not sure whether that helps your problem or not. I really don't understand where you are going with the len=1 stuff.
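If the goal is simply to empty the receive queue without looking at the contents, one option (my own sketch, not from the answers above) is to loop with a non-blocking recv into a tiny scratch buffer until the kernel reports the queue is empty; for datagram sockets, the unread remainder of each packet is discarded anyway:
#include <errno.h>
#include <sys/socket.h>
/* Sketch: drain and discard every queued datagram on a UDP socket.
 * Each recv() consumes one whole datagram, even with a 1-byte buffer. */
static void drain_udp(int fd)
{
    char scratch;
    for (;;) {
        ssize_t n = recv(fd, &scratch, sizeof(scratch), MSG_DONTWAIT);
        if (n < 0) {
            if (errno == EINTR)
                continue;       /* interrupted: try again */
            break;              /* EAGAIN/EWOULDBLOCK (queue empty) or error */
        }
    }
}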

I need a TCP option (ioctl) to send data immediately

I've got an unusual situation: I'm using Linux in an embedded setting (an Intel box, currently running a 2.6.20 kernel) which has to communicate with an embedded system that has a partially broken TCP implementation. As near as I can tell right now, they expect each message from us to arrive in a separate Ethernet frame! They seem to have problems when messages are split across Ethernet frames.
We are on the local network with the device, and there are no routers between us (just a switch).
We are, of course, trying to force them to fix their system, but that may not end up being feasible.
I've already set TCP_NODELAY on my sockets (I connect to them), but that only helps if I don't try to send more than one message at a time. If I have several outgoing messages in a row, those messages tend to end up in one or two Ethernet frames, which causes trouble on the other system.
I can generally avoid the problem by using a timer to avoid sending messages too close together, but that obviously limits our throughput. Further, if I turn the time down too low, I risk network congestion holding up packet transmits and ending up allowing more than one of my messages into the same packet.
Is there any way that I can tell whether the driver has data queued or not? Is there some way I can force the driver to send independent write calls in independent transport layer packets? I've had a look through the socket(7) and tcp(7) man pages and I didn't find anything. It may just be that I don't know what I'm looking for.
Obviously, UDP would be one way out, but again, I don't think we can make the other end change anything much at this point.
Any help greatly appreciated.
IIUC, setting the TCP_NODELAY option should flush all packets (i.e. tcp.c implements setting of NODELAY with a call to tcp_push_pending_frames). So if you set the socket option after each send call, you should get what you want.
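A sketch of that suggestion (assuming the socket is already connected; re-setting the option after a send is harmless if it is already set):
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
/* Sketch: write one message, then (re)set TCP_NODELAY to nudge the stack
 * into pushing out anything it may still be holding back. */
static ssize_t send_message(int sockfd, const void *msg, size_t len)
{
    ssize_t n = send(sockfd, msg, len, 0);
    int one = 1;
    (void) setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    return n;
}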
You cannot work around a problem unless you're sure what the problem is.
If they've done the newbie mistake of assuming that recv() receives exactly one message then I don't see a way to solve it completely. Sending only one message per Ethernet frame is one thing, but if multiple Ethernet frames arrive before the receiver calls recv() it will still get multiple messages in one call.
Network congestion makes it practically impossible to prevent this (while maintaining decent throughput) even if they can tell you how often they call recv().
Maybe set TCP_NODELAY and set your MTU low enough so that there would be at most one message per frame? Oh, and set the "don't fragment" flag on outgoing packets.
Have you tried opening a new socket for each message and closing it immediately? The overhead may be nauseating, but this should delimit your messages.
In the worst case scenario you could go one level lower (raw sockets), where you have better control over the packets sent, but then you'd have to deal with all the nitty-gritty of TCP.
Maybe you could try putting the tcp stack into low-latency mode:
echo 1 > /proc/sys/net/ipv4/tcp_low_latency
That should favor emitting packets as quickly as possible over combining data. Read the tcp(7) man page for more information.

TCP handshake with SOCK_RAW socket

Ok, I realize this situation is somewhat unusual, but I need to establish a TCP connection (the 3-way handshake) using only raw sockets (in C, in linux) -- i.e. I need to construct the IP headers and TCP headers myself. I'm writing a server (so I have to first respond to the incoming SYN packet), and for whatever reason I can't seem to get it right. Yes, I realize that a SOCK_STREAM will handle this for me, but for reasons I don't want to go into that isn't an option.
The tutorials I've found online on using raw sockets all describe how to build a SYN flooder, but this is somewhat easier than actually establishing a TCP connection, since you don't have to construct a response based on the original packet. I've gotten the SYN flooder examples working, and I can read the incoming SYN packet just fine from the raw socket, but I'm still having trouble creating a valid SYN/ACK response to an incoming SYN from the client.
So, does anyone know a good tutorial on using raw sockets that goes beyond creating a SYN flooder, or does anyone have some code that could do this (using SOCK_RAW, and not SOCK_STREAM)? I would be very grateful.
MarkR is absolutely right -- the problem is that the kernel is sending reset packets in response to the initial packet because it thinks the port is closed. The kernel is beating me to the response and the connection dies. I was using tcpdump to monitor the connection already -- I should have been more observant and noticed that there were TWO replies, one of which was a reset that was screwing things up, as well as the response my program created. D'OH!
The solution that seems to work best is to use an iptables rule, as suggested by MarkR, to block the outbound packets. However, there's an easier way to do it than using the mark option, as suggested. I just match whether the reset TCP flag is set. During the course of a normal connection this is unlikely to be needed, and it doesn't really matter to my application if I block all outbound reset packets from the port being used. This effectively blocks the kernel's unwanted response, but not my own packets. If the port my program is listening on is 9999 then the iptables rule looks like this:
iptables -t filter -I OUTPUT -p tcp --sport 9999 --tcp-flags RST RST -j DROP
You want to implement part of a TCP stack in userspace... this is ok, some other apps do this.
One problem you will come across is that the kernel will be sending out (generally negative, unhelpful) replies to incoming packets. This is going to screw up any communication you attempt to initiate.
One way to avoid this is to use an IP address and interface that the kernel's own IP stack is not using -- which is fine, but you will need to deal with link-layer stuff (specifically, ARP) yourself. That would require a socket lower than IPPROTO_IP/SOCK_RAW -- you need a packet socket (I think).
It may also be possible to block the kernel's responses using an iptables rule -- but I rather suspect that the rule would apply to your own packets as well somehow, unless you can manage to get them treated differently (perhaps by applying a netfilter "mark" to your own packets?).
Read the man pages:
socket(7)
ip(7)
packet(7)
which explain the various options and ioctls that apply to each type of socket.
Of course you'll need a tool like Wireshark to inspect what's going on. You will need several machines to test this, I recommend using vmware (or similar) to reduce the amount of hardware required.
Sorry I can't recommend a specific tutorial.
Good luck.
I realise that this is an old thread, but here's a tutorial that goes beyond the normal SYN flooders: http://www.enderunix.org/docs/en/rawipspoof/
Hope it might be of help to someone.
I can't help you out on any tutorials.
But I can give you some advice on the tools that you could use to assist in debugging.
First off, as bmdhacks has suggested, get yourself a copy of wireshark (or tcpdump - but wireshark is easier to use). Capture a good handshake. Make sure that you save this.
Capture one of your handshakes that fails. Wireshark has quite good packet parsing and error checking, so if there's a straightforward error it will probably tell you.
Next, get yourself a copy of tcpreplay. This should also include a tool called "tcprewrite".
tcprewrite will allow you to split your previously saved capture files into two - one for each side of the handshake.
You can then use tcpreplay to play back one side of the handshake so you have a consistent set of packets to play with.
Then you use wireshark (again) to check your responses.
I don't have a tutorial, but I recently used Wireshark to good effect to debug some raw sockets programming I was doing. If you capture the packets you're sending, wireshark will do a good job of showing you if they're malformed or not. It's useful for comparing to a normal connection too.
There are structures for IP and TCP headers declared in netinet/ip.h & netinet/tcp.h respectively. You may want to look at the other headers in this directory for extra macros & stuff that may be of use.
You send a packet with the SYN flag set and a random sequence number (x). You should receive a SYN+ACK from the other side. This packet will have an acknowledgement number (y) that indicates the next sequence number the other side is expecting to receive, as well as the other side's own sequence number (z). You send back an ACK packet that has sequence number x+1 and ack number z+1 to complete the connection.
You also need to make sure you calculate appropriate TCP/IP checksums & fill out the remainder of the header for the packets you send. Also, don't forget about things like host & network byte order.
TCP is defined in RFC 793, available here: http://www.faqs.org/rfcs/rfc793.html
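For the checksum step mentioned above: the TCP checksum is the standard Internet checksum computed over an IPv4 pseudo-header (source and destination addresses, protocol, TCP length) followed by the TCP header and payload. A rough sketch, IPv4 only, with the struct and function names being my own:
#include <stddef.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htons, IPPROTO_TCP */
/* Pseudo-header that the TCP checksum also covers (RFC 793). */
struct pseudo_hdr {
    uint32_t src;        /* source address, network byte order */
    uint32_t dst;        /* destination address, network byte order */
    uint8_t  zero;
    uint8_t  proto;      /* IPPROTO_TCP */
    uint16_t tcp_len;    /* TCP header + payload length, network byte order */
};
/* One's-complement sum of 16-bit words, folded and inverted.
 * The input bytes are expected to be in network byte order. */
static uint16_t csum_finish(const void *data, size_t len, uint32_t sum)
{
    const uint16_t *p = data;
    while (len > 1) {
        sum += *p++;
        len -= 2;
    }
    if (len) {                       /* odd trailing byte, zero-padded */
        uint16_t last = 0;
        *(uint8_t *)&last = *(const uint8_t *)p;
        sum += last;
    }
    while (sum >> 16)                /* fold carries back in */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}
/* Checksum over pseudo-header + TCP segment. Zero the checksum field in
 * the TCP header first, then store the returned value there as-is. */
static uint16_t tcp_checksum(uint32_t src_be, uint32_t dst_be,
                             const void *seg, uint16_t seg_len)
{
    struct pseudo_hdr ph = { src_be, dst_be, 0, IPPROTO_TCP, htons(seg_len) };
    const uint16_t *p = (const uint16_t *)&ph;
    uint32_t sum = 0;
    for (size_t i = 0; i < sizeof ph / 2; i++)
        sum += p[i];
    return csum_finish(seg, seg_len, sum);
}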
Depending on what you're trying to do it may be easier to get existing software to handle the TCP handshaking for you.
One open source IP stack is lwIP (http://savannah.nongnu.org/projects/lwip/) which provides a full tcp/ip stack. It is very possible to get it running in user mode using either SOCK_RAW or pcap.
If you are using raw sockets and you send using a source MAC address different from the actual one, Linux will ignore the response packet and not send an RST.
