How do I increase TCP retransmission rate in Linux per-connection?

I'm using TCP over a very lossy network (almost a 20% drop rate) but one with extremely low latency (<2 ms), and the default TCP implementation on our Linux system handles this atrociously: sometimes it waits 5-6 seconds before retransmitting a packet. On the other side of the connection, our TCP stack just retries every 20 ms, and it works fine.
I can't find any way to manually trigger a retransmission, even with TCP_NODELAY and aggressive sending, and there do not appear to be per-socket settings for this; we only want to change the timing for specific sockets (the ones on this network), not system-wide.
Is there any kernel feature for either manually retransmitting over TCP, or for setting the timers aggressively enough to allow many retries per second?
(I am aware this is a similar, though not identical, issue to the one in this thread: Application control of TCP retransmission on Linux -- but I do not want to close the connection, as TCP_USER_TIMEOUT does; I just want it to keep retrying, a lot.)

TCP_NODELAY is an entirely unrelated option: it disables Nagle's algorithm for consolidating small packets, and is used on interactive connections such as ssh where it is desirable to have every single character sent in a separate packet (to get an immediate remote echo).
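For reference, this is roughly all TCP_NODELAY does (a minimal sketch, error handling omitted): it toggles Nagle's algorithm on a socket and has no influence on the retransmission timer.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Sketch: TCP_NODELAY only disables Nagle's algorithm (packet
     * consolidation) on a socket; it does not touch retransmission timing. */
    static void disable_nagle(int fd)
    {
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }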
Normally, the TCP layer will continuously monitor the RTT (round-trip time) to each received ACK and readjust its retransmission timeout accordingly; this is known as Karn's algorithm.
AFAIK, there are no tunables in the Linux kernel for retransmission timing, because this behaviour is specified by the RFCs and is embedded in the protocol implementation.
I suggest you read this introduction: https://www.catchpoint.com/blog/tcp-rtt, then capture a packet dump containing those exceptionally long retransmissions and post it here.
Also, are you sure that the packet loss is the same in both directions? Asymmetric loss could be an explanation.

Related

Can I use SO_REUSEPORT to distribute a single UDP flow to multiple receiver threads?

My Linux application needs to receive a single UDP flow with modestly-sized packets (~1 KB) at a rate on the order of ~600,000 packets per second. My current implementation is naive: it has a single thread that simply calls recv() repeatedly, placing the received data in a queue to be processed by another thread. Therefore, the receiver thread is only in charge of pulling in the packets.
In some initial testing that I've done, I'm only able to receive between 200,000-300,000 packets per second before the thread reaches full utilization of its CPU core. This obviously isn't good enough to meet the goal of ~600,000 packets per second.
Ideally, I would find some way of distributing the packet reception load across multiple threads. In looking for a solution to the problem, I came across the SO_REUSEPORT socket option, which allows multiple TCP/UDP sockets to be bound to the same IP/port combination. At first, this seemed to be exactly what I wanted.
However, the article also points out this detail:
Incoming connections and datagrams are distributed to the server sockets using a hash based on the 4-tuple of the connection—that is, the peer IP address and port plus the local IP address and port. This means, for example, that if a client uses the same socket to send a series of datagrams to the server port, then those datagrams will all be directed to the same receiving server (as long as it continues to exist). This eases the task of conducting stateful conversations between the client and server.
Therefore, if I only have a single UDP flow, the above hashing scheme would direct all of the packets to the same receiving socket (and thus the same thread), thwarting my attempt at parallelizing the work. So the question is: is there a way to receive a single flow of UDP packets in multiple threads, using SO_REUSEPORT or some other mechanism?
Note that my application can handle reordering of packets; the protocol that the datagrams are formatted with contains sequencing information that I can use to reorder them properly afterward.
If you haven't found a solution in the last 3 years, take a look at SO_ATTACH_REUSEPORT_CBPF. We had exactly the same issue and solved it by attaching a simple BPF program that distributes datagrams randomly, modulo n.
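For illustration, here is a minimal sketch of that approach (the port number and socket count are arbitrary, error handling is omitted, and it needs a reasonably recent kernel): bind N UDP sockets to the same port with SO_REUSEPORT, then attach a classic BPF program with SO_ATTACH_REUSEPORT_CBPF that picks a socket index pseudo-randomly (random % N).

    #include <arpa/inet.h>
    #include <linux/filter.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <sys/socket.h>

    #define NUM_SOCKS 4        /* one socket per receiver thread (example value) */

    static int make_reuseport_socket(uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int one = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        return fd;
    }

    int main(void)
    {
        int fds[NUM_SOCKS];
        for (int i = 0; i < NUM_SOCKS; i++)
            fds[i] = make_reuseport_socket(12345);   /* arbitrary example port */

        /* Classic BPF: A = prandom_u32(); A %= NUM_SOCKS; return A.
         * The returned value is the index (in join order) of the socket
         * in the group that receives the datagram. */
        struct sock_filter code[] = {
            { BPF_LD  | BPF_W   | BPF_ABS, 0, 0, SKF_AD_OFF + SKF_AD_RANDOM },
            { BPF_ALU | BPF_MOD | BPF_K,   0, 0, NUM_SOCKS },
            { BPF_RET | BPF_A,             0, 0, 0 },
        };
        struct sock_fprog prog = { .len = 3, .filter = code };

        /* Attaching the filter to one member applies it to the whole
         * reuseport group. */
        setsockopt(fds[0], SOL_SOCKET, SO_ATTACH_REUSEPORT_CBPF,
                   &prog, sizeof(prog));

        /* ... hand each fds[i] to its own receiver thread ... */
        return 0;
    }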

TCP: Improving reliability with a broken connection

I'm working on an application where I need to ensure that even if the network goes down, messages will still arrive at their destination reliably, in-order, and unmodified. I've been using TCP, and up until now, I was just using a strategy of:
If a send/receive fails, do it again until no error.
If the remote disconnects, wait until the next connection and replace the socket I was send/receiving from with this new one (achieved through some threading and blocking to ensure it's swapped cleanly).
I recently realised that this doesn't work, as send() can't report errors indicating that the remote hasn't received the message (see e.g. here).
I did also learn that TCP connections can survive brief network outages, as the kernel buffers the packets until the connection is declared dead after the timeout period (see here).
The question: is it a feasible strategy to just crank the timeout period way higher on both the client and server side (using setsockopt() and the SO_KEEPALIVE-related options), so that a connection "never times out"? I'd have to handle errors related to the kernel's buffer filling up, but that should be relatively simple.
Are there any other failure cases?
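For concreteness, the per-socket knobs I have in mind are roughly these (a minimal sketch with arbitrary example values; TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT are Linux-specific):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Sketch: enable keepalive and stretch the probing schedule.  Without
     * SO_KEEPALIVE the kernel never times out an idle TCP connection at all;
     * with it, the connection is declared dead (and subsequent calls fail)
     * after roughly idle + interval * count seconds of silence. */
    static void tune_keepalive(int fd)
    {
        int on = 1;
        int idle = 600;       /* seconds of idleness before the first probe */
        int interval = 60;    /* seconds between probes                     */
        int count = 20;       /* unanswered probes before giving up         */

        setsockopt(fd, SOL_SOCKET,  SO_KEEPALIVE,  &on,       sizeof(on));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
    }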
If neither end explicitly disconnects, the TCP connection will stay open forever, even if you unplug the cable; there is no idle timeout in TCP.
However, I would use (or design) an application protocol on top of TCP that makes it possible to resume data transmission after a reconnect. You could use HTTP, for example.
That would be much more stable, because relying on the kernel's buffers would, as you say, eventually exhaust them, and the buffered data would also be lost in, say, a power outage.
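As a rough illustration of such a protocol (the names and layout below are made up, not any real standard): give every message a sequence number, let the receiver report the highest sequence it has durably processed, and after a reconnect resend everything above that number.

    #include <stdint.h>

    /* Hypothetical framing for a resumable stream on top of TCP. */
    struct app_msg_hdr {
        uint64_t seq;        /* sender-assigned, monotonically increasing  */
        uint32_t len;        /* payload length in bytes, follows the header */
    };

    struct app_ack {
        uint64_t acked_seq;  /* highest seq the receiver has persisted; the
                                sender may discard its copies up to here and,
                                after a reconnect, resends from acked_seq + 1 */
    };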

How to temporarily buffer incoming network traffic for latency-sensitive HFT application?

We are running a Java-based trading application, and there are certain periods where we want to prioritize outgoing network traffic as much as possible for about 10 ms. Is there a way to temporarily buffer all incoming network traffic during a short time period, either on the network card or via a process or buffer on our Redhat Linux box?
The rationale behind this is that the incoming network traffic spikes during this same period, and the application processing this traffic is stealing CPU cycles from the process we are trying to prioritize. We do not have fine-grained control over the application treating the incoming network traffic.
We're on a 1 Gbps connection, so a buffer of about 1 MB should be sufficient. We would prefer not to drop the incoming traffic and request retransmission, as that would increase the load on our network during these already busy periods.
This may be possible using QoS on the router, or by using trickle to control your bandwidth with a sample configuration in /etc/trickled.conf; see the example at that URL.
I am not sure whether I understand your problem correctly. Your concern is that sometimes you need to prioritize outgoing network traffic, and during that time the incoming traffic builds up and might eventually cause packet drops or retransmissions, which you don't want. Therefore, you want to buffer your incoming traffic.
If my understanding is correct and you are using TCP, try making your TCP receive buffer bigger.
See http://kaivanov.blogspot.com/2010/09/linux-tcp-tuning.html, and then use netstat to check whether your change is effective.
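For example, the per-socket receive buffer could be enlarged like this (a sketch; the kernel caps the request at net.core.rmem_max, so that sysctl may need raising as well):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Sketch: request a larger receive buffer on an existing socket and
     * read back what the kernel actually granted. */
    static void grow_rcvbuf(int fd, int bytes)
    {
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));

        socklen_t len = sizeof(bytes);
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, &len);
        printf("effective receive buffer: %d bytes\n", bytes);
    }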
Adrian, have you tried setting the priority of your outgoing-communication process higher than that of the process receiving the incoming data? This can be achieved with the nice command. Note that in Unix/Linux, the lower the nice value, the higher the priority.
Otherwise, I am not sure this is possible without a direct tie-in between the two applications that are sending and receiving, allowing you to effectively ignore incoming connections that are ready to read from until any data you have has been sent out.
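If changing how the process is launched isn't an option, the programmatic equivalent of nice/renice is the setpriority() syscall; a rough sketch (the -10 value is arbitrary, and negative nice values normally require root or CAP_SYS_NICE):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>
    #include <sys/types.h>

    /* Sketch: raise the priority (lower the nice value) of the process that
     * handles the outgoing traffic, given its PID on the command line. */
    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;
        pid_t pid = (pid_t)atoi(argv[1]);
        if (setpriority(PRIO_PROCESS, pid, -10) != 0)
            perror("setpriority");
        return 0;
    }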

Mule UDP Inbound Endpoint's socket is blocked

I'll post as much as possible here and add more info in the comments upon request.
I have a Mule flow that takes SNMP packets over UDP (Inbound UDP Endpoint) and passes the message to a transformer that turns the byte array (the packet) into a Trap object. If the trap is of a specific kind, I just log it and ignore it; otherwise, it is logged and inserted into the DB, in addition to some updates (the figure below illustrates my flow).
The application listens on port 17985 for incoming SNMP traps from an agent. If the agent sends a lot of traps, the UDP socket's Recv-Q fills up as in the figure below, and the UDP endpoint stops logging any events (neither traps to the DB nor messages to the log file).
What did I do?
I tried to increase the endpoint buffer size to 10 MB.
I tried to increase the system buffer size to 25 MB, but no matter how big it is, it fills up as soon as the agent starts going crazy.
Additional Info
The agent may sometimes send up to 400-600 traps per second.
A trap packet is at most ~1,500 bytes.
The database is very fast, no more optimization is needed there I think.

Linux TCP weirdly unresponsive when under heavy load

I'm trying to get an HTTP server I'm writing to behave well under heavy load, but I'm getting some weird behavior that I cannot quite understand.
My testing consists of using ab (the Apache benchmark program) over the loopback interface at a concurrency level of 1000 (ab -n 50000 -c 1000 http://localhost:8080/apa), while tracing the server process with strace. strace both slows processing down enough for the problem to be readily reproducible and lets me debug the server internals after completion to some extent. I also capture the network traffic with tcpdump while the test is running.
What happens is that ab stops running a while into the test, complaining that a connection returned ECONNRESET, which I find a bit weird. I could easily buy into a connection timing out since the server might simply not have the bandwidth to process them all, but shouldn't that reasonably return ETIMEDOUT or even ECONNREFUSED if not all connections can be accepted?
I used Wireshark to extract the packets constituting the first connection to return ECONNRESET, and its brief packet list looks like this:
(The entire tcpdump file of this connection is available here.)
As you can see from this dump, the connection is accepted (after a few SYN retransmissions), the request is then retransmitted a few times, and then the server resets the connection. I'm wondering what could cause this to happen. Normally, Linux's TCP implementation ACKs data before the reading process even chooses to receive it, as long as there is space in the TCP window, so why doesn't it do that here? Are there some kind of shared buffers that are running out? Most importantly, why is the kernel suddenly responding with an RST packet instead of simply waiting and letting the client retransmit further?
For the record, the strace of the process indicates that it never even accepts a connection from the port in this connection (port 56946), so this seems to be something Linux does on its own. It is also worth noting that the server works perfectly well as long as ab's concurrency level is low enough (it works perfectly well up to about 100, and then starts failing intermittently somewhere between 100-500), and that its request throughput is rather constant regardless of the concurrency level (it processes somewhere between 6000-7000 requests per second as long as it isn't being straced). I have not found any particular correlation between the frequency of the problem occurring and my backlog setting to listen() (I'm currently using 128, but I've tried up to 1024 without it seeming to make a difference).
In case it matters, I'm running Linux 3.2.0 on this AMD64 box.
The backlog queue filled up: hence the SYN retransmissions.
Then a slot became available: hence the SYN/ACK.
Then the GET was sent, followed by four retransmissions, which I can't account for.
Then the server gave up and reset the connection.
I suspect you have a concurrency or throughput problem in your server which is preventing you from accepting connections rapidly enough. You should have a thread dedicated to doing nothing but calling accept() and then either starting another thread to handle the accepted socket or queueing it as a job for a thread pool. I would then speculate that Linux resets connections that are still in the backlog queue and are receiving I/O retries, but that's only a guess.
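A bare-bones sketch of that structure (error handling mostly omitted; handle_client() stands in for your server's own per-connection logic):

    #include <pthread.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Placeholder for the real per-connection work. */
    static void *handle_client(void *arg)
    {
        int fd = *(int *)arg;
        free(arg);
        /* ... read the request, write the response ... */
        close(fd);
        return NULL;
    }

    /* Dedicated accept loop: do nothing but accept() and hand the new
     * socket off, so the listen backlog drains as fast as possible. */
    static void *accept_loop(void *arg)
    {
        int listen_fd = *(int *)arg;
        for (;;) {
            int client = accept(listen_fd, NULL, NULL);
            if (client < 0)
                continue;                 /* real code would inspect errno */

            int *fdp = malloc(sizeof(*fdp));
            *fdp = client;
            pthread_t tid;
            pthread_create(&tid, NULL, handle_client, fdp);
            pthread_detach(tid);          /* or queue the fd to a thread pool */
        }
        return NULL;
    }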
