High number of TIME_WAITs in HAProxy - Linux

We have HAProxy 1.3.26 hosted on a CentOS 5.9 machine with a 2.13 GHz Intel Xeon processor, acting as an HTTP & TCP load balancer for numerous services and serving a peak throughput of ~2000 requests/second. It has been running fine for two years, but both traffic and the number of services are gradually increasing.
Lately we've observed that even after a reload the old haproxy process remains. On further investigation we found that the old process has numerous connections in TIME_WAIT state. We also saw that netstat and lsof were taking a very long time. Following http://agiletesting.blogspot.in/2013/07/the-mystery-of-stale-haproxy-processes.html we introduced option forceclose, but it interfered with various monitoring services, so we reverted it. Digging further, we realised that according to /proc/net/sockstat close to 200K sockets are in the tw (TIME_WAIT) state, which is surprising since in /etc/haproxy/haproxy.cfg maxconn is set to 31000 and ulimit-n to 64000. We had timeout server and timeout client at 300s and changed them to 30s, but it didn't help much.
Our questions are:
Is such a high number of TIME_WAITs acceptable? If so, at what number should we start worrying? Looking at "What is the cost of many TIME_WAIT on the server side?" and "Setting TIME_WAIT TCP", it seems there shouldn't be any issue.
How can we decrease these TIME_WAITs?
Are there any alternatives to netstat and lsof that perform well even with a very high number of TIME_WAITs?
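(For reference, the 200K figure above comes straight from the kernel's own counters and can be re-checked cheaply at any time, even with hundreds of thousands of sockets; a minimal sketch:)
cat /proc/net/sockstat     # the "tw" value on the "TCP:" line is the current TIME_WAIT count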

Note: The quotes in this answer are all from a mail by Willy Tarreau (the main author of HAProxy) to the HAProxy mailing list.
Connections in TIME_WAIT state are harmless and don't really consume any resources anymore. They are kept by the kernel for some time, for the rare event that a packet still arrives after the connection was closed. The default time a closed connection is held in that state is typically 120 seconds (twice the maximum segment lifetime).
TIME_WAIT are harmless on the server side. You can easily reach millions without any issues.
If you still want to reduce that number so that connections are released earlier, you can instruct the kernel to do so. To set it to 30 seconds, for example, run:
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
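If you want that change to survive a reboot, the same knob can also be set through sysctl; a minimal sketch, using the same example value of 30:
sysctl -w net.ipv4.tcp_fin_timeout=30                      # apply immediately (equivalent to the echo above)
echo "net.ipv4.tcp_fin_timeout = 30" >> /etc/sysctl.conf   # persist the setting
sysctl -p                                                  # reload /etc/sysctl.conf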
If you have many connections (whether in TIME_WAIT or not), netstat, lsof and ipcs perform very poorly and actually slow the whole system down. To quote Willy again:
There are two commands that you must absolutely never use in a monitoring system:
netstat -a
ipcs -a
Both of them will saturate the system and considerably slow it down when something starts to go wrong. For the sockets you should use what's in /proc/net/sockstat. You have all the numbers you want. If you need more details, use ss -a instead of netstat -a, it uses the netlink interface and is several orders of magnitude faster.
On Debian and Ubuntu systems, ss is available in the iproute or iproute2 package (depending on the version of your distribution).
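As an illustration of the difference, the counters can be read without enumerating every socket; a quick sketch:
cat /proc/net/sockstat               # raw kernel counters; the "tw" field is the TIME_WAIT count
ss -s                                # summary statistics via netlink
ss -tan state time-wait | wc -l      # list/count only TCP sockets in TIME_WAIT (output includes a header line)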

Related

Eliminating latency spikes in TCP connection

I have a Linux network application that I am trying to optimize for low latency. This application consumes UDP and produces TCP traffic. I have set up a network tap and written some scripts that correlate the UDP traffic with the application's TCP response to compute end-to-end latency. I have also set up tracing within the application so I can measure internal latency. I have found that typical end-to-end latency as measured by the capture device is about 20 µs, but in about 5% of cases the latency can spike to 2000 µs or even more. Correlating the internal logs with the capture device logs indicates this spike originates in the kernel's TCP transmission.
Any suggestions on how I could get a better understanding of what is going on and hopefully fix it? I am running on a machine with 4 hardware cores, with three of the cores dedicated to the application and the remaining one left for the OS.
Update: Further investigation of the PCAP file shows that TCP messages that exhibit high latency are always immediately preceded by an ACK from the system that is the target of the TCP data (i.e. the system to which the machine under test is sending its TCP data). This leads me to believe that the system under test is trying to keep the data in flight under some minimum and that is why it deliberately delays its responses. I have not been able to tune this behavior out, though.
Thanks in advance
I'm pretty sure it's too late for you, but maybe it'll help someone in the future. I'm almost sure that you haven't turned off the Nagle algorithm by setting the TCP_NODELAY socket option.

Need to increase the number of concurrent HTTP connections to 85000

I have a setup with 2 machines. I am using one as the server and the other as the client. They are connected directly over a 1 Gbit/s link. Both machines have 4 cores, 8 GB of RAM and almost 100 GB of disk space. I need to tune the Nginx server (it's the one I'm trying with, but I can use any other as well) to handle 85000 concurrent connections. I have a 1 kB file on the server and I am using curl on the client to fetch that file over all the connections.
After trying various tuning settings, I have 1500 established connections and around 30000 TIME_WAIT connections when I call curl around 40000 times. Is there a way I can turn those TIME_WAITs into ESTABLISHED connections?
Any help in tuning both the server and the client will be much appreciated. I am pretty new to Linux and trying to get the hang of it. The version of Linux on both machines is Fedora 20.
Besides tuning Nginx, you will also need to tune your Linux installation with respect to limits on the number of TCP connections, sockets, open files, etc.
These two links should give you a great overview:
https://www.nginx.com/blog/tuning-nginx/
https://mrotaru.wordpress.com/2013/10/10/scaling-to-12-million-concurrent-connections-how-migratorydata-did-it/
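As a rough sketch of the kind of kernel-side limits involved (the values below are illustrative starting points, not a tested configuration for 85000 connections):
# /etc/sysctl.conf
fs.file-max = 200000                          # system-wide limit on open file descriptors
net.core.somaxconn = 65535                    # ceiling for listen() backlogs
net.ipv4.ip_local_port_range = 1024 65000     # ephemeral ports available on the client side
net.ipv4.tcp_tw_reuse = 1                     # allow reusing TIME_WAIT sockets for new outgoing connections
# and raise the per-process descriptor limit for nginx and the curl client
ulimit -n 100000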
You might want to check how much memory TCP buffers and related structures are using for all those connections.
See this SO thread: How much memory is consumed by the Linux kernel per TCP/IP network connection?
Also, this page is good: http://www.psc.edu/index.php/networking/641-tcp-tune
Given that your two machines are on the same physical network and delays are very low, you can use fairly small TCP window buffer sizes. Modern Linux kernels (you didn't mention which kernel you're using) have TCP autotuning that adjusts these buffers automatically, so you should not have to worry about this unless you're using an old kernel.
However, applications can allocate send and receive buffers themselves, which disables TCP autotuning, so if you're running an application that does this, you might want to limit how much buffer space an application can request per connection (the net.core.wmem_max and net.core.rmem_max variables mentioned in the SO article).
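To see what those buffers actually cost at runtime, the kernel exposes its TCP memory accounting directly; a quick sketch:
cat /proc/net/sockstat            # the "mem" value on the "TCP:" line is measured in pages (usually 4 kB)
cat /proc/sys/net/ipv4/tcp_mem    # low / pressure / high thresholds for total TCP memory, also in pages
cat /proc/sys/net/ipv4/tcp_rmem   # per-socket receive buffer min / default / max, in bytes
cat /proc/sys/net/ipv4/tcp_wmem   # per-socket send buffer min / default / max, in bytes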
I would recommend https://github.com/eunyoung14/mtcp to achieve 1 million concurrent connections. I did some tuning of mTCP and tested it on a used Dell PowerEdge R210 with 32 GB of RAM and 8 cores to achieve 1 million concurrent connections.

TCP IP Performance on the same machine

When I run a TCP server and client on the same machine, I observe that the client send time (that is: timestamp T1, send(), timestamp T2; the send time is T2 - T1) is significantly higher in the tail percentiles than when I run the same server on a different machine.
With all TCP parameters, software and machine specs being equal: if the client takes 10 microseconds in the mean and 20-25 microseconds in the 90th-99th percentiles for 1 million sends with the server and client on different boxes, it takes 10 microseconds in the mean and 70-100 microseconds in the 90th-99th percentiles with the server and client on the same box.
I have tried playing with interrupt isolation, socket send buffer sizing and CPU pinning, with no significant improvements. This is RHEL 5.6.
Any possible explanation for this ?
Heisenberg's uncertainty principle, in a broad sense. More specifically: if you have two programs on a computer where one is sending data and the other is analyzing it, then you're taxing the CPU with two tasks, whereas if your monitoring program is running on a different computer, your sender has the benefit of not having to compete with anyone else and will always be faster.
Don't test network throughput with both programs on the same machine.

How to close 'CLOSING' sockets on Linux from the command line?

On my Linux server there are many TCP connections in the CLOSING state; netstat shows lines like:
tcp 1 61 local-ip:3306 remote-ip:11654 CLOSING -
How can I close these connections? The local process has been restarted, so I can't find the PIDs.
I have also restarted the network, but that didn't help.
I can't restart the server, because there are services running.
Thank you very much!!
Relax, it won't cause any problems (unless there is a huge number of these, in which case it could be a denial-of-service attack).
This will help you understand what's going on: TCP state transition diagram
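If you want to keep an eye on those sockets while they drain, ss can filter on the state directly, which stays fast even on a busy box; a small sketch:
ss -tan state closing            # list only TCP sockets currently in CLOSING
ss -tan state closing | wc -l    # count them (output includes a header line)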

How many socket connections possible?

Does anyone have an idea how many TCP socket connections are possible on a modern standard Linux server?
(In general there is little traffic on each connection, but all the connections have to stay up all the time.)
I achieved 1600k concurrent idle socket connections, and at the same time 57k requests/s, on a Linux desktop (16 GB RAM, i7-2600 CPU). It's a single-threaded HTTP server written in C with epoll. The source code is on GitHub, with a blog post here.
Edit:
I did 600k concurrent HTTP connections (client & server both on the same computer) with Java/Clojure. Detailed info post, HN discussion: http://news.ycombinator.com/item?id=5127251
The cost of a connection (with epoll):
the application needs some RAM per connection
TCP buffers: 2 × 4 kB, i.e. ~8-10 kB, or more
epoll needs some memory per file descriptor; from epoll(7):
Each registered file descriptor costs roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel.
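Putting those per-connection figures together gives a back-of-envelope estimate of the kernel-side memory bill at one million idle connections (rough arithmetic only; the application's own per-connection state comes on top):
echo $(( 1000000 * 10 / 1024 / 1024 ))    # ~9 GB of TCP buffers at ~10 kB per connection (result in GB, rounded down)
echo $(( 1000000 * 160 / 1024 / 1024 ))   # ~152 MB of epoll bookkeeping at ~160 bytes per fd on a 64-bit kernel (result in MB)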
This depends not only on the operating system in question, but also on configuration, potentially real-time configuration.
For Linux:
cat /proc/sys/fs/file-max
will show the current maximum number of file descriptors allowed to be open simultaneously, system-wide. Check out http://www.cs.uwaterloo.ca/~brecht/servers/openfiles.html
A limit on the number of open sockets is configurable in the /proc file system
cat /proc/sys/fs/file-max
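Reading the limit is only half of it; to actually run a server past the defaults you generally raise both the system-wide and the per-process limits. A minimal sketch with illustrative values:
sysctl -w fs.file-max=1000000    # system-wide ceiling on open file descriptors
ulimit -n 1000000                # per-process limit for the current shell (for a persistent change see /etc/security/limits.conf)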
The maximum number of incoming connections in the OS is defined by integer limits.
Linux itself allows billions of open sockets.
To use the sockets you need an application listening, e.g. a web server, and that will use a certain amount of RAM per socket.
RAM and CPU will introduce the real limits. (As of 2017: think millions, not billions.)
1 million is possible, but not easy. Expect to spend gigabytes of RAM managing 1 million sockets.
Outgoing TCP connections are limited by port numbers: roughly 65000 per source IP. You can have multiple IP addresses, but not an unlimited number of them.
This is a limit in TCP, not in Linux.
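The usable range on the client side is visible (and adjustable, within the 16-bit port space) via the ephemeral port range; a quick sketch:
cat /proc/sys/net/ipv4/ip_local_port_range             # ports the kernel hands out for outgoing connections
sysctl -w net.ipv4.ip_local_port_range="1024 65000"    # widen the range, still capped by 16-bit port numbers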
10,000? 70,000? Is that all? :)
FreeBSD is probably the server you want. Here's a little blog post about tuning it to handle 100,000 connections; it has had some interesting features, like zero-copy sockets, for some time now, along with kqueue to act as a completion-port mechanism.
Solaris could handle 100,000 connections back in the last century! They say Linux would be better.
The best description I've come across is this presentation/paper on writing a scalable webserver. He's not afraid to say it like it is :)
Same for software: the cretins on the application layer forced great innovations on the OS layer. Because Lotus Notes keeps one TCP connection per client open, IBM contributed major optimizations for the ”one process, 100.000 open connections” case to Linux. And the O(1) scheduler was originally created to score well on some irrelevant Java benchmark. The bottom line is that this bloat benefits all of us.
On Linux you should be looking at using epoll for async I/O. It might also be worth fine-tuning socket buffers so as not to waste too much kernel space per connection.
I would guess that you should be able to reach 100k connections on a reasonable machine.
It depends on the application. If there are only a few packets from each client, 100K is very easy for Linux. An engineer on my team ran a test years ago, and the result showed that with no packets from the client after the connection was established, Linux epoll could watch 400k fds for readability at a CPU usage level under 50%.
Which operating system?
For Windows machines, if you're writing a server to scale well, and therefore using I/O Completion Ports and async I/O, then the main limitation is the amount of non-paged pool that you're using for each active connection. This translates directly into a limit based on the amount of memory your machine has installed (non-paged pool is a finite, fixed-size amount that is based on the total memory installed).
For connections that don't see much traffic, you can make them more efficient by posting 'zero byte reads', which don't use non-paged pool and don't affect the locked pages limit (another potentially limited resource that may prevent you from having lots of socket connections open).
Apart from that, well, you will need to profile, but I've managed to get more than 70,000 concurrent connections on a modestly specified (760 MB memory) server; see here http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html for more details.
Obviously if you're using a less efficient architecture such as 'thread per connection' or 'select' then you should expect to achieve less impressive figures; but, IMHO, there's simply no reason to choose such architectures for Windows socket servers.
Edit: see here http://blogs.technet.com/markrussinovich/archive/2009/03/26/3211216.aspx; the way that the amount of non-paged pool is calculated has changed in Vista and Server 2008, and there's now much more available.
Realistically for an application, more than 4000-5000 open sockets on a single machine becomes impractical. Just checking for activity on all the sockets and managing them starts to become a performance issue, especially in real-time environments.
