How to close sockets stuck in the CLOSING state on Linux from the command line? - linux

On my Linux server there are a lot of TCP connections stuck in the CLOSING state; the netstat output looks like:
tcp 1 61 local-ip:3306 remote-ip:11654 CLOSING -
How can I close these connections? The local process has been restarted, so I can't find the PIDs.
I have also restarted the network service, but it didn't help.
I can't restart the server, because there are services running.
Thank you very much!!

Relax, it won't cause any problems (unless there is a huge number of them, in which case it may be a denial-of-service attack).
This will help you understand what's going on: TCP State transition diagram
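If you nevertheless want to inspect these sockets, or forcibly destroy them without restarting anything, a minimal sketch (assuming a reasonably recent iproute2 and a kernel built with CONFIG_INET_DIAG_DESTROY for the kill step) could look like this:
# list the sockets stuck in CLOSING (ss talks netlink, so it is fast)
ss -tn state closing
# forcibly destroy the ones on local port 3306; needs iproute2 >= 4.5
# and CONFIG_INET_DIAG_DESTROY enabled in the kernel
ss -K state closing sport = :3306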

Related

Will a port down event break TCP or other connections in the Linux TCP/IP stack?

I'm working on a cloud platform. A customer needs our virtual switch to keep their VMs' TCP-based connections alive while the virtual switch is upgraded (the old virtual switch is stopped first, then the new one is started). We use OVS, so in this case the VM's port will go link down and come back up within about one second. The customer's OS is Linux.
So I'm wondering whether this link down/up will break the TCP connections in the VM. I have no idea which part of the TCP/IP stack code I should read, as my boss tells me I have to show him code rather than some blog post.
TCP/IP is expressly designed to survive such outages. If your boss insists on seeing code, you will have to show him the entire Linux IP stack, and much good may it do him. More probably you should just refer him to RFC 791 and 793.
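If you need something concrete to point at: an established connection simply retransmits across a short outage, and the retry budget that eventually gives up is the tcp_retries2 sysctl, enforced in net/ipv4/tcp_timer.c (tcp_write_timeout / tcp_retransmit_timer). A one-second link flap is far inside that budget.
# how many times the kernel retransmits on an established connection before
# giving up; the default of 15 corresponds to roughly 15-30 minutes of backoff
sysctl net.ipv4.tcp_retries2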

Eliminating latency spikes in a TCP connection

I have a Linux network application that I am trying to optimize for low latency. The application consumes UDP and produces TCP traffic. I have set up a network tap and written some scripts that correlate the UDP traffic with the application's TCP response to compute end-to-end latency. I have also set up tracing within the application so I can measure internal latency. I have found that the typical end-to-end latency as measured by the capture device is about 20us, but in about 5% of cases the latency spikes to 2000us or even more. Correlating the internal logs with the capture device logs indicates that the spike originates in the kernel's TCP transmission path.
Any suggestions on how I could get a better understanding of what is going on and hopefully fix it? I am running on a machine with 4 hardware cores, three of which are dedicated to the application and the remaining one left for the OS.
Update: Further investigation of the PCAP file shows that the TCP messages exhibiting high latency are always immediately preceded by an ACK from the system that is the target of the TCP data (i.e. the system to which the machine under test is sending its TCP data). This leads me to believe that the system under test is trying to keep the amount of data in flight under some minimum, and that is why it deliberately delays its responses. I have not been able to tune this behavior out, though.
Thanks in advance
I'm pretty sure it's too late for you, but maybe it'll help someone in the future. I'm almost sure that you haven't turned off the Nagle algorithm by setting the TCP_NODELAY socket option.
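If you want to confirm that before touching the code, one quick check (myapp below is just a placeholder for the application binary) is to watch whether the option is ever set:
# does the application ever disable Nagle on its sockets?
strace -f -e trace=setsockopt ./myapp 2>&1 | grep TCP_NODELAY
If nothing shows up, small writes are held back while earlier data is still unacknowledged, which matches the spikes appearing right after an ACK from the target system.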

UPnP port forwarding that persists across reboots

I'm trying to set up port forwardings for several ports used for communication, but it seems they are lost on reboot.
I'm using a script to make them, and it uses the following syntax:
upnpc -a 192.168.1.95 22 22 TCP
...
Since my system is designed to deliberately stress the gateway into rebooting, I need these ports to stay open after a reboot. I could handle it in the software (re-running the script if the connection is lost), but I don't want to do that unless it is absolutely necessary.
Do you have any idea how to create a port forwarding with UPnP such that the forwarding persists after a reboot?
Port mappings are specifically not required to be persistent between gateway reboots; clients are supposed to keep an eye on the mappings and re-map them when needed. The WANIPConnection spec v2 also does not even allow indefinite mappings: another reason to keep the client running as long as you need the mapping to exist.
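If you do end up refreshing from the client side, a rough sketch is to request a finite lease and renew it periodically; newer builds of upnpc accept an optional lease duration (in seconds) after the protocol argument, and the address and ports below are just the ones from the question:
while true; do
    upnpc -a 192.168.1.95 22 22 TCP 3600   # re-request a one-hour lease
    sleep 1800                             # renew at half the lease time
done
That way a gateway reboot leaves the mapping missing only until the next renewal.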

High number of TIME_WAITs in HAProxy

We have HAProxy 1.3.26 hosted on a CentOS 5.9 machine with a 2.13 GHz Intel Xeon processor, acting as an HTTP and TCP load balancer for numerous services and serving a peak throughput of ~2000 requests/second. It has been running fine for 2 years, but both the traffic and the number of services are gradually increasing.
Of late we've observed that even after a reload the old HAProxy process remains. On further investigation we found that the old process has numerous connections in the TIME_WAIT state. We also saw that netstat and lsof were taking a very long time. Following http://agiletesting.blogspot.in/2013/07/the-mystery-of-stale-haproxy-processes.html we introduced option forceclose, but it interfered with various monitoring services, so we reverted it. Digging further, we realised that /proc/net/sockstat shows close to 200K sockets in the tw (TIME_WAIT) state, which is surprising since in /etc/haproxy/haproxy.cfg maxconn is specified as 31000 and ulimit-n as 64000. We had timeout server and timeout client set to 300s, which we changed to 30s, but it didn't help much.
Now the questions are:
Is such a high number of TIME_WAITs acceptable? If yes, what is the number beyond which we should be worried? Looking at "What is the cost of many TIME_WAIT on the server side?" and "Setting TIME_WAIT TCP", it seems there shouldn't be any issue.
How can we decrease these TIME_WAITs?
Are there any alternatives to netstat and lsof that perform well even with a very high number of TIME_WAITs?
Note: The quotes in this answer are all from a mail by Willy Tarreau (the main author of HAProxy) to the HAProxy mailing list.
Connections in the TIME_WAIT state are harmless and don't really consume any resources anymore. They are kept by the kernel for some time to handle the rare event that a packet still arrives after the connection was closed. The classic duration is two times the maximum segment lifetime (2*MSL); on Linux the TIME_WAIT period is fixed at 60 seconds.
TIME_WAIT are harmless on the server side. You can easily reach millions
without any issues.
If you still want to release connection state earlier, you can tune the kernel. For example, to shorten the timeout to 30 seconds, run:
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
(Strictly speaking, tcp_fin_timeout controls how long orphaned connections may linger in FIN_WAIT_2, not the TIME_WAIT timer itself, which is hard-coded in the kernel; it still frees half-closed connections sooner.)
If you have many connections (whether in TIME_WAIT or not), netstat, lsof and ipcs perform very poorly and actually slow the whole system down. To quote Willy again:
There are two commands that you must absolutely never use in a monitoring
system :
netstat -a
ipcs -a
Both of them will saturate the system and considerably slow it down when
something starts to go wrong. For the sockets you should use what's in
/proc/net/sockstat. You have all the numbers you want. If you need more
details, use ss -a instead of netstat -a, it uses the netlink interface
and is several orders of magnitude faster.
On Debian and Ubuntu systems, ss is available in the iproute or iproute2 package (depending on the version of your distribution).
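For the third question, two cheap ways to get the TIME_WAIT count without walking every socket the way netstat and lsof do:
cat /proc/net/sockstat               # the "tw" field is the TIME_WAIT count
ss -tn state time-wait | wc -l       # per-socket listing over netlink (count includes one header line)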

Close TCP sessions on a schedule, optionally monitoring idle state

Good day.
I need an application or script on a Linux firewall that will monitor idle TCP connections to one site and close them after some minutes. So far I have found cutter and Killcx, but these are command-line tools with no idle detection.
Do you know of any app, or maybe a firewall implementation for Linux, that has everything I want in one package?
Thanks.
Perhaps you can hack something together with the "conntrack" iptables module, especially the --ctexpire match. See man iptables.
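A very rough sketch of that idea, swapping in the conntrack idle timeout instead of the --ctexpire match (SITE_IP is a placeholder for the site's address, the timeout is global rather than per-site, and this assumes the firewall actually forwards the traffic with connection tracking loaded):
# forget established TCP flows after 10 idle minutes
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=600
# don't silently re-adopt mid-stream packets once the entry is gone
sysctl -w net.netfilter.nf_conntrack_tcp_loose=0
# reset whatever no longer belongs to a tracked connection
iptables -A FORWARD -d SITE_IP -p tcp -m conntrack --ctstate INVALID -j REJECT --reject-with tcp-reset
Note that a completely silent connection is only actually torn down once one of the endpoints sends something again.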
