Best practice for closing sockets. SO_LINGER or shutdown/close? - linux

There is a lot of mixed information out there.
I just want to ensure that the data is fully sent. Should I be doing shutdown/close, or close with SO_LINGER and a timeout?
I'm using non-blocking sockets with epoll under Linux, and the same code (with defines) uses kqueue under OSX. It would seem SO_LINGER doesn't always work the same on all platforms?
Also, when using SO_LINGER on a non-blocking socket: if I get back EWOULDBLOCK, do I need to call close() again until I don't get EWOULDBLOCK, or can I just ignore the EWOULDBLOCK error from close() in that case?

I just want to ensure that the data is fully sent. Should I be doing shutdown/close
Yes, see below.
or close with SO_LINGER and a timeout?
I don't find this at all useful, see below.
I'm using non-blocking sockets with epoll under Linux, and the same code (with defines) uses kqueue under OSX. It would seem SO_LINGER doesn't always work the same on all platforms?
It should, but who knows? There is a large paper on TCP implementation differences, and as far as I remember SO_LINGER isn't mentioned in it. But I'll check later when I can.
Also, when using SO_LINGER on a non-blocking socket: if I get back EWOULDBLOCK, do I need to call close() again until I don't get EWOULDBLOCK, or can I just ignore the EWOULDBLOCK error from close() in that case?
Theoretically you should call close() again in obedience to EAGAIN, but I wonder whether it would actually work. It would make more sense to put the socket into blocking mode for the close, as there is nothing you can select on that will tell you when all the data has been written. But you still have the problem that if the timeout occurs, the socket is closed, the data may be lost, and there is nothing you can do except log the possibility of a problem.
However, back to your problem. The way to ensure all data has been written and received before closing is to shut down the socket for writing at both ends, then issue a read at both ends, which should return end of stream (recv() returns zero), and then close the socket. That way you know both peers got to the close at the same time, having already read all the data. If the recv() returns data, you have a bug in your application protocol. This should all be carried out in blocking mode, in my opinion. A sketch of the sequence follows below.
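A minimal sketch of that sequence from the closing side, assuming a connected TCP socket fd in blocking mode (the function name is illustrative):

#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int orderly_close(int fd)
{
    char buf[4096];
    ssize_t n;

    if (shutdown(fd, SHUT_WR) < 0)   /* our FIN: no more writes from us */
        return -1;

    /* Drain until end of stream: recv() == 0 is the peer's shutdown. */
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        fprintf(stderr, "protocol error: %zd unexpected bytes\n", n);

    return close(fd);   /* n == 0: clean EOF; n < 0: error worth logging */
}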

Related

What really is the "linger time" that can be set with SO_LINGER on sockets?

The man page explains little to nothing about that option, and while there is a ton of information available on the web and in answers on Stack Overflow, I discovered that much of the information provided there even contradicts itself. So what is that setting really good for, and why would I need to set or alter it?
When a TCP socket is disconnected, there are three things the system has to consider:
1. There might still be unsent data in the send buffer of that socket, which would get lost if the socket were closed immediately.
2. There might still be data in flight, that is, data that has already been sent out to the other side but that the other side has not yet acknowledged receiving correctly, so it may have to be resent or is otherwise lost.
3. Closing a TCP socket is itself a handshake (FIN, FIN+ACK, ACK) with no confirmation of the third packet. As the sender doesn't know whether the third packet ever arrived, it has to wait some time and see if the second one gets resent; if it does, the third one was lost and must be resent.
When you close a socket using the close() call, the system will usually not immediately destroy the socket but will first try to resolve all three issues above to prevent data loss and ensure a clean disconnect. All of that happens in the background (usually within the operating system kernel), so although the close() call returns immediately, the socket may still be alive for a while and even send out remaining data. There is a system-specific upper bound on how long the system will try to achieve a clean disconnect before it eventually gives up and destroys the socket anyway, even if that means data is lost. Note that this time limit can be in the range of minutes!
There is a socket option named SO_LINGER that controls how the system closes a socket. You can turn lingering on or off using that option and, if it is turned on, set a timeout (you can also set a timeout when it is turned off, but that timeout has no effect).
The default is that lingering is turned off, which means close() returns immediately and the details of the socket closing process are left up to the system, which will usually deal with it as described above.
If you turn lingering on and set a timeout other than zero, close() will not return immediately. It will only return once issues (1) and (2) have been resolved (all data has been sent, no data is in flight anymore) or once the timeout has been hit. Which of the two happened can be seen from the result of the close() call: if it is success, all remaining data was sent and acknowledged; if it is failure and errno is set to EWOULDBLOCK, the timeout was hit and some data might have been lost.
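As an illustrative sketch of that configuration (the function name and the five-second timeout are arbitrary examples, not recommendations):

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int close_lingering(int fd)
{
    /* Illustrative 5-second linger timeout. */
    struct linger lg = { .l_onoff = 1, .l_linger = 5 };

    if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg) < 0)
        return -1;

    /* Blocks until all data is sent and acknowledged, or the timeout hits. */
    if (close(fd) < 0) {
        if (errno == EWOULDBLOCK)
            fprintf(stderr, "linger timeout hit; data may have been lost\n");
        return -1;
    }
    return 0;   /* success: remaining data was sent and acknowledged */
}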
In case of a non-blocking socket, close() will not block, not even with a linger time other than zero. In that case there is no way to get the result of the close operation, as you cannot ever call close() twice on the same socket: even if the socket is lingering, once close() has returned, the socket file descriptor should have been invalidated, and calling close() again with that descriptor should result in a failure with errno set to EBADF ("bad file descriptor").
However, even if you set the linger time to something really short, like one second, so that the socket won't linger for longer than one second, it will still stay around for a while after lingering to deal with issue (3) above. To ensure a clean disconnect, the implementation must ensure that the other side has also disconnected the connection; otherwise remaining data may still arrive for the already dead connection. So the socket will go into a state most systems call TIME_WAIT and stay in that state for a system-specific amount of time, regardless of whether lingering is on and regardless of what linger time has been set.
Except for one special case: If you enable lingering but set the linger time to zero, this changes pretty much everything. In that case a call to close() will really close the socket immediately. That means no matter if the socket is blocking or non-blocking, close() returns at once. Any data still in the send buffer is just discarded. Any data in flight is ignored and may or may not have arrived correctly at the other side. And the socket is also not closed using a normal TCP close handshake (FIN-ACK), it is killed instantly using a reset (RST). As a result, if the other side tries to send something over the socket after the reset, this operation will fail with ECONNRESET ("A connection was forcibly closed by the peer."), whereas a normal close would result in EPIPE ("The socket is no longer connected."). While most programs will treat EPIPE as a harmless event, they tend to treat ECONNRESET as a hard error if they didn't expect that to happen.
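The abortive variant differs only in the linger time; again a hedged sketch with an illustrative function name:

#include <sys/socket.h>
#include <unistd.h>

void abortive_close(int fd)
{
    /* Linger on with a zero timeout: close() sends RST instead of FIN. */
    struct linger lg = { .l_onoff = 1, .l_linger = 0 };
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
    close(fd);   /* returns at once; buffered and in-flight data are discarded */
}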
Please note that this describes the socket behavior as found in the original BSD socket implementation (original means that this may not even match the behavior of modern BSD implementations such as FreeBSD, OpenBSD, NetBSD, etc.). While the BSD socket API has been copied by pretty much all other major operating systems today (Linux, Android, Windows, macOS, iOS, etc.), the behavior on these systems sometimes varies, as is also true with many other aspects of that API.
E.g., if a non-blocking socket with data in the send buffer is closed on BSD, with linger on and a non-zero linger time, the close call will return at once but will indicate a failure, with the error being EWOULDBLOCK (just as for a blocking socket after the linger timeout has been hit). The same holds true for Windows. On macOS this is not the case: close() will always return at once and indicate success, regardless of whether there is data in the send buffer or not. And in the case of Linux, the close() call will actually block up to the linger timeout, despite the socket being non-blocking.
To learn more about how different systems actually deal with different linger settings, have a look at the following two links (please note that the TLS certificates have expired and the site is only available via HTTPS; sorry about that, but it's the best information currently available):
For blocking sockets:
https://www.nybek.com/blog/2015/03/05/cross-platform-testing-of-so_linger/
For non-blocking sockets:
https://www.nybek.com/blog/2015/04/29/so_linger-on-non-blocking-sockets/
As you can see, the behavior may also change depending on whether shutdown() has been called prior to close(), and on other system-specific aspects, including oddities like a linger timeout having an effect even though lingering is turned off completely.
Another system-specific behavior is what happens if your process dies without closing a socket first. In that case the system will close the socket on your behalf, and some systems tend to ignore any linger setting when they have to do so and just fall back to the system's default behavior. They cannot "block" on the socket close in that case anyway, but some systems will even ignore a timeout of zero and perform a FIN-ACK handshake in that case.
So it is not true that setting a linger timeout of zero will prevent sockets from ever entering the TIME_WAIT state. It depends on how the socket was closed (shutdown() or close()), by whom it was closed (your own code or the system), whether it was blocking or non-blocking, and ultimately on the system your code is running on. The only true statement that can be made is:
If you manually close a socket that is blocking (at least at the moment you close it; it might have been non-blocking before) and this socket has lingering enabled with a timeout of zero, this is your best chance of keeping that socket out of the TIME_WAIT state. There is no guarantee, but if that doesn't prevent it, nothing else will, unless you have a way to ensure that the peer on the other side initiates the close for you, as only the side initiating the close operation can end up in the TIME_WAIT state.
So my personal pro tip is: if you design a server-client protocol, design it in such a way that the client normally closes the connection first, because it is very undesirable for server sockets to end up in the TIME_WAIT state, but it is even more undesirable for connections to be closed by RST, as that can lose data previously sent to the client.

Is there a function for determining how many bytes are left to read on a unix domain socket?

The aim is to interact with an OpenEthereum server using JSON-RPC.
The problem is that once connected, I need to react only when receiving data, as the aim is to subscribe to an event, so I need the recv() function to be blocking.
But in that case, if I ask to read more from the buffer than what the server sent, the call will block.
The OpenEthereum server separates its requests with a linefeed \n character, but I don't know how this can help.
I know about simply waiting for recv() to time out. But I'm using C++ and IPC to get better latency than my competitors on arbitrage. This also means I need as few context switches as possible.
How to efficiently read a message whose length cannot be determined in advance?
Is there a function for determining how many bytes are left to read on a unix domain socket?
No - just keep doing non-blocking reads until one returns EAGAIN or EWOULDBLOCK.
There may be a platform-specific ioctl or fcntl - but you haven't named a platform, and it's neither portable nor necessary.
How to efficiently read a message whose length cannot be determined in advance?
Just do a non-blocking read into a buffer large enough to contain the largest message you might receive.
I need to react only when receiving data, as the aim is to subscribe to an event, so I need the recv() function to be blocking
You're confusing two things.
How to be notified when the socket becomes readable:
by using select or poll to wait until the socket is readable. Just read their manuals, that's their most common use case.
How to read everything available to read without blocking indefinitely:
by doing non-blocking reads until EWOULDBLOCK or EAGAIN is returned.
There is logically a third step, for stream-based protocols like this, which is correctly managing buffers in case of partial messages. Oh, and actually parsing the messages, but I assume you have a JSON library already.
This is entirely normal, basic UNIX I/O design. It is not an exotic optimization.
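Putting those pieces together, a minimal sketch might look like the following, assuming fd is a connected non-blocking AF_UNIX stream socket and that no single message exceeds the buffer size (names are illustrative):

#include <errno.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void read_messages(int fd)
{
    char buf[65536];   /* assumed large enough for the largest message */
    size_t used = 0;   /* bytes held over from a partial message */

    for (;;) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        if (poll(&pfd, 1, -1) < 0)   /* sleep until the socket is readable */
            return;

        for (;;) {   /* drain everything available without blocking */
            ssize_t n = recv(fd, buf + used, sizeof buf - used, 0);
            if (n > 0) {
                used += (size_t)n;
                char *nl;
                /* Split on '\n': each complete line is one message. */
                while ((nl = memchr(buf, '\n', used)) != NULL) {
                    *nl = '\0';
                    printf("message: %s\n", buf);  /* hand off to the JSON parser */
                    used -= (size_t)(nl - buf) + 1;
                    memmove(buf, nl + 1, used);
                }
            } else if (n == 0) {
                return;   /* peer performed an orderly shutdown */
            } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
                break;    /* nothing more to read right now; back to poll() */
            } else {
                return;   /* real error */
            }
        }
    }
}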

Closing websocket immediately after sending message

I'm using https://github.com/websockets/ws
I've a silly question.
Can I do something like this to free resources quickly:
conn.send("otpreceived");
conn.close();
At the moment I'm doing it like this:
conn.send("otpreceived");
setTimeout(function () {
    conn.close();
}, 2000);
I'm not sure if I can close the websocket immediately after sending the message.
Short Answer: Yes you can do this.
Long Answer:
I'm going to assume you want to close it immediately for performance gains, which are likely negligible. In socket programming, you want to open the socket and keep it open until you are finished using it.
As you may already know, performing a close() on the connection will require you to open() that connection again if you ever want to send more messages or receive a response from the other end. Now, to answer your question: if you just want to send "otpreceived" without sending anything else or receiving, then yes, close it. Otherwise just leave it open until the end of the program.
Now, I've only dealt with socket programming in C, but I will say the following...
Does it save resources? Yes. Opening and closing sockets goes through the kernel, so it touches low-level machinery that is abstracted away for you. Whether you notice a difference in performance depends on many things, like what your program is doing and how many concurrent connections exist at a given time. So, with that said, it's usually best to keep your connection open until the very end.

TCP close() vs shutdown() in Linux OS

I know there are already a lot of similar questions on Stack Overflow, but nothing seems convincing. Basically I'm trying to understand under what circumstances I need to use one over the other, or use both.
I would also like to understand whether close() and shutdown() with SHUT_RDWR are the same.
Closing TCP connections has gathered so much confusion that we can rightfully say either this aspect of TCP has been poorly designed or the documentation is lacking.
Short answer
To do it the proper way, you should use all three: shutdown(SHUT_WR), shutdown(SHUT_RD), and close(), in this order. No, shutdown(SHUT_RDWR) and close() are not the same. Read their documentation carefully, along with questions on SO and articles about it; you need to read more than one of them to get an overview.
Longer answer
The first thing to clarify is what you aim for when closing a connection. Presumably you use TCP for a higher-level protocol (request-response, a steady stream of data, etc.). Once you decide to "close" (terminate) the connection, you have already sent and received everything you had to send and receive (otherwise you would not decide to terminate), so what more do you want? I'm trying to outline what you may want at the time of termination:
1. to know that all data sent in either direction reached the peer
2. if there are any errors (in transmitting the data that was in the process of being sent when you decided to terminate, as well as after that, and in the termination itself, which also requires data being sent/received), the application is informed
3. optionally, some applications want to be non-blocking up to and including the termination
Unfortunately TCP doesn't make these features easily available, and the user needs to understand what's under the hood and how the system calls interact with what's under the hood. A key sentence is in the recv manpage:
When a stream socket peer has performed an orderly shutdown, the
return value will be 0 (the traditional "end-of-file" return).
What the manpage means here is that an orderly shutdown is performed by one end (A) choosing to call shutdown(SHUT_WR), which causes a FIN packet to be sent to the peer (B), and this packet takes the form of a 0 return code from recv() inside B. (Note: the FIN packet, being an implementation aspect, is not mentioned by the manpage.) The "EOF", as the manpage calls it, means there will be no more transmission from A to B, but application B can, and should, continue to send what it was in the process of sending, and potentially even send some more (A is still receiving). When that sending is done (shortly), B should itself call shutdown(SHUT_WR) to close the other half of the duplex. Now app A receives EOF and all transmission has ceased. The two apps are then OK to call shutdown(SHUT_RD) to close their sockets for reading, and then close() to free the system resources associated with the socket. (Note that the ACKs in the termination sequence FIN --> ACK, FIN --> ACK are sent automatically by the TCP stack; shutdown(SHUT_RD) has no on-the-wire effect and only marks the local socket as done reading.)
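To make B's side of that exchange concrete, here is a minimal sketch, assuming a blocking socket and that B has nothing left to send once it sees EOF (the function name is illustrative):

#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void peer_close(int fd)
{
    char buf[4096];

    /* recv() returning 0 is A's FIN (A called shutdown(SHUT_WR)). */
    while (recv(fd, buf, sizeof buf, 0) > 0) {
        /* ...process the final data A sent... */
    }

    /* Finish sending anything still owed to A here, then: */
    shutdown(fd, SHUT_WR);   /* B's own FIN: all transmission ceases */
    shutdown(fd, SHUT_RD);   /* done reading too */
    close(fd);               /* free the descriptor */
}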
Onwards to our aims: for (1) and (2), the application must basically wait for the shutdown sequence to happen and observe its outcome. Notice how, if we follow the small protocol above, it is clear to both apps that the termination initiator (A) has sent everything to B. This is because B received EOF (and EOF is received only after everything else). A also received EOF, which is issued in reply to its own EOF, so A knows B received everything (there is a caveat here: the termination protocol must have a convention of who initiates the termination, so that both peers don't do so at once). However, the reverse is not true. After B calls shutdown(SHUT_WR), there is nothing coming back at the app level to tell B that A received all the data sent, plus the FIN (the A->B transmission has ceased!). Correct me if I'm wrong, but I believe at this stage B is in state LAST_ACK, and when the final ACK arrives (step #4 of the four-way handshake), it concludes the close, but the application is not informed unless it had set SO_LINGER with a long-enough timeout. SO_LINGER "ON" instructs the shutdown call to block (be performed in the foreground), hence the shutdown call itself will do the waiting.
In conclusion, what I recommend is to configure SO_LINGER ON with a long timeout, which causes the call to block and hence report any errors. What is not entirely clear is whether it is shutdown(SHUT_WR) or shutdown(SHUT_RD) that blocks in expectation of the LAST_ACK, but that is of less importance, as we need to call both.
Blocking on shutdown is problematic for requirement #3 above, where e.g. you have a single-threaded design that serves all connections. Using SO_LINGER may block all connections on the termination of one of them. I see three routes to address the problem:
1. shutdown with SO_LINGER from a different thread. This will of course complicate the design.
2. linger in the background and either:
2A. "promote" FIN and FIN2 to app-level messages which you can read and hence wait for. This basically moves the problem that TCP was meant to solve one level higher, which I consider hack-ish, also because the ensuing shutdown calls may still end in limbo.
2B. try to find a lower-level facility such as the SIOCOUTQ ioctl described here, which queries the number of unACKed bytes in the network stack (a sketch follows this list). The caveats are many: this is Linux-specific, we are not sure whether it applies to FIN ACKs (to know whether closing is fully done), and you'd need to poll it periodically, which is complicated. Overall I'm leaning towards option 1.
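For completeness, a hedged Linux-only sketch of option 2B might look like this (the function name and poll interval are illustrative; the caveats above still apply):

#include <linux/sockios.h>
#include <sys/ioctl.h>
#include <unistd.h>

int wait_send_queue_drained(int fd, int max_tries)
{
    int pending = 0;

    while (max_tries-- > 0) {
        /* SIOCOUTQ reports bytes in the send queue not yet ACKed by the peer. */
        if (ioctl(fd, SIOCOUTQ, &pending) < 0)
            return -1;
        if (pending == 0)
            return 0;        /* queue drained */
        usleep(10000);       /* illustrative 10 ms poll interval */
    }
    return 1;                /* gave up; data may still be in flight */
}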
I tried to write a comprehensive summary of the issue, corrections/additions welcome.
TCP sockets are bidirectional - you send and receive over the one socket. close() stops communication in both directions. shutdown() provides another parameter that allows you to specify which direction you might want to stop using.
Another difference (between close() and shutdown(SHUT_RDWR)) is that close() only closes your reference to the socket: if another process (or a duplicated descriptor) is still using it, the connection stays open, while shutdown() shuts down the underlying connection irrespective of other processes.
shutdown() is often used by clients to provide framing - to indicate the end of their request, e.g. an echo service might buffer up what it receives until the client shutdown()s their send side, which tells the server that the client has finished, and the server then replies; the client can receive the reply because it has only shutdown() writing, not reading, through its socket.
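A minimal sketch of that framing idiom on the client side, assuming a connected blocking socket (names are illustrative):

#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void echo_client_roundtrip(int fd, const char *request)
{
    char reply[4096];

    send(fd, request, strlen(request), 0);
    shutdown(fd, SHUT_WR);   /* half-close: tells the server "request complete" */

    /* Only the write side is shut down, so the reply can still be read. */
    while (recv(fd, reply, sizeof reply, 0) > 0) {
        /* ...consume the reply... */
    }
    close(fd);
}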
close() will close both the sending and receiving ends of the socket. If you want to close only the sending part of the socket and not the receiving part, or vice versa, you can use shutdown():
close() -------> closes both the sending and receiving ends.
shutdown() -------> closes only the sending or receiving end, depending on its argument:
SHUT_RD (shut down the reading end (receiving end))
SHUT_WR (shut down the writing end (sending end))
SHUT_RDWR (shut down both)

How do I use EPOLLHUP

Could you guys provide me with good sample code using EPOLLHUP for dead peer handling? I know that it is a signal to detect a user disconnection, but I'm not sure how I can use it in code... Thanks in advance.
You use EPOLLRDHUP to detect peer shutdown, not EPOLLHUP (which signals an unexpected close of the socket, i.e. usually an internal error).
Using it is really simple, just "or" the flag with any other flags that you are giving to epoll_ctl. So, for example instead of EPOLLIN write EPOLLIN|EPOLLRDHUP.
After epoll_wait, do an if(my_event.events & EPOLLRDHUP) followed by whatever you want to do if the other side closed the connection (you'll probably want to close the socket).
Note that getting a "zero bytes read" result when reading from a socket also means that the other end has shut down the connection, so you should always check for that too, to avoid nasty surprises (the FIN might arrive after you have woken up from EPOLLIN but before you call read; if you are in ET mode, you'll not get another notification).
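A minimal sketch tying this together, assuming an existing epoll instance epfd and a connected socket fd (the function name is illustrative):

#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

void handle_events(int epfd, int fd)
{
    /* "Or" EPOLLRDHUP into the flags when registering the socket. */
    struct epoll_event ev = { .events = EPOLLIN | EPOLLRDHUP, .data.fd = fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);

    struct epoll_event got;
    if (epoll_wait(epfd, &got, 1, -1) == 1) {
        if (got.events & EPOLLRDHUP) {
            /* Peer shut down its writing end (or closed): clean up. */
            printf("peer closed the connection\n");
            close(got.data.fd);
        } else if (got.events & EPOLLIN) {
            /* Readable: also treat a 0-byte read as peer shutdown. */
        }
    }
}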
