Best practices for Linux socket programming - linux

I have a simple C server that accepts connections, to which I try connecting using telnet or netcat. Each time I receive a connection, I print out the descriptor and close the connection in a child process.
I run an instance of netcat, connect to the server, disconnect (Ctrl+C), and repeat this a few times. The descriptor values printed on the server side are 4, 5, 6, 7 ... and keep increasing.
I tried repeating this exercise after a period of time and the values still keep increasing. I'm concerned that my descriptors aren't closing (despite an explicit call to close).
Is there some signal I should be handling, setting the handler to close the connection?

After a fork, the child process has a copy of the parent's set of file descriptors. So the proper procedure after the fork is to (1) close the listening socket in the child and (2) close the newly accepted connection socket in the parent.
Open file descriptors are reference counted by the kernel. So when the child inherits the connection socket the reference count is 2. After the parent closes the connection socket the count remains at 1 until the child is done and closes it. The reference count having then dropped to 0 the connection is then closed. (Some details omitted.)
The upshot is that after making this change you then see a lot of FDs equal to 4 in the parent because the same FD will continue to be opened/closed/reused even though multiple connections are being processed by the children.

After fork, both parent and child have a copy of the socket file descriptor.
You should close the connection socket in the parent process after the fork.
That does not close the connection by itself; the connection is closed only when the child process closes its copy of the socket too.


How to not lose data with sendHandle?

Node.js child_process docs say
The optional sendHandle argument that may be passed to subprocess.send() is for passing a TCP server or socket object to the child process...Any data that is received and buffered in the socket will not be sent to the child.
So if I pass a socket to a child process, how do I avoid losing the buffered data?
I called socket.read() on the socket before sending it to check for buffered data. It returned null, yet data was still lost.
How do I pass a socket from one process to another without losing data?
The only solution is to never start reading from the socket. For example, the internal/cluster/round_robin_handle module does:
this.handle = this.server._handle;
this.handle.onconnection = (err, handle) => this.distribute(err, handle);
This overrides the normal processing of new TCP handles, so a child process can do it instead.
(I would have used the cluster module, except I needed some customization, like least connection load balancing.)

Sharing a BIO between parent and child while avoiding the shutdown()

I am using the OpenSSL BIO_... functions to create a connection which is either secure or not. The basics work just fine, whether the connection is encrypted or not (i.e. I can communicate using the BIO_read() and BIO_write() functions, just as expected.)
However, when I want to create a child, it is customary to close the sockets we are not going to use in the parent or the child. I can solve the child problem by marking the socket with SOCK_CLOEXEC. However, if I want to do the opposite, keep the socket open in the child and close it in the parent, I run into a problem: the BIO_free_all() function calls shutdown(s, SHUT_RDWR) before the close(s). The result is that the socket is dead (effectively shut down) in the child as well.
My question is: is there a way to properly share an OpenSSL BIO interface between parent and child by avoiding the shutdown() call?
If not in the BIO interface itself, I can always close the file descriptor before calling BIO_free_all(), but that sounds like an ugly hack!

Where does Linux kernel do process and TCP connections cleanup after process dies?

I am trying to find the place in the Linux kernel where it cleans up after a process dies. Specifically, I want to see if/how it handles open TCP connections after a process is killed with the -9 signal (SIGKILL). I am pretty sure it closes all connections, but I want to see the details, and whether there is any chance that connections are not closed properly.
Pointers to linux kernel sources are welcome.
The meat of process termination is handled by exit.c:do_exit(). This function calls exit_files(), which in turn calls put_files_struct(), which calls close_files().
close_files() loops over all file descriptors the process has open (which includes all sockets), calling filp_close() on each one, which calls fput() on the struct file object. When the last reference to the struct file has been put, fput() calls the file object's .release() method, which for sockets, is the sock_close() function in net/socket.c.
I'm pretty sure the socket cleanup is more of a side effect of releasing all the file descriptors after the process dies, and not directly done by the process cleanup.
I'm going to go out on a limb though, and assume you're hitting a common pitfall with network programming. If I am correct in guessing that your problem is that you get an "Address in use" error (EADDRINUSE) when trying to bind to an address after a process is killed, then you are running into the socket's TIME_WAIT state.
If this is the case, you can either wait for the timeout, usually 60 seconds, or you can modify the socket to allow immediate reuse like so.
int sock, ret, on;
struct sockaddr_in servaddr;
sock = socket( AF_INET, SOCK_STREAM, 0 );
/* Enable address reuse */
on = 1;
ret = setsockopt( sock, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on) );
[EDIT]
From your comments, it sounds like you are having issues with half-open connections and don't fully understand how TCP works. TCP has no way of knowing whether a client is dead or just idle. If you kill -9 a client process, the four-way closing handshake never completes. This shouldn't leave open connections on your server, though, so you may still need a network dump to be sure of what's going on.
I can't say for sure how you should handle this without knowing exactly what you are doing, but you can read about TCP Keepalive here. A couple other options are sending empty or null messages periodically to the client (may require modifying your protocol), or setting hard timers on idle connections (may result in dropped valid connections).

Prevent fork() from copying sockets

I have the following situation (pseudocode):
function f:
    pid = fork()
    if pid == 0:
        exec to another long-running executable (no communication needed to that process)
    else:
        return "something"
f is exposed over a XmlRpc++ server. When the function is called over XML-RPC, the parent process prints "done closing socket" after the function returned "something". But the XML-RPC client hangs as long as the child process is still running. When I kill the child process, the XML-RPC client correctly finishes the RPC call.
It seems to me that I'm having a problem with fork() copying socket descriptors to the child process (parent called closesocket but child still owns a reference -> connection still established). How can I circumvent this?
EDIT: I read about FD_CLOEXEC already, but can't I force all descriptors to be closed on exec?
No, you can't force all file descriptors to be closed on exec. You will need to loop over all unwanted file descriptors in the child after the fork() and close them. Unfortunately, there isn't an easy, portable, way to do that - the usual approach is to use getrlimit() to get the current value of RLIMIT_NOFILE and loop from 3 to that number, trying close() on each candidate.
If you are happy to be Linux-only, you can read the /proc/self/fd/ directory to determine the open file descriptors and close them (except 0, 1 and 2 - which should either be left alone or reopened to /dev/null).

Sockets & File Descriptor Reuse (or lack thereof)

I am getting the error "Too many open files" after the call to socket in the server code below. This code is called repeatedly, and the error only occurs just after server_SD reaches the value 1022, so I am assuming that I am hitting the limit of 1024 imposed by "ulimit -n". What I don't understand is that I am closing the socket, which should make the fd reusable, but this seems not to be happening.
Notes: I am using Linux, and yes, the client is closed also. No, I am not a root user, so raising the limit is not an option. I should have a maximum of 20 (or so) sockets open at one time. Over the lifetime of my program I would expect to open and close close to 1,000,000 sockets (hence the need for reuse is very strong).
server_SD = socket (AF_INET, SOCK_STREAM, 0);
bind (server_SD, (struct sockaddr *) &server_address, server_len);
listen (server_SD, 1);
client_SD = accept (server_SD, (struct sockaddr *) &client_address, &client_len);
// read, write etc...
shutdown (server_SD, 2);
close (server_SD);
Does anyone know how to guarantee closure & re-usability ?
Thanks.
Run your program under valgrind with the --track-fds=yes option:
valgrind --track-fds=yes myserver
You may also need --trace-children=yes if your program uses a wrapper or it puts itself in the background.
If it doesn't exit on its own, interrupt it or kill the process with "kill pid" (not -9) after it accumulates some leaked file descriptors. On exit, valgrind will show the file descriptors that are still open and the stack trace corresponding to where they were created.
Running your program under strace to log all system calls may also be helpful. Another helpful command is /usr/sbin/lsof -p pid to display all currently used file descriptors and what they are being used for.
From your description it looks like you are opening a server socket for each accept(2). That is not necessary. Create the server socket once, bind(2) it, listen(2), then call accept(2) on it in a loop (or better yet, hand it to poll(2)).
Edit 0:
By the way, shutdown(2) on a listening socket is meaningless; it's intended for connected sockets only.
Perhaps your problem is that you're not specifying the SO_REUSEADDR flag?
From the socket manpage:
SO_REUSEADDR
Indicates that the rules used in validating addresses supplied in a bind(2) call should allow reuse of local addresses. For PF_INET sockets this means that a socket may bind, except when there is an active listening socket bound to the address. When the listening socket is bound to INADDR_ANY with a specific port then it is not possible to bind to this port for any local address.
Are you using fork()? If so, your children may be inheriting the open file descriptors.
If this is the case, you should have the child close any fds that don't belong to it.
This looks like it might be a TIME_WAIT problem. IIRC, TIME_WAIT is one of the states a TCP socket can be in; it is entered when both sides have closed the connection, but the system keeps the socket around for a while to avoid delayed packets being accepted as payload for a subsequent connection.
You should maybe have a look at this (bottom of page 99 and top of 100). And maybe that other question.
One needs to close the client socket before closing the server socket (the reverse of the order in my code above!).
Thanks all who offered suggestions !
