Determine if the other end (abruptly) closed the TCP connection - node.js

I've built a small, basic TCP server in Node.js to which external devices can connect over a TCP socket.
For each connected device I only want to store one socket: the active, open connection to that device. If for some reason the connection gets closed by the device, I want my server to know and react (e.g. splice the socket out of activeSocketList, notify a room in socket.io, etc.).
The server is straightforward:
```javascript
var server = net.createServer(function (c) {
    // New connection is online.
    logToOversight(c.remoteAddress + ' > Connection Established');
    c.setKeepAlive(true);

    // Do something with incoming data.
    c.on('data', function (buffer) {
        parseMessage(c, buffer);
    });

    // Connection gets an end event from us.
    c.on('end', function () {
        // Server closes the connection.
    });
}).listen(port);
```
I've looked at the close and end events, but they seem only to catch the end of connections initiated by the server itself, while I want to catch an abrupt end of the connection from the device's side. E.g. when I connect to the server via netcat for testing:

nc example.com 12345

and 'close' the connection via Ctrl+Z, I want to catch that 'event' on the server side.

"And 'close' the connection via ctrl-z I want to catch that 'event' on the server side"
Ctrl+Z in a Unix terminal suspends the process. It does not close any active TCP connections, so your server still has a valid TCP connection to the client, even though that client will not be able to send or consume data until you resume it with fg. See for yourself with netstat: the connection will still be there in the ESTABLISHED state.
Both a graceful close of a TCP connection, with an exchange of FIN packets, and an abrupt close, indicated with an RST, are reported by the close event that you already know about.
The worst case is the network cable being yanked out, or an equivalent such as an abrupt loss of the client's power supply. To detect this case you must implement heartbeats and timeouts between your server and its clients.

Related

TCP server ignores incoming SYN

I have a TCP server running. A client connects to the server and sends packets periodically. On the server, the incoming connection shows as ESTABLISHED, and the server socket still listens for other connections.
Say this client suddenly gets powered off, so no FIN is sent to the server. When it powers up again, it still uses the same port to connect, but the server doesn't reply to the SYN request. It just ignores the incoming request, since a connection with this port already exists.
How can I let the server close the old connection and accept the new one?
My TCP server runs on Ubuntu 14.04; it's a Java program using ServerSocket.
That's not correct: a server can accept multiple connections, and it will accept a new connection from a rebooted client as long as it's connecting from a different port (and that's usually the case). If your program is not accepting it, it's because you haven't called accept() a second time. This probably means that your application is only handling one blocking operation at a time (for example, it might be stuck in a read() operation on the connected socket). The solution is to simultaneously read from the connected sockets and accept new connections. This can be done using an I/O multiplexer, like select(), or multiple threads.

Given one established tcp connection from client to server, is a second one resolved without server explicitly calling accept?

Apologies in advance if my terminology is very rudimentary:
I am working with a client that establishes a TCP connection to a server. The client's socket is nonblocking, so after calling connect(), the client waits for the socket to become writable.
Upon accept()ing the connection from the client, the server performs a blocking operation (call it function X) and does not return to block in accept() for a long time.
During this time that the server is occupied performing function X, the client does another connect() to the same server, again using a nonblocking socket (different from the socket used for the first connection), then waits for the socket to become writable in order to consider the TCP connection "established."
I naively expected the second socket to remain non-writable until the server called accept() a second time to accept that second connection. But I've observed that this is not the case: the second socket becomes writable quickly, so the client again considers this new TCP connection "established."
Is this expected?
From one of the comments at this question, I (very loosely) understand that nonblocking sockets in the middle of a TCP connect remain non-writable while the TCP handshake is being performed. Is this true, and does it relate to the above question? Is it something like: if there is an existing TCP connection from a client to a server, then subsequent TCP connections from that same client to that same server are immediately/quickly "resolved" (the socket becomes writable without the server explicitly performing a second accept)?
What I tried:
I tried writing a unit test to simulate this scenario with one thread each for client and server running on a single PC, but I think this is not a valid way to test: per this Q&A, if client and server are on the same PC, the TCP handshake is not quite the same as with two separate PCs. For example, the client's connecting socket becomes writable without the server even listening, let alone accepting the connection.
Every connect() needs a corresponding accept() in order for the client and server to communicate with each other.
However, the 3-way TCP handshake is typically completed while the connection is still in the server's backlog, before accept() creates a new socket for it. Once the handshake is complete, the connection is "established", and that completes the connect() operation on the client's side, even if the connection has not been accept()ed yet on the server side.
See How TCP backlog works in Linux

Node.js - Why 3 connections?

Consider this small server for Node.js:

```javascript
var net = require('net');
var server = net.createServer(function (socket) {
    console.log("Connection detected");
    socket.on('end', function () {
        console.log('server disconnected');
    });
    socket.write("Hello World");
    socket.end();
});
server.listen(8888);
```
When I test the server with Chrome on my MacBook Pro, I get the "Connection detected" message three times in the console.
I know one is for the actual connection and another is for the favicon, but what's the third one all about?
I tested it with Firefox and wget (a Linux command-line program), as well as telnet, to dig deeper. Surprisingly, none of these make any extra connection (obviously they don't even try to download the favicon). So I fired up Wireshark and captured a session, and quickly discovered that Chrome systematically makes a useless connection, i.e. it just connects (SYN, SYN-ACK, ACK) and then closes the connection (RST, ACK) without sending anything.
A quick googling turned up this bug report (excerpt):
I suspect the "empty" TCP connections are
1. backup TCP connections,
2. IPv4/IPv6 parallel connections, or
3. TCP pre-connections.
A backup TCP connection is made only if the original TCP connection is not set up within 250 milliseconds. IPv4/IPv6 parallel connections are made only if the server has both IPv4 and IPv6 addresses and the IPv6 connection is not set up within 300 milliseconds. Since you're testing a local server at localhost:8080, you should be able to connect to it quickly, so I suspect you are seeing TCP pre-connections.
To verify whether the "empty" TCP connections are TCP pre-connections, open the "wrench" menu > Settings > Under the Hood > Privacy, and clear the "Predict network actions to improve page load performance" check box. Shut down and restart Chrome. Are the "empty" TCP connections gone?
For further reference, see the linked thread, which explains more in-depth what backup, parallel and pre-connections are and if/why this is a good optimization.

node.js tcp socket disconnect handling

I'm establishing a TCP client socket connection to an XMPP server and need a reliable way to detect interruptions in the connection (e.g. server crashes, restarts, etc.). I have listeners attached to the end, error and close events, but they do not fire reliably when I cut my internet connection during an active session. How can my client detect when the connection has been broken? I would prefer not to resort to pinging/timeouts.
I'm in no way an expert on TCP or socket programming, but I'm pretty sure that there exists no "reliable way to detect interruptions in the connection". See e.g. this Unix.com thread.
In node, your options seem to be socket.setTimeout/socket.on('timeout', callback) and/or socket.setKeepAlive.
Edit: Here is a guide on TCP keepalive.

How to know Socket client got disconnected?

I am coding on a Linux architecture.
I have a question regarding socket servers and clients.
I have made one sample program in which the server continues to accept connections and the client is connected to the server.
If someone removes the network cable, the client socket gets disconnected from the PC, while on the server side the connection still appears alive, because the server cannot be notified that the client is disconnected while the network is unplugged.
How can I know that the client got disconnected?
Thanks,
Neel
You need to either configure keepalive on the socket or send an application level heartbeat message, otherwise the listening end will wait indefinitely for packets to arrive. If you control the protocol, the application level heartbeat may be easier. As a plus side, either solution will help keep the connection alive across NAT gateways in the network.
See this answer: Is TCP Keepalive the only mechanism to determine a broken link?
Also see this Linux documentation: http://tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/#programming
SIGPIPE for local sockets, and EOF on read for every socket type.
