Consider this small Node.js server:
var net = require('net');

var server = net.createServer(function (socket) {
    console.log("Connection detected");

    socket.on('end', function () {
        console.log('server disconnected');
    });

    socket.write("Hello World");
    socket.end();
});

server.listen(8888);
When I test the server with Chrome on my MacBook Pro, I see the "Connection detected" message three times in the console.
I know one is the actual connection and another is for the favicon, but what is the third one about?
I tested it with Firefox and wget (a Linux command-line program), as well as telnet, to dig deeper. Surprisingly, none of these makes any extra connection (obviously they don't even try to download the favicon). So I fired up Wireshark and captured a session, and quickly discovered that Chrome systematically makes a useless connection, i.e. it just connects (SYN, SYN-ACK, ACK) and then closes the connection (RST, ACK) without sending anything.
A quick bit of googling turned up this bug report (excerpt):
I suspect the "empty" TCP connections are
backup TCP connections,
IPv4/IPv6 parallel connections, or
TCP pre-connections.
A backup TCP connection is made only if the original TCP connection is
not set up within 250 milliseconds. IPv4/IPv6 parallel connections
are made only if the server has both IPv4 and IPv6 addresses and the
IPv6 connection is not set up within 300 milliseconds. Since you're
testing a local server at localhost:8080, you should be able to
connect to it quickly, so I suspect you are seeing TCP
pre-connections.
To verify if the "empty" TCP connections are TCP pre-connections, open
the "wrench" menu > Settings > Under the Hood > Privacy, and clear the
"Predict network actions to improve page load performance" check box.
Shut down and restart Chrome. Are the "empty" TCP connections gone?
For further reference, see the linked thread, which explains in more depth what backup, parallel and pre-connections are, and if/why this is a good optimization.
Related
I have a TCP server running. A client connects to the server and sends packets periodically. On the server, this incoming connection shows up as CONNECTED, and the server socket still listens for other connections.
Say this client suddenly gets powered off, with no FIN sent to the server. When it powers up again, it still uses the same port to connect, but the server doesn't reply to the SYN request. It just ignores the incoming request, since a connection with this port already exists.
How can I make the server close the old connection and accept the new one?
My TCP server runs on Ubuntu 14.04; it's a Java program using ServerSocket.
That's not correct: a server can accept multiple connections, and it will accept a new connection from a rebooted client as long as that client connects from a different port (which is usually the case). If your program is not accepting it, it's because you haven't called accept() a second time. This probably means that your application only handles one blocking operation at a time (for example, it might be stuck in a read() on the connected socket). The solution is to read from the connected sockets and accept new connections simultaneously. This can be done with an I/O multiplexer, like select(), or with multiple threads.
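For comparison, Node's net module (which most of this thread is about) does that multiplexing for you: the event loop keeps accepting new connections while data is being read from the sockets that are already connected. A minimal sketch, with an arbitrary port number:
var net = require('net');

// Every new client gets its own socket and callbacks; reads on existing
// connections never block the server from accepting further clients.
var server = net.createServer(function (socket) {
    console.log('client connected from ' + socket.remoteAddress + ':' + socket.remotePort);

    socket.on('data', function (chunk) {
        console.log('received ' + chunk.length + ' bytes');
    });

    socket.on('close', function () {
        console.log('client disconnected');
    });
});

server.listen(9000);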
I just want to ask about the net module of Node.js, because I did not fully understand the docs: what will happen if I call setKeepAlive()? What is the behavior of setKeepAlive()?
var net = require('net');

var server = net.createServer(function (socket) {
    socket.setKeepAlive(true, 60000); // 1 min = 60000 milliseconds.

    socket.on('data', function (data) {
        // receiving data here
    });

    socket.on('end', function (data) {
    });
});

server.listen(1333, '127.0.0.1', function () {
    console.log("server is listening on port 1333!");
});
Thank you in advance.
The .setKeepAlive() method enables/disables TCP keep-alive. The node.js socket library only toggles the option; the keep-alive functionality itself is implemented by the TCP stack in the host OS.
Here's a pretty good summary of what the keep alive feature does: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html.
Here's a piece of that article that should give you an overview:
The keepalive concept is very simple: when you set up a TCP
connection, you associate a set of timers. Some of these timers deal
with the keepalive procedure. When the keepalive timer reaches zero,
you send your peer a keepalive probe packet with no data in it and the
ACK flag turned on. You can do this because of the TCP/IP
specifications, as a sort of duplicate ACK, and the remote endpoint
will have no arguments, as TCP is a stream-oriented protocol. On the
other hand, you will receive a reply from the remote host (which
doesn't need to support keepalive at all, just TCP/IP), with no data
and the ACK set.
If you receive a reply to your keepalive probe, you can assert that
the connection is still up and running without worrying about the
user-level implementation. In fact, TCP permits you to handle a
stream, not packets, and so a zero-length data packet is not dangerous
for the user program.
This procedure is useful because if the other peers lose their
connection (for example by rebooting) you will notice that the
connection is broken, even if you don't have traffic on it. If the
keepalive probes are not replied to by your peer, you can assert that
the connection cannot be considered valid and then take the correct
action.
Since you are setting keep-alive on incoming connections to your server, the effect of the setting will depend entirely upon what happens with these incoming sockets. If they are short-lived (e.g. they connect, exchange some data and then disconnect, like a typical HTTP connection, without going inactive for any significant amount of time), then the keep-alive setting will not even come into play.
If, on the other hand, a client connects to the server and holds that connection open for a long time, then the keep-alive setting will come into play and you will see the different behaviors called out in the article referenced above. In addition, if the client is a battery-powered device (phone, tablet, etc...) and it holds a long-running connection, it may consume more battery power and a bit more bandwidth responding to the regular keep-alive packets, because the device has to wake up to receive incoming packets and then has to transmit responses.
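To make that concrete, here is a minimal sketch of a server that enables keep-alive on long-lived connections and reacts when the peer silently disappears. The exact error surfaced when the probes go unanswered depends on the host TCP stack, so treat the ETIMEDOUT check below as an assumption rather than a guarantee:
var net = require('net');

var server = net.createServer(function (socket) {
    // Ask the OS to start sending keep-alive probes after 60 seconds of inactivity.
    socket.setKeepAlive(true, 60000);

    socket.on('data', function (data) {
        // normal traffic; the kernel's idle timer is reset by activity
    });

    // If the peer vanishes (power loss, pulled cable), the failed probes
    // eventually make the kernel drop the connection; the socket then
    // emits 'error' (typically ETIMEDOUT) followed by 'close'.
    socket.on('error', function (err) {
        console.log('connection error:', err.code);
    });

    socket.on('close', function (hadError) {
        console.log('connection closed, hadError =', hadError);
    });
});

server.listen(1333, '127.0.0.1');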
I've built a small, basic TCP server in Node.js to which external devices can connect over a TCP socket.
For each connected device I only want to store one socket, which is the active open connection to this device. If for some reason the connection gets closed by the device, I want my server to know and do stuff (e.g. splice the socket from activeSocketList, notify a room in socket.io, etc.).
The server, straightforward:
var net = require('net');

var server = net.createServer(function (c) {
    // New connection is online.
    logToOversight(c.remoteAddress + ' > Connection Established');
    c.setKeepAlive(true);

    // Do something with incoming data.
    c.on('data', function (buffer) {
        parseMessage(c, buffer);
    });

    // Connection gets an end-event from us.
    c.on('end', function () {
        // server closes the connection
    });
}).listen(port);
I've looked at the close and end events, but they only seem to catch connections ended by the server itself.
I want to catch the abrupt end of the connection from the device's side. E.g. I connect to the server via netcat for testing:
nc example.com 12345
and 'close' the connection via Ctrl+Z. I want to catch that 'event' on the server side.
And 'close' the connection via ctrl-z I want to catch that 'event' on the server side
Ctrl+Z in a Unix terminal suspends the process. It does not close any active TCP connections, so your server still has a valid TCP connection to the client, even though that client will not be able to send or consume data until you resume it with fg. See for yourself with netstat; it will still be there in the ESTABLISHED state.
Both a graceful close of a TCP connection (with the exchange of FIN packets) and an abrupt close (indicated by an RST) are reported by the close event that you already know about.
The worst case is the network cable being yanked out, or something equivalent such as an abrupt loss of the client's power supply. To detect this case you must implement heartbeats and timeouts between your server and its clients.
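On the server side, one way to approximate a heartbeat check is an idle timeout: if a device is supposed to send something at least every so often, treat prolonged silence as a dead connection. A minimal sketch; the 90-second window is an assumption about how often the devices normally send:
var net = require('net');

var server = net.createServer(function (c) {
    // If nothing arrives for 90 seconds, assume the device is gone.
    c.setTimeout(90000);

    c.on('timeout', function () {
        console.log(c.remoteAddress + ' > presumed dead, destroying socket');
        c.destroy(); // triggers 'close', where the cleanup can happen
    });

    c.on('data', function (buffer) {
        // any incoming traffic resets the idle timer automatically
    });

    c.on('close', function () {
        // splice the socket from activeSocketList, notify the socket.io room, etc.
    });
});

server.listen(12345);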
I have set up a Node.js server with socket.io on a VPS, and every 10 seconds I broadcast the number of connected clients to everyone. This usually works fine, though quite often the connection can't be established and I get this error (I changed the IP a bit):
GET http://166.0.0.55:8177/socket.io/1/?t=1385120872574
After reloading the site, the connection can usually be established, though I have no idea why the failed connection happens in the first place; I also don't know how to debug the socket.io code. Sometimes I can't connect to the server anymore and I have to restart it.
Additional information:
My main site runs on a different server (using a LAMP environment with CakePHP) than the Node.js server.
I use forever to run the server
I have a lot of connected clients (around 1000)
My VPS has 512 MB RAM and the CPU is never higher than 25%
After checking resource usage with the top command, try adding an error handler:
socket.on('error', function (err) {
    console.log("Socket.IO Error");
    console.log(err.stack); // this is changed from your code in last comment
});
Also, you could try a slower transport. Socket.io uses WebSocket by default, but if your server cannot allocate enough resources, you can try another transport that is slower but uses fewer resources:
io = socketIo.listen(80);
io.set('transports', ['xhr-polling']);
Folks,
My environment is Ubuntu 12.04.
Here is pseudo-code for my TCP server application that listens for connections:
while (true) {
    int hConn = accept(hMain, NULL, NULL);
    string s = readClient(hConn);
    if (s == "quit") {
        close(hConn);
    }
}
While my server is running, I telnet to localhost at port nnnn:
$ telnet localhost nnnn
quit
Connection closed by foreign host.
$
When the server receives "quit," it closes the client connection. This causes the telnet client to quit with an appropriate message.
So far so good.
However, when I run netstat, I can see that the client connection is still alive.
It takes a few seconds for it to disappear.
This happens even if I force quit my server app.
If I run my server app once again, I get an error that port "nnnn" is still in use.
I have to wait for a few seconds before I can run my server app once again.
Is there something that I am missing? Is there a way to fix this behavior?
Note that I am indeed closing the socket hMain when quitting the server, although this is not shown in the above pseudo-code.
Thank you in advance for your help.
Regards,
Peter
You need to be aware of the TIME_WAIT state, in which closed TCP connections hang around for a couple of minutes for TCP security/integrity reasons.
The problem with restarting your server can be overcome with the SO_REUSEADDR socket option, set via setsockopt() before bind().
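For the Node.js servers discussed earlier in this thread the same TIME_WAIT behavior applies, but Node (via libuv) already sets SO_REUSEADDR on listening sockets on Unix-like systems, so an immediate restart normally succeeds. If listen() still fails, it is usually because another live process owns the port; a minimal sketch of handling that case, with an arbitrary port number:
var net = require('net');

var server = net.createServer(function (socket) {
    socket.end('hello\n');
});

server.on('error', function (err) {
    if (err.code === 'EADDRINUSE') {
        // Another process is still bound to the port; retry shortly.
        console.log('Port in use, retrying in 1 second...');
        setTimeout(function () {
            server.close();
            server.listen(8888);
        }, 1000);
    } else {
        throw err;
    }
});

server.listen(8888);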