node.js http server: how to get pending socket connections?

On a basic Node HTTP server like this:
var http = require('http');
var server = http.createServer(function (req, res) {
  // getConnections is asynchronous and reports the current number of open connections
  this.getConnections(function (err, count) {
    console.log("number of concurrent connections: " + count);
  });
  //var pendingConnections = ???
});
server.maxConnections = 500;
server.listen(4000);
If I send 500 requests at once to the server, the number of concurrent connections is around 350. With the hard limit set to 500 (net.server.backlog too), I want to know how to access the number of pending connections (max. 150 in this example) whenever a new request starts.
So I think I have to access the underlying socket listening on port 4000 to get this info, but so far I have not been able to get it.
EDIT
Looking at node-http there is an event called connection, so I think the round trip of a request is as follows:
The client connects to the server socket --> 3-way handshake, the socket stays in state CONNECTED (or ESTABLISHED?!), then Node emits the connection event.
The Node HTTP server accepts this pending connection and starts processing the request by emitting request.
So the number of connections has to be at least as big as the number of requests, but with the following example I could not confirm this:
var http = require('http');
var activeRequests = 0;
var activeConnections = 0;

var server = http.createServer(function (req, res) {
  activeRequests++;
  res.end("foo");
});

server.on('connection', function (socket) {
  socket.setKeepAlive(false);
  activeConnections++;
});

setInterval(function () {
  console.log("activeConns: " + activeConnections + " activeRequests: " + activeRequests);
  activeRequests = 0;
  activeConnections = 0;
}, 500);

server.maxConnections = 1024;
server.listen(4000, '127.0.0.1');
Even if I stress the server with 1000 concurrent connections and add a delay in the response, activeRequests is mostly as high as activeConnections. Even worse, activeRequests is often higher than activeConnections; how can this be?

IIRC you can just count how many connections have a status of SYN_RECV for the particular IP and port you're listening on. Whether you use a child process to execute netstat and grep (or similar utilities) for that information, or write a binding to get it via the *nix C API, is up to you.
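A minimal sketch of the child-process approach, assuming a Linux host where netstat is available; the port, command and parsing here are examples, not something from the question:

var exec = require('child_process').exec;

// Count connections to the given port that are still in SYN_RECV,
// i.e. accepted by the kernel's SYN queue but not yet established.
function countPendingConnections(port, callback) {
  exec("netstat -tan | grep ':" + port + " ' | grep -c SYN_RECV",
    function (err, stdout) {
      // grep exits with code 1 when it finds nothing, so treat that as zero
      if (err && err.code !== 1) return callback(err);
      callback(null, parseInt(stdout, 10) || 0);
    });
}

countPendingConnections(4000, function (err, pending) {
  if (err) throw err;
  console.log("pending (SYN_RECV) connections: " + pending);
});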

Related

Socket.IO limiting to only 6 connections in Node.js

So I came across a problem. I am trying to send {id} to my REST API (Node.js) and in response I get data on the socket.
Problem:
For the first 5-6 times it works perfectly fine, displays the Id and sends data back to the socket. But after the 6th time it does not get the ID.
I tried https://github.com/socketio/socket.io/issues/1145 but it didn't solve the problem.
On recompiling the server it shows the previous {ids} which I entered after the 6th time. It's like after 5-6 times it is storing the id in some form of cache.
Here is my API route.
// This route only gets {id} 5-6 times. After that it stops receiving the {id}.
const express = require("express");

var closeFlag = false;
const PORT = process.env.SERVER_PORT; //|| 3000;
const app = express();
var count = 1;

http = require('http');
http.globalAgent.maxSockets = 100;
http.Agent.maxSockets = 100;

const serverTCP = http.createServer(app);
// const tcpsock = require("socket.io")(serverTCP)
const tcpsock = require('socket.io')(serverTCP, {
  cors: {
    origin: '*',
  },
  perMessageDeflate: false
});

// Note: dgram, UDP_PORT and UDP_HOST are presumably required/defined elsewhere in the original file.
app.post("/getchanneldata", (req, res) => {
  console.log("count : " + count);
  count++; // for debugging purposes
  closeFlag = false;

  var message = (req.body.val).toString();
  console.log("message : " + message);
  chanId = message;

  client = dgram.createSocket({ type: 'udp4', reuseAddr: true });

  client.on('listening', () => {
    const address = client.address();
  });

  client.on('message', function (message1, remote) {
    var arr = message1.toString().split(',');
  });

  client.send(message, 0, message.length, UDP_PORT, UDP_HOST, function (err, bytes) {
    if (err) throw err;
    console.log(message);
    console.log('UDP client message sent to ' + UDP_HOST + ':' + UDP_PORT);
    // message="";
  });

  client.on('disconnect', (msg) => {
    client.Diconnected();
    client.log(client.client);
  });
});
There are multiple issues here.
In your app.post() handler, you don't send any response to the incoming http request. That means that when the browser (or any client) sends a POST to your server, the client sits there waiting for a response, but that response never comes.
Meanwhile, the browser has a limit on how many requests it will send simultaneously to the same host (I think Chrome's limit is coincidentally 6). Once you hit that limit, the browser queues the request and waits for one of the previous connections to return its response before sending another one. Eventually (after a long time), those connections will time out, but that takes a while.
So, the first thing to fix is to send a response in your app.post() handler, even if you just do res.send("ok");. That will allow the 7th, 8th, and subsequent requests to be sent to your server immediately. Every incoming http request should have a response sent back to it; even if you have nothing to send, just do a res.end(). Otherwise, the http connection is left hanging, consuming resources and waiting to eventually time out.
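A minimal sketch of that fix, with the response body just a placeholder:

app.post("/getchanneldata", (req, res) => {
  var message = (req.body.val).toString();
  // ... kick off the UDP work here ...

  // Always finish the HTTP request so the browser frees up
  // one of its per-host connection slots for the next POST.
  res.send("ok");
});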
On a separate note, your app.post() handler contains this:
client = dgram.createSocket({ type: 'udp4', reuseAddr: true });
This has a couple issues. First, you never declare the variable client so it becomes an implicit global (which is really bad in a server). That means successive calls to the app.post() handler will overwrite that variable.
Second, it is not clear from the included code when, if ever, you close that udp4 socket. It does not appear that the server itself ever closes it.
Third, you're recreating the same UDP socket on every single POST to /getchanneldata. Is that really the right design? If your server receives 20 of these requests, it will open up 20 separate UDP connections.
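One possible restructuring, sketched under the assumption that UDP_HOST and UDP_PORT are defined elsewhere in your file (as they appear to be in the question): create a single UDP socket at module scope, reuse it for every request, and always finish the HTTP response:

const dgram = require("dgram");

// Created once and reused by every POST handler, instead of being recreated per request.
const udpClient = dgram.createSocket({ type: "udp4", reuseAddr: true });

app.post("/getchanneldata", (req, res) => {
  const message = req.body.val.toString();
  udpClient.send(message, 0, message.length, UDP_PORT, UDP_HOST, (err) => {
    if (err) {
      return res.status(500).send("UDP send failed");
    }
    res.send("ok"); // always finish the HTTP request
  });
});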

TCP and WEB sockets

I have an HTTP server and socket.io (listening on this HTTP server). Clients connect (via socket.io) and get some information. Now I want to have clients connecting via a TCP socket that will receive the same information as the clients on the web socket. How do I do it? Is it required to create a net server? And if so, how do I send the information that comes to the HTTP server to the TCP clients?
You need to create the TCP server so clients will be able to connect to it.
One solution can be using a messaging system (such as pub/sub with Redis, or a library like https://github.com/learnboost/kue) to notify the other server to send the data.
For example:
1) user connects to socket.io
2) user connects to TCP server
3) TCP server subscribes to listening to signals
4) socket.io emits data to the user and signals the TCP server to send the data as well
5) TCP server sends the data
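As a rough illustration of that flow, here is a sketch using Redis pub/sub with the classic node_redis callback API. The channel name 'updates', the io variable (your socket.io instance) and socketArray (the list of raw TCP sockets, shown further below in this answer) are assumptions for the example; the two halves would normally live in the two different server processes and are only shown together for brevity:

var redis = require('redis');
var pub = redis.createClient(); // used by the socket.io process
var sub = redis.createClient(); // used by the TCP server process

// In the socket.io process: emit to web clients and publish for the TCP side.
function broadcast(data) {
  io.sockets.emit('update', data);              // io is your socket.io instance
  pub.publish('updates', JSON.stringify(data)); // signal the TCP server
}

// In the TCP server process: forward every published message to raw TCP clients.
sub.subscribe('updates');
sub.on('message', function (channel, message) {
  socketArray.forEach(function (socket) {
    socket.write(message + '\n');
  });
});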
In Node.js, to start a TCP server:
var fs = require('fs');
var net = require('net');

var server = net.createServer(function (socket) { // create a tcp server
  socket.on('data', function (data) { // fires when data arrives on the socket
    var strRequestInfo = data.toString(); // get the string sent by the client
    /*
      here you could analyse the request data
      and decide what to do with it, e.g. return a certain file
    */
    fs.readFile('/path/to/some/file.html', function (err, fileData) { // read a file
      if (err) throw err;
      socket.write(fileData); // write file content to tcp socket
    });
    /* -or- just write some text */
    socket.write(new Buffer('some text'));
  });
});

server.listen(8080, function () { // bind the server
  console.log('TCP server bound');
});
You have to take into consideration that socket.on('data') will not necessarily fire once with all the data; it can fire many times depending on the size of the data being sent.
Therefore the request data should be concatenated until the logic of your request decides to send a response back to the client.
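For example, a sketch that buffers incoming chunks and only reacts once a full message has arrived (the newline delimiter is just an assumption for the example):

var net = require('net');

var server = net.createServer(function (socket) {
  var buffered = '';

  socket.on('data', function (chunk) {
    buffered += chunk.toString();

    // Process every complete, newline-terminated message received so far.
    var newlineIndex;
    while ((newlineIndex = buffered.indexOf('\n')) !== -1) {
      var message = buffered.slice(0, newlineIndex);
      buffered = buffered.slice(newlineIndex + 1);
      socket.write('got: ' + message + '\n');
    }
  });
});

server.listen(8080);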
You can add the sockets to an array if you would like to send data to all sockets:
var socketArray = [];
var server = net.createServer(function (socket) {
  socketArray.push(socket);
});
Then you can iterate and send responses to all clients:
for (var i = 0; i < socketArray.length; i++) {
  socketArray[i].write(new Buffer('some data'));
}
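One caveat worth adding: sockets should also be removed from the array when they close, otherwise the loop above will eventually write to sockets that are already gone. A small sketch of the same pattern with cleanup:

var net = require('net');
var socketArray = [];

var server = net.createServer(function (socket) {
  socketArray.push(socket);

  socket.on('close', function () {
    var index = socketArray.indexOf(socket);
    if (index !== -1) {
      socketArray.splice(index, 1); // forget the socket once it is gone
    }
  });
});

server.listen(8080);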

Is it possible to enable tcp, http and websocket all using the same port?

I am trying to enable TCP, HTTP and websocket.io communication on the same port. I started out with the TCP server (the part above the //// line), and it worked. Then I ran the echo server example found on websocket.io (the part below the //// line), and it also worked. But when I try to merge them together, TCP doesn't work anymore.
So, is it possible to enable TCP, HTTP and WebSockets all using the same port? Or do I have to listen on another port for TCP connections?
var net = require('net');
var http = require('http');
var wsio = require('websocket.io');

var conn = [];

var server = net.createServer(function (client) { // 'connection' listener
  var info = {
    remote: client.remoteAddress + ':' + client.remotePort
  };
  var i = conn.push(info) - 1;
  console.log('[conn] ' + conn[i].remote);

  client.on('end', function () {
    console.log('[disc] ' + conn[i].remote);
  });

  client.on('data', function (msg) {
    console.log('[data] ' + conn[i].remote + ' ' + msg.toString());
  });

  client.write('hello\r\n');
});

server.listen(8080);

///////////////////////////////////////////////////////////

var hs = http.createServer(function (req, res) {
  res.writeHead(200, {
    'Content-Type': 'text/html'
  });
  res.end(['<script>', "var ws = new WebSocket('ws://127.0.0.1:8080');", 'ws.onmessage = function (data) { ws.send(data); };', '</script>'].join(''));
});

hs.listen(server);

var ws = wsio.attach(hs);
var i = 0, last;

ws.on('connection', function (client) {
  var id = ++i, last;
  console.log('Client %d connected', id);

  function ping() {
    client.send('ping!');
    if (last)
      console.log('Latency for client %d: %d ', id, Date.now() - last);
    last = Date.now();
  }

  ping();
  client.on('message', ping);
});
You can have multiple different protocols handled by the same port but there are some caveats:
There must be some way for the server to detect (or negotiate) the protocol that the client wishes to speak. You can think of separate ports as the normal way of detecting the protocol the client wishes to speak.
Only one server process can be actually listening on the port. This server might only serve the purpose of detecting the type of protocol and then forwarding to multiple other servers, but each port is owned by a single server process.
You can't support multiple protocols where the server speaks first (because there is no way to detect the protocol of the client). You can support a single server-first protocol with multiple client-first protocols (by adding a short delay after accept to see if the client will send data), but that's a bit wonky.
An explicit design goal of the WebSocket protocol was to allow WebSocket and HTTP protocols to share the same server port. The initial WebSocket handshake is an HTTP compatible upgrade request.
The websockify server/bridge is an example of a server that can speak 5 different protocols on the same port: HTTP, HTTPS (encrypted HTTP), WS (WebSockets), WSS (encrypted WebSockets), and Flash policy response. The server peeks at the first character of the incoming request to determine if it is TLS encrypted (HTTPS, or WSS) or whether it begins with "<" (Flash policy request). If it is a Flash policy request, then it reads the request, responds and closes the connection. Otherwise, it reads the HTTP handshake (either encrypted or not) and the Connection and Upgrade headers determine whether it is a WebSocket request or a plain HTTP request.
Disclaimer: I made websockify
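For illustration, here is a rough sketch of that first-byte sniffing using only Node's net and http modules. This is not websockify's code; the policy response and routing targets are placeholders:

var net = require('net');
var http = require('http');

var httpServer = http.createServer(function (req, res) {
  res.end('plain HTTP (or the WebSocket upgrade) is handled here');
});

// One listener owns the port; it peeks at the first chunk and routes the socket.
var frontend = net.createServer(function (socket) {
  socket.once('data', function (firstChunkk) {
    socket.pause();
    socket.unshift(firstChunk); // put the peeked bytes back for the real handler

    if (firstChunk[0] === 0x16) {
      // TLS handshake record: hand off to an HTTPS/WSS terminator (not shown here)
      socket.end();
    } else if (firstChunk[0] === 0x3c) { // '<' means a Flash policy request
      socket.end('<cross-domain-policy/>\0'); // placeholder policy response
    } else {
      // Plain HTTP / WebSocket handshake: let the http server take over the socket
      httpServer.emit('connection', socket);
      socket.resume();
    }
  });
});

frontend.listen(8080);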
Short answer - NO, you can't have different TCP/HTTP/Websocket servers running on the same port.
Longish answer -
Both WebSockets and HTTP work on top of TCP, so you can think of an HTTP server or WebSocket server as a custom TCP server (with some state management and protocol-specific encoding/decoding). It is not possible to have multiple sockets bound to the same port/protocol pair on a machine, so the first one will win and the following ones will get socket bind exceptions.
nginx allows you to run http and websocket on the same port, and it forwards to the correct application:
https://medium.com/localhost-run/using-nginx-to-host-websockets-and-http-on-the-same-domain-port-d9beefbfa95d

How can I send packets between the browser and server with socket.io, but only when there is more than one client?

In my normal setup, the client will emit data to my server regardless of whether or not there is another client to receive it. How can I make it so that it only sends packets when the user-count is > 1? I'm using node with socket.io.
To do this you would want to listen to the connection event on your server (as well as disconnect) and maintain a list of connected clients in a 'global' variable. When more than one client is connected, send out a message to all connected clients so they know they can start sending messages, like so:
var app = require('express').createServer(),
    io = require('socket.io').listen(app);

app.listen(80);
//setup express

var clients = [];

io.sockets.on('connection', function (socket) {
  clients.push(socket);
  if (clients.length > 1) {
    io.sockets.emit('start talking');
  }

  socket.on('disconnect', function () {
    var index = clients.indexOf(socket);
    clients = clients.slice(0, index).concat(clients.slice(index + 1));
    if (clients.length <= 1) {
      io.sockets.emit('quiet time');
    }
  });
});
Note: I'm making an assumption here that the socket is passed to the disconnect event; I'm pretty sure it is, but I haven't had a chance to test.
The disconnect event won't receive the socket passed into it, but because the event handler is registered within the closure scope of the initial connection, you will have access to it.

Change port without losing data

I'm building a settings manager for my http server. I want to be able to change settings without having to kill the whole process. One of the settings I would like to be able to change is the port number, and I've come up with a variety of solutions:
Kill the process and restart it
Call server.close() and then do the first approach
Call server.close() and initialize a new server in the same process
The problem is, I'm not sure what the repercussions of each approach are. I know that the first will work, but I'd really like to accomplish these things:
Respond to existing requests without accepting new ones
Maintain data in memory on the new server
Lose as little uptime as possible
Is there any way to get everything I want? The API for server.close() gives me hope:
server.close(): Stops the server from accepting new connections.
My server will only be accessible by clients I create and by a very limited number of clients connecting through a browser, so I will be able to notify them of a port change. I understand that changing ports is generally a bad idea, but I want to allow for the edge-case where it is convenient or possibly necessary.
P.S. I'm using connect if that changes anything.
P.P.S. Relatively unrelated, but what would change if I were to use UNIX server sockets or change the host name? This might be a more relevant use-case.
P.P.P.S. This code illustrates the problem of using server.close(). None of the previous servers are killed, but more are created with access to the same resources...
var http = require("http");

var server = false,
    curPort = 8888;

function OnRequest(req, res) {
  res.end("You are on port " + curPort);
  CreateServer(curPort + 1);
}

function CreateServer(port) {
  if (server) {
    server.close();
    server = false;
  }
  curPort = port;
  server = http.createServer(OnRequest);
  server.listen(curPort);
}

CreateServer(curPort);
Resources:
http://nodejs.org/docs/v0.4.4/api/http.html#server.close
I tested the close() function. It seems to do absolutely nothing. The server still accepts connections on its port. Restarting the process was the only way for me.
I used the following code:
var http = require("http");
var server = false;

function OnRequest(req, res) {
  res.end("server now listens on port " + 8889);
  CreateServer(8889);
}

function CreateServer(port) {
  if (server) {
    server.close();
    server = false;
  }
  server = http.createServer(OnRequest);
  server.listen(port);
}

CreateServer(8888);
I was about to file an issue on the node github page when I decided to test my code thoroughly to see if it really is a bug (I hate filing bug reports when it's user error). I realized that the problem only manifests itself in the browser, because apparently browsers do some weird kind of HTTP keep-alive thing where they can still access dead ports because there's still a connection with the server.
What I've learned is this:
Browser caches keep ports alive unless the process on the server is killed
Utilities that do not keep caches by default (curl, wget, etc) work as expected
HTTP requests in node also don't keep the same type of cache that browsers do
For example, I used this code to prove that node http clients don't have access to old ports:
Client-side code:
var http = require('http'),
    client,
    request;

function createClient(port) {
  client = http.createClient(port, 'localhost');
  request = client.request('GET', '/create');
  request.end();

  request.on('response', function (response) {
    response.on('end', function () {
      console.log("Request ended on port " + port);
      setTimeout(function () {
        createClient(port);
      }, 5000);
    });
  });
}

createClient(8888);
And server-side code:
var http = require("http");

var server,
    curPort = 8888;

function CreateServer(port) {
  if (server) {
    server.close();
    server = undefined;
  }
  curPort = port;
  server = http.createServer(function (req, res) {
    res.end("You are on port " + curPort);
    if (req.url === "/create") {
      CreateServer(curPort);
    }
  });
  server.listen(curPort);
  console.log("Server listening on port " + curPort);
}

CreateServer(curPort);
Thanks everyone for the responses.
What about using cluster?
http://learnboost.github.com/cluster/docs/reload.html
It looks interesting!
