How to set the HTTP Keep-Alive timeout in a Node.js server

I'm doing some load testing against an ExpressJS server, and I noticed that the response sent by the server includes a "Connection: Keep-Alive" header. As far as I understand it, the connection will remain open until the server or the client sends a "Connection: Close" header.
In some implementations, the "Connection: Keep-Alive" header is accompanied by a "Keep-Alive" header setting the connection timeout and the maximum number of consecutive requests sent over this connection.
For example: "Keep-Alive: timeout=15, max=100"
Is there a way (and is it relevant) to set these parameters on an Express server?
If not, do you know how ExpressJS handles this?
Edit:
After some investigation, I found out that the default timeout is set in the Node standard http library:
socket.setTimeout(2 * 60 * 1000); // 2 minute timeout
In order to change this:
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end("Hello World");
}).on('connection', function (socket) {
  socket.setTimeout(10000);
}).listen(3000);
Still, it seems a bit odd to me that the server doesn't give the client any hint about its timeout.
Edit2:
Thanks to josh3736 for his comment.
setSocketKeepAlive is not related to HTTP keep-alive. It is a TCP-level option that allows you to detect that the other end of the connection has disappeared.
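To make the distinction concrete, here is a minimal sketch (the timeout values are arbitrary): socket.setKeepAlive() tunes TCP keepalive probes, while socket.setTimeout() controls how long an idle HTTP connection is kept before being destroyed.

var http = require('http');

var server = http.createServer(function (req, res) {
  res.end('Hello World');
});

server.on('connection', function (socket) {
  // TCP-level keepalive: send probes after 60s of silence to detect a vanished peer.
  // This does not affect the HTTP Keep-Alive header or the idle-connection timeout.
  socket.setKeepAlive(true, 60 * 1000);

  // HTTP-level idle timeout: connections idle for 10s get destroyed.
  socket.setTimeout(10 * 1000);
});

server.listen(3000);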

For Express 3:
var express = require('express');
var app = express();
var server = app.listen(5001);
server.on('connection', function (socket) {
  console.log("A new connection was made by a client.");
  socket.setTimeout(30 * 1000); // 30 second timeout. Change this as you see fit.
});

To set keepAliveTimeout on the Express server:
var express = require('express');
var app = express();
var server = app.listen(5001);
server.keepAliveTimeout = 30000;

For Node.js 10.15.2 and newer with Express, setting only server.keepAliveTimeout was not enough; we also need to configure server.headersTimeout to be longer than server.keepAliveTimeout.
// Ensure all inactive connections are terminated by the ALB, by setting this a few seconds higher than the ALB idle timeout
server.keepAliveTimeout = 30000;
// Ensure the headersTimeout is set higher than the keepAliveTimeout due to this Node.js regression bug: https://github.com/nodejs/node/issues/27363
server.headersTimeout = 31000;
Update
Since the issue "Regression issue with keep alive connections" has been closed, we can just set keepAliveTimeout on the latest Node.js versions.
One more thing: if your Node.js server is deployed behind an AWS ELB and occasionally returns 502 errors:
Clients -> AWS ELB -> Node Server
AWS ELB has a connection idle timeout of 60 seconds by default, and per the documentation:
We also recommend that you configure the idle timeout of your application to be larger than the idle timeout configured for the load balancer
Configuring keepAliveTimeout to a value greater than 60 seconds is one option to eliminate this issue.
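For example, a minimal sketch assuming the ELB idle timeout is left at its 60-second default (the port and exact values are placeholders):

const http = require('http');

const server = http.createServer((req, res) => {
  res.end('OK');
});

// Keep idle connections open longer than the ELB's 60-second idle timeout.
server.keepAliveTimeout = 65 * 1000;
// On affected Node.js versions, headersTimeout must also exceed keepAliveTimeout.
server.headersTimeout = 66 * 1000;

server.listen(process.env.PORT || 3000);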

Related

How can I solve aws load balancer 502 error?

Now, I am trying to solve an ELB 502 error.
I am using an Express Node.js server, deployed with ECS (deployment type: instance).
And the ELB keeps returning 502 errors.
When I check the ELB access logs, many entries contain:
request_processing_time, target_processing_time >= 0
response_processing_time = -1
I found some documentation saying this occurs when the server's keepAliveTimeout is shorter than the ALB's idle timeout.
However, I set the server's timeouts to 65 seconds as follows:
app.js
server.keepAliveTimeout = 65000;
server.headersTimeout = 66000;
and the ELB idle timeout is 60 seconds.
What would be the problem then?
This issue occurs because your connection is closed, and so AWS returns a 502 (bad gateway) error.
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('Hello World!');
});

server.keepAliveTimeout = 60 * 1000; // Set keep-alive timeout to 60 seconds

server.listen(8080, () => {
  console.log('Server listening on port 8080');
});
In this example, the keepAliveTimeout is set to 60 seconds (60,000 milliseconds), which should be long enough to match the idle timeout of most ALBs. You can adjust this value as needed based on the idle timeout setting of your ALB.
After making this change, you'll need to restart your Node.js application for the changes to take effect.
NOTE: Also check whether your target is healthy according to your configuration.

Why does a NodeJS http server close socket on timeout without response?

Given a NodeJS http server with a timeout of 10s:
const httpServer = require('http').createServer(app);
httpServer.timeout = 10 * 1000;
On timeout, Postman shows this without any response code:
Error: socket hang up
Warning: This request did not get sent completely and might not have all the required system headers
If the NodeJS server is behind an nginx reverse proxy, nginx returns a 502 response (upstream prematurely closed connection while reading response header from upstream). But here it is just NodeJS/express running on localhost. Still one would expect a proper http response.
According to this answer, this is expected behavior, the socket is simply destroyed.
In an architecture with an nginx reverse proxy, is it usual that the server just destroys the socket without sending a timeout response to the proxy?
You're setting the socket timeout when you're setting the http server timeout. The socket timeout prevents abuse from clients that might want to hang on to your connection to DOS you. It has other benefits like ensuring a certain level of service (though these are often more important when you're a client).
The reason it uses a socket timeout instead of sending a 408 status code (Request Timeout) is because the status code might have already been sent for a successful message.
If you want to implement a response timeout on your backend and handle it gracefully, you can timeout the response yourself. Note, you should likely respond with a 408 instead. 502 is for gateways like http proxies (nginx) to indicate that a downstream connection failed.
Here's a simple strawman implementation of handling that.
const httpServer = require('http').createServer((req, res) => {
  // Simulate a slow handler that would respond after 10 seconds...
  setTimeout(() => {
    if (res.writableEnded) return; // ...but by then the 408 below has already been sent
    res.statusCode = 200;
    res.statusMessage = "Ok";
    res.end("Done");
  }, 10000);
});

httpServer.on('request', (req, res) => {
  // Response timeout: reply with 408 if nothing has been sent within 1 second.
  setTimeout(() => {
    if (res.writableEnded) return;
    res.statusCode = 408;
    res.statusMessage = 'Request Timeout';
    res.end();
  }, 1000);
});

httpServer.listen(8080);

Is node.js socket.io-client supposed to handle set-cookie

Is the node.js socket.io-client supposed to automatically handle cookies? That is, for all Set-Cookie response headers, is it supposed to pass back the corresponding Cookie headers during the handshake?
The reason I'm asking is because I have a proxy (the cloud foundry gorouter) between my client and 3 server instances. The socket.io server is appropriately setting two cookies (JSESSIONID and VCAP_ID) on the response and I need the client to send them back appropriately so that affinity is kept by the gorouter. I am currently getting connect failures due to a "transport error" when multiple instances of the server are running, but the problem goes away when I have a single server instance running.
Thanks in advance,
Keith
If you want to access cookies in socket.io, check out the following:
http://socket.io/docs/server-api/#namespace#use(fn:function):namespace
var io = require('socket.io')();
io.on('connection', function (socket) {
  socket.to('others').emit('an event', { some: 'data' });
});
Additionally, check out this post on how to do authentication in Socket.IO: Socket.IO Authentication
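If you need the Node.js socket.io-client itself to send those cookies back, newer client versions accept an extraHeaders option when running in Node (treat the option, the URL and the cookie string below as assumptions to verify against your client version):

var io = require('socket.io-client');

// Placeholder: the real value would come from the Set-Cookie header of an earlier response.
var vcapID = 'instance-guid-placeholder';

var socket = io('https://your-app.example.com', {
  // Node-only option on recent socket.io-client releases; ignored in browsers.
  extraHeaders: {
    Cookie: 'JSESSIONID=1; __VCAP_ID__=' + vcapID
  }
});

socket.on('connect', function () {
  console.log('connected with session affinity cookies attached');
});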
Yes, I did get it to work, but the only node module I could get to work at the time was 'ws' as follows:
var WebSocket = require('ws');
var webSocketUrl = "wss://" + ...
var opts = { headers: { Cookie: 'JSESSIONID=1; __VCAP_ID__=' + vcapID } };
var socket = new WebSocket(webSocketUrl, opts);
-- Keith

Socket.io and multiple dynos on a Heroku Node.js app: WebSocket is closed before the connection is established

I'm building an app deployed to Heroku which uses WebSockets.
The WebSocket connection works properly when I use only 1 dyno, but when I scale to >1, I get the following errors:
POST
http://****.herokuapp.com/socket.io/?EIO=2&transport=polling&t=1412600135378-1&sid=zQzJJ8oPo5p3yiwIAAAC
400 (Bad Request) socket.io-1.0.4.js:2
WebSocket connection to
'ws://****.herokuapp.com/socket.io/?EIO=2&transport=websocket&sid=zQzJJ8oPo5p3yiwIAAAC'
failed: WebSocket is closed before the connection is established.
socket.io-1.0.4.js:2
I am using the Redis adapter to enable multiple web processes:
var io = socket.listen(server);
var redisAdapter = require('socket.io-redis');
var redis = require('redis');
var pub = redis.createClient(18049, '[URI]', {auth_pass:"[PASS]"});
var sub = redis.createClient(18049, '[URI]', {detect_buffers: true, auth_pass:"[PASS]"} );
io.adapter( redisAdapter({pubClient: pub, subClient: sub}) );
This is working on localhost (which I am using foreman to run, as Heroku does, and I am launching 2 web processes, same as on Heroku).
Before I implemented the Redis adapter I got a websockets handshake error, so the adapter has had some effect. It also works occasionally now, presumably when the sockets land on the same web dyno.
I have also tried to enable sticky sessions, but then it never works:
var sticky = require('sticky-session');
sticky(1, server).listen(port, function (err) {
  if (err) {
    console.error(err);
    return process.exit(1);
  }
  console.log('Worker listening on %s', port);
});
I'm the Node.js Platform Owner at Heroku.
WebSockets works on Heroku out-of-the-box across multiple dynos; socket.io (and other realtime libs) use fallbacks to stateless processes like xhr polling that break without session affinity.
To scale up socket.io apps, first follow all the instructions from socket.io:
http://socket.io/docs/using-multiple-nodes/
Then, enable session affinity on your app (this is a free feature):
https://devcenter.heroku.com/articles/session-affinity
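With socket.io 1.x, following the multiple-nodes instructions above essentially means pointing every dyno at the same Redis instance through the Redis adapter. Roughly (host and port are placeholders):

var http = require('http');
var server = http.createServer();
var io = require('socket.io')(server);
var redisAdapter = require('socket.io-redis');

// Every dyno publishes and subscribes through the same Redis instance,
// so broadcasts and rooms reach sockets connected to other dynos.
io.adapter(redisAdapter({ host: 'your-redis-host', port: 6379 }));

server.listen(process.env.PORT || 3000);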
I spent a while trying to make socket.io work in a multi-server architecture, first on Heroku and then on OpenShift, as many suggest.
The only way to make it work on both PaaS platforms is to disable xhr-polling and set transports: ['websocket'] on both client and server.
On OpenShift, you must explicitly set the server port to 8000 for ws (8443 for wss) when initializing the socket.io client, using the *.rhcloud.com server, as explained in this post: http://tamas.io/deploying-a-node-jssocket-io-app-to-openshift/.
The polling strategy doesn't work on Heroku because it does not support sticky sessions (https://github.com/Automattic/engine.io/issues/261), and on OpenShift it fails because of this issue: https://github.com/Automattic/engine.io/issues/279, which will hopefully be fixed soon.
So, the only solution I have found so far is to disable polling and use the websocket transport only.
To do that with socket.io > 1.0:
server-side:
var app = express();
var server = require('http').createServer(app);
var socketio = require('socket.io')(server, {
  path: '/socket.io-client'
});
socketio.set('transports', ['websocket']);
client-side:
var ioSocket = io('<your-openshift-app>.rhcloud.com:8000' || '<your-heroku-app>.herokuapp.com', {
  path: '/socket.io-client',
  transports: ['websocket']
});
Hope this will help.
It could be you need to be running RedisStore:
var session = require('express-session');
var RedisStore = require('connect-redis')(session);
app.use(session({
  store: new RedisStore(options),
  secret: 'keyboard cat'
}));
per earlier q here: Multiple dynos on Heroku + socket.io broadcasts
I know this isn't a normal answer, but I've tried to get WebSockets working on Heroku for more than a week. After many long conversations with customer support I finally tried out OpenShift. Heroku WebSockets are in beta, but OpenShift WebSockets are stable. I got my code working on OpenShift in under an hour.
http://www.openshift.com
I am not affiliated with OpenShift in any way. I'm just a satisfied (non-paying) customer.
I was having huge problems with this. There were a number of issues failing simultaneously, making it a huge nightmare. Make sure you do the following to scale socket.io on Heroku:
If you're using clusters, make sure you implement socketio-sticky-session or something similar.
The client's connect URL should not be https://example.com/socket.io/?EIO=3&transport=polling but rather https://example.com/ (notably, I'm using https because Heroku supports it).
Enable CORS in socket.io.
Specify only websocket connections (as shown in the client-side sketch below).
For you and others it could be any one of these.
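For the connect URL and websocket-only points above, a minimal client-side sketch (example.com is a placeholder):

var io = require('socket.io-client');

// Connect to the bare origin over https, not to .../socket.io/?EIO=3&transport=polling,
// and skip polling entirely by allowing only the websocket transport.
var socket = io('https://example.com', {
  transports: ['websocket']
});

socket.on('connect', function () {
  console.log('connected over websocket only');
});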
If you're having trouble setting up sticky-session clusters, here's my working code:
var http = require('http');
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;
var sticky = require('socketio-sticky-session');
var redis = require('socket.io-redis');
var io;

if (cluster.isMaster) {
  console.log('Inside Master');
  // create the worker processes
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // The worker code to be run is written inside
  // the sticky().
}

sticky(function () {
  // This code runs inside the workers.
  // The sticky-session balances connections between workers based on their IP,
  // so all requests from the same client go to the same worker.
  // If multiple browser windows are opened on the same client, all are
  // redirected to the same worker.
  io = require('socket.io')({ transports: ['websocket'], origins: '*:*' });
  var server = http.createServer(function (req, res) {
    res.end('socket.io');
  });
  io.listen(server);
  // The Redis adapter can also be used to share socket state between workers:
  // io.adapter(redis({ host: 'localhost', port: 6379 }));
  console.log('Worker: ' + cluster.worker.id);
  // When multiple workers are spawned, the client
  // cannot connect to the cloudlet.
  StartConnect(); // this function connects my MongoDB, then registers io.on('connection', ...) and socket.on('message', ...) handlers on the io variable above
  return server;
}).listen(process.env.PORT || 4567, function () {
  console.log('Socket.io server is up');
});
More information:
Personally, it would work flawlessly from a session not using websockets (I'm using socket.io for a Unity game; it worked flawlessly from the editor only!). When connecting through a browser, whether Chrome or Firefox, it would show these handshaking errors, along with 503 and 400 errors.

node-http-proxy websocket timeout with Socket.io

For some reason http-proxy causes the socket.io-based websocket connection to reconnect every 2 minutes. Before the reconnection, messages flow just fine between client and server. If I bypass the proxy, the websocket connection works without reconnections. The proxy configuration is very basic and follows an example from Nodejitsu.
var http = require('http'),
    httpProxy = require('http-proxy');

var options = {
  hostNameOnly: true,
  router: {
    'example.com/sockets/': '127.0.0.1:9001'
  }
};

var proxyServer = httpProxy.createServer(options);
proxyServer.listen(80);
I have also tried changing the timeout option in the configuration, but this has no effect on the reconnection problem.
timeout: 120000 // override the default 2 minute http socket timeout value in milliseconds
Software versions: Ubuntu 12.04 server, node.js 0.8.16, http-proxy 0.8.7, socket.io 0.8.7.
This works perfectly on dev Mac (10.8.3) and on Ubuntu desktop 12.04 (virtualbox) but not on server.
Set the timeout in the options you pass to createServer: options.timeout sets the incoming socket timeout, and options.proxyTimeout allows the outgoing socket to time out so that an error page can be shown for the initial request.
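On the newer http-proxy 1.x API, a minimal sketch of those two options (the target, port and timeout values are assumptions):

var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({
  target: 'http://127.0.0.1:9001',
  ws: true,                      // proxy websocket upgrades as well
  timeout: 10 * 60 * 1000,       // incoming (client) socket timeout
  proxyTimeout: 10 * 60 * 1000   // outgoing (upstream) socket timeout
});

proxy.on('error', function (err, req, res) {
  // Show an error page instead of silently dropping the connection.
  if (res && res.writeHead && !res.headersSent) {
    res.writeHead(502, { 'Content-Type': 'text/plain' });
  }
  if (res && res.end) {
    res.end('Upstream timed out');
  }
});

proxy.listen(80);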
