node-http-proxy websocket timeout with Socket.io - node.js

For some reason, http-proxy causes the socket.io-based websocket connection to reconnect every 2 minutes. Before the reconnection, messages flow just fine between client and server. If I bypass the proxy, the websocket connection works without reconnections. The proxy configuration is very basic and follows an example from nodejitsu.
var http = require('http'),
    httpProxy = require('http-proxy');

var options = {
  hostNameOnly: true,
  router: {
    'example.com/sockets/': '127.0.0.1:9001'
  }
};

var proxyServer = httpProxy.createServer(options);
proxyServer.listen(80);
I have also tried changing the timeout option in the configuration, but this has no effect on the reconnection problem:
timeout: 120000 // override the default 2 minute http socket timeout value in milliseconds
Software versions: Ubuntu 12.04 server, node.js 0.8.16, http-proxy 0.8.7, socket.io 0.8.7.
This works perfectly on a dev Mac (10.8.3) and on Ubuntu desktop 12.04 (VirtualBox), but not on the server.

Set the timeout in the options you pass to createServer:
options.timeout for the incoming socket timeout, and
options.proxyTimeout to let the outgoing socket time out, so that an error page can be shown for the initial request.
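For illustration, a minimal sketch assuming the newer http-proxy 1.x API (where these options belong to createProxyServer; the 0.8.x router config above differs, and the target and port values here are placeholders):

var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({
  target: 'http://127.0.0.1:9001', // placeholder backend
  ws: true,                        // also proxy websocket upgrades
  timeout: 120000,                 // incoming socket timeout (ms)
  proxyTimeout: 120000             // outgoing socket timeout (ms)
});

proxy.listen(8000);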

Related

Creating an HTTP proxy server with HTTPS support that uses another proxy server to serve the response, in Node.js

I need help creating a proxy server using Node.js, to be used with Firefox.
The end goal is to create a proxy server that will tunnel the traffic through another proxy server (HTTP/SOCKS) and return the response back to Firefox, like this.
I want to keep the original response received from the upstream proxy, and I also want to support HTTPS websites.
Here is the code I came up with:
var http = require('http');
var request = require("request");

http.createServer(function (req, res) {
  const resu = request(req.url, {
    // I want to fetch the proxy from a database and use it here
    proxy: "<Proxy URL>"
  });
  req.pipe(resu);
  resu.pipe(res);
}).listen(8080);
But it has two problems:
It does not support HTTPS requests.
It does not support SOCKS 4/5 proxies.
EDIT: I tried to create a proxy server using this module: https://github.com/http-party/node-http-proxy
but the problem is that it does not let you specify an external proxy server to send connections through.
I found a really simple solution to the problem: just forward all packets as-is to the upstream proxy server, while still handling the server logic with ease.
var net = require('net');

const server = net.createServer();

server.on('connection', function (socket) {
  var laddr = socket.remoteAddress;
  console.log(laddr);
  // Open a raw TCP connection to the upstream proxy and splice the two streams.
  var to = net.createConnection({
    host: "<Proxy IP>",
    port: <Proxy Port>
  });
  socket.pipe(to);
  to.pipe(socket);
});

server.listen(3000, "0.0.0.0");
You have to use some middleware, like the http-proxy module.
Documentation here: https://www.npmjs.com/package/http-proxy
Install it with npm install http-proxy
This might help too: How to create a simple http proxy in node.js?
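For reference, a minimal sketch of what that looks like with http-proxy 1.x, forwarding every request to a single placeholder upstream:

var http = require('http');
var httpProxy = require('http-proxy');

// One proxy instance, reused for every incoming request.
var proxy = httpProxy.createProxyServer({});

http.createServer(function (req, res) {
  // 'http://127.0.0.1:9000' is a placeholder target; substitute your own.
  proxy.web(req, res, { target: 'http://127.0.0.1:9000' });
}).listen(8080);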

Errors going to 2 dynos on Heroku with socket.io / socket.io-redis / rediscloud / node.js

I have a node.js / socket.io app running on Heroku. I am using socket.io-redis with RedisCloud to allow users who connect to different dynos to communicate, as described here.
From my app.js:
var express = require('express'),
    app = express(),
    http = require('http'),
    server = http.createServer(app),
    io = require('socket.io').listen(server),
    redis = require('redis'),
    ioredis = require('socket.io-redis'),
    url = require('url'),
    redisURL = url.parse(process.env.REDISCLOUD_URL),
And later in app.js ...
var sub1 = redis.createClient(redisURL.port, redisURL.hostname, {
  no_ready_check: true,
  return_buffers: true
});
sub1.auth(redisURL.auth.split(":")[1]);

var pub1 = redis.createClient(redisURL.port, redisURL.hostname, {
  no_ready_check: true,
  return_buffers: true
});
pub1.auth(redisURL.auth.split(":")[1]);

var redisOptions = {
  pubClient: pub1,
  subClient: sub1,
  host: redisURL.hostname,
  port: redisURL.port
};

if (io.adapter) {
  io.adapter(ioredis(redisOptions));
  console.log("mylog: io.adapter found");
}
It is kind of working -- communication is succeeding between dynos.
Three issues happen with 2 dynos but not with 1 dyno:
1) A login prompt comes up and works reliably with 1 dyno, but is hit-and-miss with 2 dynos: it may not come up, and may not work when it does. It is (or should be) triggered by the io.sockets.on('connection') event.
2) I'm seeing a lot of disconnects in the server log.
3) Also lots of errors in the client console on Chrome, for example:
socket.io.js:5039 WebSocket connection to 'ws://example.mydomain.com/socket.io/?EIO=3&transport=websocket&sid=F8babuJrLI6AYdXZAAAI' failed: Error during WebSocket handshake: Unexpected response code: 503
socket.io.js:2739 POST http://example.mydomain.com/socket.io/?EIO=3&transport=polling&t=1419624845433-63&sid=dkFE9mUbvKfl_fiPAAAJ net::ERR_INCOMPLETE_CHUNKED_ENCODING
socket.io.js:2739 GET http://example.mydomain.com/socket.io/?EIO=3&transport=polling&t=1419624842679-54&sid=Og2ZhJtreOG0wnt8AAAQ 400 (Bad Request)
socket.io.js:3318 WebSocket connection to 'ws://example.mydomain.com/socket.io/?EIO=3&transport=websocket&sid=ITYEPePvxQgs0tcDAAAM' failed: WebSocket is closed before the connection is established.
Any thoughts or suggestions would be welcome.
Yes, like generalhenry said, the issue is that Socket.io requires sticky sessions (meaning that requests from a given user always go to the same dyno), and Heroku doesn't support that.
(It works with 1 dyno because when there's only 1 then all requests go to it.)
https://github.com/Automattic/engine.io/issues/261 has a lot more good info; apparently web sockets don't really require sticky sessions, but long-polling does. It also mentions a couple of potential workarounds:
Roll back to socket.io version 0.9.17, which tries websockets first.
Only use SSL connections, which makes websockets more reliable (because ISPs, corporate proxies, and whatnot can't tinker with the connection as easily).
You might get the best results from combining both of those.
You could also spin up your own load balancer that adds sticky session support, but by that point, you're fighting against Heroku and might be better off on a different host.
RE: your other question about the Node.js cluster module: it wouldn't really help here. It's for using up all of the available CPU cores on a single server/dyno.

Socket.io and multiple dynos on a Heroku Node.js app: WebSocket is closed before the connection is established

I'm building an app deployed to Heroku which uses WebSockets.
The websocket connection works properly when I use only 1 dyno, but when I scale to more than 1, I get the following errors:
POST
http://****.herokuapp.com/socket.io/?EIO=2&transport=polling&t=1412600135378-1&sid=zQzJJ8oPo5p3yiwIAAAC
400 (Bad Request) socket.io-1.0.4.js:2
WebSocket connection to
'ws://****.herokuapp.com/socket.io/?EIO=2&transport=websocket&sid=zQzJJ8oPo5p3yiwIAAAC'
failed: WebSocket is closed before the connection is established.
socket.io-1.0.4.js:2
I am using the Redis adaptor to enable multiple web processes
var io = socket.listen(server);
var redisAdapter = require('socket.io-redis');
var redis = require('redis');
var pub = redis.createClient(18049, '[URI]', {auth_pass:"[PASS]"});
var sub = redis.createClient(18049, '[URI]', {detect_buffers: true, auth_pass:"[PASS]"} );
io.adapter(redisAdapter({pubClient: pub, subClient: sub}));
This works on localhost (which I run with foreman, as Heroku does, launching 2 web processes, same as on Heroku).
Before I implemented the redis adaptor I got a websocket handshake error, so the adaptor has had some effect. It also works occasionally now, I assume when the sockets happen to land on the same web dyno.
I have also tried to enable sticky sessions, but then it never works.
var sticky = require('sticky-session');

sticky(1, server).listen(port, function (err) {
  if (err) {
    console.error(err);
    return process.exit(1);
  }
  console.log('Worker listening on %s', port);
});
I'm the Node.js Platform Owner at Heroku.
WebSockets works on Heroku out-of-the-box across multiple dynos; socket.io (and other realtime libs) use fallbacks to stateless processes like xhr polling that break without session affinity.
To scale up socket.io apps, first follow all the instructions from socket.io:
http://socket.io/docs/using-multiple-nodes/
Then, enable session affinity on your app (this is a free feature):
https://devcenter.heroku.com/articles/session-affinity
I spent a while trying to make socket.io work in a multi-server architecture, first on Heroku and then on Openshift, as many suggest.
The only way to make it work on both PaaS offerings is disabling xhr-polling and setting transports: ['websocket'] on both client and server.
On Openshift, you must explicitly set the port of the server to 8000 for ws (or 8443 for wss) on socket.io client initialization, using the *.rhcloud.com server, as explained in this post: http://tamas.io/deploying-a-node-jssocket-io-app-to-openshift/.
Polling doesn't work on Heroku because it does not support sticky sessions (https://github.com/Automattic/engine.io/issues/261), and on Openshift it fails because of this issue: https://github.com/Automattic/engine.io/issues/279, which will hopefully be fixed soon.
So the only solution I have found so far is disabling polling and using the websocket transport only.
In order to do that, with socket.io > 1.0
server-side:
var app = express();
var server = require('http').createServer(app);
var socketio = require('socket.io')(server, {
  path: '/socket.io-client'
});
socketio.set('transports', ['websocket']);
client-side:
var ioSocket = io('<your-openshift-app>.rhcloud.com:8000' || '<your-heroku-app>.herokuapp.com', {
  path: '/socket.io-client',
  transports: ['websocket']
});
Hope this will help.
It could be that you need to be running RedisStore:
var session = require('express-session');
var RedisStore = require('connect-redis')(session);
app.use(session({
  store: new RedisStore(options),
  secret: 'keyboard cat'
}));
per an earlier question here: Multiple dynos on Heroku + socket.io broadcasts
I know this isn't a normal answer, but I've tried to get WebSockets working on Heroku for more than a week. After many long conversations with customer support I finally tried out OpenShift. Heroku WebSockets are in beta, but OpenShift WebSockets are stable. I got my code working on OpenShift in under an hour.
http://www.openshift.com
I am not affiliated with OpenShift in any way. I'm just a satisfied (non-paying) customer.
I was having huge problems with this. A number of issues were failing simultaneously, making it a huge nightmare. Make sure you do the following to scale socket.io on Heroku:
if you're using clusters, make sure you implement socketio-sticky-session or something similar
the client's connect URL should not be https://example.com/socket.io/?EIO=3&transport=polling but rather https://example.com/ (notably, I'm using https because Heroku supports it)
enable cors in socket.io
specify only websocket connections
For you and others it could be any one of these.
If you're having trouble setting up sticky-session clusters, here's my working code:
var http = require('http');
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;
var sticky = require('socketio-sticky-session');
var redis = require('socket.io-redis');
var io;

if (cluster.isMaster) {
  console.log('Inside Master');
  // create the worker processes
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // The worker code to be run is written inside
  // the sticky().
}

sticky(function () {
  // This code runs inside the workers.
  // sticky-session balances connections between workers based on the client IP,
  // so all requests from the same client go to the same worker.
  // If multiple browser windows are opened on the same client, all are
  // redirected to the same worker.
  io = require('socket.io')({ transports: ['websocket'], origins: '*:*' });
  var server = http.createServer(function (req, res) {
    res.end('socket.io');
  });
  io.listen(server);
  // The Redis adapter can also be used to store the socket state
  //io.adapter(redis({host:'localhost', port:6379}));
  console.log('Worker: ' + cluster.worker.id);
  // when multiple workers are spawned, the client
  // cannot connect to the cloudlet.
  StartConnect(); // connects my mongodb, then registers io.on('connection', ...)
                  // and socket.on('message', ...) handlers on the io variable above
  return server;
}).listen(process.env.PORT || 4567, function () {
  console.log('Socket.io server is up');
});
More information:
For me it would work flawlessly from a session not using websockets (I'm using socket.io for a Unity game; it worked flawlessly from the editor only). When connecting through a browser, whether Chrome or Firefox, it would show these handshaking errors, along with 503 and 400 errors.

HTTPS proxy with a self-signed certificate in Node.JS

I'm trying to create an HTTPS proxy server in Node.js v0.10.24 using a self-signed certificate. Here's the code I'm using:
var fs = require('fs');
var https = require('https');

var server = https.createServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
});

server.on('request', function (req, res) {
  res.end('hello');
});

server.listen(8080);
This server boots up correctly and is accessible via https://localhost:8080. However, when I set it as an HTTPS proxy (on Mac OS X), the server emits connection events but never emits either request or error, causing the connection to hang indefinitely and eventually time out.
I encountered the same issue on my MacBook. The issue appears to be that the proxy server option in OS X uses the HTTP CONNECT method to tunnel HTTPS requests.
In short, this means that you need to make your server an http.Server instance and handle the connect event, which involves forwarding raw TCP socket traffic.
I know this reply is a bit late, but I wrote my own HTTP/S proxy server that you can look at for reference: https://github.com/robu3/purokishi. The specific section covering the connect method is here.
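As a rough illustration of that approach (a sketch under the assumptions above, not production code): run a plain http.Server and splice raw TCP when a CONNECT request arrives.

var http = require('http');
var net = require('net');

var proxy = http.createServer(function (req, res) {
  // Plain HTTP requests would be forwarded here (e.g. via http.request).
  res.end('hello');
});

// Browsers issue "CONNECT host:port" for HTTPS, expecting a raw TCP tunnel.
proxy.on('connect', function (req, clientSocket, head) {
  var hostPort = req.url.split(':'); // req.url is "host:port" for CONNECT
  var serverSocket = net.connect(Number(hostPort[1]) || 443, hostPort[0], function () {
    clientSocket.write('HTTP/1.1 200 Connection Established\r\n\r\n');
    serverSocket.write(head); // bytes the client sent after the CONNECT line
    serverSocket.pipe(clientSocket);
    clientSocket.pipe(serverSocket);
  });
  serverSocket.on('error', function () {
    clientSocket.end();
  });
});

proxy.listen(8080);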

How to set the HTTP Keep-Alive timeout in a nodejs server

I'm doing some load testing against an Express server, and I noticed that the response sent by the server includes a "Connection: Keep-Alive" header. As far as I understand it, the connection will remain open until the server or the client sends a "Connection: Close" header.
In some implementations, the "Connection: Keep-Alive" header comes with a "Keep-Alive" header setting the connection timeout and the maximum number of consecutive requests sent over this connection.
For example: "Keep-Alive: timeout=15, max=100"
Is there a way (and is it relevant) to set these parameters on an Express server?
If not, do you know how ExpressJS handles this ?
Edit:
After some investigations, I found out that the default timeout is set in the node standard http library:
socket.setTimeout(2 * 60 * 1000); // 2 minute timeout
In order to change this:
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end("Hello World");
}).on('connection', function (socket) {
  socket.setTimeout(10000);
}).listen(3000);
Anyway, it still looks a little weird to me that the server doesn't send the client any hint about its timeout.
Edit2:
Thanks to josh3736 for his comment.
setSocketKeepAlive is not related to HTTP keep-alive. It is a TCP-level option that allows you to detect that the other end of the connection has disappeared.
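To make that distinction concrete, here is a small sketch (assuming a plain http.Server instance named server, and a Node version new enough to have server.keepAliveTimeout):

// TCP keep-alive: the OS probes an idle peer to detect a dead connection.
server.on('connection', function (socket) {
  socket.setKeepAlive(true, 60000); // start probing after 60 s of idle
});

// HTTP keep-alive: how long an idle kept-alive connection is held open
// before the server closes it (available since Node 8).
server.keepAliveTimeout = 30000;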
For Express 3:
var express = require('express');
var app = express();
var server = app.listen(5001);
server.on('connection', function (socket) {
  console.log("A new connection was made by a client.");
  socket.setTimeout(30 * 1000); // 30 second timeout; change this as you see fit
});
To set keepAliveTimeout on the Express server, do:
var express = require('express');
var app = express();
var server = app.listen(5001);
server.keepAliveTimeout = 30000;
For Node.js 10.15.2 and newer with Express, setting only server.keepAliveTimeout was not enough. We also need to configure server.headersTimeout to be longer than server.keepAliveTimeout:
server.keepAliveTimeout = 30000;
// Ensure all inactive connections are terminated by the ALB, by setting this a few seconds higher than the ALB idle timeout
server.headersTimeout = 31000;
// Ensure the headersTimeout is set higher than the keepAliveTimeout due to this nodejs regression bug: https://github.com/nodejs/node/issues/27363
Update
Since the issue "Regression issue with keep alive connections" has been closed, we can just set keepAliveTimeout on the latest Node.js versions.
One more thing: if your Node.js server is deployed behind an AWS ELB and occasionally encounters 502 errors:
Clients -> AWS ELB -> Node Server
AWS ELB has a 60-second connection idle timeout by default, and per the docs:
We also recommend that you configure the idle timeout of your application to be larger than the idle timeout configured for the load balancer.
Configuring keepAliveTimeout to be greater than 60 seconds is one option to eliminate this issue.
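For instance, against the default 60-second ELB idle timeout, a sketch of the suggested values (the exact numbers are illustrative):

// Keep the app's idle timeouts above the ELB's 60 s idle timeout.
server.keepAliveTimeout = 65000; // 65 s > 60 s ELB idle timeout
server.headersTimeout = 66000;   // must stay above keepAliveTimeout (nodejs/node#27363)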
