How to use Node HTTP server keep-alive - node.js

I've spent a while setting up my own server using the http library, and when I came to load test it with JMeter I noticed I hadn't set it up to utilize keep-alive.
I've spent hours trying to figure this one out - and perhaps I have an issue elsewhere - so in very simple terms, how should keep-alive be set up?
I've set the relevant headers and tried the following methods I've found online:
server.on('connection', (socket: Socket) => {
  socket.setTimeout(30 * 1000);
  socket.setKeepAlive(true);
});
and
handleRequest(request: http.IncomingMessage, response: http.ServerResponse): void {
  request.socket.setKeepAlive(true);
  request.socket.write('hello, world');
  request.socket.end();
}
These just cause JMeter to crash, as the headers make it think the connections are kept alive when they are not. Nothing I am doing seems to let me keep the connection open. Please advise :)

Seems I got myself into a bit of a fuss over nothing; it turns out there isn't anything wrong with my original implementation. However, I ran into an issue with ephemeral port limits when running my tests in ApacheBench. This blog post by Daniel Mendel explains the problem: here
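For anyone who lands on this later: Node's http server keeps HTTP/1.1 connections alive by default, so no manual socket handling is required. Here is a minimal sketch of a keep-alive-friendly handler (the port and the 30-second timeout are illustrative values, not requirements); the key point is to respond through the ServerResponse rather than writing to and ending the raw socket, so the connection can be reused for the next request.
const http = require('http');

const server = http.createServer((req, res) => {
  // Responding via the response object lets Node manage the connection
  // and keep it open for the next request on the same socket.
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello, world');
});

// Optional: how long an idle kept-alive connection stays open before
// the server closes it.
server.keepAliveTimeout = 30 * 1000;

server.listen(8080);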

Related

Setting long timeout for http request via nodejs angular4 or express

I currently have a request which is made from an Angular 4 app (which uses Electron [which uses Chromium]) to a bottleneck (Node.js/Express) server. The server takes about 10 minutes to process the request.
The default timeout which I'm getting is 120 seconds.
I tried setting the timeout on the server using
app.use(timeout("1000s"));
In the client side I have used
const options = {
  url,
  method: 'GET',
  timeout: 600 * 1000
};
let req = http.request(options, () => {});
req.end();
I have also tried giving the specific route a timeout.
Each time the request hits 120 seconds the socket dies and I get a "socket timeout".
I have read many posts with the same questions but I didn't get any concrete answers. Is it possible to make a request with a long/no timeout using the tools above? Do I need a new library which handles long timeouts?
Any help would be greatly appreciated.
So after browsing the internet I have discovered that there is no way to increase Chrome's timeout.
My solution to this problem was to accept the request and immediately return a default answer (something like "started"), then have the client ping the server to find out its status.
Another possible solution would be to put a route in the client (I'm using Electron and Node modules on the client side, so it is possible) and then let the server ping back to the client with the status of the query.
Writing this down so other people have some possible patches; I'll update if I find anything better.
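For illustration, here is a minimal sketch of that "return 'started', then poll" pattern in Express. The route names, the in-memory job store, and runSlowQuery are all made up for the example; the real work and storage would be whatever your app needs.
const express = require('express');
const app = express();

const jobs = {}; // hypothetical in-memory store: jobId -> { status, result }

app.post('/long-task', (req, res) => {
  const jobId = Date.now().toString();
  jobs[jobId] = { status: 'started' };

  // Kick off the slow work without holding the HTTP connection open.
  runSlowQuery(req.query)              // hypothetical ten-minute job
    .then(result => { jobs[jobId] = { status: 'done', result }; })
    .catch(err => { jobs[jobId] = { status: 'error', error: String(err) }; });

  // Respond right away so neither Chrome nor the server socket times out.
  res.json({ jobId, status: 'started' });
});

// The client polls this route until the status is 'done' or 'error'.
app.get('/long-task/:jobId/status', (req, res) => {
  res.json(jobs[req.params.jobId] || { status: 'unknown' });
});

app.listen(3000);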

Why is Socket.io very slow on Windows?

I recently noticed that Socket.io has become really, really slow on Windows. I noticed this when I opened two tabs and tried emitting events. It took more than 15 seconds to receive a response from the server. The server is coded in Node.js.
Environment:
Windows 10 Pro
Electron Socket.io-Tester
Socket.io - Version 2.0.4
I'm having this exact problem. It used to work fine on Windows (and still does on macOS); now emits are just getting lost in lag, as much as 5 seconds. This only happens when a second or later connection has been made to the server - as I'm developing a multiplayer game this is pretty important. The emit event works fine and you can see it in the frame; it's just taking forever for the server to respond.
After much experimentation, I can safely conclude this is a problem with Node.js LTS itself. THIS IS WORKING IN v6.13.1 but not in later versions (not sure where it starts, and I'm not going to bother incrementally upgrading Node).
Downgrade Node to safely address this issue. Fun.
Are there any proxies in your code? Those can sometimes muddle things up a bit.
If there are, I found a gist about a "socket is slow" issue that was due to proxies. The gist was a fix for that issue, so maybe it will help. This snippet is the proxy config:
var http_m = require('http'); // assuming http_m refers to Node's core http module

var server = http_m.createServer(function (req, res) {
  // Route plain HTTP requests by Host header.
  switch (req.headers.host) {
    case "ws.app.com":
      handle_ws(req, res);
      break;
    case "api.app.com":
      handle_api(req, res);
      break;
    default:
      handle_error(req, res);
      break;
  }
});
// WebSocket upgrades are handled separately from normal requests.
server.on("upgrade", handle_ws_upgrade);
server.listen(80);

Async profiling nodejs server to review the code?

We encountered a performance problem on our Node.js server, which handles 100k IPs every day.
Now we want to review the code and find the bottleneck.
@jfriend00 From what we can see now, the problem seems to be DB access and file access, but we don't know what logic causes this access.
We are still looking for good ways to do async profiling of a Node.js server.
Here's what we tried:
Nodetime
This works for us to some extent. It can give the execution time of code down to specific lines. However, we can't locate the problem because the server works asynchronously and no stack or call information can be determined.
Async-profiling
This works with async code and is said to be the first of its kind.
The problem is, we've integrated its JS code with our server-side code:
var AsyncProfile = require('async-profile');
AsyncProfile.profile(function () {
  ///// OUR SERVER-SIDE CODE RESIDES HERE
  setTimeout(function () {
    // doAsyncStuff
  });
});
We can only record the profile of a single server execution for one request. Can we use this code with things like forever? I have no idea.
dtrace
This is too general for us to locate the problem in Node.js code.
Do you have any idea on profiling nodejs server code? Any hints or suggestions are appreciated. Thanks.

Shutting down a Node.js http server in a unit test

Suppose I have some unit tests that test a web server. For reasons I don't want to discuss here (outside scope ;-)), every test needs a newly started server.
As long as I don't send a request to the server, everything is fine. But once I do, a call to the http server's close function does not work as expected, because the requests result in kept-alive connections, so the server waits 120 seconds before actually closing.
Of course this is not acceptable for running the tests.
At the moment, the only solutions I can see are either
setting the keep-alive timeout to 0, so a call to close will actually close the server,
or to start each server on a different port, although this becomes hard to handle when you have lots of tests.
Any other ideas of how to deal with this situation?
PS: I asked How do I shutdown a Node.js http(s) server immediately? a while ago and found a viable way to work around it, but it seems that workaround does not run reliably in every case, as I am getting strange results from time to time.
function createOneRequestServer() {
  var server = http.createServer(function (req, res) {
    res.write('write stuff');
    res.end();
    // Close the server as soon as this single request has been served.
    server.close();
  }).listen(8080);
}
You could also consider using child_process to fork a process and kill it after you have run your test against it.
var fork = require('child_process').fork; // fork comes from the child_process module

var child = fork('serverModuleYouWishToTest.js');

function callback(signalCode) {
  child.kill(signalCode);
}

runYourTest(callback);
This method is desirable because it does not require you to write special cases of your servers to service only one request, and keeps your test code and your production code 100% independent.
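Another option, sketched below (not from either answer above; the closeNow helper name is made up and the port is only for illustration): track the open sockets yourself and destroy them when closing, so kept-alive connections cannot hold the server open for the 120-second timeout.
const http = require('http');

function createClosableServer(handler) {
  const sockets = new Set();
  const server = http.createServer(handler);

  // Remember every connection so it can be torn down later.
  server.on('connection', socket => {
    sockets.add(socket);
    socket.on('close', () => sockets.delete(socket));
  });

  // Hypothetical helper: stop listening, then destroy lingering connections.
  server.closeNow = function (callback) {
    server.close(callback);
    for (const socket of sockets) {
      socket.destroy();
    }
  };

  return server;
}

// In a test: start a fresh server, exercise it, then call closeNow().
const server = createClosableServer((req, res) => res.end('ok')).listen(8080);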

Does MongoDB have reconnect issues or am I doing it wrong?

I'm using Node.js and MongoDB - and I'm having some connection issues.
Well, actually "wake" issues! It connects perfectly well - it's super fast and I'm generally happy with the results.
My problem: if I don't use the connection for a while (I say "a while" because the timeframe varies, 5+ minutes), it seems to stall. I don't get disconnection events fired - it just hangs.
Eventually I get a response like Error: failed to connect to [ * .mongolab.com: * ] ( * = masked values).
A quick restart of the app, and the connection's great again. Sometimes, if I don't restart the app, I can refresh and it reconnects happily.
This is why I think it is a "wake" issue.
Rough outline of code:
I've not included the full code - I don't think it's needed. It works (apart from the connection dropout).
Things to note: there is just the one "connect" - I never close it. I never reopen.
I'm using Mongoose and Socket.io.
/* constants */
var mongoConnect = 'myworkingconnectionstring-includingDBname';

/* includes */
/* settings */
/* Schema */
var db = mongoose.connect(mongoConnect);

/* Socketio */
io.configure(function () {
  io.set('authorization', function (handshakeData, callback) {
  });
});

io.sockets.on('connection', function (socket) {
});//sockets

io.sockets.on('disconnect', function (socket) {
  console.log('socket disconnection');
});

/* The Routing */
app.post('/login', function (req, res) {
});

app.get('/invited', function (req, res) {
});

app.get('/', function (req, res) {
});

app.get('/logout', function (req, res) {
});

app.get('/error', function (req, res) {
});

server.listen(port);
console.log('Listening on port ' + port);

db.connection.on('error', function (err) {
  console.log("DB connection Error: " + err);
});

db.connection.on('open', function () {
  console.log("DB connected");
});

db.connection.on('close', function (str) {
  console.log("DB disconnected: " + str);
});
I have tried various configs here, like opening and closing all the time - I believe, though, that the general consensus is to do as I am, with one connection open wrapping the lot. ??
I have tried a connection tester that keeps checking the status of the connection... even though this appears to say everything's OK, the issue still happens.
I have had this issue from day one. I have always hosted the MongoDB with MongoLab.
The problem appears to be worse on localhost, but I still have the issue on Azure and now Nodejitsu.
As it happens everywhere, it must be me, MongoDB, or MongoLab.
Incidentally, I have had a similar experience with the PHP driver too (to confirm, though, this question is about Node.js).
It would be great to get some help - even if someone just says "this is normal".
Thanks in advance,
Rob
UPDATE: Our support article for this topic (essentially a copy of this post) has moved to our connection troubleshooting doc.
There is a known issue that the Azure IaaS network enforces an idle timeout of roughly thirteen minutes (empirically arrived at). We are working with Azure to see if we can't make things more user-friendly, but in the meantime others have had success by configuring their driver options to work around the issue.
Max connection idle time
The most effective workaround we've found in working with Azure and our customers has been to set the max connection idle time below four minutes. The idea is to make the driver recycle idle connections before the firewall forces the issue. For example, one customer, who is using the C# driver, set MongoDefaults.MaxConnectionIdleTime to one minute and it cleared up their issues.
MongoDefaults.MaxConnectionIdleTime = TimeSpan.FromMinutes(1);
The application code itself didn't change, but now behind the scenes the driver aggressively recycles idle connections. The result can be seen in the server logs as well: lots of connection churn during idle periods in the app.
There are more details on this approach in the related mongo-user thread, SocketException using C# driver on azure.
Keepalive
You can also work around the issue by making your connections less idle with some kind of keepalive. This is a little tricky to implement unless your driver supports it out of the box, usually by taking advantage of TCP Keepalive. If you need to roll your own, make sure to grab each idle connection from the pool every couple minutes and issue some simple and cheap command, probably a ping.
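For example, here is a rough sketch of an application-level keepalive with the Node driver; db is assumed to be an already-connected handle, the two-minute interval is arbitrary, and a single periodic command will not necessarily touch every pooled connection - it just illustrates the idea.
// Issue a cheap command periodically so the connection does not sit idle
// long enough for the firewall to drop it. Pick an interval below the
// idle timeout you are fighting.
setInterval(function () {
  db.command({ ping: 1 }, function (err) {
    if (err) {
      console.log('keepalive ping failed: ' + err);
    }
  });
}, 2 * 60 * 1000);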
Handling disconnects
Disconnects can happen from time to time even without an aggressive firewall setup. Before you get into production you want to be sure to handle them correctly.
First, be sure to enable auto reconnect. How to do so varies from driver to driver, but turning on auto reconnect tells the driver to attempt to reconnect when it detects that an operation failed because the connection was bad.
But this doesn't completely solve the problem. You still have the issue of what to do with the failed operation that triggered the reconnect. Auto reconnect doesn't automatically retry failed operations. That would be dangerous, especially for writes. So usually an exception is thrown and the app is asked to handle it. Often retrying reads is a no-brainer. But retrying writes should be carefully considered.
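As a rough illustration of that trade-off, a retried read with the Node driver might look something like the sketch below; findWithRetry is a made-up helper, and a real application would want to check that the error actually came from a dropped connection before retrying.
// Retry a read once after a failure that likely came from a dropped connection.
// With auto reconnect enabled the driver reconnects underneath, so the second
// attempt can succeed. Do NOT blindly apply the same pattern to writes.
function findWithRetry(collection, query, callback) {
  collection.find(query).toArray(function (err, docs) {
    if (!err) return callback(null, docs);
    collection.find(query).toArray(callback);
  });
}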
The mongo shell session below demonstrates the issue. The mongo shell by default has auto reconnect enabled. I insert a document in a collection named stuff then find all the documents in that collection. I then set a timer for thirty minutes and tried the same find again. It failed, but the shell automatically reconnected and when I immediately retried my find it worked as expected.
% mongo ds012345.mongolab.com:12345/mydatabase -u *** -p ***
MongoDB shell version: 2.2.2
connecting to: ds012345.mongolab.com:12345/mydatabase
> db.stuff.insert({})
> db.stuff.find()
{ "_id" : ObjectId("50f9b77c27b2e67041fd2245") }
> db.stuff.find()
Fri Jan 18 13:29:28 Socket recv() errno:60 Operation timed out 192.168.1.111:12345
Fri Jan 18 13:29:28 SocketException: remote: 192.168.1.111:12345 error: 9001 socket exception [1] server [192.168.1.111:12345]
Fri Jan 18 13:29:28 DBClientCursor::init call() failed
Fri Jan 18 13:29:28 query failed : mydatabase.stuff {} to: ds012345.mongolab.com:12345
Error: error doing query: failed
Fri Jan 18 13:29:28 trying reconnect to ds012345.mongolab.com:12345
Fri Jan 18 13:29:28 reconnect ds012345.mongolab.com:12345 ok
> db.stuff.find()
{ "_id" : ObjectId("50f9b77c27b2e67041fd2245") }
We're here to help
Of course, if you have any questions please feel free to contact us at support@mongolab.com. We're here to help.
Thanks for all the help guys - I have managed to solve this issue on both localhost and deployed to a live server.
Here is my now working connect code:
var MONGO = {
  username: "username",
  password: "pa55W0rd!",
  server: '******.mongolab.com',
  port: '*****',
  db: 'dbname',
  connectionString: function () {
    return 'mongodb://' + this.username + ':' + this.password + '@' + this.server + ':' + this.port + '/' + this.db;
  },
  options: {
    server: {
      auto_reconnect: true,
      socketOptions: {
        connectTimeoutMS: 3600000,
        keepAlive: 3600000,
        socketTimeoutMS: 3600000
      }
    }
  }
};
var db = mongoose.createConnection(MONGO.connectionString(), MONGO.options);

db.on('error', function (err) {
  console.log("DB connection Error: " + err);
});

db.on('open', function () {
  console.log("DB connected");
});

db.on('close', function (str) {
  console.log("DB disconnected: " + str);
});
I think the biggest change was to use "createConnection" over "connect" - I had used this before, but maybe the options help now. This article helped a lot http://journal.michaelahlers.org/2012/12/building-with-nodejs-persistence.html
If I'm honest, I'm not entirely sure why I have added those options - as mentioned by @jareed, I also found some people having success with "MaxConnectionIdleTime" - but as far as I can see the JavaScript driver doesn't have this option, so this was my attempt at replicating the behaviour.
So far so good - hope this helps someone.
UPDATE, 18 April 2013: note, this is a second app with a different setup.
Now, I thought I had this solved, but the problem reared its ugly head again on another app recently - with the same connection code. Confused!!!
However, the setup was slightly different…
This new app was running on a Windows box using IISNode. I didn't see this as significant initially.
I read there were possibly some issues with Mongo on Azure (@jareed), so I moved the DB to AWS - still the problem persisted.
So I started playing about with that options object again, reading up quite a lot on it, and came to this conclusion:
options: {
  server: {
    auto_reconnect: true,
    poolSize: 10,
    socketOptions: {
      keepAlive: 1
    }
  },
  db: {
    numberOfRetries: 10,
    retryMiliSeconds: 1000
  }
}
That was a bit more educated than my original options object, I admit.
However - it's still no good.
Now, for some reason I had to get off that Windows box (something to do with a module not compiling on it) - it was easier to move than to spend another week trying to get it to work.
So I moved my app to Nodejitsu. Lo and behold, my connection stayed alive! Woo!
So… what does this mean? I have no idea! What I do know is those options seem to work on Nodejitsu… for me.
I believe IISNode uses some kind of "forever" script for keeping the app alive. Now, to be fair, the app doesn't crash for this to kick in, but I think there must be some kind of "app cycle" that is refreshed constantly - this is how it can do continuous deployment (FTP the code up, no need to restart the app) - maybe this is a factor, but I'm just guessing now.
Of course, all this means is that the problem isn't really solved. It's still not solved; it's just solved for me, in my setup.
A couple of recommendations for people still having this issue:
Make sure you are using the latest MongoDB client for Node.js. I noticed significant improvements in this area when migrating from v1.2.x to v1.3.10 (the latest as of today).
You can pass an options object to MongoClient.connect. The following options worked for me when connecting from Azure to MongoLab:
var MongoClient = require('mongodb').MongoClient;

var options = {
  db: {},
  server: {
    auto_reconnect: true,
    socketOptions: { keepAlive: 1 }
  },
  replSet: {},
  mongos: {}
};

MongoClient.connect(dbUrl, options, function (err, dbConn) {
  // your code
});
See this other answer in which I describe how to handle the 'close' event which seems to be more reliable. https://stackoverflow.com/a/20690008/446681
Enable the auto_reconnect Server option like this:
var db = mongoose.connect(mongoConnect, {server: {auto_reconnect: true}});
The connection you're opening here is actually a pool of 5 connections (by default), so you're right to just connect and leave it open. My guess is that you intermittently lose connectivity with MongoLab and your connections die when that occurs. Hopefully, enabling auto_reconnect resolves that.
Increasing timeouts may help.
"socketTimeoutMS": how long a send or receive on a socket can take before timing out.
"wTimeoutMS": how many milliseconds the server waits for the write concern to be satisfied.
"connectTimeoutMS": how long a connection can take to be opened before timing out, in milliseconds.
$m = new MongoClient("mongodb://127.0.0.1:27017",
    array("connect" => TRUE, "connectTimeoutMS" => 10, "socketTimeoutMS" => 10,
          "wTimeoutMS" => 10));
$db = $m->mydb;
$coll = $db->testData;
$coll->insert($paramArr);
I had a similar problem being disconnected from MongoDB periodically. Doing two things fixed it:
Make sure your computer never sleeps (that'll kill your network connection).
Bypass your router/firewall (or configure it properly, which I haven't figured out how to do yet).
