Shutting down a Node.js http server in a unit test - node.js

Suppose I have some unit tests that test a web server. For reasons I don't want to discuss here (outside the scope ;-)), every test needs a freshly started server.
As long as I don't send a request to the server, everything is fine. But once I do, a call to the http server's close function does not work as expected, because every request leaves behind a kept-alive connection, so the server waits 120 seconds before actually closing.
Of course this is not acceptable for running the tests.
At the moment, the only solutions I can see are either
setting the keep-alive timeout to 0, so that a call to close will actually close the server,
or starting each server on a different port, although this becomes hard to handle when you have lots of tests.
Any other ideas of how to deal with this situation?
PS: I asked How do I shutdown a Node.js http(s) server immediately? a while ago and found a viable workaround, but it seems that workaround does not behave reliably in every case, as I am getting strange results from time to time.

var http = require('http');

function createOneRequestServer() {
    var server = http.createServer(function (req, res) {
        res.write('write stuff');
        res.end();
        // Close the server as soon as this single request has been answered.
        server.close();
    }).listen(8080);
}
You could also consider using child_process to fork a process per test and kill it after you have run your test against it.
var fork = require('child_process').fork;

var child = fork('serverModuleYouWishToTest.js');

function callback(signalCode) {
    // Kill the forked server once the test has finished.
    child.kill(signalCode);
}

runYourTest(callback);
This method is desirable because it does not require you to write special-case servers that only service one request, and it keeps your test code and your production code 100% independent.
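If forking a child process per test feels heavyweight, another option is to keep a list of open sockets so that kept-alive connections can be destroyed when the test tears the server down. A minimal sketch (createTestServer and its destroy helper are made-up names for illustration):

var http = require('http');

function createTestServer(port, callback) {
    var sockets = [];
    var server = http.createServer(function (req, res) {
        res.end('write stuff');
    });
    // Remember every connection so it can be torn down on shutdown.
    server.on('connection', function (socket) {
        sockets.push(socket);
        socket.on('close', function () {
            sockets.splice(sockets.indexOf(socket), 1);
        });
    });
    // Stop listening and kill any kept-alive connections immediately.
    server.destroy = function (done) {
        server.close(done);
        sockets.forEach(function (socket) { socket.destroy(); });
    };
    server.listen(port, callback);
    return server;
}

Each test can then call createTestServer(8080, runTest) and server.destroy(done) in its teardown, without waiting for the keep-alive timeout.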

Related

How To use Node Http Server Keep Alive

I've spent a while setting up my own server using the http library, and when I came to load test it with JMeter I noticed I hadn't set it up to utilize keep-alive.
I've spent hours trying to figure this one out - and perhaps I have an issue elsewhere - so in very simple terms, how should keep-alive be set up?
I've set the relevant headers and tried the following methods I've found online:
server.on('connection', (socket: Socket) => {
    socket.setTimeout(30 * 1000);
    socket.setKeepAlive(true);
});
and
handleRequest(request: http.IncomingMessage, response: http.ServerResponse): void {
    request.socket.setKeepAlive(true);
    request.socket.write('hello, world');
    request.socket.end();
}
These just cause JMeter to crash, as the headers make it think the connections are kept alive when they are not. Nothing I am doing seems to let me keep the connection open. Please advise :)
Seems I got myself into a bit of a fuss over nothing; it turns out there isn't anything wrong with my original implementation. However, I ran into an issue with ephemeral port limits when running my tests with ApacheBench. This blog post by Daniel Mendel explains the problem: here
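For anyone who lands here with the same confusion: with node's http server, keep-alive generally works out of the box for HTTP/1.1 clients, as long as the reply goes through the response object rather than the raw socket. A minimal sketch:

var http = require('http');

var server = http.createServer(function (req, res) {
    // Responding via res (not req.socket) lets node handle the
    // Connection header and connection reuse for HTTP/1.1 clients.
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('hello, world');
});

// Optional: drop idle connections after 30 seconds.
server.on('connection', function (socket) {
    socket.setTimeout(30 * 1000);
});

server.listen(8080);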

restart node.js forever process if response time too big

I use forever to manage a node.js site.
Sometimes the site hangs and the response time goes above 30 seconds, so in effect the site is down. The quick cure is to restart it with forever:
$ forever restart 3
where 3 is the script's number in the forever list.
Is it possible to do this automatically? Is there an option in forever that makes it restart if the response time exceeds, say, 2 seconds?
Or should I run an external script that checks the response time and decides to restart the hanging forever script?
Or do I need to write this logic inside my node.js site itself?
I am assuming you want to restart the server if most of the replies are taking longer than x seconds. There are many tools that help you restart your instances based on their health; Monit is one of them. In this guide, Monit restarts the instance if a reply doesn't come back within 10 seconds.
If you want to kill the instance when any single request is taking too long, note down the time when the request comes in and the time when the response leaves. If the difference is too long, throw an exception that you know will not get caught, and the server will restart by itself. If you use express, check the code for its logger in development mode, as it tracks the response time.
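A minimal sketch of that idea as Express middleware (assuming an Express app; the helper name and the 2000 ms threshold are only examples):

function restartIfSlow(threshold) {
    return function (req, res, next) {
        var start = Date.now();
        // 'finish' fires once the response has been fully handed off.
        res.on('finish', function () {
            if (Date.now() - start > threshold) {
                // Exit so forever (or any supervisor) starts a fresh instance.
                process.exit(1);
            }
        });
        next();
    };
}

app.use(restartIfSlow(2000));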
Aside from leorex's solution, I have used something like this before to send a 500 on timed-out requests:
// This goes inside a middleware or request handler, before the real work starts.
var writeHead = res.writeHead;
var timeout = setTimeout(function () {
    res.statusCode = 500;
    res.end('Response timed out');
    // Ignore any writes after this point so they don't throw.
    res.writeHead = res.write = res.end = function () {};
}, 40000);
res.writeHead = function () {
    // writeHead was called in time, so cancel the timeout.
    clearTimeout(timeout);
    writeHead.apply(this, arguments);
};
You can use the addTimeout module to take care of the timeout-clearing part.
Once you have implemented this, you can handle the timeout however you like; for example, just call process.exit(1) so that forever immediately replaces the process.
You can make this smarter. In my application, when an uncaught error happens, I signal the supervisor process so that it spins up another worker, and then go down gracefully (close the http server and wait for all pending requests to finish). You can do the same in your application, but make sure everything has a timeout callback as a failover/backup plan.
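A rough sketch of that shutdown pattern (assuming server is your http server instance and a supervisor such as forever or cluster replaces the exited process):

process.on('uncaughtException', function (err) {
    console.error('Uncaught exception, shutting down:', err.stack);
    // Stop accepting new connections; pending requests may still finish.
    server.close(function () {
        process.exit(1);
    });
    // Failover plan: if pending requests never finish, force the exit.
    setTimeout(function () {
        process.exit(1);
    }, 10000);
});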

node.js - after get request, script does not return to console

Here is a simple script
var http = require("http");

http.get(WEBSITE, function (res) {
    console.log("Does not return");
    return;
});
If the WEBSITE variable is 'http://google.com' or 'http://facebook.com', the script does not return to the console,
but if the WEBSITE variable is 'http://yahoo.com' or 'http://wikipedia.org', it does return to the console. What is the difference?
By "return to console" I'm assuming you mean that node exits and drops you back at a shell prompt.
In fact, node does eventually exit for all of those domains you listed. (You were just impatient.)
What you are seeing is the result of HTTP keep-alives. By default, node keeps the TCP connection open after an HTTP request completes. This makes subsequent requests to the same server faster. As long as a TCP connection is still open, node will not exit.
Eventually, either node or the server will close the idle connection (and thus node will exit). It's likely that Google and Facebook allow idle connections to live for longer amounts of time than Yahoo and Wikipedia.
If you want your script to make a request and exit as soon as it completes, you need to disable HTTP keep-alives. You can do this by disabling Agent support.
http.get({ host: 'google.com', port: 80, path: '/', agent: false }, function (res) {
    ...
});
Only disable the Agent if you need this specific functionality. In a normal, long-running app, disabling the Agent can cause many problems.
There are also some other approaches you can take to avoid keep-alives keeping node running.
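For example, you can drain the response and then unref the socket, so an idle keep-alive connection no longer keeps the event loop alive (a sketch; unref requires a reasonably recent node version):

var http = require('http');

http.get({ host: 'google.com', port: 80, path: '/' }, function (res) {
    res.resume(); // consume the body so 'end' can fire
    res.on('end', function () {
        // The connection may stay open for reuse, but it will no
        // longer prevent the process from exiting.
        res.socket.unref();
    });
});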

“Proxying” a lot of HTTP requests with Node.js + Express 2

I'm writing a proxy in Node.js + Express 2. The proxy should:
decrypt the POST payload and issue an HTTP request to the server based on the result;
encrypt the reply from the server and send it back to the client.
The encryption-related part works fine. The problem I'm facing is timeouts: the proxy should process requests in less than 15 seconds, and most of them actually complete in under 500 ms.
The problem appears when I increase the number of parallel requests. Most requests complete OK, but some fail after 15 seconds plus a couple of milliseconds. ab -n5000 -c300 works fine, but with a concurrency of 500 some requests fail with a timeout.
I can only speculate, but it seems the problem is the order of callback execution. Is it possible that the requests that come in first hang until ETIMEDOUT because node is busy with the latest ones, which are still processed in time, under 500 ms?
P.S.: There is no problem with the remote server. I'm using request to interact with it.
Update:
Here is roughly how things work, with some code:
function queryRemote(req, res) {
    var options = {}; // built based on req object (URI, body, authorization, etc.)
    request(options, function (err, httpResponse, body) {
        return err ? send500(req, res)
                   : res.end(encrypt(body));
    });
}

app.use(myBodyParser); // reads hex string in payload
                       // and calls next() on 'end' event

app.post('/', [checkHeaders,   // check Content-Type and Authorization headers
               authUser,       // query DB and call next()
               parseRequest],  // decrypt payload, parse JSON, call next()
    function (req, res) {
        req.socket.setTimeout(TIMEOUT);
        queryRemote(req, res);
    });
My problem is the following: when ab issues, let's say, 20 POSTs to /, the express route handler gets called something like thousands of times. That doesn't always happen; sometimes 20 and only 20 requests are processed in a timely fashion.
Of course, ab is not the problem. I'm 100% sure that only 20 requests are sent by ab, yet the route handler gets called multiple times.
I can't find a reason for this behaviour - any advice?
The timeouts were caused by using http.globalAgent, which by default allows only up to 5 concurrent requests to one host:port (not enough in my case).
Thousands of requests (instead of tens) were sent by ab (a fact confirmed with Wireshark under OS X; I cannot reproduce this under Ubuntu inside Parallels).
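For reference, the usual way to lift that limit on older node versions was to raise the global agent's socket cap before issuing requests (a sketch; the default changed in later node releases):

var http = require('http');

// node 0.x allowed only 5 concurrent sockets per host:port by default,
// so additional parallel requests to the same host had to queue.
http.globalAgent.maxSockets = 100;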
You can have a look at the node-http-proxy module and how it handles connections. Make sure you don't buffer any data and that everything works by streaming. You should also try to see where the time is spent for those long requests. Try instrumenting parts of your code with console.time and console.timeEnd to see what is taking the most time. If the time is mostly spent in JavaScript, you should try to profile it. Basically, you can use the v8 profiler by adding the --prof option to your node command, which produces a v8.log that can be processed with a v8 tool found in node-source-dir/deps/v8/tools. It only works if you have installed the d8 shell via scons (scons d8). You can have a look at this article for help getting it working.
You can also use node-webkit-agent, which uses the WebKit developer tools to show the profiler results. You can also have a look at my fork, which adds a bit of sugar.
If that doesn't work, you can try profiling with dtrace (this only works on illumos-based systems like SmartOS).

How to pause http server and resume it?

I'm trying to make a simple http server that can be paused and resumed. I've looked at the Node.js API here: http://nodejs.org/docs/v0.6.5/api/http.html
but that didn't help me. I've tried removing the listener on the 'request' event and adding it back; that worked well, but the listen callback gets called one more time each time I pause and resume. Here is some code I wrote:
var httpServer = require('http').Server();
var resumed = 0;

function ListenerHandler() {
    console.log('[-] HTTP Server running at 127.0.0.1:2525');
}

function RequestHandler(req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello, World');
}

function pauseHTTP() {
    if (resumed) {
        httpServer.removeAllListeners('request');
        httpServer.close();
        resumed = 0;
        console.log('[-] HTTP Server Paused');
    }
}

function resumeHTTP() {
    resumed = 1;
    httpServer.on('request', RequestHandler);
    httpServer.listen(2525, '127.0.0.1', ListenerHandler);
    console.log('[-] HTTP Server Resumed');
}
I don't know quite what you're trying to do, but I think you're working at the wrong level to do what you want.
If you want incoming connection requests to your web server to block until the server is prepared to handle them, you need to stop calling the accept(2) system call on the socket. (I cannot imagine that node.js, or indeed any web server, would make this task very easy. The request callback is doubtless called only when an entire well-formed request has been received, well after session initiation.) Your operating system kernel would continue accepting connections up until the maximum backlog given to the listen(2) system call. On slow sites, that might be sufficient. On busy sites, that's less than a blink of an eye.
If you want incoming connection requests to your web server to be rejected until the server is prepared to handle them, you need to close(2) the listening socket. node.js makes this available via the close() method, but that will tear down the state of the server. You'll have to re-install the callbacks when you want to run again.
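A sketch of that close()/listen() cycle which also avoids the accumulating listen callback from the question, by registering the 'request' handler once and reusing the same server object:

var http = require('http');

var server = http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello, World');
});

function resumeHTTP() {
    // listen() may be called again after close(); the single 'request'
    // listener registered above stays attached, so nothing piles up.
    server.listen(2525, '127.0.0.1', function () {
        console.log('[-] HTTP Server running at 127.0.0.1:2525');
    });
}

function pauseHTTP() {
    // Stop accepting new connections; open ones are allowed to finish.
    server.close(function () {
        console.log('[-] HTTP Server Paused');
    });
}

resumeHTTP();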
