iisnode failing requests after exception under windows? - node.js

Recently I've been trying to create a simple file server with Node.js, and it looks like I've run into some problems that I can't seem to overcome.
In short:
I configured iisnode to use 4 worker processes (there is a setting for this in web.config called nodeProcessCountPerApplication="4"), and it balances the load between these workers.
When 8 requests come in, each worker gets 2 requests to process, but when an exception happens in one of the requests being processed, the other one that is waiting also fails.
For example:
worker 1 handling request 1, request 5 waiting
worker 2 handling request 2, request 6 waiting
worker 3 handling request 3, request 7 waiting
worker 4 handling request 4, request 8 waiting
If an exception happens while handling request 3, the server responds with my custom error code, shuts down, and is restarted by iisnode. But the problem is that request 7 also fails, even though it hasn't been processed yet.
I tried setting maxConcurrentRequestsPerProcess="1" so that only 1 request at a time goes to each worker, but it does not work the way I want. Requests 5, 6, 7 and 8 are rejected with a 503 Service Unavailable response, even though the maximum number of requests that will queue is set to 1000 (the IIS default).
The Question
These requests don't have anything to do with each other, so one failing should not take down the other.
Is there a setting in IIS that enables the behavior that I'm after? Or is this even possible to do with node and IIS?
In Long
Why?
I'm using Node because I have some other requirements (like logging, etc.) that I can implement in JavaScript fairly easily.
Since I have an ASP.NET MVC background and I'm running Windows, after a few searches I found the iisnode module for IIS, which can be used to host a Node app with IIS. This makes it easy for me to manage and deploy the application. I also read on many sites that Node servers have good performance because of their async nature.
How?
I started with a very basic exception handling logic that catches exceptions using Node's domain module:
var http = require('http');
var domain = require('domain');

var server = http.createServer(function (request, response) {
    var d = domain.create();
    d.on('error', function (err) {
        try {
            // stop taking new requests
            serverShutdown();
            // send an error to the request that triggered the problem
            response.statusCode = 500;
            response.end('Oops, there was a problem! ;) \n');
        }
        catch (er2) {
            // oh well, not much we can do at this point
            console.error('Error sending 500!', er2.stack);
            process.exit(1);
        }
    });
    d.add(request);
    d.add(response);
    d.run(function () {
        // router is my own routing module (not shown)
        router.route(request, response);
    });
}).listen(process.env.PORT);
Since I could not find any best practices for gracefully shutting down the server when there is an unhandled exception, I decided to write my own logic. So after server.close() is called, I go through the sockets and wake them up (by setting a very short timeout) so the server can shut down:
function serverShutdown() {
    server.close();
    for (var s in sockets) {
        sockets[s].setTimeout(1, function () { });
    }
}
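(For reference, sockets above is my collection of open connections. A minimal sketch of how it can be tracked; the exact bookkeeping is simplified here:)

var sockets = {};
var nextSocketId = 0;

server.on('connection', function (socket) {
    var socketId = nextSocketId++;
    sockets[socketId] = socket;
    // Forget the socket once it is fully closed.
    socket.on('close', function () {
        delete sockets[socketId];
    });
});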
This shutdown logic also works great!
What?
The problem comes when I try to stress-test this. For some reason the cluster module is not supported by iisnode, but it has a similar feature: I configured iisnode to use 4 worker processes (there is a setting for this in web.config called nodeProcessCountPerApplication="4"), and it balances the load between these workers.
I'm not entirely sure how this works, but here's what I figured out from testing:
When 8 requests come in, each worker gets 2 requests to process, but when an exception happens in one of the requests being processed, the other one that is waiting also fails.
For example:
worker 1 handling request 1, request 5 waiting
worker 2 handling request 2, request 6 waiting
worker 3 handling request 3, request 7 waiting
worker 4 handling request 4, request 8 waiting
If an exception happens while handling request 3, the server responds with my custom error code, shuts down, and is restarted by iisnode. But the problem is that request 7 also fails, even though it hasn't been processed yet.
I tried setting maxConcurrentRequestsPerProcess="1" so that only 1 request at a time goes to each worker, but it does not work the way I want. Requests 5, 6, 7 and 8 are rejected with a 503 Service Unavailable response, even though the maximum number of requests that will queue is set to 1000 (the IIS default).
The Question Again
These requests don't have anything to do with each other, so one failing should not take down the other.
Is there a setting in IIS that enables the behavior that I'm after? Or is this even possible to do with node and IIS?
Any help is appreciated!
Update
I managed to rule out iisnode and made the same server using cluster and worker processes.
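Roughly, the cluster version looks like this (a simplified sketch; each worker runs the same domain-based request handling shown above):

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
    // Same worker count I used with iisnode.
    for (var i = 0; i < 4; i++) {
        cluster.fork();
    }
    // Replace a worker whenever it dies after an exception.
    cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died, forking a new one');
        cluster.fork();
    });
} else {
    http.createServer(function (request, response) {
        // ... domain wrapping and router.route(request, response), as above ...
        response.end('hello');
    }).listen(process.env.PORT || 3000);
}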
The problem still persists, and requests that are queued to the worker that hit the exception are returned with 502 Bad Gateway.
Again, I don't know what's happening with the requests coming in to the server, or at which level they are at the time of the exception. And I can't seem to find any info about this either...
Could anyone point me in the right direction? At least where to search for a solution?

Related

Client Request Timeout between SenecaJS Service and Express API

I am using SenecaJS to build a microservices-based application. So far, I have conceptualized one microservice, which consists of only one action. This action, when called, executes a time-consuming shell command (approx. 3 minutes) and returns the output of the shell command as its response. My code files are available here: https://gist.github.com/ohmtrivedi/5a94841d25714f3cfd6aee260add97bb
So, I have been trying to make requests to this service in 2 different ways: First, I send a direct request to the service (which runs the plugin, osfp_tool) using cURL, as demonstrated here: http://senecajs.org/getting-started/#writing-microservices.
Second, by referencing this tutorial: http://senecajs.org/getting-started/#web-server-integration, I wrote an Express API which communicates with my service (osfp_service). So, I send HTTP requests (using POSTMAN) to the Express API.
I was receiving a Client request timeout error in both cases. After some research, I came to know of the timeout configuration on the Seneca instance. So, I added a timeout configuration in 2 places - in the Seneca service (osfp_service) as well as in the Express API (app.js). Note that I have set the timeout to 300000 ms, or 5 minutes. I have checked that the shell command takes about 3 minutes, so the timeout is set higher than that. However, I still face the Client request timeout error. I know that there is no error in the shell command execution, because in my service log, even after I get the Client request timeout error, the action completes its execution successfully, which can be seen from the console.log messages.
Hope someone can help me resolve this issue; I have been stuck on it for a very long time now.
EDIT
So, I have been playing around with the timeout configuration. I was able to resolve the timeout error from the osfp_service.js script by setting the timeout on the Seneca instance at the topmost level (https://gist.github.com/ohmtrivedi/5a94841d25714f3cfd6aee260add97bb#file-osfp_service-js-L8).
If I set the timeout configuration in app.js in the same way (https://gist.github.com/ohmtrivedi/5a94841d25714f3cfd6aee260add97bb#file-app2-js-L26), then I still get Error 504: Client request timeout/Gateway timeout (https://drive.google.com/open?id=1El2JCy047dnm6PHlvU33d_mKPuIWUlfX).
If I set the timeout configuration in app.js inside the transport object of the Seneca instance (https://gist.github.com/ohmtrivedi/5a94841d25714f3cfd6aee260add97bb#file-app1-js-L26), then I get Error 503: Response timeout/Service Unavailable (https://drive.google.com/open?id=1u6w7XyK9-vAJVhna_JnIQ4imRzOm_51T). I cannot understand why it says Service Unavailable, because the action does get executed and it even completes successfully.
I can't seem to understand the different behavior.
I also worked on timeout problems with Seneca.
For my application, the solution was:
Set the timeout in require('seneca'):
let seneca = require('seneca')(
    {
        timeout: config.request_timeout,
        tag: ...
    }
)
Set the timeout in each act() call:
seneca.act({timeout$: config.request_timeout, role: ...});
Hope this helps.
EDIT:
As found in this post, the transport timeout can also be configured:
let seneca = require('seneca')(
    {
        timeout: config.request_timeout,
        tag: ...,
        transport: {
            'web': { timeout: config.request_timeout },
            'tcp': { timeout: config.request_timeout }
        }
    }
);

restart node.js forever process if response time too big

I have a forever script managing a Node.js site.
Sometimes the site hangs and response times go above 30 seconds, so in effect the site is down. The quick cure for it is restarting forever:
$ forever restart 3
where 3 is the script number in the forever list.
Is it possible to do this automatically? Is there an option in forever which makes it restart if the response time goes above, say, 2 seconds?
Or maybe I have to run an external script which checks the response time and decides whether to restart the hanging forever script.
Or maybe I need to write this logic inside my node.js site?
I am assuming you want to restart the server if most of the replies are taking longer than x seconds. There are many tools that help you restart your instances based on their health. Monit is one of them. In this guide, monit restarts the instance if a reply doesn't come back within 10 seconds.
If you want to kill the instance when any single request takes too long, then note down the time when the request comes in and the time when it leaves. If that time is too long, throw an exception that you know will not get caught, and the server will restart by itself. If you use Express, check the code for their logger in development mode, as it tracks the response time.
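For example, a rough sketch of that idea as Express middleware (MAX_RESPONSE_TIME and the threshold are hypothetical names of my own):

var MAX_RESPONSE_TIME = 2000; // ms, hypothetical threshold

app.use(function (req, res, next) {
    var start = Date.now();
    // 'finish' fires once the response has been fully handed off.
    res.on('finish', function () {
        var elapsed = Date.now() - start;
        if (elapsed > MAX_RESPONSE_TIME) {
            // Throw outside of any try/catch so the process dies
            // and forever starts a fresh one.
            process.nextTick(function () {
                throw new Error('Response took ' + elapsed + 'ms, restarting');
            });
        }
    });
    next();
});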
Aside from leorex's solution, I have used something like this before to send 500 on timed-out requests:
var writeHead = res.writeHead;
var timeout = setTimeout(function () {
    res.statusCode = 500;
    res.end('Response timed out');
    // To avoid errors being thrown for writes after this, they should be ignored.
    res.writeHead = res.write = res.end = function () {};
}, 40000);
res.writeHead = function () {
    // This was called in time.
    clearTimeout(timeout);
    writeHead.apply(this, arguments);
};
You can use the addTimeout module to take care of the timeout-clearing part for you.
Once you have implemented this, you can handle it however you like; you can just call process.exit(1) so forever will immediately replace the process.
You can make this smarter. Also, in my application, if an uncaught error happens, I signal the supervisor process so that it spins up another worker process, and then I go down gracefully (close the http server and wait for all pending requests to finish). You can do the same in your application, but make sure everything has a timeout callback as a failover/backup plan.
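A rough sketch of that hand-off using the cluster module (an illustration of the idea, not my exact code):

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
    cluster.fork();
    // Whenever a worker disconnects (or crashes), spin up a replacement.
    cluster.on('disconnect', function () {
        cluster.fork();
    });
} else {
    var server = http.createServer(function (req, res) {
        res.end('ok'); // normal request handling goes here
    }).listen(process.env.PORT || 3000);

    process.on('uncaughtException', function (err) {
        console.error('Uncaught error, going down gracefully', err.stack);
        // Failover/backup plan: force the exit if pending requests never finish.
        var killTimer = setTimeout(function () { process.exit(1); }, 30000);
        killTimer.unref();
        // Stop taking new requests; pending ones are allowed to finish.
        server.close();
        // Let the master know, so it forks a replacement right away.
        cluster.worker.disconnect();
    });
}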

Shutting down a Node.js http server in a unit test

Suppose I have some unit tests that test a web server. For reasons I don't want to discuss here (outside the scope ;-)), every test needs a freshly started server.
As long as I don't send a request to the server, everything is fine. But once I do, a call to the http server's close function does not work as expected, because all requests that were made result in kept-alive connections, so the server waits for 120 seconds before actually closing.
Of course this is not acceptable for running the tests.
At the moment, the only solutions I see are either
setting the keep-alive timeout to 0, so a call to close will actually close the server (see the sketch after this list),
or starting each server on a different port, although this becomes hard to handle when you have lots of tests.
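A minimal sketch of the first option, assuming it is acceptable for the tests to disable keep-alive per response so that close() can complete as soon as the responses are done:

var http = require('http');

var server = http.createServer(function (req, res) {
    // Ask the client not to keep the connection alive; the socket is closed
    // right after the response, so server.close() no longer has to wait.
    res.setHeader('Connection', 'close');
    res.end('hello');
});

server.listen(0); // port 0 picks a free ephemeral port, handy when many test servers run in parallel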
Any other ideas of how to deal with this situation?
PS: I had asked How do I shutdown a Node.js http(s) server immediately? a while ago and found a viable way to work around it, but it seems this workaround does not run reliably in every case, as I am getting strange results from time to time.
function createOneRequestServer() {
    var server = http.createServer(function (req, res) {
        res.write('write stuff');
        res.end();
        server.close();
    }).listen(8080);
}
You could also consider using child_process.fork to fork processes and kill them after you have tested against that process.
var fork = require('child_process').fork;

var child = fork('serverModuleYouWishToTest.js');

function callback(signalCode) {
    child.kill(signalCode);
}

runYourTest(callback);
This method is desirable because it does not require you to write special versions of your servers that service only one request, and it keeps your test code and your production code 100% independent.

How to kill a connection in nodejs

I have a homework assignment to build an http server using only node native modules.
I am trying to protect the server from overloading, so each request is hashed and stored.
If a certain request reaches a high number, say 500, I call socket.destroy().
Every interval (one minute) I reset the hash table. The problem is that when I do, a socket that was previously dead is working again. The only thing I do each interval is requests = {}, nothing to do with the connections.
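Simplified, the setup looks something like this (a sketch; the actual hashing is more involved, IP + path here is just a placeholder):

var http = require('http');
var requests = {}; // request hash -> count

var server = http.createServer(function (req, res) {
    var key = req.socket.remoteAddress + req.url; // placeholder for the real hash
    requests[key] = (requests[key] || 0) + 1;

    if (requests[key] >= 500) {
        req.socket.destroy(); // kill the connection
        return;
    }
    // ... normal handling ...
    res.end('ok');
}).listen(8080);

// Reset the counters every minute.
setInterval(function () {
    requests = {};
}, 60 * 1000);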
Any ideas why the connection is live again? Is there a better function to use than destroy()?
Thanks
Destroying the socket won't necessarily stop the client from retrying the request with a new socket.
You might instead try responding minimally with just a non-OK status code:
if (requests[path] >= 500) {
    res.statusCode = 503;
    res.end();
}
And, on the 503 status code:
The server is currently unable to handle the request due to a temporary overloading or maintenance of the server.

“Proxying” a lot of HTTP requests with Node.js + Express 2

I'm writing a proxy in Node.js + Express 2. The proxy should:
decrypt POST payload and issue HTTP request to server based on result;
encrypt reply from server and send it back to client.
The encryption-related part works fine. The problem I'm facing is timeouts. The proxy should process requests in less than 15 seconds, and most of them take under 500 ms, actually.
The problem appears when I increase the number of parallel requests. Most requests complete fine, but some fail after 15 seconds plus a couple of milliseconds. ab -n5000 -c300 works fine, but with a concurrency of 500 it fails for some requests with a timeout.
I can only speculate, but it seems that the problem is the order of callback execution. Is it possible that the requests that come in first hang until ETIMEDOUT because node is focused on the latest ones, which are still processed in under 500 ms?
P.S.: There is no problem with the remote server. I'm using request for the interactions with it.
Update
The way things work, with some code:
function queryRemote(req, res) {
    var options = {}; // built based on req object (URI, body, authorization, etc.)
    request(options, function (err, httpResponse, body) {
        return err ? send500(req, res)
                   : res.end(encrypt(body));
    });
}
app.use(myBodyParser); // reads hex string in payload
                       // and calls next() on 'end' event

app.post('/', [checkHeaders,   // check Content-Type and Authorization headers
               authUser,       // query DB and call next()
               parseRequest],  // decrypt payload, parse JSON, call next()
    function (req, res) {
        req.socket.setTimeout(TIMEOUT);
        queryRemote(req, res);
    });
My problem is the following: when ab issues, let's say, 20 POSTs to /, the Express route handler gets called something like thousands of times. That doesn't always happen; sometimes 20 and only 20 requests are processed in a timely fashion.
Of course, ab is not the problem. I'm 100% sure that only 20 requests were sent by ab. But the route handler gets called multiple times.
I can't find a reason for this behaviour; any advice?
Timeouts were caused by using http.globalAgent, which by default can process at most 5 concurrent requests to one host:port (which isn't enough in my case).
Thousands of requests (instead of tens) were sent by ab (a fact confirmed with Wireshark under OS X; I cannot reproduce this under Ubuntu inside Parallels).
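For the first point, a sketch of how the agent limit can be raised (note that in Node >= 0.12 maxSockets already defaults to Infinity, and remoteUrl here is just a stand-in for the proxied server's URL):

var http = require('http');
var request = require('request');

// Raise the limit on the shared global agent...
http.globalAgent.maxSockets = 100;

// ...or give the request module its own, larger pool per call.
request({ url: remoteUrl, pool: { maxSockets: 100 } }, function (err, httpResponse, body) {
    // handle the proxied response here
});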
You can have a look at the node-http-proxy module and how it handles the connections. Make sure you don't buffer any data and that everything works by streaming. You should also try to see where the time is spent for those long requests. Try instrumenting parts of your code with console.time and console.timeEnd and see what is taking the most time. If the time is mostly spent in JavaScript you should try to profile it. Basically you can use the v8 profiler by adding the --prof option to your node command. This produces a v8.log that can be processed with a v8 tool found in node-source-dir/deps/v8/tools. It only works if you have installed the d8 shell via scons (scons d8). You can have a look at this article to help you get this working.
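For instance, timing the remote call in queryRemote above might look like this (the label name is arbitrary):

console.time('remote-request');
request(options, function (err, httpResponse, body) {
    console.timeEnd('remote-request'); // prints e.g. "remote-request: 512ms"
    // ... existing handling ...
});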
You can also use node-webkit-agent, which uses the WebKit developer tools to show the profiler results. You can also have a look at my fork with a bit of sugar.
If that doesn't work, you can try profiling with dtrace (which only works on illumos-based systems like SmartOS).
