This might be a silly question, but I was wondering: why does invoking Express' res.send() (Express' response object extends Node's http.ServerResponse) more than once for a single request shut down a Node.js server? Why doesn't it just end the request when the first response is sent and log the error, instead of crashing?
Express is just throwing an exception, and Node then handles it:
The 'uncaughtException' event is emitted when an uncaught JavaScript exception bubbles all the way back to the event loop. By default, Node.js handles such exceptions by printing the stack trace to stderr and exiting. (from the Node.js docs)
If you want to do something else, implement your own process.on('uncaughtException', (err) => {})
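A minimal sketch of such a handler might look like this (log the error, then exit deliberately, since continuing after an uncaught exception is risky):
process.on('uncaughtException', (err) => {
  // Log the error however you like, then exit explicitly: after an uncaught
  // exception the process may be in an inconsistent state, so restarting is
  // usually safer than continuing to serve requests.
  console.error('Uncaught exception:', err);
  process.exit(1);
});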
Or you could let it crash and use stuff like forever to bring it back up.
Related
I have an AWS Lambda application built upon an external library that contains an EventEmitter. On a certain event, I need to make an HTTP request. So I was using this code (simplified):
myEmitter.on("myEvent", async () => {
  setup();
  await doRequest();
  finishingWork();
});
What I understand happens is this:
My handler is called, but as soon as the doRequest function is called, a Promise is returned and the EventEmitter continues with the next handlers. When all that is done, the work of the handler can continue (finishingWork).
This works locally, because my Node.js process keeps running and any remaining events on the event loop are handled. The strange thing is that this doesn't seem to work on AWS Lambda, even if context.callbackWaitsForEmptyEventLoop is set to true.
In my logging I can see my handler enters the doRequest function, but nothing after I call the library that makes the HTTP call (request-promise, which uses request). And the code doesn't continue when I make another request (which I would expect if callbackWaitsForEmptyEventLoop were set to false, which it isn't).
Has anyone experienced something similar, and do you know how to perform an asynchronous HTTP request in the handler of a Node.js event emitter on AWS Lambda?
I have a similar issue as well: my event emitter logs all events normally until it runs into an async function. It works fine in ECS but not in Lambda, as the event emitter runs synchronously and Lambda will exit once the response is returned.
In the end, I used await-event-emitter to solve the problem.
await emitter.emit('onUpdate', ...);
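For context, a minimal sketch of how that fits together. The import shape may differ between versions of await-event-emitter, and setup/doRequest/finishingWork are the helpers from the question:
// The class may be the default export or the module itself, depending on version.
const AwaitEventEmitter = require('await-event-emitter');

const emitter = new AwaitEventEmitter();

emitter.on('onUpdate', async () => {
  setup();
  await doRequest();   // the async HTTP call from the question
  finishingWork();
});

exports.handler = async (event) => {
  // emit() resolves only after every (async) listener has finished, so the
  // HTTP request completes before the Lambda handler returns.
  await emitter.emit('onUpdate');
  return { statusCode: 200 };
};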
If you know how to solve this, feel free to add another answer. But for now, the "solution" for us was to put the event handler code elsewhere in our codebase. This way, it is executed asynchronously.
We were able to do that because there is only one place where the event is emitted, but the event handler approach would have been a cleaner solution. Unfortunately, it doesn't seem to be possible.
The only way I have found to "catch" EPIPE errors thrown asynchronously by a socket timing out or closing prematurely is to directly attach an event handler to the socket object itself, as demonstrated in the documentation here:
https://nodejs.org/api/errors.html
const net = require('net');
const connection = net.connect('localhost');

// Adding an 'error' event handler to a stream:
connection.on('error', (err) => {
  // If the connection is reset by the server, or if it can't
  // connect at all, or on any sort of error encountered by
  // the connection, the error will be sent here.
  console.error(err);
});
This works, but is in many cases unhelpful -- if you're accessing a database or another service that has a node driver, the request and socket objects are likely inaccessible from your app code.
The most obvious solution is "don't do things that generate these errors" but since any non-trivial application is dependent on other services, no amount of input-checking in advance can guarantee that the service on the other end won't hang up unexpectedly, throwing an EPIPE in your code and in all likelihood crashing Node.
So, the options for handling this situation seem to be:
Let the error crash your app and use nodemon or supervisor to automatically restart. This isn't clean, but it seems like the only way to really guarantee you'll get back up and running safely.
Write custom connection clients for dependent services. This lets you attach error handlers where known problems could occur. But it violates DRY and means that you're now on the hook for maintaining your own custom client code when otherwise reasonable open source solutions already exist. Basically, it adds a huge maintenance burden for a slightly cleaner solution to a fairly rare problem.
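For illustration, option 2 might look roughly like this; SafeConnection and its API are made up for this sketch, not taken from any existing library:
const net = require('net');

// Hypothetical thin wrapper around a raw socket. Attaching the 'error'
// handler immediately means an EPIPE/ECONNRESET emitted on the socket
// never becomes an unhandled 'error' event that crashes the process.
class SafeConnection {
  constructor(port, host) {
    this.socket = net.connect(port, host);
    this.socket.on('error', (err) => {
      console.error('connection error:', err.message);
    });
  }

  write(data) {
    // Writes after the peer has hung up surface through the handler above.
    return this.socket.write(data);
  }
}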
Am I missing something, or are those the best options available?
Using node.js, when I run the program
setTimeout(() => console.log("Timed out"), 0);
console.log("finishing");
I see
finishing
Timed out
But when I add a throw before "finishing"
setTimeout(() => console.log("Timed out"), 0);
throw new Error();
console.log("finishing");
I see
throw new Error();
^
Error
at Object.<anonymous> ...(stack trace here)...
And I don't see any mention of "Timed out".
Why is that? Even though the initial context would throw, once the stack was freed up, I expected the callback I passed to setTimeout would still run.
Does having an uncaught exception cause all timeouts to get canceled? Is this feature documented somewhere?
If I have multiple timeouts, is there a way for me to make sure that all the other timeouts continue to run when they can even if one of them happens to throw?
Unlike a web application running in a browser, a Node application runs as a process on top of the Google V8 JavaScript engine. If you look at https://nodejs.org/api/timers.html, it states that
The timer functions within Node.js implement a similar API as the timers API provided by Web Browsers but use a different internal implementation that is built around the Node.js Event Loop.
As the above statement explains, even though the same global functions are available in both cases, their implementations are different. Therefore when an uncaught exception occurs in a Node application, all code related to timeouts will stop as the process is terminated. The best way to handle this is to properly handle all exceptions. You can use the below code to capture all uncaught exceptions from the process level itself.
process.on('uncaughtException', function(error) {
  console.log(error);
});
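Besides the process-level handler above, you can also wrap each timer callback individually so a throw in one timer doesn't take down the others. A rough sketch (safeTimeout is just an illustrative name):
function safeTimeout(fn, ms) {
  return setTimeout(() => {
    try {
      fn();
    } catch (err) {
      // One throwing callback no longer becomes an uncaught exception,
      // so the process (and the other timers) keep running.
      console.error('timer callback failed:', err);
    }
  }, ms);
}

safeTimeout(() => { throw new Error('boom'); }, 0);
safeTimeout(() => console.log('still runs'), 0);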
I am working on a nodejs project and I had a major doubt/problem in nodejs error handling.
Whenever one of the requests causes an exception, will Node stop execution, or will the callbacks of the other in-flight requests simply never be returned? If so, is there any way I can make Node stop or rethrow the exception only after all the current requests have completed?
I also wanted to ask whether it is a good idea to use the domain module, given that it is deprecated.
I am working with a partner on a project. He has written a lot of code in Node.js+Express, but we've been running into issues with the architecture.
To remedy this, my primary role has been to figure out the best way to architect a Node.js+Express application. I've run into two scenarios, dealing with errors, and I'd like some suggestions.
First, how do I capture top-level exceptions? The last thing I want is for a bug to completely kill the node process. I want to continue serving users in the face of any error.
Secondly, some errors are passed back via callbacks (we're using caolan / async). As part of each route handler, we either render a view (GET) or redirect to another route (POST), and on error we want to redirect to an error screen with a custom error message. How can I make sure to capture this logic in one place?
First, how do I capture top-level exceptions? The last thing I want is for a bug to completely kill the node process. I want to continue serving users in the face of any error.
Edit: I think node's philosophy in general is that any uncaught exceptions should kill the process, and that you should run your node app under some kind of process monitor with appropriate logging facilities. The following advice is regarding any other errors you might encounter in your express route handlers etc.
Express has a general errorHandler, which should capture all thrown errors as well as everything passed as a parameter to next in your routes/middlewares, and respond with 500 Internal Server Error.
Secondly, some errors are passed back via callbacks (we're using caolan / async). As part of each route handler, we either render a view (GET) or redirect to another route (POST), and on error we want to redirect to an error screen with a custom error message. How can I make sure to capture this logic in one place?
You could create a custom handleError, which you call in each callback like so:
async.series(..., function(err, results) {
  if (err)
    return handleError(req, res, err);
  // ...
});
Or you could just pass the errors on with next(err) and implement your custom error handler as described here: http://expressjs.com/guide/error-handling.html
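A minimal sketch of that second approach, assuming the error handler is registered after all other routes and that an 'error' view exists:
// The four-argument signature is what marks this as error-handling middleware.
app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500);
  // 'error' is an assumed view; render whatever error screen you have,
  // passing along the custom message.
  res.render('error', { message: err.message });
});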
Top level exceptions:
You can use the uncaughtException event from process, but it's generally not recommended.
Often applications will go into a corrupted state (e.g. you have some state which typically gets set, but the exception caused that not to happen) when an exception is thrown. Then it will just cause more and more errors from that point on.
A recommended approach is to use something like forever to automatically restart the app in case it crashes. This way you will have the application in a sane state even after a crash.
Error handling in express:
You can create a new Error instance and pass it to the next callback in the chain.
E.g.
// 'app' is your Express application instance
app.get('/some/url', function(req, res, next) {
  // something here
  if (error) {
    next(new Error('blah blah'));
  }
});
To handle the error from there onwards, you can set an error handler. See the Express docs on error handling.
Check out the excellent log-handling module Winston: https://github.com/flatiron/winston
It allows you to configure exception handling in a manner that will not only log it, but will allow the process to continue. And, since these would obviously be serious issues, you can even configure Winston to send out emails on specific event types (like exceptions).
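A minimal sketch of the exception-handling configuration, assuming Winston 3.x (the older flatiron-era API differs):
const winston = require('winston');

const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
  // Uncaught exceptions get logged to this transport instead of only
  // killing the process with a bare stack trace.
  exceptionHandlers: [
    new winston.transports.File({ filename: 'exceptions.log' }),
  ],
  // Keep the process running after an exception has been logged.
  exitOnError: false,
});

logger.info('logger configured; uncaught exceptions will be captured');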