Best practices to handle exceptions in SailsJS - node.js

I just started trying out SailsJS a few days ago.
I've realized that the Node process terminates whenever I have an uncaught exception.
I have a list of controllers and each of them calls a specific service JS file (Containing logics and DB calls) in services/.
Can I write a global error handler for all services, so that any error thrown by these services is handled by it and an appropriate error response is communicated to the front-end?
I tried using process.on('uncaughtException') and some basic exception handling, but that needs to be added to each service method.
Also, can I have one common point for all service calls made from client to server, through which all io.socket.post() and io.socket.get() calls go?
I would appreciate any pointer/article that shows the common best practices for handling uncaught exceptions in SailsJS with shorter code, rather than writing redundant code in all services.

Best practice is to use Domains in your controller. This will handle exceptions in async code, and it's relatively straightforward.
You can use something like trycatch to simplify things a little, but domain-based exception handling will be the most effective. It ensures that exceptions do not crash your application. Just create a new domain in your controller and run your controller methods inside of that domain.
Since Sails.js is based on Express, you can use Connect middleware, and you can seamlessly create a new domain from middleware. There is such a thing as express-domain-middleware. This might be the most aesthetic and convenient option.
Update:
As mentioned by Benjamin Gruenbaum, Domains are planned to be deprecated in v1 of Node. Perhaps you should read through Joyent's Error Handling Best Practices; it's agnostic to the framework you are using.
Additionally, you can still use Domains, since there isn't otherwise a way to globally handle errors in Node.js. Once they are deprecated, you could remove your dependence on Domains relatively easily. That said, it may be best not to rely solely on Domains.
StrongLoop also provides a library inspired by Domains called Zone. This is also an option.

It's OK to let a Node instance crash on a programming error; otherwise it may continue in an inconsistent state and mess up business logic. In a production environment the server can be restarted on crash, which resets its state and keeps it available, provided the error is not frequent. Through all of this it is very important to log everything. This applies to most Node setups, including SailsJS.
The following approach can be taken:
Use a logger: a dedicated logger should be accessible to server components, and should be connected to a service that notifies the developer (email?) of very serious errors.
Propagate per-request errors to the end: carefully forward errors from any step in request processing. In ExpressJS/ConnectJS/middleware-based setups, next(err) can be used to pass an error down the middleware chain. An error-catching middleware at the end of the chain will receive this error, log it verbosely, and send a 500 status back. You can use Domains, Zones, Promises, async, or whatever you like to process requests and catch errors.
Shut down on process.on('uncaughtException'): log the error, do any necessary clean-up, and rethrow the same error to shut the process down.
Use PM2/Forever or Upstart/init.d on Linux: when the process shuts down due to a bad exception, these tools will restart it and track how many times the server has crashed. If the server is crashing far too often, it's best to stop it and take immediate action.

I have not tried this, but I believe you should be able to set a catch-all exception handler in bootstrap.js using process.on('uncaughtException').
Personally, I use promises via the bluebird library and add a catch statement that passes all errors to a global error-handling function.
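A minimal sketch of that pattern; `findUser` and `handleError` are hypothetical names, and the same shape works with bluebird or native promises:

```javascript
// Global error handler: one place to log and translate errors into responses.
function handleError(err, res) {
  console.error('Unhandled service error:', err.stack);
  res.statusCode = 500;
  res.end('Internal Server Error');
}

// Hypothetical service call returning a promise.
function findUser(id) {
  return id === 42
    ? Promise.resolve({ id: 42, name: 'Ada' })
    : Promise.reject(new Error('User not found'));
}

// Controller: every promise chain ends in a catch that delegates to handleError,
// so no rejection escapes as an uncaught exception.
function showUser(req, res) {
  return findUser(req.params.id)
    .then((user) => { res.statusCode = 200; res.end(JSON.stringify(user)); })
    .catch((err) => handleError(err, res));
}
```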

Related

Service/data layer exception handling in express validator

I've built an app that pipes requests through express-validator's validation chains, but I ran into a design issue related to logging and error handling.
My app is divided into distinct controllers/services/repositories. All errors thrown in repositories and services bubble up to controllers, which handle them by calling next() and passing them to error-handling middleware that logs them, and so on.
The problem is that one of my validation chains calls a repo. If my DB connection is dead, all I get on my log is what I happen to put in the validation chain's withMessage().
I'd like to get a better log of the event in such a case, but right now a dead DB connection ends up as a 400 Bad Request in the logs, since there isn't a controller to catch the error.
Any ideas how I should structure my app to combat this?
I would not like to add specific logging to the service/data layers just because I use express-validator as middleware.
I looked more into the responsibilities of each layer and came to the conclusion that the call to the data layer should not be made through express-validator. Logic such as whether an email is already registered should live in the service layer, as per:
In which layer should validation be performed?
I will use express-validator only for easy checks and put the rest into my service layer.
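A minimal sketch of that split, with hypothetical names (`userRepo`, `registerUser`): express-validator would keep cheap format checks such as `body('email').isEmail()`, while the DB-touching uniqueness check moves into the service layer, where its errors flow through next(err) to the logging error middleware.

```javascript
// Hypothetical in-memory repo standing in for real DB calls.
const userRepo = {
  users: new Map(),
  async findByEmail(email) { return this.users.get(email) || null; },
  async create(user) { this.users.set(user.email, user); return user; },
};

// Service layer: business rules such as "email must be unique" live here,
// so a dead DB connection surfaces as an ordinary error, not a 400.
async function registerUser(email) {
  const existing = await userRepo.findByEmail(email);
  if (existing) {
    const err = new Error('Email already registered');
    err.status = 409;
    throw err;
  }
  return userRepo.create({ email });
}

// Controller: the service call is wrapped so any error (validation or DB)
// goes to next(err) and reaches the error-handling middleware.
async function registerHandler(req, res, next) {
  try {
    const user = await registerUser(req.body.email);
    res.statusCode = 201;
    res.end(JSON.stringify(user));
  } catch (err) {
    next(err);
  }
}
```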

Putting a Load on Node

We have a C# Web API server and a Node Express server. We make hundreds of requests from the C# server to a route on the Node server. The route on the Node server does intensive work and often doesn't return for 6-8 seconds.
Making hundreds of these requests simultaneously seems to cause the Node server to fail. Errors in the Node server output include either socket hang up or ECONNRESET. The error from the C# side says
No connection could be made because the target machine actively refused it.
This error occurs after processing an unpredictable number of the requests, which leads me to think it is simply overloading the server. Using a Thread.Sleep(500) on the C# side allows us to get through more requests, and fiddling with the wait there leads to more or less success, but thread sleeping is rarely if ever the right answer, and I think this case is no exception.
Are we simply putting too much stress on the Node server? Can this only be solved with load balancing or some form of clustering? If there is another alternative, what might it look like?
One path I'm starting to explore is the node-toobusy module. If I return a 503 though, what should be the process in the following code? Should I Thread.Sleep and then re-submit the request?
It sounds like your node.js server is getting overloaded.
The route on the Node server does intensive work and often doesn't return for 6-8 seconds.
This is a bad smell: if your Node process is doing intense computation, it blocks the event loop until that computation completes and can't handle any other requests. You should probably do that computation in a worker process, which will run on another CPU core if available. cluster is the built-in Node module that lets you do that, so I'll point you there.
One path I'm starting to explore is the node-toobusy module. If I return a 503 though, what should be the process in the following code? Should I Thread.Sleep and then re-submit the request?
That depends on your application and your expected load. You may want to retry once or twice if it's likely that things will cool down in that time, but for your API you probably just want to return a 503 from the C# side too: better to let the client know the server is too busy and let it make its own decision than to keep refreshing on its behalf.
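The node-toobusy module mentioned in the question works by measuring event-loop lag. A minimal hand-rolled version of the same idea looks like the following; the interval and lag threshold are assumptions to tune for your workload:

```javascript
// Sample how late a 500ms timer fires; a large delay means the event loop
// is backed up (node-toobusy does a smoothed version of this measurement).
const CHECK_INTERVAL_MS = 500;
const LAG_LIMIT_MS = 70; // assumption: tune for your app
let lag = 0;
let last = Date.now();
setInterval(() => {
  const now = Date.now();
  lag = Math.max(0, now - last - CHECK_INTERVAL_MS);
  last = now;
}, CHECK_INTERVAL_MS).unref(); // unref: don't keep the process alive for this

function tooBusy() {
  return lag > LAG_LIMIT_MS;
}

// Connect/Express-style middleware: shed load early with a 503 rather than
// queueing requests the server can't serve in time.
function loadShedder(req, res, next) {
  if (tooBusy()) {
    res.statusCode = 503;
    res.setHeader('Retry-After', '1'); // hint the client when to retry
    return res.end('Server too busy, try again shortly');
  }
  next();
}
```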

Best practices for preventing Denial of Service via node.js uncaught Exceptions

So I have recently implemented a pattern similar to that suggested by node.js's domain documentation (http://nodejs.org/api/domain.html#domain_warning_don_t_ignore_errors) to gracefully crash the server if an uncaught exception happens.
However it's easily apparent that it is possible to create a denial of service by rapidly triggering errors, since it takes a significant amount of time to start a new process.
Are there any recommended best practices for mitigating this?
log your uncaught exceptions
read the logs
take the time to understand, reproduce, and fix the bugs
keep deploying new versions of your app where uncaught exceptions are increasingly rare
use the cluster module and load balancers for fault tolerance
consider network level DoS protection if you really need it
Have a web server configuration handy where, for the easy case in which a particular URL crashes your app, the web server can quickly start filtering those requests so they never reach your application server. This could be done with an nginx location block that just sends a fail-whale-type response.
Realize that this only helps in some cases. In complex cases, where a specially crafted POST message body crashes your app and turning off the entire URL would block legitimate user access, you'll need to respond by deploying better code.
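A sketch of such an nginx location block; the URL and message are placeholders:

```nginx
# Short-circuit a known-crashing URL at the web server so those requests
# never reach the Node app. "/crashing-endpoint" is a placeholder.
location /crashing-endpoint {
    return 503 "Temporarily unavailable";
}
```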

Using throw in nodejs connect middleware

Inside Node.js Connect middleware, the default way of reporting an error is to call next(err), usually followed by return if no further message should be shown to the user. The error handler may then show a vanilla HTTP 500 page, for example.
However, some errors may surface as exceptions, including those thrown by third-party libs in use. The Connect (or Express?) middleware stack, however, catches those and redirects them to the error handler as well.
I have followed some discussions saying Node.js should be restarted on exceptions, since some state may be corrupted. However, the Connect (or Express) makers don't seem to share this view?
Given this behavior, is it feasible to just throw exceptions inside middleware? Or might this bypass some Connect-internal operation?

Why is using try / catch a bad idea in this scenario?

I've been working on hooking up MongoDB to a Node.js server. I've got the code all neatly put away; however, it takes about 5 seconds to connect, and if a request for an insert or a query comes in before that, the server will crash.
My first instinct was to use try/catch to filter out any requests that error out. I want the server to carry on regardless of what errors an individual request triggers, so why not use it?
Everywhere I look it's touted as a bad idea, and I'm not sure I understand why.
A try/catch block around something that simply ignores the error is generally considered bad practice. However, if that is the behavior you want, then there is nothing wrong with it. Just consider that it may not actually be the best behavior. You may, at a minimum, want to log the fact that an exception occurred.
Now, due to the asynchronous nature of Node.js, try/catch blocks are often not useful. I don't know what part of the MongoDB API you are using, but if there is a callback, you will want to check the err parameter instead, which should be the first parameter of your callback function in most cases.
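The err-first callback shape described above can be sketched as follows; `insertUser` is a hypothetical stand-in for a real MongoDB call. The key point is that the failure is delivered as the first callback argument, so a try/catch around the call never sees it:

```javascript
// Hypothetical insert in the MongoDB callback style.
function insertUser(user, callback) {
  if (!user || !user.name) {
    return callback(new Error('name is required'));
  }
  callback(null, Object.assign({ _id: 1 }, user));
}

insertUser({ name: 'Ada' }, (err, doc) => {
  if (err) {
    console.error('Insert failed:', err.message); // handle, don't swallow
    return;
  }
  console.log('Inserted document', doc._id);
});
```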
Finally, for all of my applications, I connect to any necessary DBs first, and start listening on ports afterwards. But this is only relevant if a persistent connection makes sense for your project. Plus, you still have to watch for errors, since connection failures do happen.
