I just started trying out Node.js a few days ago. I've realized that the Node process is terminated whenever I have an unhandled exception in my program. This is different from the typical server containers I have been exposed to, where only the worker thread dies when an unhandled exception occurs and the container is still able to receive requests. This raises a few questions:
Is process.on('uncaughtException') the only effective way to guard against it?
Will process.on('uncaughtException') catch the unhandled exception during execution of asynchronous processes as well?
Is there a module that is already built (such as sending email or writing to a file) that I could leverage in the case of uncaught exceptions?
I would appreciate any pointer/article that would show me the common best practices for handling uncaught exceptions in node.js
Update: Joyent now has its own guide. The following information is more of a summary:
Safely "throwing" errors
Ideally we'd like to avoid uncaught errors as much as possible. As such, instead of literally throwing the error, we can safely "throw" it using one of the following methods, depending on our code architecture:
For synchronous code, if an error happens, return the error:
// Define divider as a synchronous function
var divideSync = function(x,y) {
// if error condition?
if ( y === 0 ) {
// "throw" the error safely by returning it
return new Error("Can't divide by zero")
}
else {
// no error occurred, continue on
return x/y
}
}
// Divide 4/2
var result = divideSync(4,2)
// did an error occur?
if ( result instanceof Error ) {
// handle the error safely
console.log('4/2=err', result)
}
else {
// no error occurred, continue on
console.log('4/2='+result)
}
// Divide 4/0
result = divideSync(4,0)
// did an error occur?
if ( result instanceof Error ) {
// handle the error safely
console.log('4/0=err', result)
}
else {
// no error occurred, continue on
console.log('4/0='+result)
}
For callback-based (i.e. asynchronous) code, the first argument of the callback is err; if an error happens, err is the error, and if no error happens, err is null. Any other arguments follow the err argument:
var divide = function(x,y,next) {
// if error condition?
if ( y === 0 ) {
// "throw" the error safely by calling the completion callback
// with the first argument being the error
next(new Error("Can't divide by zero"))
}
else {
// no error occurred, continue on
next(null, x/y)
}
}
divide(4,2,function(err,result){
// did an error occur?
if ( err ) {
// handle the error safely
console.log('4/2=err', err)
}
else {
// no error occurred, continue on
console.log('4/2='+result)
}
})
divide(4,0,function(err,result){
// did an error occur?
if ( err ) {
// handle the error safely
console.log('4/0=err', err)
}
else {
// no error occurred, continue on
console.log('4/0='+result)
}
})
For eventful code, where the error may happen anywhere, instead of throwing the error, fire the error event instead:
// Define our Divider Event Emitter
var events = require('events')
var Divider = function(){
events.EventEmitter.call(this)
}
require('util').inherits(Divider, events.EventEmitter)
// Add the divide function
Divider.prototype.divide = function(x,y){
// if error condition?
if ( y === 0 ) {
// "throw" the error safely by emitting it
var err = new Error("Can't divide by zero")
this.emit('error', err)
}
else {
// no error occurred, continue on
this.emit('divided', x, y, x/y)
}
// Chain
return this;
}
// Create our divider and listen for errors
var divider = new Divider()
divider.on('error', function(err){
// handle the error safely
console.log(err)
})
divider.on('divided', function(x,y,result){
console.log(x+'/'+y+'='+result)
})
// Divide
divider.divide(4,2).divide(4,0)
Safely "catching" errors
Sometimes though, there may still be code that throws an error somewhere which can lead to an uncaught exception and a potential crash of our application if we don't catch it safely. Depending on our code architecture we can use one of the following methods to catch it:
When we know where the error is occurring, we can wrap that section in a node.js domain
var d = require('domain').create()
d.on('error', function(err){
// handle the error safely
console.log(err)
})
// catch the uncaught errors in this asynchronous or synchronous code block
d.run(function(){
// the asynchronous or synchronous code that we want to catch thrown errors on
var err = new Error('example')
throw err
})
If we know the error is occurring in synchronous code, and for whatever reason can't use domains (perhaps an old version of Node), we can use the try catch statement:
// catch the uncaught errors in this synchronous code block
// try catch statements only work on synchronous code
try {
// the synchronous code that we want to catch thrown errors on
var err = new Error('example')
throw err
} catch (err) {
// handle the error safely
console.log(err)
}
However, be careful not to use try...catch in asynchronous code, as an asynchronously thrown error will not be caught:
try {
setTimeout(function(){
var err = new Error('example')
throw err
}, 1000)
}
catch (err) {
// Example error won't be caught here... crashing our app
// hence the need for domains
}
If you do want to work with try..catch in conjunction with asynchronous code, when running Node 7.4 or higher you can use async/await natively to write your asynchronous functions.
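For illustration, here is a minimal sketch of that approach (the promise-returning dividePromise function is an assumption added for this example, not part of the original code):
// A hypothetical promise-based divide, used only to demonstrate async/await
var dividePromise = function (x, y) {
  return new Promise(function (resolve, reject) {
    if ( y === 0 ) {
      reject(new Error("Can't divide by zero"))
    }
    else {
      resolve(x/y)
    }
  })
}
// Inside an async function, asynchronous errors become catchable with try...catch
async function main () {
  try {
    var result = await dividePromise(4, 0)
    console.log('4/0=' + result)
  }
  catch (err) {
    // handle the error safely
    console.log('4/0=err', err)
  }
}
main()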
Another thing to be careful about with try...catch is the risk of wrapping your completion callback inside the try statement like so:
var divide = function(x,y,next) {
// if error condition?
if ( y === 0 ) {
// "throw" the error safely by calling the completion callback
// with the first argument being the error
next(new Error("Can't divide by zero"))
}
else {
// no error occurred, continue on
next(null, x/y)
}
}
var continueElsewhere = function(err, result){
throw new Error('elsewhere has failed')
}
try {
divide(4, 2, continueElsewhere)
// ^ the execution of divide, and the execution of
// continueElsewhere will be inside the try statement
}
catch (err) {
console.log(err.stack)
// ^ will output the "unexpected" result of: elsewhere has failed
}
This gotcha is very easy to stumble into as your code becomes more complex. As such, it is best to either use domains or to return errors, to avoid (1) uncaught exceptions in asynchronous code and (2) the try catch catching execution that you don't want it to. In languages that allow for proper threading instead of JavaScript's asynchronous event-loop style, this is less of an issue.
Finally, in the case where an uncaught error happens in a place that wasn't wrapped in a domain or a try catch statement, we can make our application not crash by using the uncaughtException listener (however doing so can put the application in an unknown state):
// catch the uncaught errors that weren't wrapped in a domain or try catch statement
// do not use this in modules, but only in applications, as otherwise we could have multiple of these bound
process.on('uncaughtException', function(err) {
// handle the error safely
console.log(err)
})
// the asynchronous or synchronous code that emits the otherwise uncaught error
var err = new Error('example')
throw err
Following is a summary and curation from many different sources on this topic, including code examples and quotes from selected blog posts. The complete list of best practices can be found here
Best practices of Node.JS error handling
Number1: Use promises for async error handling
TL;DR: Handling async errors in callback style is probably the fastest way to hell (a.k.a the pyramid of doom). The best gift you can give to your code is using a reputable promise library instead, which provides a much more compact and familiar code syntax like try-catch
Otherwise: Node.JS callback style, function(err, response), is a promising way to un-maintainable code due to the mix of error handling with casual code, excessive nesting and awkward coding patterns
Code example - good
doWork()
.then(doWork)
.then(doError)
.then(doWork)
.catch(errorHandler)
.then(verify);
code example anti pattern – callback style error handling
getData(someParameter, function(err, result){
if(err != null)
//do something like calling the given callback function and pass the error
getMoreData(a, function(err, result){
if(err != null)
//do something like calling the given callback function and pass the error
getMoreData(b, function(c){
getMoreData(d, function(e){
...
});
});
});
});
});
Blog quote: "We have a problem with promises"
(From the blog pouchdb, ranked 11 for the keywords "Node Promises")
"…And in fact, callbacks do something even more sinister: they deprive us of the stack, which is something we usually take for granted in programming languages. Writing code without a stack is a lot like driving a car without a brake pedal: you don’t realize how badly you need it, until you reach for it and it’s not there. The whole point of promises is to give us back the language fundamentals we lost when we went async: return, throw, and the stack. But you have to know how to use promises correctly in order to take advantage of them."
Number2: Use only the built-in Error object
TL;DR: It's pretty common to see code that throws errors as a string or as a custom type – this complicates the error handling logic and the interoperability between modules. Whether you reject a promise, throw an exception or emit an error – using Node.JS's built-in Error object increases uniformity and prevents loss of error information
Otherwise: When executing some module, being uncertain which type of errors comes in return makes it much harder to reason about the coming exception and handle it. Even worse, using custom types to describe errors might lead to loss of critical error information like the stack trace!
Code example - doing it right
//throwing an Error from typical function, whether sync or async
if(!productToAdd)
throw new Error("How can I add new product when no value provided?");
//'throwing' an Error from EventEmitter
const myEmitter = new MyEmitter();
myEmitter.emit('error', new Error('whoops!'));
//'throwing' an Error from a Promise
return new Promise(function (resolve, reject) {
DAL.getProduct(productToAdd.id).then((existingProduct) =>{
if(existingProduct != null)
return reject(new Error("Why fooling us and trying to add an existing product?"));
code example anti pattern
//throwing a String lacks any stack trace information and other important properties
if(!productToAdd)
throw ("How can I add new product when no value provided?");
Blog quote: "A string is not an error"
(From the blog devthought, ranked 6 for the keywords “Node.JS error object”)
"…passing a string instead of an error results in reduced interoperability between modules. It breaks contracts with APIs that might be performing instanceof Error checks, or that want to know more about the error. Error objects, as we’ll see, have very interesting properties in modern JavaScript engines besides holding the message passed to the constructor.."
Number3: Distinguish operational vs programmer errors
TL;DR: Operational errors (e.g. an API received invalid input) refer to known cases where the error impact is fully understood and can be handled thoughtfully. On the other hand, programmer errors (e.g. trying to read an undefined variable) refer to unknown code failures that dictate gracefully restarting the application
Otherwise: You could always restart the application when an error appears, but why let ~5000 online users down because of a minor and predicted error (operational error)? The opposite is also not ideal – keeping the application up when an unknown issue (programmer error) occurred might lead to unpredictable behavior. Differentiating the two allows acting tactfully and applying a balanced approach based on the given context
Code example - doing it right
//throwing an Error from typical function, whether sync or async
if(!productToAdd)
throw new Error("How can I add new product when no value provided?");
//'throwing' an Error from EventEmitter
const myEmitter = new MyEmitter();
myEmitter.emit('error', new Error('whoops!'));
//'throwing' an Error from a Promise
return new Promise(function (resolve, reject) {
DAL.getProduct(productToAdd.id).then((existingProduct) =>{
if(existingProduct != null)
return reject(new Error("Why fooling us and trying to add an existing product?"));
code example - marking an error as operational (trusted)
//marking an error object as operational
var myError = new Error("How can I add new product when no value provided?");
myError.isOperational = true;
//or if you're using some centralized error factory (see other examples at the bullet "Use only the built-in Error object")
function appError(commonType, description, isOperational) {
Error.call(this);
Error.captureStackTrace(this);
this.commonType = commonType;
this.description = description;
this.isOperational = isOperational;
};
throw new appError(errorManagement.commonErrors.InvalidInput, "Describe here what happened", true);
//error handling code within middleware
process.on('uncaughtException', function(error) {
if(!error.isOperational)
process.exit(1);
});
Blog Quote: "Otherwise you risk the state"
(From the blog debugable, ranked 3 for the keywords "Node.JS uncaught exception")
"…By the very nature of how throw works in JavaScript, there is almost never any way to safely “pick up where you left off”, without leaking references, or creating some other sort of undefined brittle state. The safest way to respond to a thrown error is to shut down the process. Of course, in a normal web server, you might have many connections open, and it is not reasonable to abruptly shut those down because an error was triggered by someone else. The better approach is to send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker"
Number4: Handle errors centrally, through but not within middleware
TL;DR: Error handling logic such as mail to admin and logging should be encapsulated in a dedicated and centralized object that all end-points (e.g. Express middleware, cron jobs, unit-testing) call when an error comes in.
Otherwise: Not handling errors within a single place will lead to code duplication and probably to errors that are handled improperly
Code example - a typical error flow
//DAL layer, we don't handle errors here
DB.addDocument(newCustomer, (error, result) => {
if (error)
throw new Error("Great error explanation comes here", other useful parameters)
});
//API route code, we catch both sync and async errors and forward to the middleware
try {
customerService.addNew(req.body).then(function (result) {
res.status(200).json(result);
}).catch((error) => {
next(error)
});
}
catch (error) {
next(error);
}
//Error handling middleware, we delegate the handling to the centralized error handler
app.use(function (err, req, res, next) {
errorHandler.handleError(err).then((isOperationalError) => {
if (!isOperationalError)
next(err);
});
});
Blog quote: "Sometimes lower levels can’t do anything useful except propagate the error to their caller"
(From the blog Joyent, ranked 1 for the keywords “Node.JS error handling”)
"…You may end up handling the same error at several levels of the stack. This happens when lower levels can’t do anything useful except propagate the error to their caller, which propagates the error to its caller, and so on. Often, only the top-level caller knows what the appropriate response is, whether that’s to retry the operation, report an error to the user, or something else. But that doesn’t mean you should try to report all errors to a single top-level callback, because that callback itself can’t know in what context the error occurred"
Number5: Document API errors using Swagger
TL;DR: Let your API callers know which errors might come in return so they can handle these thoughtfully without crashing. This is usually done with REST API documentation frameworks like Swagger
Otherwise: An API client might decide to crash and restart only because it received back an error it couldn't understand. Note: the caller of your API might be you (very typical in a microservices environment)
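Code example - a sketch of documenting error responses (added here for illustration; the route, status codes and descriptions are assumptions, not taken from the original list):
// An OpenAPI/Swagger-style fragment, expressed as a plain JS object,
// describing which errors POST /product may return
var addProductDocs = {
  '/product': {
    post: {
      responses: {
        '400': { description: 'Invalid input - the product payload failed validation' },
        '409': { description: 'Conflict - a product with this id already exists' },
        '500': { description: 'Unexpected internal error - safe to retry later' }
      }
    }
  }
};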
Blog quote: "You have to tell your callers what errors can happen"
(From the blog Joyent, ranked 1 for the keywords “Node.JS logging”)
…We've talked about how to handle errors, but when you're writing a new function, how do you deliver errors to the code that called your function? …If you don't know what errors can happen or don't know what they mean, then your program cannot be correct except by accident. So if you're writing a new function, you have to tell your callers what errors can happen and what they mean
Number6: Shut the process gracefully when a stranger comes to town
TL;DR: When an unknown error occurs (a developer error, see best practice #3) there is uncertainty about the application's health. A common practice suggests restarting the process carefully using a 'restarter' tool like Forever or PM2
Otherwise: When an unfamiliar exception is caught, some object might be left in a faulty state (e.g. an event emitter which is used globally and is no longer firing events due to some internal failure) and all future requests might fail or behave crazily
Code example - deciding whether to crash
//deciding whether to crash when an uncaught exception arrives
//Assuming developers mark known operational errors with error.isOperational=true, read best practice #3
process.on('uncaughtException', function(error) {
errorManagement.handler.handleError(error);
if(!errorManagement.handler.isTrustedError(error))
process.exit(1)
});
//centralized error handler encapsulates error-handling related logic
function errorHandler() {
this.handleError = function (error) {
return logger.logError(error).then(sendMailToAdminIfCritical).then(saveInOpsQueueIfCritical).then(determineIfOperationalError);
}
this.isTrustedError = function (error) {
return error.isOperational;
}
}
Blog quote: "There are three schools of thoughts on error handling"
(From the blog jsrecipes)
…There are primarily three schools of thought on error handling: 1. Let the application crash and restart it. 2. Handle all possible errors and never crash. 3. A balanced approach between the two
Number7: Use a mature logger to increase errors visibility
TL;DR: A set of mature logging tools like Winston, Bunyan or Log4J will speed up error discovery and understanding. So forget about console.log.
Otherwise: Skimming through console.logs or manually searching through a messy text file, without querying tools or a decent log viewer, might keep you busy at work until late
Code example - Winston logger in action
//your centralized logger object
var winston = require('winston');
var logger = new winston.Logger({
level: 'info',
transports: [
new (winston.transports.Console)(),
new (winston.transports.File)({ filename: 'somefile.log' })
]
});
//custom code somewhere using the logger
logger.log('info', 'Test Log Message with some parameter %s', 'some parameter', { anything: 'This is metadata' });
Blog quote: "Lets identify a few requirements (for a logger):"
(From the blog strongblog)
…Let's identify a few requirements (for a logger):
1. Time stamp each log line. This one is pretty self-explanatory – you should be able to tell when each log entry occurred.
2. Logging format should be easily digestible by humans as well as machines.
3. Allows for multiple configurable destination streams. For example, you might be writing trace logs to one file but when an error is encountered, write to the same file, then into error file and send an email at the same time…
Number8: Discover errors and downtime using APM products
TL;DR: Monitoring and performance products (a.k.a APM) proactively gauge your codebase or API so they can auto-magically highlight errors, crashes and slow parts that you were missing
Otherwise: You might spend great effort on measuring API performance and downtime, yet you'll probably never be aware of your slowest code parts under real-world scenarios and how they affect the UX
Blog quote: "APM products segments"
(From the blog Yoni Goldberg)
"…APM products constitutes 3 major segments:1. Website or API monitoring – external services that constantly monitor uptime and performance via HTTP requests. Can be setup in few minutes. Following are few selected contenders: Pingdom, Uptime Robot, and New Relic
2. Code instrumentation – products family which require to embed an agent within the application to benefit feature slow code detection, exceptions statistics, performance monitoring and many more. Following are few selected contenders: New Relic, App Dynamics
3. Operational intelligence dashboard – these line of products are focused on facilitating the ops team with metrics and curated content that helps to easily stay on top of application performance. This is usually involves aggregating multiple sources of information (application logs, DB logs, servers log, etc) and upfront dashboard design work. Following are few selected contenders: Datadog, Splunk"
The above is a shortened version - see here for more best practices and examples
You can catch uncaught exceptions, but it's of limited use. See http://debuggable.com/posts/node-js-dealing-with-uncaught-exceptions:4c933d54-1428-443c-928d-4e1ecbdd56cb
monit, forever or upstart can be used to restart the node process when it crashes. A graceful shutdown is the best you can hope for (e.g. save all in-memory data in your uncaught exception handler).
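A minimal sketch of that idea (saveInMemoryData and server are hypothetical names used only for illustration):
process.on('uncaughtException', function (err) {
  console.error('Uncaught exception, shutting down', err);
  // hypothetical helper that flushes in-memory state to disk or a database
  saveInMemoryData(function () {
    // stop accepting new connections, then exit so monit/forever/upstart restarts us
    server.close(function () {
      process.exit(1);
    });
  });
});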
Node.js domains are the most up-to-date way of handling errors in Node.js. Domains can capture both error/other events as well as traditionally thrown objects. Domains also provide functionality for handling callbacks with an error passed as the first argument via the intercept method.
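A small sketch of that intercept method (the missing file path here is an illustrative assumption):
var domain = require('domain');
var fs = require('fs');

var d = domain.create();
d.on('error', function (err) {
  // thrown errors and callback errors passed as the first argument both end up here
  console.log('domain caught', err);
});

d.run(function () {
  // d.intercept() strips the err argument: the wrapped callback only
  // receives the remaining arguments when no error occurred
  fs.readFile('/some/missing/file', d.intercept(function (data) {
    console.log('file contents:', data.toString());
  }));
});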
As with normal try/catch-style error handling, it is usually best to throw errors when they occur, and block out areas where you want to isolate errors from affecting the rest of the code. The way to "block out" these areas is to call domain.run with a function as a block of isolated code.
In synchronous code, the above is enough - when an error happens you either let it be thrown through, or you catch it and handle there, reverting any data you need to revert.
try {
//something
} catch(e) {
// handle data reversion
// probably log too
}
When the error happens in an asynchronous callback, you either need to be able to fully handle the rollback of data (shared state, external data like databases, etc.), OR you have to set something to indicate that an exception has happened - wherever you care about that flag, you have to wait for the callback to complete.
var err = null;
var d = require('domain').create();
d.on('error', function(e) {
err = e;
// any additional error handling
})
d.run(function() { Fiber(function() { // Fiber comes from the 'fibers' package (see the update below)
// do stuff
var future = somethingAsynchronous();
// more stuff
future.wait(); // here we care about the error
if(err != null) {
// handle data reversion
// probably log too
}
})});
Some of that above code is ugly, but you can create patterns for yourself to make it prettier, eg:
var specialDomain = specialDomain(function() {
// do stuff
var future = somethingAsynchronous();
// more stuff
future.wait(); // here we care about the error
if(specialDomain.error()) {
// handle data reversion
// probably log too
}
}, function() { // "catch"
// any additional error handling
});
UPDATE (2013-09):
Above, I use a future that implies fibers semantics, which allows you to wait on futures in-line. This actually allows you to use traditional try-catch blocks for everything - which I find to be the best way to go. However, you can't always do this (i.e. in the browser)...
There are also futures that don't require fibers semantics (which then work with normal, browsery JavaScript). These can be called futures, promises, or deferreds (I'll just refer to futures from here on). Plain-old-JavaScript futures libraries allow errors to be propagated between futures. Only some of these libraries allow any thrown exception to be correctly handled, so beware.
An example:
returnsAFuture().then(function() {
console.log('1')
return doSomething() // also returns a future
}).then(function() {
console.log('2')
throw Error("oops an error was thrown")
}).then(function() {
console.log('3')
}).catch(function(exception) {
console.log('handler')
// handle the exception
}).done()
This mimics a normal try-catch, even though the pieces are asynchronous. It would print:
1
2
handler
Note that it doesn't print '3' because an exception was thrown that interrupts that flow.
Take a look at bluebird promises:
https://github.com/petkaantonov/bluebird
Note that I haven't found many libraries other than these that properly handle thrown exceptions. jQuery's deferreds, for example, don't - the "fail" handler would never get the exception thrown in a 'then' handler, which in my opinion is a deal breaker.
I wrote about this recently at http://snmaynard.com/2012/12/21/node-error-handling/. A new feature of Node in version 0.8 is domains, which allow you to combine all the forms of error handling into one easier-to-manage form. You can read about them in my post.
You can also use something like Bugsnag to track your uncaught exceptions and be notified via email, chatroom or have a ticket created for an uncaught exception (I am the co-founder of Bugsnag).
One instance where using a try-catch might be appropriate is when using a forEach loop. It is synchronous but at the same time you cannot just use a return statement in the inner scope. Instead a try and catch approach can be used to return an Error object in the appropriate scope. Consider:
function processArray() {
try {
[1, 2, 3].forEach(function() { throw new Error('exception'); });
} catch (e) {
return e;
}
}
It is a combination of the approaches described by @balupton above.
I would just like to add that the Step.js library helps you handle exceptions by always passing them to the next step function. Therefore you can have as a last step a function that checks for errors from any of the previous steps. This approach can greatly simplify your error handling.
Below is a quote from the github page:
any exceptions thrown are caught and passed as the first argument to
the next function. As long as you don't nest callback functions inline
your main functions this prevents there from ever being any uncaught
exceptions. This is very important for long running node.JS servers
since a single uncaught exception can bring the whole server down.
Furthermore, you can use Step to control execution of scripts to have a clean up section as the last step. For example if you want to write a build script in Node and report how long it took to write, the last step can do that (rather than trying to dig out the last callback).
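To make that concrete, here is a minimal sketch in Step's documented style (reading this file is just an illustrative task, not part of the original answer):
var Step = require('step');
var fs = require('fs');

Step(
  function readSelf() {
    // `this` is the step callback; errors flow into the next step's first argument
    fs.readFile(__filename, 'utf8', this);
  },
  function capitalize(err, text) {
    if (err) throw err; // Step catches this and passes it on instead of crashing
    return text.toUpperCase();
  },
  function done(err, newText) {
    // last step: one place to check for errors from any previous step
    if (err) console.error('build failed:', err);
    else console.log('processed ' + newText.length + ' characters');
  }
);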
Catching errors has been very well discussed here, but it's worth remembering to log the errors out somewhere so you can view them and fix stuff up.
Bunyan is a popular logging framework for NodeJS - it supports writing out to a bunch of different output destinations, which makes it useful for local debugging, as long as you avoid console.log.
In your domain's error handler you could spit the error out to a log file.
var bunyan = require('bunyan');
var log = bunyan.createLogger({
name: 'myapp',
streams: [
{
level: 'error',
path: '/var/tmp/myapp-error.log' // log ERROR to this file
}
]
});
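For example, a small sketch of wiring that logger into a domain's error handler:
var d = require('domain').create();
d.on('error', function (err) {
  // writes the error (message and stack) to the bunyan streams defined above
  log.error(err, 'unhandled error caught by domain');
});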
This can get time consuming if you have lots of errors and/or servers to check, so it could be worth looking into a tool like Raygun (disclaimer, I work at Raygun) to group errors together - or use them both together.
If you decide to use Raygun as a tool, it's pretty easy to set up too
var raygun = require('raygun');
var raygunClient = new raygun.Client().init({ apiKey: 'your API key' });
raygunClient.send(theError);
Combined with a tool like PM2 or forever, your app should be able to crash, log what happened and reboot without any major issues.
After reading this post some time ago I was wondering if it was safe to use domains for exception handling on an api / function level. I wanted to use them to simplify exception handling code in each async function I wrote. My concern was that using a new domain for each function would introduce significant overhead. My homework seems to indicate that there is minimal overhead and that performance is actually better with domains than with try catch in some situations.
http://www.lighthouselogic.com/#/using-a-new-domain-for-each-async-function-in-node/
If you want to use services in Ubuntu (Upstart): Node as a service in Ubuntu 11.04 with upstart, monit and forever.js
// defensive lookup: each try/catch returns the error message instead of throwing
getCountryRegionData: (countryName, stateName) => {
let countryData, stateData
try {
countryData = countries.find(
country => country.countryName === countryName
)
} catch (error) {
console.log(error.message)
return error.message
}
// if no country matched above, countryData is undefined and reading .regions throws here
try {
stateData = countryData.regions.find(state => state.name === stateName)
} catch (error) {
console.log(error.message)
return error.message
}
return {
countryName: countryData.countryName,
countryCode: countryData.countryShortCode,
stateName: stateData.name,
stateCode: stateData.shortCode,
}
},
Related
I have a situation like this where I make some web requests in parallel. Sometimes I make these calls and all requests see the same error (e.g. no-network):
void main() {
Observable.just("a", "b", "c")
.flatMap(s -> makeNetworkRequest())
.subscribe(
s -> {
// TODO
},
error -> {
// handle error
});
}
Observable<String> makeNetworkRequest() {
return Observable.error(new NoNetworkException());
}
class NoNetworkException extends Exception {
}
Depending on the timing, if one request emits the NoNetworkException before the others can, Retrofit/RxJava will dispose/interrupt** the others. I'll see one of the following logs (not all three) for each request remaining in progress++:
<-- HTTP FAILED: java.io.IOException: Canceled
<-- HTTP FAILED: java.io.InterruptedIOException
<-- HTTP FAILED: java.io.InterruptedIOException: thread interrupted
I'll be able to handle the NoNetworkException error in the subscriber and everything downstream will get disposed of and all is OK.
However, based on timing, if two or more web requests emit NoNetworkException, then the first one will trigger the events above, disposing of everything downstream. The second NoNetworkException will have nowhere to go and I'll get the dreaded UndeliverableException. This is the same as example #1 documented here.
In the above article, the author suggested using an error handler. Obviously retry/retryWhen don't make sense if I expect to hear the same errors again. I don't understand how onErrorResumeNext/onErrorReturn help here, unless I map them to something recoverable to be handled downstream:
Observable.just("a", "b", "c")
.flatMap(s ->
makeNetworkRequest()
.onErrorReturn(error -> {
// eat actual error and return something else
return "recoverable error";
}))
.subscribe(
s -> {
if (s.equals("recoverable error")) {
// handle error
} else {
// TODO
}
},
error -> {
// handle error
});
but this seems wonky.
I know another solution is to set a global error handler with RxJavaPlugins.setErrorHandler(). This doesn't seem like a great solution either. I may want to handle NoNetworkException differently in different parts of my app.
So what other options do I have? What do other people do in this case? This must be pretty common.
** I don't fully understand who is interrupting/disposing of whom. Is RxJava disposing of all other requests in flatMap, which in turn causes Retrofit to cancel requests? Or does Retrofit cancel requests, resulting in each request in flatMap emitting one of the above IOExceptions? I guess it doesn't really matter for answering the question, just curious.
++ It's possible that not all a, b, and c requests are in flight depending on thread pool.
Have you tried using flatMap() with delayErrors=true?
I'm constantly (for years now) wondering about the most sensible way to implement the following (it's kind of paradoxical for me):
Imagine a function:
DoSomethingWith(value)
{
if (value == null) { // Robust: Check parameter(s) first
throw new ArgumentNullException(value);
}
// Some code ...
}
It's called like:
SomeFunction()
{
if (value == null) { // Fail early
InformUser();
return;
}
DoSomethingWith(value);
}
But, to catch the ArgumentNullException, should I do:
SomeFunction()
{
if (value == null) { // Fail early
InformUser();
return;
}
try { // If throwing an Exception, why not *not* check for it (even if you checked already)?
DoSomethingWith(value);
} catch (ArgumentNullException) {
InformUser();
return;
}
}
or just:
SomeFunction()
{
try { // No fail early anymore IMHO, because you could fail before calling DoSomethingWith(value)
DoSomethingWith(value);
} catch (ArgumentNullException) {
InformUser();
return;
}
}
?
This is a very general question and the right solution depends on the specific code and architecture.
Generally regarding error handling
The main focus should be to catch the exception on the level where you can handle it.
Handling the exceptions at the right place makes the code robust, so the exception doesn't make the application fail and the exception can be handled accordingly.
Failing early makes the application robust, because it helps avoid inconsistent states.
This also means that there should be a more general try catch block at the root of the execution to catch any non-fatal application error which couldn't be handled elsewhere. Such an error often means that you as a programmer didn't think of this error source. Later you can extend your code to also handle this error. But the execution root shouldn't be the only place where you think of exception handling.
Your example
In your example regarding ArgumentNullException:
Yes, you should fail early. Whenever your method is invoked with an invalid null argument, you should throw this exception.
But you should never catch this exception, because it should be possible to avoid it. See this post related to the topic: If catching null pointer exception is not a good practice, is catching exception a good one?
If you are working with user input or input from other systems, then you should validate the input. E.g. you can display a validation error on the UI after null checking, without throwing an exception. It is always a critical part of error handling how to show the issues to users, so define a proper strategy for your application. You should try to avoid throwing exceptions in the expected program execution flow. See this article: https://msdn.microsoft.com/en-us/library/ms173163.aspx
See general thoughts about exception handling below:
Handling exceptions in your method
If an exception is thrown in the DoSomethingWith method and you can handle it and continue the flow of execution without any issue, then of course you should do so.
This is a pseudo code example for retrying a database operation:
void DoSomethingAndRetry(value)
{
try
{
SaveToDatabase(value);
}
catch(DeadlockException ex)
{
//deadlock happened, we are retrying the SQL statement
SaveToDatabase(value);
}
}
Letting the exception bubble up
Let's assume your method is public. If an exception happens which implies that the method failed and you can't continue execution, then the exception should bubble up, so that the calling code can handle it accordingly. It depends on the use case how the calling code would react to the exception.
Before letting the exception bubble up you may wrap it into another application-specific exception as an inner exception to add additional context information. You may also process the exception somehow, e.g. log it, or leave the logging to the calling code, depending on your logging architecture.
public bool SaveSomething(value)
{
try
{
SaveToFile(value);
}
catch(FileNotFoundException ex)
{
//process exception if needed, E.g. log it
ProcessException(ex);
//you may want to wrap this exception into another one to add context info
throw WrapIntoNewExceptionWithSomeDetails(ex);
}
}
Documenting possible exceptions
In .NET it is also helpful to define which exceptions your method is throwing and reasons why it might throw it. So that the calling code can take this into consideration. See https://msdn.microsoft.com/en-us/library/w1htk11d.aspx
Example:
/// <exception cref="System.Exception">Thrown when something happens..</exception>
DoSomethingWith(value)
{
...
}
Ignoring exceptions
For methods where you are OK with a failing method and don't want to add a try catch block around it all the time, you could create a method with similar signature:
public bool TryDoSomethingWith(value)
{
try
{
DoSomethingWith(value);
return true;
}
catch(Exception ex)
{
//process exception if needed, e.g. log it
ProcessException(ex);
return false;
}
}
When working with Error objects in NodeJS, there is the standard 'error' event that can be emitted by EventEmitters, which gets automatically turned into a thrown error if it's not handled. But what about varying levels of error severity (i.e. "info" or "notice" errors vs. "warning" or "critical")? As far as I can see, there's no type property on the default Error object. Is there any existing standard way to pass/handle errors of various severities? I'm thinking of implementing my own standard for my modules such that listeners would look like:
myThing.on('error', function(err) {
if (err.severity & (ERR_WARNING | ERR_CRITICAL)) {
console.log("Critical Failure!", err);
process.exit(0);
}
});
Meaning they could set their own debugging level and ignore errors that didn't meet that threshold. Does something like this already exist as a standard?
The best thing you can do is use another event name for non-critical errors (such as warning). Also you can implement your own "subclass" of Error like this:
var inherits = require('util').inherits;
function MyError (message, severity) {
this.name = 'MyError';
this.severity = severity;
this.message = message;
Error.captureStackTrace(this, MyError);
}
inherits(MyError, Error);
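A possible usage sketch, assuming severity flags like the ones in the question (the emitter and constants below are illustrative, not a standard):
var EventEmitter = require('events').EventEmitter;
var ERR_WARNING = 1, ERR_CRITICAL = 2; // illustrative severity flags

var myThing = new EventEmitter();
myThing.on('error', function (err) {
  if (err.severity >= ERR_CRITICAL) {
    console.error('Critical failure!', err);
    process.exit(1);
  } else {
    console.warn(err.message); // non-critical: log and keep running
  }
});

myThing.emit('error', new MyError('disk almost full', ERR_WARNING));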
Is there any existing standard way to pass/handle errors of various severities?
Yes, do a console.error("something is wrong"). This way your app will continue to operate normally, but will output warnings to stderr ensuring that people will look into it eventually.
I first tried a general description of the problem, then some more detail on why the usual approaches don't work. If you would like to read these abstracted explanations, go on. At the end I explain the greater problem and the specific application, so if you would rather read that, jump to "Actual application".
I am using a node.js child process to do some computationally intensive work. The parent process does its work, but at some point in the execution it reaches a point where it must have the information from the child process before continuing. Therefore, I am looking for a way to wait for the child process to finish.
My current setup looks somewhat like this:
var fork = require('child_process').fork;
importantDataCalculator = fork("./runtime");
importantDataCalculator.on("message", function (msg) {
if (msg.type === "result") {
importantData = msg.data;
} else if (msg.type === "error") {
importantData = null;
} else {
throw new Error("Unknown message from dataGenerator!");
}
});
and somewhere else
function getImportantData() {
while (importantData === undefined) {
// wait for the importantDataGenerator to finish
}
if (importantData === null) {
throw new Error("Data could not be generated.");
} else {
// we should have proper data now
return importantData;
}
}
So when the parent process starts, it executes the first bit of code, spawning a child process to calculate the data, and goes on doing its own bit of work. When the time comes that it needs the result from the child process to continue, it calls getImportantData(). So the idea is that getImportantData() blocks until the data is calculated.
However, the way I used doesn't work. I think this is due to me preventing the event loop from executing by using the while loop. And since the event loop does not execute, no message from the child process can be received, and thus the condition of the while loop cannot change, making it an infinite loop.
Of course, I don't really want to use this kind of while loop. What I would rather do is tell node.js "execute one iteration of the event loop, then get back to me". I would do this repeatedly, until the data I need has been received, and then continue the execution where I left off by returning from the getter.
I realize that this poses the danger of reentering the same function several times, but the module I want to use this in does almost nothing on the event loop except for waiting for this message from the child process and sending out other messages reporting its progress, so that shouldn't be a problem.
Is there a way to execute just one iteration of the event loop in Node.js? Or is there another way to achieve something similar? Or is there a completely different approach to achieve what I'm trying to do here?
The only solution I could think of so far is to change the calculation in such a way that I introduce yet another process. In this scenario, there would be the process calculating the important data, a process calculating the bits of data for which the important data is not needed and a parent process for these two, which just waits for data from the two child-processes and combines the pieces when they arrive. Since it does not have to do any computationally intensive work itself, it can just wait for events from the event loop (=messages) and react to them, forwarding the combined data as necessary and storing pieces of data that cannot be combined yet.
However this introduces yet another process and even more inter-process communication, which introduces more overhead, which I would like to avoid.
Edit
I see that more detail is needed.
The parent process (let's call it process 1) is itself a process spawned by another process (process 0) to do some computationally intensive work. Actually, it just executes some code over which I don't have control, so I cannot make it work asynchronously. What I can do (and have done) is make the code that is executed regularly call a function to report its progress and provide partial results. This progress report is then sent back to the original process via IPC.
But in rare cases the partial results are not correct, so they have to be modified. To do so I need some data I can calculate independently from the normal calculation. However, this calculation could take several seconds; thus, I start another process (process 2) to do this calculation and provide the result to process 1 via an IPC message. Now process 1 and 2 are happily calculating their stuff, and hopefully the corrective data calculated by process 2 is finished before process 1 needs it. But sometimes one of the early results of process 1 needs to be corrected and in that case I have to wait for process 2 to finish its calculation. Blocking the event loop of process 1 is theoretically not a problem, since the main process (process 0) would not be affected by it. The only problem is that by preventing the further execution of code in process 1 I am also blocking the event loop, which prevents it from ever receiving the result from process 2.
So I need to somehow pause the further execution of code in process 1 without blocking the event loop. I was hoping that there was a call like process.runEventLoopIteration that executes an iteration of the event loop and then returns.
I would then change the code like this:
function getImportantData() {
while (importantData === undefined) {
process.runEventLoopIteration();
}
if (importantData === null) {
throw new Error("Data could not be generated.");
} else {
// we should have proper data now
return importantData;
}
}
thus executing the event loop until I have received the necessary data but NOT continuing the execution of the code that called getImportantData().
Basically what I'm doing in process 1 is this:
function callback(partialDataMessage) {
if (partialDataMessage.needsCorrection) {
getImportantData();
// use data to correct message
process.send(correctedMessage); // send corrected result to main process
} else {
process.send(partialDataMessage); // send unmodified result to main process
}
}
function executeCode(code) {
run(code, callback); // the callback will be called from time to time when the code produces new data
// this call is synchronous, run is blocking until the calculation is finished
// so if we reach this point we are done
// the only way to pause the execution of the code is to NOT return from the callback
}
Actual application/implementation/problem
I need this behaviour for the following application. If you have a better approach to achieve this feel free to propose it.
I want to execute arbitrary code and be notified about what variables it changes, what functions are called, what exceptions occur etc. I also need the location of these events in the code to be able to display the gathered information in the UI next to the original code.
To achieve this, I instrument the code and insert callbacks into it. I then execute the code, wrapping the execution in a try-catch block. Whenever the callback is called with some data about the execution (e.g. a variable change) I send a message to the main process telling it about the change. This way, the user is notified about the execution of the code, while it is running. The location information for the events generated by these callbacks is added to the callback call during the instrumentation, so that is not a problem.
The problem appears when an exception occurs. I also want to notify the user about exceptions in the tested code. Therefore, I wrapped the execution of the code in a try-catch and any exceptions that get out of the execution are caught and sent to the user interface. But the location of the errors is not correct. An Error object created by node.js has a complete call stack so it knows where it occurred. But this location is relative to the instrumented code, so I cannot use this location information as is to display the error next to the original code. I need to transform this location in the instrumented code into a location in the original code. To do so, after instrumenting the code, I calculate a source map to map locations in the instrumented code to locations in the original code. However, this calculation might take several seconds. So, I figured, I would start a child process to calculate the source map, while the execution of the instrumented code is already started. Then, when an exception occurs, I check whether the source map has already been calculated, and if it hasn't I wait for the calculation to finish to be able to correct the location.
Since the code to be executed and watched can be completely arbitrary I cannot trivially rewrite it to be asynchronous. I only know that it calls the provided callback, because I instrumented the code to do so. I also cannot just store the message and return to continue the execution of the code, checking back during the next call whether the source map has been finished, because continuing the execution of the code would also block the event-loop, preventing the calculated source map from ever being received in the execution process. Or if it is received, then only after the code to execute has completely finished, which could be quite late or never (if the code to execute contains an infinite loop). But before I receive the sourceMap I cannot send further updates about the execution state. Combined, this means I would only be able to send the corrected progress messages after the code to execute has finished (which might be never) which completely defeats the purpose of the program (to enable the programmer to watch what the code does, while it executes).
Temporarily surrendering control to the event loop would solve this problem. However, that does not seem to be possible. The other idea I have is to introduce a third process which controls both the execution process and the sourceMapGeneration process. It receives progress messages from the execution process and if any of the messages needs correction it waits for the sourceMapGeneration process. Since the processes are independent, the controlling process can store the received messages and wait for the sourceMapGeneration process while the execution process continues executing, and as soon as it receives the source map, it corrects the messages and sends all of them off.
However, this would not only require yet another process (overhead), it also means I have to transfer the code once more between processes, and since the code can have thousands of lines, that in itself can take some time, so I would like to move it around as little as possible.
I hope this explains, why I cannot and didn't use the usual "asynchronous callback" approach.
Adding a third ( :) ) solution to your problem after you clarified what behavior you seek: I suggest using Fibers.
Fibers let you do co-routines in nodejs. Coroutines are functions that allow multiple entry/exit points. This means you will be able to yield control and resume it as you please.
Here is a sleep function from the official documentation that does exactly that, sleep for a given amount of time and perform actions.
var Fiber = require('fibers'); // from the 'fibers' package
function sleep(ms) {
var fiber = Fiber.current;
setTimeout(function() {
fiber.run();
}, ms);
Fiber.yield();
}
Fiber(function() {
console.log('wait... ' + new Date);
sleep(1000);
console.log('ok... ' + new Date);
}).run();
console.log('back in main');
You can place the code that does the waiting for the resource in a function, causing it to yield and then run again when the task is done.
For example, adapting your example from the question:
var pausedExecution, importantData;
function getImportantData() {
while (importantData === undefined) {
pausedExecution = Fiber.current;
Fiber.yield();
pausedExecution = undefined;
}
if (importantData === null) {
throw new Error("Data could not be generated.");
} else {
// we should have proper data now
return importantData;
}
}
function callback(partialDataMessage) {
if (partialDataMessage.needsCorrection) {
var theData = getImportantData();
// use data to correct message
process.send(correctedMessage); // send corrected result to main process
} else {
process.send(partialDataMessage); // send unmodified result to main process
}
}
function executeCode(code) {
// setup child process to calculate the data
importantDataCalculator = fork("./runtime");
importantDataCalculator.on("message", function (msg) {
if (msg.type === "result") {
importantData = msg.data;
} else if (msg.type === "error") {
importantData = null;
} else {
throw new Error("Unknown message from dataGenerator!");
}
if (pausedExecution) {
// execution is waiting for the data
pausedExecution.run();
}
});
// wrap the execution of the code in a Fiber, so it can be paused
Fiber(function () {
runCodeWithCallback(code, callback); // the callback will be called from time to time when the code produces new data
// this callback is synchronous and blocking,
// but it will yield control to the event loop if it has to wait for the child-process to finish
}).run();
}
Good luck! I always say it is better to solve one problem in 3 ways than to solve 3 problems the same way. I'm glad we were able to work out something that worked for you. Admittedly, this was a pretty interesting question.
The rule of asynchronous programming is, once you've entered asynchronous code, you must continue to use asynchronous code. While you can continue to call the function over and over via setImmediate or something of the sort, you still have the issue that you're trying to return from an asynchronous process.
Without knowing more about your program, I can't tell you exactly how you should structure it, but by and large the way to "return" data from a process that involves asynchronous code is to pass in a callback; perhaps this will put you on the right track:
function getImportantData(callback) {
importantDataCalculator = fork("./runtime");
importantDataCalculator.on("message", function (msg) {
if (msg.type === "result") {
callback(null, msg.data);
} else if (msg.type === "error") {
callback(new Error("Data could not be generated."));
} else {
callback(new Error("Unknown message from sourceMapGenerator!"));
}
});
}
You would then use this function like this:
getImportantData(function(error, data) {
if (error) {
// handle the error somehow
} else {
// `data` is the data from the forked process
}
});
I talk about this in a bit more detail in one of my screencasts, Thinking Asynchronously.
What you are running into is a very common scenario that skilled programmers who are starting with nodejs often struggle with.
You're correct. You can't do this the way you are attempting (loop).
The main process in node.js is single threaded and you are blocking the event loop.
The simplest way to resolve this is something like:
function getImportantData() {
if(importantData === undefined){ // not set yet
setImmediate(getImportantData); // try again on the next event loop cycle
return; //stop this attempt
}
if (importantData === null) {
throw new Error("Data could not be generated.");
} else {
// we should have proper data now
return importantData;
}
}
What we are doing, is that the function is re-attempting to process the data on the next iteration of the event loop using setImmediate.
This introduces a new problem though, your function returns a value. Since it will not be ready, the value you are returning is undefined. So you have to code reactively. You need to tell your code what to do when the data arrives.
This is typically done in node with a callback
function getImportantData(err,whenDone) {
if(importantData === undefined){ // not set yet
setImmediate(getImportantData.bind(null, err, whenDone)); // try again on the next event loop cycle, keeping both callbacks
return; //stop this attempt
}
if (importantData === null) {
err("Data could not be generated.");
} else {
// we should have proper data now
whenDone(importantData);
}
}
This can be used in the following way
getImportantData(function(err){
throw new Error(err); // error handling function callback
}, function(data){ //this is whenDone in our case
//perform actions on the important data
})
Your question (updated) is very interesting; it appears to be closely related to a problem I had with asynchronously catching exceptions. (Also, Brandon and I had an interesting discussion about it! It's a small world.)
See this question on how to catch exceptions asynchronously. The key concept is that you can use (assuming nodejs 0.8+) nodejs domains to constrain the scope of an exception.
This will allow you to easily get the location of the exception, since you can surround asynchronous blocks with a try/catch. I think this should solve the bigger issue here.
You can find the relevant code in the linked question. The usage is something like:
atry(function() {
setTimeout(function(){
throw "something";
},1000);
}).catch(function(err){
console.log("caught "+err);
});
Since you have access to the scope of atry you can get the stack trace there which would let you skip the more complicated source-map usage.
Good luck!
I am working on a websocket-oriented node.js server using Socket.IO. I noticed a bug where certain browsers aren't following the correct connect procedure to the server, and the code isn't written to gracefully handle it; in short, it calls a method on an object that was never set up, thus killing the server due to an error.
My concern isn't with the bug in particular, but the fact that when such errors occur, the entire server goes down. Is there anything I can do on a global level in node to make it so if an error occurs it will simply log a message, perhaps kill the event, but the server process will keep on running?
I don't want other users' connections to go down due to one clever user exploiting an uncaught error in a large included codebase.
You can attach a listener to the uncaughtException event of the process object.
Code taken from the actual Node.js API reference (it's the second item under "process"):
process.on('uncaughtException', function (err) {
console.log('Caught exception: ', err);
});
setTimeout(function () {
console.log('This will still run.');
}, 500);
// Intentionally cause an exception, but don't catch it.
nonexistentFunc();
console.log('This will not run.');
All you've got to do now is to log it or do something with it, in case you know under what circumstances the bug occurs, you should file a bug over at Socket.IO's GitHub page:
https://github.com/LearnBoost/Socket.IO-node/issues
Using uncaughtException is a very bad idea.
The best alternative is to use domains in Node.js 0.8. If you're on an earlier version of Node.js, rather use forever to restart your processes, or, even better, use node cluster to spawn multiple worker processes and restart a worker on the event of an uncaughtException.
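A rough sketch of that cluster approach (the port and worker count are illustrative, and the restart policy is kept deliberately simple):
var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  // fork one worker per CPU and replace any worker that dies
  for (var i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, forking a replacement');
    cluster.fork();
  });
} else {
  http.createServer(function (req, res) {
    res.end('hello');
  }).listen(8000);

  process.on('uncaughtException', function (err) {
    console.error(err);
    // let this worker die; the master forks a replacement, other workers keep serving
    process.exit(1);
  });
}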
From: http://nodejs.org/api/process.html#process_event_uncaughtexception
Warning: Using 'uncaughtException' correctly
Note that 'uncaughtException' is a crude mechanism for exception handling intended to be used only as a last resort. The event should not be used as an equivalent to On Error Resume Next. Unhandled exceptions inherently mean that an application is in an undefined state. Attempting to resume application code without properly recovering from the exception can cause additional unforeseen and unpredictable issues.
Exceptions thrown from within the event handler will not be caught. Instead the process will exit with a non-zero exit code and the stack trace will be printed. This is to avoid infinite recursion.
Attempting to resume normally after an uncaught exception can be similar to pulling out the power cord when upgrading a computer -- nine out of ten times nothing happens, but the tenth time, the system becomes corrupted.
The correct use of 'uncaughtException' is to perform synchronous cleanup of allocated resources (e.g. file descriptors, handles, etc) before shutting down the process. It is not safe to resume normal operation after 'uncaughtException'.
To restart a crashed application in a more reliable way, whether uncaughtException is emitted or not, an external monitor should be employed in a separate process to detect application failures and recover or restart as needed.
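In other words, the only pattern the documentation endorses looks roughly like the sketch below: do synchronous cleanup and logging, then exit and let an external monitor (forever, pm2, upstart, etc.) restart the process. The cleanup details here are placeholders:
process.on('uncaughtException', function (err) {
  // synchronous work only: the process is in an unknown state
  console.error('Uncaught exception, shutting down:', err.stack || err);
  // e.g. synchronously flush a crash log, close file descriptors, etc.
  process.exit(1); // let the external process manager restart us
});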
I just did a bunch of research on this (see here, here, here, and here) and the answer to your question is that Node will not allow you to write one error handler that will catch every error scenario that could possibly occur in your system.
Some frameworks like express will allow you to catch certain types of errors (when an async method returns an error object), but there are other conditions that you cannot catch with a global error handler. This is a limitation (in my opinion) of Node and possibly inherent to async programming in general.
For example, say you have the following express handler:
app.get("/test", function(req, res, next) {
require("fs").readFile("/some/file", function(err, data) {
if(err)
next(err);
else
res.send("yay");
});
});
Let's say that the file "/some/file" does not actually exist. In this case fs.readFile will return an error as the first argument to the callback method. If you check for that and do next(err) when it happens, the default express error handler will take over and do whatever you make it do (e.g. return a 500 to the user). That's a graceful way to handle an error. Of course, if you forget to call next(err), it doesn't work.
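For completeness, "whatever you make it do" usually means registering an error-handling middleware (the four-argument signature) after your routes; this is just a sketch, and the 500 response is an example choice:
// Express recognises the 4-argument signature as an error handler
// and invokes it whenever a route calls next(err)
app.use(function (err, req, res, next) {
  console.error(err.stack || err);
  res.send(500); // on newer Express versions: res.status(500).send('Something broke')
});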
So that's an error condition that a global handler can deal with; however, consider another case:
app.get("/test", function(req, res, next) {
require("fs").readFile("/some/file", function(err, data) {
if(err)
next(err);
else {
nullObject.someMethod(); //throws a null reference exception
res.send("yay");
}
});
});
In this case, there is a bug in your code that results in calling a method on a null object. Here an exception will be thrown; it will not be caught by the global error handler, and your node app will terminate. All clients currently executing requests on that service will suddenly be disconnected with no explanation as to why. Ungraceful.
There is currently no global error handler functionality in Node to handle this case. You cannot put a giant try/catch around all your express handlers because by the time your async callback executes, those try/catch blocks are no longer in scope. That's just the nature of async code; it breaks the try/catch error handling paradigm.
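To make that concrete, here is a tiny sketch of why the try/catch doesn't help: the throw happens on a later tick, after the try block has already been exited, so the catch never runs and the process crashes:
try {
  setTimeout(function () {
    throw new Error("boom"); // thrown long after the try block returned...
  }, 100);
} catch (e) {
  // ...so this never executes
  console.log("never reached:", e);
}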
AFAIK, your only recourse here is to put try/catch blocks around the synchronous parts of your code inside each one of your async callbacks, something like this:
app.get("/test", function(req, res, next) {
require("fs").readFile("/some/file", function(err, data) {
if(err) {
next(err);
}
else {
try {
nullObject.someMethod(); //throws a null reference exception
res.send("yay");
}
catch(e) {
res.send(500);
}
}
});
});
That's going to make for some nasty code, especially once you start getting into nested async calls.
Some people think that what Node does in these cases (that is, die) is the proper thing to do because your system is in an inconsistent state and you have no other option. I disagree with that reasoning but I won't get into a philosophical debate about it. The point is that with Node, your options are lots of little try/catch blocks or hope that your test coverage is good enough so that this doesn't happen. You can put something like upstart or supervisor in place to restart your app when it goes down but that's simply mitigation of the problem, not a solution.
Node.js has a currently unstable feature called domains that appears to address this issue, though I don't know much about it.
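For what it's worth, the common pattern with domains in an Express-style app was to wrap each request in its own domain, so that a thrown exception can be turned into a 500 for that one request instead of killing every connection; a rough sketch (this middleware is my own illustration, not a built-in):
var domain = require('domain');
// wrap each incoming request in its own domain
app.use(function (req, res, next) {
  var d = domain.create();
  d.add(req);
  d.add(res);
  d.on('error', function (err) {
    console.error('Error handling', req.url, err.stack || err);
    try { res.send(500); } catch (e) { /* response may already be unusable */ }
    // many guides still recommend gracefully shutting this worker down afterwards
  });
  d.run(next);
});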
I've just put together a class which listens for unhandled exceptions, and when it sees one it:
prints the stack trace to the console
logs it to its own logfile
emails you the stack trace
restarts the server (or kills it, up to you)
It will require a little tweaking for your application as I haven't made it generic as yet, but it's only a few lines and it might be what you're looking for!
Check it out!
(Note: this is over 4 years old at this point, unfinished, and there may now be a better way -- I don't know!)
process.on('uncaughtException', function (err) {
    var stack = err.stack;
    var timeout = 1;
    // print note to logger
    logger.log("SERVER CRASHED!");
    // logger.printLastLogs();
    logger.log(err, stack);
    // save log to timestamped logfile
    // var filename = "crash_" + _2.formatDate(new Date()) + ".log";
    // logger.log("LOGGING ERROR TO "+filename);
    // var fs = require('fs');
    // fs.writeFile('logs/'+filename, log);
    // email log to developer
    if (helper.Config.get('email_on_error') == 'true') {
        logger.log("EMAILING ERROR");
        require('./Mailer'); // this is a simple wrapper around nodemailer http://documentup.com/andris9/nodemailer/
        helper.Mailer.sendMail("GAMEHUB NODE SERVER CRASHED", stack);
        timeout = 10;
    }
    // Send signal to clients
    // logger.log("EMITTING SERVER DOWN CODE");
    // helper.IO.emit(SIGNALS.SERVER.DOWN, "The server has crashed unexpectedly. Restarting in 10s..");
    // If we exit straight away, the write log and send email operations won't have time to run
    setTimeout(function () {
        logger.log("KILLING PROCESS");
        process.exit();
    },
    // timeout * 1000
    timeout * 100000); // extra time. pm2 auto-restarts on crash...
});
Had a similar problem. Ivo's answer is good. But how can you catch an error in a loop and continue?
var folder='/anyFolder';
fs.readdir(folder, function(err,files){
for(var i=0; i<files.length; i++){
var stats = fs.statSync(folder+'/'+files[i]);
}
});
Here, fs.statSync throws an error (on a hidden file in Windows; I don't know why it barfs). The error can be caught by the process.on(...) trick, but the loop stops.
I tried adding a handler directly:
var stats = fs.statSync(folder+'/'+files[i]).on('error',function(err){console.log(err);});
This did not work either.
Adding a try/catch around the questionable fs.statSync() call was the best solution for me:
var stats;
try {
    stats = fs.statSync(path);
} catch (err) {
    console.log(err);
}
This then led to the code fix (making a clean path var from folder and file).
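Alternatively, the asynchronous fs.stat avoids the throw entirely: each error arrives as the first callback argument, and the remaining files are still processed. A sketch:
var fs = require('fs');
var folder = '/anyFolder';
fs.readdir(folder, function (err, files) {
  if (err) return console.log(err);
  files.forEach(function (file) {
    fs.stat(folder + '/' + file, function (err, stats) {
      if (err) return console.log(err); // this file failed, but the others continue
      console.log(file, stats.size);
    });
  });
});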
I found PM2 to be the best solution for handling Node servers, both single and multiple instances.
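For reference, a minimal way to run an app under PM2 (the app name and instance count below are just examples); PM2 restarts the process automatically when it crashes:
npm install -g pm2
pm2 start app.js --name my-app    # single instance, auto-restart on crash
pm2 start app.js -i max           # cluster mode, one worker per CPU
pm2 logs my-app                   # tail the logs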
One way of doing this would be to spawn a child process and communicate with the parent process via the 'message' event.
In the child process where the error occurs, catch it with 'uncaughtException' to avoid crashing the application. Keep in mind that exceptions thrown from within the event handler itself will not be caught. Once the error is caught safely, send a message like: {finish: false}.
The parent process listens for the message event and sends a message back to the child process to re-run the function.
Child Process:
// In child.js
// function causing an exception
const errorComputation = function() {
for (let i = 0; i < 50; i ++) {
console.log('i is.......', i);
if (i === 25) {
throw new Error('i = 25');
}
}
process.send({finish: true});
}
// Note: exceptions thrown from within this handler will not be caught; the process would exit with a non-zero exit code and print the stack trace instead.
process.on('uncaughtException', err => {
console.log('uncaught exception..',err.message);
process.send({finish: false});
});
// listen to the parent process and run the errorComputation again
process.on('message', () => {
console.log('starting process ...');
errorComputation();
})
Parent Process:
// In parent.js
const { fork } = require('child_process');
const compute = fork('child.js');
// listen onto the child process
compute.on('message', (data) => {
if (!data.finish) {
compute.send('start');
} else {
console.log('Child process finish successfully!')
}
});
// send initial message to start the child process.
compute.send('start');