Meteor - How to handle external API connection errors? - node.js

I'm using a few external APIs (some on timers, every minute or so) and sometimes I get connection errors because of network problems or because the external systems are down. When I get these errors, the app restarts, and if the error persists the app keeps restarting.
Any ideas on how can I ignore the connection error and keep the app running?
Code Example:
try {
  var req = https.request(options, callback);
  req.write(JSON.stringify(params));
  req.end();
} catch (e) {
  throw e;
}

Based on your code example: you're doing throw e inside your try/catch. Essentially, you're catching an error and then throwing it again, and that re-throw is what causes your instance to stop. Just do console.error(e), or handle the error however you want, without throwing.
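As a minimal sketch based on the snippet above (options, params and callback are whatever you already have): note that https.request reports network failures asynchronously through the request's 'error' event, so a try/catch around it won't see them; attach a listener and just log instead of throwing.

var req = https.request(options, callback);

// Network failures (DNS errors, refused connections, timeouts) are emitted
// asynchronously on the request object, so try/catch around https.request()
// will not see them. Handle them here and log, without re-throwing.
req.on('error', function (err) {
  console.error('External API request failed:', err);
});

req.write(JSON.stringify(params));
req.end();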

Related

catch never outputs the console even though I know it's failing

I have my mongodb service stopped, so I know that my front end is not connected to my DB. I am using react and express.
Upon my app starting, I want to somehow indicate to the user that the server is offline, so I figured that if my original get call for users fails, then the server is offline.
I'm doing a simple call:
componentDidMount () {
  axios.get('/api/users')
    .then((res) => this.setState(
      { users: res.data }
    ))
    .catch((error) => {
      //console.error(error);
      console.log('error found : offline');
    });
}
But nothing happens in this situation; I never see the catch's console output. Am I going about this wrong? I'm new to backend, so this is all a learning experience for me.
I was going to set a failed flag, render a display error for the user, and then retry the connection every 1500ms or so (is that bad programming?).
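For what it's worth, a rough sketch of that failed-flag-plus-retry idea (the serverOffline state field and the loadUsers method name are made up for illustration):

componentDidMount () {
  this.loadUsers();
}

loadUsers () {
  axios.get('/api/users')
    .then((res) => this.setState({ users: res.data, serverOffline: false }))
    .catch(() => {
      // Flag the server as offline so render() can show a message,
      // then retry the request after 1500ms.
      this.setState({ serverOffline: true });
      setTimeout(() => this.loadUsers(), 1500);
    });
}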

Silently reporting errors to Papertrail

I'm working with a large Node.js application on Heroku, with logging maintained by Papertrail. I have a central error handling and logging function which, at the moment, just logs an error to the console, displays a generic "An error occurred!" dialog, and redirects the client to a specific page (depending on where the error occurred). But neither Papertrail nor Heroku detects this as a real error, and I don't get any notifications or alerts if and when they occur.
At the moment, this is my function:
utilities.errorLogger = (err) => {
  console.error(err);
};
I've tried throwing the error, which works, as below:
utilities.errorLogger = (err) => {
  throw new Error(err);
};
But then the error is displayed to the client, rather than being redirected away, leaving the end user confused. Similarly, putting the error in a try-catch block does not change anything from what I currently have (the error is logged but not picked up on by Papertrail or Heroku).
utilities.errorLogger = (err) => {
  try {
    throw new Error(err);
  }
  catch (err) {
    console.error(err);
  }
};
How can I silently throw an error for Papertrail and Heroku to pick up on and treat as an error, without needing to send the error to the client? I'd like to be able to silently handle the error and have the reporting go on in the background, rather than sending any of the error details to the client.
Ended up finding the answer. I'm using KeystoneJS, which comes with a default error handler that I hadn't seen before; I've modified it to redirect people while still logging the error.
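Roughly, the pattern looks like this (a sketch of an Express-style error-handling middleware, not KeystoneJS's actual handler; the '/error' route is made up for illustration): log the full stack to stderr so the Heroku log drain forwards it to Papertrail, then redirect the client instead of rendering the error.

app.use(function (err, req, res, next) {
  // Log the full stack to stderr; Heroku forwards it to Papertrail.
  console.error(err.stack || err);
  // Send the user somewhere friendly instead of showing them the error.
  res.redirect('/error');
});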

catching exceptions in node.js express proxy application

I have some proxy code like the below. The problem is that whenever the target server is down, this code fails to capture the error, and the entire application crashes with Error: connect ECONNREFUSED.
For a proxy server this is terrible; it needs to just return an error to the caller, not crash altogether the first time the target server is unreachable.
What is the right way around it these days?
Node version 6.
let targetUrl = "http://foo.com/bar"

app.options('/cors-proxy/bar', cors())
app.post('/cors-proxy/bar', function(req, res) {
  console.log(`received message with method ${req.method} and some body ${req.body}`)
  console.log(`relaying message to ${targetUrl}`)
  try {
    req.pipe(
      request({
        url: targetUrl,
        method: req.method,
        json: req.body
      })
    ).pipe(res);
  } catch (err) {
    res.status(502)
    res.render('error', {
      message: err.message,
      error: err
    });
  }
});
Thanks!
In general, you can't use try/catch to catch exceptions that occur in asynchronous callbacks or asynchronous operations; it only catches synchronous errors.
Instead, you have to read how each particular asynchronous operation reports errors and make sure you are plugged into that particular mechanism.
For example, streams report errors by emitting an 'error' event, which you intercept with stream.on('error', ...). And request() can report errors in several different ways depending upon which request() library you are actually using and how you are using it.
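For the proxy above, that means attaching an 'error' handler to the outgoing request stream (a sketch assuming the request library from the question), so a dead target produces a 502 instead of crashing the process:

app.post('/cors-proxy/bar', function (req, res) {
  var proxied = request({
    url: targetUrl,
    method: req.method,
    json: req.body
  });

  // Connection failures (e.g. ECONNREFUSED) are emitted asynchronously
  // on the outgoing request stream, so try/catch never sees them.
  proxied.on('error', function (err) {
    res.status(502).render('error', {
      message: err.message,
      error: err
    });
  });

  req.pipe(proxied).pipe(res);
});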
Some references:
Error handling with node.js streams
Stream Readable Error
How Error Events Affect Piped Streams in Node.js

Debugging in node.js

I built a server which gets many requests and responds to them.
In some cases, there is an error which causes the server to crash:
events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: ENOENT, open '/mnt/ace/0/file'
I have two problems:
The stack trace doesn't give me any information about the line in my application that caused this exception (I can't debug it manually because it only happens when I get 1000 requests or more).
I don't want my server to crash; I would prefer that it raise an exception but continue to work.
What's the best implementation for this?
You can listen for that kind of stuff and not have it crash the app, but that's not always a great idea.
process.on('uncaughtException', function(err) {
  console.log('Something bad happened');
  console.log(err.stack);
});
In your case, have you tried checking ulimit settings? You may be having problems opening file handles under loads of 1000+.
Another way of thinking about this is to use domains (if you're using >= 0.8). Domains give you a finer grain of control over how you handle errors based on what contexts cause them.
var domain = require('domain').create();

domain.on('error', function(err) {
  console.log(err);
});

domain.run(function() {
  // Your code that might throw
});

How do I prevent node.js from crashing? try-catch doesn't work

From my experience, a PHP server would throw an exception to the log or to the server end, but Node.js just crashes. Surrounding my code with a try-catch doesn't work either, since everything is done asynchronously. I would like to know what everyone else does in their production servers.
PM2
First of all, I would highly recommend installing PM2 for Node.js. PM2 is really great at handling crashes and monitoring Node apps, as well as load balancing. PM2 immediately restarts the Node app whenever it crashes, stops for any reason, or even when the server restarts. So if someday, even after managing our code, the app crashes, PM2 can restart it immediately. For more info, see Installing and Running PM2.
The other answers are really insane, as you can read in Node's own docs at http://nodejs.org/docs/latest/api/process.html#process_event_uncaughtexception
If you are going to use the other stated answers, read the Node docs:
Note that uncaughtException is a very crude mechanism for exception handling and may be removed in the future
Now coming back to our solution for preventing the app itself from crashing.
So after going through the docs, I finally came up with what the Node documentation itself suggests:
Don't use uncaughtException, use domains with cluster instead. If you do use uncaughtException, restart your application after every unhandled exception!
DOMAIN with Cluster
What we actually do is send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker.
In this way, domain usage goes hand-in-hand with the cluster module, since the master process can fork a new worker when a worker encounters an error. See the code below to understand what I mean.
By using Domain, and with the resilience of separating our program into multiple worker processes using Cluster, we can react more appropriately and handle errors with much greater safety.
var cluster = require('cluster');
var PORT = +process.env.PORT || 1337;

if (cluster.isMaster) {
  cluster.fork();
  cluster.fork();

  cluster.on('disconnect', function(worker) {
    console.error('disconnect!');
    cluster.fork();
  });
} else {
  var domain = require('domain');

  var server = require('http').createServer(function(req, res) {
    var d = domain.create();

    d.on('error', function(er) {
      // something unexpected occurred
      console.error('error', er.stack);
      try {
        // make sure we close down within 30 seconds
        var killtimer = setTimeout(function() {
          process.exit(1);
        }, 30000);
        // But don't keep the process open just for that!
        killtimer.unref();

        // stop taking new requests.
        server.close();

        // Let the master know we're dead. This will trigger a
        // 'disconnect' in the cluster master, and then it will fork
        // a new worker.
        cluster.worker.disconnect();

        // send an error to the request that triggered the problem
        res.statusCode = 500;
        res.setHeader('content-type', 'text/plain');
        res.end('Oops, there was a problem!\n');
      } catch (er2) {
        // oh well, not much we can do at this point.
        console.error('Error sending 500!', er2.stack);
      }
    });

    // Because req and res were created before this domain existed,
    // we need to explicitly add them.
    d.add(req);
    d.add(res);

    // Now run the handler function in the domain.
    d.run(function() {
      // You'd put your fancy application logic here.
      handleRequest(req, res);
    });
  });

  server.listen(PORT);
}
Though Domain is pending deprecation and will be removed once the new replacement arrives, as stated in Node's documentation:
This module is pending deprecation. Once a replacement API has been finalized, this module will be fully deprecated. Users who absolutely must have the functionality that domains provide may rely on it for the time being but should expect to have to migrate to a different solution in the future.
But until the new replacement is introduced, Domain with Cluster is the only good solution the Node documentation suggests.
For an in-depth understanding of Domain and Cluster, read:
https://nodejs.org/api/domain.html#domain_domain (Stability: 0 - Deprecated)
https://nodejs.org/api/cluster.html
Thanks to @Stanley Luo for sharing this wonderful in-depth explanation of Cluster and Domains:
Cluster & Domains
I put this code right under my require statements and global declarations:
process.on('uncaughtException', function (err) {
  console.error(err);
  console.log("Node NOT Exiting...");
});
Works for me. The only thing I don't like about it is that I don't get as much info as I would if I just let the thing crash.
As mentioned here, you'll find error.stack provides a more complete error message, such as the line number that caused the error:
process.on('uncaughtException', function (error) {
  console.log(error.stack);
});
Try supervisor
npm install supervisor
supervisor app.js
Or you can install forever instead.
All this will do is recover your server when it crashes by restarting it.
forever can be used within the code to gracefully recover any processes that crash.
The forever docs have solid information on exit/error handling programmatically.
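As a rough sketch of the programmatic route, using the forever-monitor companion module (the max and silent options shown here are illustrative; check the forever docs for the full API):

var forever = require('forever-monitor');

// Restart app.js automatically whenever it exits, up to 10 times.
var child = new (forever.Monitor)('app.js', {
  max: 10,
  silent: false
});

child.on('exit', function () {
  console.error('app.js has exited after 10 restarts');
});

child.start();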
Using try-catch may solve some uncaught errors, but in complex situations it won't do the job, such as catching errors from asynchronous functions. Remember that in Node, any async function call can contain a potential app-crashing operation.
Using uncaughtException is a workaround, but it is recognized as inefficient and is likely to be removed in future versions of Node, so don't count on it.
The ideal solution is to use domain: http://nodejs.org/api/domain.html
To make sure your app is up and running even if your server crashes, use the following steps:
use the node cluster module to fork multiple processes per core, so if one process dies, another will be booted up automatically (see the sketch after this list). Check out: http://nodejs.org/api/cluster.html
use domain to catch async operations instead of using try-catch or uncaughtException. I'm not saying that try-catch or uncaughtException is bad, though!
use forever/supervisor to monitor your services
add a daemon to run your node app: http://upstart.ubuntu.com
Hope this helps!
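A minimal sketch of the fork-per-core idea from the first point above (the worker's HTTP handler is just a placeholder):

var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core.
  os.cpus().forEach(function () {
    cluster.fork();
  });

  // If a worker dies, boot up a replacement automatically.
  cluster.on('exit', function (worker) {
    console.error('worker ' + worker.process.pid + ' died, forking a new one');
    cluster.fork();
  });
} else {
  // Placeholder worker: your real app/server goes here.
  require('http').createServer(function (req, res) {
    res.end('ok\n');
  }).listen(process.env.PORT || 1337);
}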
Give the pm2 node module a try; it is far more consistent and has great documentation. It is a production process manager for Node.js apps with a built-in load balancer. Please avoid uncaughtException for this problem.
https://github.com/Unitech/pm2
Works great on restify:
server.on('uncaughtException', function (req, res, route, err) {
  log.info('******* Begin Error *******\n%s\n*******\n%s\n******* End Error *******', route, err.stack);
  if (!res.headersSent) {
    return res.send(500, {ok: false});
  }
  res.write('\n');
  res.end();
});
By default, Node.js handles such exceptions by printing the stack trace to stderr and exiting with code 1, overriding any previously set process.exitCode.
process.on('uncaughtException', (err, origin) => {
  console.log(err);
});
uncaughtException is "a very crude mechanism" (so true) and domains are deprecated now. However, we still need some mechanism to catch errors around (logical) domains. The library
https://github.com/vacuumlabs/yacol
can help you do this. With a little extra writing you can have nice domain semantics all around your code!
