I've been using Mongoose a considerable amount and I can't seem to get around "callback hell" and polluting my queries with error handling.
For example here is a route I have:
var homePage = function(req, res) {
  var companyUrl = buildingId = req.params.company
  db.pmModel
    .findOne({ companyUrl: companyUrl })
    .exec(function (err, doc) {
      if (err)
        return HandleError(req, res, err)
      if (!doc)
        return NoResult(req, res, {msg: 'Aint there'})
      console.log(doc)
      db.rentalModel
        .find({ propertyManager: doc.id })
        .populate('building')
        .exec(function (err, rentals) {
          if (err)
            return HandleError(req, res, err)
          if (!rentals)
            return NoResult(req, res, {msg: 'Aint there'})
          console.log(rentals)
          var data = doc.toJSON()
          data.rentals = rentals
          res.render('homePage', data)
        })
    })
}
My question: is there a more succinct way of writing this?
So perhaps what you have above is just a small example, but it doesn't appear to me that there's too much "callback hell" going on in your code (in my opinion). However, you can certainly refactor your code. Just know that in doing so you can make it more difficult to understand or follow from a maintenance perspective.
One thing you can do is simply refactor your database layer. If you always find yourself querying one collection and then turning right around and querying another, you could consider merging those collections, or at least the documents that you're looking for. In a relational database you might separate out these tables and do merges, however in a document-based database, it sometimes makes more sense to combine the data within each document. This allows for easier queries and simpler logic in your code.
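For instance, a merged document for the home-page example might look roughly like this (a hypothetical Mongoose schema just to illustrate embedding; the field names are made up):

var propertyManagerSchema = new mongoose.Schema({
  companyUrl: String,
  // rentals embedded directly in the property-manager document
  rentals: [{
    building: { type: mongoose.Schema.Types.ObjectId, ref: 'Building' },
    rent: Number
  }]
});

// one query now returns the manager together with its rentals
db.pmModel.findOne({ companyUrl: companyUrl }).exec(function (err, doc) {
  // doc.rentals is available without a second round trip
});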
Another solution is to refactor your calls into separate functions, and control the flow in a different way. A popular library to help with this is async which provides many helper functions to assist in the asynchronous world of JavaScript. There are many to choose from, but one suggestion would be to use the waterfall function for your situation (since each call must be made before the next). It would then look something like this:
async.waterfall([
  function(callback){
    findCompany(companyUrl, callback);
  },
  function(id, callback){
    findPropertyManager(id, callback);
  }
], function (err, rentals) {
  res.render(rentals)
});
You would still need to handle the errors in each function, but you could even refactor that out into a helper function. Furthermore, you could choose to code up something yourself to help with the control flow rather than using async.
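For example, the two helpers used in the waterfall above could be thin wrappers around the original queries, keeping the error and not-found checks in one place (a sketch based on the models in the question; the 'Company not found' error is just illustrative):

function findCompany(companyUrl, callback) {
  db.pmModel.findOne({ companyUrl: companyUrl }).exec(function (err, doc) {
    if (err) return callback(err);
    if (!doc) return callback(new Error('Company not found'));
    callback(null, doc.id);
  });
}

function findPropertyManager(id, callback) {
  db.rentalModel.find({ propertyManager: id }).populate('building').exec(callback);
}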
But again, the code you showed above is understandable and readable, and only contains a couple of inline callbacks. In this way, there's a lot less going on, which may make debugging it later (if things go wrong) easier.
Using Mongoose's populate to add a user's favorite "foods" to the user object.
Set up:
User.findById(req.user._id, function(err, user) {
  var newFood = new Food({
    name: "tacos",
    image: 'test',
  });
  user.foods = newFood
  user.save();
});
Then:
router.get("/dashboard", function (req, res) {
User.find({currentUser: req.user})
.populate({path: 'foods'}).
exec(function (err, foods) {
if (err) return (err);
When I console.log it, user.foods.name is undefined; user.foods is an object.
How do I get user.foods.name? In this case I'm expecting "tacos".
Your problem is that you are not really understanding (yet) how callbacks and async/await work. Your code execution is not going the way you want it to, so you are landing on parts where variables have not been set yet.
This problem cannot be answered with just this specific question/answer.
Please watch some tutorials (or read, if you prefer) about what callbacks are.
If you are free to choose the version of your JS, then I suggest you use async/await. It makes things MUCH more structured, readable and understandable.
BUT: you first need to fully understand it. Just copy-pasting it will result in errors and misconceptions (trust me).
Just google for those guides; I personally would search for YouTube tutorials (there are hundreds of nicely structured ones).
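Purely as a rough illustration (assuming a Mongoose version whose queries return promises when exec() is called without a callback, and using a lookup by the current user's id and an assumed 'dashboard' view name), the route from the question could be written with async/await like this:

router.get("/dashboard", async function (req, res) {
  try {
    // await the query instead of nesting callbacks
    var user = await User.findById(req.user._id).populate('foods').exec();
    console.log(user.foods); // the populated food documents
    res.render('dashboard', { foods: user.foods });
  } catch (err) {
    res.status(500).send(err.message);
  }
});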
As our project grows, we have started to have this much-appreciated, defensive code snippet pretty much everywhere:
function(err, result){
  if(err){
    console.log('An error occurred!, #myModule :' + err);
    return callback(err);
  }
  //then the rest..
}
A quick Google search reveals some libs that attempt to overcome this common concern, e.g. https://www.npmjs.com/package/callback-wrappers.
But what is the best approach to minimizing the boilerplate code without compromising the early error-handling mechanism we have?
There are a couple of ways you can help to alleviate this issue; both use external modules.
Firstly, and my preferred method, is to use async, and in particular, async.series, async.parallel or async.waterfall. Each of these methods will skip straight to the last function if an error occurs in any of your async calls, thus preventing the splattering of if(err) conditions throughout your callbacks.
For example:
async.waterfall([
  function(cb) {
    someAsyncOperation(cb);
  },
  function(result, cb) {
    doSomethingAsyncWithResult(result, cb);
  }
], function(err, result) {
  if(err) {
    // Handle error - could have come from any of the above function blocks
  } else {
    // Do something with overall result
  }
});
The other option is to use a promise library, such as q. This has a function Q.denodeify to help you wrap callback-style code into promise-style. With promises, you use .then, .catch and .done:
var qSomeAsyncOperation = Q.denodeify(someAsyncOperation);
var qDoSomethingAsyncWithResult = Q.denodeify(doSomethingAsyncWithResult);
Q()
  .then(qSomeAsyncOperation)
  .then(qDoSomethingAsyncWithResult)
  .done(function(result) {
    // Do something with overall result
  }, function(err) {
    // Handle error - could have come from any of the above function blocks
  });
I prefer using async because it is easier to understand what is going on, and it is closer to the true callback-style that node.js has adopted.
I have two layers in my application (Express). The first is a module with functions that handle database queries, fs, and so on; the second handles requests (also known as the controller/route layer). I'm just tired of all these conditions.
Sample code:
exports.updateImage = function(image, userId, callback) {
  fs.readFile(image.path, function (err, imageBinary) {
    if (err) callback(err);
    else {
      pg.connect(conString, function(err, client, done) {
        done();
        if (err) callback(err);
        else {
          client.query('UPDATE images SET data=$1, filesize=$2, filename=$3 WHERE user_id=$4', [imageBinary, image.size, image.originalFilename, userId], function(err) {
            if (err) callback(err);
            else callback(null);
          });
        }
      });
    }
  });
};
As you can see, I pass all my errors back to my controller via the callback, where they are handled as internal server errors. I handle possible database and file system errors, and there is too much repetition in my code. I suppose it is a bad design, and it's hard to maintain in production. Please help me.
When you say "tired of all these conditions" I assume you're talking about all the nested callbacks and the "march off the right side of the screen" that results from that kind of directly nested callbacks? If I'm assuming incorrectly please clarify your question and I'll delete everything I'm about to write as not related. :-)
One cheap way to avoid the else structure is, instead of doing
if(err) callback(err);
else { ... stuff ... }
to do this:
if(err) return callback(err);
Note the return: it causes execution of your function to end; nobody cares about the return value from a callback, so it just gets ignored. That potentially gets rid of a layer of braces and elses.
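Applied to the updateImage function from the question, that might look something like the sketch below (I've also moved the done() call to after the query finishes so the client isn't released early, which I'm assuming is the intent):

exports.updateImage = function(image, userId, callback) {
  fs.readFile(image.path, function (err, imageBinary) {
    if (err) return callback(err);

    pg.connect(conString, function(err, client, done) {
      if (err) return callback(err);

      client.query(
        'UPDATE images SET data=$1, filesize=$2, filename=$3 WHERE user_id=$4',
        [imageBinary, image.size, image.originalFilename, userId],
        function(err) {
          done(); // release the client back to the pool
          callback(err || null);
        }
      );
    });
  });
};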
To handle this better in general, you'll want to look at some sort of async helpers. There are three general categories of these things:
Helper libraries that manage the sequencing of multiple callbacks,
Promises, which let you represent async operations as objects, or
Language support to hide the details.
Examples of the three different types include step, flow, or async as helper libraries; for promises there's Q or when.js; and for language support, look at streamline.
For more details, I did a presentation on exactly this topic about a year ago; the slides are here and there's a recording of the presentation as well.
I'm learning node.js coming from a PHP background with a limited level of JavaScript. I think I've now gotten over the change of mindset implied by the asynchronous approach. And I love it.
But, as many others before me, I quickly understood the concrete meaning of the "pyramid of doom".
So I built this little 'dummy' route and view to understand how to properly use Async.js. I just spent the last 5 hours writing the following code (rewritten, of course, tens of times). It works, but I wonder how I could go further and make this code simpler (less verbose, easier to read and maintain).
I found many resources on the web and especially here, but always by bits of info here and there.
I'm guessing at this point that I should use "bind" and "this" with async.apply to shorten the last two functions called by the waterfall.
The issue is getting the object "db" defined so I can use the "collection" method on it (for the second function).
I really searched for an example on Google, but it's surprising that you don't get straightforward examples when looking for "async waterfall bind" (as well as the many keyword variations I tried). There are answers of course, but none seems relevant to this particular issue... or, quite possibly, I haven't understood them.
Can someone help me on this? I'll be quite grateful.
app.get('/dummy',
  function(req, res) {
    var MongoClient = require('mongodb').MongoClient;
    async.waterfall(
      [
        async.apply(MongoClient.connect, 'mongodb://localhost:27017/mybdd'),
        function(db, callback) {
          db.collection('myCollection', callback);
        },
        function(collection, callback) {
          collection.find().sort({"key":-1}).limit(10).toArray(callback);
        }
      ], function(err, results) {
        if (err) console.log('Error :', err);
        else { res.render('dummy.jade', { title:'dummy', results: results} ); }
      }
    );
  }
);
If you're using the mongodb JS Driver, then this should work:
async.waterfall(
  [
    function (cb) {
      new MongoClient(...)
        .connect('mongodb://localhost:27017/mybdd', cb);
    },
    function (db, callback) {
      db.collection('myCollection', callback);
    },
    ...
Alternatively, if you want to use async.apply, just pass an instance of MongoClient
async.apply(new MongoClient(...).connect, 'mongodb://localhost:27017/mybdd')
I've recently created a simple abstraction named WaitFor to call async functions in sync mode (based on Fibers): https://github.com/luciotato/waitfor
I'm not familiar with the mongodb client, so I'll be mostly guessing at what you're trying to do.
Using WaitFor, your code would be:
var MongoClient = require('mongodb').MongoClient;
var wait = require('waitfor');

app.get('/dummy', function(req, res) {
  // handle request in a Fiber, keep node spinning
  wait.launchFiber(handleDummy, req, res)
});

function handleDummy(req, res) {
  try {
    var db = wait.for(MongoClient.connect, 'mongodb://localhost:27017/mybdd');
    var collection = wait.forMethod(db, 'collection', 'myCollection');
    var results = wait.forMethod(collection.find().sort({"key":-1}).limit(10), 'toArray');
    res.render('dummy.jade', { title:'dummy', results: results} );
  }
  catch(err) {
    res.render('error.jade', { title:'error', message: err.message} );
  }
}
Using node.js I'm creating a function to update a file that contains a JSON list by appending a new element to the list. The updated list is rewritten back to the file. If the file doesn't exist I create it.
Below, __list_append(..) does the list append and file update.
My question is whether I can (and should) restructure this code so that it does not have two calls to __list_append. I'm a bit new to node.js, and don't have a good feel for asynchronous tactics.
function list_append(filename, doc) {
  fs.exists(filename, function(exists) {
    if (exists) {
      fs.readFile(filename, function(err, data) {
        if (err)
          throw err;
        __list_append(filename, JSON.parse(data), doc);
      });
    } else
      __list_append(filename, [], doc);
  });
}
It's easy to get a bit pedantic with "best practices," but when I'm writing code and I get a gut feeling that something's not right or that something could be changed, I go over some well-known best practices and attempt to see if the code I'm writing adheres to them. SOLID, while being a set of object-oriented programming principles, can be useful to think about in other contexts. In this case, it seems to me that the function is violating the Single Responsibility Principle:
One of the most foundational principles of good design is:
Gather together those things that change for the same reason, and separate those things that change for different reasons.
This principle is often known as the Single Responsibility Principle or SRP. In short, it says that a subsystem, module, class, or even a function, should not have more than one reason to change.
(This could perhaps be exchanged for Separation of Concerns or other similar principles for this example, but the concept is the same.)
In this case, the function has two responsibilities: (1) getting the current (or default) list associated with a filename, and (2) appending data to said list. A first pass at separating these concerns might look something like this:
function get_current_list(filename, callback) {
  fs.exists(filename, function(exists) {
    if (exists) {
      fs.readFile(filename, function(err, data) {
        if (err)
          return callback(err);
        callback(null, JSON.parse(data));
      });
    } else
      callback(null, []);
  });
}
function list_append(filename, doc) {
  get_current_list(filename, function(err, list) {
    if (err) throw err;
    __list_append(filename, list, doc);
  });
}
Now, get_current_list is only responsible for getting the current list in a file (or an empty array if there is no file), and __list_append is (assumed to be) only responsible for appending to it; list_append is now a simple integration point between these two functions. The functions are a bit more reusable and can also be tested more easily (as an aside, a test-first or TDD approach to programming can help you notice these kinds of things up front). Furthermore, repeating callback in get_current_list is quite a bit more generic than repeating __list_append; if you need to change __list_append to something else, it is now only called in one place.
This case always feels unsatisfying to me because yes, you do have to repeat your call to __list_append on both branches because only one of the branches is synchronous.
I like and upvoted Brandon's answer, but this also works:
function list_append(filename, doc) {
  fs.exists(filename, function(exists) {
    var data = [];
    if (exists) {
      // readFileSync returns a string, so parse it back into a list
      data = JSON.parse(fs.readFileSync(filename, "utf8"));
    }
    __list_append(filename, data, doc);
  });
}