Node.js: should I keep `assert()`s in production code?

A methodological question:
I'm implementing an API interface to some services, using node.js, mongodb and express.js.
On many (almost all) sites I see code like this:
method(function(err, data) {
    assert.equal(null, err);
});
The question is: should I keep assert statements in my code at production time (at least for 'low significance' errors)? Or, are these just for testing code, and I should better handle all errors each time?

You definitely should not keep them in the production environment.
If you google a bit, you'll find a plethora of approaches for stripping them out.
Personally, I'd use the null object pattern by implementing two wrappers in a separate file: the former maps its methods directly to the ones exported by the assert module, the latter offers empty functions and nothing more.
Thus, at runtime, you can plug in the right one by relying on an environment variable set beforehand, like process.env.mode. Within your files, you only have to import the above-mentioned module and use it instead of using assert directly.
This way, you'll never see error-prone stuff like myAssert && myAssert(cond) around your code; instead you'll always have a cleaner and safer myAssert(cond) statement.
A brief example follows:
// myassert.js
var assert = require('assert');

if ('production' === process.env.mode) {
    // no-op implementation for production
    var nil = function() { };
    module.exports = {
        equal: nil,
        notEqual: nil
        // ... all the other functions
    };
} else {
    // a wrapper like this one helps in not polluting the exported object
    module.exports = {
        equal: function(actual, expected, message) {
            assert.equal(actual, expected, message);
        },
        notEqual: function(actual, expected, message) {
            assert.notEqual(actual, expected, message);
        }
        // ... all the other functions
    };
}
// another_file.js
var assert = require('path_to_myassert/myassert');

// ... your code
assert.equal(true, false);
// ... go on

Yes! Asserts are good in production code.
Asserts allow a developer to document assumptions that the code makes, making the code easier to read and maintain.
It is better for an assert to fail in production than to allow the undefined behaviour that the assert was protecting against. When an assert fails, you can see the problem more easily and fix it.
Knowing your code is working within assumptions is far more valuable than a small performance gain.
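For instance, here is a minimal sketch of an assert documenting an assumption in production code (applyDiscount and its invariants are made up for illustration):
var assert = require('assert');

// Hypothetical function: the asserts document the assumption that
// callers never pass a negative price or an out-of-range rate.
function applyDiscount(price, rate) {
    assert(price >= 0, 'price must be non-negative');
    assert(rate >= 0 && rate <= 1, 'rate must be between 0 and 1');
    return price * (1 - rate);
}
If a caller ever violates the assumption, the failure is loud and points directly at the broken contract instead of silently producing a wrong result.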
I know opinions differ here. I have offered a 'Yes' answer because I am interested to see how people vote.

Probably not.
Ref: When should assertions stay in production code?
In my code I mostly put the error-handling function in a separate file and use the same error method everywhere; it mostly depends on your logic anyway.
For example, people generally forget this:
process.on('uncaughtException', function (err) {
    console.log(err);
});
And err == null doesn't hurt; it checks both null and undefined.


Is it considered bad practice to manipulate a queried database document before sending to the client in Mongoose?

So I spent too long trying to figure out how to manipulate a returned database document (using Mongoose) with transforms and virtuals, but for my purposes those aren't options. The behaviour I desire is very similar to that of a transform (in which I delete a property), but I only want to delete the property from the returned document if and only if it satisfies a requirement calculated using the req.session.user/req.user object (I'm using PassportJS, but any equivalent session user suffices). Obviously, there is no access to the request object in a virtual or transform, and so I can't do the calculation.
Then it dawned on me that I could just query normally and manipulate the returned object in the callback before I send it to the client. And I could put it in a middleware function that looks nice, but something tells me this is a hacky thing to do. I'm presenting an API to the client that does not reflect the data stored in or retrieved directly from the database. It may also clutter up my route configuration if I have middleware like this all over, making the code harder to maintain. Below is an example of what the manipulation looks like:
app.route('/api/items/:id').get(manipulateItem, sendItem);
app.param('id', findUniqueItem);

function findUniqueItem(req, res, next, id) {
    Item.findUniqueById(id, function(err, item) {
        if (!err) { req.itemFound = item; }
        next();
    });
}

function manipulateItem(req, res, next) {
    if (req.itemFound.people.indexOf(req.user) === -1) {
        req.itemFound.userIsInPeopleArray = false;
    } else {
        req.itemFound.userIsInPeopleArray = true;
    }
    delete req.itemFound.people;
    next(); // pass control on to sendItem
}

function sendItem(req, res, next) {
    res.json(req.itemFound);
}
I feel like this is a workaround to a problem with a simpler solution, but I'm not sure what that solution is.
There's nothing hacky about the act of modifying it.
It's all a matter of when you modify it.
For toy servers and learning projects, the answer is whenever you want.
In production environments, you want to do your transform on your way out of your system and into the next system (the next system might be the end user; it might be another server; it might be another big block of functionality in your own server that shouldn't have access to more information than it needs to do its job).
getItemsFromSomewhere()
    .then(transformToTypeICanUse)
    .then(filterBasedOnMyExpectations)
    .then(doOperations)
    .then(transformToTypeIPromisedYou)
    .then(outputToNextSystem);
That example might not be super-helpful in terms of an actual how, but that's sort of the point.
As you can see, you could link that system of events up to another system of events (that does its own transform to its own data-structure, does its own filtering/mapping, transforms that data into whatever its API promises, and passes it along to the next system, and eventually out to the end user).
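To make the last transform step slightly more concrete, here is a hedged sketch (the function name mirrors the chain above; the fields and the user parameter are assumptions for illustration):
// Hypothetical output transform: build the shape promised to the client,
// computing the public flag and leaving the internal people array behind.
function transformToTypeIPromisedYou(item, user) {
    return {
        id: item.id,
        name: item.name, // whatever public fields the API promises
        userIsInPeopleArray: item.people.indexOf(user) !== -1
    };
}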
I think part of the sense of hackiness comes from bolting the result of the async process onto req, with req getting injected from step to step through the middleware.
That said:
function eq(a) {
    return function (b) { return a === b; };
}

function makeOutputObject(inputObject, personWasFound) {
    // return whatever you want
}

var personFound = req.itemFound.people.some(eq(req.user));
var outputObject = makeOutputObject(req.itemFound, personFound);
Now you aren't using the actual delete keyword, or modifying the call-to-call state of that itemFound object.
You're separating your view-based logic from your app-based logic, but without the formal barriers (those can always be added later if they're needed). A sketch of how this slots into the middleware follows.
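For example, keeping the question's sendItem shape (a sketch; nothing here is prescribed by the libraries involved):
// Sketch: build a fresh output object instead of mutating req.itemFound.
function sendItem(req, res, next) {
    var personFound = req.itemFound.people.some(eq(req.user));
    res.json(makeOutputObject(req.itemFound, personFound));
}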

Calling a yeoman generator after a generator has finished

I am looking to call another Yeoman generator once the first generator has finished installing; which generator runs will be based on an answer I give to one of the prompts.
I have tried calling it at the end:
end: function () {
    this.installDependencies({
        callback: function () {
            if (this.generator2) {
                shell.exec('yo generator2');
            }
        }.bind(this)
    });
},
This runs generator2, but I am unable to answer any prompts.
These are 2 separate generators, so I cannot make the second a sub generator.
Use Yeoman's composability feature.
About the code: don't use the this.installDependencies() callback (that won't work as you expect). Rather, use the run loop priorities, as sketched after this answer.
Also, you should review your logic and the way you think about your current problem. When composing generators, the core idea is to keep both decoupled. They shouldn't care about the ordering, they should run in any order and output the same result. Thinking about your code this way will greatly reduce the complexity and make it more robust.
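As a hedged sketch of what composing looks like with the old yeoman-generator Base API used elsewhere in this thread (the generator namespace here is an assumption):
// In generator1's index.js: compose instead of shelling out.
module.exports = require('yeoman-generator').Base.extend({
    'initializing': function () {
        // Queue the other generator into the same run loop; each generator
        // keeps its own prompting/writing priorities, so its prompts work.
        this.composeWith('generator2:app');
    },
    'end': function () {
        this.installDependencies();
    }
});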
I see this is an older question, but I came across a similar requirement and want to make sure all options are listed. I agree with the other answers that the best choice is to use the composability feature and keep the order irrelevant. But in case it really is necessary to run generators sequentially:
You can also execute another generator using the integration features.
So in generator1 you could call
this.env.run('generator2');
This will also let you answer prompts in generator2.
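For example, a sketch of triggering it from the end priority, assuming a flag set from an earlier prompt answer (the flag name is made up):
end: function () {
    this.installDependencies();
    // Hypothetical flag recorded from a prompt earlier in this generator.
    if (this.useGenerator2) {
        this.env.run('generator2');
    }
}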
When using .composeWith, a given priority group function (e.g. prompting, writing...) is executed for all the generators, then the next priority group. If you call .composeWith for generatorB from inside generatorA, the execution order will be, e.g.:
generatorA.prompting => generatorB.prompting => generatorA.writing => generatorB.writing
You can cover all possible execution scenarios and condition checking with this concept; you can also use the options of .composeWith('my-generator', { 'options': options })
If you want to control execution between different generators, I advise you to create a "main" generator which composes them together, as described at http://yeoman.io/authoring/composability.html#order:
// In my-generator/generators/turbo/index.js
module.exports = require('yeoman-generator').Base.extend({
    'prompting': function () {
        console.log('prompting - turbo');
    },
    'writing': function () {
        console.log('writing - turbo');
    }
});

// In my-generator/generators/electric/index.js
module.exports = require('yeoman-generator').Base.extend({
    'prompting': function () {
        console.log('prompting - zap');
    },
    'writing': function () {
        console.log('writing - zap');
    }
});

// In my-generator/generators/app/index.js
module.exports = require('yeoman-generator').Base.extend({
    'initializing': function () {
        this.composeWith('my-generator:turbo');
        this.composeWith('my-generator:electric');
    }
});

Checking error parameters in node

It is a convention in node to pass an error parameter to asynchronous operations:
async.someMagicalDust(function(callback) {
    // some asynchronous task
    // […]
    callback();
}, function(err) {
    // final callback
    if(err) throw err;
    // […]
});
Maybe I'm too naive, but I've never been a big fan of the if(variable) notation — probably inherited from C, for reasons that have already been discussed many times in the past.
On the other hand, I have sometimes encountered a null parameter and this error check:
if(typeof err !== 'undefined' && err !== null)
is a bit too verbose.
Another solution would be
if(err != null)
but I think the non-strict check can be tricky, even though I consider it normal when comparing with null.
What's the best way to check error parameters in node?
Use if(err).
It is designed to be used in this fashion. Node-style callbacks are supposed to set the error to a non-falsy value only in case of an actual error. You won't find any sane example of setting err to '' or 0 to signify an error.
Just as YlinaGreed noted, if module conventions change from null to undefined to 0 or maybe even NaN, you are still safe. I've never been bitten by this, having only used if(err).
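As a quick sketch with a core API (fs.readFile follows the error-first convention):
var fs = require('fs');

fs.readFile('config.json', function(err, data) {
    if (err) {
        // any truthy error value lands here, whatever its exact type
        return console.error(err);
    }
    console.log('read %d bytes', data.length);
});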
On the other hand, you may want to use CoffeeScript, which would translate
unless err? then...
into
if (typeof err === "undefined" || err === null) {
mimicking the most common pattern.
Some links to corroborate the if(err) approach:
https://gist.github.com/klovadis/2548942
http://caolanmcmahon.com/posts/nodejs_style_and_structure/ (by Caolan McMahon, author of async)
http://thenodeway.io/posts/understanding-error-first-callbacks/
The convention is to pass an error object as the first argument and null for no error, so even if you pass an empty object, it's still an error.
If you use the popular Express framework, you will have used the next callback to return from middleware; it follows the errback convention.
I believe most people prefer the more concise next() over next(null), which means that the first argument will evaluate to undefined rather than null, and this is certainly perfectly normal usage.
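A sketch of both spellings in an Express middleware chain (the route and messages are illustrative):
app.use(function(req, res, next) {
    if (!req.user) {
        return next(new Error('not authenticated')); // error path
    }
    next(); // success path: the error argument is simply undefined
});

// Express recognizes error-handling middleware by its four parameters.
app.use(function(err, req, res, next) {
    res.status(500).send(err.message);
});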
To me, the best way to handle errors is "if (err == null)".
This is the only case in which I use the non-strict operator, for these reasons:
The only other solution which always works is very verbose, as you said before.
You could also check only for null or undefined, but I did this once, and a few months later I just updated my dependencies and... the convention changed, and the module was sending null instead of undefined.
This is mostly a matter of convention; I have mine, and you certainly have yours too... Just be careful to choose one of the two "good" ways.
Node's primary callback convention is to pass a function with err as the first parameter. In my experience it has always been safe to check whether the error is truthy; in practice, if your error comes out null when there IS an error, the problem lies with the implementation. I would always expect that if err is null, no error occurred. Using separate functions for errors and successes, something more in the style of jQuery.ajax and promises, can be confusing, and I tend to find double callbacks a bit too wordy.
Given your example it seems you're using the async library, which is excellent. If I am looking to perform a parallel operation, this is how I set it up:
function doAThing(callback) {
    var err;
    // do stuff here, maybe fill the err var
    callback(err);
}

function doAsyncThings(callback) {
    var tasks = [function(done) { // stuff to do in async
        doAThing(function(err) {
            done(err);
        });
    }];
    async.parallel(tasks, function(err) { // single callback function
        callback(err); // I send the error back up
    });
}
Note that instead of throwing the error I bubbled it back up the request chain. There are few instances in which I'd actually want to throw an error, since doing so basically says "crash the whole app".
I find this simpler, and it reduces the number of parameters you need to call your functions. When you use this convention throughout, you can simplify by passing the callback as a parameter instead of creating a new anonymous function, like so:
function doAThing(callback) {
    var err;
    // do stuff here, maybe fill the err var
    callback(err);
}

function doAsyncThings(callback) {
    var tasks = [function(done) { // stuff to do in async
        doAThing(done);
    }];
    async.parallel(tasks, callback); // the error is sent back to the original function
}
I find that generally you want to handle those errors in the functions that call them. So in this case the caller of doAsyncThings can check whether there is an error and handle it appropriately in its own scope (and perhaps provide better information to the user if it is, say, an API), as sketched below.
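A hedged sketch of such a caller, assuming an Express-style handler (the route and response bodies are illustrative):
app.get('/things', function(req, res) {
    doAsyncThings(function(err) {
        if (err) {
            // handle the bubbled-up error at the edge of the system
            return res.status(500).json({ error: 'could not do the things' });
        }
        res.json({ ok: true });
    });
});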

Using this within a promise in AngularJS

Is there a best-practice solution for using this within a promise? In jQuery I can bind my object to use it in my promise/callback, but what about AngularJS? Are there best-practice solutions? I don't like the "var service = this;" approach...
app.service('exampleService', ['Restangular', function(Restangular) {
    this._myVariable = null;
    this.myFunction = function() {
        Restangular.one('me').get().then(function(response) {
            this._myVariable = true; // undefined
        });
    };
}]);
Are there solutions for this issue? How can I gain access to members or methods of my service within the promise?
Thank you in advance.
The generic issue of dynamic this in a callback is explained in this answer, which is very good; I'm not going to repeat what Felix said. I'm going to discuss promise-specific solutions instead:
Promises are specified under the Promises/A+ specification, which allows promise libraries to consume each other's promises seamlessly. Angular $q promises honor that specification, and therefore an Angular promise must by definition execute the .then callbacks as functions, that is, without setting this. In strict mode, doing promise.then(fn) will always evaluate this to undefined inside fn (and to window in non-strict mode).
The reasoning is that ES6 is around the corner and solves these problems more elegantly.
So, what are your options?
Some promise libraries provide a .bind method (Bluebird for example), you can use these promises inside Angular and swap out $q.
ES6, CoffeeScript, TypeScript and AtScript all include a => operator which binds this.
You can use the ES5 solution using .bind
You can use one of the hacks in the aforementioned answer by Felix.
Here are these examples:
Adding bind - aka Promise#bind
Assuming you've followed the above question and answer you should be able to do:
Restangular.one('me').get().bind(this).then(function(response) {
    this._myVariable = true; // this is correct
});
Using an arrow function
Restangular.one('me').get().then(response => {
    this._myVariable = true; // this is correct
});
Using .bind
Restangular.one('me').get().then(function(response) {
    this._myVariable = true; // this is correct
}.bind(this));
Using a pre ES5 'hack'
var that = this;
Restangular.one('me').get().then(function(response) {
    that._myVariable = true; // this is correct
});
Of course, there is a bigger issue
Your current design does not contain any way to know when _myVariable is available. You'd have to poll it or rely on internal state ordering. I believe you can do better and have a design where you always execute code when the variable is available:
app.service('exampleService', ['Restangular', function(Restangular) {
    this._myVariable = Restangular.one('me').get();
}]);
Then you can use _myVariable via this._myVariable.then(function(value){ ... }). This might seem tedious, but if you use $q.all you can easily do it with several values, and this is completely safe in terms of synchronization of state.
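For instance, a sketch with $q.all (assuming $q is injected alongside Restangular; the second stored promise is made up to show the synchronization):
// Sketch: wait on several lazily stored promises at once.
$q.all([this._myVariable, this._myOtherVariable]).then(function(values) {
    var me = values[0];    // resolved value of the first promise
    var other = values[1]; // resolved value of the second
    // both values are guaranteed to be available here
});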
If you want to lazy load it and not call it the first time (that is, only when myFunction is called) - I totally get that. You can use a getter and do:
app.service('exampleService', ['Restangular', function(Restangular) {
    this.__hidden = null;
    Object.defineProperty(this, "_myVariable", {
        get: function() {
            return this.__hidden || (this.__hidden = Restangular.one('me').get());
        }
    });
}]);
Now, it will be lazy loaded only when you access it for the first time.

Is it possible to make a method like Q.spawn return a value in Node JS

I know out of the box Q won't support this, but I'm wondering if it is theoretically possible to do something like this:
var user = Q.spawn(function* () {
    var createdUser = yield createUser();
    return createdUser;
});
console.log(user); // user is available here
I may be wrong, but I would say yes, it is theoretically possible; it's just not intended to work like that. That's why Q doesn't support it.
I'm not sure why Q.done doesn't invalidate the promise chain altogether, preventing further calls to p.then from succeeding (maybe it's impossible), but right now (Q is at version 1.2.0 at the time of this writing) it doesn't:
var p = Q("Test");
p.done();
p.then(function(message) {
    console.log(message); // "Test" is logged here just fine.
});
// No runtime errors.
So Q.spawn would only need to return the result of Q.async(generator)() after calling Q.done on it to support that, like this:
Q.spawn = spawn;

function spawn(makeGenerator) {
    var p = Q.async(makeGenerator)();
    Q.done(p);
    return p;
}
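With that change, the caller would still receive a promise rather than a plain value; a sketch of consuming it (createUser is the hypothetical function from the question):
// The returned promise resolves to the generator's return value.
spawn(function* () {
    var createdUser = yield createUser();
    return createdUser;
}).then(function(user) {
    console.log(user); // user is available here, asynchronously
});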
That said, it seems clear to me that the API doesn't want to encourage using a done promise chain: contrary to Q.then and Q.catch, for example, neither Q.done nor Q.spawn returns any promise for chaining.
The library authors were either not sure whether this was a good idea (and so didn't want to encourage it, but also didn't implement something that prohibits done promises from being used) or were outright convinced it's not.
