Yeoman generator: how to run async command after all files copied - node.js

I'm writing a yeoman generator.
I need to run a shell script after all files have been copied.
The generator is called as a child generator, so it should wait until the script has finished.
The script is some command file being run via spawn:
that.spawnCommand('createdb.cmd');
As the script depends on files created by the generator, it cannot run directly inside the generator's methods, because all copy/template actions are asynchronous and have not executed yet:
MyGenerator.prototype.serverApp = function serverApp() {
    if (this.useLocalDb) {
        this.copy('App_Data/createdb.cmd', 'App_Data/createdb.cmd');
        // here I cannot run spawn with createdb.cmd as it doesn't exist
    }
}
So the only place I found where I can run spawn is the 'end' event handler:
var MyGenerator = module.exports = function MyGenerator (args, options, config) {
    var that = this;
    this.on('end', function () {
        if (that.useLocalDb) {
            that.spawnCommand('createdb.cmd');
        }
    });
}
The script runs successfully but the generator finishes earlier than the child process. I need to tell Yeoman to wait for my child process.
Something like this:
this.on('end', function (done) {
    this.spawnCommand('createdb.cmd')
        .on('close', function () {
            done();
        });
}.bind(this));
But the 'end' handler doesn't receive a 'done' callback argument.
How to do this?
UPDATE:
Thanks to @SimonBoudrias I got it working.
The full working code is below.
BTW: the end method is described in the docs.
var MyGenerator = module.exports = yeoman.generators.Base.extend({
    constructor: function (args, options, config) {
        yeoman.generators.Base.apply(this, arguments);
        this.appName = this.options.appName;
    },
    prompting: function () {
        // asking user
    },
    writing: function () {
        // copying files
    },
    end: function () {
        var that = this;
        if (this.useLocalDb) {
            var done = this.async();
            process.chdir('App_Data');
            this.spawnCommand('createdb.cmd').on('close', function () {
                that._sayGoodbye();
                done();
            });
            process.chdir('..');
        } else {
            this._sayGoodbye();
        }
    },
    _sayGoodbye: function () {
        // final words to user
    }
});

Never trigger any action in the end event. This event is to be used by implementors, not by generators themselves.
In your case:
module.exports = generators.Base.extend({
    end: function () {
        var done = this.async();
        this.spawnCommand('createdb.cmd').on('close', done);
    }
});
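If createdb.cmd can fail, it may also be worth guarding against spawn errors. spawnCommand returns a standard ChildProcess (which is why the question's own .on('close', ...) chaining works), so the usual 'error' and 'close' events apply. A hedged sketch building on the answer above:
end: function () {
    var done = this.async();
    var finished = false;
    var finish = function () {
        // Guard so done() is only called once even if both events fire.
        if (!finished) {
            finished = true;
            done();
        }
    };

    var child = this.spawnCommand('createdb.cmd');

    // 'error' fires if the command could not be spawned at all (e.g. file not found).
    child.on('error', function (err) {
        console.log('createdb.cmd failed to start: ' + err.message);
        finish();
    });

    // 'close' fires once the process has exited and its stdio streams have closed.
    child.on('close', function (code) {
        if (code !== 0) {
            console.log('createdb.cmd exited with code ' + code);
        }
        finish();
    });
}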

Related

Nodejs debug errors in production

I have a nodejs script running in production
Rarely (maybe once in a thousand runs) I get errors like this:
TypeError: value is out of bounds
at checkInt (buffer.js:1009:11)
at Buffer.writeUInt16LE (buffer.js:1067:5)
at Object.foo.bar (/fake/path/name.js:123:1);
at Object.foo.bar2 (/fake/path/name2.js:123:1);
at Object.foo.bar3 (/fake/path/name3.js:123:1);
Causing the production server to crash...
Great, I have a stack trace! But I want to know what the current data was for each call, or ideally all of it.
What are some good tools or techniques for error logging (including the current data) in production code?
I highly recommend using either Winston or Bunyan. Which npm package to pick is a decision that depends on your application.
You can compare the available npm packages by going through the stats on their npm pages. The stats are basically the following:
downloads in the last day
downloads in the last week
downloads in the last month
open issues and open pull requests.
A high number of recent downloads indicates that the module is well supported and likely to be maintained in the long run, so that is important.
Winston and Bunyan are both among the best logging packages available. The main difference is that Winston is very flexible for general-purpose logging and offers a great deal of logging capabilities, but making use of those capabilities takes a bit more effort than with Bunyan.
Bunyan, on the other hand, is built around analysing logs: it is designed for log processing. If you want to analyse your logs and log files, Bunyan is highly recommended, and tweaking logs with Bunyan is fairly easy compared to Winston.
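For illustration, here is a minimal Bunyan setup (a sketch; the app name and file path are just placeholders) whose JSON records can later be filtered and pretty-printed with the bunyan CLI:
var bunyan = require('bunyan');

var log = bunyan.createLogger({
    name: 'myapp',  // placeholder application name
    streams: [
        { level: 'info', stream: process.stdout },             // JSON records to stdout
        { level: 'error', path: '/var/tmp/myapp-error.log' }   // placeholder file path
    ]
});

log.info({ userId: 42 }, 'request handled');       // structured fields travel with the record
log.error(new Error('boom'), 'something failed');  // Error instances are logged with their stack

// Later, for analysis:  node app.js | bunyan --level error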
I did a thorough comparison between Bunyan and Winston. Please check the link below to see how Winston and Bunyan can be used depending on the scope, use case and logging requirements of your Node application.
link : https://docs.google.com/document/d/1pD9PLyxlcHVxxOvserNLO9tAz-QA_Co-xo6cWLhLghc/edit?usp=sharing
Also, in a production environment, make sure to use the logging levels wisely. The most commonly used levels in production are:
error
info
debug
You can use Winston or Pino.
With Winston you can load many transports to log wherever you want, and even store logs online. I have never used Pino, but I have read good things about it.
Set environment variables to choose where to send your output; for example, you might want to log to stdout only in development and store logs online only when the app is in production.
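As a minimal sketch (assuming Winston 3.x), the environment-based switching described above could look roughly like this; the file name is just a placeholder:
const winston = require('winston');

const isProduction = process.env.NODE_ENV === 'production';

const logger = winston.createLogger({
    // Less verbose in production, chattier in development.
    level: isProduction ? 'info' : 'debug',
    format: winston.format.combine(
        winston.format.timestamp(),
        winston.format.json()
    ),
    transports: [
        // Always keep errors in a file (placeholder name)...
        new winston.transports.File({ filename: 'error.log', level: 'error' })
        // ...and only log to stdout outside production.
    ].concat(isProduction ? [] : [new winston.transports.Console()])
});

logger.debug('only visible in development');
logger.error('visible everywhere', { requestId: 'abc123' });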
A good way to debug asynchronous functions in Node.js is the decofun debug tool.
Its main feature is parsing the code and naming anonymous functions according to their context.
You can deanonymise any anonymous function by running it with deco filename.js.
A simple example, as shown in the documentation:
function gravy() {
    return function returnedᅠfromᅠgravyᅠㅣlineᅠ2 () {
        return {
            prop: function asᅠpropertyᅠpropᅠㅣlineᅠ4 () {
                setTimeout(function passedᅠintoᅠsetTimeoutᅠㅣlineᅠ5 () {
                    console.trace('Getting a trace...');
                }, 10)
            }
        }
    }
}
Trace: Getting a trace...
at passedᅠintoᅠsetTimeoutᅠㅣlineᅠ5 [as _onTimeout] (/home/ubuntu/workspace/node_modules/decofun/examples/loadable/index.js:6:22)
at Timer.listOnTimeout (timers.js:92:15)
It also ships with the embedded cute-stack library, which normalises paths relative to the current directory.
Running the command deco examples/loadable --cute table renders the output as a table.
The best thing I like about it is how it transforms functions based on how they are called, turning code like this:
function one (a, cb) {
}

one('blah', function () {
})

function two () {
    return function () { }
}

function three () {
    return {
        shoe: function () {}
    }
}

function four () {
    return function () {
        return function () {
        }
    }
}

function five () {
    return function () {
        return function () {
            return function () {
                foo('blue', function () {
                })
            }
        }
    }
}

var six = function () {
}

var seven = function (err, cb) {
    return function () {
        cb(function () {
        })
    }
}

var o = {};
o.eight = function (cb) { }
o.eight(function () { })
o.eight.nine = function () {}
o.eight.nine(function () { })

var o2;
o2 = function () { }

;(function () {}())

!function () { }()

function toodeep () {
    return function () {
        return function () {
            return function () {
                return function () {
                    return function () {
                        return function () {
                            return function () {
                                return function () {
                                    return function () {
                                        return function () {
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
into this
function one (a, cb) {
}

one('blah', function passedᅠintoᅠoneᅠㅣlineᅠ6 () {
})

function two () {
    return function returnedᅠfromᅠtwoᅠㅣlineᅠ11 () { }
}

function three () {
    return {
        shoe: function asᅠpropertyᅠshoeᅠㅣlineᅠ17 () {}
    }
}

function four () {
    return function returnedᅠfromᅠfourᅠㅣlineᅠ22 () {
        return function returnedᅠfromᅠᐸᅠreturnedᅠfromᅠfourᅠᐳᅠㅣlineᅠ23 () {
        }
    }
}

function five () {
    return function returnedᅠfromᅠfiveᅠㅣlineᅠ30 () {
        return function returnedᅠfromᅠᐸᅠreturnedᅠfromᅠfiveᅠᐳᅠㅣlineᅠ31 () {
            return function returnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠfiveᅠᐳᅠᐳᅠㅣlineᅠ32 () {
                foo('blue', function passedᅠintoᅠfooᅠㅣlineᅠ33 () {
                })
            }
        }
    }
}

var six = function asᅠvarᅠsixᅠㅣlineᅠ42 () {
}

var seven = function asᅠvarᅠsevenᅠㅣlineᅠ47 (err, cb) {
    return function returnedᅠfromᅠᐸᅠasᅠvarᅠsevenᅠᐳᅠㅣlineᅠ49 () {
        cb(function passedᅠintoᅠcbᅠㅣlineᅠ50 () {
        })
    }
}

var o = {};
o.eight = function asᅠpropertyᅠeightᅠㅣlineᅠ58 (cb) { }
o.eight(function passedᅠintoᅠoːeightᅠㅣlineᅠ61 () { })
o.eight.nine = function asᅠpropertyᅠnineᅠㅣlineᅠ63 () {}
o.eight.nine(function passedᅠintoᅠeightːnineᅠㅣlineᅠ64 () { })

var o2;
o2 = function asᅠvarᅠo2ᅠㅣlineᅠ68 () { }

;(function IIFEᅠㅣlineᅠ71 () {}())

!function IIFEᅠㅣlineᅠ73 () { }()

function toodeep () {
    return function returnedᅠfromᅠtoodeepᅠㅣlineᅠ78 () {
        return function returnedᅠfromᅠᐸᅠreturnedᅠfromᅠtoodeepᅠᐳᅠㅣlineᅠ79 () {
            return function returnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠtoodeepᅠᐳᅠᐳᅠㅣlineᅠ80 () {
                return function returnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠtoodeepᅠᐳᅠᐳᅠᐳᅠㅣlineᅠ82 () {
                    return function returnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠtoodeepᅠᐳᅠᐳᅠᐳᅠᐳᅠㅣlineᅠ83 () {
                        return function returnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠtoodeepᅠᐳᅠᐳᅠᐳᅠᐳᅠᐳᅠㅣlineᅠ84 () {
                            return function returnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠᐸᅠreturnedᅠfromᅠtoodeepᅠᐳᅠᐳᅠᐳᅠᐳᅠᐳᅠᐳᅠㅣlineᅠ86 () {
                                return function () {
                                    return function () {
                                        return function () {
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
Hope this might help a bit! Cheers!
I am using pm2, which is a process manager for Node.js, together with Rollbar error reporting. I think you should define some metrics for the part of your code this error comes from.
By default the server stops on any uncaughtException. To keep the server running even when there is an uncaught exception, what I have done is create a separate collection for storing errors: the error is saved as soon as an uncaught exception occurs, and the process carries on.
Collection
var mongoose = require('mongoose');

var ErrorSchema = new mongoose.Schema({
    err_Message: { type: String },
    err_Stack: { type: String },
    date: { type: Date }
});

// Compile the schema into a model so errors can be saved in the controller below.
var ErrorModel = mongoose.model('Error', ErrorSchema);
Controller
var moment = require('moment');

process.on('uncaughtException', function (err) {
    console.log(err);
    console.error((new Date).toUTCString() + ' uncaughtException:', err.message);
    console.error(err.stack);
    // Save the error details to the Error collection instead of exiting.
    var newError = new ErrorModel();
    newError.err_Message = err.message;
    newError.err_Stack = err.stack;
    newError.date = moment();
    newError.save(function (saveErr, errData) {
        if (!saveErr)
            console.log('New Error is saved');
        else
            console.log('Error in saving error');
    });
    //process.exit(1)
});
The above approach stores the uncaught exception in the Error collection, and the process/server does not stop.
Hope this helps.

Nodejs async.series not executing all the methods

I am trying to use 'async' for my work, so I have written a sample program to make sure it works. async.parallel() works as expected, but async.series() does not. Not sure what I am missing. Can anyone take a look at this sample code and point out the problem/mistake?
async.series([task1, task2]) is executing 'task1' ONLY.
const async = require('async');

var firstThing = function() {
    setTimeout(function(){ console.log('IN the First thing') }, 1000);
};

var secondThing = function () {
    setTimeout(function(){ console.log('IN the second thing') }, 1500);
};
async.series(
    [
        firstThing,
        secondThing
    ],
    function (err, result) {
        console.log('blah blah ' + result);
    });
When I run this code, I get
IN the First thing
and it exits. Why is the second task not being called? What am I missing?
Thanks.
You have to call back when you finish each of the functions you want to run in series:
const async = require('async');

var firstThing = function (callback) {
    setTimeout(function () {
        console.log('IN the First thing');
        // Signal completion (optionally passing an error and a result) once the work is done.
        callback(null, 'first');
    }, 1000);
};

var secondThing = function (callback) {
    setTimeout(function () {
        console.log('IN the second thing');
        callback(null, 'second');
    }, 1500);
};
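With the callbacks invoked inside the timeouts and an illustrative result passed from each task (the 'first'/'second' strings above are just examples), running the original async.series call now prints both messages in order and then the final callback's output, roughly:
IN the First thing
IN the second thing
blah blah first,second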

Node.js sinon stubbing a function in parallel executions causes failed tests

I have 2 test cases which test the same function just taking 2 different executions paths, so to illustrate:
MyClass.prototype.functionBeingTested = function() {
    if (this.check1()) {
        this.isCheck1Called = true;
    } else if (this.check2()) {
        this.isCheck1Called = false;
    } else {
        ...
    }
};
My 2 test cases are as follow:
it('should take check1() execution path', function() {
    var myClass = new MyClass({}, {}, {});
    var check1Stub = sinon.stub(MyClass.prototype, 'check1');
    check1Stub.returns(true);

    myClass.functionBeingTested();
    myClass.isCheck1Called.should.equal(true);
});
it('should take check2() execution path', function() {
    var myClass = new MyClass({}, {}, {});
    var check2Stub = sinon.stub(MyClass.prototype, 'check2');
    check2Stub.returns(true);

    myClass.functionBeingTested();
    myClass.isCheck1Called.should.equal(false);
});
Now, by default check1() returns false, so I don't stub it in the second test case. But by the time the second case runs, the check1() stub is still active, which sends the second case down the first case's execution path as well and makes the second test fail.
I understand the problem is that the stub created in the first test case is still in place when the second one runs. Is there any way I can solve this?
At the end of the first test, you should restore the original method (which is always a good thing, to prevent tests from being influenced by previous tests):
check1Stub.restore()
Or, alternatively, you can use a Sinon sandbox to run each test in:
describe('MyClass', function() {
    beforeEach(function() {
        this.sinon = sinon.sandbox.create();
    });

    afterEach(function() {
        this.sinon.restore();
    });

    it('should take check1() execution path', function() {
        var myClass = new MyClass({}, {}, {});

        // `this.sinon` is the sandbox
        var check1Stub = this.sinon.stub(MyClass.prototype, 'check1');
        check1Stub.returns(true);

        myClass.functionBeingTested();
        myClass.isCheck1Called.should.equal(true);
    });

    it('should take check2() execution path', function() {
        var myClass = new MyClass({}, {}, {});
        var check2Stub = this.sinon.stub(MyClass.prototype, 'check2');
        check2Stub.returns(true);

        myClass.functionBeingTested();
        myClass.isCheck1Called.should.equal(false);
    });
});
(See mocha-sinon, which does exactly the same)
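On newer Sinon versions (v5 and later), the default sandbox gives the same cleanup without creating one manually; a minimal sketch:
afterEach(function () {
    // sinon.restore() undoes every stub/spy created through the sinon object itself.
    sinon.restore();
});

it('should take check1() execution path', function () {
    var myClass = new MyClass({}, {}, {});
    sinon.stub(MyClass.prototype, 'check1').returns(true);

    myClass.functionBeingTested();
    myClass.isCheck1Called.should.equal(true);
});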

How to create parameterized and reusable gulp tasks

Is there a way to make this generic to the point where I can have one copy of it and pass the config item, and file list into it rather than duplicating it for every file/config combination?
I'd love to have something like
gulp.task('foo_test', function (cb) {
    run_tests(files.foo_list, config.fooCoverage);
    cb();
});
Note on potential oddities in the code:
I'm using lazypipe and gulp-load-plugins (full file here).
// test the server functions and collect coverage data
gulp.task('api_test', function (cb) {
    gulp.src(files.api_files)
        .pipe(istanbulPre())
        .on('end', function () {
            gulp.src(files.api_test_files)
                .pipe(mochaTask())
                .pipe(istanbulAPI())
                .on('end', cb);
        });
});
var istanbulAPI = lazypipe()
    .pipe(plugins.istanbul.writeReports, config.apiCoverage);

config = {
    apiCoverage: {
        reporters: ['json'],
        reportOpts: {
            json: {
                dir: 'coverage',
                file: 'coverage-api.json'
            }
        }
    },
    // ...
};
Gulp is just JavaScript.
You can write plain old regular functions, just as you normally would:
function run_tests(srcFiles, srcTestFiles, coverageConfig, cb) {
    var istanbul = lazypipe()
        .pipe(plugins.istanbul.writeReports, coverageConfig);

    gulp.src(srcFiles)
        .pipe(istanbulPre())
        .on('end', function () {
            gulp.src(srcTestFiles)
                .pipe(mochaTask())
                .pipe(istanbul())
                .on('end', cb);
        });
}

gulp.task('unit_test', function (cb) {
    run_tests(files.lib_files, files.unit_test_files, config.unitCoverage, cb);
});

gulp.task('api_test', function (cb) {
    run_tests(files.api_files, files.api_test_files, config.apiCoverage, cb);
});
Note that the callback cb is just another parameter that is passed to the run_tests function. If it were called immediately after calling run_tests, that would signal task completion to gulp before the asynchronous code in run_tests has actually finished.
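In other words, a variant like the following (hypothetical) would end the task too early, because cb() runs synchronously while the streams started inside run_tests are still flowing:
// Anti-pattern: gulp is told the task is done before the streams have finished.
gulp.task('api_test', function (cb) {
    run_tests(files.api_files, files.api_test_files, config.apiCoverage, function () {
        // the real completion happens here, but nothing is waiting for it
    });
    cb(); // signals completion immediately
});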
Lazypipe was one solution (and there are alternatives), but since Gulp 4 these approaches no longer seem to work: Gulp 4 does not pass a stream to pipe functions, even though gulp.src(...) still returns a stream.
Another nice feature of Gulp 4 is that functions returning Promises can also be tasks.
So in the end I came up with this solution that worked for me, with my gulpfile.js looking something like this:
const {
    src,
    dest,
    series,
    parallel
} = require('gulp');
// other gulp packages omitted in this example...

const myReusableJsParser = (sources, destination) => {
    return src(sources)
        .pipe(stripComments({...}))
        .pipe(obfuscator({compact: true}))
        .pipe(...) // etc
        .pipe(dest(destination));
};

const parseScriptsX = () => {
    return myReusableJsParser('./js/x/**/*.js', './dist/js/x/');
};

const parseScriptsY = () => {
    return myReusableJsParser('./js/y/**/*.js', './dist/js/y/');
};

// more
...

const cleanUp = () => new Promise((resolve, reject) => {
    try {
        deleteFiles('./dist/').then(resolve).catch(reject);
    } catch (err) {
        reject(err);
    }
});

// export
module.exports = {
    default: series(cleanUp, parallel(parseScriptsX, parseScriptsY), parallel(...)),
    ...,
    clean: cleanUp
};

Using a node module within a Grunt Task fails

I'm trying to extract meta data from files read within a Grunt task.
Executing node test.js on this file:
var exif = require('exif2');

exif('fixtures/forest.png', function (err, o) {
    console.log(arguments);
});
Produces the expected output
However, executing the grunt process: grunt projectJSON
module.exports = function (grunt) {
    var exif = require('exif2');

    return grunt.registerMultiTask("projectJSON", "Creates project JSON file.", function () {
        exif('fixtures/forest.png', function (err, o) {
            console.log(arguments);
        });
    });
}
(Note that I am just testing with the fixtures/forest.png file.)
Produces no output whatsoever. The callback isn't even fired.
When I console.log exif, I get: [Function]
What am I missing? I think the reason it doesn't work has something to do with the Grunt task, but I have no idea how to fix it. Wrapping it in a try-catch block produces nothing.
You need to make your projectJSON task asynchronous - Grunt is exiting before your exif callback is being invoked.
Have a look at the Grunt documentation on asynchronous tasks.
This is how you can make your task asynchronous:
module.exports = function (grunt) {
    var exif = require('exif2');

    grunt.registerMultiTask("projectJSON", "Creates project JSON file.", function () {
        // Make task asynchronous.
        var done = this.async();

        exif('fixtures/forest.png', function (err, o) {
            console.log(arguments);

            // Invoke the task callback to continue with
            // other Grunt tasks.
            done();
        });
    });
}
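One more thing worth checking (not part of the original answer): a task registered with registerMultiTask only runs against configured targets, so the Gruntfile needs at least one target for projectJSON, something along these lines (the target name and file list are hypothetical):
module.exports = function (grunt) {
    grunt.initConfig({
        projectJSON: {
            // Multi tasks iterate over named targets; 'main' is only an example.
            main: {
                src: ['fixtures/forest.png']
            }
        }
    });

    // Load the file that calls grunt.registerMultiTask('projectJSON', ...); the path here is an assumption.
    grunt.loadTasks('tasks');
};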
