I'm trying to make my tests run each time I save certain files. Here are my gulp tasks:
var gulp = require('gulp');
var jasmine = require('gulp-jasmine');

gulp.task('jasmine', function() {
    return gulp.src('spec/nodejs/*Spec.js')
        .pipe(jasmine({verbose: true, includeStackTrace: true}));
});
gulp.task('watch', function () {
    gulp.watch(['app/*.js', 'app/!(embed)**/*.js', 'spec/nodejs/*.js'], ['jasmine']);
});
To test, for example, app/maps.js, I create a spec/nodejs/mapsSpec.js file like this:
'use strict';
var maps = require('../../app/maps');

describe('/maps related routes', function(){
    it('should ...', function(){ /* ... */ });
    // ...
});
If I change a spec file, everything works well. If I modify the app/maps.js file, the change triggers the tests. But if I modify it again, the tests are triggered while the modifications don't take effect. For example, if I add a console.log('foo') the second time, I won't see it until I relaunch gulp watch and save the file again. So only one run of jasmine works when using it with gulp.watch.
I guess it's because require is cached by Node.js in the gulp process. So what should I do?
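For context, here's a minimal sketch of the caching behaviour I suspect is the culprit:
// Node.js caches a module by its resolved path on first require
var maps = require('../../app/maps'); // runs app/maps.js
var again = require('../../app/maps'); // served from require.cache; the file is not re-read
// deleting the cache entry forces a fresh load on the next require
delete require.cache[require.resolve('../../app/maps')];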
I took a look at the code of gulp-jasmine. The problem is that the only file removed from the cache is the *Spec.js file itself. The cache entries of its children (the required files under test) aren't cleared.
Within the index.js of gulp-jasmine there is a line which deletes the cache:
delete require.cache[require.resolve(path.resolve(file.path))];
If you put the next block of code before that delete, you will clear all the children's cache entries as well, and the tests will run correctly after every save:
// look up the spec file's cache entry and delete each child module it required
var cached = require.cache[require.resolve(path.resolve(file.path))];
if (typeof cached !== 'undefined') {
    for (var i in cached.children) {
        delete require.cache[cached.children[i].id];
    }
}
You can make this change directly in node_modules for now.
I will open a pull request, so this may be solved permanently in the near future.
I also wrote a post about it: http://navelpluisje.nl/entry/fix-cache-problem-jasmine-tests-with-gulp
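Note that this only clears direct children: if the modules under test require further local modules of their own, those grandchildren stay cached. A sketch of a recursive variant, with a guard against circular requires (my own extension, not part of the gulp-jasmine patch):
function clearModuleTree(id, seen) {
    seen = seen || {};
    if (seen[id]) return; // guard against circular requires
    seen[id] = true;
    var entry = require.cache[id];
    if (!entry) return;
    entry.children.forEach(function (child) {
        // skip node_modules so we don't blow away the whole dependency graph
        if (child.id.indexOf('node_modules') === -1) {
            clearModuleTree(child.id, seen);
        }
    });
    delete require.cache[id];
}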
I haven't found a fix for this issue, but you can work around it via the gulp-shell task.
npm install gulp-shell --save-dev
then
var shell = require('gulp-shell');
...
gulp.task('jasmine', function() {
    return gulp.src('spec/nodejs/*Spec.js')
        .pipe(shell('minijasminenode spec/*Spec.js'));
});
You'll also need minijasminenode installed as a direct dependency, since the shell command invokes its CLI directly (gulp-jasmine uses minijasminenode internally, but only as its own dependency).
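The extra install step would be something like:
npm install minijasminenode --save-dev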
The following Gulp task does almost what I want.
const gulp = require('gulp');
const browserify = require('browserify');
const vinylStream = require('vinyl-source-stream');
const vinylBuffer = require('vinyl-buffer');
const watchify = require('watchify');
const glob = require('glob');
const jasmineBrowser = require('gulp-jasmine-browser');
gulp.task('test', function() {
    let testBundler = browserify({
        entries: glob.sync('src/**/*-test.js'),
        cache: {},
        packageCache: {},
    }).plugin(watchify);

    // placeholder; any bundle file name works here
    const jsBundleName = 'specs.js';

    function updateSpecs() {
        return testBundler.bundle()
            .pipe(vinylStream(jsBundleName))
            .pipe(vinylBuffer())
            .pipe(jasmineBrowser.specRunner({console: true}))
            .pipe(jasmineBrowser.headless({driver: 'phantomjs'}));
    }

    testBundler.on('update', updateSpecs);
    updateSpecs();
});
It bundles all my Jasmine specs using Browserify and has them tested through gulp-jasmine-browser. It also watches all specs and all modules that they depend on and re-runs the tests if any of these modules changes.
The only ugly bit, which I'd really like to see solved, is that a new PhantomJS instance and a new Jasmine server are created every time updateSpecs is run. I was hoping to avoid that with code like the following:
gulp.task('test', function() {
    let testBundler = browserify({
        entries: glob.sync('src/**/*-test.js'),
        cache: {},
        packageCache: {},
    }).plugin(watchify);

    // persist the Jasmine server and PhantomJS browser
    let testServer = jasmineBrowser.headless({driver: 'phantomjs'});

    function updateSpecs() {
        return testBundler.bundle()
            .pipe(vinylStream(jsBundleName))
            .pipe(vinylBuffer())
            .pipe(jasmineBrowser.specRunner({console: true}))
            .pipe(testServer);
    }

    testBundler.on('update', updateSpecs);
    updateSpecs();
});
Alas, this doesn't work. Right after starting the task, all tests run fine, but the next time updateSpecs is called, I get a write after end error and the task exits with status 1. This error originates from the readable-stream Node module.
As I understand it, the end event during the first run of updateSpecs leaves testServer in a state in which it doesn't accept any new inputs. Unfortunately, the Node.js streams documentation isn't very clear on how to remedy this.
I have tried breaking the pipe chain at a different place, but I got the same result, which seems to indicate this is universal behaviour for streams. I also tried stopping the end event from propagating by inserting a through-stream that didn't re-emit that event, but this prevented the tests from being run at all. Finally, I tried returning the testServer stream from the task; this stopped the error, but although the updateSpecs function gets called every time the sources change, the tests are only being run the first time the task starts. This time, the testServer simply seems to ignore the new input.
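For reference, the through-stream attempt can also be expressed with the built-in { end: false } option to .pipe(), which stops end from propagating to the destination; as described, this keeps the tests from ever running, because the spec runner needs the end signal:
function updateSpecs() {
    return testBundler.bundle()
        .pipe(vinylStream(jsBundleName))
        .pipe(vinylBuffer())
        .pipe(jasmineBrowser.specRunner({console: true}))
        // don't forward end() to the persistent server stream
        .pipe(testServer, { end: false });
}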
The gulp-jasmine-browser documentation suggests that the following code would work:
var watch = require('gulp-watch');

gulp.task('test', function() {
    var filesForTest = ['src/**/*.js', 'spec/**/*-test.js'];
    return gulp.src(filesForTest)
        .pipe(watch(filesForTest))
        .pipe(jasmineBrowser.specRunner())
        .pipe(jasmineBrowser.server());
});
And it goes on to suggest that you can also make this work with Browserify, but this isn't illustrated. Apparently, gulp-watch does something which causes the follow-up pipes to accept updated inputs later. How can I imitate this behaviour with watchify?
As it turns out, it is a hard rule in Node.js that you cannot write after the end event. In addition, jasmineBrowser.specRunner(), .server() and .headless() must receive the end signal in order to actually test anything. This restriction is inherited from the official Jasmine test runner.
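A minimal sketch of that rule, independent of gulp, using a plain PassThrough stream:
var stream = require('stream');

var pass = new stream.PassThrough();
pass.on('error', function (err) {
    console.log(err.message); // "write after end"
});
pass.end('first run');
pass.write('second run'); // triggers the error above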
The example with gulp-watch from the README doesn't actually work, either, for the same reason. In order to make it work, one would have to do something similar to the working version of my watchify code in the question:
gulp.task('test', function() {
    var filesForTest = ['src/**/*.js', 'spec/**/*-test.js'];

    function runTests() {
        return gulp.src(filesForTest)
            .pipe(jasmineBrowser.specRunner())
            .pipe(jasmineBrowser.server());
    }

    watch(filesForTest).on('add change unlink', runTests);
});
(I didn't test it, but something very close to this should work.)
So whatever watching mechanism you're using, you'll always need to call .specRunner() and .server() again for every cycle. The good news is that apparently, the Jasmine server will be reused if you explicitly pass a port number:
.pipe(jasmineBrowser.server({port: 8080}));
This also applies to .headless().
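Putting the pieces together, a sketch of the watchify setup from the question with per-cycle .specRunner()/.headless() calls and a fixed port (8080 is an arbitrary choice):
function updateSpecs() {
    return testBundler.bundle()
        .pipe(vinylStream(jsBundleName))
        .pipe(vinylBuffer())
        .pipe(jasmineBrowser.specRunner({console: true}))
        // fresh streams every cycle; the server and browser are reused
        // because the port stays the same
        .pipe(jasmineBrowser.headless({driver: 'phantomjs', port: 8080}));
}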
I need to alter a stream of files to contain a different base folder name. I thought the gulp-rename plugin would allow for this, but it only seems to replace the glob portion.
Example:
gulp.task("test", function() {
gulp.src("bower_components/**/*", { base: "bower_components", read:false })
.pipe($.rename(function (p) { p.dirname = "X/" + p.dirname; }))
.pipe($.print());
});
outputs:
[gulp] bower_components\X\jquery\test\data\offset\scroll.html
[gulp] bower_components\X\jquery\test\data\offset\static.html
[gulp] bower_components\X\jquery\test\data\offset\table.html
...
I want
[gulp] X\jquery\test\data\offset\scroll.html
[gulp] X\jquery\test\data\offset\static.html
[gulp] X\jquery\test\data\offset\table.html
...
Is there a way to do this with gulp-replace, or some other plugin?
I believe you could do this with gulp-tap to get hold of the file instances and alter properties on them before they get printed, or use it to print them.
Out of curiosity, what are you aiming to do?
Hope that helps!
EDIT-1::
The following is a slightly modified version of the example in the gulp-tap documentation which may work for your use case.
gulp.src("src/**/*.{coffee,js}")
.pipe(tap(function(file, t) {
file.path = 'X/' + file.path;
}))
.pipe($.print())
.pipe(gulp.dest('build'));
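If the goal is for X to replace the original base rather than sit under it, here is a hedged variant of the same tap approach; file.relative is the path below the base passed to gulp.src, and rebasing onto the cwd should make the printed paths start at X:
var path = require('path');
var tap = require('gulp-tap');

gulp.src("bower_components/**/*", { base: "bower_components", read: false })
    .pipe(tap(function (file) {
        // rebuild the path under X/, dropping the bower_components prefix
        file.path = path.join(file.cwd, 'X', file.relative);
        file.base = file.cwd;
    }))
    .pipe($.print());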
EDIT-2::
This is a common task I have set up in my projects for handling external scripts (note: I am using gulp-load-plugins, hence invoking my plugins with plugins.<NAME>):
gulp.task('vendor:scripts:publish', function() {
    return gulp.src(sources.vendor.js)
        .pipe(plugins.plumber())
        .pipe(plugins.concat('vendor.js'))
        .pipe(gulp.dest(destinations.js))
        .pipe(plugins.uglify())
        .pipe(plugins.rename(pluginOpts.rename))
        .pipe(gulp.dest(destinations.js));
});
destinations and sources are two variables that I have defined in a config file for my gulpfile.
But for clarity, sources.vendor.js points at an array much like the following:
js: [
    'src/vendor/jquery/dist/jquery.js',
    'src/vendor/lodash/lodash.js',
    'src/vendor/backbone/backbone.js'
],
The reason my folder is named vendor and not bower_components is because I've made use of a .bowerrc file to point my bower installation at a different folder.
In addition, if you have discrete scripts that you may not want to include all of the time, you can make use of gulp-util and gulp-filter to filter out certain scripts depending on whether an option is passed when gulp is invoked on the CLI.
For example: having gulp vendor:scripts:publish include all scripts, but gulp vendor:scripts:publish --release omit the discrete scripts.
This then requires modifying your task to declare a filter that is piped in based on an option flag being picked up by gulp-util.
var isRelease = !!plugins.util.env.release;

gulp.task('vendor:scripts:publish', function() {
    var discreteFilter = plugins.filter([
        '**/*.js',
        '!**/discrete.min.js'
    ]);

    return gulp.src(sources.vendor.js)
        .pipe(plugins.plumber())
        .pipe(isRelease ? discreteFilter : plugins.util.noop())
        .pipe(plugins.concat('vendor.js'))
        .pipe(gulp.dest(destinations.js))
        .pipe(plugins.uglify())
        .pipe(plugins.rename(pluginOpts.rename))
        .pipe(gulp.dest(destinations.js));
});
Hope that helps you out!
I need to run some code after nodeunit has successfully passed all tests.
I'm testing some Firebase wrappers, and the open Firebase reference keeps nodeunit from exiting after all tests are run.
I am looking for some hook or callback that runs after all unit tests have passed, so I can terminate the Firebase process and let nodeunit exit.
I haven't found the right way to do it.
Here is my temporary solution:
// Put a *LAST* test to clear everything if needed:
exports.last_test = function(test){
    // do_clear_all_things_if_needed();
    setTimeout(process.exit, 500); // exit in 500 milliseconds
    test.done();
};
In my case, this is used to make sure the DB connection or some other network connection gets killed either way. The reason it works is that nodeunit runs tests in series.
It's not the best way, maybe not even a good one, but it lets the tests exit.
For nodeunit 0.9.0
For a recent project, we counted the tests by iterating exports, then used tearDown to count the completions. After the last test exits, we called process.exit().
See the spec for full details. Note that this went at the end of the file (after all the tests were added onto exports)
(function(exports) {
    // firebase is holding open a socket connection;
    // this just ends the process to terminate it
    var total = 0, expectCount = countTests(exports);

    exports.tearDown = function(done) {
        if( ++total === expectCount ) {
            setTimeout(function() {
                process.exit();
            }, 500);
        }
        done();
    };

    function countTests(exports) {
        var count = 0;
        for(var key in exports) {
            if( key.match(/^test/) ) {
                count++;
            }
        }
        return count;
    }
})(exports);
As per the nodeunit docs, I can't seem to find a way to provide a callback after all tests have run.
I suggest that you use Grunt so you can create a test workflow with tasks, for example:
Install the command line tool: npm install -g grunt-cli
Install grunt to your project npm install grunt --save-dev
Install the nodeunit grunt plugin: npm install grunt-contrib-nodeunit --save-dev
Create a Gruntfile.js like the following:
module.exports = function(grunt) {
    grunt.initConfig({
        nodeunit: {
            all: ['tests/*.js'] // point to where your tests are
        }
    });

    grunt.loadNpmTasks('grunt-contrib-nodeunit');

    grunt.registerTask('test', [
        'nodeunit'
    ]);
};
Create your custom task that will run after the tests by changing your Gruntfile to the following:
var fs = require('fs');
var os = require('os');

module.exports = function(grunt) {
    grunt.initConfig({
        nodeunit: {
            all: ['tests/*.js'] // point to where your tests are
        }
    });

    grunt.loadNpmTasks('grunt-contrib-nodeunit');

    // this is just an example; you can do whatever you want
    grunt.registerTask('generate-build-json', 'Generates a build.json file containing date and time info of the build', function() {
        fs.writeFileSync('build.json', JSON.stringify({
            platform: os.platform(),
            arch: os.arch(),
            timestamp: new Date().toISOString()
        }, null, 4));
        grunt.log.writeln('File build.json created.');
    });

    grunt.registerTask('test', [
        'nodeunit',
        'generate-build-json'
    ]);
};
Run your test tasks with grunt test
I came across another way to deal with this. All the answers here are correct; however, when inspecting Grunt I found out that it runs nodeunit tests via a reporter, and the reporter offers a callback that fires when all tests are finished. It can be done something like this:
In a folder test_scripts/, create a file some_test.js, which can contain something like this:
// loads the default reporter, but any other can be used
var reporter = require('nodeunit').reporters.default;
// safer exit, but process.exit(0) will do the same in most cases
var exit = require('exit');

reporter.run(['test/basic.js'], null, function(){
    console.log('now the tests are finished');
    exit(0);
});
The script can then be added to, say, the scripts object of package.json:
"scripts": {
"nodeunit": "node scripts/some_test.js",
},
Now it can be run with:
npm run nodeunit
The tests in some_test.js can be chained, or they can be run one by one using npm.
So basically this is what I want to do: have a grunt script that compiles my coffee files to JS, then runs the node server, and then, either after the server closes or while it's still running, deletes the JS files that resulted from the compilation, keeping only the .coffee ones.
I'm having a couple of issues getting it to work. Most importantly, the way I'm currently doing it is this:
grunt.loadNpmTasks("grunt-contrib-coffee");
grunt.registerTask("node", "Starting node server", function () {
var done = this.async();
console.log("test");
var sp = grunt.util.spawn({
cmd: "node",
args: ["index"]
}, function (err, res, code) {
console.log(err, res, code);
done();
});
});
grunt.registerTask("default", ["coffee", "node"]);
The problem here is that the node server isn't run in the same process as grunt. This matters because I can't just press CTRL-C once to terminate JUST the node server.
Ideally, I'd like to have it run in the same process and have the grunt script pause while it's waiting for me to CTRL-C the server. Then, after it's finished, I want grunt to remove the said files.
How can I achieve this?
Edit: Note that the snippet doesn't have the actual removal implemented since I can't get this to work.
If you keep the variable sp in a more global scope, you can define a task node:kill that simply checks whether sp === null (or similar), and if not, does sp.kill(). Then you can simply run the node:kill task after your testing task. You could additionally invoke a separate task that just deletes the generated JS files.
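A minimal sketch of that idea against the question's task (the node:kill task name is my own):
var sp = null; // keep the spawned server handle in the gruntfile's scope

grunt.registerTask("node", "Starting node server", function () {
    var done = this.async();
    sp = grunt.util.spawn({
        cmd: "node",
        args: ["index"]
    }, function (err, res, code) {
        done();
    });
});

grunt.registerTask("node:kill", "Stopping node server", function () {
    if (sp !== null) {
        sp.kill();
        sp = null;
    }
});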
For something similar I used grunt-shell-spawn in conjunction with a shutdown listener.
In your grunt initConfig:
shell: {
    runSuperCoolJavaServer: {
        command: 'java -jar mysupercoolserver.jar',
        options: {
            async: true // spawn it instead!
        }
    }
},
Then outside of initConfig, you can set up a listener for when the user ctrl+c's out of your grunt task:
grunt.registerTask("superCoolServerShutdownListener",function(step){
var name = this.name;
if (step === 'exit') process.exit();
else {
process.on("SIGINT",function(){
grunt.log.writeln("").writeln("Shutting down super cool server...");
grunt.task.run(["shell:runSuperCoolJavaServer:kill"]); //the key!
grunt.task.current.async()();
});
}
});
Finally, register the tasks:
grunt.registerTask('serverWithKill', [
    'shell:runSuperCoolJavaServer',
    'superCoolServerShutdownListener'
]);
I'm writing a node.js program that will watch a directory filled with a large number (300-ish) of scss projects. grunt-watch (run either through the node module or on its own, whatever works) will be configured so that whenever a scss file is changed, it is compiled with compass and the output file moved to a separate directory, for example:
./1234/style.scss was changed >> grunt-watch runs grunt-compass >> /foo/bar/baz/1234/style.css updated
The project directory that the file was in is obviously very important (if grunt-compass sent all the compiled files to the same directory, they would be jumbled and unusable, and the grunt automation would be purposeless). In order to make sure all files are routed to the correct place, I am dynamically changing the grunt-compass settings every time an scss file is updated.
Sample gruntfile:
module.exports = function(grunt) {
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        watch: {
            files: './*/*.scss',
            tasks: ['compass']
        },
        compass: {
            origin: {
                options: {
                    // temporary settings to be changed later
                    sassDir: './',
                    cssDir: './bar',
                    specify: './foo.scss'
                }
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-watch');
    grunt.loadNpmTasks('grunt-contrib-compass');

    grunt.event.on('watch', function(action, filepath, target) {
        var path = require('path');
        grunt.log.writeln(target + ': ' + filepath + ' might have ' + action);
        var siteDirectory = path.dirname(filepath);

        // change the sass directory to that of the changed file
        var option = 'compass.origin.options.sassDir';
        var result = __dirname + '/' + siteDirectory;
        grunt.log.writeln(option + ' changed to ' + result);
        grunt.config(option, result);

        // customize the css output directory so that the file goes to the correct place
        option = 'compass.origin.options.cssDir';
        result = path.resolve(__dirname, '../', siteDirectory);
        grunt.log.writeln(option + ' changed to ' + result);
        grunt.config(option, result);

        //grunt.task.run(['compass']);
    });
};
However, this doesn't work. If you run grunt watch in verbose mode, you will see that grunt runs the grunt.event.on function and the watch task in separate processes. The second parsing of the gruntfile reverts all my event.on config changes to the defaults above, and compass fails to run.
As seen in the event.on comments, I attempted to add a grunt.task.run() call to make sure compass runs in the same process as the event.on function, which would preserve my config changes. However, the task refused to run, likely because I'm doing it wrong.
Unfortunately, the grunt.event.on variables are not sent to the defined grunt-watch task, otherwise I could write a custom function that would change the compass settings and then run compass in the same process.
I've tried implementing this without grunt, using the watch function built into compass, but compass can only store one static output path per project and can only watch one project at a time.
I have currently gotten around this issue with a node program that takes the site name as a parameter, rewrites gruntfile.js using fs, and then runs grunt watch via an exec call. This, however, has its own drawbacks (I can't view the grunt.log data) and is horribly convoluted, so I'd like to change it.
Thank you so much for any insight.
You need to specify
options : { nospawn : true }
in your watch task config to have the watch run in the same context:
watch: {
    files: './*/*.scss',
    tasks: ['compass'],
    options: { nospawn: true }
}
See this section of documentation for more info on this.
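One caveat worth checking against the version you have installed: newer releases of grunt-contrib-watch renamed this flag, so the equivalent config there would be:
watch: {
    files: './*/*.scss',
    tasks: ['compass'],
    options: { spawn: false } // newer name for nospawn: true
}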