In grunt, if I'm watching multiple files and two or more change, how can I only run tasks on the changed files?

I've got an initConfig with this code in it:
grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    watch: {
        options: {
            spawn: false
        },
        coffee: {
            files: [
                'src/**/*.coffee'
            ],
            tasks: ['coffee', 'coffeelint', 'concat', 'qunit']
        },
        ...
    coffee: {
        glob_to_multiple: {
            expand: true,
            flatten: false,
            cwd: '.',
            src: ['src/**/*.coffee'],
            ext: '.js'
        }
    },
    ...
grunt.event.on('watch', function (action, filepath) {
    if (grunt.file.isMatch("**/*.coffee", filepath)) {
        grunt.config(['coffee', 'glob_to_multiple', 'src'], filepath);
    }
});
This is supposed to compile only the .coffee files that have changed, and it works pretty well. But I just noticed that if I modify multiple files at once, it will output this:
Waiting...
src\test\resources\app\js\FILE1.coffee
src\main\resources\app\js\FILE2.coffee
OK
>> File "src\test\resources\app\js\FILE1.coffee" changed.
>> File "src\main\resources\app\js\FILE2.coffee" changed.
Running "coffee:glob_to_multiple" (coffee) task
File src/main/resources/app/js/FILE2.js created.
...
As you can see, I've changed two files, but it's only running the tasks on "FILE2.js". How can I avoid this? I want it to run coffee:glob_to_multiple on FILE1 and FILE2, not just one of them.
NOTE: I'm pretty sure the documentation explains how to do this:
If you save multiple files simultaneously you may opt for a more robust method:
var changedFiles = Object.create(null);
var onChange = grunt.util._.debounce(function() {
    grunt.config(['jshint', 'all'], Object.keys(changedFiles));
    changedFiles = Object.create(null);
}, 200);
grunt.event.on('watch', function(action, filepath) {
    changedFiles[filepath] = action;
    onChange();
});
Following that documentation, I made this change to my code:
var changedFiles = Object.create(null);
var onChange = grunt.util._.debounce(function() {
    grunt.config(['coffee', 'glob_to_multiple', 'src'], Object.keys(changedFiles));
    changedFiles = Object.create(null);
}, 200);
grunt.event.on('watch', function(action, filepath) {
    if (grunt.file.isMatch("**/*.coffee", filepath)) {
        changedFiles[filepath] = action;
        onChange();
    }
});
And things worked exactly the way I want. But I'm not sure how this works. Could someone explain it to me?

It's a pretty sophisticated solution using Lo-Dash's debounce ;-) (explained in a moment...)
Note that with your older code:
grunt.config(['coffee', 'glob_to_multiple', 'src'], filepath);
Grunt is instructed to run the coffee task with the new file. The problem is that this is a synchronous process, so when another file is changed (which usually happens within milliseconds), Grunt Watch won't allow you to run another process until the debounceDelay has been reached.
The default debounceDelay is 500 ms, but this can be changed via the watch task's options (read more about options.debounceDelay).
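For instance, a minimal sketch of raising that delay in the watch options (the 1000 ms value is just for illustration):
watch: {
    options: {
        spawn: false,
        debounceDelay: 1000 // how long (ms) to wait before re-emitting change events for the same file
    },
    ...
}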
Basically, when you save multiple files, as you saw, only one of the saved files gets processed. To bypass this, a great utility for delaying (debouncing) a function call is grunt.util._.debounce (it comes from Lo-Dash).
The function's parameters are:
_.debounce(func, wait, options)
So it takes in the function, how many ms to wait, and some options (that we don't need here).
When you call the debounce utility it delays the execution of the function by the wait time, and that way, when you save multiple files at once, all the calls collapse into a single function call after that 200 ms period.
That way, the most useful line here, besides the debounce utility itself, is the following:
changedFiles[filepath] = action;
which adds each file to the (initially empty) changedFiles object. Notice that after the debounced function has run, we reset the changedFiles object so that the next call contains only freshly changed files.
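To see the mechanics in isolation, here is a small standalone sketch (plain Node, with a hand-rolled debounce standing in for grunt.util._.debounce) showing three rapid saves collapsing into one batched call:
// minimal debounce: restarts the timer on every call, so the wrapped
// function fires only once, after `wait` ms of silence
function debounce(fn, wait) {
    var timer;
    return function () {
        clearTimeout(timer);
        timer = setTimeout(fn, wait);
    };
}

var changedFiles = Object.create(null);
var onChange = debounce(function () {
    // one batched call with ALL files collected since the last run
    console.log('compile:', Object.keys(changedFiles));
    changedFiles = Object.create(null); // reset for the next batch
}, 200);

// simulate three files saved within milliseconds of each other
['FILE1.coffee', 'FILE2.coffee', 'FILE3.coffee'].forEach(function (f) {
    changedFiles[f] = 'changed';
    onChange();
});
// after ~200 ms this prints: compile: [ 'FILE1.coffee', 'FILE2.coffee', 'FILE3.coffee' ]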
Amazing solution indeed ;-)

Related

How to setup gulp-watch with gulp-connect livereload?

Most questions and answers on this site do not contain an easy-to-follow general approach to using these two libraries together.
So, given that we use the gulp-connect npm package and want to make use of the gulp-watch npm package, how do we set it up so that we can:
watch changes in some files
perform some operation, like building / compiling those files
live-reload the server once the building is done
First, you will define your build task. This can have pre-required tasks, can be a task of some sort, it doesn't matter.
gulp.task('build', ['your', 'tasks', 'here']);
Then, you will need to activate the connect server. It is important that you are serving the result of the compilation (in this example, the dist directory) and you're enabling livereload with the livereload: true parameter.
const connect = require('gulp-connect');
gulp.task('server', function() {
return connect.server({
root: 'dist',
livereload: true
});
});
Finally, you will setup your watch logic. Note that we're using watch and not gulp.watch. If you decide to change it, notice that their APIs are different and they have different capabilities. This example uses gulp-watch.
const watch = require('gulp-watch');
gulp.task('watch-and-reload', ['build'], function() {
watch(['src/**'], function() {
gulp.start('build');
}).pipe(connect.reload());
});
gulp.task('watch', ['build', 'watch-and-reload', 'server']);
The watch-and-reload task depends on the build task, which ensures at least one build runs.
Then, it will watch for your source files, and in the callback, it will start the build task. This callback gets executed every time that a file is changed in the directory. You could pass an options object to the watch method to be more specific. Check the usage API in their repository.
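For instance, a minimal sketch of passing such options (events and ignoreInitial are chokidar-backed options that gulp-watch accepts; verify the exact set against their README):
watch(['src/**'], { events: ['add', 'change'], ignoreInitial: true }, function() {
    gulp.start('build');
});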
Also, you will need to start the build action, for which we're using gulp.start. This is not the recommended approach and will eventually be deprecated, but so far it works. Most questions about these issues on StackOverflow look for an alternative workaround that changes the approach. (See related questions.)
Notice that gulp.start is called synchronously. This is what you want, since you want to allow the build task to finish before you proceed with the event stream.
And finally, you can use the event stream to reload the page. The event stream will correctly capture what files changed and will reload those.
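With this setup, running gulp watch builds once, starts the server on dist with livereload enabled, and every subsequent change under src triggers a rebuild followed by a reload.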
Bringing this up to speed, as per the current stable gulp release:
The gulp.task API isn't the recommended pattern anymore. Use the exports object to create public tasks.
From official documentation: https://gulpjs.com/docs/en/api/task#task
To configure watch and livereload you need the following:
gulp.watch
gulp-connect
The watch function is available in the gulp module itself.
Install gulp-connect using npm install --save-dev gulp-connect.
To configure the gulp-connect server for livereload, set the livereload property to true.
Run all tasks, followed by a task that calls the watch function with globs and a task; any change to files matching the globs triggers the task passed to watch().
The task passed to watch() must signal async completion, or it will not run a second time. Put simply: it should call a callback or return a stream or a promise (see the sketch after these steps).
Once watch() is configured, append .pipe(connect.reload()) after .pipe(dest(...)) wherever the files created by dest should trigger a reload.
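As a quick illustration of those completion styles (a sketch with made-up task names):
const { src, dest } = require("gulp");

// 1) call the provided callback when done
function clean(cb) {
    // ... do work ...
    cb();
}

// 2) return a stream; gulp waits for it to end
function html() {
    return src("src/*.html").pipe(dest("dist/"));
}

// 3) return a promise
function report() {
    return Promise.resolve();
}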
Here is a simple working gulpfile.js with connect livereload:
const { src, dest, watch, series, parallel } = require("gulp");
const htmlmin = require("gulp-htmlmin");
const gulpif = require("gulp-if");
const rename = require('gulp-rename');
const uglify = require("gulp-uglify"); // needed by js() below; missing from the original requires
const connect = require("gulp-connect");

// environment variable NODE_ENV --> set NODE_ENV=production for production
// to minify html and perform anything else related to prod
const mode = process.env.NODE_ENV || 'dev';
var outDir = (mode != 'dev') ? 'dist/prod' : 'dist/';
const htmlSources = ['src/*.html'];

function html() {
    return src(htmlSources)
        .pipe(gulpif(
            mode.toLowerCase() != 'dev',
            htmlmin({
                removeComments: true,
                collapseWhitespace: true,
                minifyCSS: true,
                minifyJS: true
            })
        ))
        .pipe(dest(outDir))
        .pipe(connect.reload());
}

function js() {
    return src('src/*.js')
        .pipe(uglify())
        .pipe(rename({ extname: '.min.js' }))
        .pipe(dest(outDir))
        .pipe(connect.reload());
}

function server() {
    return connect.server({
        port: 8000,
        root: outDir,
        livereload: true
    });
}

function watchReload() {
    let tasks = series(html, js);
    watch(["src/**"], tasks);
}

exports.html = html;
exports.js = js;
exports.dev = parallel(html, js, server, watchReload);
Configure the connect server with the livereload property:
function server() {
    return connect.server({
        port: 8000,
        root: outDir,
        livereload: true // essential for live reload
    });
}
Notice .pipe(connect.reload()) in the above code. It is essential that the stream of files is piped to connect.reload(); calling connect.reload() arbitrarily may not work.
function html() {
    return src(htmlSources)
        .pipe(gulpif(
            mode.toLowerCase() != 'dev',
            htmlmin({
                removeComments: true,
                collapseWhitespace: true,
                minifyCSS: true,
                minifyJS: true
            })
        ))
        .pipe(dest(outDir))
        .pipe(connect.reload()); // keep it if you want livereload, else discard
}
Since we configured the public task dev, the following command will execute all tasks, followed by connect and watchReload:
gulp dev

Run a gulp task on multiple sets of files

I have a gulp task that I would like to run on multiple sets of files. My problem is pretty much similar to what is described here except that I define my sets of files in an extra config.
What I've come up with so far looks like the following:
config.json
{
    "files": {
        "mainScript": [
            "mainFileA.js",
            "mainFileB.js"
        ],
        "extraAdminScript": [
            "extraFileA.js",
            "extraFileB.js"
        ]
    }
}
gulpfile.js
var config = require('./config.json');
...
gulp.task('scripts', function() {
    var features = [],
        dest = (argv.production ? config.basePath.compile : config.basePath.build) + '/scripts/';
    for (var feature in config.files) {
        if (config.files.hasOwnProperty(feature)) {
            features.push(gulp.src(config.files[feature])
                .pipe(plumper({
                    errorHandler: onError
                }))
                .pipe(jshint(config.jshintOptions))
                .pipe(jshint.reporter('jshint-stylish'))
                .pipe(sourcemaps.init())
                .pipe(concat(feature + '.js'))
                .pipe(gulpif(argv.production, uglify()))
                .pipe(sourcemaps.write('.'))
                .pipe(gulp.dest(dest))
            );
        }
    }
    return mergeStream(features);
});
My problem is that this doesn't seem to work. The streams are not combined, or at least nothing really happens. A while ago others ran into a similar problem (see here), but even though it should have been fixed, it's not working for me.
By the way I've also tested merging the streams in this way:
return es.merge(features)
return es.merge.apply(null, features)
And if I just run the task on a single set of files it works fine.
Motivation
The reason I want to do this is that at some point, concatenating and minifying ALL scripts into one final file doesn't make sense when the sheer number of files is too large. Also, sometimes there is no need to load everything at once. For example, the scripts related to an admin interface don't need to be loaded by every visitor.

Grunt: Watch file changes and compile parent directory

I'm working on a project using grunt. I haven't worked with grunt before, and currently it is set up to watch files and, when a file has changed, recompile all the files (multiple subdirectories containing hundreds of files) using handlebars into html, which is quite slow. I want to improve this to a faster process by only compiling what is needed.
Watching the files with grunt newer doesn't really work because there are dependencies within the directory and thus only recompiling the changed files will not result in a valid page.
I would basically need to recompile the whole parent directory of the file that has changed, but I'm not quite sure on how I would configure something like that.
Any hints where I should look at?
The assemble task itself is configured like this:
var _ = require('lodash');
var path = require('path');

// expand the data files and loop over each filepath
var pages = _.flatten(_.map(grunt.file.expand('./src/**/*.json'), function(filepath) {
    // read in the data file
    var data = grunt.file.readJSON(filepath);
    var dest = path.dirname(filepath) + '/' + path.basename(filepath, path.extname(filepath));
    dest = dest.replace("src/", "");
    var hbs;
    if (data.hbs) {
        hbs = grunt.file.read(path.dirname(filepath) + '/' + data.hbs);
    }
    // create a 'page' object to add to the 'pages' collection
    return {
        // the filename will determine how the page is named later
        filename: dest,
        // the data from the json file
        data: data,
        // add the recipe template as the page content
        content: hbs
    };
}));

return {
    options: {
        /*postprocess: require('pretty'),*/
        marked: { sanitize: false },
        data: '<%= options.src %>/**/*.json',
        helpers: '<%= options.src %>/helpers/helper-*.js',
        layoutdir: '<%= options.src %>/templates',
        partials: ['<%= options.src %>/components/**/*.hbs']
    },
    build: {
        options: {
            layout: 'base.hbs',
            assets: '<%= options.build %>',
            pages: pages
        },
        files: [
            {
                cwd: '<%= options.src %>',
                dest: '<%= options.build %>',
                src: '!*'
            }
        ]
    },
}
So every time this loads, all the pages (e.g. /src/sites/abc/xyz/foo.json) get scanned and compiled, but I only want the changed files. Watch does detect changed files, but everything gets compiled again, and I'm not sure how to feed the changed files that watch has recognized into the config so that only part of the files is processed.
I think what you need is already there in watch.
Check the Using the watch event section in the grunt-contrib-watch docs.
Copying the content down here to satisfy the SO MODS/GODS.
This task will emit a watch event when watched files are modified. This is useful if you would like a simple notification when files are edited or if you're using this task in tandem with another task. Here is a simple example using the watch event:
grunt.initConfig({
    watch: {
        scripts: {
            files: ['lib/*.js'],
        },
    },
});

grunt.event.on('watch', function(action, filepath, target) {
    grunt.log.writeln(target + ': ' + filepath + ' has ' + action);
});
The watch event is not intended for replacing the standard Grunt API for configuring and running tasks. If you're trying to run tasks from within the watch event you're more than likely doing it wrong. Please read configuring tasks.
Compiling Files As Needed
A very common request is to only compile files as needed. Here is an example that will only lint changed files with the jshint task:
grunt.initConfig({
    watch: {
        scripts: {
            files: ['lib/*.js'],
            tasks: ['jshint'],
            options: {
                spawn: false,
            },
        },
    },
    jshint: {
        all: {
            src: ['lib/*.js'],
        },
    },
});

// on watch events configure jshint:all to only run on changed file
grunt.event.on('watch', function(action, filepath) {
    grunt.config('jshint.all.src', filepath);
});
If you need to dynamically modify your config, the spawn option must be disabled to keep the watch running under the same context.
If you save multiple files simultaneously you may opt for a more robust method:
var changedFiles = Object.create(null);
var onChange = grunt.util._.debounce(function() {
    grunt.config('jshint.all.src', Object.keys(changedFiles));
    changedFiles = Object.create(null);
}, 200);
grunt.event.on('watch', function(action, filepath) {
    changedFiles[filepath] = action;
    onChange();
});
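Tying that back to the question (recompile the whole parent directory, not just the changed file): the same debounced handler can collect path.dirname of each changed file and scope the compile task to those directories. A sketch under that assumption; the assemble.build.src key and the catch-all glob are illustrative, not taken from the question's actual config:
var path = require('path');

var changedDirs = Object.create(null);
var onChange = grunt.util._.debounce(function() {
    // build one glob per directory that contained a changed file
    var globs = Object.keys(changedDirs).map(function(dir) {
        return dir + '/**'; // recompile everything under the parent directory
    });
    grunt.config('assemble.build.src', globs); // illustrative config key
    changedDirs = Object.create(null);
}, 200);

grunt.event.on('watch', function(action, filepath) {
    changedDirs[path.dirname(filepath)] = action;
    onChange();
});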

Can Blanket.js work with Jasmine tests if the tests themselves are loaded with RequireJS?

We've been using Jasmine and RequireJS successfully together for unit testing, and are now looking to add code coverage, and I've been investigating Blanket.js for that purpose. I know that it nominally supports Jasmine and RequireJS, and I'm able to successfully use the "jasmine-requirejs" runner on GitHub, but this runner is using a slightly different approach than our model -- namely, it loads the test specs using a script tag in runner.html, whereas our approach has been to load the specs through RequireJS, like the following (which is the callback for a requirejs call in our runner):
var jasmineEnv = jasmine.getEnv();
jasmineEnv.updateInterval = 1000;

var htmlReporter = new jasmine.TrivialReporter();
var jUnitReporter = new jasmine.JUnitXmlReporter('../JasmineTests/');
jasmineEnv.addReporter(htmlReporter);
jasmineEnv.addReporter(jUnitReporter);
jasmineEnv.specFilter = function (spec) {
    return htmlReporter.specFilter(spec);
};

var specs = [];
specs.push('spec/models/MyModel');
specs.push('spec/views/MyModelView');

$(function () {
    require(specs, function () {
        jasmineEnv.execute();
    });
});
This approach works fine for simply doing unit testing, if I don't have blanket or jasmine-blanket as dependencies for the function above. If I add them (with require.config paths and shim), I can verify that they're successfully fetched, but all that appears to happen is that I get jasmine-blanket's overload of jasmine.getEnv().execute, which simply prints "waiting for blanket..." to the console. Nothing is triggering the tests themselves to be run anymore.
I do know that in our approach there's no way to provide the usual data-cover attributes, since RequireJS is doing the script loading rather than script tags, but I would have expected in this case that Blanket would at least calculate coverage for everything, not nothing. Is there a non-attribute-based way to specify the coverage pattern, and is there something else I need to do to trigger the actual test execution once jasmine-blanket is in the mix? Can Blanket be made to work with RequireJS loading the test specs?
I have gotten this working by requiring blanket-jasmine and then setting the options:
require.config({
    paths: {
        'jasmine': '...',
        'jasmine-html': '...',
        'blanket-jasmine': '...',
    },
    shim: {
        'jasmine': {
            exports: 'jasmine'
        },
        'jasmine-html': {
            exports: 'jasmine',
            deps: ['jasmine']
        },
        'blanket-jasmine': {
            exports: 'blanket',
            deps: ['jasmine']
        }
    }
});

require([
    'blanket-jasmine',
    'jasmine-html',
], function (blanket, jasmine) {
    blanket.options('filter', '...'); // data-cover-only
    blanket.options('branchTracking', true); // one of the data-cover-flags
    require(['myspec'], function() {
        var jasmineEnv = jasmine.getEnv();
        jasmineEnv.updateInterval = 250;
        var htmlReporter = new jasmine.HtmlReporter();
        jasmineEnv.addReporter(htmlReporter);
        jasmineEnv.specFilter = function (spec) {
            return htmlReporter.specFilter(spec);
        };
        jasmineEnv.addReporter(new jasmine.BlanketReporter());
        jasmineEnv.currentRunner().execute();
    });
});
The key lines are the addition of the BlanketReporter and the currentRunner().execute() call. The Blanket jasmine adapter overrides jasmine.execute with a no-op that just logs a line, because it needs to halt execution until the code has been instrumented and it is ready to begin.
Typically, adding the BlanketReporter and executing the current runner would be done by the blanket jasmine adapter itself, but if you load blanket-jasmine via require, the event that starts the blanket test runner never fires: it subscribes to the window.load event, which has already fired by the time blanket-jasmine is loaded. Therefore we need to add the reporter and execute the current runner ourselves, as the adapter would usually do.
This should probably be raised as a bug, but for now this workaround works well.

How to modify grunt watch tasks based on the file changed?

I'm writing a node.js program that will watch a directory filled with a large number (300-ish) of scss projects. Grunt-watch (run either through the node module or on its own, whatever works) will be configured so that whenever a scss file is changed, it will be compiled with compass and the output file moved to a separate directory, for example:
./1234/style.scss was changed >> grunt-watch runs grunt-compass >> /foo/bar/baz/1234/style.css updated
The project directory the file was in is obviously very important (if grunt-compass sent all the compiled files to the same directory, they would be jumbled and unusable, and the grunt automation would be purposeless). In order to make sure all files are routed to the correct place, I am dynamically changing the grunt-compass settings every time a css file is updated.
Sample gruntfile:
module.exports = function(grunt) {
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        watch: {
            files: './*/*.scss',
            tasks: ['compass']
        },
        compass: {
            origin: {
                options: {
                    // temporary settings to be changed later
                    sassDir: './',
                    cssDir: './bar',
                    specify: './foo.scss'
                }
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-watch');
    grunt.loadNpmTasks('grunt-contrib-compass');

    grunt.event.on('watch', function(action, filepath, target) {
        var path = require('path');
        grunt.log.writeln(target + ': ' + filepath + ' might have ' + action);
        var siteDirectory = path.dirname(filepath);

        // changes sass directory to that of the changed file
        var option = 'compass.origin.options.sassDir';
        var result = __dirname + '/' + siteDirectory;
        grunt.log.writeln(option + ' changed to ' + result);
        grunt.config(option, result);

        // customizes css output directory so that file goes to correct place
        option = 'compass.origin.options.cssDir';
        result = path.resolve(__dirname, '../', siteDirectory);
        grunt.log.writeln(option + ' changed to ' + result);
        grunt.config(option, result);

        // grunt.task.run(['compass']);
    });
};
However this doesn't work. If you run 'grunt watch' in verbose mode, you will see that grunt runs both the grunt.event.on function and the watch task in separate processes. The second parsing of the gruntfile reverts all my event.on config changes to the defaults above, and compass fails to run.
As seen in the event.on comments, I attempted to add a grunt.task.run() to make sure that compass was run in the same process as the event.on function, which would preserve my config changes. However the task refused to run, likely because I'm doing it wrong.
Unfortunately, the grunt.event.on variables are not sent to the defined grunt-watch task, otherwise I could write a custom function that would change the compass settings and then run compass in the same process.
I've tried implementing this without grunt, using the watch function built into compass, however compass can only store one static output path per project and can only watch one project at once.
I have currently gotten around this issue by adding a node program that takes the site name as a parameter, rewrites gruntfile.js using fs, and then runs 'grunt watch' via an exec function. This however has its own drawbacks (I can't view the grunt.log data) and is horribly convoluted, so I'd like to change it.
Thank you so much for any insight.
You need to specify
options: { nospawn: true }
in your watch task config to have the watch run in the same context:
watch: {
    files: './*/*.scss',
    tasks: ['compass'],
    options: { nospawn: true }
}
See this section of documentation for more info on this.
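As a side note, later versions of grunt-contrib-watch renamed this option: spawn: false is the current equivalent, with nospawn: true kept for backwards compatibility, so the same config can be written as:
watch: {
    files: './*/*.scss',
    tasks: ['compass'],
    options: { spawn: false } // same effect as nospawn: true
}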
