I feel like I'm missing something.
Here is what I want to achieve:
Having a grunt task that executes my server.js and runs the watch task in parallel. It feels to me that this is precisely one of the things grunt was designed for, but I can't get this configuration to work.
Among others, I have read this:
Running Node app through Grunt
but I still can't make it work.
Here is my Gruntfile.js:
module.exports = function(grunt) {
// Project configuration.
grunt.initConfig({
watch: {
scripts: {
files: ['*.js'],
tasks: ['start'],
options: {
nospawn: true
}
}
}
});
grunt.loadNpmTasks('grunt-contrib-watch');
grunt.registerTask('start', function() {
grunt.util.spawn({
cmd: 'node',
args: ['server.js']
});
grunt.task.run('watch');
});
grunt.registerTask('default', 'start');
};
I have "grunt-contrib-watch": "~0.3.1" which should be higher version than grunt-contrib-watch#0.3.0 as in the previously mentioned post.
If you could help me achieve the proper configuration, I would be extremely grateful. But more in general, I don't understand why there is no official grunt-contrib-nodemon-like package and task since I have the feeling it would be another great reason to use grunt (which I really like as a tool !)
Thanks
Edit: grunt-nodemon
Since writing this, a nice person has developed that.
I was having a lot of trouble using grunt.util.spawn to fire off new processes. They would run, but they wouldn't give me any output back. Perhaps you can figure out what I could not from these docs: http://gruntjs.com/api/grunt.util#grunt.util.spawn
Two problems I see with what you have:
I think grunt.registerTask() has to take three arguments when you use a callback function to run your task.
I don't think you can just call node server.js over and over again every time a file changes. It will work the first time, but for it to really work you'd have to manage the server as a child process, killing and restarting it on subsequent file changes.
For the registerTask arguments try this, just to see if you can get something to work in your current implementation.
http://gruntjs.com/api/grunt.task#grunt.task.registertask
It takes (taskName, description, taskFunction) like so:
grunt.registerTask('start', 'My start task description', function() {
grunt.util.spawn({
cmd: 'node',
args: ['server.js']
});
grunt.task.run('watch');
});
That might at least get your watch to run node server.js the first time a file changes.
Here's what I would do instead.
Either just use nodemon as is: $ nodemon server.js
or...
Read the source and use grunt-develop
Its author manages the server as a child process, which might be what you're looking for.
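Roughly, the idea behind that approach looks like this (a simplified sketch of the pattern, not grunt-develop's actual code), reusing the watch config from your question so that nospawn: true keeps everything in one process and the server handle survives between runs:
var spawn = require('child_process').spawn;
var server = null; // handle to the running server.js child process

grunt.registerTask('start', 'Start or restart server.js as a child process', function() {
  if (server) {
    server.kill(); // stop the previous instance before restarting
  }
  server = spawn('node', ['server.js'], { stdio: 'inherit' });
});

// with watch configured as tasks: ['start'] and nospawn: true,
// every file change re-runs 'start' and restarts the server
grunt.registerTask('default', ['start', 'watch']);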
or...
Get grunt-shell
npm install grunt-shell --save-dev
And use it to run nodemon for you:
module.exports = function(grunt) {
// Project configuration.
grunt.initConfig({
serverFile: 'server.js',
shell: {
nodemon: {
command: 'nodemon <%= serverFile %>',
options: {
stdout: true,
stderr: true
}
}
},
watch: { /* nothing to do in watch anymore */ }
});
grunt.loadNpmTasks('grunt-contrib-watch');
grunt.loadNpmTasks('grunt-shell');
grunt.registerTask('default', 'shell:nodemon');
};
$ grunt shell:nodemon
I sincerely hope that helps. Good luck!
Hi, I also came across this problem and here is my solution (based on nackjicholson's answer). It uses grunt-nodemon in a spawned process, so I can:
Reload nodejs
Watch for changes to e.g. .less files
Get output of both tasks
grunt.loadNpmTasks('grunt-nodemon');
grunt.initConfig({
nodemon: {
dev: {
options: {
file: 'server.js',
nodeArgs: ['--debug'],
env: {
PORT: '8282'
}
}
}
},
});
grunt.registerTask('server', function (target) {
// Running nodejs in a different process and displaying output on the main console
var nodemon = grunt.util.spawn({
cmd: 'grunt',
grunt: true,
args: ['nodemon'] // args should be an array of arguments
});
nodemon.stdout.pipe(process.stdout);
nodemon.stderr.pipe(process.stderr);
// here you can run other tasks e.g.
// grunt.task.run([ 'watch' ]);
});
Use grunt-concurrent
The issue is that tasks like watch and nodemon never terminate, so grunt will never reach the tasks that come after them. You need to spawn a new process.
You can do this easily using grunt-concurrent:
https://github.com/sindresorhus/grunt-concurrent
For example:
module.exports = function(grunt) {
grunt.initConfig({
...
concurrent: {
dev: {
tasks: ['nodemon', 'watch'],
options: {
logConcurrentOutput: true
}
}
}
});
};
The two will now run happily side by side.
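For completeness, the rest of the Gruntfile would look roughly like this (a sketch assuming grunt-nodemon and grunt-contrib-watch supply the nodemon and watch tasks used above):
grunt.loadNpmTasks('grunt-concurrent');
grunt.loadNpmTasks('grunt-nodemon');
grunt.loadNpmTasks('grunt-contrib-watch');

grunt.registerTask('default', ['concurrent:dev']);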
I would like to set the NODE_ENV variable to development or production at the beginning of a Grunt task, but it looks like it's not as simple as I thought.
The reason why I would like this is that I use grunt-webpack, which expects NODE_ENV to be set to "development" or "production". But I would also like to initialize my tasks exclusively from grunt, if possible.
I created the following test Gruntfile, using the grunt-shell and cross-env modules:
function log(err, stdout, stderr, cb, e) {
if (err) {
cb(err);
return;
}
console.log(process.env.NODE_ENV);
console.log(stdout);
cb();
}
module.exports = function(grunt) {
grunt.initConfig({
shell: {
dev: {
command : 'cross-env NODE_ENV="development"',
options: {
callback: log
}
},
dist: {
command : 'cross-env NODE_ENV="production"',
options: {
callback: log
}
}
}
});
grunt.loadNpmTasks('grunt-shell');
};
The console.log(process.env.NODE_ENV) call in log() should echo the actual value of process.env.NODE_ENV, but it constantly says undefined, even if I check it manually in the node console.
If I set it manually from the terminal, like set NODE_ENV=production (set is the Windows syntax), the value production is echoed everywhere, as I would like it to be.
Your test won't work because grunt-shell runs a child process, and your callback runs after that process ends, back in the main process.
The same thing happens with cross-env.
If you want to pass an environment variable to grunt-shell, you should use the options configuration according to the documentation.
For example:
grunt.initConfig({
shell: {
dev: {
command : 'echo %NODE_ENV%', //windows syntax
options: {
execOptions: {
env: {
'NODE_ENV': 'dev'
}
},
callback: log
}
}
}
});
This will still print undefined for process.env.NODE_ENV, but the value of NODE_ENV will be available in the stdout because of the echo.
On a side note, it sounds like you're trying to run a process (grunt-shell), which runs a process (cross-env), which runs a process (webpack or grunt-webpack).
Why not just use the cross-env example usage? It looks pretty close to what you need.
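For reference, cross-env's documented usage boils down to a package.json script along these lines (the webpack config path here is just a placeholder):
"scripts": {
  "build": "cross-env NODE_ENV=production webpack --config build/webpack.config.js"
}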
Or you can just define the variable in the task config itself and lose all of these wrappers.
LifeQuery's answer helped me a lot to find out what the problem actually was. I first realized that webpack.DefinePlugin() doesn't actually change anything about process.env.NODE_ENV (and it would be too late anyway, as it transforms the code parsed by webpack after all loaders).
After this I created a solution, which does what I want. This is how my customized Gruntfile.js begins:
'use strict';
const path = require('path');
const webpack = require('webpack');
module.exports = function (grunt) {
// Setting the node environment based on the task's name or target
let set_NODE_ENV = function () {
const devTasks = ['webpack-dev-server', 'dev', 'hmr', 'watch'],
devTargets = [':dev'],
task = grunt.cli.tasks[0], // The name of the (first) task we initialized grunt with ('webpack-dev-server' if started with 'grunt webpack-dev-server')
target = ':'+grunt.option('target'),
devEnv = (devTasks.indexOf(task) > -1 || devTargets.indexOf(target) > -1);
process.env.NODE_ENV = devEnv ? 'development' : 'production';
}();
const webpackConfig = require('../assets/webpack.config');
grunt.initConfig({
// ...usual Gruntfile content
});
};
I created a whitelist of grunt task names and targets that determines how process.env.NODE_ENV is set. As it is placed before grunt.initConfig(), the configuration object can use process.env.NODE_ENV with the desired value.
It sets NODE_ENV to "development" when grunt is started with the webpack-dev-server, dev, hmr, or watch tasks, or with any other task given the :dev target.
Most questions and answers on this site do not contain an easy-to-follow general approach to using these two libraries together.
So, given that we use the gulp-connect npm package and we want to make use of the gulp-watch npm package, how do we set things up so that we can:
watch changes in some files
perform some operation, like building / compiling those files
live-reload the server once the building is done
First, you will define your build task. It can have prerequisite tasks or be a task of any sort; it doesn't matter.
gulp.task('build', ['your', 'tasks', 'here']);
Then, you will need to activate the connect server. It is important that you are serving the result of the compilation (in this example, the dist directory) and you're enabling livereload with the livereload: true parameter.
const connect = require('gulp-connect');
gulp.task('server', function() {
return connect.server({
root: 'dist',
livereload: true
});
});
Finally, you will set up your watch logic. Note that we're using watch and not gulp.watch. If you decide to change it, notice that their APIs are different and they have different capabilities. This example uses gulp-watch.
const watch = require('gulp-watch');
gulp.task('watch-and-reload', ['build'], function() {
watch(['src/**'], function() {
gulp.start('build');
}).pipe(connect.reload());
});
gulp.task('watch', ['build', 'watch-and-reload', 'server']);
The watch-and-reload task depends on the build task, which ensures that at least one build runs.
Then, it will watch for your source files, and in the callback, it will start the build task. This callback gets executed every time that a file is changed in the directory. You could pass an options object to the watch method to be more specific. Check the usage API in their repository.
Also, you will need to start the build action, for which we're using gulp.start. This is not the recommended approach, and will be deprecated eventually, but so far it works. Most questions with these issues in StackOverflow will look for an alternative workaround that changes the approach. (See related questions.)
Notice that gulp.start is called synchronously. This is what you want, since you want to allow the build task to finish before you proceed with the event stream.
And finally, you can use the event stream to reload the page. The event stream will correctly capture what files changed and will reload those.
Bringing this up to speed, as per the current stable gulp release:
The gulp.task API isn't the recommended pattern anymore. Use the exports object to create public tasks.
From the official documentation: https://gulpjs.com/docs/en/api/task#task
To configure watch and livereload you need the following:
gulp.watch
gulp-connect
The watch function is available in the gulp module itself.
Install gulp-connect using npm install --save-dev gulp-connect.
To configure the gulp-connect server for livereload, we need to set the livereload property to true.
Run all tasks, followed by a task that calls the watch function with globs and a task. Any change to files matching the globs triggers the task passed to watch().
The task passed to watch() should signal async completion, otherwise it will not be run a second time. In simple terms: call the callback, or return a stream or a promise.
Once watch() is configured, append .pipe(connect.reload()) after .pipe(dest(..)) wherever the files written by dest should trigger a reload.
Here is a simple working gulpfile.js with connect livereload:
const {src, dest, watch, series, parallel } = require("gulp");
const htmlmin = require("gulp-htmlmin");
const gulpif = require("gulp-if");
const rename = require('gulp-rename');
const uglify = require("gulp-uglify"); // assuming gulp-uglify for the uglify() call in the js() task below
const connect = require("gulp-connect");
//environment variable NODE_ENV --> set NODE_ENV=production to minify html and perform anything else related to prod
const mode = process.env.NODE_ENV || 'dev';
var outDir = (mode != 'dev') ? 'dist/prod': 'dist/';
const htmlSources = ['src/*.html'];
function html() {
return src(htmlSources)
.pipe(gulpif(
mode.toLowerCase() != 'dev',
htmlmin({
removeComments: true,
collapseWhitespace: true,
minifyCSS: true,
minifyJS: true
})
)
)
.pipe(dest(outDir))
.pipe(connect.reload());
}
function js(){
return src('src/*.js')
.pipe(uglify())
.pipe(rename({ extname: '.min.js' }))
.pipe(dest(outDir))
.pipe(connect.reload());
}
function server() {
return connect.server({
port: 8000,
root: outDir,
livereload: true
})
}
function watchReload() {
let tasks = series(html, js);
watch(["src/**"], tasks);
}
exports.html = html;
exports.js = js;
exports.dev = parallel(html, js, server, watchReload);
Configure the connect server with the livereload property:
function server() {
return connect.server({
port: 8000,
root: outDir,
livereload: true //essential for live reload
})
}
Notice .pipe(connect.reload()) in the above code. It is essential that the stream of the relevant files is piped to connect.reload(); it may not work if you call connect.reload() arbitrarily.
function html() {
return src(htmlSources)
.pipe(gulpif(
mode.toLowerCase() != 'dev',
htmlmin({
removeComments: true,
collapseWhitespace: true,
minifyCSS: true,
minifyJS: true
})
)
)
.pipe(dest(outDir))
.pipe(connect.reload()); //Keep it if you want livereload else discard
}
Since we configured the public task dev, the following command will execute all tasks, the connect server, and watchReload:
gulp dev
I am a grunt and node noob but I managed to write a node script that does what I want it to and works from the command line. I don't want to publish the script as a node module but I would like to run it from my grunt file.
What changes (if any) do I need to make to the script for this to work?
The more I read about configuring grunt files and custom tasks the more confused I get. I currently have something that looks like this:
module.exports = function(grunt) {
grunt.initConfig({
'mytaskname': 'what goes here?'
});
grunt.loadNpmTasks('./node_modules/script_name');
grunt.registerTask('run-from-command-line', 'description', function() {
grunt.log.writeln('Not running...');
});
}
Any help would be greatly appreciated.
You could use the grunt-execute plugin for this, which executes files in a node.js child process.
Example:
If your node script is in "node-scripts/script.js", Gruntfile.js would look something like this:
module.exports = function(grunt) {
grunt.initConfig({
execute: {
target: {
src: ["node-scripts/script.js"]
}
}
});
// Load the plugins
grunt.loadNpmTasks("grunt-execute");
grunt.registerTask("default", ["execute"]);
};
I need to run some code after nodeunit successfully passed all tests.
I'm testing some Firebase wrappers, and the Firebase reference blocks nodeunit from exiting after all tests are run.
I am looking for some hook or callback to run after all unit tests have passed, so I can terminate the Firebase process and allow nodeunit to exit.
I haven't found a right way to do it.
Here is my temporary solution:
//Put a *LAST* test to clear all if needed:
exports.last_test = function(test){
//do_clear_all_things_if_needed();
setTimeout(process.exit, 500); // exit in 500 milli-seconds
test.done();
}
In my case, this is used to make sure the DB connection or some network connection gets killed anyway. The reason it works is that nodeunit runs tests in series.
It's not the best way, not even a good one, but it lets the tests exit.
For nodeunit 0.9.0
For a recent project, we counted the tests by iterating exports, then used a tearDown function to count the completions. After the last test exits, we call process.exit().
See the spec for full details. Note that this went at the end of the file (after all the tests were added onto exports)
(function(exports) {
// firebase is holding open a socket connection
// this just ends the process to terminate it
var total = 0, expectCount = countTests(exports);
exports.tearDown = function(done) {
if( ++total === expectCount ) {
setTimeout(function() {
process.exit();
}, 500);
}
done();
};
function countTests(exports) {
var count = 0;
for(var key in exports) {
if( key.match(/^test/) ) {
count++;
}
}
return count;
}
})(exports);
As per the nodeunit docs, I can't seem to find a way to provide a callback after all tests have run.
I suggest that you use Grunt so you can create a test workflow with tasks, for example:
Install the command line tool: npm install -g grunt-cli
Install grunt to your project npm install grunt --save-dev
Install the nodeunit grunt plugin: npm install grunt-contrib-nodeunit --save-dev
Create a Gruntfile.js like the following:
module.exports = function(grunt) {
grunt.initConfig({
nodeunit : {
all : ['tests/*.js'] //point to where your tests are
}
});
grunt.loadNpmTasks('grunt-contrib-nodeunit');
grunt.registerTask('test', [
'nodeunit'
]);
};
Create your custom task that will be run after the tests by changing your grunt file to the following:
module.exports = function(grunt) {
  const fs = require('fs'); // needed by the generate-build-json task below
  const os = require('os');

  grunt.initConfig({
nodeunit : {
all : ['tests/*.js'] //point to where your tests are
}
});
grunt.loadNpmTasks('grunt-contrib-nodeunit');
//this is just an example you can do whatever you want
grunt.registerTask('generate-build-json', 'Generates a build.json file containing date and time info of the build', function() {
fs.writeFileSync('build.json', JSON.stringify({
platform: os.platform(),
arch: os.arch(),
timestamp: new Date().toISOString()
}, null, 4));
grunt.log.writeln('File build.json created.');
});
grunt.registerTask('test', [
'nodeunit',
'generate-build-json'
]);
};
Run your test tasks with grunt test
I came across another way to deal with this. I have to say that all the answers here are correct. However, when inspecting grunt, I found out that Grunt runs nodeunit tests via a reporter, and the reporter offers a callback option for when all tests are finished. It can be done something like this:
In a folder test_scripts/, create some_test.js, which can contain something like this:
//loads default reporter, but any other can be used
var reporter = require('nodeunit').reporters.default;
// safer exit, but process.exit(0) will do the same in most cases
var exit = require('exit');
reporter.run(['test/basic.js'], null, function(){
console.log(' now the tests are finished');
exit(0);
});
The script can be added to, say, the scripts object in package.json:
"scripts": {
"nodeunit": "node scripts/some_test.js",
},
Now it can be run as:
npm run nodeunit
The tests in some_test.js can be chained, or they can be run one by one using npm.
I cannot understand how grunt matches tasks with Gruntfile.js:
module.exports = function (grunt) {
grunt.initConfig({
concat: {
dist: {
src: ['src/*.js'],
dest: 'dest/all.js'
}
}
});
grunt.loadNpmTasks('grunt-contrib-concat');
grunt.registerTask('default', ['concat']);
};
It's a valid config. But I don't know how grunt matches 'concat' to 'grunt-contrib-concat'.
Does grunt trim the 'grunt-contrib-' prefix to match 'concat' to 'grunt-contrib-concat'?
First, let's look inside the grunt-contrib-concat source code:
grunt.registerMultiTask('concat', 'Concatenate files.', function() {
Looking at grunt's creating-tasks docs, the first argument passed into a task registration function is the name of the task:
grunt.registerMultiTask(taskName, [description, ] taskFunction)
grunt.registerTask(taskName, [description, ] taskFunction)
Conclusion
There are no "magic" names or "grunt keywords".
There is no difference between your custom tasks and task plugins (even grunt-contrib ones).
The API for creating tasks is as simple as that.
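To make that concrete, here is a minimal sketch with a made-up mylog task: the name passed to registerMultiTask is exactly the key grunt looks up in initConfig, whether it comes from a plugin or from your own Gruntfile.
module.exports = function(grunt) {
  grunt.initConfig({
    // the config key matches the task name, just like 'concat' above
    mylog: {
      dist: { msg: 'hello from the dist target' }
    }
  });

  // a custom multi-task, registered the same way grunt-contrib-concat registers 'concat'
  grunt.registerMultiTask('mylog', 'Log a message per target.', function() {
    grunt.log.writeln(this.target + ': ' + this.data.msg);
  });

  grunt.registerTask('default', ['mylog']);
};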
contrib-less, contrib-jade, contrib-concat... the contrib prefix just signifies that these plugins are contributed by Grunt community developers, while less, jade, and concat indicate the modules you want to use in your project, as mentioned in your gruntfile.js.
So when you say:
grunt.loadNpmTasks('grunt-contrib-concat')
It loads the mentioned module.
But in order for it to run when you fire grunt, you actually have to register it.
grunt.registerTask('default', ['concat','jade','less']);
grunt.registerTask('test', ['concat','jade','less']);
grunt.registerTask('dist', ['concat','jade','less','uglify']);
So, as you can see, in production we may also want to uglify, so we can register that combination under the 'dist' task.