I am trying to set up a serve task that will do the following:
Call a watch task to watch for any source changes (the watch task calls a build task that builds the app into a "build" folder)
Start the app using Nodemon (gulp-nodemon)
If a source change happens, rebuild the app and restart Nodemon
So far I've written the following tasks:
gulp.task('build', gulp.series(['lint'], () => {
    del.sync(['./build/**/*.*']);
    const tsCompile = gulp.src('./src/**/*.ts')
        .pipe(gulpSourcemaps.init())
        .pipe(project());
    return tsCompile.js
        .pipe(gulpSourcemaps.write({
            sourceRoot: file => path.relative(path.join(file.cwd, file.path), file.base)
        }))
        .pipe(gulp.dest('./build/'));
}));

gulp.task('watch', gulp.series(['build'], () => {
    gulp.watch('./src/**/*.ts', gulp.series(['build']));
}));

gulp.task('serve', gulp.series(['watch'], () => {
    return gulpNodemon({
        script: './build/index.js',
        watch: './build/'
    });
}));
The current behavior of the tasks is:
I start the application by typing gulp serve
The watch task is called and it works as expected
Nodemon does not start and the process is stuck at the watch task. If I change any of the source code, the watch task runs again.
Basically, Nodemon never starts and only the watch task is working.
I can't figure out why this happens and want to ask if anyone knows what the problem could be.
You are running your watch task in series.
However, the watch task never ends, so nodemon never starts.
Try using gulp.parallel() instead of gulp.series():
gulp.task('serve', gulp.parallel('watch', () => {
    return gulpNodemon({
        script: './build/index.js',
        watch: './build/'
    });
}));
Hopefully this should solve your problem.
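As a side note (not required for the fix): gulp-nodemon returns an event emitter, so if you want console feedback when the rebuilt app restarts, a sketch like the following should work. The event names are from the gulp-nodemon readme; double-check them against your installed version.
gulp.task('serve', gulp.parallel('watch', () => {
    const stream = gulpNodemon({
        script: './build/index.js',
        watch: './build/'
    });
    // Log nodemon lifecycle events so rebuild/restart cycles are visible
    stream.on('restart', () => console.log('App restarted after rebuild'));
    stream.on('crash', () => console.log('App crashed, waiting for changes...'));
    return stream;
}));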
Most questions and answers on this site do not contain an easy-to-follow, general approach to using these two libraries together.
So, given that we use the gulp-connect npm package and we want to make use of the gulp-watch npm package, how do we set it up so that we can:
watch changes in some files
perform some operation, like building / compiling those files
live-reload the server once the building is done
First, you will define your build task. It can have prerequisite tasks and be any sort of task; the details don't matter here.
gulp.task('build', ['your', 'tasks', 'here']);
Then you will need to activate the connect server. It is important that you serve the result of the compilation (in this example, the dist directory) and that you enable live reloading with the livereload: true parameter.
const connect = require('gulp-connect');

gulp.task('server', function() {
    return connect.server({
        root: 'dist',
        livereload: true
    });
});
Finally, you will set up your watch logic. Note that we're using watch and not gulp.watch. If you decide to change it, be aware that their APIs are different and they have different capabilities. This example uses gulp-watch.
const watch = require('gulp-watch');

gulp.task('watch-and-reload', ['build'], function() {
    watch(['src/**'], function() {
        gulp.start('build');
    }).pipe(connect.reload());
});

gulp.task('watch', ['build', 'watch-and-reload', 'server']);
The watch-and-reload task depends on the build task, which ensures that at least one build runs.
Then it watches your source files, and in the callback it starts the build task. This callback is executed every time a file in the directory changes. You can pass an options object to the watch method to be more specific; check the usage API in their repository.
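For instance, here is a hedged sketch of passing such an options object. These particular option names come from gulp-watch/chokidar; double-check them against the version you have installed.
watch(['src/**'], {
    ignoreInitial: true, // don't fire for files that already exist at startup
    read: false          // we only need change events, not file contents
}, function() {
    gulp.start('build');
});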
Also, you will need to start the build action, for which we're using gulp.start. This is not the recommended approach and will eventually be deprecated, but so far it works. Most questions about these issues on StackOverflow look for an alternative workaround that changes the approach. (See related questions.)
Notice that gulp.start is called synchronously. This is what you want, since you want to allow the build task to finish before you proceed with the event stream.
And finally, you can use the event stream to reload the page. The event stream will correctly capture what files changed and will reload those.
Bringing this up to date, as of the current stable Gulp release:
The gulp.task API is no longer the recommended pattern; use the exports object to create public tasks.
From official documentation: https://gulpjs.com/docs/en/api/task#task
To configure watch and livereload you need the following:
gulp.watch
gulp-connect
The watch function is available in the gulp module itself.
Install gulp-connect using npm install --save-dev gulp-connect.
To configure the gulp-connect server for livereload, set the livereload property to true.
Run all tasks, followed by a task that calls the watch function with globs and a task. Any change to a file matching the globs triggers the task passed to watch().
The task passed to watch() should signal async completion, or the task will not run a second time. Simply put: call the callback, or return a stream or promise.
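For example, each of these hypothetical task shapes signals completion correctly (names are illustrative):
// Assuming: const { src, dest } = require("gulp");

// 1. Call the callback
function buildDone(cb) {
    // ... do some work ...
    cb();
}

// 2. Return a stream
function copyHtml() {
    return src('src/*.html').pipe(dest('dist/'));
}

// 3. Return a promise
function cleanUp() {
    return Promise.resolve();
}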
Once watch() is configured, append .pipe(connect.reload()) after .pipe(dest(...)) wherever you want the files created by dest to trigger a reload.
Here is a simple working gulpfile.js with connect livereload:
const { src, dest, watch, series, parallel } = require("gulp");
const htmlmin = require("gulp-htmlmin");
const gulpif = require("gulp-if");
const rename = require("gulp-rename");
const uglify = require("gulp-uglify"); // needed by js() below
const connect = require("gulp-connect");

// Environment variable NODE_ENV --> set NODE_ENV=production to minify HTML
// and perform anything else related to production
const mode = process.env.NODE_ENV || 'dev';
var outDir = (mode != 'dev') ? 'dist/prod' : 'dist/';

const htmlSources = ['src/*.html'];

function html() {
    return src(htmlSources)
        .pipe(gulpif(
            mode.toLowerCase() != 'dev',
            htmlmin({
                removeComments: true,
                collapseWhitespace: true,
                minifyCSS: true,
                minifyJS: true
            })
        ))
        .pipe(dest(outDir))
        .pipe(connect.reload());
}

function js() {
    return src('src/*.js')
        .pipe(uglify())
        .pipe(rename({ extname: '.min.js' }))
        .pipe(dest(outDir))
        .pipe(connect.reload());
}

function server() {
    return connect.server({
        port: 8000,
        root: outDir,
        livereload: true
    });
}

function watchReload() {
    let tasks = series(html, js);
    watch(["src/**"], tasks);
}

exports.html = html;
exports.js = js;
exports.dev = parallel(html, js, server, watchReload);
Configure the connect server with the livereload property:
function server() {
    return connect.server({
        port: 8000,
        root: outDir,
        livereload: true // essential for live reload
    });
}
Notice .pipe(connect.reload()) in the code above. It is essential that the stream of the files in question is piped to connect.reload(); calling connect.reload() arbitrarily may not work.
function html() {
    return src(htmlSources)
        .pipe(gulpif(
            mode.toLowerCase() != 'dev',
            htmlmin({
                removeComments: true,
                collapseWhitespace: true,
                minifyCSS: true,
                minifyJS: true
            })
        ))
        .pipe(dest(outDir))
        .pipe(connect.reload()); // keep this if you want livereload, otherwise discard it
}
Since we configured the public task dev, the following command will execute all tasks, followed by connect and watchReload:
gulp dev
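To get the minified production build instead, set NODE_ENV before invoking the task, as hinted in the comment at the top of the gulpfile (first form for Windows cmd, second for bash):
set NODE_ENV=production && gulp dev
NODE_ENV=production gulp dev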
I've got two server scripts (both rely on socket.io and run on different ports).
I'd like to start both in parallel via gulp. But in addition I'd like the ability to stop one of them, and maybe even access the console output of each script.
Is there an existing solution for this? Or would you recommend using something other than gulp?
I found a solution in which I additionally start a MongoDB server:
var child_process = require('child_process');
var nodemon = require('gulp-nodemon');

var processes = { server1: null, server2: null, mongo: null };

gulp.task('start:server', function (cb) {
    // The magic happens here ...
    processes.server1 = nodemon({
        script: "server1.js",
        ext: "js"
    });
    // ... and here
    processes.server2 = nodemon({
        script: "server2.js",
        ext: "js"
    });
    cb(); // For parallel execution accept a callback.
    // For further info see the "Async task support" section here:
    // https://github.com/gulpjs/gulp/blob/master/docs/API.md
});

gulp.task('start:mongo', function (cb) {
    processes.mongo = child_process.exec('mongod', function (err, stdout, stderr) {});
    cb();
});

process.on('exit', function () {
    // In case the gulp process is closed (e.g. by pressing [CTRL+C]), stop all spawned processes
    processes.server1.kill();
    processes.server2.kill();
    processes.mongo.kill();
});

gulp.task('run', ['start:mongo', 'start:server']);
gulp.task('default', ['run']);
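To also get at the console output of each script (the last point in the question), you could swap nodemon for plain child_process.spawn and prefix each child's output yourself. A rough sketch (names are illustrative, and you lose nodemon's restart-on-change):
var spawn = require('child_process').spawn;

function startWithLogs(name, script) {
    var child = spawn('node', [script]);
    // Prefix each line so the two servers' logs stay distinguishable
    child.stdout.on('data', function (data) {
        console.log('[' + name + '] ' + data.toString().trim());
    });
    child.stderr.on('data', function (data) {
        console.error('[' + name + '] ' + data.toString().trim());
    });
    return child;
}

// e.g. processes.server1 = startWithLogs('server1', 'server1.js');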
nodemon/foreverjs is a good solution for uncomplicated cases, but it is not as scalable as pm2. So if you want a scalable and reliable solution, I'd recommend using pm2.
It is also worth mentioning that pm2 daemonizes after start, unlike foreverjs/nodemon. That can be a bug or a feature for you, depending on your needs.
pm2 start script1.js
pm2 start script2.js
pm2 status // show status of running processes
pm2 logs // tail -f logs from running processes
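And since the question asks about stopping one of the two scripts: pm2 manages each process individually, by name (which defaults to the script filename) or id:
pm2 stop script1     // stop only the first server
pm2 restart script1  // bring it back up
pm2 delete script1   // remove it from pm2's process list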
I have an app using Grunt that I launch in my terminal, and I want to run one of its tasks from another app.
So I'd like to know how I can include my Gruntfile.js in this other app and run the task.
For now this new app is really basic: just a simple local web page using NodeJS, with a button that launches the task.
Gruntfile (I want to run the "archive" task)
module.exports = function (grunt) {
    require('time-grunt')(grunt);
    require('jit-grunt')(grunt, {
        ngtemplates: "grunt-angular-templates"
    });

    var Generator = require("./generator.js")(grunt);
    var generator = new Generator();
    generator.printLogo();

    // Build
    grunt.registerTask("build", function (fileType) {
        // definition of build task
        grunt.task.run(tasks);
    });

    // Archive task.
    grunt.registerTask("archive", ["build", "compress", "clean:post-rsync"]);
};
Other file (I tried a require; it seems to work, but I can't run the "archive" task of the Gruntfile):
var grunt = require('grunt');
var gruntfile = require('./Gruntfile.js')(grunt);
var express = require('express');

var app = express();

app.get('/', function (req, res) {
    res.render('test.ejs');
});

app.post('/create', function (req, res) {
    // run grunt task "archive" here
    // gruntfile.grunt.registerTask("archive", ["build"]);
    res.redirect('/');
});

app.listen(8080);
Do you have any idea how I could run the task from my Gruntfile in this other file?
(The printLogo() function works, so I'm sure the Gruntfile is included.)
Thank you very much. (I'm a beginner with Grunt, so sorry if I missed something trivial.)
You can just run a command from node. This way you don't have to worry about dependencies and whatnot. You just spawn grunt, as you normally would, except programmatically.
var spawn = require('child_process').spawn;

// This will run the 'archive' task of grunt
spawn('grunt', ['archive'], {
    cwd: 'path/to/grunt/project'
});
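Wired into the express route from the question, that could look roughly like this (the cwd path is illustrative; on Windows you may additionally need shell: true so the grunt wrapper resolves):
var spawn = require('child_process').spawn;

app.post('/create', function (req, res) {
    var child = spawn('grunt', ['archive'], {
        cwd: 'path/to/grunt/project'
    });
    // Redirect once the task has finished
    child.on('close', function (code) {
        console.log('grunt archive exited with code ' + code);
        res.redirect('/');
    });
});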
Grunt is a command-line tool, so the cleanest approach here would be to refactor your Gruntfile and extract your task's logic into a library.
Then from your Gruntfile's task you can call that library, and from your /create route you can also call your library.
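A minimal sketch of that idea, with a hypothetical lib/archive.js (all names here are made up for illustration):
// lib/archive.js -- a plain function, no grunt required
module.exports = function archive(done) {
    // ... build, compress, clean ...
    done();
};

// Gruntfile.js
grunt.registerTask('archive', function () {
    require('./lib/archive')(this.async());
});

// In the express app
app.post('/create', function (req, res) {
    require('./lib/archive')(function () {
        res.redirect('/');
    });
});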
You can use the grunt-hub plugin:
grunt.initConfig({
    hub: {
        all: {
            src: ['../*/Gruntfile.js'],
            tasks: ['jshint', 'nodeunit'],
        },
    },
});
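Assuming the standard plugin setup, you then load it and run the hub target like any other task:
grunt.loadNpmTasks('grunt-hub');
// and from the command line:
// grunt hub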
So basically this is what I want to do: have a grunt script that compiles my coffee files to JS, then runs the node server, and then, either after the server closes or while it's still running, deletes the JS files that resulted from the compilation, keeping only the .coffee ones.
I'm having a couple of issues getting it to work. Most importantly, the way I'm currently doing it is this:
grunt.loadNpmTasks("grunt-contrib-coffee");

grunt.registerTask("node", "Starting node server", function () {
    var done = this.async();
    console.log("test");
    var sp = grunt.util.spawn({
        cmd: "node",
        args: ["index"]
    }, function (err, res, code) {
        console.log(err, res, code);
        done();
    });
});

grunt.registerTask("default", ["coffee", "node"]);
The problem here is that the node server isn't run in the same process as grunt. This matters because I can't just press CTRL-C once to terminate JUST the node server.
Ideally, I'd like to have it run in the same process and have the grunt script pause while it's waiting for me to CTRL-C the server. Then, after it's finished, I want grunt to remove the said files.
How can I achieve this?
Edit: Note that the snippet doesn't have the actual removal implemented since I can't get this to work.
If you keep the variable sp in a more global scope, you can define a task node:kill that simply checks whether sp === null (or similar) and, if not, calls sp.kill(). Then you can simply run the node:kill task after your testing task. You could additionally invoke a separate task that just deletes the generated JS files.
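A rough sketch of that approach, adapting the snippet from the question (untested):
var sp = null; // keep the child process handle in module scope

grunt.registerTask("node", "Starting node server", function () {
    var done = this.async();
    sp = grunt.util.spawn({
        cmd: "node",
        args: ["index"]
    }, function (err, res, code) {
        done();
    });
});

grunt.registerTask("node:kill", "Stopping node server", function () {
    if (sp !== null) {
        sp.kill();
        sp = null;
    }
});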
For something similar I used grunt-shell-spawn in conjunction with a shutdown listener.
In your grunt initConfig:
shell: {
    runSuperCoolJavaServer: {
        command: 'java -jar mysupercoolserver.jar',
        options: {
            async: true // spawn it instead!
        }
    }
},
Then outside of initConfig, you can set up a listener for when the user ctrl+c's out of your grunt task:
grunt.registerTask("superCoolServerShutdownListener",function(step){
var name = this.name;
if (step === 'exit') process.exit();
else {
process.on("SIGINT",function(){
grunt.log.writeln("").writeln("Shutting down super cool server...");
grunt.task.run(["shell:runSuperCoolJavaServer:kill"]); //the key!
grunt.task.current.async()();
});
}
});
Finally, register the tasks (note that the shell task is addressed as shell:runSuperCoolJavaServer, matching the config above):
grunt.registerTask('serverWithKill', [
    'shell:runSuperCoolJavaServer',
    'superCoolServerShutdownListener'
]);
I'm trying to make my tests run each time I save some files. Here is the gulp watch:
gulp.task('jasmine', function () {
    gulp.src('spec/nodejs/*Spec.js')
        .pipe(jasmine({ verbose: true, includeStackTrace: true }));
});

gulp.task('watch', function () {
    gulp.watch(['app/*.js', 'app/!(embed)**/*.js', 'spec/nodejs/*.js'], ['jasmine']);
});
To test, for example, app/maps.js, I create a spec/nodejs/mapsSpec.js file like this:
'use strict';
var maps = require('../../app/maps');

describe('/maps related routes', function () {
    it('should ...', function () {...}
    ...
If I change a spec file, everything works well. If I modify the app/maps.js file, the change triggers the test. But if I modify it again, the tests are triggered and the modifications do not take effect. For example, if I then add a console.log('foo'), I will not see it until I relaunch gulp watch and save again. So only one run of jasmine works when using it with gulp.watch.
I guess it's because require is cached by nodejs in the gulp process. So what should I do?
I took a look at the code of gulp-jasmine. The problem is that only the *Spec.js file is removed from the require cache; the cache entries for its children (the required files under test) aren't cleared.
Within the index.js of gulp-jasmine there is a line which deletes the cache entry:
delete require.cache[require.resolve(path.resolve(file.path))];
If you put the next block of code before that delete, all the children's cache entries will be deleted as well, and the tests will run correctly every time you save your file.
var files = require.cache[require.resolve(path.resolve(file.path))];
if (typeof files !== 'undefined') {
    for (var i in files.children) {
        delete require.cache[files.children[i].id];
    }
}
You can make this change in node_modules for now.
I will open a pull request, so this may be solved permanently in the near future.
I also wrote a post about it: http://navelpluisje.nl/entry/fix-cache-problem-jasmine-tests-with-gulp
I haven't found a fix for this issue, but you can work around it via the gulp-shell task.
npm install gulp-shell --save-dev
then
var shell = require('gulp-shell');
...

gulp.task('jasmine', function () {
    gulp.src('spec/nodejs/*Spec.js')
        .pipe(shell('minijasminenode spec/*Spec.js'));
});
You'll also need minijasminenode installed as a direct dependency, since gulp-jasmine only depends on it internally.
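For example (assuming the shell task resolves binaries from your local node_modules/.bin):
npm install minijasminenode --save-dev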