Recently I made a simple API server using Node.js + Express. The script below is part of the package.json file I use to run it with npm commands.
"scripts": {
...
"release": "cross-env NODE_ENV=production MODE=release node server/app.js",
}
After I start the server with npm run release, I can see multiple processes such as the following running on my Linux server.
/bin/sh /api/node_modules/.bin/cross-env NODE_ENV=production MODE=release node server/app.js
node /api/node_modules/.bin/../cross-env/bin/cross-env.js NODE_ENV=production MODE=release node server/app.js
node server/app.js
I read the related documentation here, but I don't understand what actually happens in the background.
What is the order of creating processes? npm => /bin/sh => node /api/.. => node server/app.js ?
What does each process do? Are all three processes necessary to run my server?
If I want to kill the server with a pid, which process id should I use?
What is the order of creating processes? npm => /bin/sh => node /api/.. => node server/app.js ?
What does each process do? Are all three processes necessary to run my server?
Well, the flow is like this:
npm is spawned inside your shell (you run it); npm runs the script line with /bin/sh, prepending node_modules/.bin to the PATH, which is why cross-env resolves without a full path.
That shell spawns cross-env, a package for setting environment variables in a cross-OS way.
That process in turn spawns Node.js (after setting the environment variables).
That's why you see 3 processes. Once your server is up, only the actual server process is needed to run the server.
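Conceptually, cross-env's job is small. Here is a simplified sketch (not the actual cross-env implementation) of what that middle process does:
const { spawn } = require("child_process");

const args = process.argv.slice(2);
const env = Object.assign({}, process.env);

// Peel off leading KEY=value pairs and merge them into the environment.
let i = 0;
while (i < args.length && /^\w+=/.test(args[i])) {
  const eq = args[i].indexOf("=");
  env[args[i].slice(0, eq)] = args[i].slice(eq + 1);
  i++;
}

// Spawn the remaining command (e.g. `node server/app.js`) as a child.
const child = spawn(args[i], args.slice(i + 1), { stdio: "inherit", env });
child.on("exit", (code) => process.exit(code === null ? 1 : code));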
If I want to kill the server with a pid, which process id should I use?
This one: node server/app.js - since that's your actual server. The others are just "utility processes" (one for the npm script you ran, the other for setting the environment variables).
It's worth mentioning that, in general, servers are run inside containers or under orchestrators/process managers that have built-in logic for restarting/killing the process. Typically the orchestrator sends SIGTERM to the process.
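On Linux, something like pgrep -f "node server/app.js" will usually find the right pid, which you can then pass to kill. Note that killing only the sh or cross-env wrapper can leave the node child running, since a child process is not signalled automatically when its parent dies.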
I am building a CLI tool with Node, and want to use the fs.promises API. However, when the app is launched, there's always an ExperimentalWarning, which is super annoying and messes up the interactive prompts. How can I disable this warning / all warnings?
I'm testing this with the latest Node v10 LTS release on Windows 10.
To use the CLI tool globally, I have added this to my package.json file:
{
//...
"preferGlobal": true,
"bin": { "myapp" : "./index.js" }
//...
}
And have run npm link to link the ./index.js script. Then I am able to run the app globally simply with myapp.
After some research I noticed that there are generally 2 ways to disable the warnings:
set the environment variable NODE_NO_WARNINGS=1
call the script with node --no-warnings ./index.js
Although I was able to disable the warnings with the two methods above, there seems to be no way to do that while directly running the myapp command.
The shebang I placed in the entrance script ./index.js is:
#!/usr/bin/env node
// my code...
I have also read other discussions on modifying the shebang, but haven't found a universal/cross-platform way to do this - either to pass arguments to node itself, or to set the env variable.
If I publish this npm package, it would be great if there were a way to make sure the warnings of this single package are disabled in advance, instead of having each individual user tweak their environment themselves. Are there any hidden npm package.json configs that allow this?
Any help would be greatly appreciated!
I am now using a launcher script to spawn a child_process to work around this limitation. Ugly, but it works with npm link, global installs and whatnot.
#!/usr/bin/env node
const { spawnSync } = require("child_process");
const { resolve } = require("path");
// Say our original entrance script is `app.js`.
// The args-array form avoids quoting issues if the install path
// contains spaces, and forwards any CLI arguments to the real script.
const args = ["--no-warnings", resolve(__dirname, "app.js"), ...process.argv.slice(2)];
const { status } = spawnSync(process.execPath, args, { stdio: "inherit" });
process.exit(status === null ? 1 : status);
As it's kind of a hack, I won't be using this method next time; instead I'll wrap the original APIs in promises manually, stick to util.promisify, or use the blocking/sync versions of the APIs.
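For reference, the util.promisify route avoids fs.promises (and therefore the warning on Node 10) entirely. A minimal sketch:
const fs = require("fs");
const { promisify } = require("util");

// Wrap the callback-style API once, then use it like a promise API.
const readFile = promisify(fs.readFile);

readFile("package.json", "utf8")
  .then((text) => console.log(JSON.parse(text).name))
  .catch((err) => console.error(err));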
I configured my test script like this:
"scripts": {
"test": "tsc && cross-env NODE_OPTIONS=--experimental-vm-modules NODE_NO_WARNINGS=1 jest"
},
Notice the NODE_NO_WARNINGS=1 part. It disables the warnings I was getting from setting NODE_OPTIONS=--experimental-vm-modules.
Here's what I'm using to run node with a command line flag:
#!/bin/sh
_=0// "exec" "/usr/bin/env" "node" "--experimental-repl-await" "$0" "$@"
// Your normal Javascript here
The first line tells the shell to use /bin/sh to run the script. The second line is a bit magical. To the shell it's a variable assignment _=0// followed by "exec" ....
Node sees it as a variable assignment followed by a comment - so it's almost a no-op, apart from the side effect of assigning 0 to _.
The result is that when the shell reaches line 2 it will exec node (via env) with any command line options you need.
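One caveat: this relies on a POSIX shell actually executing the file. On Windows, npm's generated .cmd shim invokes node on the file directly; node skips the shebang line and reads line 2 as the assignment plus a comment, so the script still runs there, but the extra flag is silently dropped.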
You can also catch emitted warnings in your script and choose which ones to prevent from being logged:
const originalEmit = process.emit;
process.emit = function (name, data, ...args) {
if (
name === `warning` &&
typeof data === `object` &&
data.name === `ExperimentalWarning`
//if you want to only stop certain messages, test for the message here:
//&& data.message.includes(`Fetch API`)
) {
return false;
}
return originalEmit.apply(process, arguments);
};
Inspired by this patch to yarn.
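Since this works by intercepting process.emit, it only suppresses warnings emitted after the patch is installed, so it should run at the very top of your entry script, before anything that might trigger a warning.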
I am trying to have a nodejs application start automatically on system boot. Basically all I need is to run the command node /dir/app.
I am using OpenWrt on an Arduino Yun and have tried a couple of things.
The OpenWrt wiki says I can do this (https://wiki.openwrt.org/inbox/procd-init-scripts):
#!/bin/sh /etc/rc.common
USE_PROCD=1
start_service() {
procd_open_instance
procd_set_param command node ///www/www-blink.js
procd_close_instance
}
I have also tried changing the path to /www/www-blink.js, not ///www/www-blink.js.
However, I'm not sure what I'm doing wrong, as nothing comes up when I run it with /etc/init.d/node-app start. I am obviously writing the code wrong, but I'm not sure what exactly it should look like.
The other thing I have tried is the node modules forever and forever-service.
I downloaded them on my computer using npm install -g forever and forever-service as well, and transferred them to /usr/lib/node_modules on my Arduino Yun. However, when I try to use any forever(-service) commands it says
-ash: forever: not found
I have tried a couple other things, however nothing has worked. Any help would be greatly appreciated.
-- I also need to be able to start my Express script with npm start, not node app, but I guess the first thing is getting it to work at all.
You can put the start command (node /dir/app &) in the /etc/rc.local script. This will start your Node.js application automatically on system boot.
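Note that PATH can be minimal when /etc/rc.local runs at boot, so you may need the full path to the node binary, e.g. /usr/bin/node /dir/app & (run which node in a normal shell to find where it lives).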
OpenWRT procd has a "respawn" parameter, which will restart a service that exits or crashes.
# respawn automatically if something died; be careful if you have an
# alternative process supervisor. If the process dies sooner than
# respawn_threshold, it is considered crashed, and after 5 retries
# the service is stopped.
procd_set_param respawn ${respawn_threshold:-3600} ${respawn_timeout:-5} ${respawn_retry:-5}
So, you could just add:
procd_set_param respawn 60 5 5
or something like that to your OpenWRT procd init script. This 60 5 5 means it will wait 5s between respawns (middle parameter), and if it respawns more than 5 times (last parameter) within 60s (first parameter), the service is disabled (a "restart loop" is detected).
Refer to this page for more information:
https://openwrt.org/docs/guide-developer/procd-init-scripts
You need to run your node application as a Linux service.
Upstart is perfect for this task:
Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.
If you have an app like this (for example):
// app.js
var express = require('express')
var app = express()
var port = process.env.PORT
app.get('/', function(req, res) {
res.send('Hello world!')
})
app.listen(port)
With a package.json like this:
{
"name": "my-awesome-app",
"version": "1.0.0",
"dependencies": {
"express": "^4.13.3"
},
"scripts": {
"start": "node app.js"
}
}
We create a upstart configuration file called myAwesomeApp.conf with the following code:
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 10 5
setuid ubuntu
chdir /opt/myAwesomeApp
env PORT=3000
exec npm start
To finish, put your application (app.js and package.json) in /opt/myAwesomeApp and copy the configuration file myAwesomeApp.conf to /etc/init/.
That's all; now you just need to run service myAwesomeApp start to run your node application as a service.
I've never used procd before, but it likely needs the full path to node (e.g., /usr/bin/node). You'd need to make the line something like procd_set_param command /usr/bin/node /www/www-blink.js, assuming the file you want to run is /www/www-blink.js. You can locate node by running which node or type -a node.
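Putting the full-path and respawn advice together, an untested sketch of the whole init script might look like this (the node path and START priority are assumptions; adjust them for your system):
#!/bin/sh /etc/rc.common
# /etc/init.d/node-app - minimal procd sketch
USE_PROCD=1
START=99

start_service() {
    procd_open_instance
    procd_set_param command /usr/bin/node /www/www-blink.js
    # threshold 60s, wait 5s between respawns, give up after 5 retries
    procd_set_param respawn 60 5 5
    procd_close_instance
}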
I see people using gulp with webpack. But then I read that webpack can replace gulp? I'm completely confused here... can someone explain?
UPDATE
in the end I started with gulp. I was new to modern front-end tooling and just wanted to get up and running quickly. Now that I've got my feet quite wet after more than a year, I'm ready to move to webpack. I suggest the same route for people who start off in the same shoes. Not saying you can't try webpack first, but just sayin' - if it seems complicated, start with gulp... nothing wrong with that.
If you don't want gulp, then yes, there's grunt, but you could also just specify commands in your package.json and call them from the command line without a task runner, just to get up and running initially. For example:
"scripts": {
"babel": "babel src -d build",
"browserify": "browserify build/client/app.js -o dist/client/scripts/app.bundle.js",
"build": "npm run clean && npm run babel && npm run prepare && npm run browserify",
"clean": "rm -rf build && rm -rf dist",
"copy:server": "cp build/server.js dist/server.js",
"copy:index": "cp src/client/index.html dist/client/index.html",
"copy": "npm run copy:server && npm run copy:index",
"prepare": "mkdir -p dist/client/scripts/ && npm run copy",
"start": "node dist/server"
},
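With a setup like this, npm run build chains clean, babel, prepare, and browserify in sequence, and npm start serves the result - no task runner involved.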
This answer might help. Task Runners (Gulp, Grunt, etc) and Bundlers (Webpack, Browserify). Why use together?
...and here's an example of using webpack from within a gulp task. This goes a step further and assumes that your webpack config is written in es6.
var gulp = require('gulp');
var webpack = require('webpack');
var gutil = require('gulp-util');
var babel = require('babel/register'); // registers the ES6 transform so the config below can be required
var path = require('path');
var config = require(path.join('../..', 'webpack.config.es6.js'));
gulp.task('webpack-es6-test', function(done){
webpack(config).run(onBuild(done));
});
function onBuild(done) {
return function(err, stats) {
if (err) {
gutil.log('Error', err);
if (done) {
done();
}
} else {
Object.keys(stats.compilation.assets).forEach(function(key) {
gutil.log('Webpack: output ', gutil.colors.green(key));
});
gutil.log('Webpack: ', gutil.colors.blue('finished ', stats.compilation.name));
if (done) {
done();
}
}
}
}
I think you'll find that as your app gets more complicated, you might want to use gulp with a webpack task, as in the example above. This allows you to do a few more interesting things in your build that webpack loaders and plugins really don't do, i.e. creating output directories, starting servers, etc. Well, to be succinct, webpack actually can do those things, but you might find it limited for your long-term needs. One of the biggest advantages you get from gulp -> webpack is that you can customize your webpack config for different environments and have gulp run the right task at the right time. It's really up to you, but there's nothing wrong with running webpack from gulp; in fact there are some pretty interesting examples of how to do it. The example above is basically from jlongster.
NPM scripts can do the same as gulp, but in about 50x less code. In fact, with no code at all, only command line arguments.
Take, for example, the use case you described, where you want different code for different environments.
With Webpack + NPM Scripts, it's this easy:
"prebuild:dev": "npm run clean:wwwroot",
"build:dev": "cross-env NODE_ENV=development webpack --config config/webpack.development.js --hot --profile --progress --colors --display-cached",
"postbuild:dev": "npm run copy:index.html && npm run rename:index.html",
"prebuild:production": "npm run clean:wwwroot",
"build:production": "cross-env NODE_ENV=production webpack --config config/webpack.production.js --profile --progress --colors --display-cached --bail",
"postbuild:production": "npm run copy:index.html && npm run rename:index.html",
"clean:wwwroot": "rimraf -- wwwroot/*",
"copy:index.html": "ncp wwwroot/index.html Views/Shared",
"rename:index.html": "cd ../PowerShell && elevate.exe -c renamer --find \"index.html\" --replace \"_Layout.cshtml\" \"../MyProject/Views/Shared/*\"",
Now you simply maintain two webpack config scripts: one for development mode, webpack.development.js, and one for production mode, webpack.production.js. I also utilize a webpack.common.js which houses the webpack config shared across all environments, and use webpack-merge to merge them.
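The merge step itself is tiny. A sketch of what webpack.development.js might look like (the file names follow this answer; the webpack-merge API has changed across versions):
// webpack.development.js - shared config plus dev-only settings
const webpackMerge = require("webpack-merge");
const commonConfig = require("./webpack.common.js");

module.exports = webpackMerge(commonConfig, {
  devtool: "cheap-module-source-map"
  // development-only loaders/plugins go here
});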
Because of the coolness of NPM scripts, they allow for easy chaining, similar to how gulp does streams/pipes.
In the example above, to build for development, you simply go to your command line and execute npm run build:dev.
NPM would first run prebuild:dev,
Then build:dev,
And finally postbuild:dev.
The pre and post prefixes tell NPM which order to execute in.
If you notice, with Webpack + NPM scripts, you can run native programs, such as rimraf, instead of a gulp wrapper for a native program such as gulp-rimraf. You can also run native Windows .exe files, as I did here with elevate.exe, or native *nix files on Linux or Mac.
Try doing the same thing with gulp. You'll have to wait for someone to come along and write a gulp wrapper for the native program you want to use. In addition, you'll likely need to write convoluted code like this (taken straight from the angular2-seed repo):
Gulp Development code
import * as gulp from 'gulp';
import * as gulpLoadPlugins from 'gulp-load-plugins';
import * as merge from 'merge-stream';
import * as util from 'gulp-util';
import { join/*, sep, relative*/ } from 'path';
import { APP_DEST, APP_SRC, /*PROJECT_ROOT, */TOOLS_DIR, TYPED_COMPILE_INTERVAL } from '../../config';
import { makeTsProject, templateLocals } from '../../utils';
const plugins = <any>gulpLoadPlugins();
let typedBuildCounter = TYPED_COMPILE_INTERVAL; // Always start with the typed build.
/**
* Executes the build process, transpiling the TypeScript files (except the spec and e2e-spec files) for the development
* environment.
*/
export = () => {
let tsProject: any;
let typings = gulp.src([
'typings/index.d.ts',
TOOLS_DIR + '/manual_typings/**/*.d.ts'
]);
let src = [
join(APP_SRC, '**/*.ts'),
'!' + join(APP_SRC, '**/*.spec.ts'),
'!' + join(APP_SRC, '**/*.e2e-spec.ts')
];
let projectFiles = gulp.src(src);
let result: any;
let isFullCompile = true;
// Only do a typed build every X builds, otherwise do a typeless build to speed things up
if (typedBuildCounter < TYPED_COMPILE_INTERVAL) {
isFullCompile = false;
tsProject = makeTsProject({isolatedModules: true});
projectFiles = projectFiles.pipe(plugins.cached());
util.log('Performing typeless TypeScript compile.');
} else {
tsProject = makeTsProject();
projectFiles = merge(typings, projectFiles);
}
result = projectFiles
.pipe(plugins.plumber())
.pipe(plugins.sourcemaps.init())
.pipe(plugins.typescript(tsProject))
.on('error', () => {
typedBuildCounter = TYPED_COMPILE_INTERVAL;
});
if (isFullCompile) {
typedBuildCounter = 0;
} else {
typedBuildCounter++;
}
return result.js
.pipe(plugins.sourcemaps.write())
// Use for debugging with Webstorm/IntelliJ
// https://github.com/mgechev/angular2-seed/issues/1220
// .pipe(plugins.sourcemaps.write('.', {
// includeContent: false,
// sourceRoot: (file: any) =>
// relative(file.path, PROJECT_ROOT + '/' + APP_SRC).replace(sep, '/') + '/' + APP_SRC
// }))
.pipe(plugins.template(templateLocals()))
.pipe(gulp.dest(APP_DEST));
};
Gulp Production code
import * as gulp from 'gulp';
import * as gulpLoadPlugins from 'gulp-load-plugins';
import { join } from 'path';
import { TMP_DIR, TOOLS_DIR } from '../../config';
import { makeTsProject, templateLocals } from '../../utils';
const plugins = <any>gulpLoadPlugins();
const INLINE_OPTIONS = {
base: TMP_DIR,
useRelativePaths: true,
removeLineBreaks: true
};
/**
* Executes the build process, transpiling the TypeScript files for the production environment.
*/
export = () => {
let tsProject = makeTsProject();
let src = [
'typings/index.d.ts',
TOOLS_DIR + '/manual_typings/**/*.d.ts',
join(TMP_DIR, '**/*.ts')
];
let result = gulp.src(src)
.pipe(plugins.plumber())
.pipe(plugins.inlineNg2Template(INLINE_OPTIONS))
.pipe(plugins.typescript(tsProject))
.once('error', function () {
this.once('finish', () => process.exit(1));
});
return result.js
.pipe(plugins.template(templateLocals()))
.pipe(gulp.dest(TMP_DIR));
};
The actual gulp code is much more complicated than this, as this is only 2 of the several dozen gulp files in the repo.
So, which one is easier for you?
In my opinion, NPM scripts far surpass gulp and grunt, in both effectiveness and ease of use, and all front-end developers should consider using them in their workflow, because they are a major time saver.
UPDATE
There is one scenario I've encountered where I wanted to use Gulp in combination with NPM scripts and Webpack.
When I need to do remote debugging on an iPad or Android device, for example, I need to start up extra servers. In the past I ran all the servers as separate processes from within IntelliJ IDEA (or WebStorm), which is easy with the "Compound" run configuration. But if I needed to stop and restart them, it was tedious to close 5 different server tabs, plus the output was spread across the different windows.
One of the benefits of gulp is that it can chain all the output from separate independent processes into one console window, which becomes the parent of all the child servers.
So I created a very simple gulp task that just runs my NPM scripts or the commands directly, so all the output appears in one window, and I can easily end all 5 servers at once by closing the gulp task window.
gulpfile.js
/**
* Gulp / Node utilities
*/
var gulp = require('gulp-help')(require('gulp'));
var utils = require('gulp-util');
var log = utils.log;
var con = utils.colors;
/**
* Basic workflow plugins
*/
var shell = require('gulp-shell'); // run command line from shell
var browserSync = require('browser-sync');
/**
* Performance testing plugins
*/
var ngrok = require('ngrok');
// Variables
var serverToProxy1 = "localhost:5000";
var finalPort1 = 8000;
// When the user enters "gulp" on the command line, the default task below is called automatically; it runs all the other tasks.
// Default task
gulp.task('default', function (cb) {
console.log('Starting dev servers!...');
gulp.start(
'devserver:jit',
'nodemon',
'browsersync',
'ios_webkit_debug_proxy',
'ngrok-url',
// 'vorlon',
// 'remotedebug_ios_webkit_adapter'
);
});
gulp.task('nodemon', shell.task('cd ../backend-nodejs && npm run nodemon'));
gulp.task('devserver:jit', shell.task('npm run devserver:jit'));
gulp.task('ios_webkit_debug_proxy', shell.task('npm run ios-webkit-debug-proxy'));
gulp.task('browsersync', shell.task(`browser-sync start --proxy ${serverToProxy1} --port ${finalPort1} --no-open`));
gulp.task('ngrok-url', function (cb) {
return ngrok.connect(finalPort1, function (err, url) {
var site = url;
log(con.cyan('ngrok'), '- serving your site from', con.yellow(site));
cb();
});
});
// gulp.task('vorlon', shell.task('vorlon'));
// gulp.task('remotedebug_ios_webkit_adapter', shell.task('remotedebug_ios_webkit_adapter'));
Still quite a bit of code just to run 5 tasks, in my opinion, but it works for the purpose. One caveat is that gulp-shell doesn't seem to run some commands correctly, such as ios-webkit-debug-proxy. So I had to create an NPM script that just executes the same command, and then it works.
So I primarily use NPM Scripts for all my tasks, but occasionally when I need to run a bunch of servers at once, I'll fire up my Gulp task to help out. Pick the right tool for the right job.
UPDATE 2
I now use a script called concurrently, which does the same thing as the gulp task above. It runs multiple CLI scripts in parallel and pipes them all to the same console window, and it's very simple to use. Once again, no code required (well, the code is inside the node_module for concurrently, but you don't have to concern yourself with that).
// NOTE: If you need to run a command with spaces in it, you need to use
// double quotes, and they must be escaped (at least on windows).
// It doesn't seem to work with single quotes.
"run:all": "concurrently \"npm run devserver\" nodemon browsersync ios_webkit_debug_proxy ngrok-url"
This runs all 5 scripts in parallel, piped out to one terminal. Awesome! So at this point, I rarely use gulp, since there are so many CLI scripts to do the same tasks with no code.
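concurrently also has flags such as --kill-others (stop the remaining processes when one dies) and --names (prefix each output line with a label), which make the combined output easier to manage; check its docs for the current syntax.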
I suggest you read these articles which compare them in depth.
How to Use NPM as a Build Tool
Why we should stop using Grunt & Gulp
Why I Left Gulp and Grunt for NPM Scripts
I used both options in my different projects.
Here is one boilerplate that I put together using gulp with webpack - https://github.com/iroy2000/react-reflux-boilerplate-with-webpack.
I have other projects that used only webpack with npm tasks.
Both work totally fine, and I think it boils down to how complicated your task is and how much control you want over your configuration.
For example, if your tasks are simple, let's say dev, build, test... etc. (which is very standard), you are totally fine with just webpack and npm tasks.
But if you have a very complicated workflow and you want more control over your configuration (because it is coding), you could go the gulp route.
But from my experience, the webpack ecosystem provides more than enough plugins and loaders for what I need, so I love using the bare-minimum approach unless there is something you can only do in gulp. Also, your configuration is easier with one less thing in your system.
And a lot of times, nowadays, I see people actually replacing gulp and browserify altogether with webpack alone.
The concepts of Gulp and Webpack are quite different. You tell Gulp how to put front-end code together step-by-step, but you tell Webpack what you want through a config file.
Here is a short article (5 min read) I wrote explaining my understanding of the differences: https://medium.com/@Maokai/compile-the-front-end-from-gulp-to-webpack-c45671ad87fe
Our company moved from Gulp to Webpack in the past year. Although it took some time, we figured out how to move all we did in Gulp to Webpack. So to us, everything we did in Gulp we can also do through Webpack, but not the other way around.
As of today, I'd suggest just using Webpack and avoiding the mixture of Gulp and Webpack, so you and your team do not need to learn and maintain both, especially because they require very different mindsets.
Honestly, I think it's best to use both.
Webpack for all javascript related.
Gulp for all css related.
I still have to find a decent solution for packaging css with webpack, and so far I am happy using gulp for css and webpack for javascript.
I also use npm scripts as @Tetradev described, especially since I am using Visual Studio, and while NPM Task Runner is pretty reliable, Webpack Task Runner is pretty buggy.
I've got two heroku node.js apps, one for prod and one for dev, and I also have a Gruntfile with dev- and prod-specific tasks. I know you can set up package.json to run grunt as a postinstall hook for npm, but can you specify somehow different tasks to be run depending on what enviro you're in?
Here's what the relevant section of my package.json looks like so far:
"scripts": {
"postinstall": "./node_modules/grunt/bin/grunt default"
},
Rather than run grunt default every time, I'd love to run "grunt production" if NODE_ENV is production, etc.
Is this possible?
Sadly there's no distinction like postinstall vs. postinstallDev. You can make an intermediate script to handle the difference, though. For example, if you have the following:
"scripts": { "postinstall": "node postInstall.js" },
Then in this script you could check the environment variable and execute the correct Grunt task from there:
// postInstall.js
var spawn = require('child_process').spawn;

// npm puts node_modules/.bin on the PATH for lifecycle scripts,
// so `grunt` resolves without an explicit path. (You could also
// require the Gruntfile directly instead of spawning.)
function runGrunt(task) {
  spawn('grunt', [task], { stdio: 'inherit' })
    .on('exit', function (code) { process.exit(code); });
}

var env = process.env.NODE_ENV;

if (env === 'development') {
  runGrunt('default');
} else if (env === 'production') {
  runGrunt('production');
} else {
  console.error('No task for environment:', env);
  process.exit(1);
}
A couple of peripherally related points...
Try not to have Grunt and co. as dependencies. Keep them in devDependencies to avoid having to install all that stuff in production. Having an intermediary script in vanilla Node like the one above allows you to do this. I like to use a postinstall script like this to install git hook scripts too (but, again, only on development environments).
You don't have to use ./node_modules/grunt/bin/grunt default. If grunt-cli is a dependency or devDependency, npm knows where to look and grunt default will work fine.
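So the relevant part of package.json can be as small as this (version ranges here are only illustrative):
"scripts": {
  "postinstall": "node postInstall.js"
},
"devDependencies": {
  "grunt": "~0.4.5",
  "grunt-cli": "~0.1.13"
}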
For some reason, my dev environment was never running my "development" if statement. I sent a ticket to Heroku support, and this was their answer: "By default, your environment is not available during slug compilation. If you would like to make this available, you can enable an experimental feature called "user-env-compile". Please see the following article for details: http://devcenter.heroku.com/articles/labs-user-env-compile". Good to know.
So I went another route, using the heroku-buildpack-nodejs-grunt buildpack and then creating a heroku:development grunt task.