I really like the structure of an NX workspace, and that led me to start using it when building a new CLI project.
I started by creating a @nrwl/node:application, but I'm currently having some issues making it executable.
I believe this is not a problem with NX itself, but I can't add a shebang (#!/usr/bin/env node) to the main.ts file, since the tsc transpiler will complain:
Module parse failed: Unexpected character '#' (1:0)
File was processed with these loaders:
 * ./node_modules/ts-loader/index.js
I have added the "bin": {"cli": "main.js"} property to my package.json file, but if I run the main.js file without the shebang I get this error:
line 1: syntax error near unexpected token `('
C:\Users\*\AppData\Roaming\npm/node_modules/*/dist/apps/*/main.js: line 1: `(function(e, a) { for(var i in a) e[i] = a[i]; }(exports, /******/ (function(modules) { // webpackBootstrap
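For reference, here's a trimmed-down version of the package.json with the bin entry (the name and version are just placeholders):

{
  "name": "my-cli",
  "version": "0.0.1",
  "bin": { "cli": "main.js" }
}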
Is there any smart way of solving this?
Steps to reproduce:
npx create-nx-workspace@latest cli-workspace --preset empty --cli nx --nx-cloud false
cd cli-workspace
npm install -D @nrwl/node
nx generate @nrwl/node:application my-cli
Add #!/usr/bin/env node to the top of the main.ts file
npm start
I've had the same problem. I solved it by using the @nx-extend/gcp-functions:build builder to build the application, and then created a custom executor to install the application as a CLI tool.
import { ExecutorContext } from '@nrwl/devkit';
import { exec } from 'child_process';
import { promisify } from 'util';
import * as fs from 'fs';

export interface NodeInstallOptions {
  buildPath: string;
  appName: string;
}

export default async function nodeInstallExecutor(
  options: NodeInstallOptions,
  context: ExecutorContext
) {
  let success = true;
  const mainPath = `${options.buildPath}/main.js`;

  console.log('Building library...');
  success = (await runBashLine(`nx build ${options.appName}`)) && success;

  console.log('Adding shebang line to main.js...');
  const file = fs.readFileSync(mainPath, 'utf8');
  const firstLine = file.split('\n')[0];
  if (firstLine !== '#!/usr/bin/env node') {
    // prepend the shebang so the bundle can be executed directly
    fs.writeFileSync(mainPath, '#!/usr/bin/env node\n\n' + file);
  }

  console.log('Packing library...');
  success = (await runBashLine(`cd ${options.buildPath}; npm pack`)) && success;

  console.log('Installing library...');
  success = (await runBashLine(`npm install -g ${options.buildPath}/*.tgz`)) && success;

  return { success };
}

// Runs a shell command and reports success (i.e. nothing was written to stderr).
// Booleans are passed by value in JS, so the result has to be returned rather
// than assigned to a parameter.
async function runBashLine(line: string): Promise<boolean> {
  const { stdout, stderr } = await promisify(exec)(line);
  console.log(stdout);
  console.error(stderr);
  return !stderr;
}
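For completeness, a target wired to this executor might look roughly like the following in the app's project.json (the executor path, target name, and option values here are assumptions; see the Nx docs on local executors for the actual registration steps):

"targets": {
  "install": {
    "executor": "./tools/executors/node-install:install",
    "options": {
      "buildPath": "dist/apps/my-cli",
      "appName": "my-cli"
    }
  }
}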
I'm trying to create a deployment script which will tell me whether or not I have the latest image of my project deployed on either my master or development branch.
I'm attempting to use git.diff to compare the SHA1 hash of the deployed image against my local repository, and although the hashes are clearly different, git.diff gives me no output. I don't understand what's going on here: if the SHA1s are different, surely there must be changes for git.diff to show?
This is the code I have written so far:
#!/usr/bin/node

// get an exec function from node we can use to run shell commands
const exec = require('util').promisify(require('child_process').exec);

// check the user supplied the folder name of the repo as an arg
if (!process.argv[2]) {
  console.error('argument missing');
  process.exit(1);
}

// initialize our git client using the repo path arg
const git = require('simple-git/promise')("../../" + process.argv[2]);

var projectNameArray = process.argv[2].split(".");
const projectName = projectNameArray[0] + "-" + projectNameArray[1] + "-" + projectNameArray[2];
console.log('\x1b[36m%s\x1b[0m', 'Your project name is:', projectName);

// use an async IIFE so we can use async/await
(async () => {
  // run git rev-parse development
  var devSha1 = await git.revparse(['development']);
  console.log('\x1b[36m%s\x1b[0m', 'devSha1: ', devSha1);
  devSha1 = devSha1.replace(/(\n)/gm, "");

  // run git rev-parse master
  var masterSha1 = await git.revparse(['master']);
  console.log('\x1b[36m%s\x1b[0m', 'masterSha1: ', masterSha1);
  masterSha1 = masterSha1.replace(/(\n)/gm, "");

  // use kubectl to export the deployment to JSON and then parse it
  const { stdout, stderr } = await exec(`kubectl get deployment ${projectName} -o json`);
  const pods = JSON.parse(stdout);
  const imageName = pods.spec.template.spec.containers[0].image;

  // get the deployed image hash
  const commitHashArray = imageName.split('development-' || 'master-');
  console.log('\x1b[36m%s\x1b[0m', 'Deployed image: ', commitHashArray[1]);

  var diffArray = new Array(devSha1, commitHashArray[1]);

  // logic to tell if the latest is deployed or if we're behind
  if (commitHashArray[1] == devSha1) {
    console.log('\x1b[32m%s\x1b[0m', 'You have the latest image deployed');
  } else {
    console.log('\x1b[31m%s\x1b[0m', 'You don\'t have the latest image deployed');
    await git.diff(diffArray);
  }
})().then(() => console.log('\x1b[32m%s\x1b[0m', 'Ok')).catch((e) => console.error(e));
This gives me the following console output:
Your project name is: xxx-xxx-xxx
devSha1: 6a7ee89dbefc4508b03d863e5c1f5dd9dce579b4
masterSha1: 4529244ba95e1b043b691c5ef1dc484c7d67dbe2
Deployed image: 446c4ba124f7a12c8a4c91ca8eedde4c3c8652fd
You don't have the latest image deployed
Ok
I'm not sure if I'm fundamentally misunderstanding how git.diff works, or if something else is at play here. The images clearly don't match, so I would love it if anyone could explain why there is no output from this function?
Thanks! :)
What is the most correct way to install npm packages in nested subfolders?
my-app
├── my-sub-module
│   └── package.json
└── package.json
What is the best way to have the packages in /my-sub-module installed automatically when npm install is run in my-app?
I prefer using postinstall, if you know the names of the nested subdirectories. In package.json:

"scripts": {
  "postinstall": "cd nested_dir && npm install",
  ...
}
Per @Scott's answer, the install|postinstall script is the simplest way, as long as the sub-directory names are known. This is how I run it for multiple sub-dirs. For example, pretend we have api/, web/ and shared/ sub-projects in a monorepo root:

// In monorepo root package.json
{
  ...
  "scripts": {
    "postinstall": "(cd api && npm install); (cd web && npm install); (cd shared && npm install)"
  },
}
On Windows, replace the ; between the parentheses with &&.

// In monorepo root package.json
{
  ...
  "scripts": {
    "postinstall": "(cd api && npm install) && (cd web && npm install) && (cd shared && npm install)"
  },
}
Use Case 1: If you want to be able to run npm commands from within each subdirectory (where each package.json is), you will need to use postinstall.
As I often use npm-run-all anyway, I use it to keep things nice and short (the part in the postinstall):

{
  "install:demo": "cd projects/demo && npm install",
  "install:design": "cd projects/design && npm install",
  "install:utils": "cd projects/utils && npm install",
  "postinstall": "run-p install:*"
}
This has the added benefit that I can install all at once, or individually. If you don't need this or don't want npm-run-all as a dependency, check out demisx's answer (using subshells in postinstall).
Use Case 2: If you will be running all npm commands from the root directory (and, for example, won't be using npm scripts in subdirectories), you could simply install each subdirectory like you would any dependency:
npm install path/to/any/directory/with/a/package-json
In the latter case, don't be surprised that you don't find any node_modules or package-lock.json file in the sub-directories: all packages will be installed in the root node_modules, which is why you won't be able to run your npm commands (that require dependencies) from any of your subdirectories.
If you're not sure, use case 1 always works.
If you want to run a single command to install npm packages in nested subfolders, you can run a script via npm from the main package.json in your root directory. The script will visit every subdirectory and run npm install.
Below is a .js script that will achieve the desired result:
var fs = require('fs');
var resolve = require('path').resolve;
var join = require('path').join;
var cp = require('child_process');
var os = require('os');

// get library path
var lib = resolve(__dirname, '../lib/');

fs.readdirSync(lib).forEach(function (mod) {
  var modPath = join(lib, mod);

  // ensure path has package.json
  if (!fs.existsSync(join(modPath, 'package.json'))) {
    return;
  }

  // npm binary based on OS
  var npmCmd = os.platform().startsWith('win') ? 'npm.cmd' : 'npm';

  // install folder
  cp.spawn(npmCmd, ['i'], {
    env: process.env,
    cwd: modPath,
    stdio: 'inherit'
  });
});
Note that this is an example taken from a StrongLoop article that specifically addresses a modular node.js project structure (including nested components and package.json files).
As suggested, you could also achieve the same thing with a bash script.
EDIT: Made the code work in Windows
Just for reference, in case people come across this question. You can now:
Add a package.json to a subfolder
Install this subfolder as a reference-link in the main package.json:
npm install --save path/to/my/subfolder
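This adds a file: entry to the main package.json's dependencies, along these lines (the package name comes from the subfolder's own package.json):

"dependencies": {
  "my-subfolder": "file:path/to/my/subfolder"
}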
The accepted answer works, but you can use --prefix to run npm commands in a selected location.
"postinstall": "npm --prefix ./nested_dir install"
And --prefix works for any npm command, not just install.
You can also view the current prefix with
npm prefix
And set your global install (-g) folder with
npm config set prefix "folder_path"
Maybe TMI, but you get the idea...
My solution is very similar.
Pure Node.js
The following script examines all subfolders (recursively), as long as they have a package.json, and runs npm install in each of them.
One can add exceptions to it: folders allowed to not have a package.json. In the example below one such folder is "packages".
One can run it as a "preinstall" script.
const path = require('path')
const fs = require('fs')
const child_process = require('child_process')

const root = process.cwd()

npm_install_recursive(root)

// Since this script is intended to be run as a "preinstall" command,
// it will do `npm install` automatically inside the root folder in the end.
console.log('===================================================================')
console.log(`Performing "npm install" inside root folder`)
console.log('===================================================================')

// Recurses into a folder
function npm_install_recursive(folder)
{
    const has_package_json = fs.existsSync(path.join(folder, 'package.json'))

    // Abort if there's no `package.json` in this folder and it's not a "packages" folder
    if (!has_package_json && path.basename(folder) !== 'packages')
    {
        return
    }

    // If there is `package.json` in this folder then perform `npm install`.
    //
    // Since this script is intended to be run as a "preinstall" command,
    // skip the root folder, because it will be `npm install`ed in the end.
    // Hence the `folder !== root` condition.
    //
    if (has_package_json && folder !== root)
    {
        console.log('===================================================================')
        console.log(`Performing "npm install" inside ${folder === root ? 'root folder' : './' + path.relative(root, folder)}`)
        console.log('===================================================================')

        npm_install(folder)
    }

    // Recurse into subfolders
    for (let subfolder of subfolders(folder))
    {
        npm_install_recursive(subfolder)
    }
}

// Performs `npm install`
function npm_install(where)
{
    child_process.execSync('npm install', { cwd: where, env: process.env, stdio: 'inherit' })
}

// Lists subfolders in a folder
function subfolders(folder)
{
    return fs.readdirSync(folder)
        .filter(subfolder => fs.statSync(path.join(folder, subfolder)).isDirectory())
        .filter(subfolder => subfolder !== 'node_modules' && subfolder[0] !== '.')
        .map(subfolder => path.join(folder, subfolder))
}
If you have the find utility on your system, you could try running the following command in your application's root directory:
find . ! -path "*/node_modules/*" -name "package.json" -execdir npm install \;
Basically, this finds all package.json files and runs npm install in each of their directories, skipping all node_modules directories.
EDIT: As mentioned by fgblomqvist in the comments, npm now supports workspaces too.
Some of the answers are quite old. I think nowadays we have some new options available for setting up monorepos.
I would suggest using yarn workspaces:
Workspaces are a new way to set up your package architecture that’s available by default starting from Yarn 1.0. It allows you to setup multiple packages in such a way that you only need to run yarn install once to install all of them in a single pass.
If you prefer or have to stay with npm, I suggest taking a look at lerna:
Lerna is a tool that optimizes the workflow around managing multi-package repositories with git and npm.
lerna works perfectly with yarn workspaces too - article. I've just finished setting up a monorepo project - example.
And here is an example of a multi-package project configured to use npm + lerna - MDC Web: they run lerna bootstrap using package.json's postinstall.
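As a quick illustration, a minimal root package.json for yarn workspaces looks something like this (the packages/* folder layout is assumed):

{
  "private": true,
  "workspaces": ["packages/*"]
}

With that in place, a single yarn install at the root installs the dependencies of every package under packages/.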
Adding Windows support to snozza's answer, as well as skipping of node_modules folder if present.
var fs = require('fs')
var resolve = require('path').resolve
var join = require('path').join
var cp = require('child_process')

// get library path
var lib = resolve(__dirname, '../lib/')

fs.readdirSync(lib)
  .forEach(function (mod) {
    var modPath = join(lib, mod)

    // skip node_modules and any folder without a package.json
    if (mod === 'node_modules' || !fs.existsSync(join(modPath, 'package.json'))) return

    // Determine OS and set command accordingly
    const cmd = /^win/.test(process.platform) ? 'npm.cmd' : 'npm'

    // install folder
    cp.spawn(cmd, ['i'], { env: process.env, cwd: modPath, stdio: 'inherit' })
  })
Inspired by the scripts provided here, I built a configurable example which:

can be set up to use yarn or npm
can be set up to determine the command to use based on lock files, so that if you configure it to use yarn but a directory only has a package-lock.json, it will use npm for that directory (defaults to true)
has configurable logging
runs installations in parallel using cp.spawn
can do dry runs to let you see what it would do first
can be run as a function or auto-run using env vars
when run as a function, optionally accepts an array of directories to check
returns a promise that resolves when completed
allows setting a max depth to look at if needed
knows to stop recursing if it finds a folder with yarn workspaces (configurable)
allows skipping directories via a comma-separated env var, or by passing the config an array of strings to match against, or a function which receives the file name, file path, and the fs.Dirent object and returns a boolean
const path = require('path');
const { promises: fs } = require('fs');
const cp = require('child_process');

// if you want to have it automatically run based upon process.cwd()
const AUTO_RUN = Boolean(process.env.RI_AUTO_RUN);

/**
 * Creates a config object from environment variables which can then be
 * overridden if executing via its exported function (config as second arg)
 */
const getConfig = (config = {}) => ({
  // we want to use yarn by default but RI_USE_YARN=false will
  // use npm instead
  useYarn: process.env.RI_USE_YARN !== 'false',
  // should we handle yarn workspaces? if this is true (default)
  // then we will stop recursing if a package.json has the "workspaces"
  // property and we will allow `yarn` to do its thing.
  yarnWorkspaces: process.env.RI_YARN_WORKSPACES !== 'false',
  // if truthy, will run extra checks to see if there is a package-lock.json
  // or yarn.lock file in a given directory and use that installer if so.
  detectLockFiles: process.env.RI_DETECT_LOCK_FILES !== 'false',
  // what kind of logging should be done on the spawned processes?
  // if this exists and it is not "errors" it will log everything,
  // otherwise it will only log stderr and spawn errors
  log: process.env.RI_LOG || 'errors',
  // max depth to recurse?
  maxDepth: process.env.RI_MAX_DEPTH || Infinity,
  // do not install at the root directory?
  ignoreRoot: Boolean(process.env.RI_IGNORE_ROOT),
  // an array (or comma separated string for env var) of directories
  // to skip while recursing. if array, can pass functions which
  // return a boolean after receiving the dir path and fs.Dirent args
  // @see https://nodejs.org/api/fs.html#fs_class_fs_dirent
  skipDirectories: process.env.RI_SKIP_DIRS
    ? process.env.RI_SKIP_DIRS.split(',').map(str => str.trim())
    : undefined,
  // just run through and log the actions that would be taken?
  dry: Boolean(process.env.RI_DRY_RUN),
  ...config
});

function handleSpawnedProcess(dir, log, proc) {
  return new Promise((resolve, reject) => {
    proc.on('error', error => {
      console.log(`
----------------
  [RI] | [ERROR] | Failed to Spawn Process
  - Path: ${dir}
  - Reason: ${error.message}
----------------
      `);
      reject(error);
    });

    if (log) {
      proc.stderr.on('data', data => {
        console.error(`[RI] | [${dir}] | ${data}`);
      });
    }

    if (log && log !== 'errors') {
      proc.stdout.on('data', data => {
        console.log(`[RI] | [${dir}] | ${data}`);
      });
    }

    proc.on('close', code => {
      if (log && log !== 'errors') {
        console.log(`
----------------
  [RI] | [COMPLETE] | Spawned Process Closed
  - Path: ${dir}
  - Code: ${code}
----------------
        `);
      }
      if (code === 0) {
        resolve();
      } else {
        reject(
          new Error(
            `[RI] | [ERROR] | [${dir}] | failed to install with exit code ${code}`
          )
        );
      }
    });
  });
}

async function recurseDirectory(rootDir, config) {
  const {
    useYarn,
    yarnWorkspaces,
    detectLockFiles,
    log,
    maxDepth,
    ignoreRoot,
    skipDirectories,
    dry
  } = config;

  const installPromises = [];

  function install(cmd, folder, relativeDir) {
    const proc = cp.spawn(cmd, ['install'], {
      cwd: folder,
      env: process.env
    });
    installPromises.push(handleSpawnedProcess(relativeDir, log, proc));
  }

  function shouldSkipFile(filePath, file) {
    if (!file.isDirectory() || file.name === 'node_modules') {
      return true;
    }
    if (!skipDirectories) {
      return false;
    }
    return skipDirectories.some(check =>
      typeof check === 'function' ? check(filePath, file) : check === file.name
    );
  }

  async function getInstallCommand(folder) {
    let cmd = useYarn ? 'yarn' : 'npm';
    if (detectLockFiles) {
      const [hasYarnLock, hasPackageLock] = await Promise.all([
        fs
          .readFile(path.join(folder, 'yarn.lock'))
          .then(() => true)
          .catch(() => false),
        fs
          .readFile(path.join(folder, 'package-lock.json'))
          .then(() => true)
          .catch(() => false)
      ]);
      if (cmd === 'yarn' && !hasYarnLock && hasPackageLock) {
        cmd = 'npm';
      } else if (cmd === 'npm' && !hasPackageLock && hasYarnLock) {
        cmd = 'yarn';
      }
    }
    return cmd;
  }

  async function installRecursively(folder, depth = 0) {
    if (dry || (log && log !== 'errors')) {
      console.log('[RI] | Check Directory --> ', folder);
    }

    let pkg;

    if (folder !== rootDir || !ignoreRoot) {
      try {
        // Check if package.json exists; if it doesn't this will throw and move on
        pkg = JSON.parse(await fs.readFile(path.join(folder, 'package.json')));
        // get the command that we should use. if lock checking is enabled it will
        // also determine what installer to use based on the available lock files
        const cmd = await getInstallCommand(folder);
        const relativeDir = `${path.basename(rootDir)} -> ./${path.relative(
          rootDir,
          folder
        )}`;
        if (dry || (log && log !== 'errors')) {
          console.log(
            `[RI] | Performing (${cmd} install) at path "${relativeDir}"`
          );
        }
        if (!dry) {
          install(cmd, folder, relativeDir);
        }
      } catch {
        // do nothing when an error is caught, as it simply indicates that
        // package.json likely doesn't exist in this folder
      }
    }

    if (
      depth >= maxDepth ||
      (pkg && useYarn && yarnWorkspaces && pkg.workspaces)
    ) {
      // stop recursing if we have reached maxDepth, or if the package.json in
      // the current directory declares yarn workspaces (in which case `yarn`
      // will handle installing the nested packages itself)
      return;
    }

    const files = await fs.readdir(folder, { withFileTypes: true });

    return Promise.all(
      files.map(file => {
        const filePath = path.join(folder, file.name);
        return shouldSkipFile(filePath, file)
          ? undefined
          : installRecursively(filePath, depth + 1);
      })
    );
  }

  await installRecursively(rootDir);
  await Promise.all(installPromises);
}

async function startRecursiveInstall(directories, _config) {
  const config = getConfig(_config);
  const promise = Array.isArray(directories)
    ? Promise.all(directories.map(rootDir => recurseDirectory(rootDir, config)))
    : recurseDirectory(directories, config);
  await promise;
}

if (AUTO_RUN) {
  startRecursiveInstall(process.cwd());
}

module.exports = startRecursiveInstall;
And here it is in use:

const installRecursively = require('./recursive-install');

installRecursively(process.cwd(), { dry: true });
find . -maxdepth 1 -type d \( ! -name . \) -exec bash -c "cd '{}' && npm install" \;

This runs npm install in every immediate subdirectory of the current folder, skipping the current directory itself.
[For macOS, Linux users]:
I created a bash file to install all dependencies in the project and nested folder.
find . -name node_modules -prune -o -name package.json -execdir npm install \;
Explanation: starting from the root directory, prune (exclude) the node_modules folders (even inside nested folders), find each directory that has a package.json file, then run the npm install command there.
In case you just want to run it on specific folders (e.g. the abc123 and def456 folders), run as below:
find ./abc123/* ./def456/* -name node_modules -prune -o -name package.json -execdir npm install \;
To run npm install in every subdirectory you can do something like:

"scripts": {
  ...
  "install:all": "for D in */; do npm install --prefix \"${D}\"; done"
}

where

install:all is just the name of the script; you can name it whatever you please
D is the name of the directory at the current iteration
*/ specifies where you want to look for subdirectories. directory/*/ will list all directories inside directory/, and directory/*/*/ will list all directories two levels in.
npm install --prefix installs all dependencies in the given folder (note: --prefix is npm's flag for this; yarn's equivalent is --cwd)

You could also run several commands, for example:

for D in */; do echo "Installing stuff on ${D}" && npm install --prefix "${D}"; done

will print "Installing stuff on your_subfolder/" on every iteration.
This works for yarn too (using yarn install --cwd "${D}").
Any language that can get a list of directories and run shell commands can do this for you.
I know it isn't exactly the answer the OP was going for, but it's one that will always work. You need to create an array of the subdirectory names, then loop over them and run npm i, or whatever command you need to run.
For reference, I tried npm i **/, which just installed the modules from all the subdirectories in the parent. It's unintuitive as hell, but needless to say it's not the solution you need.
I have an application that has some development-specific debugging code in it. Currently, all development code is guarded by a variable called dev at the top of the file. Here's an example of what my app does:
var dev = true;

if (dev) {
  console.log("Hello developer");
} else {
  console.log("Hello production");
}
When I go to deploy my application, I have to manually change the dev variable from true to false. This sucks.
I'm in the middle of migrating from hand-rolled builds to gulp.js and I want to solve this development vs. production build problem cleanly. I'm thinking about the following:
// Inside main.js
var dev = require('./isdev');
if (dev) //...

// Inside isdev.js:
module.exports = true;
Now, when I build for production, instead of manually setting the dev flag to false, I want isdev.js to be replaced so that module.exports = true; becomes module.exports = false;. My specific question is: how do I automate gulp such that gulp development produces a file with dev = true, and gulp production produces a file with dev = false?
Here's an update for those who are curious.
First, I have an options.js:
exports.dev = false;
I also have a options_dev.js:
exports.dev = true;
Inside of gulpfile.js, I have the following code that parses the input arguments (parseArgs is assumed here to be the minimist package's exported function):

// Parse the arguments. Use `gulp --prod` to build a production extension
var parseArgs = require('minimist'); // assumed: any argv parser with this shape works
var argv = parseArgs(process.argv.slice(2));
var dev = !argv['prod']; // Whether to build a development extension or not
Finally, when I pipe to browserify, I have the following:
var resolve = require('browser-resolve');

// ...
.pipe(browserify({
  debug: dev,
  resolve: function (pkg, opts) {
    // Replace options.js with options_dev.js if this is a dev build
    if (dev) {
      opts.modules['./options'] = 'src/options_dev.js';
    }
    return resolve.apply(this, arguments);
  }
}))
The magic happens by using a custom resolve function, dynamically swapping ./options with options_dev for development builds. The browserify docs say:
You can give browserify a custom opts.resolve() function or by default it uses browser-resolve.
When we run gulp, we build a development version. When we run gulp --prod, we build a production version. The value of require('./options').dev allows us to dynamically change things like server endpoints, etc. Cool!
The way that I've seen this done is to set the environment variable on the command line before the execution command. An example of doing this with the Node.js CLI (in a bash-like environment) would be:
ENV=dev node
> process.env.ENV
'dev'
Then in your code, you could do:
var dev = process.env.ENV === 'dev'
So with gulp, you could use:
ENV=dev gulp <task name>
I tested this out with the following snippet, and it works:
gulp.task('dev', function () {
  if (process.env.ENV === 'dev')
    console.log("IT WORKED");
  else
    console.log("NO DICE");
});
Edit:
You can write out the environment to the file isdev right before building:
var fs = require('fs');

gulp.task('build', function () {
  if (process.env.ENV === 'dev')
    fs.writeFileSync('isdev', 'module.exports = true');
  else
    fs.writeFileSync('isdev', 'module.exports = false');
  // kick off build
});
Now, the correct value will be present in isdev for any require call in the built bundle. You could extend this to other specified environments as well (or to other configuration flags).
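For instance, a sketch of extending the same idea to more than one flag (the flag names and file name here are made up for illustration):

var fs = require('fs');

// derive every flag from the ENV variable set on the command line,
// then write them all out as a single config module before building
var env = process.env.ENV || 'production';
fs.writeFileSync('config.js', 'module.exports = ' + JSON.stringify({
  dev: env === 'dev',
  staging: env === 'staging'
}) + ';');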