I want to create a hash at build time and set it as an environment variable. It should be accessible by Node.
First I wrote a bash script, exported the environment variable in the script, and sourced it in the package.json.
The problem is that node doesn't know the source command.
Now I have rewritten the script in TypeScript (since the whole project uses TS, not JS).
In the script I set the variable as follows:
process.env.VARIABLE = hashFunction(path);
The function is called through a script in package.json
"hash": "ts-node path/to/script.ts"
The function works as it should, but the environment variable is not set. Can someone help me resolve this? Is it possible to return the string from the script and set the variable outside of it?
If possible I'd like to avoid using an external package.
Thank you :)
Update:
I used a bash script, but it would work the same way with a TypeScript script. For bash, the console.log is replaced with echo.
script.ts
console.log("2301293232") // The hash created by the script
package.json
"scripts": {
"build": "yarn run hash react-scripts build", // omit &&
"hash": "ENV_VAR=$(ts-node script.ts)"
}
So I did the following:
The script writes the checksum to the console/standard output. I capture it there and set the printed value as an environment variable in the package.json file. This works as long as it is the same process that starts the build.
That is why neither
"scripts": {
"build": "yarn run hash && react-scripts build"
}
nor
"scripts": {
"build": "react-scripts build",
"prebuild": "ENV_VAR=$(ts-node script.ts)"
}
will work. In both examples a new process is started and the environment variable is lost when it exits.
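For reference, script.ts above just prints a hardcoded value. An actual checksum could be computed with Node's built-in crypto module, roughly like this (a minimal sketch; the hashed file path and the algorithm are assumptions, not part of the original setup):
// script.ts: print a hash of a file to stdout so the shell can capture it
import { createHash } from "crypto";
import { readFileSync } from "fs";

const hashFunction = (filePath: string): string =>
  createHash("sha256").update(readFileSync(filePath)).digest("hex");

// whatever is printed here becomes the value of $(ts-node script.ts) in package.json
console.log(hashFunction("./src/index.ts"));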
Can't (easily) change environment variables for parent process
You can change/set the environment for the currently running process. That means that when ts-node runs your program, you are changing the environment variables for your script and for ts-node.
After your script is finished running, ts-node stops, and the environment changes are lost. They don't get passed back to the shell.
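To illustrate (a small sketch; the variable name is just an example):
// inside the ts-node process the variable is visible...
process.env.MY_HASH = "abc123";
console.log(process.env.MY_HASH); // prints "abc123"
// ...but once ts-node exits, the shell that launched it is unchanged:
// $ echo "$MY_HASH"    -> prints an empty line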
Changing another process's environment
Changing the environment variables for the parent process (the shell) is a much more complicated process and depends on your OS and on having the correct permissions. For Linux, one such technique is listed here. On Windows, you can find some hints by looking at this question.
Other options
Your other option might be to just return a string that your shell understands, and run that.
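For example, if the script printed an export statement instead of the bare value, the calling shell could evaluate its output (a sketch; this assumes a POSIX shell):
# script.ts would print something like: export ENV_VAR=2301293232
eval "$(ts-node script.ts)"
echo "$ENV_VAR"    # now set in the current shell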
Related
Our team has built a small CLI used for maintenance. The package.json specifies a path with the bin property, and everything works great: "bin": { "eddy": "./dist/src/cli/entry.js" }
Autocompletion is achieved by using yargs#17.0.1. However, we recently converted the project to use ES6 modules because of a migration to SvelteKit, i.e. the package.json now contains type: module. Because of this, the CLI now only works if we run it like this:
What works:
node --experimental-specifier-resolution=node ./dist/src/cli/entry.js help
However, if we run this without the flag, we get an error "module not found":
Error [ERR_MODULE_NOT_FOUND]: Cannot find module...
So the question is
Can we somehow "always" add the experimental-specifier-resolution=node to the CLI - so we can continue to use the shorthand eddy, and utilize auto completion?
There are two possible solutions here.
Solution 1
Your entry.js file should start with a shebang like #!/usr/bin/env node. You cannot pass the flag to env in this form; however, if you provide the absolute path to node directly in the shebang, you can specify the flag.
Assuming you have node installed in /usr/bin/node, you can write the shebang in entry.js like:
#!/usr/bin/node --experimental-specifier-resolution=node
(Use which node to find the absolute path)
However, this is not a very portable solution. You cannot assume everyone has node installed at the same path. Also, some may use nvm to manage versions and have multiple versions in different paths. This is the reason we use /usr/bin/env to find the node installation in the first place, which leads to the second solution.
Solution 2
You can create a shell script that in turn calls the CLI entry point with the required flags. This shell script can be specified in the package.json bin section.
The shell script (entry.sh) should look like:
#!/usr/bin/env bash
/usr/bin/env node --experimental-specifier-resolution=node ./entry.js "$@"
Then, in your package.json, replace bin with:
"bin": { "eddy": "./dist/src/cli/entry.sh"}
So when you run eddy, it will run entry.js using node with the required flag. The "$@" in the command will be replaced by any arguments that you pass to eddy.
So eddy help will translate to /usr/bin/env node --experimental-specifier-resolution=node ./entry.js help
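One caveat, added here and not part of the answer above: ./entry.js is resolved relative to the current working directory, so the wrapper breaks when eddy is invoked from another directory. A common fix is to resolve the path relative to the script itself, roughly:
#!/usr/bin/env bash
# entry.sh: locate entry.js next to this script instead of relative to the cwd
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
/usr/bin/env node --experimental-specifier-resolution=node "$DIR/entry.js" "$@"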
Just add a script to your package.json. Assuming index.js is your entry point and package.json is in the same directory:
{
"scripts": {
"start": "node --experimental-specifier-resolution=node index.js"
}
}
Then you can just run on your console:
npm start
I'm writing an app that is composed of microservices (I use micro).
I really like ES6, so I use Babel to make the development process easier. The problem I have is that I need a script that compiles my ES6 code and restarts the 'server', and I don't know how to achieve this.
Right now I have the following script in my package.json:
"scripts": {
"start": "yarn run build && micro",
"build": "./node_modules/.bin/babel src --out-dir lib"
},
When I run yarn start, my ES6 code compiles successfully and micro starts the server. However, if I make changes to my code, I have to manually stop the server and run yarn start again.
I've tried to change my build script
"build": "./node_modules/.bin/babel src --watch --out-dir lib"
But in this case the micro command never gets executed, since the build script just watches for changes and blocks anything else from running. My goal is to have a script that watches for changes and restarts the server when a change occurs (compiling the code beforehand), like in Meteor.
One option is to use the parallelshell module to run shell commands in parallel. You can find an example of how to use it here.
The simplest solution would be to yarn run build & micro (note the single & and not &&).
As mentioned by others, parallelshell is another good hack (probably more robust than &).
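As a rough sketch of how the scripts could be wired together (the dev script name is an assumption, and micro may start before the watcher finishes its first compile):
"scripts": {
  "build": "babel src --out-dir lib",
  "watch": "babel src --watch --out-dir lib",
  "dev": "yarn run watch & micro"
}
With parallelshell, the dev line would instead look something like parallelshell 'yarn run watch' 'micro'.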
I just installed ESLint and I can successfully run it by doing this at the terminal window:
./node_modules/.bin/eslint app
Note: app is the root folder I want lint to inspect.
I then put that exact command in my package.json:
"scripts": {
"lint": "./node_modules/.bin/eslint app"
}
I expected to be able to run it in the terminal window like this:
npm run lint
Note: I know how to fix the no-undef error. My question is about the many error lines after that.
It actually works, but it also produces a bunch of errors after showing me the correct output:
Why is that happening?
This is the default way of how the npm script runner handles script errors (i.e. non-zero exit codes). This will always happen, even if you only run a script like exit 1.
I'm not sure why this feature exists; it seems annoying and useless in most cases.
If you don't want to see this, you can add || true at the end of your script.
Example:
lint: "eslint app || true"
As you might've noticed, I've omitted the path to the eslint binary. The npm script runner already adds node_modules/.bin to the PATH when running scripts, so there is no need to use the full path.
document is a global, so ESLint thinks you are missing an import somewhere. For those cases, you can adapt your config so that the error is not reported, something like this:
module.exports = {
"globals": {
"document": true
}
}
This should be saved as .eslintrc.js at the same level as your package.json.
I noticed this strange behavior which is not a big deal, but bugging the heck out of me.
In my package.json file, under the "scripts" section, I have a "start" entry. It looks like this:
"scripts": {
"start": "APPLICATION_ENV=development nodemon app.js"
}
Typing npm start in a Mac terminal works fine, and nodemon runs the app with the correct APPLICATION_ENV variable as expected. When I try the same in a Windows environment, I get the following error:
"'APPLICATION_ENV' is not recognized as an internal or external command, operable program or batch file."
I have tried the git-bash shell and the normal Win CMD prompt, same deal.
I find this odd, because typing the command directly into the terminal (not going through the package.json script via npm start) works fine.
Has anyone else seen this and found a solution? Thanks!!
For cross-platform usage of environment variables in your scripts, install and use cross-env.
"scripts": {
"start": "cross-env APPLICATION_ENV=development nodemon app.js"
}
The issue is explained well at the link provided to cross-env. It reads:
Most Windows command prompts will choke when you set environment variables with NODE_ENV=production like that. (The exception is Bash on Windows, which uses native Bash.) Similarly, there's a difference in how Windows and POSIX commands utilize environment variables. With POSIX, you use $ENV_VAR and on Windows you use %ENV_VAR%.
I ended up using the dotenv package based on the 2nd answer here:
Node.js: Setting Environment Variables
I like this because it allows me to set up environment variables without having to inject extra text into my npm script lines. Instead, they live in a .env file (which should be created on each environment and omitted from version control).
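A minimal sketch of that approach (the variable name just mirrors the one from the question):
// .env (one per environment, kept out of version control):
//   APPLICATION_ENV=development

// at the very top of app.js
require('dotenv').config();

console.log(process.env.APPLICATION_ENV); // "development"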
You should use the "set" command to set environment variables on Windows.
"scripts": {
"start": "set APPLICATION_ENV=development && nodemon app.js"
}
Something like this.
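One caveat worth adding (not part of this answer): with set, the space before && becomes part of the value, so APPLICATION_ENV ends up as "development " with a trailing space. Dropping the space avoids that:
"start": "set APPLICATION_ENV=development&& nodemon app.js"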
When deploying to Modulus.io (this probably applies to other PaaS as well), they will install the required packages from the package.json file. As part of the install process, some npm scripts might be called as well, for example postinstall. However, these scripts might not be able to run (or should not run) in production, be it because the scripts are only available locally or because they make no sense in production.
How can I detect the environment and execute or not execute certain npm scripts? Can I access the process.env object and handle the scripts appropriately, or is there a better way?
Unfortunately, you can't define a script in your package.json for a specific environment only.
Let's say you have a postinstall script declared like this in package.json:
{
"scripts": {
"postinstall": "node postInstall.js"
}
}
The "easy" way would be to add your logic regarding the environment in this postInstall.js script:
if (process.env.NODE_ENV === 'production') {
// skip in production; exit cleanly so the install itself does not fail
process.exit(0);
}
If you're running in the production environment, this just instructs Node.js to terminate the process as quickly as possible with the specified exit code (0 here, so npm does not treat the postinstall hook as failed).
If you're running multiple scripts in the postinstall hook, you could also move all of them into a single wrapper that uses the same mechanism: exit early in certain environments, otherwise execute all the other scripts.
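A sketch of such a wrapper (the individual script names are assumptions):
// postInstall.js: single entry point for all post-install steps
const { execSync } = require('child_process');

if (process.env.NODE_ENV === 'production') {
  // nothing to do in production; exit cleanly so the install succeeds
  process.exit(0);
}

// local/dev-only setup steps
execSync('node scripts/generateFixtures.js', { stdio: 'inherit' });
execSync('node scripts/setupGitHooks.js', { stdio: 'inherit' });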
Another approach, if you're always running on Unix systems, is to check the Node.js environment directly using a Bash condition:
{
"scripts": {
"postinstall": "[ \"$NODE_ENV\" = production ] || node postInstall.js"
}
}
In this case, if the Node environment is not production, your postInstall.js script runs (and the hook exits successfully either way). You can adjust the condition to other cases, such as running only in development.