babel-preset-react-app not picking up environment variables - node.js

Failed to compile ./src/index.js Module build failed: Error: Using
babel-preset-react-app requires that you specify NODE_ENV or
BABEL_ENV environment variables. Valid values are "development",
"test", and "production". Instead, received: "undefined". (While
processing
preset:"C:\Users\mitch\OneDrive\Development\Git\react-seed\node_modules\babel-preset-react-app\index.js")
at Array.map (native)
I keep getting the above error no matter how many ways I try to set the environment or the environment variables since updating my React app, and I even get it on a fresh app created with the 'create-react-app' scripts. What am I doing wrong?

I do it in my package.json:
{
  "scripts": {
    "build": "NODE_ENV=development babel src -d lib",
    "build-prod": "NODE_ENV=production babel src -d lib"
  }
}
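Note that the inline NODE_ENV=... prefix only works in POSIX shells; on Windows cmd the command fails outright. A sketch of the same scripts using cross-env (which later answers here also recommend), assuming it is installed as a devDependency:
{
  "scripts": {
    "build": "cross-env NODE_ENV=development babel src -d lib",
    "build-prod": "cross-env NODE_ENV=production babel src -d lib"
  }
}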

Using babel-preset-react-app requires that you specify NODE_ENV or BABEL_ENV environment variables. Valid values are "development", "test", and "production". Instead, received: "development "
Ran into a similar problem and noticed that a space character messed things up with this configuration:
SET NODE_ENV=development && node server/bootstrap.js
//Changed to this:
SET NODE_ENV=development&&node server/bootstrap.js
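To see why the space matters: cmd includes everything between the = and the && in the value, trailing space included. A quick check (a sketch; it prints the raw value, quotes and all):
SET NODE_ENV=development && node -e "console.log(JSON.stringify(process.env.NODE_ENV))"
"development "
SET NODE_ENV=development&&node -e "console.log(JSON.stringify(process.env.NODE_ENV))"
"development"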

{
  "scripts": {
    "build": "export NODE_ENV=development && babel src -d lib",
    "build-prod": "export NODE_ENV=production && babel src -d lib"
  }
}


How to set multiple environment variables in a single line on Windows?

I'm working on a Node.js project and using Jest as the test framework. This project runs on Windows as it happens, and I'm having a heck of a time setting more than one environment variable on the command line.
Here's the relevant line in package.json
"scripts": {
"test": "SET NODE_ENV=test & SET DB_URI=postgresql://<database stuff>> & jest -t Suite1 --watch --verbose false"
},
As can be seen above, I'm setting both a NODE_ENV and DB_URI environment variable prior to running jest via npm run test.
My problem is that the DB_URI environment variable doesn't appear to be set when jest runs. The error I get back from jest makes it obvious it can't find it. I do know that the first variable, NODE_ENV, is set OK, but I'm not sure what's wrong with the second one. Did I get the syntax wrong somehow? Is anyone with jest experience on Windows doing something similar to what I'm trying?
Just make the following change:
Use && instead of a single &, and remove the whitespace before and after the "&&":
"scripts": {
"test": "SET NODE_ENV=test&&SET DB_URI=postgresql://<database stuff>>&&jest -t Suite1 --watch --verbose false"
},
I'd suggest you add cross-env. It should be able to set multiple environment variables on both Windows and POSIX:
package.json
{
  // ...
  "scripts": {
    "test": "cross-env NODE_ENV=test DB_URI=postgresql://<database stuff>> jest -t Suite1 --watch --verbose false"
  },
  "devDependencies": {
    "cross-env": "^6.0.0"
  }
}
If you are setting the environment variables from PowerShell, you can do it like so:
cmd /K "set f=df & echo %f%"
The output will be "df"
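In PowerShell itself (without dropping into cmd), environment variables are set with the $env: prefix; a minimal sketch:
$env:NODE_ENV = "test"
$env:DB_URI = "postgresql://<database stuff>"
npm run test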

Passing environment variable to pm2 is not working

I have two Node.js APIs using Babel, and these are the package.json commands I use to run the application:
"build": "del-cli dist/ && babel src -d dist --copy-files",
"serve": "cross-env NODE_ENV=production node dist/index.js",
"start:noupdate": "cross-env NODE_ENV=development babel-node src/index.js",
"start:serve": "cross-env NODE_ENV=production node dist/index.js",
I have two domains: one is https://api.website1.com.br and the other is https://website2.com.br/api.
They use the same env file names, .env.production and .env.development, but with different data for each database.
When I run "yarn build", Linux executes this command:
"build": "del-cli dist/ && babel src -d dist --copy-files",
This works fine. To put it in production mode on my real web servers, I go to the project folder and run this command to bring the app online with PM2:
pm2 start npm -- run-script start:serve NODE_ENV=production
That will make this command work:
"cross-env NODE_ENV=production node dist/index.js"
The app runs just fine, but I have a problem: PM2 only runs one app and doesn't create a new one, it just restarts the one I already started.
For example, if I go to the folder for https://api.website1.com.br and run the command there first, it starts; but when I go to the other project and run it, PM2 doesn't start a new app, it reloads the one I started earlier instead of creating a new one. What am I doing wrong?
I managed to make this work using a pm2 ecosystem file, which I found in the documentation at http://pm2.keymetrics.io/docs/usage/application-declaration/
I configured the default file and gave my app a name:
module.exports = {
  apps: [{
    name: "app",
    script: "./app.js",
    env: {
      NODE_ENV: "development",
    },
    env_production: {
      NODE_ENV: "production",
    }
  }]
}
Then I use the command pm2 start ecosystem.config.js and now it is working. I'm posting this here in case someone has the same problem.
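The restart-instead-of-new-app behaviour likely happens because pm2 identifies processes by name, and pm2 start npm gives both projects the same default name ("npm"). A sketch of an ecosystem file for the second project with its own name (names and paths here are illustrative, not from the original):
// ecosystem.config.js for the second project
module.exports = {
  apps: [{
    name: "api-website2",            // unique name so pm2 treats it as a separate app
    script: "./dist/index.js",
    env: { NODE_ENV: "development" },
    env_production: { NODE_ENV: "production" }
  }]
}
Start it and apply the env_production block with pm2 start ecosystem.config.js --env production.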

Webpack --watch and launching nodemon?

Thanks to an excellent answer by @McMath I now have webpack compiling both my client and my server. I'm now on to trying to make webpack --watch be useful. Ideally I'd like to have it spawn something like nodemon for my server process when that bundle changes, and some flavor of browsersync for when my client changes.
I realize it's a bundler/loader and not really a task runner, but is there some way to accomplish this? A lack of Google results seems to indicate I'm trying something new, but this must have been done already...
I can always have webpack output to another directory and use gulp to watch it/copy it/browsersync-ify it, but that seems like a hack. Is there a better way?
Install the following dependencies:
npm install npm-run-all webpack nodemon
Configure your package.json file to something as seen below:
package.json
{
  ...
  "scripts": {
    "start": "npm-run-all --parallel watch:server watch:build",
    "watch:build": "webpack --watch",
    "watch:server": "nodemon \"./dist/index.js\" --watch \"./dist\""
  },
  ...
}
After doing so, you can easily run your project by using npm start.
Don't forget to configure WatchIgnorePlugin so webpack ignores the ./dist folder.
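A minimal sketch of that plugin configuration (the option shape shown is webpack 5's; webpack 4 takes a plain array of paths instead of { paths: [...] }):
// webpack.config.js
const path = require('path')
const webpack = require('webpack')

module.exports = {
  // ...
  plugins: [
    new webpack.WatchIgnorePlugin({ paths: [path.resolve(__dirname, 'dist')] })
  ]
}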
Dependencies
npm-run-all - A CLI tool to run multiple npm-scripts in parallel or sequential.
webpack - webpack is a module bundler. Its main purpose is to bundle JavaScript files for usage in a browser, yet it is also capable of transforming, bundling, or packaging just about any resource or asset.
nodemon - Simple monitor script for use during development of a node.js app.
I faced the same problem and found the following solution: webpack-shell-plugin.
It
allows you to run any shell commands before or after webpack builds
So, these are my scripts in package.json:
"scripts": {
"clean": "rimraf build",
"prestart": "npm run clean",
"start": "webpack --config webpack.client.config.js",
"poststart": "webpack --watch --config webpack.server.config.js",
}
If I run the 'start' script it launches the following script sequence: clean -> start -> poststart.
And here is part of 'webpack.server.config.js':
var WebpackShellPlugin = require('webpack-shell-plugin');
...
if (process.env.NODE_ENV !== 'production') {
  config.plugins.push(new WebpackShellPlugin({ onBuildEnd: ['nodemon build/server.js --watch build'] }));
}
...
"onBuildEnd" event fires only once after first build, rebuilds are not trigger "onBuildEnd", so nodemon works as intended
I like the simplicity of nodemon-webpack-plugin
webpack.config.js
const NodemonPlugin = require('nodemon-webpack-plugin')
module.exports = {
  plugins: [new NodemonPlugin()]
}
Then just run webpack with the watch flag:
webpack --watch
In addition to @Ling's good answer:
If you want to build your project once, before you watch it with nodemon, you can use a webpack compiler hook. The plugin's code triggers nodemon in the done hook once after webpack has finished its compilation (see also this helpful post).
const { spawn } = require("child_process")

function OnFirstBuildDonePlugin() {
  let isInitialBuild = true
  return {
    apply: compiler => {
      compiler.hooks.done.tap("OnFirstBuildDonePlugin", compilation => {
        if (isInitialBuild) {
          isInitialBuild = false
          spawn("nodemon dist/index.js --watch dist", {
            stdio: "inherit",
            shell: true
          })
        }
      })
    }
  }
}
webpack.config.js:
module.exports = {
  ...
  plugins: [
    ...
    OnFirstBuildDonePlugin()
  ]
}
package.json:
"scripts": {
"dev" : "webpack --watch"
},
Hope it helps.
There's no need to use plugins here. You could try running multiple nodemon instances like below. Try modifying the following script for your use case, and see if it works for you:
"scripts": {
"start": "nodemon --ignore './public/' ./bin/www & nodemon --ignore './public/' --exec 'yarn webpack'",
"webpack": "webpack --config frontend/webpack.config.js"
}
You don't need any plugins to use webpack and nodemon; just use these scripts in your package.json:
"scripts": {
"start": "nodemon --ignore './client/dist' -e js,ejs,html,css --exec 'npm run watch'",
"watch": "npm run build && node ./server/index.js",
"build": "rimraf ./client/dist && webpack --bail --progress --profile"
},
@Ling's answer is very close to being correct, but it errors the first time somebody runs watch. You'll need to modify the solution as follows to prevent errors.
Run npm install npm-run-all webpack nodemon
Create a file called watch-shim.js in your root. Add the following contents, which will create a dummy file and directory if they're missing.
var fs = require('fs');

// Create a dummy dist directory and bundle so nodemon has something to watch on the first run
if (!fs.existsSync('./dist')) {
  fs.mkdirSync('./dist');
  fs.writeFileSync('./dist/bundle.js', '');
}
Set up your scripts as follows in package.json. This will only run the watchers if watch-shim.js runs successfully, thereby preventing nodemon from crashing due to missing files on the first run.
{
  ...
  "scripts": {
    "start": "npm run watch",
    "watch": "node watch-shim.js && npm-run-all --parallel watch:server watch:build",
    "watch:build": "webpack --progress --colors --watch",
    "watch:server": "nodemon \"./dist/bundle.js\" --watch \"./dist/*\""
  }
  ...
}
Assuming you run nodemon server.js, touch the server.js file in the afterEmit hook:
// webpack.config.js
module.exports = {
  // ...
  plugins: [
    // ...,
    // 👇
    {
      apply: (compiler) => {
        compiler.hooks.afterEmit.tap('AfterEmitPlugin', (compilation) => {
          require('child_process').execSync('touch server.js') // $ touch server.js
        });
      }
    }
  ]
}
I tried most of the solutions provided above. I believe the best one is to use nodemon-webpack-plugin.
It is very simple to use, i.e. just add
const NodemonPlugin = require('nodemon-webpack-plugin')
to your webpack file with
new NodemonPlugin() as your plugin.
Below are the scripts to use it:
"scripts": {
"watch:webpack-build-dev": "webpack --watch --mode development",
"clean-db": "rm -rf ./db && mkdir -p ./db",
"local-dev": "npm run clean-db && npm run watch:webpack-build-dev"
...
}
After this you can simply run npm run local-dev.
Adding a module to devDependencies is usually not as bad as adding one to production dependencies, and mostly you will be using it for development anyway.
This also doesn't require any additional packages like nodemon or npm-run-all.
Note that nodemon-webpack-plugin only works in watch mode.

call gulp task with process.env.NODE_ENV argument

Can I set process.env.NODE_ENV when the task is called?
I wrote in package.json:
"scripts": {
"clean" : "gulp clean:build --env production"
}
Also I tried --NODE_ENV=production but it doesn't work. process.env.NODE_ENV is undefined.
I believe you can just set the variable like in the Windows command prompt:
set NODE_ENV=production
So for your package.json script, something like this:
"scripts": {
"clean" : "set NODE_ENV=production && gulp clean:build --env production"
}
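However the variable gets set, the gulp task can read it from process.env. A minimal sketch, assuming gulp 4 (task name illustrative); it trims the value because the Windows "set X=val && ..." form can leave a trailing space, as noted in earlier answers:
// gulpfile.js
const { series } = require('gulp');

function clean(cb) {
  // trim() guards against the trailing space left by "set NODE_ENV=production && ..."
  const isProduction = (process.env.NODE_ENV || '').trim() === 'production';
  console.log('Cleaning build for', isProduction ? 'production' : 'development');
  cb();
}

exports['clean:build'] = series(clean);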
In WebStorm I can set the environment in the settings menu, but I wonder how to write the environment in package.json scripts.

How to set environment variables from within package.json?

How can I set some environment variables from within package.json, to be used with commands like npm start?
Here's what I currently have in my package.json:
{
  ...
  "scripts": {
    "help": "tagove help",
    "start": "tagove start"
  }
  ...
}
I want to set environment variables (like NODE_ENV) in the start script while still being able to start the app with just one command, npm start.
Set the environment variable in the script command:
...
"scripts": {
  "start": "node app.js",
  "test": "NODE_ENV=test mocha --reporter spec"
},
...
Then use process.env.NODE_ENV in your app.
Note: This is for Mac & Linux only. For Windows refer to the comments.
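For illustration, a minimal app.js that reads the variable set by the test script above (a sketch, not from the original answer):
// app.js
const env = process.env.NODE_ENV || 'development';
console.log('Running with NODE_ENV =', env);

if (env === 'test') {
  // e.g. point at a test database, silence logging, etc.
}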
Just use the npm package cross-env. Super easy. Works on Windows, Linux, and all environments. Notice that you don't use && to move to the next task. You just set the env and then start the next task. Credit to @mikekidder for the suggestion in one of the comments here.
From the documentation:
{
  "scripts": {
    "build": "cross-env NODE_ENV=production OTHERFLAG=myValue webpack --config build/webpack.config.js"
  }
}
Notice that if you want to set multiple global vars, you just state them in succession, followed by your command to be executed.
Ultimately, the command that is executed (using spawn) is:
webpack --config build/webpack.config.js
The NODE_ENV environment variable will be set by cross-env
I just wanted to add my two cents here for future Node explorers. On my Ubuntu 14.04, NODE_ENV=test didn't work; I had to use export NODE_ENV=test, after which NODE_ENV=test started working too. Weird.
On Windows, as has been said, you have to use set NODE_ENV=test, but for a cross-platform solution the cross-env library didn't seem to do the trick, and do you really need a library to do this:
export NODE_ENV=test || set NODE_ENV=test&& yadda yadda
The vertical bars are needed as otherwise Windows would crash on the unrecognized export NODE_ENV command. I don't know about the trailing space, but just to be sure I removed them too.
Because I often find myself working with multiple environment variables, I find it useful to keep them in a separate .env file (make sure to exclude it from your source control). Then (in Linux) prepend export $(cat .env | xargs) && to your script command before starting your app.
Example .env file:
VAR_A=Hello World
VAR_B=format the .env file like this with new vars separated by a line break
Example index.js:
console.log('Test', process.env.VAR_A, process.env.VAR_B);
Example package.json:
{
  ...
  "scripts": {
    "start": "node index.js",
    "env-linux": "export $(cat .env | xargs) && env",
    "start-linux": "export $(cat .env | xargs) && npm start",
    "env-windows": "(for /F \"tokens=*\" %i in (.env) do set %i)",
    "start-windows": "(for /F \"tokens=*\" %i in (.env) do set %i) && npm start"
  }
  ...
}
Unfortunately I can't seem to set the environment variables by calling a script from a script -- like "start-windows": "npm run env-windows && npm start" -- so there is some redundancy in the scripts.
For a test you can see the env variables by running npm run env-linux or npm run env-windows, and test that they make it into your app by running npm run start-linux or npm run start-windows.
Try this on Windows by replacing YOURENV:
{
  ...
  "scripts": {
    "help": "set NODE_ENV=YOURENV && tagove help",
    "start": "set NODE_ENV=YOURENV && tagove start"
  }
  ...
}
@Luke's answer was almost the one I needed! Thanks.
As the selected answer is very straightforward (and correct) but old, I would like to offer an alternative for importing variables from a separate .env file when running your scripts, fixing some limitations of Luke's answer.
Try this:
::: .env file :::
# This way, you CAN use comments in your .env files
NODE_PATH="src/"
# You can also have extra/empty lines in it
SASS_PATH="node_modules:src/styles"
Then, in your package.json, you create a script that sets the variables, and run it before the scripts that need them:
::: package.json :::
"scripts": {
  "set-env": "export $(cat .env | grep \"^[^#;]\" | xargs)",
  "storybook": "npm run set-env && start-storybook -s public"
}
Some observations:
The regular expression in the grep'ed cat command will clear the comments and empty lines.
The && doesn't need to be "glued" to npm run set-env, as would be required if you were setting the variables in the same command.
If you are using yarn, you may see a warning; you can either change it to yarn set-env or use npm run set-env --scripts-prepend-node-path && instead.
Different environments
Another advantage when using it is that you can have different environment variables.
"scripts": {
  "set-env:production": "export $(cat .production.env | grep \"^[^#;]\" | xargs)",
  "set-env:development": "export $(cat .env | grep \"^[^#;]\" | xargs)"
}
Please, remember not to add .env files to your git repository when you have keys, passwords or sensitive/personal data in them!
UPDATE: This solution may break in npm v7 due to npm RFC 21
CAVEAT: no idea if this works with yarn
npm (and yarn) passes a lot of data from package.json into scripts as environment variables. Use npm run env to see them all. This is documented in https://docs.npmjs.com/misc/scripts#environment and is not only for "lifecycle" scripts like prepublish but also for any script executed by npm run.
You can access these inside code (e.g. process.env.npm_package_config_port in JS) but they're already available to the shell running the scripts so you can also access them as $npm_... expansions in the "scripts" (unix syntax, might not work on windows?).
The "config" section seems intended for this use:
"name": "myproject",
...
"config": {
"port": "8010"
},
"scripts": {
"start": "node server.js $npm_package_config_port",
"test": "wait-on http://localhost:$npm_package_config_port/ && node test.js http://localhost:$npm_package_config_port/"
}
An important quality of these "config" fields is that users can override them without modifying package.json!
$ npm run start
> myproject#0.0.0 start /home/cben/mydir
> node server.js $npm_package_config_port
Serving on localhost:8010
$ npm config set myproject:port 8020
$ git diff package.json # no change!
$ cat ~/.npmrc
myproject:port=8020
$ npm run start
> myproject#0.0.0 start /home/cben/mydir
> node server.js $npm_package_config_port
Serving on localhost:8020
See npm config and yarn config docs.
It appears that yarn reads ~/.npmrc so npm config set affects both, but yarn config set writes to ~/.yarnrc, so only yarn will see it :-(
For a larger set of environment variables or when you want to reuse them you can use env-cmd.
As a plus, the .env file would also work with direnv.
./.env file:
# This is a comment
ENV1=THANKS
ENV2=FOR ALL
ENV3=THE FISH
./package.json:
{
  "scripts": {
    "test": "env-cmd mocha -R spec"
  }
}
This will work in Windows console:
"scripts": {
  "setAndStart": "set TMP=test&& node index.js",
  "otherScriptCmd": "echo %TMP%"
}
npm run setAndStart
output:
test
(index.js is assumed to log process.env.TMP to produce that output.) See this answer for details.
I suddenly found that actionhero uses the following code, which solved my problem by just passing --NODE_ENV=production as an option in the start script command.
// argv comes from actionhero's command-line argument parser
if (argv['NODE_ENV'] != null) {
  api.env = argv['NODE_ENV'];
} else if (process.env.NODE_ENV != null) {
  api.env = process.env.NODE_ENV;
}
I would really appreciate being able to accept the answer of someone who knows a better way to set environment variables in package.json or an init script or similar, where the app is bootstrapped by someone else.
Use Git Bash on Windows. Git Bash processes commands differently than cmd:
Most Windows command prompts will choke when you set environment variables with NODE_ENV=production like that. (The exception is Bash on Windows, which uses native Bash.) Similarly, there's a difference in how Windows and POSIX commands utilize environment variables. With POSIX, you use $ENV_VAR and on Windows you use %ENV_VAR%. - cross-env doc
{
  ...
  "scripts": {
    "help": "tagove help",
    "start": "env NODE_ENV=production tagove start"
  }
  ...
}
Use the dotenv package to declare the env variables.
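A minimal sketch of that approach (file contents are illustrative). Example .env file:
NODE_ENV=production
PORT=8000
Example app.js:
// load the .env file before anything else reads process.env
require('dotenv').config();
console.log(process.env.NODE_ENV, process.env.PORT);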
For a single environment variable:
"scripts": {
  "start": "set NODE_ENV=production&& node server.js"
}
For multiple environment variables:
"scripts": {
  "start": "set NODE_ENV=production&& set PORT=8000&& node server.js"
}
When the NODE_ENV environment variable is set to 'production' all devDependencies in your package.json file will be completely ignored when running npm install. You can also enforce this with a --production flag:
npm install --production
For setting NODE_ENV you can use any of these methods
method 1: set NODE_ENV for all node apps
Windows :
set NODE_ENV=production
Linux, macOS or other Unix-based systems:
export NODE_ENV=production
This sets NODE_ENV for current bash session thus any apps started after this statement will have NODE_ENV set to production.
method 2: set NODE_ENV for current app
NODE_ENV=production node app.js
This will set NODE_ENV for the current app only. This helps when we want to test our apps on different environments.
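A quick way to confirm the variable is visible to that one process (a sketch):
NODE_ENV=production node -e "console.log(process.env.NODE_ENV)"
# prints: production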
method 3: create .env file and use it
This uses the idea explained here. Refer to this post for a more detailed explanation.
Basically, you create a .env file and run some bash scripts to set the variables in the environment.
To avoid writing a bash script, the env-cmd package can be used to load the environment variables defined in the .env file.
env-cmd .env node app.js
method 4: Use cross-env package
This package allows environment variables to be set in one way for every platform.
After installing it with npm, you can just add it to your deployment script in package.json as follows:
"build:deploy": "cross-env NODE_ENV=production webpack"
{
  ...
  "scripts": {
    "start": "env NODE_ENV=production someapp --options"
  }
  ...
}
Most elegant and portable solution:
package.json:
"scripts": {
"serve": "export NODE_PRESERVE_SYMLINKS_MAIN=1 && vue-cli-service serve"
},
Under Windows, create export.cmd and put it somewhere in your %PATH%:
@echo off
set %*
If you:
Are currently using Windows;
Have git bash installed;
Don't want to use set ENV in your package.json which makes it only runnable for Windows dev machines;
Then you can set npm's script-shell from cmd to Git Bash and write Linux-style env-setting statements in package.json so they work on Windows/Linux/Mac.
$ npm config set script-shell "C:\\Program Files\\git\\bin\\bash.exe"
Although not directly answering the question, I'd like to share an idea on top of the other answers. From what I gathered, each of these offers some level of complexity to achieve cross-platform independence.
In my scenario, all I originally wanted was to set a variable to control whether or not to secure the server with JWT authentication (for development purposes).
After reading the answers I decided to simply create 2 different files, with authentication turned on and off respectively.
"scripts": {
"dev": "nodemon --debug index_auth.js",
"devna": "nodemon --debug index_no_auth.js",
}
The files are simply wrappers that call the original index.js file (which I renamed to appbootstrapper.js):
//index_no_auth.js authentication turned off
const bootstrapper = require('./appbootstrapper');
bootstrapper(false);
//index_auth.js authentication turned on
const bootstrapper = require('./appbootstrapper');
bootstrapper(true);
class AppBootStrapper {
  init(useauth) {
    // real initialization
  }
}

// (presumably the module exports something callable by the wrappers above, e.g.:)
// module.exports = (useauth) => new AppBootStrapper().init(useauth);
Perhaps this can help someone else
Running a node.js script from package.json with multiple environment variables:
package.json file:
"scripts": {
"do-nothing": "set NODE_ENV=prod4 && set LOCAL_RUN=true && node ./x.js",
},
x.js can be as follows:
let env = process.env.NODE_ENV;
let isLocal = process.env.LOCAL_RUN;
console.log("ENV", env);
console.log("isLocal", isLocal);
You should not set ENV variables in package.json. actionhero uses NODE_ENV to allow you to change configuration options which are loaded from the files in ./config. Check out the redis config file, and see how NODE_ENV is used to change database options in NODE_ENV=test.
If you want to use other ENV variables to set things (perhaps the HTTP port), you still don't need to change anything in package.json. For example, if you set PORT=1234 in ENV and want to use that as the HTTP port in NODE_ENV=production, just reference that in the relevant config file, i.e.:
// in config/servers/web.js
exports.production = {
  servers: {
    web: function(api){
      return {
        port: process.env.PORT
      }
    }
  }
}
In addition to using cross-env as documented above: to set a few environment variables within a package.json run script, if your script runs Node.js, you can have Node pre-require dotenv/config:
{
  "scripts": {
    "eg:js": "node -r dotenv/config your-script.js",
    "eg:ts": "ts-node -r dotenv/config your-script.ts",
    "test": "ts-node -r dotenv/config -e 'console.log(process.env.PATH)'"
  }
}
This will cause your node interpreter to require dotenv/config, which will itself read the .env file in the present working directory from which node was called.
The .env format is lax or liberal:
# Comments are permitted
FOO=123
BAR=${FOO}
BAZ=Basingstoke Round About
#Blank lines are no problem
Note: in order to set multiple environment variables, the script should go like this:
"scripts": {
  "start": "set NODE_ENV=production&& set MONGO_USER=your_DB_USER_NAME&& set MONGO_PASSWORD=DB_PASSWORD&& set MONGO_DEFAULT_DATABASE=DB_NAME&& node app.js"
},
