npm adds whitespace when setting env variable in package.json - node.js

I have a pre-written package.json file for an app which I need to modify. More specifically, I want to change the NODE_PORT environment variable through the package.json file and I'm working on a Windows machine.
In the package.json I have several scripts that I run through npm when I want to spin up an instance of the app.
For example:
set NODE_PORT=80&& set NODE_ENV=test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/access.log -e ./logs/err.log --time --name Test
This script, for example, works fine.
However, when I try to set the NODE_PORT variable to 8080 (the port I need) like so:
set NODE_PORT=8080&& set NODE_ENV=parallel_test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test
a whitespace character gets appended to the end of the variable's value.
I verified this by logging the number of characters in process.env.NODE_PORT, which prints 5. Moreover, Google login for the app crashes because the app's redirect link doesn't match the one registered in the Google Cloud Platform. That is:
app: http://localhost:8080 /auth/check-google vs. Google Cloud Platform: http://localhost:8080/auth/check-google
Any idea why this is happening?

I faced a similar issue recently and handled it with .trimEnd() while loading variables with dotenv. But I think using cross-env can solve your problem.
Most Windows command prompts will choke when you set environment variables with NODE_ENV=production like that. (The exception is Bash on Windows, which uses native Bash.) Similarly, there's a difference in how Windows and POSIX commands utilize environment variables. With POSIX, you use $ENV_VAR, and on Windows you use %ENV_VAR%.
Add cross-env at the start of the command in your script: "cross-env NODE_PORT=8080 ..."
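A sketch of what the parallel test script from the question could look like with cross-env (assuming cross-env was installed first via npm install --save-dev cross-env; the script name is illustrative, and the pm2-logrotate install step can stay as a separate && command before it):
"scripts": {
  "parallel-test": "cross-env NODE_PORT=8080 NODE_ENV=parallel_test pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test"
}
cross-env passes the values to the child process verbatim, so no trailing whitespace ends up in NODE_PORT regardless of the shell.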

Related

NodeJS Google Vision is unable to detect a Project Id in the current environment

Under an Ubuntu environment, NodeJS Google Vision complains:
Error: Unable to detect a Project Id in the current environment.
Even though I already put json credential through
$ export GOOGLE_APPLICATION_CREDENTIALS="/var/credential_google.json"
Please help.
As a quick hack you can try this :
$ GOOGLE_APPLICATION_CREDENTIALS="/var/credential_google.json" node app.js
It's not recommended to use a .json credentials file locally. I've seen these leak on production servers, causing whole platforms to be deleted, plus they introduce environment-switching and security issues.
Set up the Google Cloud CLI.
Now the server will 'look' at the local environment and use that.
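For local development, that usually means authenticating once so the client libraries can pick up application default credentials, e.g. (a sketch, assuming the gcloud CLI is already installed):
$ gcloud auth application-default login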
If you get the error "Unable to detect a Project Id in the current environment.", it means the auth library cannot find a default project id.
You need to have a base project set in Google Cloud, regardless of which environment variables and project you're running.
Run
gcloud config set project [some-project-id]
Now if you run (node example)
"dev": "NODE_ENV=dev GCP_PROJECT=some-project-id nodemon index.ts",
It will load the project environment. This also makes deployment easier:
"deploy:dev": "y | gcloud app deploy --project some-dev-project app.yaml",
"deploy:prod": "y | gcloud app deploy --project some-prod-project app.yaml"
App Engine has security set up automatically with standard environments. With flex you can use one of the managed images Google provides.
If you are usually a Windows user trying out Ubuntu (like me), the problem is likely the assumption that the export command exports the variable to all terminal sessions, and that you need to open a new terminal for it to take effect (as you would expect for an environment variable in Windows).
The export command doesn't export the variable to other terminal sessions. So if you export it in one terminal, you have to use it in that same terminal.
If you would like to export it permanently, you can try the solution listed here.
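One common approach to making it permanent (a sketch, assuming a Bash shell; adjust the credentials path to yours) is appending the export to your shell profile:
$ echo 'export GOOGLE_APPLICATION_CREDENTIALS="/var/credential_google.json"' >> ~/.bashrc
$ source ~/.bashrc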
You can put the path to the JSON credentials directly when instantiating the client, by passing it as an argument.
For example:
const speech = require('@google-cloud/speech');
const client = new speech.SpeechClient({ keyFilename: "credential_google.json" });
Also, for me setting it in the terminal didn't work.

Elastic Beanstalk Environment Variables Missing When In SSH

I'm trying to run some commands on my NodeJS app that need to be run via SSH (Sequelize seeding, for instance); however, when I do so, the expected env vars are missing.
If I run eb printenv on my local machine I see the expected environment variables that were set in my EB Dashboard
If I SSH in, and run printenv, all of those variables I expect are missing.
So what happens, is when I run my seeds, I get an error:
node_modules/.bin/sequelize db:seed:all
ERROR: connect ECONNREFUSED 127.0.0.1:3306
I noticed that the port was wrong; it should be 5432. I checked whether my environment variables were set with printenv, and they are not there. This leads me to suspect that the proper env variables are not loaded in my SSH session, and NodeJS is falling back to the default values I provided in my config.
I found some solutions online, like running the following to load the env variables:
/opt/python/current/env
But the python directory doesn't exist. All that's inside /opt/ is elasticbeanstalk and aws directories.
I had some success so I could at least see that the env variables exist somewhere on the server by running the following:
sudo /opt/elasticbeanstalk/bin/get-config environment --output YAML
But simply running this command does not fix the problem. All it does is output the expected env variables to the screen. That's something! At least I know they are definitely there! But the env variables are still not there when I run printenv.
So how do I fix this problem? Sequelize and NodeJS are clearly not seeing the env variables either, and it's trying to use the fallback default values that are set in my config file.
I know my answer is late, but I had the same problem, and after some attempts with bash scripting I found a way to store the value in an env var.
You can simply run the following command:
export env=`/opt/elasticbeanstalk/bin/get-config environment -k <your-variable-name>`
Now you will be able to access this variable (stored in $env here) easily:
echo $env
Afterward, you can use the env var however you like. In my case, I use it to decide which version of my code to build, in a file called build-script.sh whose content is as follows:
#!/bin/bash
# get env variable to know which environment this code is running in
export env=`/opt/elasticbeanstalk/bin/get-config environment -k environment`
# build the code based on the current environment
if [ "$env" = "production" ]
then
  echo "building for production"
  npm --prefix /var/app/current run build-prod
else
  echo "building for non production"
  # assumes a corresponding non-production script exists in package.json
  npm --prefix /var/app/current run build-dev
fi
Hope this helps anyone facing the same issue 🤟🏻

How to use pm2 with a nodejs app that uses readline for taking command line input?

I have a Node.js app that uses Node's native readline module to take command-line input.
When launching the app with pm2, the command-line input is unavailable.
Any ideas how to solve this issue? Other than using systemd and creating an init script myself?
Use pm2 to attach to your process and you will see readline, clearline and cursorTo working as expected.
First get your process id with:
$ pm2 id {your-process-name}
[ 7 ]
Let's say it's 7:
$ pm2 attach 7
If you check the pm2 website, they clearly describe it as: "Advanced, production process manager for Node.js." So using it in this context is unnecessary; all pm2 does is start your node process and let you manage it. The simple way is to use command-line args when starting the process.
For example, I myself use commander for this purpose; it manages all my command-line arguments (you can see its usage). With pm2 I use it like the following:
pm2 start server.js --name production -- --env dev -p 3458
Notice the -- before --env; it is used to separate pm2's arguments from the arguments you want to supply to your process.
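A minimal sketch of how server.js might read those flags with commander (the option names mirror the command above; the default values are assumptions):
// server.js
const { program } = require('commander');

program
  .option('--env <name>', 'target environment', 'dev')
  .option('-p, --port <number>', 'port to listen on', '3000');

program.parse(process.argv);

// with "-- --env dev -p 3458" above, this yields { env: 'dev', port: '3458' }
const { env, port } = program.opts();
console.log(`starting in ${env} mode on port ${port}`);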
P.S.
PM2 has more complex usage than this in terms of process management; I myself use it for production-level deployment. If you want to take input from the user every time they start your app, then you should stick with the plain node command.

Docker & PM2: String based CMD with environment variables

I'm currently using the shell-form of CMD in Docker for launching my node app:
CMD /usr/src/app/node_modules/.bin/trifid --config $TRIFID_CONFIG
The env-var TRIFID_CONFIG is set to a default in the Dockerfile:
ENV TRIFID_CONFIG config.customer.json
This makes it easy to pass a different config file for dev environments, for example.
Now I'm trying to switch this to PM2 for production. However, it looks like all the PM2 samples use the "exec" form of CMD, which, from what I understand, does not evaluate env-vars. I tried the shell form with PM2:
CMD pm2-docker /usr/src/app/node_modules/trifid/server.js --config $TRIFID_CONFIG
But it looks like the variable is not evaluated this way; it falls back to the default on execution.
What would be the proper way to handle this with PM2 inside a Docker image?
I had a discussion on GitHub and meanwhile figured it out:
CMD pm2-docker /usr/src/app/node_modules/.bin/trifid -- --config $TRIFID_CONFIG
So the trick is to use -- after the script path; the rest is passed as arguments to it. With the shell form, env-vars do get evaluated properly.
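With the ENV default in place, a dev container can then override the config at run time without touching the image (image and file names here are placeholders):
docker run -e TRIFID_CONFIG=config.dev.json my-trifid-image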

Nodejs/Strongloop: working upstart config example

After updating strongloop to v2.10, slc stopped writing logs.
I also couldn't make the app start in production mode.
/etc/init/app.conf
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
env NODE_ENV=production
script
  exec slc run /home/ubuntu/app/ \
    -l /home/ubuntu/app/app.log \
    -p /var/run/app.pid
end script
Can anybody check my upstart config or provide another working copy?
Were you writing the pid to a file so that you can use it to send SIGUSR2 to the process to trigger log re-opening from logrotate?
Assuming you are using Upstart 1.4+ (Ubuntu 12.04 or newer), you would be better off letting slc run log to its stdout and letting Upstart take care of writing it to a file, so that log rotation is done for you:
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
# assuming this is /etc/init/app.conf,
# stdout+stderr logged to: /var/log/upstart/app.log
console log
env NODE_ENV=production
exec /usr/local/bin/slc run --cluster=CPUs /home/ubuntu/app
The log rotation for "free" is nice, but the biggest benefit of this approach is that Upstart can log errors that slc run reports, even a crash while trying to set up its internal logging, which makes debugging a lot easier.
Aside from what it means to your actual application, the only effect NODE_ENV has on slc run is to set the default number of cluster workers to the number of detected CPU cores, which literally translates to --cluster=CPUs.
Another problem I often find is the node/npm path prefix not being in the $PATH as used by Upstart, so I normally put the full paths to executables in my Upstart jobs.
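Alternatively, a sketch of extending the PATH inside the job itself (assuming node is installed under /usr/local/bin; adjust to your install prefix):
env PATH=/usr/local/bin:/usr/bin:/bin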
Service Installer
You could also try using strong-service-install, which is a module used by slc pm-install to install strong-pm as an OS service:
$ npm install -g strong-service-install
$ sudo sl-svc-install --name app --user ubuntu --cwd /home/ubuntu/app -- slc run --cluster=CPUs .
Note the spaces around the -- before slc run.
