I recently deployed a node application with Phusion Passenger for nginx, and encountered a pretty quirky error in the process:
My code threw an error when trying to spawn a child process. After a bit of debugging I concluded that the problem arose from the $PATH environment variable being undefined in node, and that I could solve it with a passenger_env_var directive like this (an extract of my nginx config):
server {
    listen 80;
    server_name blargh.com;
    root /home/user/blargh.com/build;

    passenger_enabled on;
    # For some reason $PATH isn't loaded into node, and we can't spawn child processes without it
    passenger_env_var PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games;
}
I still haven't figured out what caused this problem though - setting passenger_load_shell_envvars on; didn't help, and the www-data user did have a $PATH envvar defined in its shell. Moreover, other environment variables (like $SHELL) seem to have been loaded by node, adding to the mystery of why $PATH was excluded.
Does anybody know what could cause this problem?
tl;dr
Specify global envvars that you expect to be defined at system boot (like PATH) in /etc/default/nginx. Use something like dotenv properly and write environment-specific config for your app in a text file that isn't checked in. Environment variables are pretty evil in general.
I felt this one deserved a fairly lengthy answer, since environment variables have caused recurring problems for me during the last couple of months.
Storing your config as environment variables is one of the rules that the twelve-factor app methodology lays out for writing scalable web applications. They're good because they let you separate your config from your code in a flexible manner. However, a problem with them is that the way we normally encounter them, when we export MYVAR=myvalue or set them in our ~/.pam_environment or ~/.bashrc, scopes them to our current terminal session.
This causes issues as we start to use solutions like Phusion Passenger to start our apps at system boot - their startup scripts don't care about user shell environments. Apparently they also don't care about the global /etc/environment, which is what caused my problems with PATH being undefined.
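For example (MYVAR here is just the placeholder from the paragraph above), an exported variable is only visible to the session that exported it and its children, not to a fresh login shell or a boot-time startup script:
$ export MYVAR=myvalue
$ echo $MYVAR
myvalue
Open a new terminal (or let an init script start your app) and the same echo prints nothing.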
Phusion Passenger actually has some documentation on making global environment variables persist:
If you installed Nginx through the Debian or Ubuntu packages, then you can define environment variables in /etc/default/nginx. This is a shell script so you must use the export FOO=bar syntax.
So by setting the PATH envvar in /etc/default/nginx, I could solve that issue. But I was still having trouble with the other environment variables - I had to set them in my nginx config to have them passed on to my node app. It was clear to me that this wasn't the right way to do it.
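For the record, the fix boils down to a single line in /etc/default/nginx (the same PATH value as in the nginx config above); since the file is sourced as a shell script, the export syntax from the quote applies:
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games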
At this point I was already using dotenv, but I had slightly misunderstood its purpose. I had checked in the .env file and thought of it as a way to provide default values for envvars that would be overridden by the environment as needed. This isn't how the authors themselves envisioned the module being used:
We strongly recommend against committing your .env file to version control. It should only include environment-specific values such as database passwords or API keys.
It started becoming clear to me that people often don't define the envvars for their apps in the actual environment. I found an article by Peter Lyons that suggests storing config in a text file instead of in envvars, and that's when it clicked for me.
My final solution was to remove my .env file from version control and write a specific one for each environment. I left a .env.template in the repo as a reference to what configuration my app expects to be defined at run-time.
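As a sketch, the .env.template only documents the keys the app reads (the names below are illustrative placeholders, not my actual config), while each environment's real .env stays untracked:
# .env.template - checked in, values intentionally left blank
DATABASE_URL=
SESSION_SECRET=
At startup the app calls require('dotenv').config() as usual, and dotenv fills process.env from whichever .env file is present in that environment.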
Related
I have a basic "service unit" file like the following.
[Unit]
Description=Certprovider service
After=network.target
[Service]
Type=simple
Restart=always
RestartSec=5s
ExecStart=/home/mert/certprovider/certprovider
WorkingDirectory=/home/mert
User=root
Group=root
[Install]
WantedBy=multi-user.target
I have the .env file in the root of the project.
CA_DIR_URL=https://acme-v02.api.letsencrypt.org/directory
EMAIL=mertsmsk0#gmail.com
HOST=127.0.0.1
PORT=8557
I load this file with the following lines.
err := godotenv.Load()
if err != nil {
    log.Fatalln("Error loading .env file")
}
The service has been working very well, but I cannot reach the PORT environment variable, so I cannot start the web server because it has no port to listen on. I can print all the environment variables that are in the .env except PORT. I changed its name to APP_PORT, but it's the same thing.
The mysterious part is that I can reach the other variables in the .env file. On top of that, when I add the following lines to the unit file, I can reach that variable, but I don't understand why I should have to add only the PORT variable to the unit file:
[Service]
Environment=PORT=8557
It only happens when I run it as a binary file, because I can reach the variables when I run it with the following command:
go run .
If you call Load without any args, it defaults to loading .env from the current working directory.
Your current path is configured here:
WorkingDirectory=/home/mert
And yet, you say (emphasis added)
I have the .env file in the root of the project.
But that's not the current working directory.
root of the project
That concept is not meaningful to the application runtime. Unlike interpreted languages like, say, PHP, Go compiles to a static binary that is functionally entirely distinct from the set of libraries and sources that define it. In PHP (or Python, Ruby, etc.) those libraries have no other place to be than the root of some project directory.
In go, that stuff is only relevant for development and testing. The fact that your executable appears to be in your "root of the project" is entirely incidental and completely meaningless.
If you really want to put the runtime configuration in that particular file, in that particular place, just set that as the working directory:
ExecStart=/home/mert/certprovider/certprovider
WorkingDirectory=/home/mert/certprovider
I would put that stuff in /usr/local so I didn't accidentally break my Let's Encrypt setup while fiddling with stuff in my home directory - doubly so for Let's Encrypt because it might take up to 90 days to realize your certs weren't being refreshed. I'd put the config outside of my home directory for the same reason.
Actually, for this case I'd probably put all the config in the unit file. Why not put it there? But of course, that's a matter of opinion. If you really want to use the automatic .env discovery then you should dedicate a directory to containing that hidden file. It doesn't make much sense to put a config specific to one application in ~/.env.
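If you do keep the config in the unit file, systemd can also read a plain KEY=value file directly with EnvironmentFile (a sketch, assuming the .env from the question lives in /home/mert/certprovider):
[Service]
# systemd parses simple KEY=value lines itself
EnvironmentFile=/home/mert/certprovider/.env
ExecStart=/home/mert/certprovider/certprovider
With that approach you'd either drop the godotenv.Load() call or keep the working directory pointing at the same folder so the call still succeeds.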
Wherever you put .env, make sure that's your working directory so it will be discovered.
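Alternatively, godotenv's Load accepts explicit file paths, so the lookup doesn't depend on the working directory at all (a sketch using the path from the question):
// Load the .env from an absolute path instead of relying on the working directory
if err := godotenv.Load("/home/mert/certprovider/.env"); err != nil {
    log.Fatalln("Error loading .env file:", err)
}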
I can print all the environment variables that are in the .env except PORT. I changed its name to APP_PORT, but it's the same thing. [...]
The mysterious part is that I can reach the other variables in the .env file.
Respectfully, that sounds like an assumption on your part. Without evidence to the contrary, it's easy to conclude that you have set defaults for these values or that they're coming from some other source or behavior. That's more parsimonious than concluding the godotenv library read some, but not all, the values from a file.
It only happens when I run it as a binary file, because I can reach the variables when I run it with the following command: [go run .]
Go always runs as a binary; go run . simply builds the binary in a temp location and then runs it. The SO question "Why is it recommended to use `go build` instead of `go run` when running a Go app in production?" talks about why go run is often contraindicated.
Under an Ubuntu environment, NodeJS Google Vision complains:
Error: Unable to detect a Project Id in the current environment.
Even though I already set the JSON credentials with:
$ export GOOGLE_APPLICATION_CREDENTIALS="/var/credential_google.json"
Please help.
As a quick hack you can try this:
$ GOOGLE_APPLICATION_CREDENTIALS="/var/credential_google.json" node app.js
It's not recommended to use a .json config file locally. I've seen these leak on production servers, causing whole platforms to be deleted, plus they introduce environment-switching and security issues.
Set up the Google Cloud CLI.
Now the server will 'look' at the local environment and use that.
If you get the error "Unable to detect a Project Id in the current environment.", it means the auth library cannot find the project default id.
You need to have a base project set in Google Cloud, regardless of the environment variables and the project you're running.
Run
gcloud config set project [some-project-id]
Now if you run (node example)
"dev": "NODE_ENV=dev GCP_PROJECT=some-project-id nodemon index.ts",
It will load the project environment. This also allows you to deploy more easily with:
"deploy:dev": "y | gcloud app deploy --project some-dev-project app.yaml",
"deploy:prod": "y | gcloud app deploy --project some-prod-project app.yaml"
App Engine has security set up automatically with standard environments. With flex you can use one of the managed images Google provides.
If you are usually a Windows user trying out Ubuntu (like me), the problem is likely the assumption that the export command exports the variable to all terminal sessions and that you need to open a new terminal for it to take effect (as you would expect for an environment variable in a Windows terminal).
The export command doesn't export the variable to other terminal sessions, so if you export it in a terminal, you have to use it in that same terminal.
If you would like to export it permanently, then you can try the solution listed here
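For example, a common way to make it permanent for your user (assuming a bash shell; adjust the path to wherever your credentials file actually lives) is to append the export to ~/.bashrc and reload it:
$ echo 'export GOOGLE_APPLICATION_CREDENTIALS="/var/credential_google.json"' >> ~/.bashrc
$ source ~/.bashrc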
You can put the path to the JSON credentials directly when instantiating the client, by passing it as an argument.
For example:
const client = new speech.SpeechClient( {keyFilename: "credential_google.json"});
Also, for me setting it in the terminal didn't work.
I have a variable set in my .bashrc file:
whoami#cloudshell:~/source/NodePrototype (x-alcove-9999999)$ echo $APP_ENVIRONMENT
LIVE
Yet my Node.js application, with:
const app_environment_config=require('./APP_ENVIRONMENT/' + process.env.APP_ENVIRONMENT)
produces
2019-02-21 14:18:16 default[20190221t141628] Error: Cannot find module './APP_ENVIRONMENT/undefined'
Even though, when I enter the node shell:
whoami#cloudshell:~/source/NodePrototype (x-alcove-9999999)$ node
> process.env.APP_ENVIRONMENT
'LIVE'
The same part works locally.
It depends on how your Node app is being launched; it looks like it is not running in an environment where that variable exists. To make sure, print all your current env vars with console.log(process.env).
Also, a good practice when you need something like this is to pass configuration to your Node apps with .env files, using this module: https://www.npmjs.com/package/dotenv.
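For example (a minimal sketch, reusing the APP_ENVIRONMENT variable from the question), put APP_ENVIRONMENT=LIVE in a .env file next to the app and load it before anything reads process.env:
// at the very top of the entry point
require('dotenv').config();
console.log(process.env.APP_ENVIRONMENT); // 'LIVE' if the .env defines it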
I am new to OpenShift and I've tried hard to modify my env upon git push so that I don't need to run rhc env set ENV_VAR=value -a appname every time I push. According to the documentation, I can do an export in one of the action hooks, but whenever I do so, the environment variable does not register.
What is the best way to register those variables automatically, rather than needing to execute the rhc command or ssh into the machine and export them?
The documentation seems to be outdated, as the method of exporting in action_hooks doesn't work anymore:
https://developers.openshift.com/en/managing-environment-variables.html
I see that you have your answer already, but in case others come here for the same question, I'd like to mention that the rhc env set command actually sets a variable persistently, so it "survives" the code push, build and gear restart.
The documentation linked in the question says that the export can be used to view environment variables during build; it does not recommend setting environment variables using hooks.
The variables' listing itself, using the build hook, should work just fine (it worked for me at the time of writing this).
In case the export in the build action hook seems not to work (does not list the variables), it is typically caused by the hook file not being set executable (or by a syntax error within the file).
Yes, the action hook way is already broken: even though you export through the hook, you can see that no declare -x statements are thrown out like the documentation states anymore.
One other method you can do is to use the action hook to write to files in this directory:
$HOME/.env/user_vars
for example, if you want to set RAILS_ENV=development, write a script that churns out this file:
$HOME/.env/user_vars/RAILS_ENV
with this content:
development
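In other words, the action hook just writes one file per variable, along these lines:
# inside the action hook
mkdir -p $HOME/.env/user_vars
echo "development" > $HOME/.env/user_vars/RAILS_ENV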
I spent an awful lot of time looking for alternative ways too, but this guy nailed it. I've copied his solution here in case the link becomes broken in the future:
If you need to set some environment variables in your GEAR you can use an action hook.
The pre-start action hook will serve you well, but if you need to restore those variables after a gear restart, the pre-start action hook won't work.
The post-restart action hook, on the other hand, will execute its actions, but I haven't managed to get the environment variables working: after its execution, all environment variables that should have had a value were empty.
What I did was modify the pre-start action hook to create environment variables as files under $HOME/.env/user_vars:
# Actual script
export OPENSHIFT_POSTGRESQL_DB_HOST="xxx.xxx.xxx.xxx"
export OPENSHIFT_POSTGRESQL_DB_PORT="***"
export OPENSHIFT_POSTGRESQL_DB_NAME="***"
export OPENSHIFT_POSTGRESQL_DB_USERNAME="***"

# Added script for post-restart variables: one file per variable under $HOME/.env/user_vars
echo "xxx.xxx.xxx.xxx" > $HOME/.env/user_vars/OPENSHIFT_POSTGRESQL_DB_HOST
echo "***" > $HOME/.env/user_vars/OPENSHIFT_POSTGRESQL_DB_PORT
echo "***" > $HOME/.env/user_vars/OPENSHIFT_POSTGRESQL_DB_USERNAME
echo "***" > $HOME/.env/user_vars/OPENSHIFT_POSTGRESQL_DB_PASSWORD
After this, if you execute gear restart, the environment variables will exist and will be accessible from your application.
Reference:
https://guilleml.wordpress.com/2015/02/17/setting-environment-variables-in-openshift/
I am running my NodeJS project on DotCloud. Sadly, DotCloud's deployment is "project-intrusive", that is, it requires a supervisord.conf file to reside in the app root. My deployment setup looks like this (using git repos):
project-deploy.git/prod/dotcloud.yml
project-deploy.git/prod/project -> project.git
(/prod/project uses project.git as a submodule to access the code)
Now, my thinking is that I would eventually end up having several environments like this, e.g. dev, test and stage. The dev environment wouldn't even have a dotcloud.yml file, since it is expected to run everything locally.
Well, this works pretty well. But the problem is the supervisord.conf file, which is just for deployment to DotCloud: it now resides in the project.git repo, even though it doesn't belong there.
Are there any modules or NodeJS scripts that let you put deployment configuration files elsewhere, and maybe even specify what the target environment is, e.g. node deploy.js --production, or something like that?
There is a way to get rid of supervisord.conf. Assuming that you want to run e.g. node app.js, you can put the following in dotcloud.yml:
www:
    type: nodejs
    process: node app.js
Now, of course, it doesn't solve the problem of the dotcloud.yml file itself, but at least it reduces clutter a little bit by removing supervisord.conf from the app root.