Read Env Vars from a File in AWS Elastic Beanstalk during deployment - node.js

We traditionally stored our env vars under Environment->Configuration->Software in the Elastic Beanstalk console and accessed them in our Node.js app via process.env.VariableName. There's a hard limit on the variables stored there, so we moved our variables to AWS Parameter Store.
I have an .ebextensions script that downloads the parameters from Parameter Store and saves them to a file named .env.local in the /home/ec2-user directory, but I can't reference them in the app with process.env.
.ebextensions script:
commands:
  01_command2:
    command: aws ssm get-parameters-by-path --path /latest/ --recursive --with-decryption --output text --query "Parameters[].[Name,Value]" --region us-west-2 | sed -E 's#latest/([^[:space:]]*)[[:space:]]*#export \1=#' > /home/ec2-user/.env.local
How can I access these vars in Node.js without changing the way we currently read env vars (process.env.VarName)?
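One way to keep the process.env.VarName pattern, sketched below, is to load the generated file at startup with the dotenv package (this assumes dotenv is available as a dependency; its parser skips the leading export keyword that the sed command writes on each line):

// First lines of the app entry point (e.g. server.js) -- a sketch, not the poster's
// actual code. Pulls the file written by the .ebextensions command into process.env
// before anything else reads configuration.
require('dotenv').config({ path: '/home/ec2-user/.env.local' });

// Existing code keeps working unchanged, e.g.:
console.log(process.env.VariableName); // VariableName stands in for a real parameter name

One wrinkle worth checking (an assumption about the data): get-parameters-by-path returns full names such as /latest/MY_VAR, so the sed pattern above leaves a leading / on each line; anchoring it as 's#^/latest/([^[:space:]]*)[[:space:]]*#export \1=#' produces clean export MY_VAR=value lines that dotenv can parse.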

Related

Nextjs App reading configuration from Azure App Service

We have a Next.js project which is built with Docker and deployed to Azure App Service (container). We also set up configuration values within App Service and try to access them, however it's not working as expected.
A few things we tried:
Restarting the App Service after adding new configuration
Removing the .env file while building the Docker image
Including the .env file while building the Docker image
Here's how we try to read the environment variables within the App Service:
const env = process.env.NEXT_PUBLIC_ENV;
const A = process.env.NEXT_PUBLIC_AS_VALUE;
Wondering if this can actually be done?
Just thinking out loud below:
Since we're deploying the Docker image within App Service's container (Linux), does that mean the container can't pull the value from the environment variables?
The Docker image already performs the npm run build, so does that mean the image is in static form (build time) and will never read from the App Service configuration (runtime)?
After a day or two, I came up with an alternative solution: passing the environment values in the Dockerfile while building my project. (Next.js inlines NEXT_PUBLIC_* variables into the bundle at build time, so they have to be present when npm run build runs.)
TLDR
Pass your env values within the Dockerfile.
Set all your env var values (dev, staging, prod, etc.) as pipeline variables.
Also set a "settable" variable among the pipeline variables, so you can choose which environment to build when triggering your pipeline (e.g., buildEnv).
Set up a bash script that rewrites the variable names (e.g., from firebaseApiKey to DEVfirebaseApiKey) according to the value received from buildEnv.
Use the "replace tokens" task from Azure Pipelines to replace the values inside the Dockerfile.
Build your Docker image.
Voilà~ now you get your environment-specific build.
Details
Within your Dockerfile you can set your env variables like this:
RUN NEXT_PUBLIC_ENV=#{env}# \
NEXT_PUBLIC_FIREBASE_API_KEY=#{firebaseApiKey}# \
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=#{firebaseAuthDomain}# \
NEXT_PUBLIC_FIREBASE_PROJECT_ID=#{firebaseProjectId}# \
NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET=#{firebaseStorageBucket}# \
NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID=#{firebaseMessagingSenderId}# \
NEXT_PUBLIC_FIREBASE_APP_ID=#{firebaseAppId}# \
NEXT_PUBLIC_FIREBASE_MEASUREMENT_ID=#{firebaseMeasurementId}# \
NEXT_PUBLIC_BASE_URL=#{baseURL}# \
npm run build
These values (e.g., baseURL, firebaseMeasurementId, etc.) are only placeholders, because we need to change them later with a bash script according to the buildEnv we receive. (buildEnv is settable when you trigger a build.)
A sample bash script is below. It looks through your Dockerfile for the word env and changes it to DEVenv / UATenv / PRODenv based on what you pass in buildEnv:
#!/bin/bash
# Rewrite the token names inside the Dockerfile for the selected environment.
# $(buildEnv) is the Azure Pipelines variable set when the build is triggered.
case $(buildEnv) in
  dev)
    sed -i -e 's/env/DEVenv/g' ./Dockerfile
    ;;
  uat)
    sed -i -e 's/env/UATenv/g' ./Dockerfile
    ;;
  prod)
    sed -i -e 's/env/PRODenv/g' ./Dockerfile
    ;;
  *)
    echo -n "unknown"
    ;;
esac
When this is complete, your "environment specific" Dockerfile is more or less ready. Now we'll make use of the "replace tokens" task from Azure Pipelines to replace the values inside the Dockerfile, as sketched below. Make sure you have all your values set up as pipeline variables!
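For illustration, a minimal pipeline fragment wiring these two steps together might look like this (a sketch: it assumes the qetza "Replace Tokens" marketplace task is installed as replacetokens@3 and that the case/sed script above is saved as rename-tokens.sh; both names are placeholders and the exact task version may differ):

# azure-pipelines.yml fragment (sketch)
steps:
- bash: ./rename-tokens.sh          # the case/sed script shown above
  displayName: 'Rename tokens for $(buildEnv)'
- task: replacetokens@3             # marketplace "Replace Tokens" task
  displayName: 'Inject pipeline variables into the Dockerfile'
  inputs:
    targetFiles: '**/Dockerfile'
    tokenPrefix: '#{'               # matches the #{...}# placeholders in the Dockerfile
    tokenSuffix: '}#'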
Lastly, you can build your Docker image and deploy :)

Elasticbeanstalk process.env variables not set

I've created a Node v12 (platform version 5.2.2) Elastic Beanstalk app and set the env vars in the EB console.
Locally, I use dotenv to load envs, but EB does it its own way. The env variables are recognized when I view the Elastic Beanstalk server logs, but when I SSH into the EC2 instance and console.log(process.env) or echo $MY_ENV_VAR, none of the values are set.
So I sourced the env vars at /opt/elasticbeanstalk/deployment/env and can now echo them, but running migrations shows that the env vars are still unset, and console.log(process.env) shows none of them are set.
Why is this? How do I ensure they're set so I can run migrations and other commands that require envs?
/root/.ebextensions/environmentvar.config:
option_settings:
  aws:elasticbeanstalk:application:environment:
    TEST_VAR: helloworld
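One possible explanation, with a sketch of a workaround: on the Amazon Linux 2 platforms that file appears to hold plain KEY=value lines, so a bare source sets shell variables (which is why echo works) without exporting them to child processes such as node or the migration command. Exporting everything while sourcing makes the values visible to those children:

# In the SSH session, before running migrations or other commands (sketch):
set -a                                        # auto-export every variable assigned from here on
source /opt/elasticbeanstalk/deployment/env   # the EB-managed KEY=value file
set +a
node -e 'console.log(process.env.TEST_VAR)'   # should now print helloworld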

Setting EC2 Environment Variables with CodeDeploy, Parameter Store and PM2

I am deploying a Node.js app to EC2 using CodeDeploy. I am storing credentials in AWS Systems Manager Parameter Store, however I cannot find a method to expose these to my application.
I am using PM2 for process management. I can successfully retrieve the parameter from the Parameter Store on the target machine, so there are no permission issues. For example:
aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value
...successfully returns the correct string. I attempt to use this in my applicationStart.sh CodeDeploy file and start the app:
#!/bin/bash
export LOCAL_CACHE_PATH=$(aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value)
pm2 start ecosystem.config.js --env production
LOCAL_CACHE_PATH is undefined in my app when I access process.env.LOCAL_CACHE_PATH.
So the environment variable is available within the applicationStart.sh script and yet undefined when the app starts from that script.
I am looking for a recommended approach to use environment variables from the Parameter Store with CodeDeploy.
I have read literally dozens of posts on similar topics but cannot resolve it. Very much appreciate any guidance.
The solution I am using is to write the environment variables to a .env file and use that in my app:
afterInstall.sh:
echo LOCAL_CACHE_PATH=$(aws ssm get-parameters --output text --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value) >> /home/ubuntu/foo/.env
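The consuming side is not shown above; one way to wire it up (a sketch, assuming the dotenv package is installed and the app is deployed to /home/ubuntu/foo) is to load that file at the very top of the app entry point:

// App entry point (sketch) -- reads the .env file written by afterInstall.sh, so the
// values are present regardless of how the PM2 daemon was originally started.
require('dotenv').config({ path: '/home/ubuntu/foo/.env' });

console.log(process.env.LOCAL_CACHE_PATH); // now defined inside the PM2-managed process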

Cloud9 does not expose bash_profile exports in nodejs lambda

I have a Cloud9 environment spun up and have modified my ~/.bash_profile to export a value at the end of the file.
export foo="hello world"
I run . ~/.bash_profile and then echo $foo and I see hello world output in the terminal.
I then created a Node.js Lambda with API Gateway. I run the API Gateway locally in Cloud9 and attempt to read the environment variables:
console.log(process.env)
I see a list of variables available to me that AWS has defined. My export, however, is not listed there. Since I will be using environment variables when my Lambda is deployed, I want to test it with environment variables defined in the Cloud9 environment.
Is there something specific I have to do in order to get the Lambda to read my .bash_profile exports?
AWS Cloud9's Lambda plugin is backed by SAM Local, which uses Docker: https://github.com/awslabs/aws-sam-cli . This means that, by default, the ~/.bash_profile file is not used by the Lambda; you'll want to pass the values in explicitly.
Please see the Using the AWS Serverless Application Model (AWS SAM) article, which describes how to work with environment variables in SAM (and therefore also in Cloud9).
In summary, put the environment variables into the template.yaml file (present in the root folder of your app) like below:
Properties:
  ...  # tons of other properties here, add yours at the end
  Environment:
    Variables:
      MY_ENV_VARIABLE: 'This is my awesome env variable value'
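Those variables are injected into the container when SAM Local runs the function and can be read the usual way. A minimal handler sketch (handler.js is a placeholder name):

// handler.js (placeholder name) -- the variable declared under Environment/Variables
// in template.yaml is available on process.env inside the function:
exports.handler = async () => {
  return {
    statusCode: 200,
    body: process.env.MY_ENV_VARIABLE, // 'This is my awesome env variable value'
  };
};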

Is there a way to avoid storing the AWS_SECRET_KEY on the .ebextensions?

I'm deploying a Django-based project on AWS Elastic Beanstalk.
I have been following the Amazon example, where I add my credentials (ACCESS_KEY/SECRET) to my app.config under the .ebextensions directory.
The same config file has:
container_commands:
  01_syncdb:
    command: "django-admin.py migrate --noinput"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
    leader_only: true
The problem is that this forces me to store my credentials under version control, and I would like to avoid that.
I tried to remove the credentials and then add them with eb setenv, but the problem is that the two Django commands require these settings to be set on the environment.
I'm using the v3 CLI:
eb create -db -c foo bar --profile foobar
where foobar is the name of the profile under ~/.aws/credentials, and where I want to keep my secret credentials.
What is the best security practices for the AWS credentials using EB?
One solution is to keep the AWS credentials, but create a policy that ONLY allows them to POST objects on the one bucket used for /static.
I ended up removing the collectstatic step from the config file and simply taking care of uploading the static files on the build side.
After that, all credentials can be removed, and all other boto commands will pick up credentials from the IAM role attached to the EC2 instance.
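For illustration, the trimmed .ebextensions config then only needs the migrate step; a sketch of what remains once collectstatic moves to the build side:

container_commands:
  01_syncdb:
    command: "django-admin.py migrate --noinput"
    leader_only: true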
