Move configuration from env.yml to SSM in serverless.yml - node.js

My API keys are hard-coded in env.yml and published in our git repo, so I need to move all secrets from my serverless.yml config (loaded via ${file(env.yml)}) to SSM for every environment except the local one.
The idea is to fall back to the local env.yml in case the configuration for an environment (e.g. localhost) is not available on the remote server.
So, for instance, to resolve the value for PRIVATE_API_KEY_<stage>: look up /SHARED/<stage>/PRIVATE_API_KEY in SSM; if not found, look up CEFLA_KEY_VALUE_<stage> in .env.local.
Any clue?
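For reference, the lookup order described in the question can be expressed with Serverless' comma-separated fallback syntax (a sketch; the SSM path is taken from the question, the env.yml key name is an assumption, and ${sls:stage} assumes Serverless v3 — older versions use ${opt:stage, self:provider.stage}):

```yaml
provider:
  environment:
    # Try SSM first; fall back to the key in the local env.yml if the parameter is missing
    PRIVATE_API_KEY: ${ssm:/SHARED/${sls:stage}/PRIVATE_API_KEY, file(./env.yml):PRIVATE_API_KEY_${sls:stage}}
```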

Related

How to emulate AWS Parameter Store on local computer for lambda function development?

I'm using the Serverless framework and Node.js to develop my AWS Lambda function. So far I have used a .env file to store my secrets, so I can access them in serverless.yml like this:
provider:
  ...
  environment:
    DB_HOST: ${env:DB_HOST}
    DB_PORT: ${env:DB_PORT}
But now I need to use AWS Parameter Store instead of .env file. I have tried to find information about how to emulate it on my local machine, but I couldn't.
I think I have to use a single serverless config file for both local and staging. I need a way to somehow select env values either from the .env file (if it's my local machine) or from Parameter Store (if it's AWS Lambda). Is there any way to do it? Thanks!
It should work like this: within your serverless.yml you can reference .env parameters with ${env:keyname} and AWS Parameters using the ${param:keyname} syntax.
If you need to support both of them, you just need to write ${env:keyname, param:keyname}.
Here's an example:
provider:
  ...
  environment:
    ALLOWED_ORIGINS: ${env:ALLOWED_ORIGINS, param:ALLOWED_ORIGINS}
    AUTHORIZER_ARN: ${env:AUTHORIZER_ARN, param:AUTHORIZER_ARN}
    MONGODB_URL: ${env:MONGODB_URL, param:MONGODB_URL}

Handle multiple environment variables in .env - NodeJS

Suppose I have a .env file like this:
#dev variable
PORT=3000

#production
PORT=3030
I read these variables using process.env. How can I manage to sometimes use the dev variable and other times the production one?
You can create multiple .env files like .dev.env and .prod.env, and load the right one based on NODE_ENV.
Storing configuration in environment variables is the way to go, and exactly what is recommended by the Config factor of the 12-Factor App, so you're already starting off on the right foot.
The values of these variables should not be stored with the code, except maybe the ones for your local development environment, which you can even treat as the default values:
const port = process.env.PORT || '3000';
For all other environments, the values should be stored in a safe place like Vault or AWS Secrets Manager, and then handled only by your deployment pipeline. Jenkins, for example, has a credentials plugin for that.

Use variable in Terraform remote backend

# Using a single workspace:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "company"

    workspaces {
      name = "my-app-prod"
    }
  }
}
For a Terraform remote backend, is there a way to use a variable to specify the organization / workspace name instead of the hardcoded values there?
The Terraform documentation didn't seem to mention anything related either.
The backend configuration documentation goes into this in some detail. The main point to note is this:
Only one backend may be specified and the configuration may not contain interpolations. Terraform will validate this.
If you want to make this easily configurable then you can use partial configuration for the static parts (e.g. the type of backend, such as S3) and then provide the rest of the config at run time: interactively, via environment variables, or via command line flags.
I personally wrap Terraform actions in a small shell script that runs terraform init with command line flags that use an appropriate S3 bucket (e.g. a different one for each project and AWS account) and makes sure the state file location matches the path to the directory I am working on.
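A sketch of such a wrapper, assuming an S3 backend with partial configuration (the bucket name, state key and region here are placeholders, and AWS_ACCOUNT is assumed to be set by the caller):

```shell
#!/bin/sh
# Pass the backend settings that can't be interpolated in the backend block
# as -backend-config flags at init time.
terraform init \
  -backend-config="bucket=my-tf-state-${AWS_ACCOUNT}" \
  -backend-config="key=${PWD##*/}/terraform.tfstate" \
  -backend-config="region=us-east-1"
```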
I had the same problems and was very disappointed with the need for additional init/wrapper scripts. Some time ago I started to use Terragrunt.
It's worth taking a look at Terragrunt, because it closes the gap between Terraform and its inability to use variables in some places, e.g. in the remote backend configuration:
https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry
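As an illustration of that approach, a terragrunt.hcl might generate the backend configuration from built-in functions instead of hardcoded values (a sketch; bucket name and region are placeholders, and `get_aws_account_id()` / `path_relative_to_include()` are Terragrunt built-ins):

```hcl
remote_state {
  backend = "s3"
  config = {
    bucket = "my-tf-state-${get_aws_account_id()}"
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
  }
}
```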

using bitbucket-pipelines deploy same branch to multiple environments

I have three environments in AWS (dev/uat/prod), and I want to deploy the same branch (develop) to all three environments using bitbucket-pipelines. As I understand it, we need an AWS_ACCESS_KEY_ID to do so.
My question is: how do I provide the AWS_ACCESS_KEY_ID for all three environments dynamically?
At the moment I am only able to deploy to one environment at a time.
Thanks for the help in advance.
There's a number of client libraries that allow you to parametrize AWS credentials without having to store them in environment-specific config files. You didn't specify what AWS service you want to use, but here's an example for S3: s3_website
Their config file looks like this; you can configure multiple sets of variables.
s3_id: <%= ENV['S3_ID'] %>
s3_secret: <%= ENV['S3_SECRET'] %>
If this doesn't work for you, write a shell/python script around AWS CLI and pull the environment-specific variables into AWS config file yourself. Manage that script as part of your source code or a docker image.
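Another option is Bitbucket's own deployment environments: a sketch of a bitbucket-pipelines.yml where the develop branch deploys to all three environments, with AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY defined per environment under the repository's Deployments settings (the environment names and the S3 sync command are assumptions for illustration):

```yaml
pipelines:
  branches:
    develop:
      - step:
          name: Deploy to dev
          deployment: dev          # uses the dev environment's AWS variables
          script:
            - aws s3 sync ./build "s3://$S3_BUCKET"
      - step:
          name: Deploy to uat
          deployment: uat
          trigger: manual          # promote manually after dev succeeds
          script:
            - aws s3 sync ./build "s3://$S3_BUCKET"
      - step:
          name: Deploy to prod
          deployment: production
          trigger: manual
          script:
            - aws s3 sync ./build "s3://$S3_BUCKET"
```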

How to make .ebextensions work when deploying a Node.js application?

I am having trouble understanding how .ebextensions is used when deploying a Node.js application with Elastic Beanstalk. I have created a file called 01run.config in the top directory of my application:
my_app:
|-- server.js
|-- site/(...)
|-- node_modules/
|-- .ebextensions/01run.config
The file .ebextensions/01run.config contains my AWS credentials and a parameter referring to an S3 bundle that my app uses.
option_settings:
  - option_name: AWS_SECRET_KEY
    value: MY-AWS-SECRET-KEY
  - option_name: AWS_ACCESS_KEY_ID
    value: MY-AWS-KEY-ID
  - option_name: PARAM1
    value: MY-S3-BUNDLE-ID
After deploying my app using eb create, an .elasticbeanstalk/optionsettings.my_app-env file is created that contains many variables, among which PARAM1 is set to "". The credentials do not appear either.
I think I read somewhere that .ebextensions is read when initializing the application, so it is not necessarily bad that I don't see these variables in optionsettings.my_app-env. However, the variables are not set up, and the application does not work properly. I'd appreciate any explanations.
I find the official documentation a bit confusing to understand.
It seems that the problem was that I had not committed .ebextensions to git. Apparently, the file is read when initializing your application, so it has to be part of the bundle sent to Elastic Beanstalk.
I had taken the idea of using the config file to set up the authentication keys from the Amazon documentation http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs_custom_container.html.
However, I had not committed the file because you are clearly not supposed to commit your authentication keys (more on this discussion here: How do you pass custom environment variables on Amazon Elastic Beanstalk (AWS EBS)?).
I ended up simplifying the file to contain only the PARAM1 option, and I passed the secret key and access key id through the Elastic Beanstalk online interface.
Your config file example is missing the namespace. You must specify a namespace for each of your option settings.
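For example, the PARAM1 option from the question with a namespace added might look like this (a sketch; aws:elasticbeanstalk:application:environment is the namespace Elastic Beanstalk uses for environment variables):

```yaml
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: PARAM1
    value: MY-S3-BUNDLE-ID
```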
You can pass the environment options in the .elasticbeanstalk/optionsettings.[your-env-name] file.
You should have a section called [aws:elasticbeanstalk:application:environment]. It might be populated with PARAM1=... etc. Just add your environment variables under this section. The files in the .elasticbeanstalk directory should not be committed.
After running eb update you can verify that the options were added by going to the web-based Elastic Beanstalk console; the new options should show up. I believe that any old options added through the console do not get removed.
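As a sketch, the relevant section of the optionsettings file might then look like this (the value is the placeholder from the question):

```
[aws:elasticbeanstalk:application:environment]
PARAM1=MY-S3-BUNDLE-ID
```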