How to emulate AWS Parameter Store on a local computer for Lambda function development? - node.js

I'm using the Serverless Framework and Node.js to develop my AWS Lambda function. So far I have used a .env file to store my secrets, so I can access them in serverless.yml like this:
provider:
  ...
  environment:
    DB_HOST: ${env:DB_HOST}
    DB_PORT: ${env:DB_PORT}
But now I need to use AWS Parameter Store instead of the .env file. I have tried to find information about how to emulate it on my local machine, but I couldn't.
I think I have to use one serverless config file for both local and staging, so I need a way to select env values either from the .env file (if it's my local machine) or from Parameter Store (if it's AWS Lambda). Is there any way to do this? Thanks!

It should work like this: within your serverless.yml you can reference .env parameters with ${env:keyname} and AWS Parameters with the ${param:keyname} syntax.
If you need to support both of them, you just write ${env:keyname, param:keyname}; the second source acts as a fallback when the first cannot be resolved.
Here's an example:
provider:
  ...
  environment:
    ALLOWED_ORIGINS: ${env:ALLOWED_ORIGINS, param:ALLOWED_ORIGINS}
    AUTHORIZER_ARN: ${env:AUTHORIZER_ARN, param:AUTHORIZER_ARN}
    MONGODB_URL: ${env:MONGODB_URL, param:MONGODB_URL}

Related

Move configuration from env.yml to SSM in serverless.yml

My API keys are hard-coded in env.yml and published in our git repo, so I need to move all secrets from my serverless.yml config (using ${file(env.yml)}) to SSM for all environments except the local one.
The idea is to fall back to the local env.yml in case the configuration for one environment (i.e. localhost) is not available on the remote server.
So, for instance, to find the value for PRIVATE_API_KEY_<stage>, look up SSM for /SHARED/<stage>/PRIVATE_API_KEY; if not found, look up .env.local for CEFLA_KEY_VALUE_<stage>.
Any clue?
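Serverless variables support comma-separated fallbacks, so an SSM lookup can fall back to a file entry in a single expression. A sketch, assuming the parameter path from the question and a ${sls:stage} stage variable (Serverless v3 syntax; the env.yml key layout is an assumption):

```yaml
provider:
  environment:
    # Try SSM first; fall back to the local env.yml when the parameter is missing
    PRIVATE_API_KEY: ${ssm:/SHARED/${sls:stage}/PRIVATE_API_KEY, file(env.yml):PRIVATE_API_KEY_${sls:stage}}
```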

Handle multiple environment variables in .env (Node.js)

Suppose I have a .env file like this:
#dev variable
PORT=3000
#production
PORT=3030
And I read these variables using process.env. How can I manage to sometimes use the dev variable and other times the production one?
You can create multiple .env files like .dev.env and .prod.env, and load the right one based on NODE_ENV.
Storing configuration in environment variables is the way to go, and exactly what is recommended by the config factor of the 12-Factor App, so you're already starting on the right foot.
The values of these variables should not be stored with the code, except maybe the ones for your local development environment, which you can even use as the default values:
const port = process.env.PORT || '3000';
For all other environments, the values should be stored in a safe place like Vault or AWS Secrets Manager, and then handled only by your deployment pipeline. Jenkins, for example, has a credentials plugin for that.

Node.js environment variables vs config file

Actually I have a Node.js Express app with a config file for params like host, port, JWT token, DB params and more.
The question is whether it would make sense to keep those params directly in environment variables (without any config file) and access them without having to require the config in every component and module.
All the examples I see use a config file; is that about security or memory?
A config file is usually for setting the default values for your environment variables, which is needed when you are writing test cases and need default or mock values. It also keeps all the env variables in one place, which makes them easier to manage.
So if you have an environment variable x, in the config file you can keep it as
config.x = process.env.x || 'defaultValue or mockValue'
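A minimal config module along those lines might look like this (the variable names and defaults are illustrative, not from the question):

```javascript
// config.js: one place for all environment-driven settings, with defaults
// suitable for local development or tests.
const config = {
  host: process.env.HOST || 'localhost',
  port: Number(process.env.PORT || 3000),
  jwtSecret: process.env.JWT_SECRET || 'dev-only-secret',
};

module.exports = config;
```

Modules then require('./config') instead of touching process.env directly, which also gives tests a single object to override.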
A config file lets you very quickly set up the entire environment of a machine, e.g. S3 buckets, API URLs, access keys, etc. If you separated these into individual process.env.VARIABLE entries, you would need to set each of them... for which you would likely write a script... and now you have an environment file again!
To access environment variables you can use process.env.VARIABLE in your Node.js code (the value is always a string), as long as the variable is set before the process is started.
Another possibility is using a .env file in Node.js; for that you have to npm install dotenv in your application. Ideally each instance (dev, prod, ...) has its own .env file, and you don't have to call require("dotenv") every time you want to access an environment variable: call it once at the very beginning, e.g. in app.js, and you can access the environment variables in any of the sub-files.

How to get environment variables defined in serverless.yml in tests

I am using the serverless framework for running lambda functions on AWS.
In my serverless.yml there are environment variables that are fetched from SSM.
When I write integration tests for the code, I need the code to have the environment variables and I can't find a good way to do this.
I don't want to duplicate all the variable definitions just for the tests; they are already defined in serverless.yml. Also, some are secrets and I can't commit them to source control, so I would have to also repeat them in the CI environment.
I tried using the serverless-jest-plugin, but it is not working and not well maintained.
Ideas I had for solutions:
Make the tests exec sls invoke: this would work, but it would mean the code cannot be debugged, I won't know the test coverage, and it will be slow.
Parse the serverless.yml myself and export the env variables: possible, but rewriting the logic of pulling the SSM variables just for tests seems wrong.
Any ideas?
The solution we ended up using is a serverless plugin called serverless-export-env.
After adding this plugin you can run serverless export-env to export all the resolved environment variables to a .env file. This resolves SSM parameters correctly and made integration testing much simpler for us.
BTW, to get the environment variables set from the .env file, use the dotenv npm package.
Credit to grishezz for finding the solution
You can run node with --require option to inject .env file to a serverless command.
Create .env at the project root with package.json, and list variables in .env.
Install serverless and dotenv in the project by yarn add -D serverless dotenv.
Run a command like node -r dotenv/config ./node_modules/.bin/sls invoke.
Then, you can get environment variables in the handler process.env.XXX.
Are you looking to do mocked unit tests, or something more like integration tests?
In the first case, you don't need real values for the environment variables. Mock your database, or whatever requires environment variables set. This is actually the preferable way because the tests will run super quickly with proper mocks.
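The mocked approach works best when the handler receives its dependencies instead of building them from env vars internally. A sketch, where makeHandler and the stub are hypothetical names for illustration:

```javascript
// Build the handler from its dependencies rather than constructing them
// from process.env inside the handler body; tests then inject stubs.
function makeHandler(db) {
  return async () => ({
    statusCode: 200,
    body: JSON.stringify(await db.listItems()),
  });
}

// In a unit test, pass a stub so no real env variables are needed:
const stubDb = { listItems: async () => ['a', 'b'] };
const handler = makeHandler(stubDb);
```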
If you are actually looking to go with end-to-end/integration kind of approach, then you would do something like sls invoke, but from jest using javascript. So, like regular network calls to your deployed api.
Also, I would recommend not storing keys in serverless.yml. Use the secret: ${env:MY_SECRET} syntax (https://serverless.com/framework/docs/providers/aws/guide/variables#referencing-environment-variables) and environment variables instead. If you have a CI/CD build server, you can store your secrets there.
After searching, I ended up with this custom solution:
import * as data from './secrets.[stage].json'

// 'data' is the object that holds the Serverless environment variables
if (process.env.NODE_ENV === 'test') {
  process.env = Object.assign(data, process.env);
}
In my case the Serverless environment variables live in the file secrets.[stage].json, and serverless.yml has
custom:
  secrets: ${file(secrets.[stage].json)}

Using bitbucket-pipelines to deploy the same branch to multiple environments

I have three environments in AWS (dev/uat/prod), and I want to deploy the same branch (develop) to all three environments using bitbucket-pipelines. As I understand it, we need an AWS_ACCESS_KEY_ID to do so.
My question is: how do I provide the AWS_ACCESS_KEY_ID for all three environments dynamically?
As of now I am only able to deploy to one environment at a time.
Thanks for the help in advance.
There are a number of client libraries that allow you to parametrize AWS credentials without having to store them in environment-specific config files. You didn't specify which AWS service you want to use, but here's an example for S3: s3_website.
Their config file looks like this; you can configure multiple sets of variables.
s3_id: <%= ENV['S3_ID'] %>
s3_secret: <%= ENV['S3_SECRET'] %>
If this doesn't work for you, write a shell/Python script around the AWS CLI and pull the environment-specific variables into the AWS config file yourself. Manage that script as part of your source code or a Docker image.
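Bitbucket Pipelines also has built-in deployment environments, each with its own variable set (including its own AWS_ACCESS_KEY_ID), selected per step via the deployment keyword. A sketch of bitbucket-pipelines.yml, where the step names and the deploy script are assumptions:

```yaml
pipelines:
  branches:
    develop:
      - step:
          name: Deploy to dev
          deployment: dev   # variables from the 'dev' deployment environment
          script:
            - ./deploy.sh dev
      - step:
          name: Deploy to uat
          deployment: uat
          trigger: manual   # promote manually
          script:
            - ./deploy.sh uat
      - step:
          name: Deploy to prod
          deployment: prod
          trigger: manual
          script:
            - ./deploy.sh prod
```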
