how to make .ebextensions work when deploying a node js application? - node.js

I am having trouble understanding how .ebextensions is used when deploying a Node.js application with Elastic Beanstalk. I have created a file called 01run.config in the top directory of my application:
my_app:
|-- server.js
|-- site/(...)
|-- node_modules
|-- .ebextensions/01run.config
The file .ebextensions/01run.config contains my AWS credentials and a parameter referring to an S3 bundle that my app uses:
option_settings:
  - option_name: AWS_SECRET_KEY
    value: MY-AWS-SECRET-KEY
  - option_name: AWS_ACCESS_KEY_ID
    value: MY-AWS-KEY-ID
  - option_name: PARAM1
    value: MY-S3-BUNDLE-ID
After deploying my app using eb create, a file .elasticbeanstalk/optionsettings.my_app-env is created that contains many variables, among which PARAM1 is set to "". The credentials do not appear either.
I think I read somewhere that .ebextensions is read when the application is initialized, so it is not necessarily bad that I don't see these variables in optionsettings.my_app-env. However, the variables are not set up, and the application does not work properly. I'd appreciate any explanation.
I find the official documentation a bit confusing.

It seems that the problem was that I had not committed .ebextensions to git. Apparently, the file is read when the application is initialized, so it has to be part of the bundle sent to Elastic Beanstalk.
I had taken the idea of using the config file to set up the authentication keys from the amazon documentation http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs_custom_container.html.
However, I had not committed the file because it is clear that you are not supposed to commit your authentication keys (more on this discussion here: How do you pass custom environment variable on Amazon Elastic Beanstalk (AWS EBS)?).
I ended up simplifying the file to contain only the PARAM1 option, and I passed the secret key and access key ID through the Elastic Beanstalk web interface.

Your config file example is missing the namespace. You must specify a namespace for each of your option settings.
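For reference, a sketch of what the corrected config could look like, assuming the variables belong under the aws:elasticbeanstalk:application:environment namespace (the one Elastic Beanstalk uses for application environment variables):

```yaml
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: PARAM1
    value: MY-S3-BUNDLE-ID
```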

You can pass the environment options in the .elasticbeanstalk/optionsettings.[your-env-name] file.
You should have a section called [aws:elasticbeanstalk:application:environment]. It might be populated with PARAM1=...etc. Just add your environment variables under this section. The files in the .elasticbeanstalk directory should not be committed.
After doing eb update you can verify that the options were added by going to the web-based Elastic Beanstalk console; the new options should show up. I believe that old options added through the console do not get removed.

Related

Move configuration from env.yml to ssm on serverless.yml

My API keys are hard-coded in env.yml and published on our git, so I need to move all secrets from my serverless.yml config (using ${file(env.yml)}) to SSM for all environments except the local one.
The idea is to fall back to the local env.yml in case the configuration for one environment (i.e. localhost) is not available on the remote server.
So, for instance, to find the value for PRIVATE_API_KEY_<stage>, look up SSM for /SHARED/<stage>/PRIVATE_API_KEY; if not found, look up .env.local for CEFLA_KEY_VALUE_<stage>.
Any clue?
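Something like the following sketch might work, assuming the Serverless Framework's comma-separated fallback syntax for variables (everything after the comma is used when the first source resolves to nothing; the stage default and key names here are placeholders mirroring the question):

```yaml
provider:
  environment:
    # try SSM first; fall back to the local env file when the parameter is absent
    PRIVATE_API_KEY: ${ssm:/SHARED/${opt:stage, 'local'}/PRIVATE_API_KEY, file(env.yml):PRIVATE_API_KEY_${opt:stage, 'local'}}
```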

Azure static web app environment variable

I am trying to publish a Gatsby.js site as an Azure Static Web App.
I have a plugin (gatsby-source-contentful).
I need to pass variables like:
{
  resolve: `gatsby-source-contentful`,
  options: {
    spaceId: process.env.CONTENTFUL_SPACE_ID,
    accessToken: process.env.CONTENTFUL_ACCESS_TOKEN,
  },
},
Error:
Running 'npm run build'...
> gatsby-starter-default@0.1.0 build /github/workspace
> gatsby build
success open and validate gatsby-configs - 0.021s
error Invalid plugin options for "gatsby-source-contentful":
- "accessToken" is required
- "spaceId" is required
not finished load plugins - 0.905s
Where can I pass this?
Thanks.
For Azure Static Web Apps there are two ways to set environment variables: one for front-end and one for back-end scenarios.
Since you are using Gatsby, I guess it's safe to assume you are building your front-end. For that you will need to add the environment variables to your build configuration (azure-static-web-apps-.yml).
Like so:
env: # Add environment variables here
  CONTENTFUL_SPACE_ID: <your-id>
Here is the link for that in the documentation.
Not to be confused with this one, which is used for defining back-end environment variables.
They are called environment variables. They are intended to store sensitive data such as tokens, identifiers, etc., and they shouldn't be pushed to your repository, so you should ignore them (in your .gitignore file).
By default, Gatsby creates two environments without telling you, one for each compilation method:
gatsby develop: uses .env.development
gatsby build: uses .env.production
Note: you can change this behavior if needed and add your own environments using custom NODE_ENV commands.
So, to pass your data to your gatsby-config.js you just need to create two files (.env.development and .env.production) at the root of your project. Then, add the following snippet at the top of your gatsby-config.js:
require("dotenv").config({
  path: `.env.${process.env.NODE_ENV}`,
})
Note: dotenv is already a dependency of Gatsby so you don't need to install it again
This will tell Gatsby where to take the environment variables.
All that remains is to populate both environment files. Look for the credentials in Contentful and add them to the files using the same naming as you've set in your gatsby-config.js:
CONTENTFUL_SPACE_ID=123456789
CONTENTFUL_ACCESS_TOKEN=123456789
Keep in mind that when dealing with Azure, Netlify, AWS, or similar CI/CD tools, you'll need to provide the same environment variables to the server to avoid breaking the build when pushing your changes.

How to pass value of NODE_EXTRA_CA_CERTS to AWS Lambda deployed with Serverless?

I am deploying a Node AWS Lambda with Serverless. Due to the internal requirements of the institution in which this code will be run, I need to pass extra certificates. The only solution I've been able to find is to pass NODE_EXTRA_CA_CERTS as a CLI argument. Using typical environmental variables (defined, for example, in dotenv) does not work because by that point in Node the certificates have already been configured.
My extra certs are in MyCerts.pem in the project root, and the Lambda function I'm trying to run is called function1. Running the Lambda locally with NODE_EXTRA_CA_CERTS=./MyCerts.pem npx serverless invoke local -f function1 -l works correctly. However, once I deploy to AWS using npx serverless deploy -v, I cannot find a way to properly include these additional certs, including by invoking from the CLI using NODE_EXTRA_CA_CERTS=./MyCerts.pem npx serverless invoke -f function1 -l.
I've tried everything I can think of and am at a loss. Can someone help?
I think this should definitely be possible in AWS Lambda.
There is an example on dev.to [1] which is similar to your use case.
It uses .NET Core and AWS SAM, but it should be easy to adapt the solution to Serverless and Node.js.
Basically, you need two steps:
Create a Lambda layer which holds your additional certificate file [2][3]
Add the environment variable NODE_EXTRA_CA_CERTS to your serverless.yml and point the path at the file you uploaded in your Lambda layer [4]
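A sketch of those two steps in serverless.yml (the layer name, directory, and handler are assumptions; the CloudFormation reference follows the framework's TitleCase-plus-LambdaLayer naming convention):

```yaml
layers:
  certs:
    path: layer # directory containing MyCerts.pem

functions:
  function1:
    handler: handler.main
    layers:
      - { Ref: CertsLambdaLayer } # reference to the layer defined above
    environment:
      # layer contents are mounted under /opt at runtime
      NODE_EXTRA_CA_CERTS: /opt/MyCerts.pem
```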
References
[1] https://dev.to/leading-edje/aws-lambda-layer-for-private-certificates-465j
[2] https://www.serverless.com/plugins/serverless-layers
[3] https://www.serverless.com/blog/publish-aws-lambda-layers-serverless-framework
[4] https://www.serverless.com/blog/serverless-v1.2.0
I don't think NODE_EXTRA_CA_CERTS works in Lambda. I tried setting it as an environment variable pointing to a dummy file that doesn't exist. It did not generate the warning stated by the documentation, so I assume it was ignored.
A message will be emitted (once) with process.emitWarning() if the file is missing or malformed, but any errors are otherwise ignored.
I assume it's because of this warning:
This environment variable is ignored when node runs as setuid root or has Linux file capabilities set.
Here is another question confirming it doesn't work.
I was able to get Node.js to pay attention to NODE_EXTRA_CA_CERTS by starting another node process from the Lambda function. That second process gave me the warning I was looking for:
Warning: Ignoring extra certs from `hello.pem`, load failed: error:02001002:system library:fopen:No such file or directory
I am sure there are a lot of downsides for starting a secondary process to handle the request (concurrency would be my main concern), but it's a workaround you can try.
// copy the current environment and add the extra-certs variable
const ca_env = Object.assign({}, process.env, { NODE_EXTRA_CA_CERTS: "hello.pem" });
require("child_process").execSync('node -e "console.log(\'hello\')"', { env: ca_env });
I ran into this problem too and, taking Martin Loper's inputs, I set the environment variable NODE_EXTRA_CA_CERTS to the path inside my Lambda code base where the cert is actually located, prefixed with /var/task/. E.g. it would look like /var/task/certs/myCustomCA.pem if I had a folder certs in my Lambda with myCustomCA.pem.
I worked on the same issue and resolved it by uploading the cert in the project folder; then Node should be able to use NODE_EXTRA_CA_CERTS.
A Lambda layer stores the cert in the /opt folder, which I think the Node module doesn't have access to read directly.
It works with the /var/task/ prefix.

Understanding require in nodejs related to the AWS module

I'm going through a project where in app.js the AWS module is required and its config is set via AWS.config.update. In a later file, AWS is required again but this time it uses the credentials set in the app.js file earlier. How does this work? I would have thought that we need to set the credentials again since the module is being re-imported in a different file.
It would help to see the project structure or files but here is what I am thinking:
app.js runs first (I am guessing this is your index) and that's where the credentials are configured originally.
Later, when you require the module again at a different point in the application, there is no need to reconfigure the AWS module: app.js already executed at startup, and the module still holds the configuration it was given, because Node caches modules and every require("aws-sdk") returns the same instance.

config prod, config stage and keys files are good practice?

I am using config files to store defaults and passwords/tokens/keys.
Defaults are no problem to be public.
Obviously I want passwords to remain secret.
I mean: not to push them to GitHub.
I thought about making a configs directory containing the following files:
common.js: everybody can see it.
keys.js: passwords/tokens/keys. Shouldn't be pushed to GitHub; a .gitignore entry prevents this.
keys-placeholder.js: contains just placeholders, so whoever clones the project knows to create a keys.js file and put in his real passwords.
Is this good practice? How do you keep passwords from being pushed to GitHub while also making it comfortable to build the project for the first time?
Personally, I use config for public app configuration/constants and .env file and dotenv package for secrets.
Then add .env in .gitignore.
So an example project would look like:
config // app configuration/constants
- prod.json
- dev.json
- test.json
.env // secrets
src/
- models
- app.js
...
----- added -----
Why don't you put the config in the src dir?
A: Of course it's totally up to you where to put your config folder.
It's just a matter of preference.
What about staging config?
A: Like question#1, you can add staging.json under config.
If you don't provide any placeholder file for .env, how do I know which passwords I should fill into this file?
A: Typical .env file looks like below.
API_CREDENTIAL=your api credentials
DB_PASSWORD=your db password
How do you lazy-load the prod/dev config files into the node app?
A: I don't see much benefit in lazy-loading small JSON files.
If you're asking for a specific how-to guide for the config and dotenv libraries,
please refer to their GitHub repositories (config, dotenv).
