How to pass value of NODE_EXTRA_CA_CERTS to AWS Lambda deployed with Serverless? - node.js

I am deploying a Node AWS Lambda with Serverless. Due to the internal requirements of the institution in which this code will run, I need to pass extra certificates. The only solution I've been able to find is to pass NODE_EXTRA_CA_CERTS as a CLI argument. Using typical environment variables (defined, for example, in dotenv) does not work because by the time they are loaded, Node has already configured its certificates.
My extra certs are in MyCerts.pem in the project root, and the Lambda function I'm trying to run is called function1. Running the Lambda locally with NODE_EXTRA_CA_CERTS=./MyCerts.pem npx serverless invoke local -f function1 -l works correctly. However, once I deploy to AWS using npx serverless deploy -v, I cannot find a way to properly include these additional certs, including by invoking from the CLI using NODE_EXTRA_CA_CERTS=./MyCerts.pem npx serverless invoke -f function1 -l.
I've tried everything I can think of and am at a loss. Can someone help?

I think this should definitely be possible in AWS Lambda.
There is an example on dev.to [1] which is similar to your use case.
However, they are using .NET Core and AWS SAM, but it should be easy to adapt the solution to Serverless and Node.js.
Basically, you need two steps:
1. Create a Lambda layer which holds your additional certificate file [2][3]
2. Add the environment variable NODE_EXTRA_CA_CERTS to your serverless.yml and point it at the file you uploaded in your Lambda layer [4] (see the sketch below)
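A minimal serverless.yml sketch of those two steps, assuming the certificate sits in a local layers/certs/ directory as cacerts.pem (these names, the runtime, and the handler are illustrative, not taken from the question; layer contents are extracted under /opt at runtime):

service: my-service

provider:
  name: aws
  runtime: nodejs14.x
  environment:
    # layer contents are mounted under /opt inside the Lambda runtime
    NODE_EXTRA_CA_CERTS: /opt/cacerts.pem

layers:
  certs:
    path: layers/certs            # directory containing cacerts.pem
    description: Extra CA certificates

functions:
  function1:
    handler: handler.function1
    layers:
      # Serverless generates a CloudFormation resource named <LayerName>LambdaLayer
      - { Ref: CertsLambdaLayer }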
References
[1] https://dev.to/leading-edje/aws-lambda-layer-for-private-certificates-465j
[2] https://www.serverless.com/plugins/serverless-layers
[3] https://www.serverless.com/blog/publish-aws-lambda-layers-serverless-framework
[4] https://www.serverless.com/blog/serverless-v1.2.0

I don't think NODE_EXTRA_CA_CERTS works in Lambda. I tried setting it as an environment variable pointing to a dummy file that doesn't exist. It did not generate a warning as stated by the documentation, so I assume it was ignored.
A message will be emitted (once) with process.emitWarning() if the file is missing or malformed, but any errors are otherwise ignored.
I assume it's because of this warning:
This environment variable is ignored when node runs as setuid root or has Linux file capabilities set.
Here is another question confirming it doesn't work.
I was able to get Node.js to pay attention to NODE_EXTRA_CA_CERTS by starting another node process from the Lambda function. That second process gave me the warning I was looking for:
Warning: Ignoring extra certs from `hello.pem`, load failed: error:02001002:system library:fopen:No such file or directory
I am sure there are a lot of downsides for starting a secondary process to handle the request (concurrency would be my main concern), but it's a workaround you can try.
// copy the current environment and add NODE_EXTRA_CA_CERTS for the child process
const ca_env = Object.assign({}, process.env, { NODE_EXTRA_CA_CERTS: "hello.pem" });
// the second node process does honor NODE_EXTRA_CA_CERTS and emits the warning
require("child_process").execSync('node -e "console.log(\'hello\')"', { env: ca_env });

I ran into this problem too. Building on Martin Loper's inputs, I set the environment variable NODE_EXTRA_CA_CERTS to /var/task/ followed by the path inside my Lambda code base where the cert is actually located. For example, it would look like /var/task/certs/myCustomCA.pem if I had a folder certs in my Lambda containing myCustomCA.pem.
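As a rough serverless.yml sketch of that setup (assuming the cert is bundled with the function package in a certs/ folder; the deployed package is unpacked under /var/task at runtime):

provider:
  environment:
    # the function package itself is available under /var/task
    NODE_EXTRA_CA_CERTS: /var/task/certs/myCustomCA.pem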

I worked on the same issue and resolved it by including the cert in the project folder itself; Node was then able to use NODE_EXTRA_CA_CERTS.
A Lambda layer stores the cert in the /opt folder, which I don't think the Node process can read directly in this case.
It works with the /var/task/ prefix.

Related

SAM Lambda Layer module not found for shared nodejs code

I'm defining multiple Lambda functions in a single template.yaml. These functions share some common code that isn't published as modules. I assumed I could turn this common stuff into a versioned layer, with a directory structure roughly as follows:
Project
  LambdaFunc1
    package.json
    node_modules
    func1.js
  LambdaFunc2
    package.json
    node_modules
    func2.js
  common-stuff
    package.json
    my-common.js
  template.yaml
  node_modules
After testing, I copy common-stuff into the Project/node_modules directory, and my other Lambda functions resolve require('common-stuff') because Node walks up the directory structure when it can't find a module locally.
To have SAM do the build/package/deploy, I noticed SAM doesn't touch common-stuff; however, it creates an .aws-sam/build structure with the two other Lambda functions. I had to create a structure for SAM's CodeUri to zip up.
Package/common-stuff/packaged/nodejs/node_modules/common-stuff/ with my package.json and my-common.js.
My package.json uses name: "common-stuff", main: "my-common.js"
There are no other files - nothing under nodejs as I'm only packaging the modules. This appears to me the reason for Layers. I have verified SAM packages a zip file containing nodejs/node_modules/common-stuff/... by downloading the Layer zip file.
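For reference, a rough sketch of how the layer and one function might be wired together in template.yaml (the resource names, runtime, and exact paths are assumptions, not taken from the question):

Resources:
  CommonStuffLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: common-stuff
      # the zip must contain nodejs/node_modules/common-stuff/...
      ContentUri: Package/common-stuff/packaged/
      CompatibleRuntimes:
        - nodejs14.x

  LambdaFunc1:
    Type: AWS::Serverless::Function
    Properties:
      Handler: func1.handler
      Runtime: nodejs14.x
      CodeUri: LambdaFunc1/
      Layers:
        - !Ref CommonStuffLayer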
In the Lambda function template def, I add the permission to allow 'lambda:GetLayerVersion'. When I view the Lambda function in the console, I see this permission along with the others.
Interestingly, aws lambda get-layer-version-policy --layer-name MyLayer --version-number 8 --output text
returns an error that there are no policies attached. My guess is that is because I've directly added it to the function, as I see it on the Lambda function with the correct Allow/GetLayerVersion.
This would seem to satisfy what I've read; however, Node doesn't find the module. CloudWatch logs just say it can't find the module, nothing about permissions or syntax. Also, these functions worked until I added the layer approach.
'sam local start-api' doesn't work either, same error. When I look in the Windows 10 default layer cache directory C:\Users\me\AppData\Roaming\AWS SAM\ there is an empty layers-pkg directory.
Is there some other magic I'm missing? Is there a better approach for sharing common code across Node Lambda functions?
I can't tell if AWS can't get the Layer, or the zip structure is wrong, or the require('common-stuff') is different (hope not).
Scott

Understanding require in nodejs related to the AWS module

I'm going through a project where in app.js the AWS module is required and its config is set via AWS.config.update. In a later file, AWS is required again but this time it uses the credentials set in the app.js file earlier. How does this work? I would have thought that we need to set the credentials again since the module is being re-imported in a different file.
It would help to see the project structure or files but here is what I am thinking:
app.js is run first (I am guessing this is your index) and that's where the credentials are configured originally.
Later, when you require the module again at a different point in the application, there is no need to reconfigure it: app.js already executed at startup, and Node caches required modules, so the second require returns the same AWS module instance with the configuration it already holds.
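A minimal sketch of that behaviour (file names and region are illustrative; this assumes the v2 aws-sdk package, where AWS.config is shared, module-level state):

// app.js — runs first and configures the SDK once
const AWS = require("aws-sdk");
AWS.config.update({ region: "us-east-1" }); // credentials could be set here too
require("./other-file");

// other-file.js — require("aws-sdk") returns the same cached module instance,
// so the configuration set in app.js is still in effect
const AWS = require("aws-sdk");
const s3 = new AWS.S3(); // picks up the region/credentials configured in app.js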

How to get environment variables defined in serverless.yml in tests

I am using the serverless framework for running lambda functions on AWS.
In my serverless.yml there are environment variables that are fetched from SSM.
When I write integration tests for the code, I need the code to have the environment variables and I can't find a good way to do this.
I don't want to duplicate all the variable definitions just for the tests; they are already defined in the serverless.yml. Also, some are secrets that I can't commit to source control, so I would have to repeat them in the CI environment as well.
I tried using the serverless-jest-plugin, but it is not working and not well maintained.
Ideas I had for solutions:
Make the tests exec sls invoke - this will work but would mean that the code cannot be debugged, I won't know the test coverage, and it will be slow.
Parse the serverless.yml myself and export the env variables - possible but rewriting the logic of pulling the SSM variables just for tests seems wrong.
Any ideas?
The solution we ended up using is a serverless plugin called serverless-export-env.
After adding this plugin you can run serverless export-env to export all the resolved environment variables to an .env file. This resolves SSM parameters correctly and made integration testing much simpler for us.
BTW, to load the environment variables from the .env file, use the dotenv npm package.
Credit to grishezz for finding the solution
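A rough sketch of that setup (the plugin and command come from the answer above; loading the generated file in a Jest setup step is an assumption about your test wiring):

# serverless.yml — register the plugin
plugins:
  - serverless-export-env

# before running the tests:
#   npx serverless export-env     # writes the resolved variables to .env
# then load them in a Jest setup file (or at the top of the tests) with:
#   require("dotenv").config();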
You can run node with the --require option to inject a .env file into a serverless command.
1. Create .env at the project root (next to package.json) and list your variables in it.
2. Install serverless and dotenv in the project with yarn add -D serverless dotenv.
3. Run a command like node -r dotenv/config ./node_modules/.bin/sls invoke.
Then you can read the environment variables in the handler via process.env.XXX.
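A small sketch of that flow (MY_VAR and function1 are invented for illustration):

# .env at the project root
MY_VAR=some-value

# invoke with dotenv preloaded so the variable is visible to the serverless process
node -r dotenv/config ./node_modules/.bin/sls invoke local -f function1

Inside the handler, process.env.MY_VAR then resolves to "some-value".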
Are you looking to do mocked unit tests, or something more like integration tests?
In the first case, you don't need real values for the environment variables. Mock your database, or whatever requires environment variables set. This is actually the preferable way because the tests will run super quickly with proper mocks.
If you are actually looking to go with end-to-end/integration kind of approach, then you would do something like sls invoke, but from jest using javascript. So, like regular network calls to your deployed api.
Also, I would recommend not storing keys in serverless.yml. Use environment variables with the secret: ${env:MY_SECRET} syntax instead (https://serverless.com/framework/docs/providers/aws/guide/variables#referencing-environment-variables). If you have a CI/CD build server, you can store your secrets there.
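For example, a minimal serverless.yml fragment (MY_SECRET is a placeholder name):

provider:
  environment:
    # resolved from the shell/CI environment at deploy time
    MY_SECRET: ${env:MY_SECRET}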
After searching, I ended up with my own custom solution:
// 'data' is the object that has the Serverless environment variables
import * as data from './secrets.[stage].json';

if (process.env.NODE_ENV === 'test') {
  process.env = Object.assign(data, process.env);
}
In my case, the Serverless environment variables live in the file secrets.[stage].json.
serverless.yml has:
custom:
  secrets: ${file(secrets.[stage].json)}
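Presumably the secrets are then mapped into the function environment elsewhere in serverless.yml, along these lines (DB_PASSWORD is an invented example key):

provider:
  environment:
    DB_PASSWORD: ${self:custom.secrets.DB_PASSWORD}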

Setting up environment variables in node, specifically, GOOGLE_APPLICATION_CREDENTIALS

I have a node application and I'm trying to use the google language api. I want to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the json file in the same directory (sibling to package.json and app.js).
I had tried process.env.GOOGLE_APPLICATION_CREDENTIALS = "./key.json"; in my app.js file (using express), but it isn't working. I have also tried putting "GOOGLE_APPLICATION_CREDENTIALS":"./key.json" in my package.json, and that didn't work either. It DOES work when I run export GOOGLE_APPLICATION_CREDENTIALS="./key" in the terminal.
Here is the error message:
ERROR: Error: Unexpected error while acquiring application default credentials: Could not load the default credentials. Browse to https://developers.google.com/accounts/docs/application-default-credentials for more information.
Any tips are appreciated, thanks!
After reading and reading about this issue on the internet, the only way to resolve it for me was to declare the environment variable for the node execution:
GOOGLE_APPLICATION_CREDENTIALS="./key.json" node index.js
I was able to print the token from my server console, but when I ran the node application, the library was unable to retrieve the value from the system environment; setting the variable for that execution allowed it to be retrieved.
It could be that the environment variable in your OS was not set correctly. For example, on Linux you usually have to set GOOGLE_APPLICATION_CREDENTIALS in the same terminal where you execute your app.
export GOOGLE_APPLICATION_CREDENTIALS="[PATH]"
Another option is to pass the JSON key path in code. This process is documented for Node.js with Cloud Storage.
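A minimal sketch of that approach with the @google-cloud/storage client (the key file name is an assumption; other Google client libraries accept a similar keyFilename option):

// pass the service-account key path explicitly instead of relying on
// the GOOGLE_APPLICATION_CREDENTIALS environment variable
const { Storage } = require("@google-cloud/storage");
const storage = new Storage({ keyFilename: "./key.json" });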
Just to update this thread.
Relative paths with GOOGLE_APPLICATION_CREDENTIALS now work fine with dotenv :)
Just store your service-account credentials as a file in your project and reference it with a path relative to your .env file. Then it should work fine :)
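A small sketch of that setup (file names are assumed):

# .env
GOOGLE_APPLICATION_CREDENTIALS=./service-account.json

// at the very top of the app entry point, before any Google client is created
require("dotenv").config();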
I encountered the same issue. I was able to get it working by doing the following:
export GOOGLE_APPLICATION_CREDENTIALS="/Users/username/projects/projectname/the.json"
The issue is covered mostly in the docs here. The command varies slightly depending on what your OS is.
The GOOGLE_APPLICATION_CREDENTIALS environment variable you are setting is accessed by the Google client libraries. It won't work with a relative path; you'll need to set the absolute path.

how to make .ebextensions work when deploying a node js application?

I am having trouble understanding how .ebextensions is used when deploying a Node.js application using Elastic Beanstalk. I have created a file called 01run.config in the top directory of my application:
my_app:
|-- server.js
|-- site/(...)
|-- node-modules
|-- .ebextensions/01run.config
The 01run.config file contains my AWS credentials and a parameter referring to an S3 bundle that my app uses.
option_settings:
  - option_name: AWS_SECRET_KEY
    value: MY-AWS-SECRET-KEY
  - option_name: AWS_ACCESS_KEY_ID
    value: MY-AWS-KEY-ID
  - option_name: PARAM1
    value: MY-S3-BUNDLE-ID
After deploying my app using eb create, an .elasticbeanstalk/optionsettings.my_app-env file is created that contains many variables, among which PARAM1 is set to "". The credentials do not appear at all.
I think I read somewhere that .ebextensions is read when the application is initialized, so it is not necessarily bad that I don't see these variables in optionsettings.my_app-env. However, the variables are not set up, and the application does not work properly. I'd appreciate any explanations.
I find the official documentation a bit confusing.
It seems that the problem was that I had not committed .ebextensions to git. Apparently, the file is read on initializing your application, so it has to be part of the bundle sent to Elastic Beanstalk.
I had taken the idea of using the config file to set up the authentication keys from the Amazon documentation http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs_custom_container.html.
However, I had not committed the file because it is clear that you are not supposed to commit your authentication keys (more on this discussion here: How do you pass custom environment variable on Amazon Elastic Beanstalk (AWS EBS)?).
I ended up simplifying the file to contain only the PARAM1 option, and I passed the secret key and access key ID through the Elastic Beanstalk web console.
Your config file example is missing the namespace. You must specify a namespace for each of your option settings.
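For example, something along these lines (aws:elasticbeanstalk:application:environment is the usual namespace for application environment variables; the values are placeholders):

option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: PARAM1
    value: MY-S3-BUNDLE-ID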
You can pass the environment options in the .elasticbeanstalk/optionsettings.[your-env-name] file.
You should have a section called [aws:elasticbeanstalk:application:environment]. It might be populated with PARAM1=...etc. Just add your environment variables under this section. The files in the .elasticbeanstalk directory should not be committed.
After doing eb update you can verify that the options were added if you go to the web-based Elastic Beanstalk console. The new options should show up. I believe that any old options added through the console do not get removed.
