I'm defining multiple Lambda functions in a single template.yaml. These functions share some common code that isn't published as an npm module. I assumed I could turn this common code into a versioned layer, with a directory structure along these lines:
Project
  LambdaFunc1
    package.json
    node_modules
    func1.js
  LambdaFunc2
    package.json
    node_modules
    func2.js
  common-stuff
    package.json
    my-common.js
  template.yaml
  node_modules
After testing locally, I copy common-stuff into the Project/node_modules directory, and my other Lambda functions resolve require('common-stuff') because Node walks up the directory tree for modules it can't find.
To have SAM do the build/package/deploy, I noticed that SAM doesn't touch common-stuff; it only creates an .aws-sam/build structure for the two Lambda functions. So I had to create a structure myself for SAM's CodeUri to zip up:
Package/common-stuff/packaged/nodejs/node_modules/common-stuff/ containing my package.json and my-common.js.
My package.json uses name: "common-stuff" and main: "my-common.js".
There are no other files - nothing else under nodejs, as I'm only packaging the module, which appears to me to be the whole point of Layers. I have verified (by downloading the Layer zip file) that SAM packages a zip containing nodejs/node_modules/common-stuff/...
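For reference, the whole package.json is essentially just this (the version number is illustrative):

{
  "name": "common-stuff",
  "version": "1.0.0",
  "main": "my-common.js"
}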
In the Lambda function's template definition, I add a permission to allow 'lambda:GetLayerVersion'. When I view the Lambda function in the console, I see this permission along with the others.
Interestingly, aws lambda get-layer-version-policy --layer-name MyLayer --version-number 8 --output text
returns an error that there are no policies attached. My guess is that's because I've added the permission directly to the function, since I can see it on the Lambda function with the correct Allow/GetLayerVersion.
This would seem to satisfy everything I've read, yet Node doesn't find the module. The CloudWatch logs just say it can't find the module - nothing about permissions or syntax. These functions also worked until I added the Layer approach.
'sam local start-api' doesn't work either, same error. When I look in the Windows 10 default layer cache directory C:\Users\me\AppData\Roaming\AWS SAM\ there is an empty layers-pkg directory.
Is there some other magic I'm missing? Is there a better approach for sharing common code across Node Lambda functions?
I can't tell whether AWS can't get the Layer, the zip structure is wrong, or the require('common-stuff') needs to change (I hope not).
Scott
I currently have a project in AWS with several Lambda functions, most of them in Node.js. I want to know whether there is a way to create a Lambda layer with my own code that I use across different Lambdas, without publishing it to npm. I have already searched older questions (question-1, question-2), but those were not answered.
Thanks for the help!
1. Create a folder on your local machine called nodejs.
2. Put your "shared" logic in that folder, e.g. nodejs/shared.js.
3. Zip the nodejs folder and upload it as a layer.
4. In your Lambda code, require the shared file as const shared = require('/opt/nodejs/shared.js').
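As a minimal sketch (shared.js and the greet helper are just illustrative names), the layer file and a handler using it could look like:

// nodejs/shared.js - zipped into the layer as nodejs/shared.js
exports.greet = (name) => `Hello, ${name}!`;

// handler in a Lambda function that has the layer attached
const shared = require('/opt/nodejs/shared.js'); // layers are extracted under /opt at runtime

exports.handler = async () => {
  return { statusCode: 200, body: shared.greet('layer') };
};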
Links:
Lambda layers: https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
Detailed guide: https://explainexample.com/computers/aws/aws-lambda-layers-node-app
Using layers with SAM: https://docs.aws.amazon.com/serverlessrepo/latest/devguide/sharing-lambda-layers.html
So I'm trying to set up a function in AWS Lambda to run some python code I imported from a zip.
I've edited the handler setting to point at the file and then the function I want to run.
I've tried keeping the file in the directory created when I imported the zip, and I then moved it to the main function directory. Neither worked.
Not too sure what is wrong here. The full error returned when I run a test is:
Response
{
  "errorMessage": "Unable to import module 'main': No module named 'main'",
  "errorType": "Runtime.ImportModuleError",
  "stackTrace": []
}
Edit: really new to Lambda so please excuse any silly mistakes
The problem is that, while you appear to have a module named main, it has not been deployed to the Lambda service yet. When you click Test, Lambda runs the deployed code. Perhaps your module was renamed to main some time after your initial deployment?
Local changes to code need to be saved and then deployed. The deploy step is important because until you deploy the code, the Lambda service will continue to run the previous code.
This has actually been a common problem historically in the Lambda console, but enhancements have been made to make it more obvious that a deployment is needed. For example, the console now shows "Changes not deployed" after you make a change, until you hit the Deploy button.
I found this question while facing the problem myself. The issue was that my zip put "main.py" in a subfolder instead of at the root of the archive.
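For reference, with the handler set to main.handler, Lambda expects main.py at the root of the zip, not nested in a subfolder (names here are illustrative):

my-function.zip
└ main.py                 <- importable as 'main'

my-function.zip
└ my-function/main.py     <- fails with Runtime.ImportModuleError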
Hope this is helpful for any others!
I am deploying a Node AWS Lambda with Serverless. Due to the internal requirements of the institution where this code will run, I need to pass extra certificates. The only solution I've been able to find is to pass NODE_EXTRA_CA_CERTS as a CLI argument. Using typical environment variables (defined, for example, in dotenv) does not work because by that point Node has already configured its certificates.
My extra certs are in MyCerts.pem in the project root, and the Lambda function I'm trying to run is called function1. Running the Lambda locally with NODE_EXTRA_CA_CERTS=./MyCerts.pem npx serverless invoke local -f function1 -l works correctly. However, once I deploy to AWS using npx serverless deploy -v, I cannot find a way to properly include these additional certs, including by invoking from the CLI using NODE_EXTRA_CA_CERTS=./MyCerts.pem npx serverless invoke -f function1 -l.
I've tried everything I can think of and am at a loss. Can someone help?
I think this should definitely be possible in AWS Lambda.
There is an example on dev.to [1] that is similar to your use case.
It uses .NET Core and AWS SAM, but it should be easy to adapt the solution to Serverless and Node.js.
Basically, you need two steps:
Create a Lambda layer which holds your additional certificate file [2][3]
Add the environment variable NODE_EXTRA_CA_CERTS to your serverless.yml and point the path at the file you uploaded in your Lambda layer [4]
References
[1] https://dev.to/leading-edje/aws-lambda-layer-for-private-certificates-465j
[2] https://www.serverless.com/plugins/serverless-layers
[3] https://www.serverless.com/blog/publish-aws-lambda-layers-serverless-framework
[4] https://www.serverless.com/blog/serverless-v1.2.0
I don't think NODE_EXTRA_CA_CERTS works in Lambda. I tried setting it as an environment variable pointing to a dummy file that doesn't exist. It did not generate a warning as stated by the documentation, so I assume it was ignored.
A message will be emitted (once) with process.emitWarning() if the file is missing or malformed, but any errors are otherwise ignored.
I assume it's because of this warning:
This environment variable is ignored when node runs as setuid root or has Linux file capabilities set.
Here is another question confirming it doesn't work.
I was able to get Node.js to pay attention to NODE_EXTRA_CA_CERTS by starting another node process from the Lambda function. That second process gave me the warning I was looking for:
Warning: Ignoring extra certs from `hello.pem`, load failed: error:02001002:system library:fopen:No such file or directory
I am sure there are a lot of downsides to starting a secondary process to handle the request (concurrency would be my main concern), but it's a workaround you can try.
// Build an environment for the child process that sets NODE_EXTRA_CA_CERTS.
const ca_env = Object.assign({}, process.env, { NODE_EXTRA_CA_CERTS: "hello.pem" });
// The child node process reads NODE_EXTRA_CA_CERTS at startup and emits the warning.
require("child_process").execSync('node -e "console.log(\'hello\')"', { env: ca_env });
I ran into this problem too and, following Martin Loper's input, I set the environment variable NODE_EXTRA_CA_CERTS to the path inside my Lambda code base where the cert is actually located, prefixed with /var/task/. For example, it would look like /var/task/certs/myCustomCA.pem if my Lambda had a certs folder containing myCustomCA.pem.
I worked on the same issue and resolved it by uploading the cert into the project folder itself; then Node is able to use NODE_EXTRA_CA_CERTS.
A Lambda layer stores the cert in the /opt folder, which I think Node cannot read directly for this purpose.
It works with the /var/task/ prefix.
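A quick way to confirm the path is right (the handler below and the certs/myCustomCA.pem path are just illustrative) is to check that the bundled file is readable from /var/task, which is where the function's deployment package is unpacked:

const fs = require('fs');

exports.handler = async () => {
  // Fall back to an example path if the environment variable isn't set.
  const certPath = process.env.NODE_EXTRA_CA_CERTS || '/var/task/certs/myCustomCA.pem';
  const readable = fs.existsSync(certPath);
  console.log(`NODE_EXTRA_CA_CERTS=${certPath} readable=${readable}`);
  return { certPath, readable };
};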
What is the preferred way to share code between AWS Lambda functions?
I have a structure like this:
functions
  a
    node_modules
    index.js
    package.json
  b
    node_modules
    index.js
    package.json
  c
    node_modules
    index.js
    package.json
This lets every function keep its own node_modules, and I can package the whole thing with the CLI.
But what about custom code that needs to be shared?
I can require("../mylibrary") but the package command would still not include it.
As dmigo already mentioned, this is possible with Lambda Layers. Here is some SAM template code for defining and using a Layer:
Globals:
  Function:
    Runtime: nodejs8.10

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: a/
    Handler: index.handler
    Layers:
      - !Ref MySharedLayer

MySharedLayer:
  Type: AWS::Serverless::LayerVersion
  Properties:
    LayerName: SharedLayer
    Description: Some code to share with the other lambda functions
    ContentUri: layer/
    CompatibleRuntimes:
      - nodejs8.10
    RetentionPolicy: Retain
Now you can use Layers to share libraries and code between your Functions. You can create a Layer from a zip file the same way you do that for a Function.
The layer package will look more or less like this:
my-layer.zip
└ nodejs/node_modules/mylibrary
If you create your Functions on top of this Layer then in the code it can be referenced like this:
const shared = require('mylibrary');
It is worth noting that Layers support versioning and relate to Functions as many-to-many, which effectively makes them a second npm.
Try the Serverless Framework; you can use include/exclude for artifacts without having to write your own scripts.
Check out serverless.com
I also use private node packages, but they need to be installed before sls deploy.
The problem with Lambda is that you have to deploy a ZIP file, so all your code needs to be accessible from one root directory.
I solved this with a script that copies my shared code into each Lambda-function directory before I archive and upload every single function (a sketch follows below).
Symlinks are probably also possible.
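As an illustration, a minimal Node.js pre-package script along those lines might look like this (the shared folder and function paths are hypothetical):

// copy-shared.js - run before packaging/deploying
const fs = require('fs');
const path = require('path');

const shared = path.join(__dirname, 'shared'); // the common code lives here
const functions = ['functions/a', 'functions/b', 'functions/c'];

for (const fn of functions) {
  const dest = path.join(__dirname, fn, 'node_modules', 'shared');
  fs.rmSync(dest, { recursive: true, force: true }); // start from a clean copy
  fs.cpSync(shared, dest, { recursive: true });      // fs.cpSync requires Node 16.7+
}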
I'm playing with this PDF To Image converter and I've cloned the repo, run npm install, changed this section:
var s3EventHandler = new S3EventHandler({
  region: 'my-region',
  outputBucketName: 'my-bucket-name',
  s3: s3,
  resolution: 72
});
Renamed it exports.js, zipped up the js file, the node_modules folder, package.json, and event.json (I've also tried with both of those JSON files removed), and uploaded it into my Lambda function. The S3 trigger has been created and so far is working fine.
I've had multiple test failures because it couldn't find either the async module or the tmp module; moving them to the top level seems to fix it (though it doesn't complain about the other modules it requires that aren't at the top level).
In the test it complains that s3 is not defined, which I'm somewhat lost on, as there isn't much detail with it. I thought it could be because I'm only running the test, so the S3 trigger itself is missing.
When I upload a PDF into the bucket, Lambda reports that it runs but fails. Going into CloudWatch Logs, there is no log stream for it. I've checked the IAM role and it has permissions for CreateLogStream and PutLogEvents (it was the templated IAM policy).
How can I get my logs working so I can find the problem? Or what can I do to fix the 's3 is not defined' issue, which is my only clue at the moment? It could be related to the top-level module requirement, but that doesn't seem consistent, as only some modules need to be at the top level.
Looks like "CreateLogGroup" Permission is missing from what you have mentioned. The following permissions are required for lambda to write logs to CloudWatch
"logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"