Handler in subdirectory of AWS Lambda function not running - node.js

I'm getting an awfully unfortunate error on Lambda:
Unable to import module 'lib/index': Error
at require (internal/module.js:20:19)
Which is strange, because there is definitely a function called handler getting exported from lib/index... not sure if the whole subdirectory thing has been an issue for others, so I wanted to ask.
sam-template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Does something crazy
Resources:
  SomeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: lib/index.handler
      Role: arn:aws:iam::...:role/lambda-role
      Runtime: nodejs6.10
      Timeout: 10
      Events:
        Timer:
          Type: Schedule
          Properties:
            Schedule: rate(1 minute)
Module structure
|-- lib
|   `-- index.js
`-- src
    `-- index.js
I have nested it here because I'm transpiling ES6 during my build process using the following excerpt from package.json:
"build": "babel src -d lib"
buildspec.yaml
version: 0.1
phases:
  install:
    commands:
      - npm install
      - aws cloudformation package --template-file sam-template.yaml --s3-bucket some-bucket --output-template-file compiled-sam-template.yaml
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - npm prune --production
artifacts:
  files:
    - 'node_modules/**/*'
    - 'lib/**/*'
    - 'compiled-template.yaml'

The aws cloudformation package command is what ships the built assets, but in the buildspec shown it runs in the install phase, before the build has produced lib/. Moving it to post_build ensures it captures everything needed, including the lib/index in question:
post_build:
  commands:
    - npm prune --production
    - aws cloudformation package ...
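For clarity, the full buildspec might then look like the following sketch (the commands are the ones from the question, only reordered; note that the artifact filename should match the --output-template-file name):
version: 0.1
phases:
  install:
    commands:
      - npm install
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - npm prune --production
      # package only after lib/ has been produced by the build step
      - aws cloudformation package --template-file sam-template.yaml --s3-bucket some-bucket --output-template-file compiled-sam-template.yaml
artifacts:
  files:
    - 'node_modules/**/*'
    - 'lib/**/*'
    - 'compiled-sam-template.yaml'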

You are trying to import lib/index, which will try to find a package named lib (as if you had done npm install --save lib), but you are most likely trying to import a file relative to your own project, and you are not giving it a relative path in your import.
Change 'lib/index' to './lib/index' (or '../lib/index', etc., depending on where it is) and see if it helps.
By the way, if you're trying to import the file lib/index.js and not a directory lib/index/, then you may use the shorter ./lib path, as in:
const lib = require('./lib');
Of course you didn't show even a single line of your code so I can only guess what you're doing.

Your handler should be .lib/index.handler considering your index.js file is in a subdirectory lib.

The reference to the handler must be relative to the Lambda package to be executed.
Example:
if the Lambda file is placed at the path:
x-lambda/yyy/lambda.py
the handler must be:
..yyy/lambda.lambda_handler
This assumes that the function lambda_handler() exists in lambda.py.
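For the Node.js case in the question, here is a minimal SAM sketch of how the handler path resolves, assuming the packaged artifact root contains the lib/ directory (the explicit CodeUri is an assumption of this sketch; the question's template relies on the default packaging done by aws cloudformation package):
SomeFunction:
  Type: AWS::Serverless::Function
  Properties:
    # CodeUri is the root of the deployment package; the Handler path is
    # resolved relative to it, so lib/index.js must export a function named handler.
    CodeUri: ./
    Handler: lib/index.handler
    Runtime: nodejs6.10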

Related

Deploying with Bitbucket pipeline

I hope you are doing well! I'm facing a problem when trying to deploy using a Bitbucket pipeline.
The project is a React project at version 18.2.0 and its files are in the frontend folder.
bitbucket-pipelines.yml
image: atlassian/default-image:3

# Workflow Configuration
pipelines:
  branches:
    staging:
      - parallel:
          - step:
              name: Build and Test
              script:
                - npm install --prefix ./frontend/ --legacy-peer-deps
                - npm audit fix --force --prefix ./frontend/
                - npm audit fix --force --prefix ./frontend/
                - npm run build --prefix ./frontend/
              artifacts:
                - ./frontend/build/**
      - step:
          name: Deploy to Staging
          deployment: Staging
          script:
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: $USER
                SERVER: $SERVER
                REMOTE_PATH: '/var/www/html'
                LOCAL_PATH: './'
The error that I am not able to solve is related to the LOCAL_PATH folder.
I tried some variations:
./frontend/build/*  ->  No such file or directory
./build/*           ->  No such file or directory
./                  ->  error: unexpected filename: .
Thanks in advance for any help
TL;DR
In your build step, use:
artifacts:
  - frontend/build/**
In the scp deploy, use the same value for the LOCAL_PATH variable:
variables:
  ...
  LOCAL_PATH: 'frontend/build/*'
Explanation
The reason why the usual syntax does not work is that Bitbucket uses glob patterns for its paths. For example from the documentation on artifacts:
You can use glob patterns to define artifacts. Glob patterns that
start with a * will need to be put in quotes.
Note: As these are glob
patterns, path segments “.” and “..” won’t work. Use paths relative to
the build directory.
That means you don't want or need the leading ./. Check the Bitbucket documentation on scp deployment for a concrete example matching your case.
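Put together, a corrected excerpt of the two steps might look like this sketch (the paths are the ones given above; everything else is unchanged from the question):
- step:
    name: Build and Test
    script:
      - npm install --prefix ./frontend/ --legacy-peer-deps
      - npm run build --prefix ./frontend/
    artifacts:
      # glob pattern relative to the build directory, no leading ./
      - frontend/build/**
- step:
    name: Deploy to Staging
    deployment: Staging
    script:
      - pipe: atlassian/scp-deploy:0.3.3
        variables:
          USER: $USER
          SERVER: $SERVER
          REMOTE_PATH: '/var/www/html'
          LOCAL_PATH: 'frontend/build/*'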

AWS Lambda packaging with dependencies

What follows is in the context of Node.js and a monorepo (based on Lerna).
I have an AWS stack with several Lambdas inside, deployed by means of AWS CloudFormation. Some of the lambdas are simple (a single small module) and can be inlined:
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-lambda.Code.html#static-from-wbr-inlinecode
const someLambda = new Function(this, 'some-lambda', {
  code: Code.fromInline(fs.readFileSync(require.resolve(<relative path to lambda module>), 'utf-8')),
  handler: 'index.handler',
  runtime: Runtime.NODEJS_12_X
});
Some have no dependencies and packaged as follows:
const someLambda = new Function(this, 'some-lambda', {
  code: Code.fromAsset(<relative path to folder with lambda>),
  handler: 'index.handler',
  runtime: Runtime.NODEJS_12_X
});
But in the case of relatively large lambdas with dependencies, as I understand it, the only way to package them (proposed by the API) is @aws-cdk/aws-lambda-nodejs:
import * as lambdaNJS from "@aws-cdk/aws-lambda-nodejs";

export function createNodeJSFunction(
  scope: cdk.Construct, id: string, nodejsFunctionProps: Partial<NodejsFunctionProps>
) {
  const params: NodejsFunctionProps = Object.assign({
    parcelEnvironment: { NODE_ENV: 'production' },
  }, nodejsFunctionProps);
  return new lambdaNJS.NodejsFunction(scope, id, params);
}
For standalone packages it works well, but in the case of the monorepo it just hangs on synth of the stack.
I'm just looking for alternatives, because I believe it is not a good idea to bundle (Parcel) backend sources.
I've created the following primitive library to zip only the required node_modules despite package hoisting.
https://github.com/redneckz/slice-node-modules
Usage (from monorepo root):
$ npx @redneckz/slice-node-modules \
    -e packages/some-lambda/lib/index.js \
    --exclude 'aws-*' \
    --zip some-lambda.zip
--exclude 'aws-*': the AWS SDK is included in the Lambda runtime by default, so there is no need to package it.
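As an illustration, this zipping step can be wired into a CI build. A minimal CodeBuild buildspec sketch, assuming the monorepo layout above (the phase layout and artifact name are assumptions; the npx command is the one shown above):
version: 0.2
phases:
  install:
    commands:
      - npm install
  build:
    commands:
      # zip only the modules reachable from the lambda entry point
      - npx @redneckz/slice-node-modules -e packages/some-lambda/lib/index.js --exclude 'aws-*' --zip some-lambda.zip
artifacts:
  files:
    - some-lambda.zip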
Here is an example using CloudFormation and a template.yaml.
Create a Makefile with the following targets:
# Application
APPLICATION=application-name

# AWS
PROFILE=your-profile
REGION=us-east-1
S3_BUCKET=${APPLICATION}-deploy

# note: recipe lines must be indented with tabs
install:
	rm -rf node_modules
	npm install

clean:
	rm -rf build

build: clean
	mkdir build
	zip -qr build/package.zip src node_modules
	ls -lah build/package.*

deploy:
	sam package \
		--profile ${PROFILE} \
		--region ${REGION} \
		--template-file template.yaml \
		--s3-bucket ${S3_BUCKET} \
		--output-template-file ./build/package.yaml
	sam deploy \
		--profile ${PROFILE} \
		--region ${REGION} \
		--template-file ./build/package.yaml \
		--stack-name ${APPLICATION}-lambda \
		--capabilities CAPABILITY_NAMED_IAM
Make sure the S3 bucket exists; you could add this step as another target in the Makefile.
How to build and deploy on AWS?
make build
make deploy
I have struggled with this as well, and I was using your slice-node-modules successfully for a while. As I have consolidated more of my projects into monorepos and begun using shared dependencies which reside as siblings rather than being externally published, I ran into shortcomings with that approach.
I've created a new tool called lerna-to-lambda which was specifically tailored to my use case. I published it publicly with minimal documentation, hopefully enough to help others in similar situations. The gist of it is that you run l2l in your bundling step, after you've installed all of your dependencies, and it copies what is needed into an output directory which is then ready to deploy to Lambda using SAM or whatever.
For example, from the README, something like this might be in your Lambda function's package.json:
"scripts": {
...
"clean": "rimraf build lambda",
"compile": "tsc -p tsconfig.build.json",
"package": "l2l -i build -o lambda",
"build": "yarn run clean && yarn run compile && yarn run package"
},
In this case, the compile step is compiling TypeScript files from a source directory into JavaScript files in the build directory. Then the package step bundles up all the code from build along with all of the Lambda's dependencies (except aws-sdk) into the directory lambda, which is what you'd deploy to AWS. If someone were using plain JavaScript rather than TypeScript, they could just copy the necessary .js files into the build directory before packaging.
It's likely that your solution is still working fine for your needs, but I thought I would share this here as an alternative in case others are in a similar situation and have trouble using slice-node-modules.

Issues with python and the serverless framework

I am trying to understand how to set up multiple Python lambdas and a step function within one single serverless.yml, with each Python lambda having its own dependencies. All of my lambda functions collaborate in the context of a step function for a shared common goal. With this rationale, it makes sense to me to put all of the code under one serverless.yml file.
As part of my MANY hours of trial and error and reading, I found out about the serverless-python-requirements plugin for the Serverless Framework, which helps in packaging Python functions that rely on OS-specific Python libraries and also allows separate requirements.txt files in case different lambdas require different dependencies.
So at this point my problem is that the generated package is not including the dependencies that I provide in the requirements.txt whenever each function has its own requirements.txt.
These are my artifacts:
package.json
{
  "engines": {
    "node": ">=10.0.0",
    "npm": ">=6.0.0"
  },
  "name": "example",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "dependencies": {
    "serverless-python-requirements": "^5.1.0"
  },
  "devDependencies": {
    "serverless": "^1.72.0"
  },
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "license": "ISC"
}
serverless.yml
service: example
frameworkVersion: ">=1.72.0 <2.0.0"

plugins:
  - serverless-python-requirements

custom:
  stage: "${opt:stage, env:SLS_STAGE, 'local'}"
  log_level: "${env:LOG_LEVEL, 'INFO'}"
  pythonRequirements:
    dockerizePip: true

provider:
  name: aws
  # profile: ${self:custom.profile}
  stage: ${self:custom.stage}
  runtime: python3.8
  environment:
    LOG_LEVEL: ${self:custom.log_level}

package:
  individually: true
  exclude:
    - ./**
  include:
    - vendored/**

functions:
  function1:
    # module: folder1
    handler: folder1/function1.handler
    package:
      include:
        - 'folder1/**'
    memorySize: 128
    timeout: 60
  function2:
    # module: folder2
    handler: folder2/function2.handler
    package:
      include:
        - 'folder2/**'
    memorySize: 128
    timeout: 60
Finally, my two Python lambda functions are in separate folders, and one of them requires specific dependencies:
folder1
  function1.py
  requirements.txt
folder2
  function2.py
function1.py
import json
import logging
import os
import sys

import pyjokes

log_level = os.environ.get('LOG_LEVEL', 'INFO')
logging.root.setLevel(logging.getLevelName(log_level))
_logger = logging.getLogger(__name__)


class HandlerBaseError(Exception):
    '''Base error class'''


class ComponentIdentifierBaseError(HandlerBaseError):
    '''Base Component Identifier Error'''


def handler(event, context):
    '''Function entry'''
    _logger.debug('Event received: {}'.format(json.dumps(event)))
    body = {
        "message": "Go Serverless v1.0! Your function executed successfully!",
        "joke": pyjokes.get_joke()
    }
    resp = {
        'status': 'OK',
        "body": json.dumps(body)
    }
    _logger.debug('Response: {}'.format(json.dumps(resp)))
    return resp


if __name__ == "__main__":
    handler('', '')
requirements.txt
pyjokes==0.6.0
function2.py
import json
import logging
import os
import sys

log_level = os.environ.get('LOG_LEVEL', 'INFO')
logging.root.setLevel(logging.getLevelName(log_level))
_logger = logging.getLogger(__name__)


class HandlerBaseError(Exception):
    '''Base error class'''


class ElasticSearchPopulatorBaseError(HandlerBaseError):
    '''Base Component Identifier Error'''


def handler(event, context):
    '''Function entry'''
    _logger.debug('Event received: {}'.format(json.dumps(event)))
    resp = {
        'status': 'OK'
    }
    _logger.debug('Response: {}'.format(json.dumps(resp)))
    return resp
Note: I did try using the module+handler keywords in the serverless.yml as recommended in this link: https://github.com/UnitedIncome/serverless-python-requirements, without any success.
Something that I noted is that if I use the module+handler as follows:
functions:
  function1:
    module: folder1
    handler: function1.handler
    package:
      include:
        - 'folder1/**'
    memorySize: 128
    timeout: 60
Then, when I try running the function locally using serverless invoke local -f function1 --log, I get an error saying:
ModuleNotFoundError: No module named 'function1'
Also, if anyone has a working example of multiple lambdas with different requirements.txt files, I would be very grateful; ideally something different from the typical hello-world examples, which all work very well for me ;). But in scenarios like this one, where I would like to set up common libraries, have different dependencies, etc., and use one common serverless.yml, things seem to fall apart. Again, my opinion is that these lambdas will operate together under one step function umbrella, so there is strong cohesion here, and I think their build and deployment should happen under one common serverless service.
I recently developed a similar application using the serverless-python-requirements plugin that encapsulates multiple lambdas as part of one stack, and I was receiving ModuleNotFoundError whilst invoking the lambda function locally, yet it would work remotely; however, when I removed the module parameter from my serverless.yml file I was able to invoke locally but then it broke for remote executions.
I've been able to find a workaround by setting a path prefix in my serverless.yml:
functions:
  LambdaTest:
    handler: ${env:HANDLER_PATH_PREFIX, ""}handler.handler
    module: src/test_lambda
    package:
      include:
        - src/test_lambda/**
When I invoke the function locally, I prepend the environment variable to my command:
HANDLER_PATH_PREFIX=src/test_lambda/ ./node_modules/serverless/bin/serverless.js invoke local -f LambdaTest -p ./tests/resources/base_event.yml
I don't include the environment variable when invoking the function in AWS.
In order for this to work, I needed to add an __init__.py file to the root directory where my lambda function resides, with the following code (taken from this solution), so that any modules I'm importing that live in the lambda's directory (e.g. some_module; see the directory tree below) can still be resolved:
import os
import sys
sys.path.append(os.path.dirname(os.path.realpath(__file__)))
My lambda's directory structure:
src/
└── test_lambda
    ├── __init__.py        <=== add the code to this one
    ├── handler.py
    ├── requirements.txt
    └── some_module
        ├── __init__.py
        └── example.py
As for your question regarding lambdas that use different requirements.txt files -- I use the individually parameter, like so:
package:
  individually: true
  include:
    - infra/**
  exclude:
    - "**/*"
Within each requirements.txt for each lambda I refer to a separate requirements.txt file that resides in the base directory of my project using the -r option -- this file contains libraries that are common to all lambdas, so when Serverless is installing packages for each lambda it'll also include packages included in my ./requirements.txt file too.
I've included this solution in an issue regarding the serverless-python-requirements plugin on GitHub, which would be worth keeping an eye on should this behaviour of the module parameter turn out to be a bug.
Hope it helps and let me know if you require clarification on anything.

.gitlab-ci.yml to include multiple shell functions from multiple yml files

I have a Gitlab mono repository with some backend Java and frontend Node.js code. To create a CI pipeline, I'm working on a shared approach to build both applications.
In the application repository, let's call it "A", I have source code as well a .gitlab-ci.yml file as below,
A
├── .gitlab-ci.yml
├── backendapi
└── ui
.gitlab-ci.yml file,
---
include:
  - project: 'root/B'
    ref: master
    file: 'top-level.yml'
  - project: 'root/B'
    ref: master
    file: 'maven.yml'
  - project: 'root/B'
    ref: master
    file: 'node.yml'
I have another repository called "B", where I have all my CI functionalities in three different files.
B
├── maven.yml
├── node.yml
└── top-level.yml
top-level.yml file that has my build stage in it,
---
stages:
  - build

variables:
  GIT_SSL_NO_VERIFY: "1"

.build_script: &build_script
  stage: build
  tags:
    - default
    - docker

java_build:
  <<: *build_script
  image:
    name: maven:latest
  script:
    - backend_build

node_build:
  <<: *build_script
  image:
    name: node:slim
  script:
    - frontend_build
maven.yml, which has the mvn build function:
.maven_build: &maven_build |-
  function backend_build {
    cd backendapi
    mvn clean package -DskipTests
  }

before_script:
  - *maven_build
node.yml, with the node function in it:
.node_build: &node_build |-
  function frontend_build {
    cd ui
    npm install
    npm build
  }

before_script:
  - *node_build
When the .gitlab-ci.yml file in repository "A" is run, it is calling the top-level.yml, maven.yml and node.yml files from the repository "B" which is good.
The problem here is that when it runs java_build, it is unable to find the backend_build function from maven.yml; instead, it seems like it is only loading the frontend_build function from node.yml, or overwriting the backend_build function from maven.yml. The node_build works as expected, because it can find the frontend_build function.
Skipping Git submodules setup
Authenticating with credentials from /root/.docker/config.json
Authenticating with credentials from /root/.docker/config.json
Authenticating with credentials from /root/.docker/config.json
$ function frontend_build { # collapsed multi-line command
$ backend_build
/bin/bash: line 90: backend_build: command not found
I know that I can copy all the functions into one big yml file in repository "B" and include it in the .gitlab-ci.yml of repository "A", but here I'm trying to understand whether the above approach is even possible.
Thanks in advance!
OK, I finally found a hack, but not a complete answer, as the yaml files cannot behave the way I described in my question; instead I took a different approach to solve the problem.
Well, there are no more maven.yml or node.yml; there are only four files in repository B: backend.yml, frontend.yml, hybrid.yml and top-level.yml.
The backend.yml has all the functions (build_app, lint_app, unit_test_app and so on) that are required, and frontend.yml follows the same pattern with different commands in the functions.
For example, in the backend.yml build_app function I will have the maven command, while in the frontend.yml build_app function I will have the npm command. The build_app function name is common to both frontend.yml and backend.yml, but the functionality is different.
In the top-level.yml stages, I specify the common function name build_app in the script key.
stages:
  - build

variables:
  GIT_SSL_NO_VERIFY: "1"

.build_script: &build_script
  stage: build
  tags:
    - default
    - docker

build:
  <<: *build_script
  image: maven:latest
  script:
    - build_app
But in the .gitlab-ci.yml, depending on the build I need to do, I include that specific yml file. In the example below I want to build the backend, so I include backend.yml; the same applies for the frontend.
include:
  - project: 'root/B'
    ref: master
    file: 'top-level.yml'
  - project: 'root/B'
    ref: master
    file: 'backend.yml'
If I have to build both the backend and frontend, I will use a hybrid.yml with the same build_app function name but containing both the maven and npm commands, as sketched below. I know this is not the right approach, but it will suffice for the use case I'm trying to solve.
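A minimal sketch of what such a hybrid.yml could look like, assuming the same before_script/function pattern used in maven.yml and node.yml above (the commands are the ones from the question; the combined function itself is an assumption of this sketch):
.hybrid_build: &hybrid_build |-
  function build_app {
    # backend build (from maven.yml)
    cd backendapi
    mvn clean package -DskipTests
    cd ..
    # frontend build (from node.yml)
    cd ui
    npm install
    npm build
    cd ..
  }

before_script:
  - *hybrid_build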
Thank you for helping me with the question!
Happy Automation :)

Serverless not including my node_modules

I have a nodejs serverless project that has this structure:
-node_modules
-package.json
-serverless.yml
-functions
  -medium
    -mediumHandler.js
my serverless.yml:
service: googleAnalytic
provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: us-east-1

package:
  include:
    - node_modules/**

functions:
  mediumHandler:
    handler: functions/medium/mediumHandler.mediumHandler
    events:
      - schedule:
          name: MediumSourceData
          description: 'Captures data between set dates'
          rate: rate(2 minutes)
      - cloudwatchEvent:
          event:
            source:
              - "Lambda"
            detail-type:
              - ""
      - cloudwatchLog: '/aws/lambda/mediumHandler'
my sls info shows:
Service Information
service: googleAnalytic
stage: dev
region: us-east-1
stack: googleAnalytic-dev
api keys:
None
endpoints:
None
functions:
mediumHandler: googleAnalytic-dev-mediumHandler
When I run sls:
serverless invoke local -f mediumHandler
it works, and my script, where I included googleapis and aws-sdk, works. But when I deploy, those are skipped and no error is shown.
When debugging Serverless's packaging process, use sls package (or sls deploy --noDeploy for old versions). You'll get a .serverless directory that you can inspect to see what's inside the deployment package.
From there, you can see if node_modules is included or not and make changes to your serverless.yml correspondingly without needing to deploy every time you make a change.
Serverless will exclude development packages by default. Check your package.json and ensure your required packages are in the dependencies object, as devDependencies will be excluded.
I was dumb to put this in my serverless.yml which caused me the same issue you're facing.
package:
  patterns:
    - '!node_modules/**'
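For comparison, a minimal sketch of a package block that keeps node_modules in the bundle, assuming a Serverless version that supports package.patterns (the handler folder is the one from the question):
package:
  patterns:
    # keep production dependencies and the handler code in the deployment package
    - 'node_modules/**'
    - 'functions/**'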
