How can I get service-name and function-name from serverless.yml in my Lambda NodeJS? - node.js

service: serverless-demo-app
provider:
  name: aws
  runtime: nodejs10.x
functions:
  sample1:
    handler: sample1/handler
    events:
      - http:
          path: sample1
          method: get
  sample2:
    handler: sample2/handler
    events:
      - http:
          path: sample2
          method: get
When I am invoking sample2 from sample1, I need its full name, e.g. serverless-demo-app-dev-sample2.
So, how can I get the service name, function name and stage name inside sample1?

Try:
${self:service}
to obtain the service name inside serverless.yml.
At runtime, the AWS_LAMBDA_FUNCTION_NAME environment variable holds the full name of the function that is currently executing.
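For example, a minimal sketch that combines the two, assuming you also pass the service and stage into the function as environment variables (SERVICE_NAME and STAGE are names chosen here for illustration, not provided by the framework):

# serverless.yml (excerpt)
provider:
  name: aws
  runtime: nodejs10.x
  environment:
    SERVICE_NAME: ${self:service}
    STAGE: ${opt:stage, 'dev'}

// sample1/handler.js
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

module.exports.handler = async (event) => {
  // Set automatically by the Lambda runtime, e.g. "serverless-demo-app-dev-sample1"
  const currentFunction = process.env.AWS_LAMBDA_FUNCTION_NAME;

  // Built from the variables passed in above, e.g. "serverless-demo-app-dev-sample2"
  const targetFunction = `${process.env.SERVICE_NAME}-${process.env.STAGE}-sample2`;

  const result = await lambda
    .invoke({
      FunctionName: targetFunction,
      InvocationType: 'RequestResponse',
      Payload: JSON.stringify({ invokedBy: currentFunction }),
    })
    .promise();

  return { statusCode: 200, body: result.Payload };
};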

Related

When the Serverless WarmUp plugin invokes my lambda it gives an error, but when I invoke it manually it works fine

serverless.yml:
service: LambdaColdStartRnD
configValidationMode: error
provider:
  name: aws
  runtime: nodejs14.x
  memorySize: 512
  timeout: 30
  stage: development
  region: ap-south-1
  lambdaHashingVersion: 20201221
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - 'lambda:InvokeFunction'
      Resource: '*'
plugins:
  - serverless-webpack
  - serverless-plugin-warmup
functions:
  api:
    handler: lambda.handler
    events:
      - http: ANY /
      - http: 'ANY /{proxy+}'
package:
  individually: true
  patterns:
    - '!node_modules/**'
custom:
  warmup:
    RNDwarmer:
      enabled: true
      role: IamRoleLambdaExecution
      events:
        - schedule: 'cron(0/2 * ? * * *)'
      concurrency: 5
      prewarm: true
  webpack:
    webpackConfig: 'webpack.config.js' # Name of webpack configuration file
    includeModules: false # Node modules configuration for packaging
    packager: 'npm' # Packager that will be used to package your external modules
    excludeFiles: src/**/*.test.js # Provide a glob for files to ignore.
I have defined a custom warmer which creates 5 containers, and it is initializing the function with 5 containers, but it can't invoke the function. Below is a screenshot of the X-Ray traces and logs.
[screenshot of X-Ray traces and logs]
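One thing worth checking, though it is only a guess and not stated in the question: serverless-plugin-warmup invokes the function with its own event payload rather than an API Gateway event, so a handler that assumes an HTTP request will usually throw on warm-up invocations. A minimal sketch of the usual guard (check the plugin's README for the exact payload your version sends):

// lambda.js
module.exports.handler = async (event) => {
  // Short-circuit warm-up invocations so they never hit the API code path.
  if (event.source === 'serverless-plugin-warmup') {
    console.log('WarmUp - keeping the lambda warm');
    return 'warmed';
  }

  // ...normal request handling...
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'ok' }),
  };
};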

create log group and log stream using serverless framework

I have the following Terraform code. How can I implement the same in Serverless framework?
resource "aws_cloudwatch_log_group" "abc" {
name = logGroupName
tags = tags
}
resource "aws_cloudwatch_log_stream" "abc" {
depends_on = ["aws_cloudwatch_log_group.abc"]
name = logStreamName
log_group_name = logGroupName
}
My serverless.yml file looks more like this. Basically I need to create a log group and a log stream with specific names.
provider:
  name: aws
  runtime: python3.7
  cfnRole: arn:cfnRole
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - lambda:InvokeFunction
      Resource: 'arn....'
functions:
  handle:
    handler: handler.handle
    events:
      - schedule:
          rate: rate(2 hours)
resources:
  Resources:
    IamRoleLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
In your Resources you have to add AWS::Logs::LogGroup and AWS::Logs::LogStream.
But tags on AWS::Logs::LogGroup are not supported.
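A minimal sketch of what that could look like in the resources section (the logical IDs and the group/stream names below are placeholders, not taken from the question):

resources:
  Resources:
    AbcLogGroup:
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: my-log-group-name        # placeholder
    AbcLogStream:
      Type: AWS::Logs::LogStream
      DependsOn: AbcLogGroup                   # mirrors the Terraform depends_on
      Properties:
        LogGroupName: my-log-group-name        # must match the log group above
        LogStreamName: my-log-stream-name      # placeholder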

AWS Lambda SNS event is not binding to the correct SNS Topic ARN using Serverless yml

I have a serverless resource for an SNS topic in the resources section of serverless.yml, something like this:
resources:
  Resources:
    SNSTopic:
      Type: AWS::SNS::Topic
      Properties:
        DisplayName: SNS Topic
        TopicName: ${self:service}-${self:provider.stage}-Topic
When I try to bind this SNS topic to my lambda event as given below, the lambda is not triggered by the SNS event. When I check the AWS console for that lambda function, the SNS event is bound to the wrong ARN value.
Function:
  handler: src/sample/file.lambdaHandler
  role: s3FullAccessRole
  events: SNSTopic
  Properties:
    Policies:
      - AWSLambdaExecute
      - Statement:
          - Effect: Allow
            Action:
              - 'lambda:InvokeFunction'
I have tried changing the event in all the different ways mentioned here: https://serverless.com/framework/docs/providers/aws/events/sns/. The only way I found is to hard-code the SNS topic ARN value in the lambda event, which is not ideal for my situation.
Any help is really appreciated.
You could actually create a variable in custom with the ARN of the SNS topic:
custom:
  region: ${opt:region, self:provider.region}
  snsTopic: ${self:service}-${self:provider.stage}-Topic
  snsTopicArn: { "Fn::Join" : ["", ["arn:aws:sns:${self:custom.region}:", { "Ref" : "AWS::AccountId" }, ":${self:custom.snsTopic}" ] ] }
Then just use the ARN in the places you need it.
Or you can use the plugin https://github.com/silvermine/serverless-plugin-external-sns-events to simply reference the topic by name.
If you have only one serverless.yml and don't want to have a separate CloudFormation file, I would use the first option.
EDIT:
To use the ARN, follow the instructions in the Serverless docs: https://serverless.com/framework/docs/providers/aws/events/sns#using-a-pre-existing-topic
functions:
  dispatcher:
    handler: <handler>
    events:
      - sns:
          arn: ${self:custom.snsTopicArn}
Since you have the SNS topic in the same serverless.yml, you can even skip the snsTopicArn variable and build the event with !Ref, as in one of the suggestions there, which should be a better option for you:
functions:
  dispatcher:
    handler: <handler>
    events:
      - sns:
          arn: !Ref SNSTopic
          topicName: ${self:custom.snsTopic}
full example:
service: testsns
provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
functions:
  hello:
    handler: handler.hello
    events:
      - sns:
          arn: !Ref SuperTopic
          topicName: MyCustomTopic
    Properties:
      Policies:
        - AWSLambdaExecute
        - Statement:
            - Effect: Allow
              Action:
                - 'lambda:InvokeFunction'
resources:
  Resources:
    SuperTopic:
      Type: AWS::SNS::Topic
      Properties:
        TopicName: MyCustomTopic
Finally got it!
I ended up removing my SNS topic declaration from the resources section of serverless.yml and added the following under iamRoleStatements:
iamRoleStatements:
  - Effect: Allow
    Action:
      - SNS:Publish
    Resource: { "Fn::Join" : ["", ["arn:aws:sns:${self:provider.region}:", { "Ref" : "AWS::AccountId" }, ":${self:custom.mySnsTopic}" ] ] }
And added these variables in the custom section:
custom:
  mySnsTopic: "${self:service}-${self:provider.stage}-sns-consume"
  mySnsTopicArn: { "Fn::Join" : ["", ["arn:aws:sns:${self:provider.region}:", { "Ref" : "AWS::AccountId" }, ":${self:custom.mySnsTopic}" ] ] }
Then mapped this to the lambda function's events:
Function:
  handler: src/sample/file.lambdaHandler
  role: s3FullAccessRole
  events: ${self:custom.mySnsTopicArn}
  Properties:
    Policies:
      - AWSLambdaExecute

Environment Variables with Serverless and AWS Lambda

I am learning the Serverless Framework and I'm making a simple login system.
Here is my serverless.yml file:
service: lms-auth
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-south-1
  environment:
    MONGODB_URI: $(file(../env.yml):MONOGDB_URI)
    JWT_SECRET: $(file(../env.yml):JWT_SECRET)
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
As you can see, I have two environment variables, both referencing a separate file in the same root folder.
Here is that env.yml file
MONOGDB_URI: <MY_MONGO_DB_URI>
JWT_SECRET: LmS_JWt_secREt_auth_PasSWoRds
When I do sls deploy, I see that both the variables are logging as null. The environment variables aren't sent to lambda.
How can I fix this?
Also, currently I'm using this method and adding the env.yml to .gitignore and saving the values. Is there any other efficient way of hiding sensitive data?
I would do something like this to help you out with the syntax
service: lms-auth
custom: ${file(env.yml)}
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-south-1
  environment:
    MONGODB_URI: ${self:custom.mongodb_uri}
    JWT_SECRET: ${self:custom.jwt_secret}
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
Then in your env.yml you can do
mongodb_uri: MY_MONGO_DB_URI
jwt_secret: LmS_JWt_secREt_auth_PasSWoRds
Environment variables
1. Add useDotenv: true to your .yml file.
2. Add your variables like this: ${env:VARIABLE_NAME}
3. Create a file called .env.dev and write the variables there (you can add .env.prod, but then you have to change the stage inside your .yml file).
Example:
service: lms-auth
useDotenv: true
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  environment:
    MONGODB_URI: ${env:MONOGDB_URI}
    JWT_SECRET: ${env:JWT_SECRET}
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
.env.dev
MONOGDB_URI = The URI Value
JWT_SECRET = The JWT Secret Value
I ended up solving it. I had set up my DynamoDB in the AWS us-west region; I reinitialized it in us-east-2 and reset the region under provider in the .yml file.

LambdaFunction - Value of property Variables must be an object with String (or simple type) properties

I am using Serverless to deploy my Lambda-based application. It was deploying just fine, and then it stopped for some reason. I pared down the entire package to the serverless.yml below and one function in the handler, but I keep getting this error:
Serverless Error ---------------------------------------
An error occurred: TestLambdaFunction - Value of property Variables must be an object with String (or simple type) properties.
Stack Trace --------------------------------------------
Here is the serverless.yml
# serverless.yml
service: some-api
provider:
  name: aws
  runtime: nodejs6.10
  stage: prod
  region: us-east-1
  iamRoleStatements:
    $ref: ./user-policy.json
  environment:
    config:
      region: us-east-1
plugins:
  - serverless-local-dev-server
  - serverless-dynamodb-local
  - serverless-step-functions
package:
  exclude:
    - .gitignore
    - package.json
    - README.md
    - .git
    - ./**.test.js
functions:
  test:
    handler: handler.test
    events:
      - http: GET test
resources:
  Outputs:
    NewOutput:
      Description: Description for the output
      Value: Some output value
Test Lambda Function in Package
// handler.test
module.exports.test = (event, context, callback) => {
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({
      message: 'sadfasd',
      input: event
    })
  })
}
Turns out, this issue does not have any relationship to the Lambda function. Here is the issue that caused the error.
This does NOT work:
environment:
  config:
    region: us-east-1
This DOES work:
environment:
  region: us-east-1
Simply put, I don't think you can have more than one level of nesting in your YAML environment variables.
Even if you try sls print as a sanity check, this issue will not pop up. It only appears on sls deploy.
You have been warned, and hopefully saved!
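If you really need structured configuration, one workaround (not part of this answer's original code, just a common pattern) is to flatten it into a single string value, for example JSON, and parse it in the handler:

# serverless.yml (under provider or the function)
environment:
  CONFIG: '{"region": "us-east-1"}'

// handler.js
const config = JSON.parse(process.env.CONFIG || '{}');
console.log(config.region); // prints "us-east-1"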
Another thing that might cause this kind of error is invalid YAML syntax.
It's easy to get confused about this.
Valid syntax for environment variables:
environment:
  key: value
Invalid syntax for environment variables:
environment:
  - key: value
Notice the little dash in the invalid example above?
In YAML syntax, - denotes an array item, so that code is interpreted as an array, not an object.
That's why the error says "Value of property Variables must be an object with String (or simple type) properties."
This can easily be fixed by removing the - in front of the keys.
