serverless events are missing - node.js

I have a problem with my serverless configuration resulting in lambda functions being deployed without their triggers.
I have a main serverless.yml for my skills, as below:
service: ${file(./${env:DEPLOY_FILE_NAME}):service}

provider:
  name: aws

custom:
  globalSchedule: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):globalSchedule}
  roleName: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):roleName}
  profileName: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):profileName}

plugins:
  - pluginHandler

runtime: nodejs4.3
cfLogs: true
stage: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):stage}
region: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):region}
memorySize: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):memorySize}
timeout: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):timeout}
keepWarm: false
useApigateway: false

events:
  ${file(./${env:DEPLOY_FILE_NAME}):events}

package:
  exclude:
    ${file(./${env:DEPLOY_FILE_NAME}):exclude}

functions:
  smartHome:
    handler: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):handler}
Then I have two sets of YAML settings files: one per ${skill_type}_${localization} (e.g. customskill_eu.yml) and a stage-specific one per ${skill_type}_${localization}_${stage} (e.g. smarthome_us_dev.yml). Here is one of them:
service: alexa-SmartHomeSkillAdapter

exclude:
  - app.js
  - .idea/**
  - .npmignore/**
  - .jshintrc
  - build/**
  - documentation.docx
  - dist/**
  - event.json
  - lambda_function_custom_skill.js
  - resources/**
  - custom_skill_eu.yml
  - custom_skill_us.yml
  - smart_home_eu.yml
  - smart_home_us.yml
  - serverless_settings/**
  - tests/**

events:
  - s3: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):s3}
  - alexaSmartHome: amzn1.ask.skill.d48263be-c7ef-4d61-a773-d6431567e6d6
What is wrong? Please advise.
Thank you.

You need to add events to your functions. Have a read through the Serverless documentation for events.
Currently Serverless supports invoking Lambda functions via API Gateway, Kinesis, DynamoDB, S3, Schedule, SNS, and Alexa Skill.
So in this case, adding the required events tag to the function should solve your problem:
...
functions:
  smartHome:
    handler: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):handler}
    events: ${file(./${env:DEPLOY_FILE_NAME}):events}
...
Alternatively, you can always define resources and their actions in plain CloudFormation format within the serverless.yml resources node.
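For example, a minimal sketch of wiring the Alexa Smart Home trigger through raw CloudFormation (the logical ID SmartHomeLambdaFunction assumes the framework's naming convention for a function called smartHome; treat the names here as assumptions, not the original poster's setup):

resources:
  Resources:
    SmartHomeAlexaPermission:
      Type: AWS::Lambda::Permission
      Properties:
        Action: lambda:InvokeFunction
        # Serverless generates the function resource as <Name>LambdaFunction
        FunctionName:
          Fn::GetAtt: [SmartHomeLambdaFunction, Arn]
        Principal: alexa-connectedhome.amazon.com
        # The Alexa skill ID acts as the event source token
        EventSourceToken: amzn1.ask.skill.d48263be-c7ef-4d61-a773-d6431567e6d6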

Related

Configuring Serverless to handle required path parameters

I'm new to Serverless and writing my first service. It is built for AWS API Gateway and Node.js Lambda functions. Consider this my serverless.yaml file:
service: applicationCatalog
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs12.x

functions:
  listShirts:
    handler: handler.listShirts
    events:
      - httpApi: GET /
  createShirt:
    handler: handler.createShirt
    events:
      - httpApi: POST /
  getShirt:
    handler: handler.getShirt
    events:
      - httpApi:
          method: GET
          path: "/{shirtId}"
          request:
            parameters:
              paths:
                shirtId: true
  deleteShirt:
    handler: handler.deleteShirt
    events:
      - httpApi:
          method: DELETE
          path: "/{shirtId}"
          request:
            parameters:
              paths:
                shirtId: true

resources: {}
The functions listShirts, createShirt, and getShirt all work as I expected, and deleteShirt works too when a shirtId is passed. The issue is when I don't pass the shirtId on a delete. Assuming my service URL is "https://shirts.mywardrobeapi.com", I'd expect this request:
DELETE https://shirts.mywardrobeapi.com
to trigger an error response from API Gateway. Instead, the deleteShirt function is invoked. Of course I could handle this simple check inside the function, but I thought that's what the { "shirtId" : true } setting in the serverless.yaml file was for. How can I get this setting to treat shirtId as required and not invoke the function when it's not provided? And if I can't, what is the purpose of this setting?
I would suggest using Middy and its validator middleware for handling required parameters.
Yep, the disadvantage is that your Lambda is triggered every time. But you also gain:
- the flexibility of handling required params in code
- clear logging when the parameters are wrong
- the option to validate outgoing responses as well
- a cleaner serverless.yaml
We prefer Middy over fine-grained API Gateway configuration.
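A minimal sketch of what that looks like (assuming a Middy v2-style API; the handler body and schema are illustrative, not from the original post):

// handler.js
const middy = require('@middy/core')
const validator = require('@middy/validator')
const httpErrorHandler = require('@middy/http-error-handler')

// Plain business logic; middleware runs before it.
const baseHandler = async (event) => {
  const { shirtId } = event.pathParameters
  // ... delete the shirt here ...
  return { statusCode: 204 }
}

// JSON schema that makes shirtId in the path parameters mandatory.
const inputSchema = {
  type: 'object',
  required: ['pathParameters'],
  properties: {
    pathParameters: {
      type: 'object',
      required: ['shirtId'],
      properties: {
        shirtId: { type: 'string' },
      },
    },
  },
}

module.exports.deleteShirt = middy(baseHandler)
  .use(validator({ inputSchema })) // rejects with 400 Bad Request if shirtId is missing
  .use(httpErrorHandler())         // turns validation errors into HTTP responses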

Resources > Repository triggers not firing and default triggers not disabled in Azure DevOps yaml pipeline

I have triggers set up in our azure-pipelines.yml as shown below:
The scriptsconn resource represents a connection to the default/self repo that contains the deployment pipeline YAML.
The serviceconn resource represents a microservice repo we are building and deploying using the template and publish tasks.
We have multiple microservices with similar build pipelines, so this approach is an attempt to lessen the amount of work needed to update these steps.
Right now the issue we're running into is twofold:
no matter what branch we specify in the scriptsconn resources -> repositories section, the build triggers for every commit to every branch in the repo.
no matter how we configure the trigger for serviceconn, we cannot get the build to trigger for any commit, PR created, or PR merged.
According to the link below this configuration should be pretty straightforward. Can someone point out what mistake we're making?
https://github.com/microsoft/azure-pipelines-yaml/blob/master/design/pipeline-triggers.md#repositories
resources:
  repositories:
    - repository: scriptsconn
      type: bitbucket
      endpoint: BitbucketAzurePipelines
      name: $(scripts.name)
      ref: $(scripts.branch)
      trigger:
        - develop
    - repository: serviceconn
      type: bitbucket
      endpoint: BitbucketAzurePipelines
      name: (service.name)
      ref: $(service.branch)
      trigger:
        - develop
      pr:
        branches:
          - develop

variables:
  - name: service.path
    value: $(Agent.BuildDirectory)/s/$(service.name)
  - name: scripts.path
    value: $(Agent.BuildDirectory)/s/$(scripts.name)
  - name: scripts.output
    value: $(scripts.path)/$(release.folder)/$(release.filename)
  - group: DeploymentScriptVariables.Dev

stages:
  - stage: Build
    displayName: Build and push an image
    jobs:
      - job: Build
        displayName: Build
        pool:
          name: 'Self Hosted 1804'
        steps:
          - checkout: scriptsconn
          - checkout: serviceconn
The document you linked to is actually a design document, so it's possible/likely that not everything on that page is implemented. In the design document I also see this line:
However, triggers are not enabled on repository resource today. So, we will keep the current behavior and in the next version of YAML we will enable the triggers by default.
The current docs on the YAML schema seem to indicate that triggers are not supported on Repository Resources yet.
Just as an FYI, you can see the currently supported YAML schema at this URL:
https://dev.azure.com/{organization}/_apis/distributedtask/yamlschema?api-version=5.1
I am not 100% sure what you are after template-wise. As a general suggestion, if you are going with the reusable template workflow, you could trigger from an azure-pipelines.yml file in each of your microservice repos, consuming the reusable steps from your template. Hope that helps!
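For example, a hypothetical azure-pipelines.yml placed in each microservice repo (the repository name myteam/deployment-scripts and template file build-and-push.yml are illustrative):

# azure-pipelines.yml in the microservice repo
trigger:
  branches:
    include:
      - develop

resources:
  repositories:
    # points at the repo that holds the shared template
    - repository: templates
      type: bitbucket
      endpoint: BitbucketAzurePipelines
      name: myteam/deployment-scripts

stages:
  # build-and-push.yml contains the shared stages/steps
  - template: build-and-push.yml@templates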

How to wait for CodePipeline to deploy the function in Cloudformation

I have a CloudFormation template and I want to create Aurora (MySQL) tables through it. However, there is no built-in resource for that, so I decided to build a custom resource function to create the tables upon DBCluster creation. Moreover, since CI/CD pipelines can also be created by CloudFormation, I prepared the template below. However, it throws an error:
Function not found: arn:aws:lambda:eu-central-1:xxxxxxxxxxxx:MyFunctionName (Service: AWSLambda; Status Code: 404; Error Code: ResourceNotFoundException; Request ID: ...)
Apparently, the CustomResource runs as soon as the pipeline is created, but I need to wait for the pipeline's first deployment of the function in order to use it in the custom resource. I thought the property RestartExecutionOnUpdate: true on AWS::CodePipeline::Pipeline and adding DependsOn to Custom::RdsBootstrap would help, but they did not.
Resources:
  # Serverless Aurora DB Cluster
  MyDbCluster:
    Type: AWS::RDS::DBCluster
    ...
  # Build Project
  MyCustomResourceFunctionBuildProject:
    Type: AWS::CodeBuild::Project
    ...
  # Pipeline for deploying Custom Resource Function Source Code
  MyCustomResourceFunctionPipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: custom-resource-function-pipeline
      RestartExecutionOnUpdate: true
      Stages:
        - Name: Source
          ...
        - Name: Build
          ...
        - Name: Pipeline
          ...
  # Custom Resource Function
  RdsBootstrap:
    Type: Custom::RdsBootstrap
    DependsOn: [MyDbCluster, MyCustomResourceFunctionPipeline]
    Version: '1.0'
    Properties:
      ServiceToken: !Sub "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:MyFunctionName"
So, how do I make the custom resource wait not only for the pipeline's creation, but also for its initial deployment?
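One pattern that may help here (a hypothetical sketch, not a confirmed fix from this thread): gate the custom resource on an AWS::CloudFormation::WaitCondition, and have the pipeline's final action signal the wait condition's presigned URL once the function has actually been deployed:

Resources:
  # The pipeline signals this handle after deploying the function, by making an
  # HTTP PUT of a small JSON status document to the presigned URL (e.g. via curl
  # in a final CodeBuild action).
  FunctionDeployedHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
  FunctionDeployedCondition:
    Type: AWS::CloudFormation::WaitCondition
    DependsOn: MyCustomResourceFunctionPipeline
    Properties:
      Handle: !Ref FunctionDeployedHandle
      Timeout: '3600'
  RdsBootstrap:
    Type: Custom::RdsBootstrap
    # Now waits for the deployment signal, not just pipeline creation
    DependsOn: [MyDbCluster, FunctionDeployedCondition]
    Properties:
      ServiceToken: !Sub "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:MyFunctionName"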

Message Hub as event source in Serverless project doesn't create any triggers or rules

I'm trying to set up a Message Hub topic as an event source for a cloud function, like so:
custom:
  org: MyOrganization
  space: dev
  mhServiceName: my-kafka-service

functions:
  main:
    handler: src/handler.main
    events:
      - message_hub:
        package: /${self:custom.org}_${self:custom.space}/Bluemix_${self:custom.mhServiceName}_Credentials-1
        topic: test_topic
When I deploy the service, there are no triggers or rules being created. Thus the function is not being invoked when I publish messages to the Kafka topic.
I also tried to explicitly set a trigger and rule, but that only creates a trigger of type custom, instead of type message hub. Custom triggers seem to not work in this scenario.
What am I missing?
Update
As James pointed out, the reason the triggers and rules were not created was that the indentation wasn't correct.
I was still running into problems with the package not being found (see my reply to James' solution) when trying to deploy the function, and I've found out what the problem was.
It turns out you have to do two more things that are not explicitly mentioned in the documentation:
1) You have to manually create the service credentials (the documentation assumes you called them Credentials-1, so I did the same).
2) You have to bind Kafka (Message Hub, now called Event Streams) to your function in your serverless.yml.
The resulting function definition should look like this:
functions:
  main:
    handler: src/handler.main
    bind:
      - service:
          name: messagehub
          instance: ${self:custom.mhServiceName}
    events:
      - message_hub:
          package: /${self:custom.org}_${self:custom.space}/Bluemix_${self:custom.mhServiceName}_Credentials-1
          topic: test_topic
The YAML indentation on the serverless.yml is incorrect. This means the event properties aren't registered by the framework during deployment.
Change the serverless.yml file to the following format and it should work.
custom:
  org: MyOrganization
  space: dev
  mhServiceName: my-kafka-service

functions:
  main:
    handler: src/handler.main
    events:
      - message_hub:
          package: /${self:custom.org}_${self:custom.space}/Bluemix_${self:custom.mhServiceName}_Credentials-1
          topic: test_topic
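For reference, the extra indentation matters because of YAML semantics: keys indented to the same level as message_hub become siblings of it inside the list item rather than its properties, so the framework sees an event with no package or topic. Schematically (a sketch of the general YAML pitfall, not the poster's exact file):

events:
  - message_hub:    # wrong: package/topic are siblings of message_hub
    package: ...
    topic: ...

events:
  - message_hub:    # right: package/topic are properties of message_hub
      package: ...
      topic: ...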

AWS SAM FindInMap Not Populating Variable

I am trying to get a simple SAM template to populate environment variables "dynamically" using the !FindInMap intrinsic function. I have followed many examples, including AWS's documentation, without any luck. For some reason the function will not populate environment variables; it just sets the variable to an empty string.
You can see from the code below that I am using a !Ref inside of it, but I have also tried hardcoding the function's parameters without any luck. You'll notice that the function is in the Globals section, and you may think it's not working because it's there rather than in the function's own environment variables, but I've tried both with neither working. You'll also notice that I am populating an environment variable called STAGE, which works correctly and is set to "local".
I am testing the function by running sam local start-api and outputting the environment variables in the response.
Any suggestions would be very helpful.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: "Test Server"

Parameters:
  Environment:
    Type: String
    Default: local
    AllowedValues:
      - local
      - test
      - prod

Mappings:
  EnvParams:
    local:
      stage: "local"
      databaseUrl: "mongodb://localhost:32768/test"

Globals:
  Function:
    Timeout: 500
    Runtime: nodejs8.10
    Environment:
      Variables:
        STAGE: !Ref Environment
        DB_URL: !FindInMap [EnvParams, !Ref Environment, databaseUrl]

Resources:
  ArticlesGetFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: src/articles/
      Handler: index.getById
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /api/article/
            Method: get

Outputs:
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn
It looks like !FindInMap isn't supported in local debugging yet. Here's the relevant GitHub issue:
https://github.com/awslabs/aws-sam-cli/issues/476
To set and test environment variables with the SAM CLI, you can use the --env-vars option instead. !FindInMap is supported when deploying via CloudFormation, so you could test this feature by deploying a simple Lambda function and running a test query against it.
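For example, with a JSON file of per-function overrides, keyed by the function's logical ID (file name and values are illustrative):

env.json:

{
  "ArticlesGetFunction": {
    "STAGE": "local",
    "DB_URL": "mongodb://localhost:32768/test"
  }
}

Then start the local API with:

sam local start-api --env-vars env.json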
I had a similar error because of this:
!FindInMap [EnvMap, !Ref Stage, dbpass] - correct
!FindInMap [EnvMap, !Ref Stage, dbpass] - error
