How to reference an IAM role (created in the parent stack) in a nested stack

How can I reference an IAM role that was created in the parent stack from a nested stack?
Here are my YAML files for the parent and child stacks.
I tried both !Ref and !GetAtt, but neither works.
Parent Stack:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description:
  SAM Template for Nested application resources
Resources:
  Layer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      CompatibleRuntimes:
        - nodejs16.x
      Content:
        S3Bucket: bucketName
        S3Key: Key
      Description: My layer
      LayerName: lambdaLayer
      LicenseInfo: MIT
  SourceIAMRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: source-lambda-iam-omni-agent-role
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: ''
            Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: translate-policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - comprehend:DetectDominantLanguage
                  - translate:TranslateText
                Resource: '*'
        - PolicyName: invokeLambda-sns-sqs-sm-policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - lambda:InvokeAsync
                  - lambda:InvokeFunction
                  - sns:Publish
                  - iam:ListRoles
                  - iam:GetRole
                  - secretsmanager:GetSecretValue
                  - secretsmanager:ListSecrets
                  - secretsmanager:UpdateSecret
                  - sqs:*
                Resource: '*'
        - PolicyName: sts-policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - sts:AssumeRole
                Resource: '*'
        - PolicyName: write-cloudwatch-logs-policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogStream
                  - logs:CreateLogGroup
                  - logs:PutLogEvents
                Resource: '*'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaKinesisExecutionRole
        - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
        - arn:aws:iam::aws:policy/AmazonAPIGatewayInvokeFullAccess
        - arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
        - arn:aws:iam::aws:policy/AmazonConnect_FullAccess
  config:
    Type: AWS::Serverless::Application
    Properties:
      Location: customerconfig.yml
      Parameters:
        LayerARN: !Ref Layer
        IAMRole: !Ref SourceIAMRole
    DependsOn:
      - Layer
      - SourceIAMRole
I did pass the IAM role to the nested stack as a parameter, as you can see above in the parent stack. Now I need to pass the ARN created in the parent stack down to the child (nested) stack.
Child stack (Nested)
AWSTemplateFormatVersion: 2010-09-09
Description:
  Start from scratch starter project
Transform: AWS::Serverless-2016-10-31
Globals:
  Function:
    Runtime: nodejs16.x
    Timeout: 15
    CodeUri: ./
    VpcConfig:
      SecurityGroupIds:
        - sg-005941cb59bd3c74e
      SubnetIds:
        - subnet-083b8c9bc31cefb69
        - subnet-0f77f5b03c7fc1bc7
    Layers:
      - !Ref LayerARN
    MemorySize: 128
Parameters:
  LayerARN:
    Type: String
  IAMRole:
    Type: String
Resources:
  helloFromLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Role: !GetAtt IAMRole.Arn
      Handler: api/getapplicationconfig.handler
      Description: A Lambda function that returns a static string.

I did manage to make it work. When passing the parameters to the nested stack, use
!GetAtt SourceIAMRole.Arn for the IAM role:
config:
  Type: AWS::Serverless::Application
  Properties:
    Location: customerconfig.yml
    Parameters:
      LayerARN: !Ref Layer
      IAMRole: !GetAtt SourceIAMRole.Arn
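
One related detail: in the child stack, IAMRole is a plain String parameter, so !GetAtt IAMRole.Arn cannot resolve (Fn::GetAtt only works on resources, not parameters). Once the parent passes the full ARN as above, the child can use the parameter directly. A minimal sketch of the child-stack function with that change:

Resources:
  helloFromLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Role: !Ref IAMRole   # the parameter already contains the role ARN
      Handler: api/getapplicationconfig.handler
      Description: A Lambda function that returns a static string.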

Related

Generate communication between lambdas - AWS SAM - Step Functions

I am learning AWS SAM. When a Lambda receives a payload, it is responsible for verifying that the input is what is expected; if it is, another Lambda is called to tokenize the data, and once that process finishes, two more Lambdas are called to save the results.
My template.yaml file is organized as follows:
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: >
lambda-data-dictionary-register
Sample SAM Template for step-function
Resources:
StockTradingStateMachine:
Type: AWS::Serverless::StateMachine # More info about State Machine Resource: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-statemachine.html
Properties:
DefinitionUri: statemachine/data-dictionary.asl.json
DefinitionSubstitutions:
DataDictionaryFunctionArn: !GetAtt DataDictionaryFunction.Arn
TokenizeFunctionArn: !GetAtt TokenizeFunction.Arn
MongoDBFunctionArn: !GetAtt MongoDBFunction.Arn
RedisFunctionArn: !GetAtt RedisFunction.Arn
Events:
HourlyTradingSchedule:
Type: Schedule # More info about Schedule Event Source: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-statemachine-schedule.html
Properties:
Description: Schedule to run the stock trading state machine every hour
Enabled: False # This schedule is disabled by default to avoid incurring charges.
Schedule: "rate(1 hour)"
Policies: # Find out more about SAM policy templates: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-templates.html
- LambdaInvokePolicy:
FunctionName: !Ref DataDictionaryFunction
- LambdaInvokePolicy:
FunctionName: !Ref TokenizeFunction
- LambdaInvokePolicy:
FunctionName: !Ref MongoDBFunction
- LambdaInvokePolicy:
FunctionName: !Ref RedisFunction
DataDictionaryFunction:
Type: AWS::Serverless::Function # More info about Function Resource: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html
Properties:
CodeUri: functions/data-dictionary-register/
Handler: app.lambdaHandler
Runtime: nodejs16.x
Architectures:
- x86_64
Events:
Api:
Type: Api
Properties:
Path: /api/data-dictionary-register
Method: GET
Metadata:
BuildMethod: esbuild
BuildProperties:
Minify: false
Target: 'es2020'
Sourcemap: true
UseNpmCi: true
TokenizeFunction:
Type: AWS::Serverless::Function # More info about Function Resource: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html
Properties:
CodeUri: functions/tokenize-data/
Handler: app.lambdaHandler
Runtime: nodejs16.x
Architectures:
- x86_64
Events:
Api:
Type: Api
Properties:
Path: /api/tokenize-data
Method: GET
Metadata:
BuildMethod: esbuild
BuildProperties:
Minify: false
Target: 'es2020'
Sourcemap: true
UseNpmCi: true
MongoDBFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: functions/lambda-mongo-db/
Handler: app.lambdaHandler
Runtime: nodejs16.x
Architectures:
- x86_64
Events:
Api:
Type: Api
Properties:
Path: /api/lambda-mongo-db
Method: GET
Metadata:
BuildMethod: esbuild
BuildProperties:
Minify: false
Target: 'es2020'
Sourcemap: true
UseNpmCi: true
RedisFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: functions/lambda-redis/
Handler: app.lambdaHandler
Runtime: nodejs16.x
Architectures:
- x86_64
Events:
Api:
Type: Api
Properties:
Path: /api/lambda-redis
Method: GET
Metadata:
BuildMethod: esbuild
BuildProperties:
Minify: false
Target: 'es2020'
Sourcemap: true
UseNpmCi: true
With all of the above, I want to know how I can make the data-dictionary-register Lambda, when it processes incoming data successfully, pass a JSON payload to the tokenize-data Lambda, which in turn sends it to the other two (lambda-mongo-db and lambda-redis). I want to emphasize that I am working in my local environment and would ideally keep everything there for now.
Ultimately, my question is: how do I make the end of one successful process the start of another?
My test handlers currently live in their corresponding app.ts files. At one point I tried Axios, posting for example to http://127.0.0.1:8080/api/tokenize-data from http://127.0.0.1:8080/api/data-dictionary-register, but it always returned an error, and while reading around I ran into Step Functions...
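
For what it's worth, the chaining itself is what the state machine definition expresses: Step Functions passes each Task's output to the next state as input, so the end of one successful step becomes the start of the next automatically. A minimal sketch of such a definition, written inline as Definition instead of DefinitionUri so it stays in YAML (the state names are illustrative):

  StockTradingStateMachine:
    Type: AWS::Serverless::StateMachine
    Properties:
      Definition:
        Comment: Each task's output is fed to the next state as input
        StartAt: RegisterDataDictionary
        States:
          RegisterDataDictionary:
            Type: Task
            Resource: !GetAtt DataDictionaryFunction.Arn
            Next: TokenizeData
          TokenizeData:
            Type: Task
            Resource: !GetAtt TokenizeFunction.Arn
            Next: SaveResults
          SaveResults:
            Type: Parallel       # fan the tokenized payload out to both stores
            End: true
            Branches:
              - StartAt: SaveToMongo
                States:
                  SaveToMongo:
                    Type: Task
                    Resource: !GetAtt MongoDBFunction.Arn
                    End: true
              - StartAt: SaveToRedis
                States:
                  SaveToRedis:
                    Type: Task
                    Resource: !GetAtt RedisFunction.Arn
                    End: true

For purely local runs, AWS's Step Functions Local Docker image can execute a definition like this against functions started with sam local start-lambda, though that setup is outside what the template above shows.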

Socket IO not working with multiple node js servers

I have 3 servers on AWS. When user A connects to server 1 and user B connects to server 2, they are unable to join or chat, but if both connect to the same server they join successfully.
What would be the best way to handle this situation?
On AWS, I am using a Load Balancer with Auto Scaling.
From socket.io's documentation:
Now that you have multiple Socket.IO nodes accepting connections, if you want to broadcast events to everyone (or even everyone in a certain room) you’ll need some way of passing messages between processes or computers.
The interface in charge of routing messages is what we call the Adapter. You can implement your own on top of the socket.io-adapter (by inheriting from it) or you can use the one we provide on top of Redis: socket.io-redis:
Implementing the Redis adapter in a socket.io-powered application for horizontal scaling takes just a few lines of added code in index.js:
var io = require('socket.io')(3000);
var redis = require('socket.io-redis');
io.adapter(redis({ host: 'localhost', port: 6379 }));
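
On AWS, the Redis instance the adapter points at would typically be an ElastiCache cluster rather than localhost. A minimal CloudFormation sketch, where the subnet group and security group resources are hypothetical placeholders you would define elsewhere in the stack:

Resources:
  SocketIORedis:
    Type: AWS::ElastiCache::CacheCluster
    Properties:
      Engine: redis
      CacheNodeType: cache.t3.micro
      NumCacheNodes: 1
      CacheSubnetGroupName: !Ref RedisSubnetGroup        # hypothetical AWS::ElastiCache::SubnetGroup
      VpcSecurityGroupIds:
        - !GetAtt RedisSecurityGroup.GroupId             # hypothetical AWS::EC2::SecurityGroup
Outputs:
  RedisEndpoint:
    Value: !GetAtt SocketIORedis.RedisEndpoint.Address   # use this host in io.adapter(...) instead of localhost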
To enable auto scaling for the service, add the following to the scaling-policy.yml file:
# Enable autoscaling for the service
ScalableTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  DependsOn: Service
  Properties:
    ServiceNamespace: 'ecs'
    ScalableDimension: 'ecs:service:DesiredCount'
    ResourceId:
      Fn::Join:
        - '/'
        - - service
          - Fn::ImportValue: !Join [':', [!Ref 'EnvironmentName', 'ClusterName']]
          - !Ref 'ServiceName'
    MinCapacity: 2
    MaxCapacity: 10
    RoleARN:
      Fn::ImportValue:
        !Join [':', [!Ref 'EnvironmentName', 'AutoscalingRole']]

# Create scaling policies for the service
ScaleDownPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  DependsOn: ScalableTarget
  Properties:
    PolicyName:
      Fn::Join:
        - '/'
        - - scale
          - !Ref 'EnvironmentName'
          - !Ref 'ServiceName'
          - down
    PolicyType: StepScaling
    ResourceId:
      Fn::Join:
        - '/'
        - - service
          - Fn::ImportValue: !Join [':', [!Ref 'EnvironmentName', 'ClusterName']]
          - !Ref 'ServiceName'
    ScalableDimension: 'ecs:service:DesiredCount'
    ServiceNamespace: 'ecs'
    StepScalingPolicyConfiguration:
      AdjustmentType: 'ChangeInCapacity'
      StepAdjustments:
        - MetricIntervalUpperBound: 0
          ScalingAdjustment: -1
      MetricAggregationType: 'Average'
      Cooldown: 60

ScaleUpPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  DependsOn: ScalableTarget
  Properties:
    PolicyName:
      Fn::Join:
        - '/'
        - - scale
          - !Ref 'EnvironmentName'
          - !Ref 'ServiceName'
          - up
    PolicyType: StepScaling
    ResourceId:
      Fn::Join:
        - '/'
        - - service
          - Fn::ImportValue: !Join [':', [!Ref 'EnvironmentName', 'ClusterName']]
          - !Ref 'ServiceName'
    ScalableDimension: 'ecs:service:DesiredCount'
    ServiceNamespace: 'ecs'
    StepScalingPolicyConfiguration:
      AdjustmentType: 'ChangeInCapacity'
      StepAdjustments:
        - MetricIntervalLowerBound: 0
          MetricIntervalUpperBound: 15
          ScalingAdjustment: 1
        - MetricIntervalLowerBound: 15
          MetricIntervalUpperBound: 25
          ScalingAdjustment: 2
        - MetricIntervalLowerBound: 25
          ScalingAdjustment: 3
      MetricAggregationType: 'Average'
      Cooldown: 60

# Create alarms to trigger these policies
LowCpuUsageAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmName:
      Fn::Join:
        - '-'
        - - low-cpu
          - !Ref 'EnvironmentName'
          - !Ref 'ServiceName'
    AlarmDescription:
      Fn::Join:
        - ' '
        - - "Low CPU utilization for service"
          - !Ref 'ServiceName'
          - "in stack"
          - !Ref 'EnvironmentName'
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Dimensions:
      - Name: ServiceName
        Value: !Ref 'ServiceName'
      - Name: ClusterName
        Value:
          Fn::ImportValue: !Join [':', [!Ref 'EnvironmentName', 'ClusterName']]
    Statistic: Average
    Period: 60
    EvaluationPeriods: 1
    Threshold: 20
    ComparisonOperator: LessThanOrEqualToThreshold
    AlarmActions:
      - !Ref ScaleDownPolicy

HighCpuUsageAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmName:
      Fn::Join:
        - '-'
        - - high-cpu
          - !Ref 'EnvironmentName'
          - !Ref 'ServiceName'
    AlarmDescription:
      Fn::Join:
        - ' '
        - - "High CPU utilization for service"
          - !Ref 'ServiceName'
          - "in stack"
          - !Ref 'EnvironmentName'
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Dimensions:
      - Name: ServiceName
        Value: !Ref 'ServiceName'
      - Name: ClusterName
        Value:
          Fn::ImportValue: !Join [':', [!Ref 'EnvironmentName', 'ClusterName']]
    Statistic: Average
    Period: 60
    EvaluationPeriods: 1
    Threshold: 70
    ComparisonOperator: GreaterThanOrEqualToThreshold
    AlarmActions:
      - !Ref ScaleUpPolicy

AWS SAM Schedule Lambda is not triggering as per Schedule

I have a use case where I need to call an API every 2 minutes to check for updates and store the result in a database. For this, I am trying to schedule a Lambda function with the AWS SAM CLI using Python, but my Lambda function is not getting triggered. Below is my code:
app.py
def lambda_schedule(event, context):
    print("Lambda Schedule event started Successfully......")
    print("Lambda function ARN:", context.invoked_function_arn)
    print("CloudWatch log stream name:", context.log_stream_name)
    print("CloudWatch log group name:", context.log_group_name)
    print("Lambda Request ID:", context.aws_request_id)
    print("Lambda Schedule event ended Successfully......")
template.yaml
CronLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: hello_world/
    Handler: app.lambda_schedule
    Runtime: python3.8
    Events:
      PullBalanceScheduleRule:
        Type: AWS::Events::Rule
        Properties:
          EventPattern:
            source:
              - "aws.events"

PullBalanceScheduleRule:
  Type: AWS::Events::Rule
  Properties:
    Description: "PullBalanceScheduleRule"
    ScheduleExpression: "rate(2 minutes)"
    State: "ENABLED"
    Targets:
      - Arn: !GetAtt CronLambdaFunction.Arn
        Id: "CronLambdaFunction"

PermissionForEventsToInvokeLambda:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref "PullBalanceScheduleRule"
    Action: "lambda:InvokeFunction"
    Principal: "events.amazonaws.com"
    SourceArn:
      - Arn: !GetAtt PullBalanceScheduleRule.Arn
        Id: "PullBalanceScheduleRule"
Can anyone tell me what is wrong or missing in my code?
I found my mistakes: they were in the Permission section. I am posting the correct YAML configuration here so it can help others who are new to the AWS SAM CLI.
app.py
def lambda_schedule(event, context):
    print("Lambda Schedule event started Successfully......")
    print("Lambda function ARN:", context.invoked_function_arn)
    print("CloudWatch log stream name:", context.log_stream_name)
    print("CloudWatch log group name:", context.log_group_name)
    print("Lambda Request ID:", context.aws_request_id)
    print("Lambda Schedule event ended Successfully......")
template.yaml
CronLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: hello_world/
    Handler: app.lambda_schedule
    Runtime: python3.8
    Events:
      PullBalanceScheduleRule:
        Type: AWS::Events::Rule
        Properties:
          EventPattern:
            source:
              - "aws.events"

PullBalanceScheduleRule:
  Type: AWS::Events::Rule
  Properties:
    Description: "PullBalanceScheduleRule"
    ScheduleExpression: "rate(2 minutes)"
    State: "ENABLED"
    Targets:
      - Arn: !GetAtt CronLambdaFunction.Arn
        Id: "CronLambdaFunction"

PermissionForEventsToInvokeLambda:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref "CronLambdaFunction"
    Action: "lambda:InvokeFunction"
    Principal: "events.amazonaws.com"
    SourceArn:
      Fn::GetAtt:
        - "PullBalanceScheduleRule"
        - "Arn"

serverless step function state machine not created on sls deploy

I am trying to set up a step function in my serverless application, but when I deploy to AWS the state machine for the step function is not created. Here is my serverless.yml file. I have no idea what I am doing wrong, but there must be something in my setup causing the state machine creation to fail.
service: help-please
provider:
  name: aws
  versionFunctions: false
  runtime: nodejs12.x
  vpc:
    securityGroupIds:
      - sg
    subnetIds:
      - subnet
      - subnet
  stage: dev
  region: us-west-2
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - 'xray:PutTraceSegments'
        - 'xray:PutTelemetryRecords'
        - 'sns:*'
        - 'states:*'
      Resource: '*'
functions:
  upsertNewCustomerRecord:
    handler: .build/handler.upsertNewCustomerRecord
    iamRoleStatements:
      - Effect: 'Allow'
        Action:
          - logs:CreateLogGroup
          - logs:CreateLogStream
          - logs:PutLogEvents
          - logs:DescribeLogGroups
          - ec2:CreateNetworkInterface
          - ec2:DescribeNetworkInterfaces
          - ec2:DeleteNetworkInterface
          - cognito-idp:AdminInitiateAuth
          - cognito-idp:DescribeUserPool
          - cognito-idp:DescribeUserPoolClient
          - cognito-idp:ListUserPoolClients
          - cognito-idp:ListUserPools
          - 'xray:PutTraceSegments'
          - 'xray:PutTelemetryRecords'
        Resource: '*'
  sendNewCustomerEmail:
    handler: .build/handler.sendNewCustomerEmail
    iamRoleStatements:
      - Effect: 'Allow'
        Action:
          - logs:DescribeLogGroups
          - logs:CreateLogGroup
          - logs:CreateLogStream
          - logs:PutLogEvents
          - ec2:CreateNetworkInterface
          - ec2:DescribeNetworkInterfaces
          - ec2:DeleteNetworkInterface
          - cognito-idp:AdminInitiateAuth
          - cognito-idp:DescribeUserPool
          - cognito-idp:DescribeUserPoolClient
          - cognito-idp:ListUserPoolClients
          - cognito-idp:ListUserPools
          - 'xray:PutTraceSegments'
          - 'xray:PutTelemetryRecords'
        Resource: '*'
  upsertCognitoUser:
    handler: .build/handler.upsertCognitoUser
    iamRoleStatements:
      - Effect: 'Allow'
        Action:
          - logs:CreateLogGroup
          - logs:CreateLogStream
          - logs:PutLogEvents
          - logs:DescribeLogGroups
          - ec2:CreateNetworkInterface
          - ec2:DescribeNetworkInterfaces
          - ec2:DeleteNetworkInterface
          - cognito-idp:AdminInitiateAuth
          - cognito-idp:DescribeUserPool
          - cognito-idp:DescribeUserPoolClient
          - cognito-idp:ListUserPoolClients
          - cognito-idp:ListUserPools
          - 'xray:PutTraceSegments'
          - 'xray:PutTelemetryRecords'
        Resource: '*'
stepFunctions:
  stateMachines:
    signupstepfunc:
      definition:
        Comment: 'Sign up step function'
        StartAt: UpsertNewCustomerRecord
        States:
          UpsertNewCustomerRecord:
            Type: Task
            Resource: 'arn'
            Next: SendNewCustomerEmail
          SendNewCustomerEmail:
            Type: Task
            Resource: 'arn'
            Next: UpsertCognitoUser
          UpsertCognitoUser:
            Type: Task
            Resource: 'arn'
            End: true
plugins:
  - serverless-plugin-typescript
  - serverless-offline
  - serverless-iam-roles-per-function
  - serverless-plugin-tracing
  - serverless-step-functions
  - serverless-pseudo-parameters
  - serverless-prune-plugin
Going by the YAML file provided, I think the issue might be the indentation of your function definition:
functions:
  upsertNewCustomerRecord:
  handler: .build/handler.upsertNewCustomerRecord
  iamRoleStatements:
must be replaced by this, with handler and iamRoleStatements nested one level deeper, under the function name:
functions:
  upsertNewCustomerRecord:
    handler: .build/handler.upsertNewCustomerRecord
    iamRoleStatements:
Can you make this change and try it once?
Well, unfortunately I figured out that I had fat-fingered a backslash in my buildspec file, and that's what caused this to occur. Be wary of this: when deploying with Serverless, your build won't fail, and you'll have to dig into every file just to figure out what's going on. Just something to keep in mind.
In my case, I simply forgot to put End: true on the last state of the step function. Maybe this will help.
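
For reference, with the serverless-step-functions plugin the Task Resource fields are usually wired to the deployed functions via Fn::GetAtt on the logical IDs the Framework generates (<functionName>LambdaFunction), rather than hard-coded ARN strings. A minimal sketch, assuming the function names above:

stepFunctions:
  stateMachines:
    signupstepfunc:
      definition:
        Comment: 'Sign up step function'
        StartAt: UpsertNewCustomerRecord
        States:
          UpsertNewCustomerRecord:
            Type: Task
            Resource:
              Fn::GetAtt: [UpsertNewCustomerRecordLambdaFunction, Arn]
            Next: SendNewCustomerEmail
          SendNewCustomerEmail:
            Type: Task
            Resource:
              Fn::GetAtt: [SendNewCustomerEmailLambdaFunction, Arn]
            Next: UpsertCognitoUser
          UpsertCognitoUser:
            Type: Task
            Resource:
              Fn::GetAtt: [UpsertCognitoUserLambdaFunction, Arn]
            End: true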

Lambda@Edge not triggered by an origin-request event

I'm trying to configure Lambda@Edge functions using CloudFormation. After deploying the template everything looks fine in the console; however, the Lambda functions listening to origin-request events are not being triggered.
Strangely enough, a viewer-request event does manage to trigger a function.
What am I doing wrong? How can I make the origin-request event work?
Here's my template:
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: Deployment of Lambda#edge functions
Parameters:
Stage:
Type: String
AllowedValues:
- staging
- production
Default: staging
Description: Stage that can be added to resource names
CodeBucket:
Type: String
Description: The S3 Bucket name for latest code upload
CodeKey:
Type: String
Description: The S3 Key for latest code upload
# Mappings:
# AliasMap:
# staging:
# Alias: "staging-app.achrafsouk.com"
# production:
# Alias: "app.achrafsouk.com"
Resources:
CFDistribution:
Type: AWS::CloudFront::Distribution
Properties:
DistributionConfig:
Enabled: true
Logging:
Bucket: XXX.s3.amazonaws.com
IncludeCookies: false
Prefix: !Sub ${Stage}
PriceClass: PriceClass_100
Comment:
!Sub "Lambda#Edge - ${Stage}"
# Aliases:
# - !FindInMap [AliasMap,
# Ref: Stage, Alias]
Origins:
- CustomOriginConfig:
OriginProtocolPolicy: https-only
DomainName: !Sub "${Stage}.defaultOrigin.example.com"
Id: defaultOrigin
OriginCustomHeaders:
- HeaderName: X-Node-Env
HeaderValue: !Sub "${Stage}"
- CustomOriginConfig:
OriginProtocolPolicy: https-only
DomainName: !Sub "${Stage}.another.example.com"
Id: rateLimitsOrigin
OriginCustomHeaders:
- HeaderName: X-Node-Env
HeaderValue: !Sub "${Stage}"
DefaultCacheBehavior:
ForwardedValues:
Headers:
- CloudFront-Viewer-Country
- X-Request-Host
- User-Agent
QueryString: true
ViewerProtocolPolicy: https-only
TargetOriginId: defaultOrigin
DefaultTTL: 0
MaxTTL: 0
LambdaFunctionAssociations:
# - EventType: viewer-request
# LambdaFunctionARN:
# Ref: ViewerRequestLambdaFunction.Version
- EventType: origin-request
LambdaFunctionARN:
Ref: OriginRequestLambdaFunction.Version
CacheBehaviors:
- ForwardedValues:
Headers:
- CloudFront-Viewer-Country
- X-Request-Host
- User-Agent
QueryString: true
ViewerProtocolPolicy: https-only
DefaultTTL: 60
MaxTTL: 60
TargetOriginId: rateLimitsOrigin
PathPattern: "/rate-limit*"
LambdaFunctionAssociations:
- EventType: origin-request
LambdaFunctionARN:
Ref: GetRateLimitLambdaFunction.Version
# ViewerRequestLambdaFunction:
# Type: AWS::Serverless::Function
# Properties:
# CodeUri:
# Bucket: !Sub ${CodeBucket}
# Key: !Sub ${CodeKey}
# Role: !GetAtt LambdaEdgeFunctionRole.Arn
# Runtime: nodejs10.x
# Handler: src/functions/viewerRequest.handler
# Timeout: 5
# AutoPublishAlias: live
OriginRequestLambdaFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri:
Bucket: !Sub ${CodeBucket}
Key: !Sub ${CodeKey}
Role: !GetAtt LambdaEdgeFunctionRole.Arn
Runtime: nodejs10.x
Handler: src/functions/originRequest.handler
Timeout: 5
AutoPublishAlias: live
GetRateLimitLambdaFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri:
Bucket: !Sub ${CodeBucket}
Key: !Sub ${CodeKey}
Role: !GetAtt LambdaEdgeFunctionRole.Arn
Runtime: nodejs10.x
Handler: src/functions/getRateLimit.handler
Timeout: 5
AutoPublishAlias: live
LambdaEdgeFunctionRole:
Type: "AWS::IAM::Role"
Properties:
Path: "/"
ManagedPolicyArns:
- "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Sid: "AllowLambdaServiceToAssumeRole"
Effect: "Allow"
Action:
- "sts:AssumeRole"
Principal:
Service:
- "lambda.amazonaws.com"
- "edgelambda.amazonaws.com"
Outputs:
# ViewerRequestLambdaFunctionVersion:
# Value: !Ref ViewerRequestLambdaFunction.Version
OriginRequestLambdaFunctionVersion:
Value: !Ref OriginRequestLambdaFunction.Version
GetRateLimitLambdaFunctionVersion:
Value: !Ref GetRateLimitLambdaFunction.Version
CFDistribution:
Description: Cloudfront Distribution Domain Name
Value: !GetAtt CFDistribution.DomainName
So, after painful hours of trial and error, the solution was staring me in the face:
the CloudFront distribution was emitting an error saying it could not connect to the origin, and it was right, because I had specified dummy domains in the origins definition of the CloudFront distribution.
My logic was that if my Lambda@Edge functions intercept the requests before they reach the origin, then it shouldn't matter whether the origin's domain is real or not.
As it turns out, that is only partially true, because it seems that:
the viewer-request event is triggered before CloudFront tries to connect to the origin, and therefore the origin can be fake;
the origin-request event is triggered only after CloudFront tries to connect to the origin, so you need to specify something real.
Hope my wasted hours will help someone else exploring the same path of reasoning.
In case it helps someone in the future with this problem: I noticed that origin-response events (which I had accidentally selected instead of origin-request) were (mostly) working for me, but there were still a large number of OriginConnectError entries in the logs, and the "Percentage of Viewer Requests by Result Type" report showed a high error rate.
It turns out it was only working at all because I had my origin connection set to https-only, while my origin (an empty S3 bucket website endpoint) only accepted HTTP connections, and for some reason the events were still being triggered.
Changing the origin connection to the correct http-only completely stopped things working for origin-response, until I also corrected the event type to origin-request.
So, in general, check that OriginProtocolPolicy is http-only if you're using an S3 website endpoint as your origin. If it's wrong, things might still appear to work sometimes, but not always, and you'll end up with high error rates.
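
For illustration, an S3 website-endpoint origin configured that way might look roughly like this inside the distribution above (the bucket name and region are placeholders):

        Origins:
          - Id: s3WebsiteOrigin
            DomainName: my-bucket.s3-website-us-east-1.amazonaws.com   # hypothetical S3 website endpoint
            CustomOriginConfig:
              OriginProtocolPolicy: http-only   # S3 website endpoints only accept HTTP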
