How can I set proxyTimeout in express-gateway?

Where exactly do I put proxyTimeout in gateway.config.yml?

You can set proxyTimeout on the proxy policy's action, under pipelines:
pipelines:
  - name: default
    apiEndpoints:
      - test
    policies:
      - proxy:
          - action:
              serviceEndpoint: testService
              proxyTimeout: 6000
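For context, here is a fuller sketch of a gateway.config.yml with that pipeline in place. The port, host pattern, and service URL are placeholders added for illustration, not part of the original answer:

```yaml
# Hypothetical minimal gateway.config.yml; port, host, and URL are placeholders.
http:
  port: 8080
apiEndpoints:
  test:
    host: '*'
serviceEndpoints:
  testService:
    url: 'http://localhost:3000'
policies:
  - proxy
pipelines:
  - name: default
    apiEndpoints:
      - test
    policies:
      - proxy:
          - action:
              serviceEndpoint: testService
              proxyTimeout: 6000   # milliseconds before the proxied request times out
```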

Related

When the Serverless WarmUp Plugin invokes my lambda, my lambda gives an error; when I invoke it manually, it works fine

serverless.yml:
service: LambdaColdStartRnD
configValidationMode: error
provider:
  name: aws
  runtime: nodejs14.x
  memorySize: 512
  timeout: 30
  stage: development
  region: ap-south-1
  lambdaHashingVersion: 20201221
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - 'lambda:InvokeFunction'
      Resource: '*'
plugins:
  - serverless-webpack
  - serverless-plugin-warmup
functions:
  api:
    handler: lambda.handler
    events:
      - http: ANY /
      - http: 'ANY /{proxy+}'
package:
  individually: true
  patterns:
    - '!node_modules/**'
custom:
  warmup:
    RNDwarmer:
      enabled: true
      role: IamRoleLambdaExecution
      events:
        - schedule: 'cron(0/2 * ? * * *)'
      concurrency: 5
      prewarm: true
  webpack:
    webpackConfig: 'webpack.config.js' # Name of webpack configuration file
    includeModules: false # Node modules configuration for packaging
    packager: 'npm' # Packager that will be used to package your external modules
    excludeFiles: src/**/*.test.js # Provide a glob for files to ignore.
I have defined a custom warmer that creates 5 containers, and it is initializing the function with 5 containers, but it can't invoke the function. Below is a screenshot of the X-Ray traces and logs.

serverless step function state machine not created on sls deploy

I am trying to set up a step function in my serverless application, but when I deploy the application to AWS the state machine for the step function is not created. Here is my serverless file. I have no idea what I am doing wrong, but there must be something in my setup causing the state machine creation to fail.
service: help-please
provider:
  name: aws
  versionFunctions: false
  runtime: nodejs12.x
  vpc:
    securityGroupIds:
      - sg
    subnetIds:
      - subnet
      - subnet
  stage: dev
  region: us-west-2
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - 'xray:PutTraceSegments'
        - 'xray:PutTelemetryRecords'
        - 'sns:*'
        - 'states:*'
      Resource: '*'
functions:
  upsertNewCustomerRecord:
    handler: .build/handler.upsertNewCustomerRecord
    iamRoleStatements:
      - Effect: 'Allow'
        Action:
          - logs:CreateLogGroup
          - logs:CreateLogStream
          - logs:PutLogEvents
          - logs:DescribeLogGroups
          - ec2:CreateNetworkInterface
          - ec2:DescribeNetworkInterfaces
          - ec2:DeleteNetworkInterface
          - cognito-idp:AdminInitiateAuth
          - cognito-idp:DescribeUserPool
          - cognito-idp:DescribeUserPoolClient
          - cognito-idp:ListUserPoolClients
          - cognito-idp:ListUserPools
          - 'xray:PutTraceSegments'
          - 'xray:PutTelemetryRecords'
        Resource: '*'
  sendNewCustomerEmail:
    handler: .build/handler.sendNewCustomerEmail
    iamRoleStatements:
      - Effect: 'Allow'
        Action:
          - logs:DescribeLogGroups
          - logs:CreateLogGroup
          - logs:CreateLogStream
          - logs:PutLogEvents
          - ec2:CreateNetworkInterface
          - ec2:DescribeNetworkInterfaces
          - ec2:DeleteNetworkInterface
          - cognito-idp:AdminInitiateAuth
          - cognito-idp:DescribeUserPool
          - cognito-idp:DescribeUserPoolClient
          - cognito-idp:ListUserPoolClients
          - cognito-idp:ListUserPools
          - 'xray:PutTraceSegments'
          - 'xray:PutTelemetryRecords'
        Resource: '*'
  upsertCognitoUser:
    handler: .build/handler.upsertCognitoUser
    iamRoleStatements:
      - Effect: 'Allow'
        Action:
          - logs:CreateLogGroup
          - logs:CreateLogStream
          - logs:PutLogEvents
          - logs:DescribeLogGroups
          - ec2:CreateNetworkInterface
          - ec2:DescribeNetworkInterfaces
          - ec2:DeleteNetworkInterface
          - cognito-idp:AdminInitiateAuth
          - cognito-idp:DescribeUserPool
          - cognito-idp:DescribeUserPoolClient
          - cognito-idp:ListUserPoolClients
          - cognito-idp:ListUserPools
          - 'xray:PutTraceSegments'
          - 'xray:PutTelemetryRecords'
        Resource: '*'
stepFunctions:
  stateMachines:
    signupstepfunc:
      definition:
        Comment: 'Sign up step function'
        StartAt: UpsertNewCustomerRecord
        States:
          UpsertNewCustomerRecord:
            Type: Task
            Resource: 'arn'
            Next: SendNewCustomerEmail
          SendNewCustomerEmail:
            Type: Task
            Resource: 'arn'
            Next: UpsertCognitoUser
          UpsertCognitoUser:
            Type: Task
            Resource: 'arn'
            End: true
plugins:
  - serverless-plugin-typescript
  - serverless-offline
  - serverless-iam-roles-per-function
  - serverless-plugin-tracing
  - serverless-step-functions
  - serverless-pseudo-parameters
  - serverless-prune-plugin
Going by the YAML file provided, I think the issue might be the indentation of your function definition:
functions:
  upsertNewCustomerRecord:
  handler: .build/handler.upsertNewCustomerRecord
  iamRoleStatements:
must be replaced by this:
functions:
  upsertNewCustomerRecord:
    handler: .build/handler.upsertNewCustomerRecord
    iamRoleStatements:
Can you make this change and try once?
Well, unfortunately I figured out that I fat-fingered a backslash into my buildspec file, and that's what caused this. Be wary: when deploying with Serverless your build won't fail, so you'll have to dig into every file just to figure out what's going on. Just something to keep in mind.
In my case, I simply forgot to put End: true on the last state of the step function. Maybe this will help.
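To illustrate that last point, here is a minimal hypothetical States block (state names are placeholders, and 'arn' stands in for a real Lambda ARN, as in the question): every branch must terminate in a state that either has a Next or declares End: true.

```yaml
# Hypothetical state machine fragment; state names and ARNs are placeholders.
States:
  FirstStep:
    Type: Task
    Resource: 'arn'
    Next: LastStep
  LastStep:
    Type: Task
    Resource: 'arn'
    End: true   # without a terminal state, the definition fails validation
```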

Lambda@Edge not triggered by an origin-request event

I'm trying to configure Lambda@Edge functions using CloudFormation. After deploying the template everything looks fine in the console; however, the lambda functions listening for origin-request events are not being triggered.
Strangely enough, a viewer-request event does manage to trigger a function.
What am I doing wrong? How can I make the origin-request event work?
Here's my template:
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: Deployment of Lambda@Edge functions
Parameters:
  Stage:
    Type: String
    AllowedValues:
      - staging
      - production
    Default: staging
    Description: Stage that can be added to resource names
  CodeBucket:
    Type: String
    Description: The S3 Bucket name for latest code upload
  CodeKey:
    Type: String
    Description: The S3 Key for latest code upload
# Mappings:
#   AliasMap:
#     staging:
#       Alias: "staging-app.achrafsouk.com"
#     production:
#       Alias: "app.achrafsouk.com"
Resources:
  CFDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Logging:
          Bucket: XXX.s3.amazonaws.com
          IncludeCookies: false
          Prefix: !Sub ${Stage}
        PriceClass: PriceClass_100
        Comment: !Sub "Lambda@Edge - ${Stage}"
        # Aliases:
        #   - !FindInMap [AliasMap, Ref: Stage, Alias]
        Origins:
          - CustomOriginConfig:
              OriginProtocolPolicy: https-only
            DomainName: !Sub "${Stage}.defaultOrigin.example.com"
            Id: defaultOrigin
            OriginCustomHeaders:
              - HeaderName: X-Node-Env
                HeaderValue: !Sub "${Stage}"
          - CustomOriginConfig:
              OriginProtocolPolicy: https-only
            DomainName: !Sub "${Stage}.another.example.com"
            Id: rateLimitsOrigin
            OriginCustomHeaders:
              - HeaderName: X-Node-Env
                HeaderValue: !Sub "${Stage}"
        DefaultCacheBehavior:
          ForwardedValues:
            Headers:
              - CloudFront-Viewer-Country
              - X-Request-Host
              - User-Agent
            QueryString: true
          ViewerProtocolPolicy: https-only
          TargetOriginId: defaultOrigin
          DefaultTTL: 0
          MaxTTL: 0
          LambdaFunctionAssociations:
            # - EventType: viewer-request
            #   LambdaFunctionARN:
            #     Ref: ViewerRequestLambdaFunction.Version
            - EventType: origin-request
              LambdaFunctionARN:
                Ref: OriginRequestLambdaFunction.Version
        CacheBehaviors:
          - ForwardedValues:
              Headers:
                - CloudFront-Viewer-Country
                - X-Request-Host
                - User-Agent
              QueryString: true
            ViewerProtocolPolicy: https-only
            DefaultTTL: 60
            MaxTTL: 60
            TargetOriginId: rateLimitsOrigin
            PathPattern: "/rate-limit*"
            LambdaFunctionAssociations:
              - EventType: origin-request
                LambdaFunctionARN:
                  Ref: GetRateLimitLambdaFunction.Version
  # ViewerRequestLambdaFunction:
  #   Type: AWS::Serverless::Function
  #   Properties:
  #     CodeUri:
  #       Bucket: !Sub ${CodeBucket}
  #       Key: !Sub ${CodeKey}
  #     Role: !GetAtt LambdaEdgeFunctionRole.Arn
  #     Runtime: nodejs10.x
  #     Handler: src/functions/viewerRequest.handler
  #     Timeout: 5
  #     AutoPublishAlias: live
  OriginRequestLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri:
        Bucket: !Sub ${CodeBucket}
        Key: !Sub ${CodeKey}
      Role: !GetAtt LambdaEdgeFunctionRole.Arn
      Runtime: nodejs10.x
      Handler: src/functions/originRequest.handler
      Timeout: 5
      AutoPublishAlias: live
  GetRateLimitLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri:
        Bucket: !Sub ${CodeBucket}
        Key: !Sub ${CodeKey}
      Role: !GetAtt LambdaEdgeFunctionRole.Arn
      Runtime: nodejs10.x
      Handler: src/functions/getRateLimit.handler
      Timeout: 5
      AutoPublishAlias: live
  LambdaEdgeFunctionRole:
    Type: "AWS::IAM::Role"
    Properties:
      Path: "/"
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: "AllowLambdaServiceToAssumeRole"
            Effect: "Allow"
            Action:
              - "sts:AssumeRole"
            Principal:
              Service:
                - "lambda.amazonaws.com"
                - "edgelambda.amazonaws.com"
Outputs:
  # ViewerRequestLambdaFunctionVersion:
  #   Value: !Ref ViewerRequestLambdaFunction.Version
  OriginRequestLambdaFunctionVersion:
    Value: !Ref OriginRequestLambdaFunction.Version
  GetRateLimitLambdaFunctionVersion:
    Value: !Ref GetRateLimitLambdaFunction.Version
  CFDistribution:
    Description: Cloudfront Distribution Domain Name
    Value: !GetAtt CFDistribution.DomainName
So, after painful hours of trial and error, the solution was staring me in the face:
The CloudFront distribution was emitting an error saying it could not connect to the origin, and it was right, because I had specified dummy domains in the origins definition of the CloudFront distribution.
My logic was that if my Lambda@Edge functions intercept the requests before they reach the origin, it shouldn't matter whether the origin's domain is real or not.
As it turns out, that's only partially true, because it seems that:
the viewer-request event is triggered before CloudFront tries to connect to the origin, and therefore the origin can be fake;
the origin-request event is triggered only after trying to connect to the origin, so you need to specify something real.
Hope my wasted hours will help someone else exploring the same path of reasoning.
In case it helps someone in the future with this problem: I noticed that origin-response events (which I had accidentally selected instead of origin-request) were mostly working for me, but there were still a large number of OriginConnectError entries in the logs, and the "Percentage of Viewer Requests by Result Type" report was showing a high error rate.
It turns out it was only working at all because I had my origin connection set to https-only while my origin (an empty S3 bucket) only accepted http connections, and for some reason the events were still being triggered.
Changing the origin connection to the correct http-only stopped things working entirely for origin-response, until I also corrected the event type to origin-request.
So, in general, check that OriginProtocolPolicy is http-only if you're using an S3 bucket origin. If it's wrong, things might still appear to work sometimes, but not always, and you'll end up with high error rates.
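As a sketch of what that looks like in a CloudFormation template similar to the one above — the bucket name and region here are placeholders, assuming an S3 static-website endpoint as the origin:

```yaml
# Hypothetical origin entry; bucket name and region are placeholders.
Origins:
  - Id: s3WebsiteOrigin
    DomainName: my-bucket.s3-website-us-east-1.amazonaws.com
    CustomOriginConfig:
      OriginProtocolPolicy: http-only   # S3 website endpoints only speak HTTP
```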

How to deploy Express Gateway to Azure

I am able to run an express gateway Docker container and a Redis Docker container locally and would like to deploy this to Azure. How do I go about it?
This is my docker-compose.yml file:
version: '2'
services:
  eg_redis:
    image: redis
    hostname: redis
    container_name: redisdocker
    ports:
      - "6379:6379"
    networks:
      gateway:
        aliases:
          - redis
  express_gateway:
    build: .
    container_name: egdocker
    ports:
      - "9090:9090"
      - "8443:8443"
      - "9876:9876"
    volumes:
      - ./system.config.yml:/usr/src/app/config/system.config.yml
      - ./gateway.config.yml:/usr/src/app/config/gateway.config.yml
    networks:
      - gateway
networks:
  gateway:
And this is my system.config.yml file:
# Core
db:
  redis:
    host: 'redis'
    port: 6379
    namespace: EG
# plugins:
#   express-gateway-plugin-example:
#     param1: 'param from system.config'
crypto:
  cipherKey: sensitiveKey
  algorithm: aes256
  saltRounds: 10
# OAuth2 Settings
session:
  secret: keyboard cat
  resave: false
  saveUninitialized: false
accessTokens:
  timeToExpiry: 7200000
refreshTokens:
  timeToExpiry: 7200000
authorizationCodes:
  timeToExpiry: 300000
And this is my gateway.config.yml file:
http:
  port: 9090
admin:
  port: 9876
  hostname: 0.0.0.0
apiEndpoints:
  # see: http://www.express-gateway.io/docs/configuration/gateway.config.yml/apiEndpoints
  api:
    host: '*'
    paths: '/ip'
    methods: ["POST"]
serviceEndpoints:
  # see: http://www.express-gateway.io/docs/configuration/gateway.config.yml/serviceEndpoints
  httpbin:
    url: 'https://httpbin.org/'
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
  - request-transformer
pipelines:
  # see: https://www.express-gateway.io/docs/configuration/gateway.config.yml/pipelines
  basic:
    apiEndpoints:
      - api
    policies:
      - request-transformer:
          - action:
              body:
                add:
                  payload: "'Test'"
              headers:
                remove: ["'Authorization'"]
                add:
                  Authorization: "'new key here'"
      - key-auth:
      - proxy:
          - action:
              serviceEndpoint: httpbin
              changeOrigin: true
Mounting the YAML files and then hitting the /ip endpoint is where I am stuck.
According to the configuration file you've posted, I'd say you need to instruct Express Gateway to listen on 0.0.0.0 when it runs inside a container; otherwise it won't be able to listen for external connections.
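A minimal sketch of that change in gateway.config.yml, using the ports from the question:

```yaml
http:
  port: 9090
  hostname: 0.0.0.0   # bind to all interfaces so the container accepts external connections
admin:
  port: 9876
  hostname: 0.0.0.0
```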

Environment Variables with Serverless and AWS Lambda

I am learning the serverless framework and I'm building a simple login system.
Here is my serverless.yml file:
service: lms-auth
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-south-1
  environment:
    MONGODB_URI: $(file(../env.yml):MONOGDB_URI)
    JWT_SECRET: $(file(../env.yml):JWT_SECRET)
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
As you can see, I have two environment variables, and both of them reference a separate file in the same root folder.
Here is that env.yml file:
MONOGDB_URI: <MY_MONGO_DB_URI>
JWT_SECRET: LmS_JWt_secREt_auth_PasSWoRds
When I do sls deploy, I see that both variables are logged as null. The environment variables aren't sent to Lambda.
How can I fix this?
Also, I'm currently adding env.yml to .gitignore to keep the values out of version control. Is there a better way of hiding sensitive data?
I would do something like this to help you out with the syntax:
service: lms-auth
custom: ${file(env.yml)}
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-south-1
  environment:
    MONGODB_URI: ${self:custom.mongodb_uri}
    JWT_SECRET: ${self:custom.jwt_secret}
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
Then in your env.yml you can do:
mongodb_uri: MY_MONGO_DB_URI
jwt_secret: LmS_JWt_secREt_auth_PasSWoRds
Environment variables
1. Add useDotenv: true to your .yml file.
2. Reference your variables like this: ${env:VARIABLE_NAME}
3. Create a file called .env.dev and write the variables there (you can also add a .env.prod, but then you have to change the stage inside your .yml file).
Example:
service: lms-auth
useDotenv: true
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  environment:
    MONGODB_URI: ${env:MONOGDB_URI}
    JWT_SECRET: ${env:JWT_SECRET}
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
.env.dev
MONOGDB_URI = The URI Value
JWT_SECRET = The JWT Secret Value
I ended up solving it. I had set up my DynamoDB table in the AWS us-west region. I reinitialized it in us-east-2 and reset the region under 'provider' within the .yml file.
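That last fix amounts to something like the following sketch; us-east-2 is the region named in the answer, the rest mirrors the question's provider block:

```yaml
provider:
  name: aws
  region: us-east-2   # must match the region where the DynamoDB table lives
```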
