API Gateway refuses to connect to S3/DynamoDB using LocalStack and Serverless - node.js

I'm developing a LocalStack application where I need to connect an S3 bucket and DynamoDB to API Gateway. When I invoke my Lambda from the command line using
serverless invoke local -f helloagain
it works fine, but when I invoke it through Postman via API Gateway, e.g. http://localhost:4566/restapis/82wh1mpf11/local/_user_request_/helloagain, it gives an internal server error.
Read operations on S3 and DynamoDB are working fine, but dumping data into these resources has become a tedious task.
Sharing my code snippets below.
This is the docker-compose file:
version: "3.1"
services:
localstack:
image: localstack/localstack:latest
environment:
- AWS_DEFAULT_REGION=us-east-1
- EDGE_PORT=4566
- SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53
ports:
- "4566-4597:4566-4597"
volumes:
- "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
This is the serverless.yml:
service: localstack-lambda

plugins:
  - serverless-localstack

custom:
  localstack:
    debug: true
    stages:
      - local
      - dev
    endpointFile: localstack_endpoints.json

# frameworkVersion: "2"

provider:
  name: aws
  runtime: nodejs12.x

# functions are pretty important
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: any
  helloagain:
    handler: handler.helloagain
    events:
      - http:
          path: helloagain
          method: any
  createbucket:
    handler: s3handler.createbucket
    events:
      - http:
          path: createbucket
          method: any
  createDDB:
    handler: ddb.createDDB
    events:
      - http:
          path: createDDB
          method: any
  listTables:
    handler: ddb.listTables
    events:
      - http:
          path: listTables
          method: any
This is the helloagain Lambda script:
const aws = require('aws-sdk');
const fs = require('fs');

const s3 = new aws.S3({
  apiVersion: '2006-03-01',
  endpoint: `http://localhost:4566`, // These two lines are
  s3ForcePathStyle: true,            // only needed for LocalStack.
});

module.exports.helloagain = async (event) => {
  const file_buffer = fs.readFileSync("demo2.txt");
  const params = {
    Bucket: "newbucket",
    Key: "demo2.txt",
    Body: file_buffer
  };
  const res = await s3.upload(params).promise();
  return {
    statusCode: 200,
    body: JSON.stringify({
      res
    })
  };
};
I've already checked, by invoking this Lambda through the command line, that:
- the file demo2.txt exists
- the bucket exists
I'm getting this error in the console:
localstack.services.awslambda.lambda_executors.InvocationException: Lambda process returned with error. Result: {"errorType":"NetworkingError","errorMessage":"connect ECONNREFUSED 127.0.0.1:4566"}.
and this in Postman (the response screenshot isn't reproduced here).

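For reference, connect ECONNREFUSED 127.0.0.1:4566 suggests the Lambda resolves localhost inside its own execution container, where nothing listens on port 4566, rather than at the LocalStack edge. Below is a minimal sketch of an endpoint setup that works both in-container and with serverless invoke local, assuming LocalStack injects the LOCALSTACK_HOSTNAME environment variable into its Lambda containers (as classic LocalStack setups like the one above do):

const aws = require('aws-sdk');

// LOCALSTACK_HOSTNAME is assumed to be set by LocalStack inside the Lambda
// container; it is absent when invoking locally on the host, so fall back
// to localhost. EDGE_PORT defaults to 4566, as in the docker-compose file.
const host = process.env.LOCALSTACK_HOSTNAME || 'localhost';
const port = process.env.EDGE_PORT || '4566';

const s3 = new aws.S3({
  apiVersion: '2006-03-01',
  endpoint: `http://${host}:${port}`,
  s3ForcePathStyle: true, // still required for LocalStack
});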
Related

Localstack lambda trying to reach AWS outside container

I have a LocalStack image:
version: '3'
services:
  localstack:
    image: localstack:latest
    ports:
      - 4566:4566
    environment:
      - SERVICES=ssm
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
      - LOCALSTACK_HOSTNAME=localstack
      - LAMBDA_EXECUTOR=local
      - LAMBDA_REMOTE_DOCKER=true
    volumes:
      - './.localstack:/tmp/localstack'
      - '/var/run/docker.sock:/var/run/docker.sock'
I created a Lambda which runs inside the Docker container:
service: service-trigger-action-runner
provider:
  name: aws
  region: eu-central-1
  runtime: nodejs12.x
plugins:
  - serverless-offline
custom:
  defaultStage: local
functions:
  trigger_action_runner:
    handler: ../src/trigger_action_runner.handler
    environment:
      URL_SSM: 'http://localhost:4566'
      AWS_ACCESS_KEY_ID: 'ABCDEFGH123456'
      AWS_SECRET_ACCESS_KEY: 'key'
Launching it all in a Node.js integration test, I'm sending an event in order to trigger my Lambda; this part works successfully:
const lambda = new Lambda({
  apiVersion: '2031',
  endpoint: 'http://localhost:3000'
})

const params = {
  FunctionName: 'service-trigger-action-runner-dev-trigger_action_runner',
  InvocationType: 'RequestResponse',
  Payload: JSON.stringify({ eventTypeAliasName: "fake_event_alias" })
}

await lambda.invoke(params).promise()
Inside my handler script, I'm creating a Lambda alias, and my issue is that this call is not staying inside my container; it's not mocked:
// lambdaAwsClient is presumably an AWS.Lambda instance created elsewhere,
// without a LocalStack endpoint (not shown in the original snippet)
exports.handler = async (event) => {
  const params = {
    Description: 'alias name',
    FunctionName: event.eventTypeAlias,
    FunctionVersion: '007',
    Name: event.eventTypeAliasName
  }
  console.log(`Creating alias with params=${JSON.stringify(params)}`)
  return await lambdaAwsClient.createAlias(params).promise()
}
The error below is thrown:
UnrecognizedClientException: The security token included in the request is invalid.
How can I keep the createAlias call inside the container?
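One way to keep the call inside the container, sketched under the assumption that lambdaAwsClient is built in the handler file: give the Lambda client an explicit LocalStack endpoint (for instance the URL_SSM value already wired up in serverless.yml) and the dummy credentials, so the SDK never reaches out to real AWS:

const AWS = require('aws-sdk');

// hypothetical client setup: reuse the endpoint and dummy credentials
// already defined in serverless.yml so createAlias targets LocalStack
const lambdaAwsClient = new AWS.Lambda({
  endpoint: process.env.URL_SSM || 'http://localhost:4566',
  region: 'eu-central-1',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});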

Using DynamoDB in local Docker (localhost:8000) with a serverless-framework serverless-offline application running on localhost:4500

I'm looking to add state to a serverless-framework Node application running locally. I came across the official DynamoDB Docker image, and I'd like to use the Serverless Framework with this DynamoDB instance running in Docker, exposed at localhost:8000, without using the sls install dynamodb version.
I have tried using it normally with the Node.js aws-sdk with the endpoint and region configured for local. The new user table is already created and the database is accessible via aws-cli --endpoint-url http://localhost:8000, but I can't access the DynamoDB instance through the Node.js aws-sdk.
// server.js
const AWS = require('aws-sdk');
AWS.config.update({
  region: 'localhost',
  endpoint: "http://127.0.0.1:8000"
});
const ddb = new AWS.DynamoDB.DocumentClient();

// tableName and email are assumed to be defined elsewhere in the file
const params = {
  "TableName": tableName,
  "IndexName": "email-index",
  "KeyConditions": {
    "email": {
      "ComparisonOperator": "EQ",
      "AttributeValueList": [{"S": email}]
    }
  }
};
ddb.query(params, (err, data) => {
  console.log('query', data); // returns query null
});
//handler.js
const server = require('./server');
const http = require('serverless-http');
module.exports.client = http(server);
# serverless.yml
provider:
  name: aws
  runtime: nodejs10.16.0
  region: ca-central-1
  profile: default
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:CreateTable
        - dynamodb:ListTables
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:ddblocal:000000000000:table/user"
plugins:
  - serverless-offline
functions:
  client:
    handler: handler.client
    events:
      - http: GET /
      - http: 'GET /{param+}'
      - http:
          path: /signin
          method: post
          cors: true
      - http:
          path: /signup
          method: post
          cors: true
I expected to get a response from the DynamoDB instance in local Docker, but the aws-sdk cannot connect to it. The HTTP events above go to Express.js, which works well.
Try updating the client configuration when running locally:
let dynamoDb = new AWS.DynamoDB.DocumentClient();
if (process.env.STAGE === 'dev') dynamoDb = new AWS.DynamoDB.DocumentClient({
  region: 'localhost',
  endpoint: 'http://localhost:8000',
  accessKeyId: 'DEFAULT_ACCESS_KEY',
  secretAccessKey: 'DEFAULT_SECRET'
});
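Separately, note that DocumentClient marshals plain JavaScript values on its own, so the {"S": email} AttributeValue syntax in the query above won't match anything even once the endpoint is reachable. A sketch of the same query in DocumentClient style, reusing the original tableName and email variables:

// hypothetical rewrite of the query for DocumentClient, which expects
// native JS values rather than { S: ... } AttributeValue wrappers
const params = {
  TableName: tableName,
  IndexName: 'email-index',
  KeyConditionExpression: 'email = :email',
  ExpressionAttributeValues: { ':email': email }
};

dynamoDb.query(params, (err, data) => {
  if (err) console.error('query failed', err);
  else console.log('query', data.Items);
});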

How to add AWS user permissions using serverless?

I have created a user in the AWS console with access only to the Lambda service.
My question is: using the Serverless Framework, in my serverless.yaml, is it possible to add S3 full access to my user, and possibly any other service?
Thank you.
handler.js
'use strict';
const aws = require('aws-sdk');
const s3 = new aws.S3({ apiVersion: '2006-03-01' });

module.exports.helloWorld = (event, context, callback) => {
  const params = {};
  s3.listBuckets(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else console.log(data);
  });

  const response = {
    statusCode: 200,
    message: JSON.stringify({message: 'Success!'})
  };
  callback(null, response);
};
serverless.yaml
provider:
  name: aws
  runtime: nodejs8.10
  region: eu-blah-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:ListBucket"
        - "s3:PutObject"
        - "s3:GetObject"
      Resource: "arn:aws:s3:::examplebucket/*"
functions:
  helloWorld:
    handler: handler.helloWorld
    events:
      - http:
          path: hello-world
          method: get
          cors: true
If you are referring to the permissions that the Lambda function has at execution time, after it has been deployed by the Serverless Framework, then you add role permissions in the serverless.yaml file, within the provider section.
Here is an example of permissions for the Lambda to talk to S3, execute other Lambdas, and send emails with SES:
iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "s3:PutObject"
      - "s3:DeleteObject"
    Resource: arn:aws:s3:::${self:custom.s3WwwBucket}/content/pages/*
  - Effect: Allow
    Action:
      - lambda:InvokeFunction
      - lambda:InvokeAsync
    Resource: arn:aws:lambda:${self:custom.region}:*:function:${self:service}-${opt:stage}-*
  - Effect: "Allow"
    Action:
      - "ses:SendEmail"
      - "ses:SendRawEmail"
    Resource: "arn:aws:ses:eu-west-1:01234567891234:identity/noreply@example.com"

How to call an AWS Step Function using the definitions in the serverless-step-functions plugin?

I'm using Serverless Framework to create my Lambda functions and the serverless-step-functions plugin to define my step functions.
Is it possible to call a step function directly from one of the Lambda functions, using the name defined in the serverless.yml file?
I was trying to solve the same problem, and this question and the self-answer were very helpful. However, I want to add another answer with more details and a working example to help future readers.
There are two things that you may need:
1. Start a State Machine.
2. Invoke one specific function from a State Machine (usually for testing purposes).
The following demo covers both cases.
First, we need to configure the serverless.yml file to declare the State Machine, the Lambda functions, and the correct IAM permissions.
service: test-state-machine

provider:
  name: aws
  runtime: nodejs4.3
  region: us-east-1
  stage: dev
  environment:
    AWS_ACCOUNT: 1234567890 # use your own AWS ACCOUNT number here
    # define the ARN of the State Machine
    STEP_FUNCTION_ARN: "arn:aws:states:${self:provider.region}:${self:provider.environment.AWS_ACCOUNT}:stateMachine:${self:service}-${self:provider.stage}-lambdaStateMachine"
    # define the ARN of the function step that we want to invoke
    FUNCTION_ARN: "arn:aws:lambda:${self:provider.region}:${self:provider.environment.AWS_ACCOUNT}:function:${self:service}-${self:provider.stage}-stateMachineFirstStep"

functions:
  # define the Lambda function that will start the State Machine
  lambdaStartStateMachine:
    handler: handler.lambdaStartStateMachine
    role: stateMachine # we'll define later in this file
  # define the Lambda function that will execute an arbitrary step
  lambdaInvokeSpecificFuncFromStateMachine:
    handler: handler.lambdaInvokeSpecificFuncFromStateMachine
    role: specificFunction # we'll define later in this file
  stateMachineFirstStep:
    handler: handler.stateMachineFirstStep

# define the State Machine
stepFunctions:
  stateMachines:
    lambdaStateMachine:
      Comment: "A Hello World example"
      StartAt: firstStep
      States:
        firstStep:
          Type: Task
          Resource: stateMachineFirstStep
          End: true

# define the IAM permissions of our Lambda functions
resources:
  Resources:
    stateMachine:
      Type: AWS::IAM::Role
      Properties:
        RoleName: stateMachine
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: stateMachine
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: "Allow"
                  Action:
                    - "states:StartExecution"
                  Resource: "${self:provider.environment.STEP_FUNCTION_ARN}"
    specificFunction:
      Type: AWS::IAM::Role
      Properties:
        RoleName: specificFunction
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: specificFunction
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: "Allow"
                  Action:
                    - "lambda:InvokeFunction"
                  Resource: "${self:provider.environment.FUNCTION_ARN}"

package:
  exclude:
    - node_modules/**
    - .serverless/**

plugins:
  - serverless-step-functions
Define the Lambda functions inside the handler.js file.
const AWS = require('aws-sdk');

module.exports.lambdaStartStateMachine = (event, context, callback) => {
  const stepfunctions = new AWS.StepFunctions();
  const params = {
    stateMachineArn: process.env.STEP_FUNCTION_ARN,
    input: JSON.stringify({ "msg": "some input" })
  };
  // start a state machine
  stepfunctions.startExecution(params, (err, data) => {
    if (err) {
      callback(err, null);
      return;
    }
    const response = {
      statusCode: 200,
      body: JSON.stringify({
        message: 'started state machine',
        result: data
      })
    };
    callback(null, response);
  });
};

module.exports.lambdaInvokeSpecificFuncFromStateMachine = (event, context, callback) => {
  const lambda = new AWS.Lambda();
  const params = {
    FunctionName: process.env.FUNCTION_ARN,
    Payload: JSON.stringify({ message: 'invoked specific function' })
  };
  // invoke a specific function of a state machine
  lambda.invoke(params, (err, data) => {
    if (err) {
      callback(err, null);
      return;
    }
    const response = {
      statusCode: 200,
      body: JSON.stringify({
        message: 'invoke specific function of a state machine',
        result: data
      })
    };
    callback(null, response);
  });
};

module.exports.stateMachineFirstStep = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'state machine first step',
      input: event
    }),
  };
  callback(null, response);
};
Deploy by executing:
serverless deploy stepf
serverless deploy
Test using:
serverless invoke -f lambdaStartStateMachine
serverless invoke -f lambdaInvokeSpecificFuncFromStateMachine
Solved using serverless environment variables:
environment:
  MYFUNCTION_ARN: "arn:aws:states:${self:provider.region}:${self:provider.environment.AWS_ACCOUNT}:stateMachine:${self:service}-${self:provider.stage}-myFunction"
In the function:
var params = {
  stateMachineArn: process.env.MYFUNCTION_ARN
};
Here is how you solve it nowadays.
In your serverless.yml, define your stepFunctions and also Outputs:
# define your step functions
stepFunctions:
  stateMachines:
    myStateMachine:
      name: stateMachineSample
      events:
        - http:
            path: my-trigger
            method: GET

# make it match your step functions definition
Outputs:
  myStateMachine:
    Value:
      Ref: StateMachineSample
Then you can set your state machine ARN as an environment variable using ${self:resources.Outputs.myStateMachine.Value}.

Lambda: Amazon S3 direct upload error: signature does not match

I want to upload image files to an AWS S3 bucket using pre-signed URLs, but I'm getting the error shown in the screenshot. I've followed the post from this page: s3 direct file upload. I would like to know what mistake I'm making, whether this is a server-side issue, or whether I should use a different approach for making the PUT request to the pre-signed URL. Thanks ahead.
My serverless.yml
service: my-service-api
provider:
  name: aws
  runtime: nodejs4.3
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "dynamodb:*"
      Resource: "*"
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: "arn:aws:s3:::profile-images/*"
custom:
  globalResultTtlInSeconds: 1
package:
  individually: true
  include:
    - node_modules/mysql/**
    - node_modules/bluebird/**
    - node_modules/joi/**
  exclude:
    - .git/**
    - .bin/**
    - tmp/**
    - api/**
    - node_modules/**
    - utils/**
    - package.json
    - npm_link.sh
    - templates.yml
functions:
  profiles:
    handler: api/profiles/handler.profiles
    events:
      - http:
          method: POST
          path: api/profiles/uploadURL
          cors: true
          integration: lambda
          request: ${file(./templates.yml):request}
          authorizer:
            arn: arn:aws:lambda:us-east-1:000000000000:function:customAuthorizer
            resultTtlInSeconds: ${self:custom.globalResultTtlInSeconds}
            identitySource: method.request.header.Authorization
    package:
      include:
        - api/profiles/**
        - node_modules/node-uuid/**
        - node_modules/jsonwebtoken/**
        - node_modules/rand-token/**
resources:
  Resources:
    UploadBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: profile-images
        AccessControl: PublicRead
        CorsConfiguration:
          CorsRules:
            - AllowedMethods:
                - GET
                - PUT
                - POST
                - HEAD
              AllowedOrigins:
                - "*"
              AllowedHeaders:
                - "*"
    IamPolicyInvokeLambdaFunction:
      Type: AWS::IAM::Policy
      Properties:
        PolicyName: "lambda-invoke-function"
        Roles:
          - {"Ref": "IamRoleLambdaExecution"}
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - "lambda:InvokeFunction"
              Resource: "*"
My handler file
// (the snippet assumes aws-sdk is required and an S3 client is created
// earlier in the file, e.g. const s3 = new aws.S3();)
var s3Params = {
  Bucket: 'profile-images',
  Key: image.name,
  ACL: 'public-read'
};

s3.getSignedUrl('putObject', s3Params, function (err, url) {
  if (err) {
    console.log('presignedURL err:', err);
    context.succeed({ error: err });
  } else {
    console.log('presignedURL: ', url);
    context.succeed({ uploadURL: url });
  }
});
After spending more time on this issue, I realized that this was not a problem on the server side; the problem was in how the request was made. I needed to set headers for my PUT request, because when AWS S3 receives a request it checks the request's signature against the headers. So if you set 'ContentType' or 'ACL' while creating the pre-signed URL, you have to provide the 'Content-Type' and 'x-amz-acl' headers in your request.
This is my updated 's3Params'
var s3Params = {
  Bucket: 'profile-images',
  Key: image.name,
  ACL: 'public-read',
  ContentType: image.type
};
And this is my request (originally a screenshot, not reproduced here).
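As a rough sketch of its essentials, assuming uploadURL is the pre-signed URL returned by the Lambda and file is the image being uploaded (both names are illustrative):

// hypothetical client-side upload; uploadURL and file are assumed to come
// from the Lambda response and a file picker respectively
const res = await fetch(uploadURL, {
  method: 'PUT',
  headers: {
    'Content-Type': file.type,  // must match ContentType in s3Params
    'x-amz-acl': 'public-read'  // must match ACL in s3Params
  },
  body: file
});
if (!res.ok) throw new Error(`upload failed: ${res.status}`);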
Lastly, I got some help from this post: set headers for presigned PUT s3 requests.
