Using AWS SDK from Lambda running in VPC - node.js

I have a simple Lambda function as follows:
var AWS = require("aws-sdk");

exports.handler = (event, context, callback) => {
    var ec2 = new AWS.EC2({ region: 'us-east-1' });
    return ec2.describeRegions({}).promise()
        .then(function (regionResponse) {
            console.log(regionResponse.Regions);
            callback(null, regionResponse.Regions);
        })
        .catch(function (err) {
            console.log({ "error": err });
            callback(err, null);
        });
};
I can run this function outside of a VPC successfully.
I create a VPC using the VPC wizard, with a single public subnet and an Internet Gateway. I place the function in the VPC and give it an execution role with Lambda VPC Execution rights.
It now fails by timing out. I have set the timeout to 10 seconds (normal execution takes about 1 second).
What am I missing from my config that prevents the function from accessing the AWS SDK inside the VPC?

One note on style: you are mixing return of the promise chain with the callback parameter. The callback inside .then() does still run, so the return statement itself is not the problem; pick one style. The real issue is the networking.
A Lambda function attached to a VPC is never assigned a public IP address, so even a public subnet will not give it Internet access. Run it in a private subnet whose route table sends 0.0.0.0/0 to a NAT Gateway; without that, it cannot reach the Internet, and therefore cannot reach the AWS APIs.
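To verify the routing, a small diagnostic sketch along these lines can help (subnet-xxxxxxxx is a placeholder for the subnet the function runs in). It prints each route for the subnet so you can check that 0.0.0.0/0 targets a NAT gateway (nat-...) rather than an Internet gateway (igw-...):

var AWS = require("aws-sdk");
var ec2 = new AWS.EC2({ region: 'us-east-1' });

// List the route table associated with the Lambda's subnet.
// If nothing prints, the subnet has no explicit association and uses the VPC's main route table.
ec2.describeRouteTables({
    Filters: [{ Name: 'association.subnet-id', Values: ['subnet-xxxxxxxx'] }] // placeholder subnet ID
}).promise()
    .then(function (res) {
        res.RouteTables.forEach(function (table) {
            table.Routes.forEach(function (route) {
                // For Internet access from a Lambda, 0.0.0.0/0 should target a nat-... ID.
                console.log(route.DestinationCidrBlock, '->', route.NatGatewayId || route.GatewayId);
            });
        });
    })
    .catch(function (err) { console.log({ "error": err }); });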

Related

Invoking a Lambda to message connected websockets

Is it possible to set up a stand-alone WebSocket service with Lambdas that can be invoked from Lambdas in separate services?
I've got an existing system that does things and then attempts to broadcast an update to connected clients by invoking a Lambda in a WebSocket service like so:
const lambda = new Lambda({
    region: 'us-east-1',
    endpoint: 'https://lambda.us-east-1.amazonaws.com'
});

lambda.invoke({
    FunctionName: `dev-functionName`,
    Payload: JSON.stringify({payload, clientGroup}),
    InvocationType: 'Event'
});
This triggers the correct Lambda, which then:
1. gets the connection IDs from a DynamoDB table
2. sets domainName to {api-id}.execute-api.us-east-1.amazonaws.com
3. attempts to message the connections like so:
const ws = create(`https://${domainName}/dev/#connections/${ConnectionId}`);
// Also tried
//const ws = create(`https://${domainName}/dev`);
//const ws = create(`${domainName}/dev`);
//const ws = create(`${domainName}`);

const params: any = {
    Data: JSON.stringify(payload),
    ConnectionId
};

try {
    return ws.postToConnection(params).promise();
} catch (err) {
    if (err.statusCode == 410) {
        await removeConn(ConnectionId); // Delete connection from Dynamo
    } else {
        throw err;
    }
}
The create function just returns:
return new AWS.ApiGatewayManagementApi({
    apiVersion: '2018-11-29',
    endpoint: domainName
});
CloudWatch logs suggest that all functions are triggering and completing successfully with no errors. They also show that connections are being retrieved from Dynamo. However, the clients are not receiving any messages.
When running the projects locally and using localhost URLs, everything works as expected. What am I doing wrong here?
First, the correct domain for the endpoint is
const endpoint = `${domainName}/dev`;
Second, to see errors in CloudWatch, you need to await the postToConnection promise.
Third, the external services were calling a REST endpoint in the WebSocket service, which means two APIs are added to API Gateway. The REST APIs need the following permissions for the WebSocket API:
Action:
- "execute-api:Invoke"
- "execute-api:ManageConnections"

AWS lambda fails to call Facebook SDK

I am setting up an AWS Lambda to call the Facebook SDK internally, but unfortunately I am not able to get any response from the Facebook SDK.
Please find the code below:
// Assumes account (an ad account object), Campaign, and validateAuthToken
// are initialised elsewhere, e.g. via the Facebook Business SDK.
const listCampaign = async (event, context) => {
    context.callbackWaitsForEmptyEventLoop = false;
    await validateAuthToken(event.headers.Authorization, event.headers.accountId);
    console.log("account is", account);
    return await account.getCampaigns([
        Campaign.Fields.account_id,
        Campaign.Fields.adlabels,
        Campaign.Fields.bid_strategy,
        Campaign.Fields.boosted_object_id,
        Campaign.Fields.brand_lift_studies,
        Campaign.Fields.budget_rebalance_flag,
        Campaign.Fields.budget_remaining,
        Campaign.Fields.buying_type,
        Campaign.Fields.can_create_brand_lift_study,
    ])
    .then((campaign) => {
        console.log("first check 3", campaign); // No response from the Facebook SDK; after 30 s the endpoint times out
    });
};
The most probable cause is that the security group is not configured to allow outbound connections. If that is not the case and the Lambda function is deployed in a VPC, also check that the VPC's subnets route through a NAT gateway to an Internet gateway; otherwise the function cannot reach Facebook's servers.
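To rule out the security group, you can list its egress rules. This sketch assumes sg-xxxxxxxx stands in for the security group attached to the function, and that outbound HTTPS (port 443) is what the Facebook SDK needs:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.describeSecurityGroups({ GroupIds: ['sg-xxxxxxxx'] }).promise() // placeholder group ID
    .then(function (res) {
        res.SecurityGroups.forEach(function (sg) {
            // Outbound rules; an empty list here means no outside connections are allowed.
            console.log(sg.GroupId, JSON.stringify(sg.IpPermissionsEgress));
        });
    })
    .catch(function (err) { console.log({ "error": err }); });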

Node.js code works locally but does not work on AWS Lambda

I have a Node.js function for AWS Lambda. It reads a JSON file from an S3 bucket as a stream, parses it, and prints the parsed objects to the console. I am using the stream-json module for parsing.
It works in my local environment and prints the objects to the console. But it does not print the objects to the log streams (CloudWatch) on Lambda; it simply times out after the max duration. It prints the surrounding log statements, but not the object values.
1. Using Node.js 6.10 in both environments.
2. The callback to the Lambda function is invoked only after the stream ends.
3. Lambda has full access to S3.
4. Also tried a Promise to wait until the stream completes, but no change.
What am I missing? Thank you in advance.
const AWS = require('aws-sdk');
const {parser} = require('stream-json');
const {streamArray} = require('stream-json/streamers/StreamArray');
const {chain} = require('stream-chain');

const S3 = new AWS.S3({ apiVersion: '2006-03-01' });

/** ******************** Lambda Handler *************************** */
exports.handler = (event, context, callback) => {
    // Get the object from the event and show its content type
    const bucket = event.Records[0].s3.bucket.name;
    const key = event.Records[0].s3.object.key;
    const params = {
        Bucket: bucket,
        Key: key
    };
    console.log("Source: " + bucket + "//" + key);

    let s3ReaderStream = S3.getObject(params).createReadStream();
    console.log("Setting up pipes");

    const pipeline = chain([
        s3ReaderStream,
        parser(),
        streamArray(),
        data => {
            console.log(data.value);
        }
    ]);

    pipeline.on('data', (data) => console.log(data));
    pipeline.on('end', () => callback(null, "Stream ended"));
};
I have figured out that it is because my Lambda function is running inside a private VPC.
(I have to run it inside a private VPC because it needs to access my ElastiCache instance. I removed the related code when I posted it, for simplification.)
The code can access S3 from my local machine, but not from the private VPC.
There is a process to ensure that S3 is accessible from within your VPC, posted here: https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/
Here is another link that explains how to set up a VPC endpoint to be able to access AWS resources from within a VPC: https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
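For reference, the gateway endpoint those articles describe can also be created programmatically. A minimal sketch (the VPC and route-table IDs are placeholders for your own):

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.createVpcEndpoint({
    VpcId: 'vpc-xxxxxxxx',                     // placeholder VPC ID
    ServiceName: 'com.amazonaws.us-east-1.s3', // the S3 gateway endpoint service
    RouteTableIds: ['rtb-xxxxxxxx']            // route table(s) of the private subnet
}).promise()
    .then(function (res) { console.log('Created:', res.VpcEndpoint.VpcEndpointId); })
    .catch(function (err) { console.log({ "error": err }); });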

Delay in publishing message on topic using aws-sdk iotData.publish on aws lambda

I am using the aws-sdk to publish a message on a topic; below is the code:
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';
AWS.config.credentials = {
    accessKeyId: 'myaccesskeyid',
    secretAccessKey: 'mysecretaccesskey'
};

function LEDOnIntent() {
    this.iotdata = new AWS.IotData({
        endpoint: 'XXXXXXXXX.iot.us-east-1.amazonaws.com'
    });
}

LEDOnIntent.prototype.publishMessage = function() {
    console.log('>publishMessage');
    var params = {
        topic: 'test_topic', /* required */
        payload: new Buffer('{action : "LED on"}') || 'STRING_VALUE',
        qos: 1
    };
    this.iotdata.publish(params, function(err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else {
            console.log("Message published : " + data); // successful response
        }
    });
};
It works fine in local unit testing, but when I deploy this code on AWS Lambda I get very uneven behaviour: for the first few requests it will not publish the message, then it works fine while I test it continuously. When I test again after a break, it stops working for some initial requests.
Behind the scenes, Lambda uses a container model: it creates a container when one is needed and destroys it when it is no longer required.
The reason you see a delay in the initial requests is that it takes time to set up a container and do the necessary bootstrapping, which adds latency to those invocations. You typically see this latency when a Lambda function is invoked for the first time or after it has been updated; after that, AWS Lambda tries to reuse the container for subsequent invocations of the function.
AWS Lambda maintains the container for some time in anticipation of another invocation. In effect, the service freezes the container after a Lambda function completes, and thaws it for reuse if AWS Lambda chooses to reuse it when the function is invoked again.
Please read the official documentation here.
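Besides the cold-start latency, note that the handler must not finish before the publish has completed, or the container can be frozen mid-request and the message silently dropped. A hedged sketch of one way to structure this (reusing the asker's endpoint and topic; the handler wiring is an assumption, not the original skill code):

var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';

// Created once per container, so warm invocations skip the setup cost.
var iotdata = new AWS.IotData({
    endpoint: 'XXXXXXXXX.iot.us-east-1.amazonaws.com'
});

exports.handler = function (event, context, callback) {
    var params = {
        topic: 'test_topic',
        payload: Buffer.from('{"action": "LED on"}'),
        qos: 1
    };
    // Complete the invocation only after the publish resolves, so the
    // request is not lost when Lambda freezes the container.
    iotdata.publish(params).promise()
        .then(function () { callback(null, 'Message published'); })
        .catch(function (err) { callback(err); });
};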

How to use encrypted environment variables in AWS Lambda?

I am trying to use encrypted environment variables in an AWS Lambda function running on Node.js 4.3, but the code hangs when trying to decrypt the variables. I don't get any error messages; it just times out. Here is what I have tried:
I created the encryption key in the same region as the Lambda, and ensured that the role the Lambda runs as has access to the key. (I've even tried giving the role full control of the key.)
When creating the Lambda, I enable encryption helpers, select my encryption key, and encrypt the environment variable.
Next I click the "Code" button, which gives me JavaScript code that's supposed to handle the decryption at runtime. Here is the code; the only changes I have made are adding console.log statements and a try/catch:
"use strict";
const AWS = require('aws-sdk');
const encrypted = process.env['DBPASS'];
let decrypted;
function processEvent(event, context, callback) {
console.log("Decrypted: " + decrypted);
callback();
}
exports.handler = (event, context, callback) => {
if (decrypted) {
console.log('data is already decrypted');
processEvent(event, context, callback);
} else {
console.log('data is NOT already decrypted: ' + encrypted);
// Decrypt code should run once and variables stored outside of the function
// handler so that these are decrypted once per container
const kms = new AWS.KMS();
console.log('got kms object');
try {
var myblob = new Buffer(encrypted, 'base64');
console.log('got blob');
kms.decrypt({ CiphertextBlob: myblob }, (err, data) => {
console.log('inside decrypt callback');
if (err) {
console.log('Decrypt error:', err);
return callback(err);
}
console.log('try to get plaintext');
decrypted = data.Plaintext.toString('ascii');
console.log('decrypted: ' + decrypted);
processEvent(event, context, callback);
});
}
catch(e) {
console.log("exception: " + e);
callback('error!');
}
}
};
Here is what I get when I run the function:
data is NOT already decrypted: AQECAH.....
got kms object
got blob
END RequestId: 9b7af.....
Task timed out after 30.00 seconds
When I run the function, it times out. It prints all log statements up to "got blob" and then just stops; no error message other than the timeout. I've tried increasing the timeout and memory for the Lambda, but that just makes it wait longer before timing out.
How is decryption supposed to work when I never tell the app what decryption key to use? The documentation for decrypt does not mention any way to tell it what decryption key to use. And I am not getting any error messages that would tell me it doesn't know what key to use or anything.
I've tried going through this tutorial but it just tells me to do the same thing I've already done. I've also read all of the environment variables documentation but it says that what I'm doing should just work.
Decrypting the environment variables requires an API call to the KMS service, so your Lambda function must be able to reach the KMS endpoint. If your Lambda is running in a VPC, make sure the VPC has a NAT gateway (or a KMS interface VPC endpoint) so the function can call KMS; otherwise the call hangs exactly as you describe. As for the key: you never have to specify it for decryption, because the ciphertext that KMS produces embeds metadata identifying the key that encrypted it.
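One way to make this failure mode visible instead of a silent timeout is to give the KMS client short network timeouts. This is a sketch with illustrative values, not anything the tutorial prescribes:

const AWS = require('aws-sdk');

// Short timeouts make a blocked VPC path fail fast with an error you can log,
// instead of hanging until the Lambda timeout.
const kms = new AWS.KMS({
    httpOptions: { connectTimeout: 3000, timeout: 5000 },
    maxRetries: 1
});

kms.decrypt({ CiphertextBlob: new Buffer(process.env['DBPASS'], 'base64') })
    .promise()
    .then((data) => console.log('decrypted length:', data.Plaintext.length))
    .catch((err) => console.log('Decrypt error:', err.code, err.message));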
