AWS Lambda behind VPC times out when communicating with S3 even with endpoint - node.js

I have a lambda behind a VPC. When I try to get an S3 object, I get a "connect ETIMEDOUT" error. I set up an Endpoint and still have this problem.
I'm able to get the object if I remove the VPC so I know the VPC is the issue and not permissions.
I had already set up an Internet Gateway to communicate with the outside world (and I've confirmed that it works). Following Stack Overflow and these instructions (https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/), I created an Endpoint to Service "com.amazonaws.us-east-1.s3" with "Full Access" and associated it with the Route Table I had created to get outside-world access.
[Screenshot of the VPC Gateway Endpoint created]
The VPC, the lambda and the S3 are all in the same region. (Lambda and S3 are created via SAM.)
I initially used the default AWS and S3 client objects. I've tried setting the region explicitly on both, with no luck:
AWS.config.update({ region: 'us-east-1' });
const s3 = new AWS.S3({ region: 'us-east-1' });
const s3FileParams = {
  Bucket: srcBucket,
  Key: srcKey,
};
const resp = await s3.getObject(s3FileParams).promise();
I also tried explicitly setting the S3 endpoint with s3 = new AWS.S3({ endpoint: 'https://s3.us-east-1.amazonaws.com' });
Let me know any other information I can provide and thanks in advance.

Requirements for using an S3 Gateway Endpoint:
1. Ensure that the endpoint policy allows the appropriate access to S3. This is required in addition to the Lambda's IAM permissions.
2. Add an entry to the route table(s) used by any subnets that need to use the gateway.
3. Ensure that the Lambda's security group allows outbound HTTPS traffic to either the internet (0.0.0.0/0) or to the prefix list ID (pl-xxxxxxx) for S3 in your region.
4. Enable DNS resolution in your VPC: turn on the enableDnsHostnames and enableDnsSupport attributes.
5. The S3 buckets being accessed must be in the same region as the VPC.
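While you work through those items, it can also help to make the failure visible quickly: by default the SDK retries and waits long enough that the Lambda often hits its own timeout first. A minimal sketch (the timeout values here are illustrative assumptions, not recommendations):

const AWS = require('aws-sdk');

// Fail fast instead of hanging for the function's full timeout:
// connectTimeout bounds the TCP connect, timeout bounds the whole request.
const s3 = new AWS.S3({
  region: 'us-east-1',
  maxRetries: 1,
  httpOptions: { connectTimeout: 3000, timeout: 10000 }, // milliseconds
});

With this in place, a misconfigured route or security group shows up as a quick ETIMEDOUT in the logs rather than a silent hang.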

The answer was item 3 in Greg's list above. I switched to a new security group that (for now) allows all outbound traffic to anywhere, and that solved my problem.
(Now that I know there's a path forward, I can experiment with tighter outbound rules.)
Thanks to all! (And to the original folks who posted about VPC endpoints in other questions.)

Related

Terraform aws_storagegateway_gateway not finding VPC Endpoint

I'm trying to launch an S3 File Gateway (AWS Storage Gateway) via Terraform, with EC2 hosting and a VPC endpoint for Storage Gateway.
I've been able to launch the Storage Gateway EC2 into a private subnet, then launch a second EC2 instance into the public subnet so that I can retrieve the gateway's activation key (https://docs.aws.amazon.com/storagegateway/latest/userguide/get-activation-key.html).
Unfortunately, when I provide a value for the activation_key in Terraform, it seems to be ignoring the gateway_vpc_endpoint, and just creates the Storage Gateway with a Public endpoint instead.
Code used:
resource "aws_storagegateway_gateway" "s3_file_gateway" {
gateway_vpc_endpoint = aws_vpc_endpoint.storage_gateway.dns_entry[0].dns_name
activation_key = "XXXX-XXXX-XXXX-XXXX-XXXX"
gateway_name = "Storage-Gateway"
gateway_timezone = var.gateway_timezone
gateway_type = var.gateway_type
cloudwatch_log_group_arn = aws_cloudwatch_log_group.storage_gateway.arn
tags = var.tags
lifecycle {
ignore_changes = [smb_active_directory_settings, gateway_ip_address]
}
}
Has anyone come across this and been able to resolve it?
This question is a few months old now, but when you connect to the second EC2 instance to retrieve the activation key from the gateway EC2, you're probably curling the incorrect URL.
You may have been following the instructions in this documentation:
https://docs.aws.amazon.com/storagegateway/latest/userguide/get-activation-key.html
In fact, this is the documentation that is more useful for what you're trying to achieve:
https://docs.aws.amazon.com/filegateway/latest/files3/gateway-private-link.html
This states that the format of the URL you should curl to get the activation key is:
http://VM IP ADDRESS/?gatewayType=FILE_S3&activationRegion=REGION&vpcEndpoint=VPCEndpointDNSname&no_redirect
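If you'd rather script that step than curl by hand, here's a minimal Node.js sketch of the same request; the IP address, region, and endpoint DNS name below are placeholders you would substitute with your own values:

const http = require('http');

// Placeholder values -- use your gateway VM's private IP, your region,
// and the DNS name of your Storage Gateway VPC endpoint.
const url = 'http://10.0.1.25/?gatewayType=FILE_S3'
  + '&activationRegion=us-east-1'
  + '&vpcEndpoint=vpce-example.storagegateway.us-east-1.vpce.amazonaws.com'
  + '&no_redirect';

http.get(url, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  // With no_redirect, the activation key is returned in the response body.
  res.on('end', () => console.log('Activation key:', body.trim()));
}).on('error', (err) => console.error('Request failed:', err));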

Subscribing to SNS topic with Terraform via HTTP inside VPC

I have an EB instance that lives inside of a VPC. I do not want this instance to be externally accessible and it also needs to access an RDS instance inside the same VPC.
I want to create a subscription from SNS to this EB instance.
Here is the Terraform I have come up with:
resource "aws_sns_topic_subscription" "my_sub" {
topic_arn = aws_sns_topic.my_topic.arn
protocol = "http"
endpoint = "http://${aws_elastic_beanstalk_environment.my_eb_app.endpoint_url}/api/sns"
endpoint_auto_confirms = true
}
However, this fails because it is an internal endpoint:
Error: Error creating SNS topic: AuthorizationError: Not authorized to subscribe internal endpoints
status code: 403, request id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx
How should I work around this?
There's apparently no way around making the endpoint public: as the error says, SNS refuses to deliver HTTP notifications to internal endpoints.
The usual recommendation is to subscribe an SQS queue to the topic instead and have the instance poll that queue.
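The subscription itself can be created with the Node.js SDK; a sketch (the ARNs are hypothetical, and the queue's access policy must separately allow sns.amazonaws.com to send messages from this topic):

const AWS = require('aws-sdk');
const sns = new AWS.SNS({ region: 'us-east-1' });

// Hypothetical ARNs -- substitute your own topic and queue.
sns.subscribe({
  TopicArn: 'arn:aws:sns:us-east-1:123456789012:my-topic',
  Protocol: 'sqs',
  Endpoint: 'arn:aws:sqs:us-east-1:123456789012:my-queue',
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Subscription ARN:', data.SubscriptionArn);
});

The instance then polls the queue from inside the VPC (via an SQS VPC endpoint if it has no internet route), so nothing needs to be publicly accessible.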

InvalidLocationConstraint creating a bucket in af-south-1 (Cape Town) region using node.js aws-sdk

I am getting a InvalidLocationConstraint: The specified location-constraint is not valid error when trying to create a S3 bucket in the af-south-1 (Cape Town) region using node.js aws-sdk, at version 2.726.0 (The latest at the time).
The region has been enabled and I am able to create a bucket using the management console. The IAM user I am using for debugging has full administrative access in the account.
My create bucket call is:
let res = await s3.createBucket({
  Bucket: 'bucketname',
  CreateBucketConfiguration: { LocationConstraint: 'af-south-1' }
}).promise();
This works for regions other than af-south-1.
In the documentation, a list of location constraints is given, is this list exhaustive of all possible options, or just a list of examples?
Is it possible to create a bucket in af-south-1 using the sdk, or am I doing something wrong?
This is similar to this question.
Newer AWS regions support only regional endpoints. If you are creating buckets in more than one region and any of them is one of the newer regions, you therefore need a separate instance of the S3 class, pinned to the right region, for each of those regions:
const s3 = new AWS.S3({
  region: 'af-south-1',
});
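Putting it together, a minimal sketch (inside an async function, as in the question; the bucket name and regions are the question's own):

// One client per region when newer regions are involved.
const s3UsEast = new AWS.S3({ region: 'us-east-1' });
const s3AfSouth = new AWS.S3({ region: 'af-south-1' });

// Each createBucket call goes through the client pinned to that region.
const res = await s3AfSouth.createBucket({
  Bucket: 'bucketname',
  CreateBucketConfiguration: { LocationConstraint: 'af-south-1' },
}).promise();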

How can I check if the AWS SDK was provided with credentials?

There are many ways to provide the AWS SDK with credentials to perform operations.
I want to make sure any of the methods were successful in setting up the interface before I try my operation on our continuous deployment system.
How can I check if AWS SDK was able to find credentials?
You can access them via the config.credentials property on the service object. All AWS service classes in the SDK expose a config property.
Class: AWS.Config
The main configuration class used by all service objects to set the region, credentials, and other options for requests.
By default, credentials and region settings are left unconfigured. This should be configured by the application before using any AWS service APIs.
// Using S3
var s3 = new AWS.S3();
console.log(s3.config.credentials);
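If you want an active check rather than just inspecting the property, the SDK can also resolve the credential provider chain on demand; a small sketch suitable for a CI step:

var AWS = require('aws-sdk');

// getCredentials resolves the default provider chain (environment
// variables, shared credentials file, instance/container role) and
// reports failure through the callback.
AWS.config.getCredentials(function(err) {
  if (err) {
    console.error('No credentials found:', err.message);
    process.exit(1);
  } else {
    console.log('Resolved access key:', AWS.config.credentials.accessKeyId);
  }
});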

Can't run ec2 method in AWS Lambda Function

I'm invoking the following lambda function to describe an instance information:
'use strict';
var aws = require('aws-sdk');
exports.handler = function(event, context) {
    // The instance ID arrives in the CloudWatch alarm's dimensions,
    // embedded in the SNS message body.
    var instanceID = JSON.parse(event.Records[0].Sns.Message).Trigger.Dimensions[0].value;
    aws.config.region = 'us-east-1';
    var ec2 = new aws.EC2();
    var params = { InstanceIds: [instanceID] };
    ec2.describeInstances(params, function(e, data) {
        if (e)
            console.log(e, e.stack);
        else
            console.log(data);
    });
};
In CloudWatch Logs I can see that the function runs to the end, but nothing inside the ec2.describeInstances callback is ever logged:
END RequestId: xxxxxxxxxxxxxx
REPORT RequestId: xxxxxxxxxxxxxx Duration: xx ms Billed Duration: xx ms Memory Size: xx MB Max Memory Used: xx MB
My lambda function has VPC access and an IAM Role with AdministratorAccess (full access). For some reason, it can't run the ec2.describeInstances method. What is wrong and how can I fix it?
When you add VPC configuration to a Lambda function, it can only access resources in that VPC. If a Lambda function needs to access both VPC resources and the public internet, the VPC needs a Network Address Translation (NAT) instance or NAT gateway. The ec2.describeInstances call goes to the public EC2 API endpoint, so it needs an internet connection through that NAT.
AWS Lambda uses the VPC information you provide to set up ENIs that allow your Lambda function to access VPC resources. Each ENI is assigned a private IP address from the IP address range within the subnets you specify, but is not assigned any public IP address. Therefore, if your Lambda function requires internet access (for example, to access AWS services that don't have VPC endpoints, such as Amazon CloudWatch), you can configure a NAT instance inside your VPC or you can use the Amazon VPC NAT gateway. For more information, see NAT Gateways in the Amazon VPC User Guide. You cannot use an Internet gateway attached to your VPC, since that requires the ENI to have public IP addresses.
First, try giving this role to your Lambda
{
"Effect": "Allow",
"Resource": "*",
"Action": [
"ec2:DescribeInstances",
"ec2:CreateNetworkInterface",
"ec2:AttachNetworkInterface",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface",
"ec2:DetachNetworkInterface",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:ResetNetworkInterfaceAttribute",
"autoscaling:CompleteLifecycleAction"
]
}
If that doesn't make a difference, then you need to create an ENI. Go to the 'Network Interfaces' page (https://console.aws.amazon.com/ec2/v2/home#NIC:sort=securityGroup) and choose 'Create Network Interface'. Select the appropriate security groups and the subnet (say, s0bn3t).
Now, in the 'Advanced settings' of your Lambda, when you select the VPC, you will see a list of subnets. Select the subnet that the above ENI is associated with (s0bn3t).
I believe this should do it.
