I use this code within a Lambda, and it complies with everything I read on Stack Overflow and in the AWS SDK documentation.
However, it neither returns anything nor throws an error. The code simply gets stuck on s3.getObject(params).promise(), so the Lambda function runs into a timeout, even after more than 30 seconds. The file I try to fetch is only 25 KB.
Any idea why this happens?
var AWS = require('aws-sdk');
var s3 = new AWS.S3({httpOptions: {timeout: 3000}});
async function getObject(bucket, objectKey) {
  try {
    const params = {
      Bucket: bucket,
      Key: objectKey
    }
    console.log("Trying to fetch " + objectKey + " from bucket " + bucket)
    const data = await s3.getObject(params).promise()
    console.log("Done loading image from S3")
    return data.Body.toString('utf-8')
  } catch (e) {
    console.log("error loading from S3")
    throw new Error(`Could not retrieve file from S3: ${e.message}`)
  }
}
When testing the function, I receive the following timeout.
START RequestId: 97782eac-019b-4d46-9e1e-3dc36ad87124 Version: $LATEST
2019-03-19T07:51:30.225Z 97782eac-019b-4d46-9e1e-3dc36ad87124 Trying to fetch public-images/low/ZARGES_41137_PROD_TECH_ST_LI.jpg from bucket zarges-pimdata-test
2019-03-19T07:51:54.979Z 97782eac-019b-4d46-9e1e-3dc36ad87124 error loading from S3
2019-03-19T07:51:54.981Z 97782eac-019b-4d46-9e1e-3dc36ad87124 {"errorMessage":"Could not retrieve file from S3: Connection timed out after 3000ms","errorType":"Error","stackTrace":["getObject (/var/task/index.js:430:15)","","process._tickDomainCallback (internal/process/next_tick.js:228:7)"]}
END RequestId: 97782eac-019b-4d46-9e1e-3dc36ad87124
REPORT RequestId: 97782eac-019b-4d46-9e1e-3dc36ad87124 Duration: 24876.90 ms
Billed Duration: 24900 ms Memory Size: 512 MB Max Memory Used: 120 MB
The image I am fetching is actually publicly available:
https://s3.eu-central-1.amazonaws.com/zarges-pimdata-test/public-images/low/ZARGES_41137_PROD_TECH_ST_LI.jpg
const data = (await (s3.getObject(params).promise())).Body.toString('utf-8')
If your Lambda function is associated with a VPC, it loses internet access, which is required to reach S3. However, instead of following the Lambda warning that says "Associate a NAT" etc., you can create an S3 endpoint under VPC > Endpoints, and your Lambda function will work as expected, with no need to manually set up internet access for your VPC.
https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
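For reference, here is a rough sketch of creating such a gateway endpoint with the Node SDK; the VPC ID and route table ID are placeholders you would replace with your own, and the region matches the bucket above:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'eu-central-1' });

// Sketch: create an S3 gateway endpoint so a VPC-bound Lambda can reach S3.
// The VPC ID and route table ID below are placeholders.
ec2.createVpcEndpoint({
  VpcId: 'vpc-xxxxxxxx',                         // your VPC
  ServiceName: 'com.amazonaws.eu-central-1.s3',  // S3 gateway service in your region
  RouteTableIds: ['rtb-xxxxxxxx']                // route tables used by the Lambda subnets
}, (err, data) => {
  if (err) console.error(err);
  else console.log(data.VpcEndpoint.VpcEndpointId);
});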
The default timeout of the AWS SDK is 120000 ms. If your Lambda's timeout is shorter than that, you will never receive the actual error.
Either lower your AWS SDK timeout so the SDK error surfaces before the Lambda times out
var AWS = require('aws-sdk');
var s3 = new AWS.S3({httpOptions: {timeout: 3000}});
or extend the timeout of your Lambda.
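As a rough sketch (the exact values are assumptions, not recommendations), keeping both the connect and response timeouts well below the Lambda timeout lets the real error reach your logs:

const AWS = require('aws-sdk');

// Sketch: keep the SDK timeouts well below the Lambda timeout so the real
// error is logged instead of the function silently hitting its own timeout.
const s3 = new AWS.S3({
  httpOptions: {
    connectTimeout: 3000, // time allowed to establish the connection (ms)
    timeout: 5000         // time allowed for the response (ms)
  }
});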
This issue is definitely related to connectivity.
Check your VPC settings, as they are likely blocking the Lambda's connection to the Internet (AWS managed services such as S3 are accessible only via the Internet).
If you are using localstack, make sure SSL is false and s3ForcePathStyle is true.
That was my problem
new AWS.S3({ endpoint: '0.0.0.0:4572', sslEnabled: false, s3ForcePathStyle: true })
More details here
Are you sure you are providing your accessKeyId and secretAccessKey? I was having timeouts with no error message until I added them to the config:
AWS.config.update({
  signatureVersion: 'v4',
  region: 'us-east-1',
  accessKeyId: secret.accessKeyID,
  secretAccessKey: secret.secretAccessKey
});
Related
I have a Node.js/Express app from which I want to connect to AWS S3.
I do have a temporary approach to make the connection:
environment file
aws_access_key_id=XXX
aws_secret_access_key=XXXX/
aws_session_token=xxxxxxxxxxxxxxxxxxxxxxxxxx
S3-connection-service.js
const AWS = require("aws-sdk");
AWS.config.update({
  accessKeyId: `${process.env.aws_access_key_id}`,
  secretAccessKey: `${process.env.aws_secret_access_key}`,
  sessionToken: `${process.env.aws_session_token}`,
  region: `${process.env.LOCAL_AWS_REGION}`
});

const S3 = new AWS.S3();

module.exports = {
  listBucketContent: (filePath) =>
    new Promise((resolve, reject) => {
      const params = { Bucket: bucketName, Prefix: filePath };
      S3.listObjects(params, (err, objects) => {
        if (err) {
          reject(err);
        } else {
          resolve(objects);
        }
      });
    }),
  ....
  ....
}
controller.js
const fetchFile = require("../../S3-connection-service.js");
const AWSFolder = await fetchFile.listBucketContent(filePath);
Fine, it works, and I'm able to access S3 bucket files and play with them.
PROBLEM
The problem is that the connection is not persistent. Since I use a session token, the credentials remain valid only for some time; when new tokens are generated, I have to copy-paste them into the env file and re-run the Node app.
I really have no idea how I can make the connection persistent.
Where do I store AWS credentials/secrets, and how do I use them to connect to S3 so the connection remains alive?
Just remove
AWS.config.update({
  accessKeyId: `${process.env.aws_access_key_id}`,
  secretAccessKey: `${process.env.aws_secret_access_key}`,
  sessionToken: `${process.env.aws_session_token}`,
  region: `${process.env.LOCAL_AWS_REGION}`
});
code block from the Lambda source in the file S3-connection-service.js.
Attach a role to the Lambda function with the proper permissions and you will have the same functionality (a sketch follows).
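For example, you could attach the AWS-managed read-only S3 policy to the function's execution role. The snippet below is only a sketch from a management script; the role name is a placeholder, and in practice you would usually do this from the IAM console or your infrastructure-as-code tooling:

const AWS = require('aws-sdk');
const iam = new AWS.IAM();

// Sketch: attach the AWS-managed read-only S3 policy to the Lambda's execution role.
// 'my-lambda-execution-role' is a placeholder for your function's role name.
iam.attachRolePolicy({
  RoleName: 'my-lambda-execution-role',
  PolicyArn: 'arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess'
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Policy attached');
});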
For local development, you can set environment variables before testing your application.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
If you are using an IDE, you can set these environment variables in it.
If you are testing from the CLI:
$ AWS_ACCESS_KEY_ID=EXAMPLE AWS_SECRET_ACCESS_KEY=EXAMPLEKEY AWS_DEFAULT_REGION=us-west-2 npm start
connect to S3 so the connection remains alive?
You can't make one request to S3 and keep it alive forever.
These are your options:
Add a try/catch statement inside your code to handle the credentials-expired error. Then generate new credentials and re-initialize the S3 client (see the sketch after this list).
Instead of using a Role, use a User (IAM identities). User credentials can be valid forever, so you won't need to update the credentials in this case.
Do not provide the credentials to AWS.config.update like you are doing right now. If you don't provide the credentials, the AWS client will try to read them from your ~/.aws/credentials file automatically. If you create a script to update them every hour (ex: a cronjob), then your credentials will be up-to-date at all times.
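A rough sketch of the first option, assuming the failure surfaces as an ExpiredToken error code; getFreshCredentials is a hypothetical helper standing in for however you obtain new temporary credentials (STS, a secrets store, etc.):

const AWS = require('aws-sdk');

let s3 = new AWS.S3();

// Hypothetical helper: fetch fresh temporary credentials and rebuild the client.
async function refreshClient() {
  const fresh = await getFreshCredentials(); // placeholder for your own logic
  AWS.config.update({
    accessKeyId: fresh.accessKeyId,
    secretAccessKey: fresh.secretAccessKey,
    sessionToken: fresh.sessionToken
  });
  s3 = new AWS.S3();
}

async function listBucketContent(bucketName, filePath) {
  try {
    return await s3.listObjects({ Bucket: bucketName, Prefix: filePath }).promise();
  } catch (err) {
    if (err.code === 'ExpiredToken' || err.code === 'ExpiredTokenException') {
      await refreshClient(); // get new credentials and retry once
      return s3.listObjects({ Bucket: bucketName, Prefix: filePath }).promise();
    }
    throw err;
  }
}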
<premise>
I'm new to cloud computing in general, AWS specifically, and REST APIs, and am trying to cobble together a "big-picture" understanding.
I am working with LocalStack - which, by my understanding, simulates the real AWS by responding identically to (a subset of) the AWS API if you specify the endpoint address/port that LocalStack listens at.
Lastly, I've been working from this tutorial: https://dev.to/goodidea/how-to-fake-aws-locally-with-localstack-27me
</premise>
Using the noted tutorial, and per its guidance, I successfully created an S3 bucket using the AWS CLI.
To demonstrate uploading a local file to the S3 bucket, though, the tutorial switches to node.js, which I think demonstrates the AWS node.js SDK:
// aws.js
// This code segment comes from https://dev.to/goodidea/how-to-fake-aws-locally-with-localstack-27me
//
const AWS = require('aws-sdk')
require('dotenv').config()
const credentials = {
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_KEY,
}
const useLocal = process.env.NODE_ENV !== 'production'
const bucketName = process.env.AWS_BUCKET_NAME
const s3client = new AWS.S3({
  credentials,
  /**
   * When working locally, we'll use the Localstack endpoints. This is the one for S3.
   * A full list of endpoints for each service can be found in the Localstack docs.
   */
  endpoint: useLocal ? 'http://localhost:4572' : undefined,
  /**
   * Including this option gets localstack to more closely match the defaults for
   * live S3. If you omit this, you will need to add the bucketName to the `Key`
   * property in the upload function below.
   *
   * see: https://github.com/localstack/localstack/issues/1180
   */
  s3ForcePathStyle: true,
})
const uploadFile = async (data, fileName) =>
  new Promise((resolve) => {
    s3client.upload(
      {
        Bucket: bucketName,
        Key: fileName,
        Body: data,
      },
      (err, response) => {
        if (err) throw err
        resolve(response)
      },
    )
  })
module.exports = uploadFile
// test-upload.js
// This code segment comes from https://dev.to/goodidea/how-to-fake-aws-locally-with-localstack-27me
//
const fs = require('fs')
const path = require('path')
const uploadFile = require('./aws')
const testUpload = () => {
  const filePath = path.resolve(__dirname, 'test-image.jpg')
  const fileStream = fs.createReadStream(filePath)
  const now = new Date()
  const fileName = `test-image-${now.toISOString()}.jpg`
  uploadFile(fileStream, fileName).then((response) => {
    console.log(":)")
    console.log(response)
  }).catch((err) => {
    console.log(":|")
    console.log(err)
  })
}
testUpload()
Invocation:
$ node test-upload.js
:)
{ ETag: '"c6b9e5b1863cd01d3962c9385a9281d"',
Location: 'http://demo-bucket.localhost:4572/demo-bucket/test-image-2019-03-11T21%3A22%3A43.511Z.jpg',
key: 'demo-bucket/test-image-2019-03-11T21:22:43.511Z.jpg',
Key: 'demo-bucket/test-image-2019-03-11T21:22:43.511Z.jpg',
Bucket: 'demo-bucket' }
I do not have prior experience with node.js, but my understanding of the above code is that it uses the AWS.S3.upload() AWS node.js SDK method to copy a local file to an S3 bucket, and prints the HTTP response (is that correct?).
Question: I observe that the HTTP response includes a "Location" key whose value looks like a URL I can copy/paste into a browser to view the image directly from the S3 bucket; is there a way to get this location using the AWS CLI?
Am I correct to assume that AWS CLI commands are analogues of the AWS SDK?
I tried uploading a file to my S3 bucket using the aws s3 cp CLI command, which I thought would be analogous to the AWS.S3.upload() method above, but it didn't generate any output, and I'm not sure what I should have done - or should do - to get a Location the way the HTTP response to the AWS.S3.upload() AWS node SDK method did.
$ aws --endpoint-url=http://localhost:4572 s3 cp ./myFile.json s3://myBucket/myFile.json
upload: ./myFile.json to s3://myBucket/myFile.json
Update: continued study makes me now wonder whether it is implicit that a file uploaded to an S3 bucket by any means - whether by the CLI command aws s3 cp or the node.js SDK method AWS.S3.upload(), etc. - can be accessed at http://<bucket_name>.<endpoint_without_http_prefix>/<bucket_name>/<key> ? E.g. http://myBucket.localhost:4572/myBucket/myFile.json?
If this is implicit, I suppose you could argue it's unnecessary to ever be given the "Location" as in that example node.js HTTP response.
Grateful for guidance - I hope it's obvious how painfully under-educated I am on all the involved technologies.
Update 2: It looks like the correct url is <endpoint>/<bucket_name>/<key>, e.g. http://localhost:4572/myBucket/myFile.json.
AWS CLI and the different SDKs offer similar functionality but some add extra features and some format the data differently. It's safe to assume that you can do what the CLI does with the SDK and vice-versa. You might just have to work for it a little bit sometimes.
As you said in your update, not every file that is uploaded to S3 is publicly available. Buckets have policies and files have permissions. Files are only publicly available if the policies and permissions allow it.
If the file is public then you can just construct the URL as you described. If you have the bucket set up for website hosting, you can also use the domain you set up.
But if the file is not public or you just want a temporary URL, you can use aws s3 presign s3://myBucket/myFile.json. This will give you a URL that can be used by anyone to download the file with the permissions of whoever executed the command. The URL will be valid for one hour unless you choose a different time with --expires-in. The SDK has similar functionality as well but you have to work a tiny bit harder to use it (see the sketch below).
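A rough sketch of the SDK equivalent with aws-sdk v2; the bucket and key are the ones from the example above, and the expiry matches the CLI default of one hour:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Generate a presigned GET URL, valid for one hour (3600 seconds).
const url = s3.getSignedUrl('getObject', {
  Bucket: 'myBucket',
  Key: 'myFile.json',
  Expires: 3600
});

console.log(url);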
Note: Starting with version 0.11.0, all APIs are exposed via a single edge service, which is accessible on http://localhost:4566 by default.
Considering that you've added some files to your bucket
aws --endpoint-url http://localhost:4566 s3api list-objects-v2 --bucket mybucket
{
    "Contents": [
        {
            "Key": "blog-logo.png",
            "LastModified": "2020-12-28T12:47:04.000Z",
            "ETag": "\"136f0e6acf81d2d836043930827d1cc0\"",
            "Size": 37774,
            "StorageClass": "STANDARD"
        }
    ]
}
you should be able to access your file with
http://localhost:4566/mybucket/blog-logo.png
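Equivalently, you can point the Node SDK at the same edge endpoint. This is only a sketch; the bucket, key, and dummy credentials are placeholders:

const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  endpoint: 'http://localhost:4566', // LocalStack edge service
  s3ForcePathStyle: true,
  accessKeyId: 'test',               // LocalStack accepts dummy credentials
  secretAccessKey: 'test'
});

s3.getObject({ Bucket: 'mybucket', Key: 'blog-logo.png' })
  .promise()
  .then(data => console.log(`Fetched ${data.ContentLength} bytes`))
  .catch(console.error);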
I have a node.js function for AWS Lambda. It reads a JSON file from an S3 bucket as a stream, parses it and prints the parsed objects to the console. I am using stream-json module for parsing.
It works on my local environment and prints the objects to the console. But it does not print the objects to the log streams (CloudWatch) on Lambda. It simply times out after the max duration. It prints the other log statements around, but not the object values.
1. Using node.js 6.10 in both environments.
2. callback to the Lambda function is invoked only after the stream ends.
3. Lambda has full access to S3
4. Also tried Promise to wait until streams complete. But no change.
What am I missing? Thank you in advance.
const AWS = require('aws-sdk');
const {parser} = require('stream-json');
const {streamArray} = require('stream-json/streamers/StreamArray');
const {chain} = require('stream-chain');
const S3 = new AWS.S3({ apiVersion: '2006-03-01' });
/** ******************** Lambda Handler *************************** */
exports.handler = (event, context, callback) => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = event.Records[0].s3.object.key;
  const params = {
    Bucket: bucket,
    Key: key
  };
  console.log("Source: " + bucket + "//" + key);

  let s3ReaderStream = S3.getObject(params).createReadStream();
  console.log("Setting up pipes");

  const pipeline = chain([
    s3ReaderStream,
    parser(),
    streamArray(),
    data => {
      console.log(data.value);
    }
  ]);

  pipeline.on('data', (data) => console.log(data));
  pipeline.on('end', () => callback(null, "Stream ended"));
};
I have figured out that it is because my Lambda function is running inside a private VPC.
(I have to run it inside a private VPC because it needs to access my ElastiCache instance. I removed related code when I posted the code, for simplification).
Code can access S3 from my local machine, but not from the private VPC.
There is a process to ensure that S3 is accessible from within your VPC. It is posted here https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/
Here is another link that explains how you should setup a VPC end point to be able to access AWS resources from within a VPC https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
I'm trying to create an S3 bucket and immediately assign a lambda notification event to it.
Here's the node test script I wrote:
const aws = require('aws-sdk');
const uuidv4 = require('uuid/v4');
aws.config.update({
  accessKeyId: 'key',
  secretAccessKey: 'secret',
  region: 'us-west-1'
});

const s3 = new aws.S3();

const params = {
  Bucket: `bucket-${uuidv4()}`,
  ACL: "private",
  CreateBucketConfiguration: {
    LocationConstraint: 'us-west-1'
  }
};

s3.createBucket(params, function (err, data) {
  if (err) {
    throw err;
  } else {
    const bucketUrl = data.Location;
    const bucketNameRegex = /bucket-[a-z0-9\-]+/;
    const bucketName = bucketNameRegex.exec(bucketUrl)[0];

    const params = {
      Bucket: bucketName,
      NotificationConfiguration: {
        LambdaFunctionConfigurations: [
          {
            Id: `lambda-upload-notification-${bucketName}`,
            LambdaFunctionArn: 'arn:aws:lambda:us-west-1:xxxxxxxxxx:function:respondS3Upload',
            Events: ['s3:ObjectCreated:CompleteMultipartUpload']
          },
        ]
      }
    };

    // Throws "Unable to validate the following destination configurations" until an event is manually added and deleted from the bucket in the AWS UI Console
    s3.putBucketNotificationConfiguration(params, function(err, data) {
      if (err) {
        console.error(err);
        console.error(this.httpResponse.body.toString());
      } else {
        console.log(data);
      }
    });
  }
});
The creation works fine but calling s3.putBucketNotificationConfiguration from the aws-sdk throws:
{ InvalidArgument: Unable to validate the following destination configurations
at Request.extractError ([...]/node_modules/aws-sdk/lib/services/s3.js:577:35)
at Request.callListeners ([...]/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit ([...]/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit ([...]/node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition ([...]/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo ([...]/node_modules/aws-sdk/lib/state_machine.js:14:12)
at [...]/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> ([...]/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> ([...]/node_modules/aws-sdk/lib/request.js:685:12)
at Request.callListeners ([...]/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
message: 'Unable to validate the following destination configurations',
code: 'InvalidArgument',
region: null,
time: 2017-11-10T02:55:43.004Z,
requestId: '9E1CB35811ED5828',
extendedRequestId: 'tWcmPfrAu3As74M/0sJL5uv+pLmaD4oBJXwjzlcoOBsTBh99iRAtzAloSY/LzinSQYmj46cwyfQ=',
cfId: undefined,
statusCode: 400,
retryable: false,
retryDelay: 4.3270874729153475 }
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>InvalidArgument</Code>
<Message>Unable to validate the following destination configurations</Message>
<ArgumentName1>arn:aws:lambda:us-west-1:xxxxxxxxxx:function:respondS3Upload, null</ArgumentName1>
<ArgumentValue1>Not authorized to invoke function [arn:aws:lambda:us-west-1:xxxxxxxxxx:function:respondS3Upload]</ArgumentValue1>
<RequestId>9E1CB35811ED5828</RequestId>
<HostId>tWcmPfrAu3As74M/0sJL5uv+pLmaD4oBJXwjzlcoOBsTBh99iRAtzAloSY/LzinSQYmj46cwyfQ=</HostId>
</Error>
I've run it with a role assigned to lambda with what I think are all the policies it needs. I could be missing something. I'm using my root access keys to run this script.
I've thought it might be a timing error where S3 needs time to create the bucket before adding the event, but I've waited a while, hardcoded the bucket name, and run my script again which throws the same error.
The weird thing is that if I create the event hook in the S3 UI and immediately delete it, my script works if I hardcode that bucket name into it. It seems like creating the event in the UI adds some needed permissions but I'm not sure what that would be in the SDK or in the console UI.
Any thoughts or things to try? Thanks for your help
You are getting this message because your s3 bucket is missing permissions for invoking your lambda function.
According to the AWS documentation, there are two types of permissions required:
Permissions for your Lambda function to invoke services
Permissions for Amazon S3 to invoke your Lambda function
You should create an object of type 'AWS::Lambda::Permission' and it should look similar to this:
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "<optional>",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "<ArnToYourFunction>",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "<YourAccountId>"
        },
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:s3:::<YourBucketName>"
        }
      }
    }
  ]
}
Finally looked at this again after a year. This was a hackathon project from last year that we revisited. @davor.obilinovic's answer was very helpful in pointing me to the Lambda permission I needed to add. It still took me a little while to figure out exactly what it needed to look like.
Here are the AWS JavaScript SDK and Lambda API docs
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html#addPermission-property
https://docs.aws.amazon.com/lambda/latest/dg/API_AddPermission.html
The JS SDK docs have this line:
SourceArn: "arn:aws:s3:::examplebucket/*",
I couldn't get it working for the longest time and was still getting the Unable to validate the following destination configurations error.
Changing it to
SourceArn: "arn:aws:s3:::examplebucket",
fixed that issue. The /* was apparently wrong and I should have looked at the answer I got here more closely but was trying to follow the AWS docs.
After developing for a while and creating lots of buckets, Lambda permissions and S3 Lambda notifications, calling addPermission started throwing a The final policy size (...) is bigger than the limit (20480) error. Adding new, individual permissions for each bucket adds them to the bottom of the Lambda Function Policy, and apparently that policy has a maximum size.
The policy doesn't seem editable in the AWS Management Console so I had fun deleting each entry with the SDK. I copied the policy JSON, pulled the Sids out and called removePermission in a loop (which threw rate limit errors and I had to run it many times).
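A rough sketch of that cleanup, assuming the policy returned by getPolicy is the usual JSON document with a Statement array (rate-limit backoff omitted):

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// Sketch: read the function policy, then remove every statement by its Sid.
// TooManyRequestsException handling is omitted for brevity.
async function clearFunctionPolicy(functionName) {
  const { Policy } = await lambda.getPolicy({ FunctionName: functionName }).promise();
  const statements = JSON.parse(Policy).Statement;

  for (const { Sid } of statements) {
    await lambda.removePermission({ FunctionName: functionName, StatementId: Sid }).promise();
    console.log(`Removed ${Sid}`);
  }
}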
Finally I discovered that omitting the SourceArn key will give Lambda permission to all S3 buckets.
Here's my final code using the SDK to add the permission I needed. I just ran this once for my function.
const aws = require('aws-sdk');
aws.config.update({
  accessKeyId: process.env.AWS_ACCESS,
  secretAccessKey: process.env.AWS_SECRET,
  region: process.env.AWS_REGION,
});

// Creates Lambda Function Policy which must be created once for each Lambda function
// Must be done before calling s3.putBucketNotificationConfiguration(...)
function createLambdaPermission() {
  const lambda = new aws.Lambda();
  const params = {
    Action: 'lambda:InvokeFunction',
    FunctionName: process.env.AWS_LAMBDA_ARN,
    Principal: 's3.amazonaws.com',
    SourceAccount: process.env.AWS_ACCOUNT_ID,
    StatementId: `example-S3-permission`,
  };

  lambda.addPermission(params, function (err, data) {
    if (err) {
      console.log(err);
    } else {
      console.log(data);
    }
  });
}
If it is still useful for someone, this is how I add the permission to the Lambda function using Java:
AWSLambda client = AWSLambdaClientBuilder.standard().withRegion(clientRegion).build();

AddPermissionRequest requestLambda = new AddPermissionRequest()
        .withFunctionName("XXXXX")
        .withStatementId("XXXXX")
        .withAction("lambda:InvokeFunction")
        .withPrincipal("s3.amazonaws.com")
        .withSourceArn("arn:aws:s3:::XXXXX")
        .withSourceAccount("XXXXXX");

client.addPermission(requestLambda);
Please check
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/lambda/AWSLambda.html#addPermission-com.amazonaws.services.lambda.model.AddPermissionRequest-
The Console is another way to allow s3 to invoke a lambda:
Note
When you add a trigger to your function with the Lambda console, the
console updates the function's resource-based policy to allow the
service to invoke it. To grant permissions to other accounts or
services that aren't available in the Lambda console, use the AWS CLI.
So you just need to add and configure an S3 trigger for your Lambda from the AWS console.
https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html
For me, it was Lambda expecting permission over the entire bucket and not over the bucket and keys.
It may be helpful to look here: AWS Lambda : creating the trigger
There's a kind of unclear error when you have conflicting events in a bucket. You need to purge the other events to create a new one.
You have to add an S3 resource-based policy on the Lambda too. From your Lambda, go to Configuration > Permissions > Resource-based policy.
Explained in more detail here
If the accepted answer by @davor.obilinovic (https://stackoverflow.com/a/47674337) still doesn't fix it, note that you'll get this error even if an existing event notification is now invalid.
Ex. Let's say you configured bucketA/prefix1 to trigger lambda1, bucketA/prefix2 to trigger lambda2. After a while you decide to delete lambda2 but don't remove the event notification for bucketA/prefix2.
Now if you try to configure bucketA/prefix3 to trigger lambda3, you'll see this error "Unable to validate the following destination configurations" even though you are trying to only add lambda3 and lambda3 is configured correctly as #davor.obilinovic answered.
Additional context:
The reason for this behavior is that AWS does not have an "add_event_notification" API. It only has a "put_bucket_notification" that takes the complete list of all the old event notifications plus the new one that we want to add. So each time we want to add an event notification, we have to send the entire list, and AWS validates the entire list. It would have been easier/clearer had they specified which "following destination" they were referring to in their error message.
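In practice this makes adding a notification a read-modify-write cycle. Here is a rough sketch with the Node SDK, in the spirit of the hypothetical add_event_notification mentioned above; the bucket name and Lambda ARN are placeholders:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Sketch: fetch the existing notification configuration, append the new
// Lambda configuration, and write the whole list back.
async function addEventNotification(bucket, lambdaArn) {
  const current = await s3.getBucketNotificationConfiguration({ Bucket: bucket }).promise();

  const lambdaConfigs = current.LambdaFunctionConfigurations || [];
  lambdaConfigs.push({
    LambdaFunctionArn: lambdaArn,
    Events: ['s3:ObjectCreated:*']
  });

  await s3.putBucketNotificationConfiguration({
    Bucket: bucket,
    NotificationConfiguration: {
      TopicConfigurations: current.TopicConfigurations || [],
      QueueConfigurations: current.QueueConfigurations || [],
      LambdaFunctionConfigurations: lambdaConfigs
    }
  }).promise();
}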
For me it was a totally different issue causing the "Unable to validate the following destination configurations" error.
Apparently, there was an old Lambda function on the same bucket that was dropped a while before. However, AWS does not always remove the event notification from the S3 bucket, even if the old Lambda and its trigger are long gone.
This causes the conflict and the strange error message.
Resolution -
Navigate to the S3 bucket => Properties => Event notifications,
and drop any old remaining events still defined.
After this, everything went back to normal and worked like a charm.
Good luck!
I faced the same com.amazonaws.services.s3.model.AmazonS3Exception: Unable to validate the following destination configurations error when I tried to execute putBucketNotificationConfiguration
Upon checking around, I found that every time you update the bucket notification configuration, AWS will do a test notification check on all the existing notification configurations. If any of the tests fails, say because you removed the destination Lambda or SNS topic of an older configuration, AWS will fail the entire bucket notification configuration request with the above exception.
To address this, either identify/fix the configuration that is failing the test, or remove all the existing configurations (if feasible) in the bucket using aws s3api put-bucket-notification-configuration --bucket=myBucketName --notification-configuration="{}" and then try updating the bucket configuration.
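The SDK equivalent of that clear-everything call looks roughly like this (a sketch; myBucketName is the placeholder bucket from the command above):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Sketch: wipe all existing notification configurations on the bucket,
// equivalent to the aws s3api command above with an empty configuration.
s3.putBucketNotificationConfiguration({
  Bucket: 'myBucketName',
  NotificationConfiguration: {}
}).promise()
  .then(() => console.log('Notification configuration cleared'))
  .catch(console.error);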
I'm facing one of these "AWS Lambda Node.js times out when trying to access DynamoDB" issues, but the symptoms appear different and the solutions I found don't solve this issue.
Timeout is set to 5min, memory is set to 128MB but doesn't exceed 30MB usage.
IAM policies for the role are:
AWSLambdaFullAccess
AmazonDynamoDBFullAccess
AWSLambdaVPCAccessExecutionRole
The default VPC has 7 security groups and includes the default security group with:
Inbound: All Traffic, All protocol, All port range,
Outbound: All Traffic, All protocol, All port range, 0.0.0.0/0
Here is the code:
var aws = require('aws-sdk');
exports.handler = function(event, context) {
  var dynamo = new aws.DynamoDB();
  dynamo.listTables(function(err, data) {
    if (err) {
      context.fail('Failed miserably:' + err.stack);
    } else {
      context.succeed('Function Finished! Data :' + data.TableNames);
    }
  });
};
And the Outcome:
START RequestId: 5d2a0294-fb6d-11e6-989a-edaa5cb75cba Version: $LATEST
END RequestId: 5d2a0294-fb6d-11e6-989a-edaa5cb75cba
REPORT RequestId: 5d2a0294-fb6d-11e6-989a-edaa5cb75cba Duration: 300000.91 ms Billed Duration: 300000 ms Memory Size: 128 MB Max Memory Used: 21 MB
2017-02-25T15:21:21.778Z 5d2a0294-fb6d-11e6-989a-edaa5cb75cba Task timed out after 300.00 seconds
The related node.js version issue solved here doesn't work for me and returns a "ReferenceError: https is not defined at exports.handler (/var/task/index.js:6:16)". Also AWS has deprecated version 0.10.
Here is the code with the https reference:
var aws = require('aws-sdk');
exports.handler = function(event, context) {
  var dynamo = new aws.DynamoDB({
    httpOptions: {
      agent: new https.Agent({
        rejectUnauthorized: true,
        secureProtocol: "TLSv1_method",
        ciphers: "ALL"
      })
    }
  });
  dynamo.listTables(function(err, data) {
    if (err) {
      context.fail('Failed miserably:' + err.stack);
    } else {
      context.succeed('Function Finished! Data :' + data.TableNames);
    }
  });
};
Outcome:
START RequestId: 6dfd3db7-fae0-11e6-ba81-a52f5fc3c3eb Version: $LATEST
2017-02-24T22:27:31.010Z 6dfd3db7-fae0-11e6-ba81-a52f5fc3c3eb ReferenceError: https is not defined
at exports.handler (/var/task/index.js:6:16)
END RequestId: 6dfd3db7-fae0-11e6-ba81-a52f5fc3c3eb
REPORT RequestId: 6dfd3db7-fae0-11e6-ba81-a52f5fc3c3eb Duration: 81.00 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 26 MB
RequestId: 6dfd3db7-fae0-11e6-ba81-a52f5fc3c3eb Process exited before completing request
With a timeout set to 5 min, I can't believe that AWS wouldn't be able to return the list of tables in the allocated timeframe, and permission issues typically appear in the logs.
Thanks for looking into this.
I guess your Lambda is in a private subnet. In this case, by default, your Lambda will not have outbound internet access. You need to create a NAT Gateway or NAT Instance to let VPC-protected resources access the outside Internet. The DynamoDB API is on the outside Internet from the VPC's point of view.
You no longer need to create a NAT gateway/instance
You can create a VPC Endpoint for DynamoDB, which will allow a Lambda in the private subnet to access DynamoDB. Create an endpoint in your VPC that aligns with the VPC/subnet setup you have for the Lambda and you will have no issues with access.
You can limit access to specific services or resources.
https://aws.amazon.com/blogs/aws/new-vpc-endpoints-for-dynamodb/
This can be done for other AWS services as well, such as S3.
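A rough sketch of creating that DynamoDB gateway endpoint with the Node SDK, including an endpoint policy that limits access to a single table (the VPC ID, route table ID, account ID, and table name are placeholders):

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-west-2' });

// Sketch: gateway endpoint for DynamoDB, restricted to one table via an
// endpoint policy. All IDs and names below are placeholders.
const endpointPolicy = {
  Version: '2012-10-17',
  Statement: [{
    Effect: 'Allow',
    Principal: '*',
    Action: 'dynamodb:*',
    Resource: 'arn:aws:dynamodb:us-west-2:123456789012:table/MyTable'
  }]
};

ec2.createVpcEndpoint({
  VpcId: 'vpc-xxxxxxxx',
  ServiceName: 'com.amazonaws.us-west-2.dynamodb',
  RouteTableIds: ['rtb-xxxxxxxx'],
  PolicyDocument: JSON.stringify(endpointPolicy)
}, (err, data) => {
  if (err) console.error(err);
  else console.log(data.VpcEndpoint.VpcEndpointId);
});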