NoSuchBucketPolicy error when trying to generate Bucket Policy - node.js

I'm trying out AWS S3 for the first time and I wrote the following function to generate a bucket policy.
// Load the AWS SDK for Node.js
var AWS = require('aws-sdk');
// Load configuration
AWS.config = new AWS.Config();
AWS.config.accessKeyId = sails.config.accessKeyId;
AWS.config.secretAccessKey = sails.config.secretAccessKey;
AWS.config.region = sails.config.region;
// Create S3 object
var s3 = new AWS.S3();
// Define the required parameters
var params = {
  Bucket: "bucket-name-here"
};
s3.getBucketPolicy(params, function(error, data) {
  if (error) {
    // An error occurred
    console.log("Error\n" + error);
    return res.json({
      message: "Error",
      'error': error
    });
  } else {
    // Successful
    console.log("Data\n" + data);
    return res.json({
      message: "Successful",
      'data': data
    });
  }
});
But the response is always NoSuchBucketPolicy: The bucket policy does not exist
I tried uploading a test file to the bucket and listing all buckets, and both worked as expected. What is wrong with the code?

Your code doesn't "generate" a bucket policy... it tries to fetch the existing policy of a bucket. Buckets don't have a policy until you create one, so this error is expected in that case.
Error Code: NoSuchBucketPolicy
Description: The specified bucket does not have a bucket policy.
HTTP Status Code: 404 Not Found
http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
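To make a later getBucketPolicy succeed, you first have to attach a policy with putBucketPolicy. A minimal sketch; the bucket name and the public-read statement here are illustrative assumptions, not part of the question:

```javascript
// Build a minimal public-read bucket policy document.
// The Sid, statement contents, and bucket name are illustrative.
function buildPublicReadPolicy(bucketName) {
  return JSON.stringify({
    Version: '2012-10-17',
    Statement: [{
      Sid: 'PublicReadGetObject',
      Effect: 'Allow',
      Principal: '*',
      Action: 's3:GetObject',
      Resource: 'arn:aws:s3:::' + bucketName + '/*'
    }]
  });
}

// Attaching it requires valid credentials and s3:PutBucketPolicy permission:
//   var AWS = require('aws-sdk');
//   var s3 = new AWS.S3();
//   s3.putBucketPolicy(
//     { Bucket: 'my-bucket', Policy: buildPublicReadPolicy('my-bucket') },
//     function(err, data) { if (err) console.log(err); }
//   );
```

Once a policy is attached this way, the getBucketPolicy call in the question should return it instead of NoSuchBucketPolicy.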

Related

What could be the error when we pass an invalid bucket name or a bucket that is not present on S3?

I was working with S3 and handling a corner case where the user supplies a bucket that is not present on S3. But when I made the request with an invalid (badly formatted) bucket name, I still got the error message 'Bucket does not exist'.
What I expected is: if we pass a correctly formatted name for a missing bucket, we should get 'Bucket does not exist', but for an incorrectly formatted bucket name it should give a distinct 'invalid bucket name' error message and a corresponding error code.
An invalid bucket name has to be handled separately by your own code. You can use the code below to check whether a bucket exists:
const AWS = require('aws-sdk');

const checkBucketExists = async bucket => {
  // here you can add the validation for bucketName
  // and based on the validation you can return the status code.
  const s3 = new AWS.S3();
  const options = {
    Bucket: bucket,
  };
  try {
    await s3.headBucket(options).promise();
    return true;
  } catch (error) {
    if (error.statusCode === 404) {
      return false;
    }
    throw error;
  }
};
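For the name validation mentioned in the comments above, a rough sketch of a DNS-compliance check based on the published naming rules. Treat the regex as an approximation, not the authoritative rule set:

```javascript
// Rough check against S3's DNS-compliant bucket naming rules:
// 3-63 chars; lowercase letters, digits, hyphens, and periods;
// each period-separated label must start and end with a letter or digit;
// the name must not look like an IP address.
function isValidBucketName(name) {
  if (typeof name !== 'string' || name.length < 3 || name.length > 63) return false;
  if (/^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$/.test(name)) return false; // IP-like
  return /^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)*$/.test(name);
}
```

With this in front of checkBucketExists, a badly formatted name can be rejected with its own error before any request is made.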

S3 how to find if object has pre-signed URL?

Learning S3 I know how to generate a presigned URL:
const aws = require('aws-sdk')
aws.config.update({
  accessKeyId: 'id-omitted',
  secretAccessKey: 'key-omitted'
})
const s3 = new aws.S3()
const myBucket = 'foo'
const myKey = 'bar.png'
const signedUrlExpireSeconds = 60 * 5
const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
})
console.log(`Presigned URL: ${url}`)
and from reading the documentation I know I can retrieve an object's metadata with headObject, but I've tried to find whether an object already has a presigned URL:
1st attempt:
let signedUrl = await s3.validSignedURL('getObject', params).promise()
console.log(`Signed URL: ${signedUrl}`)
2nd attempt:
await s3.getObject(params, (err, data) => {
  if (err) console.log(err)
  return data.Body.toString('utf-8')
})
3rd attempt:
let test = await s3.headObject(params).promise()
console.log(`${test}`)
and I'm coming up short. I know I could create a file or write to a log when a presigned URL is created, but I think that would be a hack. Is there a way in Node to check an object to see whether it has a presigned URL created for it? I'm not looking to do this in the dashboard; I'm looking for a way to do this solely in the terminal/script. Going through the tags and querying Google, I haven't had any luck.
Referenced:
S3 pre-signed url - check if url was used?
Creating Pre-Signed URLs for Amazon S3 Buckets
GET Object
Pre-Signing AWS S3 URLs
How to check if an prefix / key exists on S3 before creating a presigned URL?
How to get response from S3 getObject in Node.js?
AWS signed url if the object exists using promises
Is there a way in Node I can check an object to see if it has a presigned URL created for it?
Short answer: No
Long answer: There is no information about signed URLs stored on the object, nor any list of created URLs. You can even create a signed URL completely on the client side without invoking any service.
That question is interesting. I tried to find whether the presigned URL is stored anywhere, but found nothing.
What gusto2 says is true: you can create a presigned URL without calling any AWS service, which is exactly what aws-sdk does.
Check this file: https://github.com/aws/aws-sdk-js/blob/cc29728c1c4178969ebabe3bbe6b6f3159436394/ts/cloudfront.ts
Then you can see how a presigned URL is generated:
var getRtmpUrl = function (rtmpUrl) {
  var parsed = url.parse(rtmpUrl);
  return parsed.path.replace(/^\//, '') + (parsed.hash || '');
};

var getResource = function (url) {
  switch (determineScheme(url)) {
    case 'http':
    case 'https':
      return url;
    case 'rtmp':
      return getRtmpUrl(url);
    default:
      throw new Error('Invalid URI scheme. Scheme must be one of'
        + ' http, https, or rtmp');
  }
};

getSignedUrl: function (options, cb) {
  try {
    var resource = getResource(options.url);
  } catch (err) {
    return handleError(err, cb);
  }
  var parsedUrl = url.parse(options.url, true),
      signatureHash = Object.prototype.hasOwnProperty.call(options, 'policy')
        ? signWithCustomPolicy(options.policy, this.keyPairId, this.privateKey)
        : signWithCannedPolicy(resource, options.expires, this.keyPairId, this.privateKey);
  parsedUrl.search = null;
  for (var key in signatureHash) {
    if (Object.prototype.hasOwnProperty.call(signatureHash, key)) {
      parsedUrl.query[key] = signatureHash[key];
    }
  }
  try {
    var signedUrl = determineScheme(options.url) === 'rtmp'
      ? getRtmpUrl(url.format(parsedUrl))
      : url.format(parsedUrl);
  } catch (err) {
    return handleError(err, cb);
  }
  return handleSuccess(signedUrl, cb);
}

Receiving invalid image format error with NodeJS Rekognition api call

I'm trying to make a call to the Amazon Rekognition service with NodeJS. The call goes through, but I receive an InvalidImageFormatException error which says:
Invalid Input, input image shouldn't be empty.
I'm basing my code on an S3 example:
var AWS = require('aws-sdk');
var rekognition = new AWS.Rekognition({region: 'us-east-1'});

// Create a bucket and upload something into it
var params = {
  Image: {
    S3Object: {
      Bucket: "MY-BUCKET-NAME",
      Name: "coffee.jpg"
    }
  },
  MaxLabels: 10,
  MinConfidence: 70.0
};
var request = rekognition.detectLabels(params, function(err, data) {
  if (err) {
    console.log(err, err.stack); // an error occurred
  } else {
    console.log(data); // successful response
  }
});
The documentation states that the service only accepts PNG or JPEG images but I can't figure out what is going on.
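Since the documentation only promises PNG and JPEG support, one cheap local sanity check (a sketch, not an official Rekognition validation) is to inspect the image's magic bytes before sending it, e.g. on the buffer you would otherwise pass as Image.Bytes, or on the object downloaded from S3:

```javascript
// Check a buffer's magic bytes for the two formats Rekognition accepts.
// JPEG files start with FF D8 FF; PNG files start with 89 50 4E 47.
function looksLikeSupportedImage(buf) {
  if (!Buffer.isBuffer(buf) || buf.length === 0) return false; // empty input
  if (buf[0] === 0xFF && buf[1] === 0xD8 && buf[2] === 0xFF) return true; // JPEG
  if (buf[0] === 0x89 && buf[1] === 0x50 && buf[2] === 0x4E && buf[3] === 0x47) return true; // PNG
  return false;
}
```

A zero-length buffer failing this check would also match the "input image shouldn't be empty" wording of the error, so it is worth confirming the S3 object actually has content.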

AWS Lambda function write to S3

I have a Node 4.3 Lambda function in AWS. I want to be able to write a text file to S3 and have read many tutorials about how to integrate with S3. However, all of them are about how to call Lambda functions after writing to S3.
How can I create a text file in S3 from Lambda using Node? Is this possible? Amazon's documentation doesn't seem to cover it.
Yes it is absolutely possible!
var AWS = require('aws-sdk');

function putObjectToS3(bucket, key, data) {
  var s3 = new AWS.S3();
  var params = {
    Bucket: bucket,
    Key: key,
    Body: data
  };
  s3.putObject(params, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else console.log(data);               // successful response
  });
}
Make sure that you give your Lambda function the required write permissions to the target S3 bucket/key path by selecting or updating the IAM role your Lambda executes under.
IAM Statement to add:
{
  "Sid": "Stmt1468366974000",
  "Effect": "Allow",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::my-bucket-name-goes-here/optional-path-before-allow/*"
  ]
}
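If you prefer least privilege over s3:*, a narrower variant of the statement above (the Sid and bucket path are placeholders) grants only the write access the function actually needs:

```json
{
  "Sid": "AllowPutOnly",
  "Effect": "Allow",
  "Action": "s3:PutObject",
  "Resource": [
    "arn:aws:s3:::my-bucket-name-goes-here/optional-path-before-allow/*"
  ]
}
```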
Further reading:
AWS JavaScript SDK
The specific "Put Object" details
After a long time of silent failures ('Task timed out after X') without any useful error message, I went back to the beginning, to Amazon's default template example, and that worked!
> Lambda > Functions > Create function > Use a blueprint > filter: s3.
Here is my tweaked version of Amazon's example:
const aws = require('aws-sdk');
const s3 = new aws.S3({ apiVersion: '2006-03-01' });

async function uploadFileOnS3(fileData, fileName) {
  const params = {
    Bucket: "The-bucket-name-you-want-to-save-the-file-to",
    Key: fileName,
    Body: JSON.stringify(fileData),
  };
  try {
    const response = await s3.upload(params).promise();
    console.log('Response: ', response);
    return response;
  } catch (err) {
    console.log(err);
  }
}
IAM statement for serverless.com - write to S3 to a specific bucket:
service: YOURSERVICENAME
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: eu-west-1
  timeout: 60
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - s3:PutObject
      Resource: "**BUCKETARN**/*"
    - Effect: "Deny"
      Action:
        - s3:DeleteObject
      Resource: "arn:aws:s3:::**BUCKETARN**/*"
You can upload a file to S3 using the aws-sdk.
If you are using an IAM user, you have to provide the access key and secret key, and make sure you have given the IAM user the necessary permissions.
var AWS = require('aws-sdk');
AWS.config.update({accessKeyId: "ACCESS_KEY", secretAccessKey: 'SECRET_KEY'});
var s3bucket = new AWS.S3({params: {Bucket: 'BUCKET_NAME'}});

function uploadFileOnS3(fileName, fileData) {
  var params = {
    Key: fileName,
    Body: fileData,
  };
  s3bucket.upload(params, function (err, res) {
    if (err)
      console.log("Error in uploading file on s3 due to " + err);
    else
      console.log("File successfully uploaded.");
  });
}
Here I temporarily hard-coded AWS access and secret key for testing purposes. For best practices refer to the documentation.
One more option (export the file as multipart form data):
React > Node.js (AWS Lambda) > S3 Bucket
https://medium.com/#mike_just_mike/aws-lambda-node-js-export-file-to-s3-4b35c400f484

AWS node.js not creating S3 bucket?

I'm trying to use the basic tutorial to create an S3 bucket as follows
var AWS = require('aws-sdk');
AWS.config.loadFromPath('./myawsconfig.json');
AWS.config.update({region: 'eu-west-1'});
var s3 = new AWS.S3();

s3.client.createBucket({Bucket: 'pBucket'}, function() {
  var data = {Bucket: 'pBucket', Key: 'myKey', Body: 'Hello!'};
  s3.client.putObject(data, function(err, data) {
    if (err) {
      console.log("Error uploading data: ", err);
    } else {
      console.log("Successfully uploaded data to myBucket/myKey");
    }
  });
});
But I'm receiving the following error
node createbucket.js
Error uploading data: { [NoSuchBucket: The specified bucket does not exist]
  message: 'The specified bucket does not exist',
  code: 'NoSuchBucket',
  name: 'NoSuchBucket',
  statusCode: 404,
  retryable: false }
I just ran into this problem; apparently the Node.js tutorial code doesn't work as published. I got an error saying the object doesn't have a createBucket method.
This worked:
var AWS = require('aws-sdk');
AWS.config.loadFromPath('./credentials.json');

// Set your region for future requests.
AWS.config.update({region: 'us-east-1'});

// Create a bucket and put something in it.
var s3 = new AWS.S3();
s3.client.createBucket({Bucket: 'hackathon-test'}, function() {
  var data = {Bucket: 'hackathon-test', Key: 'myKey', Body: 'Hello!'};
  s3.client.putObject(data, function(err, data) {
    if (err) {
      console.log("Error uploading data: ", err);
    } else {
      console.log("Successfully uploaded data to myBucket/myKey");
    }
  });
});
I had this issue and discovered that my API user didn't have permission to create the bucket.
Slightly more thorough error checking revealed this...
s3.client.createBucket({Bucket: 'someBucket'}, function(err) {
  if (err) {
    console.log("Error creating bucket: ", err);
  } else {
    console.log("Successfully created bucket 'someBucket'");
  }
  // ...
According to the AWS S3 bucket name restrictions, your bucket name shouldn't contain any uppercase letters, so 'pBucket' is invalid.
http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
The rules for DNS-compliant bucket names are:
Bucket names must be at least 3 and no more than 63 characters long.
Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number.
Bucket names must not be formatted as an IP address (e.g., 192.168.5.4).
When using virtual hosted–style buckets with SSL, the SSL wildcard certificate only matches buckets that do not contain periods. To work around this, use HTTP or write your own certificate verification logic.
A couple of pointers that I missed and that someone may find useful:
If you set the region as part of the S3 object, var s3 = new AWS.S3({region: 'us-west-1'});, then the call will fail (in my experience).
You can therefore set the region via either
a) AWS.config.update({ region: 'eu-west-1' });
b) as part of the params on createBucket:
s3.createBucket({
  Bucket: bucketName,
  CreateBucketConfiguration: {
    LocationConstraint: "eu-west-1"
  }
}, function () {
Also, watch out for caps or underscores in the bucket name, as that took an hour of my life too (names must be DNS-compliant only).
