Getting "NoSuchKey" error when creating S3 signedUrl with NodeJS - node.js

I'm trying to access an S3 bucket with nodejs using aws-sdk.
When I call the s3.getSignedUrl method and use the url it provides, I get a "NoSuchKey" error in the url.
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>{MY_BUCKET_NAME}/{REQUESTED_FILENAME}</Key>
My theory is that the request path I'm passing is wrong. Comparing my request:
{BUCKET_NAME}.s3.{BUCKET_REGION}.amazonaws.com/{BUCKET_NAME}/{KEY}
With the url created from the AWS console:
{BUCKET_NAME}.s3.{BUCKET_REGION}.amazonaws.com/{KEY}
Why is aws-sdk adding the "{BUCKET_NAME}" at the end?
NodeJS code:
// s3 instance setup
const s3 = new AWS.S3({
  region: BUCKET_REGION,
  endpoint: BUCKET_ENDPOINT, // {MY_BUCKET_NAME}.s3.{REGION}.amazonaws.com
  s3ForcePathStyle: true,
  signatureVersion: "v4",
});

const getSignedUrlFromS3 = async (filename) => {
  const s3Params = {
    Bucket: BUCKET_NAME,
    Key: filename,
    Expires: 60,
  };
  const signedUrl = await s3.getSignedUrl("getObject", s3Params);
  return { name: filename, url: signedUrl };
};

The SDK adds the bucket name in the path because you specifically ask it to:
s3ForcePathStyle: true,
However, according to your comment, you use the bucket name in the endpoint already ("I have my endpoint as {MY_BUCKET_NAME}.s3.{REGION}.amazonaws.com") so your endpoint isn't meant to use path style...
Path style means using s3.amazonaws.com/bucket/key instead of bucket.s3.amazonaws.com/key. Forcing path style with an endpoint that already contains the bucket name ends up with bucket.s3.amazonaws.com/bucket/key, which S3 interprets as the key being bucket/key instead of key.
The fix should be to disable s3ForcePathStyle and instead to set s3BucketEndpoint: true because you specified an endpoint for an individual bucket.
However, in my opinion it's unnecessary to specify an endpoint in the first place - just let the SDK handle these things for you! I'd remove both s3ForcePathStyle and endpoint (then s3BucketEndpoint isn't needed either).
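For illustration, a minimal sketch of that simplified setup (it reuses BUCKET_REGION and BUCKET_NAME from the question and assumes credentials come from the environment):

// Sketch: let the SDK build the virtual-hosted URL itself, with no endpoint and no path style.
const AWS = require("aws-sdk");

const s3 = new AWS.S3({
  region: BUCKET_REGION,
  signatureVersion: "v4",
});

const getSignedUrlFromS3 = (filename) => {
  const s3Params = {
    Bucket: BUCKET_NAME,
    Key: filename,
    Expires: 60,
  };
  // getSignedUrl returns the URL synchronously once credentials are resolved
  const signedUrl = s3.getSignedUrl("getObject", s3Params);
  return { name: filename, url: signedUrl };
};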

Related

Deleting file from Digital Ocean Space with Next (NodeJs)

I encountered weird behavior while trying to delete a file from an S3 bucket on DigitalOcean Spaces. I use aws-sdk and follow the official example. However, the method doesn't delete the file, no error occurs, and the returned data object (which should contain the key of the deleted item) is empty. Below is the code:
import AWS from "aws-sdk";

export default async function handler(req, res) {
  const key = req.query.key;
  const spacesEndpoint = new AWS.Endpoint("ams3.digitaloceanspaces.com");
  const s3 = new AWS.S3({
    endpoint: spacesEndpoint,
    secretAccessKey: process.env.AWS_SECRET_KEY,
    accessKeyId: process.env.AWS_ACCESS_KEY,
  });
  const params = {
    Bucket: process.env.BUCKET_NAME,
    Key: key,
  };
  s3.deleteObject(params, function (error, data) {
    if (error) {
      res.status({ error: "Something went wrong" });
    }
    console.log("Successfully deleted file", data);
  });
}
The environment variables are correct, and another upload method (not shown above) works just fine.
The key passed into the params has the format 'folder/file.ext' and the object definitely exists.
All the callback logs is: 'Successfully deleted file {}'
Any ideas what is happening here?
Please make sure you don't have any spaces in your key (filename); otherwise, it won't work. I was facing a similar problem and found there was a space in the filename I was trying to delete.
For example, if we upload a file named "new file.jpeg", DigitalOcean Spaces stores it as "new%20file.jpeg", so when you later try to delete "new file.jpeg" it won't find any file with that name. That's why we need to trim any whitespace or spaces between the words in the file name.
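If in doubt, a small sketch like this inside the async handler (the "folder/" prefix and picking the first listed key are only illustrative) shows the exact key the Space has stored before you try to delete it:

// Sketch: list the keys under the folder to see exactly how the object is stored
// (including any encoded spaces), then delete using that exact key.
const listed = await s3
  .listObjectsV2({ Bucket: process.env.BUCKET_NAME, Prefix: "folder/" })
  .promise();
console.log(listed.Contents.map((o) => o.Key)); // compare these with the key you pass in

const exactKey = listed.Contents[0].Key; // illustrative: pick the key you actually want removed
await s3.deleteObject({ Bucket: process.env.BUCKET_NAME, Key: exactKey }).promise();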
Hope it helps.

Unable to fetch file from S3 in node js

I have stored video files in an S3 bucket and now I want to serve them to clients through an API. Here is my code for it:
app.get('/vid', async (req, res) => {
  AWS.config.update({
    accessKeyId: config.awsAccessKey,
    secretAccessKey: config.awsSecretKey,
    region: "ap-south-1"
  });
  let s3 = new AWS.S3();
  var p = req.query.p;
  res.attachment(p);
  var options = {
    Bucket: BUCKET_NAME,
    Key: p,
  };
  console.log(p, "name");
  try {
    await s3.getObject(options).createReadStream().pipe(res);
  } catch (e) {
    console.log(e);
  }
});
This is the output I am getting even though the file exists in the S3 bucket -
vid_kdc5stoqnrIjEkL9M.mp4 name
NoSuchKey: The specified key does not exist.
This is likely caused by invalid parameters being passed into the function.
To check for invalid parameters you should double-check the strings that are being passed in. For the object, check the following:
Check the value of p; ensure it matches the full object key exactly.
Validate that the correct BUCKET_NAME is being used.
Make sure there are no trailing characters (such as /).
Perform any necessary decoding before passing parameters in.
If in doubt, use logging to output the exact value; also try testing the function with hard-coded values to validate that you can actually retrieve the objects.
For more information take a look at the How can I troubleshoot the 404 "NoSuchKey" error from Amazon S3? page.
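For example, a hedged sketch of that kind of check inside the route handler (the decodeURIComponent call and the logging are only illustrative):

// Sketch: decode and log the exact key before calling getObject, so it can be
// compared character-for-character with the key shown in the S3 console.
const rawKey = req.query.p;
const key = decodeURIComponent(rawKey || "");
console.log("raw:", JSON.stringify(rawKey), "decoded:", JSON.stringify(key));

var options = {
  Bucket: BUCKET_NAME, // confirm this matches the bucket shown in the console
  Key: key,            // full object key, no leading slash
};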
Having certain characters in the bucket name leads to this error.
In your case, there is an underscore. Try renaming the file.
Also refer to this
S3 Bucket Naming Requirements Docs
Example from Docs:
The following example bucket names are not valid:
aws_example_bucket (contains underscores)
AwsExampleBucket (contains uppercase letters)
aws-example-bucket- (ends with a hyphen)
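As a rough illustration (not an exhaustive validation of every documented rule), a quick check like this catches the obvious offenders:

// Sketch: lowercase letters, digits, hyphens and dots only, 3-63 characters,
// starting and ending with a letter or digit.
const looksLikeValidBucketName = (name) =>
  /^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$/.test(name);

console.log(looksLikeValidBucketName("aws_example_bucket"));  // false (underscores)
console.log(looksLikeValidBucketName("AwsExampleBucket"));    // false (uppercase letters)
console.log(looksLikeValidBucketName("aws-example-bucket-")); // false (ends with a hyphen)
console.log(looksLikeValidBucketName("aws-example-bucket"));  // true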

aws-sdk s3.getObject with redirect

In short, I'm trying to resize an image through a redirect, AWS Lambda, and the aws-sdk.
Following along with the tutorial on resizing an image on the fly with AWS, AWS - resize-images-on-the-fly, I've managed to make everything work according to the walkthrough; however, my question is related to making the call to the bucket.
Currently the only way I can get this to work is by calling
http://MY_BUCKET_WEBSITE_HOSTNAME/25x25/blue_marble.jpg.
If the image isn't available, the request is redirected, image resized, and then placed back in the bucket.
What I would like to do, is access the bucket in the aws-sdk through the s3.getObject() call, rather than to that direct link.
As of now, I can only access the images that are currently in the bucket, so the redirect is never happening.
My thought was that the request wasn't being sent to the correct endpoint, so based on what I was able to find online, I changed the way the SDK client is created to this -
const s3 = new aws.S3({
  accessKeyId: "myAccessKeyId",
  secretAccessKey: "mySecretAccessKey",
  region: "us-west-2",
  endpoint: '<MYBUCKET>.s3-website-us-west-2.amazonaws.com',
  s3BucketEndpoint: true,
  sslEnabled: false,
  signatureVersion: 'v4'
});

const params = {
  Bucket: 'MY_BUCKET',
  Key: '85x85/blue_marble.jpg'
};

s3.getObject(params, (error, data) => data);
From what I can tell the endpoints in the request look correct.
When I visit the endpoints directly in the browser, everything works as expected.
But when using the SDK, only images that already exist are returned. For missing images there is no redirect, no data comes back, and I get the error:
XMLParserError: Non-whitespace before first tag.
Not sure if it's possible to do this with s3.getObject(); it seems like it may be, but I can't figure it out.
Use headObject to check whether the object exists. If it doesn't, you can call your API to do the resize and then retry the get after the resize.
var params = {
  Bucket: config.get('s3bucket'),
  Key: path
};

s3.headObject(params, function (err, metadata) {
  if (err && err.code === 'NotFound') {
    // Call your resize API here. Once your resize API returns a success, you can get the object URL.
  } else {
    s3.getSignedUrl('getObject', params, callback); // Use this secure URL to access the object.
  }
});

AWS S3 signed URLs with aws-sdk fails with "AuthorizationQueryParametersError"

I am trying to create a pre-signed URL for a private file test.png on S3.
My code:
var AWS = require('aws-sdk');
AWS.config.region = 'eu-central-1';
const s3 = new AWS.S3();
const key = 'folder/test.png';
const bucket = 'mybucket';
const expiresIn = 2000;
const params = {
  Bucket: bucket,
  Key: key,
  Expires: expiresIn,
};
console.log('params: ', params);
console.log('region: ', AWS.config.region);
var url = s3.getSignedUrl('getObject', params);
console.log('url sync: ', url);
s3.getSignedUrl('getObject', params, function (err, urlX) {
  console.log("url async: ", urlX);
});
which returns a URL in the console.
When I try to access it, it shows
<Error>
<Code>AuthorizationQueryParametersError</Code>
<Message>
Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.
</Message>
<RequestId>97377E063D0B1D09</RequestId>
<HostId>
6GE7EdqUvCEJis+fPoWR0Ffp2kN9Mlql4gs+qB4uY3hA4qR2wYrImkZfv05xy4XVjsZnRDVN63s=
</HostId>
</Error>
I am totally stuck and would really appreciate some idea on how to solve it.
I tested your code; I only made modifications to the key and bucket, and it works. May I know which aws-sdk version and which Node.js version you are using? My test was executed on Node.js 8.1.2 and aws-sdk#2.77.0.
I was able to reproduce your error when I executed curl.
curl url (wrong) ->
<Error><Code>AuthorizationQueryParametersError</Code><Message>Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.</Message>
curl "url" (worked)
If you curl without the double quotes, each ampersand in the query string is interpreted by the shell as "run in the background", so everything after the first & is stripped from the request.
Alternatively, you could try pasting the generated link in a browser.
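If you want to take the shell out of the picture entirely, here is a minimal sketch that requests the generated url from Node itself (reusing the url variable from the question's code):

// Sketch: a 200 status means the signature was accepted; an XML error body means it wasn't.
const https = require('https');

https.get(url, (res) => {
  console.log('status:', res.statusCode);
  res.resume(); // discard the body; only the status matters here
});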
Hope this helps.

S3.getSignedUrl to accept multiple content-type

I'm using the react-s3-uploader node package, which takes in a signingUrl for obtaining a signed URL for storing an object into S3.
Currently I've configured a lambda function (with an API Gateway endpoint) to generate this signed URL. After some tinkering, I've got it to work, but noticed that I have to define the content-type in my lambda function, which looks like this:
var AWS = require('aws-sdk');
const S3 = new AWS.S3();
AWS.config.update({
  region: 'us-west-2'
});

exports.handler = function (event, context) {
  console.log('context, ', context);
  console.log('event, ', event);
  var params = {
    Bucket: 'video-bucket',
    Key: 'videoname.mp4',
    Expires: 120,
    ACL: 'public-read',
    ContentType: 'video/mp4'
  };
  S3.getSignedUrl('putObject', params, function (err, url) {
    console.log('The URL is', url);
    context.done(null, {signedUrl: url});
  });
};
The issue is that I want this signed url to be able to accept multiple types of video files, and I've tried setting ContentType to video/*, which doesn't work. Also, because this lambda endpoint isn't what actually takes the upload, I can't pass in the filetype to this function beforehand.
You'll have to find a way to discover the file type and pass it to the Lambda function as an argument. There isn't an alternative, here, with a pre-signed PUT.
The request signing process for PUT has no provision for wildcards or multiple/alternative values.
In case anyone else is looking for a working answer: I eventually found out that react-s3-uploader does pass the content type and file name over to the signing URL (except I had forgotten to pass the query string through in API Gateway earlier), so I was able to extract it as event.params.querystring.contentType in Lambda.
Then in the params, I simply set {ContentType: event.params.querystring.contentType} and now it accepts all file formats.
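For reference, a hedged sketch of that approach (it assumes the query string is mapped through API Gateway and that react-s3-uploader sends contentType and objectName parameters, as described above):

var AWS = require('aws-sdk');
const S3 = new AWS.S3();
AWS.config.update({ region: 'us-west-2' });

exports.handler = function (event, context) {
  // Illustrative: parameter names follow the event.params.querystring shape mentioned above.
  const contentType = event.params.querystring.contentType;
  const objectName = event.params.querystring.objectName;

  var params = {
    Bucket: 'video-bucket',
    Key: objectName,
    Expires: 120,
    ACL: 'public-read',
    ContentType: contentType // whatever type the uploader reported
  };

  S3.getSignedUrl('putObject', params, function (err, url) {
    if (err) return context.done(err);
    context.done(null, { signedUrl: url });
  });
};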
