In short, I'm trying to resize an image through a redirect, AWS Lambda, and the aws-sdk.
Following the AWS tutorial on resizing images on the fly (AWS - resize-images-on-the-fly), I've managed to make everything work according to the walkthrough; however, my question is about making the call to the bucket.
Currently the only way I can get this to work is by calling:
http://MY_BUCKET_WEBSITE_HOSTNAME/25x25/blue_marble.jpg
If the image isn't available, the request is redirected, the image is resized, and the result is placed back in the bucket.
What I would like to do is access the bucket through the aws-sdk's s3.getObject() call, rather than through that direct link.
As of now, I can only access the images that are currently in the bucket, so the redirect is never happening.
My thought was that the request wasn't being sent to the correct endpoint, so based on what I found online, I changed the way the SDK client is created to this:
const s3 = new aws.S3({
  accessKeyId: "myAccessKeyId",
  secretAccessKey: "mySecretAccessKey",
  region: "us-west-2",
  endpoint: '<MYBUCKET>.s3-website-us-west-2.amazonaws.com',
  s3BucketEndpoint: true,
  sslEnabled: false,
  signatureVersion: 'v4'
});

const params = {
  Bucket: 'MY_BUCKET',
  Key: '85x85/blue_marble.jpg'
};

s3.getObject(params, (error, data) => data);
From what I can tell the endpoints in the request look correct.
When I visit the endpoints directly in the browser, everything works as expected.
But when using the SDK, only images that already exist in the bucket are returned. There is no redirect, no data is returned, and I get this error:
XMLParserError: Non-whitespace before first tag.
I'm not sure if it's possible to do this with s3.getObject(); it seems like it should be, but I can't figure it out.
Use headObject to check whether the object exists. If it doesn't, call your API to do the resize and then retry the get after the resize.
var params = {
  Bucket: config.get('s3bucket'),
  Key: path
};

s3.headObject(params, function (err, metadata) {
  if (err && err.code === 'NotFound') {
    // Call your resize API here. Once your resize API returns a success, you can get the object URL.
  } else {
    s3.getSignedUrl('getObject', params, callback); // Use this secure URL to access the object.
  }
});
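For completeness, here is a minimal sketch of that check-then-resize flow using the SDK's .promise() interface. resizeImage() is a hypothetical helper standing in for whatever resize API you call; it is not part of the aws-sdk.

// Sketch only: assumes a hypothetical resizeImage(key) helper that triggers your resize API.
async function getOrCreateResizedUrl(s3, bucket, key) {
  const params = { Bucket: bucket, Key: key };
  try {
    // Check whether the resized object already exists.
    await s3.headObject(params).promise();
  } catch (err) {
    if (err.code !== 'NotFound') throw err;
    // Not there yet: trigger the resize, then fall through to sign the URL.
    await resizeImage(key); // hypothetical call to your resize API
  }
  // Either way, return a signed URL for the (now existing) object.
  return s3.getSignedUrl('getObject', params);
}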
I encountered weird behavior while trying to delete a file from an S3 bucket on DigitalOcean Spaces. I use aws-sdk and I follow the official example. However, the method doesn't delete the file, no error occurs, and the returned data object (which should contain the key of the deleted item) is empty. Here is the code:
import AWS from "aws-sdk";

export default async function handler(req, res) {
  const key = req.query.key;
  const spacesEndpoint = new AWS.Endpoint("ams3.digitaloceanspaces.com");
  const s3 = new AWS.S3({
    endpoint: spacesEndpoint,
    secretAccessKey: process.env.AWS_SECRET_KEY,
    accessKeyId: process.env.AWS_ACCESS_KEY,
  });

  const params = {
    Bucket: process.env.BUCKET_NAME,
    Key: key,
  };

  s3.deleteObject(params, function (error, data) {
    if (error) {
      return res.status(500).json({ error: "Something went wrong" });
    }
    console.log("Successfully deleted file", data);
    res.status(200).json(data);
  });
}
The environment variables are correct; another upload method (not shown above) works just fine.
The key passed in the params has the format 'folder/file.ext' and the object definitely exists.
All the callback logs is: 'Successfully deleted file {}'
Any ideas what is happening here?
Please make sure you don't have any spaces in your key (filename); otherwise, it won't work. I was facing a similar problem when I saw there was a space in the filename that I was trying to delete.
For example, if we upload a file named "new file.jpeg", DigitalOcean Spaces stores it as "new%20file.jpeg", so when you later try to delete "new file.jpeg" no object with that name is found. That's why we need to trim any whitespace or spaces between the words in the file name.
Hope it helps.
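As a minimal sketch, one way to do this is to normalise the key once and reuse it for both upload and delete. sanitizeKey is just a hypothetical helper name, not part of any library.

// Hypothetical helper: strip/replace whitespace so the stored key matches the key you delete.
function sanitizeKey(filename) {
  return filename.trim().replace(/\s+/g, "-"); // "new file.jpeg" -> "new-file.jpeg"
}

// Use the same sanitized key for both putObject and deleteObject.
const key = "folder/" + sanitizeKey("new file.jpeg");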
I'm trying to access an S3 bucket with nodejs using aws-sdk.
When I call the s3.getSignedUrl method and open the URL it returns, I get a "NoSuchKey" error:
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>{MY_BUCKET_NAME}/{REQUESTED_FILENAME}</Key>
My theory is that the request path I'm passing is wrong. Comparing my request:
{BUCKET_NAME}.s3.{BUCKET_REGION}.amazonaws.com/{BUCKET_NAME}/{KEY}
With the url created from the AWS console:
{BUCKET_NAME}.s3.{BUCKET_REGION}.amazonaws.com/{KEY}
Why is aws-sdk adding the "{BUCKET_NAME}" at the end?
NodeJS code:
// s3 instance setup
const s3 = new AWS.S3({
  region: BUCKET_REGION,
  endpoint: BUCKET_ENDPOINT, // {MY_BUCKET_NAME}.s3.{REGION}.amazonaws.com
  s3ForcePathStyle: true,
  signatureVersion: "v4",
});

const getSignedUrlFromS3 = async (filename) => {
  const s3Params = {
    Bucket: BUCKET_NAME,
    Key: filename,
    Expires: 60,
  };
  const signedUrl = await s3.getSignedUrl("getObject", s3Params);
  return { name: filename, url: signedUrl };
};
The SDK adds the bucket name in the path because you specifically ask it to:
s3ForcePathStyle: true,
However, according to your comment, you use the bucket name in the endpoint already ("I have my endpoint as {MY_BUCKET_NAME}.s3.{REGION}.amazonaws.com") so your endpoint isn't meant to use path style...
Path style means using s3.amazonaws.com/bucket/key instead of bucket.s3.amazonaws.com/key. Forcing path style with an endpoint that actually already contains the bucket name ends up with bucket.s3.amazonaws.com/bucket/key which is interpreted as key bucket/key instead of key.
The fix should be to disable s3ForcePathStyle and instead to set s3BucketEndpoint: true because you specified an endpoint for an individual bucket.
However, in my opinion it's unnecessary to specify an endpoint in the first place - just let the SDK handle these things for you! I'd remove both s3ForcePathStyle and endpoint (then s3BucketEndpoint isn't needed either).
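For illustration, a minimal sketch of the simplified setup this answer suggests, with no custom endpoint and no path-style forcing, letting the SDK build the bucket hostname itself:

// Simplified client: no endpoint, no s3ForcePathStyle, no s3BucketEndpoint.
const s3 = new AWS.S3({
  region: BUCKET_REGION,
  signatureVersion: "v4",
});

// getSignedUrl now produces {BUCKET_NAME}.s3.{REGION}.amazonaws.com/{KEY}
const url = s3.getSignedUrl("getObject", {
  Bucket: BUCKET_NAME,
  Key: filename,
  Expires: 60,
});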
I'm using Amazon's Node.js aws-sdk to create expiring pre-signed S3 URLs for digital product downloads, and struggling with the result. I've got the SDK configured with my keys successfully, and I've tried both a synchronous approach (not shown) and an async approach (shown) at collecting signed urls. The calls work, I never hit any errors and I am successfully returned signed URLs. Here's the twist: the URLs I get back don't work.
const promises = skus.map(function(sku) {
  const key = productKeys[sku];
  return new Promise((resolve, reject) => {
    s3.getSignedUrl('getObject', {
      Bucket: 'my-products',
      Key: key,
      Expires: 60 * 60 * 24, // Time in seconds; 24 hours
    }, function(err, res) {
      if (err) {
        reject(err);
      } else {
        resolve({
          text: productNames[sku],
          url: res,
        });
      }
    });
  });
});
I had assumed it was an error with the keys I had allocated, which I had assigned to an IAM User who has full S3 bucket access. So, I tried using a root level keypair and I get the same access denied result. Interestingly: the URLs I get back take the form https://my-bucket.s3.amazonaws.com/Path/To/My/Product.zip?AWSAccessKeyId=blahblahMyKey&Expires=43914919&Signature=blahblahmysig&x-amz-security-token=hugelongstring. I've not seen this x-amz-security-token thing before, and if I try just removing that query param, I get Access Denied but for a different reason: the AWSAccessKeyId is one that is not associated with any of my accounts. It's not the one I've configured the SDK with and it's not one I've allocated on my S3 account. No idea where it comes from, and no idea how that relates to the x-amz-security-token param.
Anyway, I'm stumped. I just want a working pre-signed url... what gives? Thanks for your help.
I'm using the react-s3-uploader node package, which takes in a signingUrl used to obtain a signed URL for storing an object in S3.
Currently I've configured a Lambda function (with an API Gateway endpoint) to generate this signed URL. After some tinkering, I've got it to work, but I noticed that I have to define the content type in my Lambda function, which looks like this:
var AWS = require('aws-sdk');
const S3 = new AWS.S3();

AWS.config.update({
  region: 'us-west-2'
});

exports.handler = function(event, context) {
  console.log('context, ', context);
  console.log('event, ', event);

  var params = {
    Bucket: 'video-bucket',
    Key: 'videoname.mp4',
    Expires: 120,
    ACL: 'public-read',
    ContentType: 'video/mp4'
  };

  S3.getSignedUrl('putObject', params, function (err, url) {
    console.log('The URL is', url);
    context.done(null, {signedUrl: url});
  });
};
The issue is that I want this signed url to be able to accept multiple types of video files, and I've tried setting ContentType to video/*, which doesn't work. Also, because this lambda endpoint isn't what actually takes the upload, I can't pass in the filetype to this function beforehand.
You'll have to find a way to discover the file type and pass it to the Lambda function as an argument. There isn't an alternative here with a pre-signed PUT.
The request signing process for PUT has no provision for wildcards or multiple/alternative values.
In case anyone else is looking for a working answer, I eventually found out that react-s3-uploader does pass the content type and filename over to the signing URL (except I had forgotten to pass the query string through in API Gateway earlier), so I was able to extract it as event.params.querystring.contentType in Lambda.
Then in the params, I simply set {ContentType: event.params.querystring.contentType} and now it accepts all file formats.
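A minimal sketch of what that Lambda ends up looking like, assuming the API Gateway mapping exposes the query string as event.params.querystring (as described above) and that the uploader sends contentType and objectName parameters; the bucket name is a placeholder:

var AWS = require('aws-sdk');
var S3 = new AWS.S3();

exports.handler = function (event, context) {
  // contentType and objectName are assumed to arrive from react-s3-uploader via the API Gateway query string.
  var contentType = event.params.querystring.contentType;
  var objectName = event.params.querystring.objectName;

  var params = {
    Bucket: 'video-bucket',   // placeholder bucket name
    Key: objectName,
    Expires: 120,
    ACL: 'public-read',
    ContentType: contentType  // whatever type the client is actually uploading
  };

  S3.getSignedUrl('putObject', params, function (err, url) {
    if (err) return context.fail(err);
    context.done(null, { signedUrl: url });
  });
};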
I am trying to force files to download from Amazon S3 using the GET request parameter response-content-disposition.
I first created a signed URL which works fine when I want to view the file.
I then attempt to redirect there with the response-content-disposition header. Here is my code:
res.writeHead(302, {
  'response-content-disposition': 'attachment',
  'Location': 'http://s3-eu-west-1.amazonaws.com/mybucket/test/myfile.txt?Expires=1501018110&AWSAccessKeyId=XXXXXX&Signature=XXXXX',
});
However, this just redirects to the file and does not download it.
Also, when I try to visit the file with response-content-disposition as a GET variable:
http://s3-eu-west-1.amazonaws.com/mybucket/test/myfile.txt?Expires=1501018110&AWSAccessKeyId=XXXXXX&Signature=XXXXX&response-content-disposition=attachment
...I receive the following response:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
You can force a file to download, or change its file name, using the sample code below. This sample downloads a file using a pre-signed URL.
The important thing here is the ResponseContentDisposition key in the params of the getSignedUrl method. There is no need to pass any header such as content-disposition in your request.
var aws = require('aws-sdk');
var s3 = new aws.S3();

exports.handler = function (event, context) {
  var params = {
    Bucket: event.bucket,
    Key: event.key,
    ResponseContentDisposition: 'attachment;filename=' + 'myprefix' + event.key
  };

  s3.getSignedUrl('getObject', params, function (err, url) {
    if (err) {
      console.log(JSON.stringify(err));
      context.fail(err);
    }
    else {
      context.succeed(url);
    }
  });
};
The correct way of using the response-content-disposition option is to include it as a GET variable, but you're not calculating the signature correctly.
You can find more information on how you should calculate the signature in the Amazon REST Authentication guide.