AWS S3 signed URLs with aws-sdk fail with "AuthorizationQueryParametersError" - node.js

I am trying to create a pre-signed URL for a private file test.png on S3.
My code:
var AWS = require('aws-sdk');
AWS.config.region = 'eu-central-1';
const s3 = new AWS.S3();
const key = 'folder/test.png';
const bucket = 'mybucket';
const expiresIn = 2000;
const params = {
  Bucket: bucket,
  Key: key,
  Expires: expiresIn,
};
console.log('params: ', params);
console.log('region: ', AWS.config.region);
var url = s3.getSignedUrl('getObject', params);
console.log('url sync: ', url);
s3.getSignedUrl('getObject', params, function (err, urlX) {
  console.log("url async: ", urlX);
});
Both calls log a URL to the console.
When I try to access it, it shows
<Error>
<Code>AuthorizationQueryParametersError</Code>
<Message>
Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.
</Message>
<RequestId>97377E063D0B1D09</RequestId>
<HostId>
6GE7EdqUvCEJis+fPoWR0Ffp2kN9Mlql4gs+qB4uY3hA4qR2wYrImkZfv05xy4XVjsZnRDVN63s=
</HostId>
</Error>
I am totally stuck and would really appreciate some idea on how to solve it.

I tested your code, changing only the key and the bucket, and it works. May I know which aws-sdk version and which Node.js version you are using? My test was executed on Node.js 8.1.2 and aws-sdk 2.77.0.
I was able to reproduce your error when I executed curl.
curl url (wrong) ->
<Error><Code>AuthorizationQueryParametersError</Code><Message>Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.</Message>
curl "url" (worked)
If you run curl without the double quotes, the shell interprets the ampersand as the background-process operator, so the URL is truncated at the first & and loses the required X-Amz-* query parameters.
Alternatively, you could try pasting the generated link in a browser.
Hope this helps.
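If you'd rather avoid shell quoting altogether, here is a minimal sketch that requests the signed URL from Node itself, assuming the url variable from the snippet above:
const https = require('https');

// Request the signed URL directly; a 200 status means S3 accepted the signature.
https.get(url, (res) => {
  console.log('status:', res.statusCode);
  res.resume(); // discard the body, only the status code matters here
}).on('error', (err) => console.error(err));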

Related

Getting "NoSuchKey" error when creating S3 signedUrl with NodeJS

I'm trying to access an S3 bucket with nodejs using aws-sdk.
When I call the s3.getSignedUrl method and open the URL it provides, I get a "NoSuchKey" error:
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>{MY_BUCKET_NAME}/{REQUESTED_FILENAME}</Key>
My theory is that the request path I'm passing is wrong. Comparing my request:
{BUCKET_NAME}.s3.{BUCKET_REGION}.amazonaws.com/{BUCKET_NAME}/{KEY}
With the url created from the AWS console:
{BUCKET_NAME}.s3.{BUCKET_REGION}.amazonaws.com/{KEY}
Why is aws-sdk adding the "{BUCKET_NAME}" at the end?
NodeJS code:
// s3 instance setup
const s3 = new AWS.S3({
  region: BUCKET_REGION,
  endpoint: BUCKET_ENDPOINT, // {MY_BUCKET_NAME}.s3.{REGION}.amazonaws.com
  s3ForcePathStyle: true,
  signatureVersion: "v4",
});
const getSignedUrlFromS3 = async (filename) => {
  const s3Params = {
    Bucket: BUCKET_NAME,
    Key: filename,
    Expires: 60,
  };
  const signedUrl = await s3.getSignedUrl("getObject", s3Params);
  return { name: filename, url: signedUrl };
};
The SDK adds the bucket name in the path because you specifically ask it to:
s3ForcePathStyle: true,
However, according to your comment, you use the bucket name in the endpoint already ("I have my endpoint as {MY_BUCKET_NAME}.s3.{REGION}.amazonaws.com") so your endpoint isn't meant to use path style...
Path style means using s3.amazonaws.com/bucket/key instead of bucket.s3.amazonaws.com/key. Forcing path style with an endpoint that actually already contains the bucket name ends up with bucket.s3.amazonaws.com/bucket/key which is interpreted as key bucket/key instead of key.
The fix should be to disable s3ForcePathStyle and instead to set s3BucketEndpoint: true because you specified an endpoint for an individual bucket.
However, in my opinion it's unnecessary to specify an endpoint in the first place - just let the SDK handle these things for you! I'd remove both s3ForcePathStyle and endpoint (then s3BucketEndpoint isn't needed either).
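For illustration, a minimal sketch of that simplified setup, assuming BUCKET_NAME and BUCKET_REGION are defined elsewhere; the SDK then builds {BUCKET_NAME}.s3.{BUCKET_REGION}.amazonaws.com/{KEY} on its own:
const AWS = require("aws-sdk");

const s3 = new AWS.S3({
  region: BUCKET_REGION,
  signatureVersion: "v4",
  // no endpoint and no s3ForcePathStyle: the SDK derives the bucket endpoint itself
});

const getSignedUrlFromS3 = (filename) => {
  const signedUrl = s3.getSignedUrl("getObject", {
    Bucket: BUCKET_NAME,
    Key: filename,
    Expires: 60,
  });
  return { name: filename, url: signedUrl };
};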

AWS S3 After Uploading, Image is broken

Re-question
Environment: Swift, Node.js, S3, Lambda, aws-serverless-express module
Problem:
After uploading as multipart/form-data with Alamofire on Swift, the image arrives broken on S3 in AWS.
code:
let photoKey = value.originalname + insertedReviewId + `_${i}.jpeg`;
let photoParam = {
  Bucket: bucket,
  Key: photoKey,
  Body: value.buffer,
  ACL: "public-read-write",
  ContentType: value.mimetype, /* mimetype: image/jpeg */
};
// image upload
let resultUploadS3 = await s3.upload(photoParam).promise();
Thanks for reading.
Self answer
I use aws-serverless-express, with aws-serverless-express/middleware as middleware.
I don't know exactly what the problem was, but after I removed the aws-serverless-express/middleware module it works: all images upload perfectly, with no broken files.
If you use aws-serverless-express/middleware together with body-parser or multer on Node.js, try removing aws-serverless-express/middleware.
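For illustration only, here is a minimal sketch of how such a setup without aws-serverless-express/middleware might look; the route, field name, and bucket name are assumed, and multer's memory storage keeps the uploaded parts intact so value.buffer reaches s3.upload() unmodified:
const express = require('express');
const multer = require('multer');
const AWS = require('aws-sdk');

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // keep file parts in memory
const s3 = new AWS.S3();

// 'photos' is an assumed field name; each file exposes originalname, mimetype, buffer
app.post('/reviews/:id/photos', upload.array('photos'), async (req, res) => {
  const results = await Promise.all(
    req.files.map((value, i) =>
      s3.upload({
        Bucket: 'mybucket', // assumed bucket name
        Key: `${value.originalname}${req.params.id}_${i}.jpeg`,
        Body: value.buffer, // raw bytes, not re-parsed by any extra middleware
        ContentType: value.mimetype,
        ACL: 'public-read-write',
      }).promise()
    )
  );
  res.json(results.map((r) => r.Location));
});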

S3.getSignedUrl to accept multiple content-type

I'm using the react-s3-uploader node package, which takes in a signingUrl for obtaining a signed URL for storing an object into S3.
Currently I've configured a Lambda function (with an API Gateway endpoint) to generate this signed URL. After some tinkering I've got it to work, but noticed that I have to define the content type in my Lambda function, which looks like this:
var AWS = require('aws-sdk');
const S3 = new AWS.S3()
AWS.config.update({
  region: 'us-west-2'
})
exports.handler = function(event, context) {
  console.log('context, ', context)
  console.log('event, ', event)
  var params = {
    Bucket: 'video-bucket',
    Key: 'videoname.mp4',
    Expires: 120,
    ACL: 'public-read',
    ContentType: 'video/mp4'
  };
  S3.getSignedUrl('putObject', params, function (err, url) {
    console.log('The URL is', url);
    context.done(null, {signedUrl: url})
  });
}
The issue is that I want this signed url to be able to accept multiple types of video files, and I've tried setting ContentType to video/*, which doesn't work. Also, because this lambda endpoint isn't what actually takes the upload, I can't pass in the filetype to this function beforehand.
You'll have to find a way to discover the file type and pass it to the Lambda function as an argument. There isn't an alternative here with a pre-signed PUT.
The request signing process for PUT has no provision for wildcards or multiple/alternative values.
In case anyone else is looking for a working answer: I eventually found out that react-s3-uploader does pass the content type and filename over to the signing URL (except I had forgotten to pass the query string through in API Gateway earlier), so I was able to extract it as event.params.querystring.contentType in Lambda.
Then in the params, I simply set {ContentType: event.params.querystring.contentType} and now it accepts all file formats.
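For illustration, a sketch of the adjusted handler under those assumptions; contentType comes from the forwarded query string, and objectName is an assumed parameter name for the file name:
var AWS = require('aws-sdk');
const S3 = new AWS.S3({ region: 'us-west-2' });

exports.handler = function (event, context) {
  var params = {
    Bucket: 'video-bucket',
    Key: event.params.querystring.objectName, // assumed query parameter for the file name
    Expires: 120,
    ACL: 'public-read',
    ContentType: event.params.querystring.contentType // e.g. video/mp4, video/quicktime
  };
  S3.getSignedUrl('putObject', params, function (err, url) {
    if (err) {
      context.fail(err);
    } else {
      context.done(null, { signedUrl: url });
    }
  });
};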

S3 file upload stream using node js

I am trying to find a solution to stream files to Amazon S3 from a Node.js server, with these requirements:
Don't store a temp file on the server or hold the complete file in memory; buffering up to some limit (but not the complete file) is acceptable while uploading.
No restriction on uploaded file size.
Don't block the server until the file upload completes, because with a heavy file upload the waiting time of other requests would increase unexpectedly.
I don't want to upload directly from the browser, because the S3 credentials would have to be shared in that case. Another reason to upload from the Node.js server is that some authentication may also need to be applied before uploading the file.
I tried to achieve this using node-multiparty, but it was not working as expected. You can see my solution and issue at https://github.com/andrewrk/node-multiparty/issues/49. It works fine for small files but fails for a file of 15 MB.
Any solution or alternative?
You can now use streaming with the official Amazon SDK for Node.js; see the section "Uploading a File to an Amazon S3 Bucket" or their example on GitHub.
What's even more awesome, you can finally do so without knowing the file size in advance. Simply pass the stream as the Body:
var AWS = require('aws-sdk');
var fs = require('fs');
var zlib = require('zlib');
var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body})
  .on('httpUploadProgress', function(evt) { console.log(evt); })
  .send(function(err, data) { console.log(err, data) });
For your information, the v3 SDK was published with a dedicated module to handle that use case: https://www.npmjs.com/package/@aws-sdk/lib-storage
Took me a while to find it.
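A minimal sketch of the same streaming upload with @aws-sdk/lib-storage (v3), assuming the same bucket and key names as above:
const { S3Client } = require('@aws-sdk/client-s3');
const { Upload } = require('@aws-sdk/lib-storage');
const fs = require('fs');
const zlib = require('zlib');

const upload = new Upload({
  client: new S3Client({}),
  params: {
    Bucket: 'myBucket',
    Key: 'myKey',
    // a stream of unknown length is fine; Upload handles the multipart logic
    Body: fs.createReadStream('bigfile').pipe(zlib.createGzip()),
  },
});

upload.on('httpUploadProgress', (progress) => console.log(progress));
upload.done()
  .then((data) => console.log(data))
  .catch((err) => console.error(err));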
Give https://www.npmjs.org/package/streaming-s3 a try.
I used it for uploading several big files in parallel (>500Mb), and it worked very well.
It is very configurable and also lets you track upload statistics.
You do not need to know the total size of the object, and nothing is written to disk.
If it helps anyone I was able to stream from the client to s3 successfully (without memory or disk storage):
https://gist.github.com/mattlockyer/532291b6194f6d9ca40cb82564db9d2a
The server endpoint assumes req is a stream object; I sent a File object from the client, which modern browsers can send as binary data, and added the file info in the headers.
const fileUploadStream = (req, res) => {
  // get "body" args from header
  const { id, fn } = JSON.parse(req.get('body'));
  const Key = id + '/' + fn; // upload to s3 folder "id" with filename === fn
  const params = {
    Key,
    Bucket: bucketName, // set somewhere
    Body: req, // req is a stream
  };
  s3.upload(params, (err, data) => {
    if (err) {
      res.send('Error Uploading Data: ' + JSON.stringify(err) + '\n' + JSON.stringify(err.stack));
    } else {
      res.send(Key);
    }
  });
};
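For context, a minimal client-side sketch (not part of the gist; the /upload path is assumed) of sending the raw File as the request body, with the file info JSON-encoded in a "body" header to match the handler above:
const uploadFile = (file, id) =>
  fetch('/upload', {
    method: 'POST',
    headers: { body: JSON.stringify({ id, fn: file.name }) }, // read on the server via req.get('body')
    body: file, // the browser sends the File as binary data
  }).then((res) => res.text()); // resolves with the S3 Key echoed back by the server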
Yes, putting the file info in the headers breaks convention, but if you look at the gist it's much cleaner than anything else I found using streaming libraries, multer, busboy, etc.
+1 for pragmatism, and thanks to @SalehenRahman for his help.
I'm using the s3-upload-stream module in a working project here.
There are also some good examples from @raynos in his http-framework repository.
Alternatively, you can look at https://github.com/minio/minio-js. It has a minimal set of abstracted APIs implementing the most commonly used S3 calls.
Here is an example of a streaming upload.
$ npm install minio
$ cat >> put-object.js << EOF
var Minio = require('minio')
var fs = require('fs')
// find out your s3 end point here:
// http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
var s3Client = new Minio({
  url: 'https://<your-s3-endpoint>',
  accessKey: 'YOUR-ACCESSKEYID',
  secretKey: 'YOUR-SECRETACCESSKEY'
})
var localFile = 'your_localfile.zip';
var fileStream = fs.createReadStream(localFile);
fs.stat(localFile, function(e, stat) {
  if (e) {
    return console.log(e)
  }
  s3Client.putObject('mybucket', 'hello/remote_file.zip', 'application/octet-stream', stat.size, fileStream, function(e) {
    return console.log(e) // should be null
  })
})
EOF
putObject() here is a fully managed single function call; for file sizes over 5 MB it automatically performs a multipart upload internally. You can also resume a failed upload, and it will start from where it left off by verifying previously uploaded parts.
Additionally, this library is isomorphic and can be used in browsers as well.

S3 Force File Download with NodeJS

I am trying to force files to download from Amazon S3 using the GET request parameter response-content-disposition.
I first created a signed URL which works fine when I want to view the file.
I then attempt to redirect there with the response-content-disposition header. Here is my code:
res.writeHead(302, {
  'response-content-disposition': 'attachment',
  'Location': 'http://s3-eu-west-1.amazonaws.com/mybucket/test/myfile.txt?Expires=1501018110&AWSAccessKeyId=XXXXXX&Signature=XXXXX',
});
However, this just redirects to the file and does not download it.
Also, when I try to visit the file with response-content-disposition as a GET variable:
http://s3-eu-west-1.amazonaws.com/mybucket/test/myfile.txt?Expires=1501018110&AWSAccessKeyId=XXXXXX&Signature=XXXXX&response-content-disposition=attachment
...I receive the following response:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
Hi, you can force a file download or change the file name using the sample code below. This sample code downloads a file using a pre-signed URL.
The important thing here is the ResponseContentDisposition key in the params of the getSignedUrl method. There is no need to pass any header such as content-disposition in your request.
var aws = require('aws-sdk');
var s3 = new aws.S3();
exports.handler = function (event, context) {
  var params = {
    Bucket: event.bucket,
    Key: event.key,
    ResponseContentDisposition: 'attachment;filename=' + 'myprefix' + event.key
  };
  s3.getSignedUrl('getObject', params, function (err, url) {
    if (err) {
      console.log(JSON.stringify(err));
      context.fail(err);
    }
    else {
      context.succeed(url);
    }
  });
};
The correct way of using the response-content-disposition option is to include it as a GET variable, but you're not calculating the signature correctly.
You can find more information on how to calculate the signature in the Amazon REST Authentication guide.
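If you prefer to keep the redirect approach, here is a minimal sketch (an Express-style handler and the bucket/key from the question are assumed) that combines the two answers: let getSignedUrl sign response-content-disposition for you, then redirect to the resulting URL:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'eu-west-1' });

app.get('/download', function (req, res) {
  var url = s3.getSignedUrl('getObject', {
    Bucket: 'mybucket',
    Key: 'test/myfile.txt',
    Expires: 60,
    ResponseContentDisposition: 'attachment; filename="myfile.txt"'
  });
  res.redirect(302, url); // the signed URL already carries response-content-disposition
});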
