I want to get a pre-signed URL for my S3 bucket for a PUT request, like this (in Node.js):
AWS.config.update({
  accessKeyId: s3Config.accessKeyId,
  secretAccessKey: s3Config.secretAccessKey,
  region: s3Config.region,
  signatureVersion: 'v4'
});

var s3bucket = new AWS.S3({params: {Bucket: s3Config.bucket, Key: '/content'}});
s3Config.preSignedURL = s3bucket.getSignedUrl('putObject', {ACL: s3Config.acl});
As a result I get:
https://[BUCKET].s3.[REGION].amazonaws.com/[KEY]?[presignedURLStuff]
According to Amazon this URL is wrong: it has to be in the format http://*.s3.amazonaws.com/*. I also get the error net::ERR_INSECURE_RESPONSE from the pre-flight request. What do I have to do so that the function constructs the right URL? Removing the region from the URL leads to 400 Bad Request; the pre-flight OPTIONS request works then.
I believe a PUT request needs to go to a specific bucket region.
If for some reason the URL that you get back has the wrong region for your bucket (say your EC2 instance is in a different region), you can set the bucket region in the S3 init:
return new aws.S3({region: "eu-west-1"}) // use the bucket's own region name
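For example, a minimal sketch under these assumptions (the bucket name, key, and region below are placeholders, not values from the question) that pins both the region and v4 signing when generating the PUT URL:

var AWS = require('aws-sdk');

// Pin the client to the bucket's own region and use v4 signing so the
// generated host matches what S3 expects for that bucket.
var s3 = new AWS.S3({
  region: 'eu-west-1',        // placeholder: your bucket's region
  signatureVersion: 'v4'
});

var preSignedURL = s3.getSignedUrl('putObject', {
  Bucket: 'my-bucket',        // placeholder bucket name
  Key: 'content',
  Expires: 900                // URL validity in seconds
});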
Related
I am trying to upload an image using a presigned URL.
const s3Params = {
  Bucket: config.MAIN_BUCKET,
  Key: S3_BUCKET + '/' + fileName,
  ContentType: fileType,
  Expires: 900,
  ACL: 'public-read'
};

const s3 = new AWS.S3({
  accessKeyId: config.accessKeyId,
  secretAccessKey: config.secretAccessKey,
  region: config.region
});

const url = await s3.getSignedUrlPromise('putObject', s3Params);
return url;
I get a URL something like:
https://s3.eu-west-1.amazonaws.com/bucket/folder/access.JPG?AWSAccessKeyId=xxxx&Content-Type=multipart%2Fform-data&Expires=1580890085&Signature=xxxx&x-amz-acl=public-read
I have tried uploading the file with the content types image/jpg and multipart/form-data, tried generating the URL without a file type and uploading, and tried both the PUT and POST methods, but nothing seems to work. The error is always:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
The access credentials have the appropriate permissions, because the same files upload fine through s3.putObject (through the API instead of a presigned URL).
Edit:
It seems that Postman is sending the content type as multipart/form-data; boundary=--------------------------336459561795502380899802. Here the boundary is added on top. How do I fix this?
As per the AWS S3 documentation Signing and Authenticating REST Requests, S3 uses Signature Version 4 by default.
But the Node.js AWS SDK uses Signature Version 2 by default.
So you have to explicitly specify Signature Version 4 in the S3 client configuration.
Add this to the S3 config:
s3 = new AWS.S3({
  signatureVersion: 'v4'
});
I was testing with form-data in Postman, but the getSignedUrl() function does not support that. I tried sending the file as a binary body and it worked fine. For multipart uploads there seems to be a different function in the AWS SDK.
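For illustration, a rough sketch (assuming the URL was signed with ContentType: 'image/jpeg'; the function name is made up) of uploading the raw file bytes with PUT so that the Content-Type matches what was signed and no multipart boundary gets added:

// `url` is the presigned PUT URL; `file` is a File/Blob from an <input type="file">.
// Sending the file as the raw body keeps the Content-Type identical to the signed one;
// a multipart/form-data body would change it and break the signature.
async function uploadWithPresignedUrl(url, file) {
  const response = await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': 'image/jpeg' }, // must match the signed ContentType
    body: file
  });
  if (!response.ok) {
    throw new Error('Upload failed: ' + response.status);
  }
}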
This problem has been driving me nuts for two days now.
The objective: Upload an image directly from the browser to S3 via a pre-signed URL supplied by the getSignedUrl function in the AWS Javascript SDK.
I haven't had any problems generating URLs with getSignedUrl. The following code...
const params = {
  Key: key,
  Bucket: process.env.S3_BUCKET,
  ContentType: "image/jpeg"
};

S3.getSignedUrl("putObject", params, callback);
...yields something like:
https://s3.amazonaws.com/foobar-bucket/someImage.jpeg?AWSAccessKeyId=ACCESSKEY123&Content-Type=image%2Fjpeg&Expires=1543357053&Signature=3fgjyj7gpJiQvbIGhqWXSY40JUU%3D&x-amz-acl=private&x-amz-security-token=FQoGZXIvYXdzEDYaDPzeqKMbfgetCcZBaCL0AWftL%2BIT%2BP3tqTDVtNU1G8eC9sjl9unhwknrYvnEcrztfR9%2FO9AGD6VDiDDKfTQ9SmQpfXmiyTKDwAcevTwxeRnj6hGwnHgvzFVBzoslrB8MxrxjUpiI7NQW3oRMunbLskZ4LgvQYs8Rh%2FDjat4H%2F%2BvfPxDSQUSa41%2BFKcoySUHGh2xqfBFGCkHlIqVgk1KELDHmTaNckkvc9B4cgEXmAd3u1f1KC9mbobYcLLRPIzMj9bLJH%2BIlINylzubao1pCQ7m%2BWdX5xAZDhTSNwQfo4ywSWV7kUpbq2dgEriOiKAReEjmFQtuGqYBi3t2dhrasptOlXFXUozdz23wU%3D
But uploading an image via PUT request to the provided URL always returns a 403 SignatureDoesNotMatch error from S3.
What DOES work:
- Calling getSignedUrl() from a local instance of AWS Lambda (via serverless-offline).
What DOESN'T work:
- Setting the query string variables as headers (Content-Type, x-amz-*, etc.)
- Removing all headers
- Changing the ACL when getting the URL (private, public-read-write, no ACL, etc.)
- Changing the region of aws-sdk in Node
- Trying POST instead of PUT (it's worth a shot)
Any help on this issue would be greatly appreciated. I'm about to throw my computer out the window and jump out after it in frustration if this continues to be a problem, as it simply does NOT want to work!
I figured it out. The Lambda function invoking getSignedUrl() did not have the correct IAM role permissions to access the S3 bucket in question. In serverless.yml...
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:*
    Resource: "arn:aws:s3:::foobar-bucket/*"
I wouldn't actually use a wildcard here, but you get the picture. The fact that getSignedUrl() still succeeds and returns a URL even when the URL is doomed to fail because of missing permissions is extremely misleading.
I hope this answer helps some confused soul in the future.
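For context, a rough sketch of the kind of handler this role backs (the handler name, bucket variable, and key below are illustrative assumptions, not from the original post):

// handler.js -- illustrative sketch only
const AWS = require("aws-sdk");
const S3 = new AWS.S3({ signatureVersion: "v4" });

module.exports.getUploadUrl = async () => {
  const params = {
    Bucket: process.env.S3_BUCKET, // e.g. foobar-bucket
    Key: "someImage.jpeg",         // placeholder key
    ContentType: "image/jpeg",
    Expires: 300
  };

  // getSignedUrl succeeds even if the role lacks S3 permissions;
  // the failure only shows up later, when the client uses the URL.
  const url = S3.getSignedUrl("putObject", params);
  return { statusCode: 200, body: JSON.stringify({ url }) };
};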
It worked for me doing it the old-school way (axios kept giving 403 Forbidden):
const xhr = new XMLHttpRequest();
xhr.open("PUT", signedRequest);
xhr.onreadystatechange = () => {
  if (xhr.readyState === 4) {
    if (xhr.status === 200) {
      // Put your logic here.
      // When it gets here you can access the image using the URL you got when it was signed.
    }
  }
};
xhr.send(file);
Notice this needs to run from the client, so you will need to configure the Cross-Origin Resource Sharing (CORS) policy on your bucket.
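As a rough illustration (the bucket name and allowed origin below are placeholders), the CORS policy could be set from Node with the SDK's putBucketCors call, or equivalently through the S3 console:

const AWS = require("aws-sdk");
const s3 = new AWS.S3();

// Allow the browser origin to PUT directly to the bucket.
s3.putBucketCors({
  Bucket: "your-bucket-name",                           // placeholder
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ["https://your-app.example.com"], // placeholder origin
      AllowedMethods: ["PUT", "GET"],
      AllowedHeaders: ["*"],
      MaxAgeSeconds: 3000
    }]
  }
}, (err) => {
  if (err) console.error(err);
});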
So I want to pipe a file straight to the client; how I am currently doing it is creating a file on disk, then sending that file to the client:
router.get("/download/:name", async (req, res) => {
const s3 = new aws.S3();
const dir = "uploads/" + req.params.name + ".apkg"
let file = fs.createWriteStream(dir);
await s3.getObject({
Bucket: <bucket-name>,
Key: req.params.name + ".apkg"
}).createReadStream().pipe(file);
await res.download(dir);
});
I just looked up that res.download() only serves local files. Is there a way to go directly from AWS S3 to the client's download, i.e. pipe files straight to the user? Thanks in advance.
As described in this SO thread:
You can simply pipe the read stream into the response instead of piping it to the file; just make sure to supply the correct Content-Type and to set it as an attachment, so the browser will know how to handle the response properly.
res.attachment(req.params.name);

await s3.getObject({
  Bucket: <bucket-name>,
  Key: req.params.name + ".apkg"
}).createReadStream().pipe(res);
One more pattern for this is to create a signed URL directly to the S3 object and then let the client download straight from S3, instead of streaming it through your Node web server. This will reduce the workload on your web server.
You will need to use the getSignedUrl method from the AWS S3 SDK for JS.
Then, once you have the URL, just return it to your client so they can download the file themselves.
You should take into account that once you give the client a signed URL that has download permissions for, say, 5 minutes, they will only be able to download that file during those next 5 minutes. You should also take into account that they will be able to pass that URL to anyone else for download during those 5 minutes, so it depends on how secure you need this to be.
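A minimal sketch of that pattern, assuming the same route as in the question (the bucket name and expiry are placeholders):

router.get("/download/:name", (req, res) => {
  const s3 = new aws.S3();

  // Sign a GET URL valid for 5 minutes; the client downloads straight from S3.
  const url = s3.getSignedUrl("getObject", {
    Bucket: "your-bucket-name",   // placeholder
    Key: req.params.name + ".apkg",
    Expires: 300,
    ResponseContentDisposition: 'attachment; filename="' + req.params.name + '.apkg"'
  });

  res.json({ url });
});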
S3 can be used to serve content, so I would do the following:
1. Add CORS headers to your Node response. This will enable the browser to download from another origin, i.e. S3 (see the sketch after this list).
2. Enable the S3 web server (static website hosting) on your bucket.
3. Add a script to redirect the download to S3; this you could achieve in JS.
4. Use a signed URL, as suggested in the other post, if you need to protect the S3 content.
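A rough sketch of steps 1 and 3 (the allowed origin and the bucket URL below are placeholders, not values from the question):

// Express route that sends a CORS header and redirects the browser to the S3-hosted object.
router.get("/download/:name", (req, res) => {
  res.set("Access-Control-Allow-Origin", "https://your-app.example.com");             // placeholder origin
  res.redirect("https://your-bucket.s3.amazonaws.com/" + req.params.name + ".apkg");  // placeholder bucket URL
});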
I have my REST API server on which I store the AWS public/secret keys. I also store a client public/secret key (the client is a user I created; it has permission to make CORS requests).
I have an external server which will upload files directly to the S3 bucket. But I don't want to store AWS credentials on it; before the upload I want it to somehow call the main server to sign the request, and then upload the file directly to S3.
For now I am using aws-sdk on the external server like this:
var aws = require('aws-sdk');

aws.config.update({
  "accessKeyId": process.env.AMAZONS3_ACCESSKEY_ID,
  "secretAccessKey": process.env.AMAZONS3_ACCESSKEY,
  "region": process.env.AMAZONS3_REGION_CLOUD,
});

var s3 = new aws.S3({ params: { Bucket: 'myCustomBucket' } });
s3.putObject(...);
Now I need to change this so the external server calls the main server with some S3 params, gets back a signed key or something like that, and uses it to upload the file.
So what should the endpoint on the main server look like (what params should it consume, and how do I generate the signature)?
And then how can I make the request from the external server using the signature?
Have a look here, http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-s3.html, under the section on creating a presigned URL:
// Get a command object from the client and pass in any options
// available in the GetObject command (e.g. ResponseContentDisposition)
$command = $client->getCommand('GetObject', array(
    'Bucket' => $bucket,
    'Key' => 'data.txt',
    'ResponseContentDisposition' => 'attachment; filename="data.txt"'
));

// Create a signed URL from the command object that will last for
// 10 minutes from the current time
$signedUrl = $command->createPresignedUrl('+10 minutes');

echo file_get_contents($signedUrl);
// > Hello!
Create the command (in your case a PUT, not a GET) on one server, pass this to the main server, which will create the presigned URL, and pass this back to the external server to execute.
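Since the question uses the Node aws-sdk, here is a rough sketch of the same idea in Node (the route path, query parameters, and helper name are assumptions, not from the answer): the main server holds the credentials and signs a PUT URL, and the external server then PUTs the raw file bytes to that URL.

// Main server (holds the AWS credentials): returns a presigned PUT URL.
var aws = require('aws-sdk');
var s3 = new aws.S3({ signatureVersion: 'v4' });

app.get('/sign-upload', function (req, res) {           // route path is an assumption
  var url = s3.getSignedUrl('putObject', {
    Bucket: 'myCustomBucket',
    Key: req.query.key,                                  // e.g. "uploads/file.txt"
    ContentType: req.query.contentType,                  // must match the upload's Content-Type
    Expires: 300
  });
  res.json({ url: url });
});

// External server (no AWS credentials): after fetching { url } from the main server,
// PUT the raw file bytes to it with the same Content-Type that was signed.
var https = require('https');
var fs = require('fs');

function uploadToSignedUrl(signedUrl, filePath, contentType, done) { // hypothetical helper
  var body = fs.readFileSync(filePath);
  var req = https.request(signedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': contentType, 'Content-Length': body.length }
  }, function (res) {
    done(res.statusCode === 200 ? null : new Error('Upload failed: ' + res.statusCode));
  });
  req.end(body);
}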
I'm quite new to node.js and would like to do the following:
- user can upload one file
- upload should be saved to Amazon S3
- file information should be saved to a database
- script shouldn't be limited to a specific file size
As I've never used S3 or done uploads before, I might have some wrong ideas; please correct me if I'm wrong.
So in my opinion the original file name should be saved to the DB and returned for download, but the file on S3 should be renamed to my database entry ID to prevent overwriting files. Next, should the files be streamed or something? I've never done this, but it just doesn't seem smart to cache files on the server only to then push them to S3, does it?
Thanks for your help!
First, I recommend looking at the knox module for Node.js. It is from quite a reliable source: https://github.com/LearnBoost/knox
I wrote the code below for the Express framework, but if you do not use it, or use another framework, you should still understand the basics. Take a look at the CAPS_CAPTIONS in the code; you will want to change them according to your needs/configuration. Please also read the comments to understand the pieces of code.
app.post('/YOUR_REQUEST_PATH', function(req, res, next) {
  var fs = require("fs");
  var knox = require("knox");

  var s3 = knox.createClient({
      key: 'YOUR PUBLIC KEY HERE'    // take it from the AWS S3 configuration
    , secret: 'YOUR SECRET KEY HERE' // take it from the AWS S3 configuration
    , bucket: 'YOUR BUCKET'          // create a bucket on AWS S3 and put its name here. Configure it to your needs beforehand. Allow uploads (in the AWS management console) and possibly view/download. This can be done via bucket policies.
  });

  fs.readFile(req.files.NAME_OF_FILE_FIELD.path, function(err, buf) { // read the file submitted from the form on the fly
    var s3req = s3.put("/ABSOLUTE/FOLDER/ON/BUCKET/FILE_NAME.EXTENSION", { // configure putting the file. Write an algorithm to name your file
        'Content-Length': buf.length
      , 'Content-Type': 'FILE_MIME_TYPE'
    });

    s3req.on('response', function(s3res) { // write code for the response
      if (200 == s3res.statusCode) {
        // play with the database here, use the s3req and s3res variables here
      } else {
        // handle errors here
      }
    });

    s3req.end(buf); // execute the upload
  });
});
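Since the question asks whether files should be streamed rather than cached on the server, here is a rough sketch of the same upload using knox's putStream, so the file is piped to S3 instead of being read fully into memory (the field name, destination path, and MIME type are placeholders, as above):

app.post('/YOUR_REQUEST_PATH', function(req, res, next) {
  var fs = require("fs");
  var knox = require("knox");

  var s3 = knox.createClient({
      key: 'YOUR PUBLIC KEY HERE'
    , secret: 'YOUR SECRET KEY HERE'
    , bucket: 'YOUR BUCKET'
  });

  var filePath = req.files.NAME_OF_FILE_FIELD.path; // placeholder form field, as above

  fs.stat(filePath, function(err, stat) {
    if (err) return next(err);

    var headers = {
        'Content-Length': stat.size
      , 'Content-Type': 'FILE_MIME_TYPE'             // placeholder MIME type
    };

    // putStream pipes the file to S3 without buffering it all in memory
    s3.putStream(fs.createReadStream(filePath), "/ABSOLUTE/FOLDER/ON/BUCKET/FILE_NAME.EXTENSION", headers, function(err, s3res) {
      if (err || 200 != s3res.statusCode) {
        // handle errors here
      } else {
        // save the file information to the database here
      }
      res.end();
    });
  });
});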