aws presigned url request signature mismatch while upload - node.js

I am trying to upload an image using a presigned URL:
const AWS = require('aws-sdk');

const s3Params = {
  Bucket: config.MAIN_BUCKET,
  Key: S3_BUCKET + '/' + fileName,
  ContentType: fileType,
  Expires: 900,
  ACL: 'public-read'
};

const s3 = new AWS.S3({
  accessKeyId: config.accessKeyId,
  secretAccessKey: config.secretAccessKey,
  region: config.region
});

const url = await s3.getSignedUrlPromise('putObject', s3Params);
return url;
I get a URL something like:
https://s3.eu-west-1.amazonaws.com/bucket/folder/access.JPG?AWSAccessKeyId=xxxx&Content-Type=multipart%2Fform-data&Expires=1580890085&Signature=xxxx&x-amz-acl=public-read
I have tried uploading the file with content type image/jpeg and with multipart/form-data.
I have tried generating the URL without a file type and then uploading.
I have tried both the PUT and POST methods,
but nothing seems to work.
The error is always:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
The access credentials have the appropriate permissions, because the same credentials upload files fine through a direct s3.putObject call (through the API instead of a presigned URL).
Edit:
It seems that Postman is sending the Content-Type as multipart/form-data; boundary=--------------------------336459561795502380899802, i.e. a boundary is appended. How do I fix this?

As per the AWS S3 documentation Signing and Authenticating REST requests, S3 uses Signature Version 4 by default.
But the Node.js AWS SDK uses Signature Version 2 by default,
so you have to explicitly specify Signature Version 4 in the S3 client configuration.
Add this to the S3 config:
const s3 = new AWS.S3({
  signatureVersion: 'v4'
});

I was testing with form-data in Postman, but a URL from getSignedUrl() does not support that. Sending the file as a binary body instead worked fine. For multipart form uploads there is a different function in the AWS SDK.
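For anyone hitting the same wall: what worked was sending the raw file bytes as the body, with a Content-Type header that exactly matches the ContentType the URL was signed with. form-data cannot work here, because it rewrites the Content-Type to multipart/form-data; boundary=..., which breaks the signature. A minimal browser-side sketch, where presignedUrl and file are placeholders (file being e.g. a File object from an input type="file"):

async function uploadWithPresignedUrl(presignedUrl, file) {
  const res = await fetch(presignedUrl, {
    method: 'PUT',
    headers: {
      // must be exactly the ContentType that was passed when signing the URL
      'Content-Type': file.type
    },
    body: file   // raw bytes, not FormData (FormData adds the boundary)
  });
  if (!res.ok) {
    throw new Error(`Upload failed with status ${res.status}`);
  }
}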

Related

downloading S3 files using express [duplicate]

I am currently trying to download a file from an S3 bucket using a button on the front-end. How is it possible to do this? I don't have any idea how to start. I have tried researching and researching, but no luck; everything I have found is about UPLOADING files to the S3 bucket, not DOWNLOADING. Thanks in advance.
NOTE: I am using ReactJS (front-end) and NodeJS (back-end), and the file is uploaded using WebMerge.
UPDATE: I am trying to generate a download link with this (tried Node even though I'm not a back-end dev, lol).
See the images below for what I have tried so far and the onClick function.
If the file you are trying to download is not public, then you have to create a signed URL to get that file.
The solution is here: Javascript to download a file from amazon s3 bucket?
It revolves around creating a Lambda function that generates a signed URL for you, then using that URL to download the file on button click.
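As a rough sketch of that idea (assuming an Express-style backend; the bucket name and region below are placeholders), the handler just signs a getObject URL and returns it:

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ signatureVersion: 'v4', region: 'eu-west-1' });   // placeholder region

// Backend route: returns a short-lived download URL for the requested key
app.get('/download-url/:key', (req, res) => {
  const url = s3.getSignedUrl('getObject', {
    Bucket: 'my-bucket',           // placeholder bucket name
    Key: req.params.key,
    Expires: 300                   // URL is valid for 5 minutes
  });
  res.json({ url });
});

On the front-end, the button's onClick handler can then fetch that endpoint and open the returned URL (e.g. window.open(url)) to start the download.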
BUT if the file you are trying to download is public, then you don't need a signed URL; you just need to know the path to the file. The URLs are structured like: https://s3.amazonaws.com/[file path]/[filename]
There is also AWS Amplify, which is created and maintained by the AWS team.
Just follow Get Started, and downloading the file from your React app is as simple as:
Storage.get('hello.png', { expires: 60 })
  .then(result => console.log(result))
  .catch(err => console.log(err));
Here is my solution:
let downloadImage = url => {
  // Derive the bucket and key from the S3 object URL
  // (assumes a URL like https://s3.amazonaws.com/<bucket>/<folder>/<file>)
  let urlArray = url.split("/");
  let bucket = urlArray[3];
  let key = `${urlArray[4]}/${urlArray[5]}`;
  let s3 = new AWS.S3({ params: { Bucket: bucket } });
  let params = { Bucket: bucket, Key: key };
  s3.getObject(params, (err, data) => {
    if (err) return console.log(err);
    // Turn the object bytes into a Blob and trigger a browser download
    let blob = new Blob([data.Body], { type: data.ContentType });
    let link = document.createElement('a');
    link.href = window.URL.createObjectURL(blob);
    link.download = url;
    link.click();
  });
};
The url argument is the URL of the S3 file.
Just put this in the onClick method of your button. You will also need the AWS SDK loaded and configured in the browser.
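If you go this route, note that calling the S3 client from the browser needs credentials of some kind; one minimal sketch is a Cognito Identity Pool (the region and pool ID below are placeholders):

// Configure the browser AWS SDK before calling downloadImage()
AWS.config.update({
  region: 'us-east-1',                                // placeholder region
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:xxxx-xxxx-xxxx'        // placeholder identity pool id
  })
});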

Upload base64 file into GCS using signedURL

I'm trying to upload a base64 file/image into Google Cloud Storage using a signed URL. My server-side code (Node.js) is something like this:
const {Storage} = require('@google-cloud/storage');
const storage = new Storage({
  projectId,
  keyFilename: gcloudServiceAccountFilePath,
});

async function generateSignedUrl() {
  const options = {
    version: 'v4',
    action: 'write',
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    //contentType: 'application/octet-stream'
  };
  const [url] = await storage.bucket(gcloudBucket)
    .file(`${fileRelativePath}`)
    .getSignedUrl(options);
  return url;
}
Now when I try with Postman with the below configuration:
Request: PUT
URL: https://storage.googleapis.com/my-signed-url.. //generated from above code
Headers:
x-goog-acl: 'public-read'
Content-Type: 'image/jpeg'
Body:
raw: 'base64-file-content'
My uploaded file in GCS stays as base64, and the file size is also different, as you can see in the storage.
The 1st image was uploaded directly into GCS with drag & drop.
The 2nd image was uploaded with Postman.
Not sure if I'm missing something while generating the signed URL, or any headers while uploading the file through Postman.
Thanks :)
The reason for the difference in the object sizes uploaded to Google Cloud Storage is actually the difference in the metadata of the objects. When you upload the image object with Postman using the REST API, the request headers are added as part of the image's metadata. This Google documentation clearly states that "Cloud Storage stores these headers as part of the object's metadata".
The first line of the object metadata introduction also confirms that objects stored in Cloud Storage have metadata associated with them. Hence, the API headers are added as metadata of your image object and consequently increase the size of the object.
Image objects uploaded via the console do not have object metadata unless it is explicitly set.
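If you want to verify this, a quick way (a sketch using the Node.js client; the bucket and file names are placeholders) is to read the stored metadata back and compare what the console upload and the Postman upload recorded:

const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

// Print the stored metadata (contentType, size, custom metadata, ...) for an object
async function printMetadata(bucketName, fileName) {
  const [metadata] = await storage.bucket(bucketName).file(fileName).getMetadata();
  console.log(metadata.contentType, metadata.size, metadata.metadata);
}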

AWS S3 - PUT to URL from getSignedUrl() returns 403 SignatureDoesNotMatch error

This problem has been driving me nuts for two days now.
The objective: Upload an image directly from the browser to S3 via a pre-signed URL supplied by the getSignedUrl function in the AWS Javascript SDK.
I haven't had any problems generating URLs with getSignedUrl. The following code...
const params = {
  Key: key,
  Bucket: process.env.S3_BUCKET,
  ContentType: "image/jpeg"
};
S3.getSignedUrl("putObject", params, callback);
...yields something like:
https://s3.amazonaws.com/foobar-bucket/someImage.jpeg?AWSAccessKeyId=ACCESSKEY123&Content-Type=image%2Fjpeg&Expires=1543357053&Signature=3fgjyj7gpJiQvbIGhqWXSY40JUU%3D&x-amz-acl=private&x-amz-security-token=FQoGZXIvYXdzEDYaDPzeqKMbfgetCcZBaCL0AWftL%2BIT%2BP3tqTDVtNU1G8eC9sjl9unhwknrYvnEcrztfR9%2FO9AGD6VDiDDKfTQ9SmQpfXmiyTKDwAcevTwxeRnj6hGwnHgvzFVBzoslrB8MxrxjUpiI7NQW3oRMunbLskZ4LgvQYs8Rh%2FDjat4H%2F%2BvfPxDSQUSa41%2BFKcoySUHGh2xqfBFGCkHlIqVgk1KELDHmTaNckkvc9B4cgEXmAd3u1f1KC9mbobYcLLRPIzMj9bLJH%2BIlINylzubao1pCQ7m%2BWdX5xAZDhTSNwQfo4ywSWV7kUpbq2dgEriOiKAReEjmFQtuGqYBi3t2dhrasptOlXFXUozdz23wU%3D
But uploading an image via PUT request to the provided URL always returns a 403 SignatureDoesNotMatch error from S3.
What DOES work:
Calling getSignedUrl() from a local instance of AWS Lambda (via serverless-offline).
What DOESN'T work:
Setting the query string variables as headers (Content-Type, x-amz-*, etc.)
Removing all headers
Changing the ACL when getting the URL (private, public-read-write, no ACL, etc.)
Changing the region of aws-sdk in Node
Trying POST instead of PUT (it's worth a shot)
Any help on this issue would be greatly appreciated. I'm about to throw my computer out the window and jump out after it in frustration if this continues to be a problem, as it simply does NOT want to work!
I figured it out. The Lambda function invoking getSignedUrl() did not have the correct IAM role permissions to access the S3 bucket in question. In serverless.yml...
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:*
    Resource: "arn:aws:s3:::foobar-bucket/*"
I wouldn't actually use a wildcard here, but you get the picture. The fact that getSignedUrl() still succeeds and returns a URL even when the URL is doomed to fail because of missing permissions is extremely misleading.
I hope this answer helps some confused soul in the future.
It worked for me doing it the old-school way (axios kept giving 403 Forbidden):
const xhr = new XMLHttpRequest();
xhr.open("PUT", signedRequest);
xhr.onreadystatechange = () => {
  if (xhr.readyState === 4) {
    if (xhr.status === 200) {
      // Put your logic here..
      // When it gets here you can access the image using the url you got when signed.
    }
  }
};
xhr.send(file);
Notice this needs to run from the client, so you will need to configure the Cross-Origin (CORS) policy on your bucket.
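For reference, a sketch of what enabling that could look like with the Node.js SDK (you can also set the same rules in the S3 console under the bucket's permissions; the origin and bucket name below are placeholders):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Allow the browser at your app's origin to PUT directly to the bucket
s3.putBucketCors({
  Bucket: 'foobar-bucket',                       // placeholder bucket name
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['https://example.com'],   // placeholder origin
      AllowedMethods: ['PUT', 'GET'],
      AllowedHeaders: ['*'],
      MaxAgeSeconds: 3000
    }]
  }
}).promise();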

Piping a file straight to the client using Node.js and Amazon S3

So I want to pipe a file straight to the client; how I am currently doing it is to write the file to disk and then send that file to the client.
router.get("/download/:name", async (req, res) => {
  const s3 = new aws.S3();
  const dir = "uploads/" + req.params.name + ".apkg";
  let file = fs.createWriteStream(dir);
  await s3.getObject({
    Bucket: <bucket-name>,
    Key: req.params.name + ".apkg"
  }).createReadStream().pipe(file);
  await res.download(dir);
});
I just found out that res.download() only serves local files. Is there a way to go directly from AWS S3 to the client download, i.e. pipe files straight to the user? Thanks in advance.
As described in this SO thread:
You can simply pipe the read stream into the response instead of piping it to the file; just make sure to supply the correct Content-Type and to set it as an attachment, so the browser will know how to handle the response properly.
res.attachment(req.params.name);
await s3.getObject({
  Bucket: <bucket-name>,
  Key: req.params.name + ".apkg"
}).createReadStream().pipe(res);
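Put together as a full route it might look something like this (a sketch; the error handling and explicit Content-Type are the bits worth adding, and <bucket-name> stays a placeholder):

router.get("/download/:name", (req, res) => {
  const s3 = new aws.S3();
  const key = req.params.name + ".apkg";

  // Tell the browser to save the response as a file, with a generic binary type
  res.attachment(key);
  res.type("application/octet-stream");

  s3.getObject({ Bucket: "<bucket-name>", Key: key })
    .createReadStream()
    .on("error", err => {
      // e.g. NoSuchKey - end the response instead of hanging
      console.error(err);
      res.status(404).end();
    })
    .pipe(res);
});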
One more pattern for this is to create a signed URL directly to the S3 object and then let the client download straight from S3, instead of streaming it through your Node web server. This reduces the workload on your web server.
You will need to use the getSignedUrl method from the AWS S3 SDK for JS.
Then, once you have the URL, just return it to your client so they can download the file themselves.
You should take into account that once you give the client a signed URL that has download permissions for, say, 5 minutes, they will only be able to download that file during those next 5 minutes. You should also take into account that they can pass that URL to anyone else for download during those 5 minutes, so it depends on how secure you need this to be.
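A minimal sketch of that pattern, reusing the route shape from the question (expiry and bucket name are placeholders):

router.get("/download/:name", (req, res) => {
  const s3 = new aws.S3();
  const url = s3.getSignedUrl("getObject", {
    Bucket: "<bucket-name>",
    Key: req.params.name + ".apkg",
    Expires: 300   // the link is only valid for 5 minutes
  });
  // Send the client straight to S3; the download never touches this server
  res.redirect(url);
});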
S3 can also be used to serve content directly, so I would do the following:
Add CORS headers to your Node response. This will enable the browser to download from another origin, i.e. S3.
Enable the S3 static website endpoint on your bucket.
Redirect the download from your script to S3; this you could achieve in JS.
Use a signed URL as suggested in the other post if you need to protect the S3 content.

AWS S3 getSignedUrl constructs wrong URL

I want to get a pre-signed URL for my S3 bucket for a PUT request like this (in node.js)
AWS.config.update({
  accessKeyId: s3Config.accessKeyId,
  secretAccessKey: s3Config.secretAccessKey,
  region: s3Config.region,
  signatureVersion: 'v4'
});
var s3bucket = new AWS.S3({params: {Bucket: s3Config.bucket, Key: '/content'}});
s3Config.preSignedURL = s3bucket.getSignedUrl('putObject', {ACL: s3Config.acl});
As a result I get:
https://[BUCKET].s3.[REGION].amazonaws.com/[KEY]?[presignedURLStuff]
According to Amazon this URL is wrong; the URL has to be in the format http://*.s3.amazonaws.com/*. I also get the error net::ERR_INSECURE_RESPONSE from the pre-flight. What do I have to do so that the function constructs the right URL? Removing the region from the URL leads to 400 Bad Request; the pre-flight OPTIONS then works.
I believe a PUT request needs to go to the specific bucket region.
If for some reason the URL that you get back has the wrong region for your bucket (say your EC2 instance is in a different region), you can set the bucket's region when initialising the S3 client:
return new aws.S3({region: "eu-west-1"})
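For example, a minimal sketch with an explicit region on the client used for signing (the bucket name and region are placeholders):

const aws = require('aws-sdk');

// Create the client in the bucket's own region so the signed host matches
const s3 = new aws.S3({
  region: 'eu-west-1',       // placeholder: use your bucket's region
  signatureVersion: 'v4'
});

const url = s3.getSignedUrl('putObject', {
  Bucket: 'my-bucket',       // placeholder bucket name
  Key: 'content',
  Expires: 900
});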
