Node.js AWS SDK S3 Generate Presigned URL - node.js

I am using the NodeJS AWS SDK to generate a presigned S3 URL. The docs give an example of generating a presigned URL.
Here is my exact code (with sensitive info omitted):
const AWS = require('aws-sdk')
const s3 = new AWS.S3()
AWS.config.update({accessKeyId: 'id-omitted', secretAccessKey: 'key-omitted'})

// Tried with and without this. Since s3 is not region-specific, I don't
// think it should be necessary.
// AWS.config.update({region: 'us-west-2'})

const myBucket = 'bucket-name'
const myKey = 'file-name.pdf'
const signedUrlExpireSeconds = 60 * 5

const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
})

console.log(url)
The URL that this generates looks like this:
https://bucket-name.s3-us-west-2.amazonaws.com/file-name.pdf?AWSAccessKeyId=[access-key-omitted]&Expires=1470666057&Signature=[signature-omitted]
I am copying that URL into my browser and getting the following response:
<Error>
  <Code>NoSuchBucket</Code>
  <Message>The specified bucket does not exist</Message>
  <BucketName>[bucket-name-omitted]</BucketName>
  <RequestId>D1A358D276305A5C</RequestId>
  <HostId>
    bz2OxmZcEM2173kXEDbKIZrlX508qSv+CVydHz3w6FFPFwC0CtaCa/TqDQYDmHQdI1oMlc07wWk=
  </HostId>
</Error>
I know the bucket exists. When I navigate to this item via the AWS Web GUI and double-click on it, it opens the object with a URL that works just fine:
https://s3-us-west-2.amazonaws.com/[bucket-name-omitted]/[file-name-omitted].pdf?X-Amz-Date=20160808T141832Z&X-Amz-Expires=300&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Signature=[signature-omitted]&X-Amz-Credential=ASIAJKXDBR5CW3XXF5VQ/20160808/us-west-2/s3/aws4_request&X-Amz-SignedHeaders=Host&x-amz-security-token=[really-long-key]
So I am led to believe that I must be doing something wrong with how I'm using the SDK.

Dustin,
Your code is correct. Double-check the following (a quick way to test most of them is sketched after the list):
Your bucket access policy.
Your bucket permission via your API key.
Your API key and secret.
Your bucket name and key.
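A minimal sanity-check sketch, reusing the placeholder names from the question: call headObject with the same credentials before signing, and the error code tells you which of the above is the problem.
const AWS = require('aws-sdk');

// Same credentials you intend to sign with; values are placeholders.
AWS.config.update({accessKeyId: 'id-omitted', secretAccessKey: 'key-omitted'});
const s3 = new AWS.S3();

// A 403 points at credentials/permissions, a 404 at the key,
// and NoSuchBucket at the bucket name or region.
s3.headObject({Bucket: 'bucket-name', Key: 'file-name.pdf'}, (err, data) => {
  if (err) console.log(err.code, err.message);
  else console.log('Object reachable:', data.ContentLength, 'bytes');
});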

Since this question is very popular and the most popular answer says the code is correct, it is worth pointing out that there is a subtle problem in the code that can lead to a frustrating issue. So, here is working code:
const AWS = require('aws-sdk')

AWS.config.update({
  accessKeyId: ':)))',
  secretAccessKey: ':DDDD',
  region: 'ap-south-1',
  signatureVersion: 'v4'
});

const s3 = new AWS.S3()
const myBucket = ':)))))'
const myKey = ':DDDDDD'
const signedUrlExpireSeconds = 60 * 5

const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
});

console.log(url);
The noticeable difference is that the s3 object is created after the config update; without this, the config is not applied and the generated URL doesn't work.
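Equivalently, you can pass the same settings straight into the constructor, which sidesteps the ordering issue entirely (a sketch reusing the placeholder values above):
const s3 = new AWS.S3({
  accessKeyId: ':)))',
  secretAccessKey: ':DDDD',
  region: 'ap-south-1',
  signatureVersion: 'v4'
});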

Here is the complete code for generating a pre-signed (putObject) URL for any type of file in S3.
If you want, you can include an expiration time using the Expires parameter in the params.
The code below will upload any type of file, such as excel (xlsx), pdf, or jpeg:
const AWS = require('aws-sdk');
const fs = require('fs');
const axios = require('axios');

const s3 = new AWS.S3();
const filePath = 'C:/Users/XXXXXX/Downloads/invoice.pdf';

var params = {
  Bucket: 'testing-presigned-url-dev',
  Key: 'dummy.pdf',
  ContentType: 'application/octet-stream'
};

s3.getSignedUrl('putObject', params, function (err, url) {
  console.log('The URL is', url);
  fs.writeFileSync('./url.txt', url);

  axios({
    method: 'put',
    url,
    data: fs.readFileSync(filePath),
    headers: {
      'Content-Type': 'application/octet-stream'
    }
  })
    .then((result) => {
      console.log('result', result);
    })
    .catch((err) => {
      console.log('err', err);
    });
});
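One thing to note about the snippet above: because ContentType is part of the signed parameters, the Content-Type header sent with the PUT must match it exactly, otherwise S3 responds with SignatureDoesNotMatch (this comes up again in a related question below).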

I had a use case in Node.js where I wanted to get an object from S3, download it to a temp location, and then hand it as an attachment to a third-party service. This is how I broke the problem down:
get a signed URL from S3
make a REST call to get the object
write it into a local location
It may help anyone with the same use case; check out the link below and the sketch after it:
https://medium.com/@prateekgawarle183/fetch-file-from-aws-s3-using-pre-signed-url-and-store-it-into-local-system-879194bfdcf4
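In case the link goes stale, here is a minimal sketch of those three steps (the bucket, key, and temp path are placeholder assumptions):
const AWS = require('aws-sdk');
const axios = require('axios');
const fs = require('fs');

const s3 = new AWS.S3(); // assumes credentials from env vars or the config file

// 1. Get a signed URL from S3.
const url = s3.getSignedUrl('getObject', {
  Bucket: 'bucket-name',
  Key: 'file-name.pdf',
  Expires: 60 * 5
});

// 2. Make a REST call to fetch the object as a stream.
axios({method: 'get', url, responseType: 'stream'})
  .then((res) => {
    // 3. Write it to a local temp location.
    res.data.pipe(fs.createWriteStream('/tmp/file-name.pdf'));
  })
  .catch((err) => console.log(err));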

For me, I was getting a 403 because the IAM role I had used to generate the signed URL was missing the s3:GetObject permission for the bucket/object in question. Once I added this permission to the IAM role, the signed URL began to work correctly.
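For reference, the statement that was missing looks roughly like this (a sketch; the bucket ARN is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}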

Probably not the answer you are looking for, but it turned out I had swapped AWS_ACCESS_KEY_ID with AWS_SECRET_ACCESS_KEY.
For future visitors: you might want to double-check that.
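A cheap way to catch this: long-term access key IDs start with "AKIA" (temporary ones with "ASIA"), while secrets are longer random strings. A quick check before configuring the SDK will flag a swap (assuming the standard AWS_* environment variables):
// If this prints false, AWS_ACCESS_KEY_ID probably holds the secret instead.
console.log(/^A(KIA|SIA)/.test(process.env.AWS_ACCESS_KEY_ID || ''));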

Try this function with a promise:
const AWS = require("aws-sdk");
const s3 = new AWS.S3({
accessKeyId: 'AK--------------6U',
secretAccessKey: 'kz---------------------------oGp',
Bucket: 'bucket-name'
});
const getSingedUrl = async () => {
const params = {
Bucket: 'bucket_name',
Key: 'file-name.pdf',
Expires: 60 * 5
};
try {
const url = await new Promise((resolve, reject) => {
s3.getSignedUrl('getObject', params, (err, url) => {
err ? reject(err) : resolve(url);
});
});
console.log(url)
} catch (err) {
if (err) {
console.log(err)
}
}
}
getSingedUrl()
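Note that newer releases of the v2 SDK also ship a built-in promise variant, getSignedUrlPromise, which makes the manual wrapper unnecessary (same placeholder params as above):
const getUrlViaPromise = async () => {
  const url = await s3.getSignedUrlPromise('getObject', {
    Bucket: 'bucket_name',
    Key: 'file-name.pdf',
    Expires: 60 * 5
  });
  console.log(url);
};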

Related

The upload URL provided by AWS results in Error: SignatureDoesNotMatch when accessed

I am trying to use this tutorial to upload files directly from the browser into a bucket using a presigned URL.
I tried this:
export const myFunction = async (imageType: string) => {
  const s3 = new AWS.S3();
  const imageName = "somename";

  const s3Params = {
    Bucket: bucketname,
    Key: imageName,
    ContentType: 'image/' + imageType,
    Expires: 300,
  };

  const uploadURL = await s3.getSignedUrlPromise('putObject', s3Params);

  return JSON.stringify({
    uploadURL,
    imageName,
  });
};
And I do get back a URL in Postman. I accessed it and it redirected me to a new request; I set it to PUT and tried to send the request both without any file and with a file with the same name as in the URL (the name of the image appears in the URL) in the form-data. Both tries resulted in this error:
SignatureDoesNotMatch. The request signature we calculated does not match the signature you provided. Check your key and signing method.
I read something on Stack Overflow and found out it could be due to the file name having exotic characters, but that isn't my case.
How can I solve this problem?
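One common cause worth checking (a hedged sketch, not a confirmed diagnosis): every parameter signed into the URL must be reproduced exactly in the PUT. Since ContentType is part of the signature above, the request must send a matching Content-Type header and the file as the raw body, not as form-data:
const axios = require('axios');
const fs = require('fs');

// uploadURL is the value returned by myFunction; the body is the raw bytes.
axios.put(uploadURL, fs.readFileSync('/path/to/image.png'), {
  headers: {'Content-Type': 'image/png'} // must match the signed ContentType
})
  .then(() => console.log('uploaded'))
  .catch((err) => console.log(err.message));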

Upload image to s3 from url

I am trying to upload an image, for which I have a URL, into S3.
I want to do this without downloading the image to local storage first.
filePath = imageURL;

let params = {
  Bucket: 'bucketname',
  Body: fs.createReadStream(filePath),
  Key: "folder/" + id + "originalImage"
};

s3.upload(params, function (err, data) {
  if (err) console.log(err);
  if (data) console.log("original image success");
});
Expecting success, but getting this error instead (myURL is a publicly accessible https URL):
{ [Error: ENOENT: no such file or directory, open '<myURL HERE>']
  errno: -2,
  code: 'ENOENT',
  syscall: 'open',
  path: '<myURL HERE>' }
There are two ways to place files into an Amazon S3 bucket:
Upload the contents via PutObject(), or
Copy an object that is already in Amazon S3
So, in the case of your "web accessible Cloudinary image", it will need to be retrieved, then uploaded. You could either fully download the image and then upload it, or you could do some fancy stuff with streaming bodies where the source image is read into a buffer and then written to S3. Either way, the file will need to be "read" from Cloudinary, then "written" to S3.
As for the "web accessible Amazon S3 link", you could use the CopyObject() command to instruct S3 to directly copy the object from the source bucket to the destination bucket. (It also works within a bucket.)
This will make a GET request to the image URL, and pipe the response directly to the S3 upload stream, without the need to save the image to the local file system.
const AWS = require('aws-sdk');
const request = require('request');

const imageURL = 'https://example.com/image.jpg';

const s3 = new AWS.S3();

// Pass the response stream from request directly as the upload Body;
// s3.upload() accepts a readable stream and streams it to the bucket.
const params = {
  Bucket: 'bucketname',
  Key: "folder/" + id + "originalImage",
  Body: request(imageURL)
};

s3.upload(params, function (err, data) {
  if (err) console.log(err);
  if (data) console.log('original image success');
});
You can use the following snippet to upload files to S3 using the file URL:
import axios from 'axios';
import awsSDK from 'aws-sdk';

const uploadImageToS3 = () => {
  const url = 'https://www.abc.com/img.jpg';

  axios({
    method: 'get',
    url: url,
    responseType: 'arraybuffer',
  })
    .then(function (response) {
      console.log('res', response.data);
      const arrayBuffer = response.data;

      if (arrayBuffer) {
        awsSDK.config.update({
          accessKeyId: 'aws_access_key_id',
          secretAccessKey: 'aws_secret_access_key',
          region: 'aws_region',
        });

        const s3Bucket = new awsSDK.S3({
          apiVersion: '2006-03-01',
          params: {
            Bucket: 'aws_bucket_name'
          }
        });

        const data = {
          Body: arrayBuffer,
          Key: 'unique_key.fileExtension',
          ACL: 'public-read'
        };

        s3Bucket.upload(data, (err, res) => {
          if (err) {
            console.log('s3 err:', err);
          }
          if (res) {
            console.log('s3 res:', res);
          }
        });
      }
    });
};

uploadImageToS3();

Upload file to Amazon S3 using HTTP PUT

I work in a financial institution and for security reasons my employer cannot give out the access key id and the access key secret to the AWS account. This means I can't use aws-sdk.
As a next option, would it be possible to upload files using HTTP PUT to a public S3 bucket without using the AWS-SDK that requires the access key id and the access key secret?
I had a look at this answer: How to upload a file using a rest client for node
And was thinking of this approach:
var fs = require('fs');
var request = require('request');

var options = {
  method: 'PUT',
  preambleCRLF: true,
  postambleCRLF: true,
  uri: 'https://s3-ap-southeast-2.amazonaws.com/my-bucket/myFile.pdf',
  multipart: [
    {
      'content-type': 'application/pdf',
      body: fs.createReadStream('/uploads/uploaded-file.pdf')
    }
  ]
};

request(options, function (err, response, body) {
  if (err) {
    return console.log(err);
  }
  console.log('File uploaded to s3');
});
Could that work?
Your above code works only if you have custom storage (and it must be public), not for AWS storage.
For AWS storage, the access key id and the access key secret are mandatory; without them you cannot upload files to the storage.
This is a bit old, but for anyone looking for the same thing: you can now use a pre-signed URL to achieve this. How it works is that you create a pre-signed URL on your server, share it with the client, and the client uses it to upload the file to S3.
On the server, generate a URL:
const AWS = require('aws-sdk')

// Set credentials before constructing the client, otherwise they are not picked up.
AWS.config.update({accessKeyId: 'access-key', secretAccessKey: 'access-pass'})

const s3 = new AWS.S3({
  region: 'us-east-1',
  signatureVersion: 'v4'
})

const myBucket = 'clearg-developers'
const myKey = 'directory/newFile.zip'
const signedUrlExpireSeconds = 60 * 5 // seconds until the url expires

const url = s3.getSignedUrl('putObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
});

return url // assuming this runs inside a request handler
and on the client, from Node, you can PUT the file (a successful upload returns an empty body):
var fs = require('fs');
var request = require('request');

var fileName = '/path/to/file.ext';
var stats = fs.statSync(fileName);

fs.createReadStream(fileName).pipe(request({
  method: 'PUT',
  url: url,
  headers: {
    'Content-Length': stats['size']
  }
}, function (err, res, body) {
  console.log('success');
}));

stream response from nodejs request to s3

How do you use request to download contents of a file and directly stream it up to s3 using the aws-sdk for node?
The code below gives me Object #<Request> has no method 'read' which makes it seem like request does not return a readable stream...
var req = require('request');

var s3 = new AWS.S3({params: {Bucket: myBucket, Key: s3Key}});

var imageStream = req.get(url)
  .on('response', function (response) {
    if (200 == response.statusCode) {
      // imageStream should be read()able by now, right?
      s3.upload({Body: imageStream, ACL: "public-read", CacheControl: 5184000}, function (err, data) { // 2 months
        console.log(err, data);
      });
    }
  });
Per the aws-sdk docs Body needs to be a ReadableStream object.
What am I doing wrong here?
This can be pulled off using the s3-upload-stream module, however I'd prefer to limit my dependencies.
Since I had the same problem as @JoshSantangelo (zero byte files on S3) with request@2.60.0 and aws-sdk@2.1.43, let me add an alternative solution using Node's own http module (caveat: simplified code from a real life project and not tested separately):
var http = require('http');
var AWS = require('aws-sdk');

// Bucket bound here so upload() calls can omit it; 'my-bucket' is a placeholder.
var s3 = new AWS.S3({params: {Bucket: 'my-bucket'}});

function copyToS3(url, key, callback) {
  http.get(url, function onResponse(res) {
    if (res.statusCode >= 300) {
      return callback(new Error('error ' + res.statusCode + ' retrieving ' + url));
    }
    // The http response is itself a readable stream, so it can be the upload Body.
    s3.upload({Key: key, Body: res}, callback);
  })
    .on('error', function onError(err) {
      return callback(err);
    });
}
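Usage then looks like this (placeholder URL and key; note that http.get only handles plain-http URLs, so use the https module for https ones):
copyToS3('http://example.com/image.jpg', 'images/image.jpg', function (err, data) {
  if (err) return console.log(err);
  console.log('uploaded to', data.Location);
});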
As far as I can tell, the problem is that request does not fully support the current Node streams API, while aws-sdk depends on it.
References:
request issue about the readable event not working right
generic issue for "new streams" support in request
usage of the readable event in aws-sdk
You want to use the response object if you're manually listening for the response stream:
var req = require('request');

var s3 = new AWS.S3({params: {Bucket: myBucket, Key: s3Key}});

var imageStream = req.get(url)
  .on('response', function (response) {
    if (200 == response.statusCode) {
      // Use the response object, not the request object, as the upload Body.
      s3.upload({Body: response, ACL: "public-read", CacheControl: 5184000}, function (err, data) { // 2 months
        console.log(err, data);
      });
    }
  });
As Request has been deprecated, here's a solution utilizing Axios
const AWS = require('aws-sdk');
const axios = require('axios');

const downloadAndUpload = async function (url, fileName) {
  const res = await axios({ url, method: 'GET', responseType: 'stream' });
  const s3 = new AWS.S3(); // assumes AWS credentials in env vars or AWS config file

  const params = {
    Bucket: IMAGE_BUCKET,
    Key: fileName,
    Body: res.data,
    ContentType: res.headers['content-type'],
  };

  return s3.upload(params).promise();
};
Note that the current version of the AWS SDK doesn't throw an exception if the AWS credentials are wrong or missing; the promise simply never resolves.
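Usage would look like this (placeholder values, and IMAGE_BUCKET must be defined in scope):
downloadAndUpload('https://example.com/photo.jpg', 'photos/photo.jpg')
  .then((data) => console.log('uploaded to', data.Location))
  .catch((err) => console.log(err));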

Fetching signed URL from AWS S3 on my Node Server

Solved:
I want to get a signed URL from my Amazon S3 server. I am new to AWS. Where do I set my secret key and access key id so that S3 identifies requests from my server?
var express = require('express');
var app = express();

var AWS = require('aws-sdk'),
    s3 = new AWS.S3(),
    params = {Bucket: 'my-bucket', Key: 'path/to/key', Expires: 20};

s3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url);
})

app.listen(8000)
You can also set the credentials per bucket if you are working with multiple buckets; you just need to pass the credentials into the constructor of the S3 object, like so:
var AWS = require('aws-sdk');

var credentials = {
  accessKeyId: AWS_CONSTANTS.S3_KEY,
  secretAccessKey: AWS_CONSTANTS.S3_SECRET,
  region: AWS_CONSTANTS.S3_REGION
};

var s3 = new AWS.S3(credentials);

var params = {Bucket: 'bucket-name', Key: 'key-name', Expires: 20};

s3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url);
});
Later I solved my issue.
This was pretty helpful: http://aws.amazon.com/sdkfornodejs/
Moreover, you can hardcode your credentials as follows:
var express=require("express");
var app=express();
var AWS = require('aws-sdk')
, s3 = new AWS.S3()
, params = {Bucket:'your-bucket-name on s3', Key: 'key-name-on s3 you want to store under', Expires: 20}
AWS.config.update({accessKeyId: 'Your-Access-Key-Id', secretAccessKey:
'Your-secret-key'});
AWS.config.region = 'us-west-2';
s3.getSignedUrl('getObject', params, function (err, url) {
console.log('Signed URL: ' + url);
});
app.listen(8000);
