AWS Multipart Upload SignatureDoesNotMatch - node.js

I am trying to upload a PDF file to AWS S3 using multi part uploads. However, when I send the PUT request for uploading the part, I receive a SignatureDoesNotMatch error.
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
My Server Code (Node) is as below:
CREATE MultiPart Upload
const AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';
const s3 = new AWS.S3({ apiVersion: '2006-03-01' });
const s3Params = {
Bucket: 'bucket-name',
Key: 'upload-location/filename.pdf',
}
const createRequest = await s3.createMultipartUpload({
...s3Params,
ContentType: 'application/pdf'
}).promise();
GET Signed URL
let getSignedUrlParams = {
Bucket: 'bucket-name',
Key: 'upload-location/filename.pdf',
PartNumber: 1,
UploadId: 'uploadId',
Expires: 10 * 60
}
const signedUrl = await s3.getSignedUrl('uploadPart',getSignedUrlParams);
And the client code (in JS) is:
const response = await axios.put(signedUrl, chunkedFile, {headers: {'Content-Type':'application-pdf'}});
A few things to note:
This code works when I allow all public access to the bucket. However, if all public access is blocked, the code does not work.
With all public access blocked, I am still able to upload to the bucket with the same credentials using aws cli.
I have already tried regenerating the AWS Access Key ID and Secret Access Key, and that didn't help.
Not able to figure out what the problem is. Any help would be appreciated.
PS: This is the first question I have posted here, so please forgive me if I haven't posted it appropriately. Let me know if more details are required.

Try something like this, it worked for me.
var fs = require('fs');
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var fileName = 'your.pdf';
var filePath = './' + fileName;
var fileKey = fileName;
var buffer = fs.readFileSync(filePath); // filePath already includes './'
// S3 Upload options
var bucket = 'loctest';
// Upload
var startTime = new Date();
var partNum = 0;
var partSize = 1024 * 1024 * 5; // Minimum 5MB per chunk (except the last part) http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html
var numPartsLeft = Math.ceil(buffer.length / partSize);
var maxUploadTries = 3;
var multiPartParams = {
Bucket: bucket,
Key: fileKey,
ContentType: 'application/pdf'
};
var multipartMap = {
Parts: []
};
function completeMultipartUpload(s3, doneParams) {
s3.completeMultipartUpload(doneParams, function(err, data) {
if (err) {
console.log("An error occurred while completing the multipart upload");
console.log(err);
} else {
var delta = (new Date() - startTime) / 1000;
console.log('Completed upload in', delta, 'seconds');
console.log('Final upload data:', data);
}
});
}
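The snippet above only sets up the bookkeeping and the completion callback; it omits the calls that actually create the upload and send each part. A minimal sketch of that missing piece, assuming the variables defined above (retry handling with maxUploadTries is left out):
s3.createMultipartUpload(multiPartParams, function (mpErr, multipart) {
  if (mpErr) return console.error('Error creating multipart upload', mpErr);
  for (var start = 0; start < buffer.length; start += partSize) {
    partNum++;
    var partParams = {
      Body: buffer.slice(start, Math.min(start + partSize, buffer.length)),
      Bucket: multiPartParams.Bucket,
      Key: multiPartParams.Key,
      PartNumber: String(partNum),
      UploadId: multipart.UploadId
    };
    (function (partParams) {
      s3.uploadPart(partParams, function (upErr, data) {
        if (upErr) return console.error('Error uploading part', upErr);
        // Remember each part's ETag for completeMultipartUpload
        multipartMap.Parts[partParams.PartNumber - 1] = {
          ETag: data.ETag,
          PartNumber: Number(partParams.PartNumber)
        };
        if (--numPartsLeft > 0) return; // still waiting on other parts
        completeMultipartUpload(s3, {
          Bucket: multiPartParams.Bucket,
          Key: multiPartParams.Key,
          MultipartUpload: multipartMap,
          UploadId: multipart.UploadId
        });
      });
    })(partParams);
  }
});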
You will get an error if the upload fails. We can help you solve this if you print the results of
console.log(this.httpResponse)
and
console.log(this.request.httpRequest)

What worked for me was the signature version. When initializing S3, the signature version should also be specified:
const s3 = new AWS.S3({ apiVersion: '2006-03-01', signatureVersion: 'v4' });
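A sketch combining that with the presigned-URL call from the question, reusing createRequest from the createMultipartUpload step; note that getSignedUrl returns the URL synchronously when called without a callback, so no await is needed:
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  region: 'us-east-1',
  signatureVersion: 'v4'
});

// Called without a callback, getSignedUrl returns the URL directly
const signedUrl = s3.getSignedUrl('uploadPart', {
  Bucket: 'bucket-name',
  Key: 'upload-location/filename.pdf',
  PartNumber: 1,
  UploadId: createRequest.UploadId, // from the createMultipartUpload call in the question
  Expires: 10 * 60
});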

Remove the Content-Type header from the axios call.
const response = await axios.put(signedUrl, chunkedFile);
When uploading only a single part you're not actually uploading a complete file, so the content type is not application-pdf in your case (and note the correct MIME type would be application/pdf anyway).
This is different from doing a PUT for a complete object.
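Building on that, a minimal client-side sketch: PUT each chunk with no extra headers, keep the ETag axios returns, and hand the collected parts back to the server for completeMultipartUpload. The chunks and partUrls names are illustrative, and the bucket's CORS configuration must expose the ETag header for the browser to read it.
// Client: upload each part with a bare PUT to its presigned URL
const uploadedParts = [];
for (let i = 0; i < chunks.length; i++) {              // chunks: array of file slices (illustrative)
  const res = await axios.put(partUrls[i], chunks[i]); // partUrls: presigned uploadPart URLs (illustrative)
  uploadedParts.push({ ETag: res.headers.etag, PartNumber: i + 1 });
}

// Server: finish the multipart upload with the collected parts
await s3.completeMultipartUpload({
  Bucket: 'bucket-name',
  Key: 'upload-location/filename.pdf',
  UploadId: createRequest.UploadId,
  MultipartUpload: { Parts: uploadedParts }
}).promise();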

Related

Uploading and showing image from S3

I am trying to upload an image through a React app to an S3 bucket and then receive back the URL and show the image on the screen.
I am able to upload the image (sort of) and get the URL back from the S3 server, but when I download it I am unable to open it - the format is unsupported and I can't use the img tag to show it on the webpage. I guess it has something to do with the conversion to base64 but I can't figure out why it is not working.
The frontend (React) is:
const uploadImageToBucket = async (image) => {
console.log("fff",image)
let image_location
try {
const response = axios.post("http://localhost:5000/user/blogmanage/uploadimage",image)
image_location = response.then((response)=>response.data.body);
console.log("img loc", image_location)
return image_location;
} catch (error) {
}
}
The backend (Node.js) is:
router.post("/blogmanage/uploadimage", async (req,res)=>{
const s3 = new AWS.S3({
accessKeyId: process.env["AWS_ACCESS_KEY_ID"],
secretAccessKey: process.env["AWS_SECRET_KEY"],
region: process.env['AWS_REGION']
});
const BUCKET_NAME = "mrandmrseatmedia";
var base64data = new Buffer.from( 'binary',req.body);
const params = {
Bucket: BUCKET_NAME,
Key: "test/test2.jpg",
Body: base64data
}
s3.upload(params, function (err,data){
if (err){
console.log(err)
res.status(404).json({msg:err});
}
else{
const image_location = `${data.Location}`;
console.log(`File uploaded successfully. ${data.Location}`);
res.status(200).json({body:image_location});
}
})
});
Thanks!
After a lot of testing, retesting, and rewriting, using this repo as an example
https://github.com/Jerga99/bwm-ng/blob/master/server/services/image-upload.js
it works.
The use of base64 is wrong in this case; it corrupts the file in some way. The multer library fixes it.
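For context, the linked example uses multer with the multer-s3 storage engine, which streams the raw file to S3 instead of round-tripping it through a base64 string. A rough sketch of that setup, reusing the bucket from the question (the 'image' field name and key pattern are illustrative):
const aws = require('aws-sdk');
const multer = require('multer');
const multerS3 = require('multer-s3');

const s3 = new aws.S3(); // credentials and region picked up from the environment

const upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'mrandmrseatmedia',
    contentType: multerS3.AUTO_CONTENT_TYPE, // keeps image/jpeg, image/png, ... intact
    key: function (req, file, cb) {
      cb(null, 'test/' + Date.now() + '-' + file.originalname);
    }
  })
});

// multer-s3 puts the uploaded object's URL on req.file.location
router.post('/blogmanage/uploadimage', upload.single('image'), (req, res) => {
  res.status(200).json({ body: req.file.location });
});
On the React side the file would then be sent as multipart/form-data (for example via FormData) rather than as a raw image object.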

How to upload an image to an S3 bucket locally and generate a thumbnail automatically?

Good evening
I have this task: I have to upload an image to the S3 bucket using Node.js and generate a thumbnail on the fly, not by using a Lambda trigger. Everything should be done from my local machine terminal or against a local server (tested with Postman). I tried this code:
const fs = require('fs');
const AWS = require('aws-sdk');
const ACESS_ID = 'A**********KV';
const SECRET_ID = 'G***********0';
const BUCKET_NAME = 'node-image-bucket';
// Initializing s3 interface
const s3 = new AWS.S3({
accessKeyId: ACESS_ID,
secretAccessKey: SECRET_ID,
});
// File reading function to S3
const uploadFile = (fileName) => {
// Read content from the file
const fileContent = fs.readFileSync(fileName);
// Setting up S3 upload parameters
const params = {
Bucket: BUCKET_NAME,
Key: 'scene2.jpg',
Body: fileContent
};
// Uploading files to the bucket
s3.upload(params, function(err, data){
if(err){
throw err;
}
console.log(data);
console.log(`File uploaded Successfully. ${data.Location}`);
});
};
uploadFile('./images/bg-hd.jpg');
The above code works fine for a single image, but the problem is that every time I upload a file to the S3 bucket I need to change the S3 params Key string value.
I want to upload multiple images at once, use a buffer for performance, and have thumbnails created automatically in a different folder of the same bucket.
Could anyone help me, guys? Any help is appreciated!
You cannot upload multiple files with one S3 operation, but you can use the sharp module (https://www.npmjs.com/package/sharp)
to resize your image before calling the S3 API.
const sharp = require('sharp');
async function resize(buffer, width, height) {
return sharp(buffer).resize(width, height).toBuffer();
}
const thumbnailWidthSize = 200;
const thumbnailWidthHeight = 200;
const thumbnailImage = await resize(fileContent, thumbnailWidthSize, thumbnailWidthHeight)
You can then reuse your current upload function and run it once for each image size you need, with different keys, and wrap those calls in Promise.all so the operation fails if any of the uploads fails (a sketch of the s3upload helper follows the snippet).
await Promise.all([
s3upload(image, imageKey),
s3upload(thumbnailImage, thumbnailImageKey)
])
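The s3upload helper referenced above isn't defined in the answer; a minimal sketch, reusing the s3 client and BUCKET_NAME from the question (the keys are illustrative):
function s3upload(body, key) {
  return s3.upload({
    Bucket: BUCKET_NAME, // 'node-image-bucket' from the question
    Key: key,
    Body: body
  }).promise();
}

const image = fileContent;                         // the original buffer read in the question
const imageKey = 'images/scene2.jpg';              // illustrative keys
const thumbnailImageKey = 'thumbnails/scene2.jpg';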
So, there are two parts to your question:
Converting the image to a thumbnail on the fly while uploading to the S3 bucket:
You can use the thumbd npm module and create a thumbd server.
Thumbd is an image thumbnailing server built on top of Node.js, SQS, S3, and ImageMagick.
Prerequisites for the thumbd server:
Thumbd requires the following environment variables to be set:
AWS_KEY the key for your AWS account (the IAM user must have access to the appropriate SQS and S3 resources).
AWS_SECRET the AWS secret key.
BUCKET the bucket to download the original images from. The thumbnails will also be placed in this bucket
AWS_REGION the AWS Region of the bucket. Defaults to: us-east-1.
CONVERT_COMMAND the ImageMagick convert command. Defaults to convert.
REQUEST_TIMEOUT how long to wait in milliseconds before aborting a remote request. Defaults to 15000.
S3_ACL the acl to set on the uploaded images. Must be one of private, or public-read. Defaults to private.
S3_STORAGE_CLASS the storage class for the uploaded images. Must be either STANDARD or REDUCED_REDUNDANCY. Defaults to STANDARD.
SQS_QUEUE the queue name to listen for image thumbnailing.
When running locally, I set these environment variables in a .env file and execute thumbd using pm2/forever/foreman.
Setup:
apt-get install imagemagick
npm install thumbd -g
thumbd install
thumbd start // Run thumbd as a server
After the thumbd server is up and running, refer to the code below to convert an image to a thumbnail while uploading to the S3 bucket.
var aws = require('aws-sdk');
var url = require("url");
var awsS3Config = {
accessKeyId: ACESS_ID,
secretAccessKey: SECRET_ID,
region: 'us-west-2'
}
var BUCKET_NAME = 'node-image-bucket';
var sourceImageDirectory = "/tmp/"
var imageUploadDir = "/thumbnails/"
var imageName = 'image.jpg'
var uploadImageName = 'image.jpg'
aws.config.update(awsS3Config);
var s3 = new aws.S3();
var Client = require('thumbd').Client,
client = new Client({
awsKey: awsS3Config.accessKeyId,
awsSecret: awsS3Config.secretAccessKey,
s3Bucket: BUCKET_NAME,
sqsQueue: 'ThumbnailCreator',
awsRegion: awsS3Config.region,
s3Acl: 'public-read'
});
export function uploadAndResize(sourceImageDirectory, imageName, imageUploadDir, uploadImageName) {
return new Promise((resolve, reject)=>{
client.upload(sourceImageDirectory + imageName, imageUploadDir + uploadImageName, function(err) {
if (err) {
reject(err);
} else {
client.thumbnail(imageUploadDir + uploadImageName, [{
"suffix": "medium",
"width": 360,
"height": 360,
"background": "white",
"strategy": "%(command)s %(localPaths[0])s -resize %(width)sX%(height)s^ -gravity north -extent %(width)sX%(height)s %(convertedPath)s"
}, {
"suffix": "thumb",
"width": 100,
"height": 100,
"background": "white",
"strategy": "%(command)s %(localPaths[0])s -resize %(width)sX%(height)s^ -gravity north -extent %(width)sX%(height)s %(convertedPath)s"
}], {
//notify: 'https://callback.example.com'
});
var response = {};
//https://s3-ap-us-west-2.amazonaws.com/node-image-bucket/1/5825c7d0-127f-4dac-b802-ca24efba2bcd-original.jpeg
response.url = 'https://s3-' + awsS3Config.region + '.amazonaws.com/' + BUCKET_NAME + '/' + imageUploadDir;
response.uploadImageName = uploadImageName;
response.sourceImageName = imageName;
resolve(response);
}
})
})
}
Second, you wanted to upload multiple images without changing the key string:
Loop over the method below for all the files in a local path and you are good to go (see the sketch after the code).
export function uploadFiles(localPath, localFileName, fileUploadDir, uploadFileName) {
return new Promise((resolve, reject) => {
fs.readFile(localPath + localFileName, function (err, file) {
if (err) {
return reject(err);
}
var params = {
ACL: 'public-read',
Bucket: BUCKET_NAME,
Key: uploadFileName,
Body: file
};
s3.upload(params, function (err, data) {
fs.unlink(localPath + localFileName, function (err) {
if (err) {
reject(err);
} else {
resolve(data)
}
});
});
});
})
}
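A sketch of that loop, assuming the uploadFiles function above and a flat directory of images (the folder names are illustrative):
const fs = require('fs');

async function uploadDirectory(localPath, uploadDir) {
  const files = fs.readdirSync(localPath); // flat directory assumed
  // Key each object under the target folder so the uploads land in one prefix
  await Promise.all(
    files.map(fileName => uploadFiles(localPath, fileName, uploadDir, uploadDir + fileName))
  );
  console.log('Uploaded ' + files.length + ' files from ' + localPath);
}

uploadDirectory('./images/', 'thumbnails/').catch(console.error);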

How to upload files to Amazon AWS S3 with NodeJS?

I'm trying to upload files from a MERN application I'm working on. I'm almost done with the NodeJS back end part.
Said application will allow users to upload images (jpg, jpeg, png, gif, etc.) to an Amazon AWS S3 bucket that I created.
Well, let's put it this way. I created a helper:
const aws = require('aws-sdk');
const fs = require('fs');
// Enter copied or downloaded access ID and secret key here
const ID = process.env.AWS_ACCESS_KEY_ID;
const SECRET = process.env.AWS_SECRET_ACCESS_KEY;
// The name of the bucket that you have created
const BUCKET_NAME = process.env.AWS_BUCKET_NAME;
const s3 = new aws.S3({
accessKeyId: ID,
secretAccessKey: SECRET
});
const uploadFile = async images => {
// Read content from the file
const fileContent = fs.readFileSync(images);
// Setting up S3 upload parameters
const params = {
Bucket: BUCKET_NAME,
// Key: 'cat.jpg', // File name you want to save as in S3
Body: fileContent
};
// Uploading files to the bucket
s3.upload(params, function(err, data) {
if (err) {
throw err;
}
console.log(`File uploaded successfully. ${data.Location}`);
});
};
module.exports = uploadFile;
That helper takes three of my environment variables which are the name of the bucket, the keyId and the secret key.
When adding files from the form (which will eventually be added in the front end), the user will be able to send more than one file.
Right now my current post route looks exactly like this:
req.body.user = req.user.id;
req.body.images = req.body.images.split(',').map(image => image.trim());
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });
That right there works great, but it takes req.body.images as a string with each image separated by a comma. What would the right approach be to upload (to AWS S3) the many files selected from the Windows file-selection dialog? I tried doing this but it did not work :/
// Add user to req,body
req.body.user = req.user.id;
uploadFile(req.body.images);
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });
Thanks, and hopefully you guys can help me out with this one. Right now I'm testing it with Postman but later on the files will be sent via a form.
Well, you could just call uploadFile multiple times, once for each file:
try{
const promises= []
for(const img of images) {
promises.push(uploadFile(img))
}
await Promise.all(promises)
//rest of logic
}catch(err){ //handle err }
On a side note, you should wrap s3.upload in a promise:
const AWS = require('aws-sdk')
const s3 = new AWS.S3({
accessKeyId: process.env.AWS_ACCESS_KEY,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
})
module.exports = ({ params }) => {
return new Promise((resolve, reject) => {
s3.upload(params, function (s3Err, data) {
if (s3Err) return reject(s3Err)
console.log(`File uploaded successfully at ${data.Location}`)
return resolve(data)
})
})
}
Bonus: if you wish to avoid having your backend handle uploads, you can use AWS S3 signed URLs and let the client browser handle the upload, saving your server resources.
One more thing: your Post object should only contain the URLs of the media, not the media itself. A sketch putting this together is shown below.
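A sketch of the route handler under that approach, assuming uploadFile has been changed to return the promise-wrapped s3.upload result shown above:
// Upload every image first, then store only the returned S3 URLs on the post
req.body.user = req.user.id;

const images = req.body.images.split(',').map(image => image.trim());
const results = await Promise.all(images.map(image => uploadFile(image)));

req.body.images = results.map(result => result.Location); // URLs only, not the media
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });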
// Setting up S3 upload parameters
const params = {
Bucket: bucket, // bucket name
Key: fileName, // File name you want to save as in S3
Body: Buffer.from(imageStr, 'binary'), //image must be in buffer
ACL: 'public-read', // allow file to be read by anyone
ContentType: 'image/png', // image header for browser to be able to render image
CacheControl: 'max-age=31536000, public' // caching header for browser
};
// Uploading files to the bucket
try {
const result = await s3.upload(params).promise();
return result.Location;
} catch (err) {
console.log('upload error', err);
throw err;
}

How do I set images uploaded to S3 with a Node.js script to display instead of download?

I have a Node.js script that uploads files to AWS S3 through the command line. The problem I'm having is that when I try to view the file in the browser it automatically downloads it.
I have done some research and most other posts point out the headers, but I have verified the headers are correct (image/png)
Additionally, when I upload the same file through the AWS console (log into AWS), I am able to view the file within the browser.
var fs = require('fs');
var path = require('path');
var AWS = require('aws-sdk');
AWS.config.update({region: myRegion});
var s3 = new AWS.S3({apiVersion: '2006-03-01'});
var uploadParams = {
Bucket: process.argv[2],
Key: '', // Key set below
Body: '', // Body set below after createReadStream
ContentType: 'image/jpeg',
ACL: 'public-read',
ContentDisposition: 'inline'
};
var file = process.argv[3];
var fileStream = fs.createReadStream(file);
fileStream.on('error', function(err) {
console.log('File Error', err);
});
uploadParams.Body = fileStream;
uploadParams.Key = path.basename(file);
s3.putObject(uploadParams, function(errBucket, dataBucket) {
if (errBucket) {
console.log("Error uploading data: ", errBucket);
} else {
console.log(dataBucket);
}
});
I get a successful upload, but I am unable to view the file in the browser as it auto-downloads.
You have to specify the Content-Disposition as part of the request headers. You cannot specify it as part of the request parameters. Specify it in the headers explicitly as below.
var params = {Bucket : "bucketname" , Key : "keyName" , Body : "actualData"};
s3.putObject(params)
.on('build', function(req){
req.httpRequest.headers['Content-Type'] = 'application/pdf'; // Whatever you want
req.httpRequest.headers['Content-Disposition'] = 'inline';
})
.send(function(err, data){
if(err){
console.log(err);
return res.status(400).json({sucess: false});
}else{
console.log(data);
return res.status(200).json({success: true});
}
});
Code to upload objects/images to S3
module.exports = function(app, models) {
var fs = require('fs');
var AWS = require('aws-sdk');
var accessKeyId = "ACESS KEY HERE";
var secretAccessKey = "SECRET KEY HERE";
AWS.config.update({
accessKeyId: accessKeyId,
secretAccessKey: secretAccessKey
});
var s3 = new AWS.S3();
app.post('/upload', function(req, res){
var params = {
Bucket: 'bucketname',
Key: 'keyname.png',
Body: "GiveSomeRandomWordOraProperBodyIfYouHave"
};
s3.putObject(params, function (perr, pres) {
if (perr) {
console.log("Error uploading data: ", perr);
} else {
console.log("Successfully uploaded data to myBucket/myKey");
}
});
});
}
The above code will make sure the object has been uploaded to S3. You can see it listed in the S3 bucket in the browser, but you can't view its contents from the bucket listing.
You cannot view items within S3. S3 is a storage box; you can only download and upload elements in it. If you need to view the contents, you have to download the object and view it in the browser or any viewer of your choice. If you simply need to list the objects in S3, use the code below.
Code to list objects of s3
var AWS = require('aws-sdk');
AWS.config.update({accessKeyId: 'mykey', secretAccessKey: 'mysecret', region: 'myregion'});
var s3 = new AWS.S3();
var params = {
Bucket: 'bucketName',
Delimiter: '/',
Prefix: 's/prefix/objectPath/'
}
s3.listObjects(params, function (err, data) {
if(err)throw err;
console.log(data);
});
Use the S3 listing to see the elements of S3. Create a hyperlink for each listed item and make it point to an S3 download URL; this way you can view it in the browser and also download it if you need to.
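For a private bucket, those hyperlinks can be presigned getObject URLs generated from the listing; a minimal sketch reusing the listObjects call above:
s3.listObjects(params, function (err, data) {
  if (err) throw err;
  var links = data.Contents.map(function (item) {
    return s3.getSignedUrl('getObject', {
      Bucket: params.Bucket,
      Key: item.Key,
      Expires: 60 * 5 // each link is valid for five minutes
    });
  });
  console.log(links); // render these as the <a href> targets
});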
In case you need to view the contents via Node.js, use the code below to load the image as if you were loading it from a remote URL.
Code to Download contents:
var fs = require('fs'),
request = require('request');
var download = function(uri, filename, callback){
request.head(uri, function(err, res, body){
console.log('content-type:', res.headers['content-type']);
console.log('content-length:', res.headers['content-length']);
request(uri).pipe(fs.createWriteStream(filename)).on('close', callback);
});
};
download('https://s3/URL', 'name.png', function(){
console.log('done');
});
Code to load an image into a buffer:
const request = require('request');
let url = 'http://s3url/image.png';
request({ url, encoding: null }, (err, resp, buffer) => {
// typeof buffer === 'object'
// Use the buffer
// This buffer will now contains the image data
});
Use the above to load the image into a buffer. Once it's in a buffer, you can manipulate it the way you need. The above code won't download the image to disk, but it helps you manipulate the image from S3 using a buffer.
The linked page contains specific Node.js code examples for uploading and manipulating S3 objects; use it for reference.

S3 how to find if object has pre-signed URL?

Learning S3, I know how to generate a presigned URL:
const aws = require('aws-sdk')
const s3 = new aws.S3()
aws.config.update({
accessKeyId: 'id-omitted',
secretAccessKey: 'key-omitted'
})
const myBucket = 'foo'
const myKey = 'bar.png'
const signedUrlExpireSeconds = 60 * 5
const url = s3.getSignedUrl('getObject', {
Bucket: myBucket,
Key: myKey,
Expires: signedUrlExpireSeconds
})
console.log(`Presigned URL: ${url}`)
and from reading the documentation I know I can retrieve what's in the bucket with headObject, but I've been testing to find whether an object already has a presigned URL:
1st attempt:
let signedUrl = await s3.validSignedURL('getObject', params).promise()
console.log(`Signed URL: ${signedUrl}`)
2nd attempt:
await s3.getObject(params, (err, data) => {
if (err) console.log(err)
return data.Body.toString('utf-8')
})
3rd attempt:
let test = await s3.headObject(params).promise()
console.log(`${test}`)
and I'm coming up short. I know I could create a file or log to a file when a presigned URL is created, but I think that would be a hack. Is there a way in Node I can check an object to see if it has a presigned URL created for it? I'm not looking to do this in the dashboard; I'm looking for a way to do this solely in the terminal/script. Going through the tags and querying Google, I'm not having any luck.
Referenced:
S3 pre-signed url - check if url was used?
Creating Pre-Signed URLs for Amazon S3 Buckets
GET Object
Pre-Signing AWS S3 URLs
How to check if an prefix / key exists on S3 before creating a presigned URL?
How to get response from S3 getObject in Node.js?
AWS signed url if the object exists using promises
Is there a way in Node I can check an object to see if it has a presigned URL created for it?
Short answer: No
Long answer: There is no information about signed URLs stored on the object, nor any list of created URLs. You can even create a signed URL completely on the client side without invoking any service.
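This is easy to verify: getSignedUrl signs the request locally with your credentials and makes no call to S3, so it happily returns a URL even for a key that does not exist (the key below is illustrative):
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // credentials from the environment/shared config

// No network request happens here; the URL is computed purely from the
// bucket, key, expiry and your credentials, even if the object is missing.
const url = s3.getSignedUrl('getObject', {
  Bucket: 'foo',
  Key: 'this-object-does-not-exist.png',
  Expires: 60 * 5
});
console.log(url);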
This question is interesting. I tried to find whether the presigned URL is stored anywhere, but found nothing.
What gusto2 says is true: you can create a presigned URL without calling any AWS service, which is exactly what the aws-sdk does.
Check this file: https://github.com/aws/aws-sdk-js/blob/cc29728c1c4178969ebabe3bbe6b6f3159436394/ts/cloudfront.ts
Then you can see how the presigned URL is generated:
var getRtmpUrl = function (rtmpUrl) {
var parsed = url.parse(rtmpUrl);
return parsed.path.replace(/^\//, '') + (parsed.hash || '');
};
var getResource = function (url) {
switch (determineScheme(url)) {
case 'http':
case 'https':
return url;
case 'rtmp':
return getRtmpUrl(url);
default:
throw new Error('Invalid URI scheme. Scheme must be one of'
+ ' http, https, or rtmp');
}
};
getSignedUrl: function (options, cb) {
try {
var resource = getResource(options.url);
} catch (err) {
return handleError(err, cb);
}
var parsedUrl = url.parse(options.url, true),
signatureHash = Object.prototype.hasOwnProperty.call(options, 'policy')
? signWithCustomPolicy(options.policy, this.keyPairId, this.privateKey)
: signWithCannedPolicy(resource, options.expires, this.keyPairId, this.privateKey);
parsedUrl.search = null;
for (var key in signatureHash) {
if (Object.prototype.hasOwnProperty.call(signatureHash, key)) {
parsedUrl.query[key] = signatureHash[key];
}
}
try {
var signedUrl = determineScheme(options.url) === 'rtmp'
? getRtmpUrl(url.format(parsedUrl))
: url.format(parsedUrl);
} catch (err) {
return handleError(err, cb);
}
return handleSuccess(signedUrl, cb);
}
