S3 getSignedUrl v2 equivalent in AWS JavaScript SDK v3 - Node.js

I just started using aws-sdk in my app to upload files to S3, and I'm debating whether to use v2 or v3.
V2 is the whole package, which is super bloated considering I only need the S3 service, not the myriad of other options. However, the v3 documentation is very cryptic and I'm having a really hard time getting the equivalent getSignedUrl function to work.
In v2 I have this code to sign the URL and it works fine. I am using Express on the server:
import aws from 'aws-sdk';

const signS3URL = (req, res, next) => {
  const s3 = new aws.S3({ region: 'us-east-2' });
  const { fileName, fileType } = req.query;
  const s3Params = {
    Bucket: process.env.S3_BUCKET,
    Key: fileName,
    ContentType: fileType,
    Expires: 60,
  };
  s3.getSignedUrl('putObject', s3Params, (err, data) => {
    if (err) {
      next(err);
    }
    res.json(data);
  });
};
Now I've been reading documentation and examples trying to get the v3 equivalent to work, but I can't find any working example of how to use it. Here is how I have set it up so far:
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

export const signS3URL = async (req, res, next) => {
  console.log('Sign');
  const { fileName, fileType } = req.query;
  const s3Params = {
    Bucket: process.env.S3_BUCKET,
    Key: fileName,
    ContentType: fileType,
    Expires: 60,
    // ACL: 'public-read'
  };
  const s3 = new S3Client();
  s3.config.region = 'us-east-2';
  const command = new PutObjectCommand(s3Params);
  console.log(command);
  await getSignedUrl(s3, command).then(signature => {
    console.log(signature);
    res.json(signature);
  }).catch(e => next(e));
};
There are some errors in this code, and the first I can identify is how I create the command variable using the PutObjectCommand function provided by the SDK. The documentation does not clarify what I need to pass it as the "input": https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/interfaces/putobjectcommandinput.html
Does anyone with experience using aws-sdk v3 know how to do this?
Also, a side question: where can I find the API reference for v2? All I can find is the SDK docs that say "v3 now available", and I can't seem to find the v2 reference.
Thanks for your time.

The following code will give you a signed URL in a JSON body, under the key signedUrl.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const signS3URL = async (req, res, next) => {
  const { fileName, fileType } = req.query;
  const s3Params = {
    Bucket: process.env.S3_BUCKET,
    Key: fileName,
    ContentType: fileType,
    // ACL: 'bucket-owner-full-control'
  };
  const s3 = new S3Client({ region: 'us-east-2' });
  const command = new PutObjectCommand(s3Params);
  try {
    // In v3 the expiry moves out of the params and into getSignedUrl's options
    const signedUrl = await getSignedUrl(s3, command, { expiresIn: 60 });
    console.log(signedUrl);
    res.json({ signedUrl });
  } catch (err) {
    console.error(err);
    next(err);
  }
};
Keep the ACL as bucket-owner-full-control if you want the AWS account that owns the bucket to have access to the files.
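As a quick illustration of how a browser would consume this route, here is a hedged client-side sketch; the /sign-s3 path and the file object (e.g. from an input type="file" change event) are assumptions, not part of the original answer:

// Hypothetical client-side usage: ask the Express route above for a
// signed URL, then PUT the file straight to S3.
async function uploadViaSignedUrl(file) {
  const res = await fetch(
    `/sign-s3?fileName=${encodeURIComponent(file.name)}&fileType=${encodeURIComponent(file.type)}`
  );
  const { signedUrl } = await res.json();
  // The Content-Type header must match the ContentType that was signed.
  await fetch(signedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });
}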
You can go to the API Reference for both the JS SDK versions from here

In reference to the AWS docs and @GSSwain's answer (I cannot comment, I'm new): this link shows multiple getSignedUrl examples.
Below is an upload example copied from the AWS docs:
// Import the required AWS SDK clients and commands for Node.js
import {
  CreateBucketCommand,
  DeleteObjectCommand,
  PutObjectCommand,
  DeleteBucketCommand,
} from "@aws-sdk/client-s3";
import { s3Client } from "./libs/s3Client.js"; // Helper function that creates an Amazon S3 service client module.
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import fetch from "node-fetch";

// Set parameters
// Create a random name for the Amazon Simple Storage Service (Amazon S3) bucket and key
export const bucketParams = {
  Bucket: `test-bucket-${Math.ceil(Math.random() * 10 ** 10)}`,
  Key: `test-object-${Math.ceil(Math.random() * 10 ** 10)}`,
  Body: "BODY",
};

export const run = async () => {
  try {
    // Create an S3 bucket.
    console.log(`Creating bucket ${bucketParams.Bucket}`);
    await s3Client.send(new CreateBucketCommand({ Bucket: bucketParams.Bucket }));
    console.log(`Waiting for "${bucketParams.Bucket}" bucket creation...`);
  } catch (err) {
    console.log("Error creating bucket", err);
  }
  try {
    // Create a command to put the object in the S3 bucket.
    const command = new PutObjectCommand(bucketParams);
    // Create the presigned URL.
    const signedUrl = await getSignedUrl(s3Client, command, {
      expiresIn: 3600,
    });
    console.log(
      `\nPutting "${bucketParams.Key}" using signedUrl with body "${bucketParams.Body}" in v3`
    );
    console.log(signedUrl);
    const response = await fetch(signedUrl, { method: "PUT", body: bucketParams.Body });
    console.log(`\nResponse returned by signed URL: ${await response.text()}\n`);
  } catch (err) {
    console.log("Error creating presigned URL", err);
  }
  try {
    // Delete the object.
    console.log(`\nDeleting object "${bucketParams.Key}" from bucket`);
    await s3Client.send(
      new DeleteObjectCommand({ Bucket: bucketParams.Bucket, Key: bucketParams.Key })
    );
  } catch (err) {
    console.log("Error deleting object", err);
  }
  try {
    // Delete the S3 bucket.
    console.log(`\nDeleting bucket ${bucketParams.Bucket}`);
    await s3Client.send(new DeleteBucketCommand({ Bucket: bucketParams.Bucket }));
  } catch (err) {
    console.log("Error deleting bucket", err);
  }
};
run();

Related

Uploading and showing image from S3

I am trying to upload an image through a React app to an S3 bucket, then receive the URL back and show the image on the screen.
I am able to upload the image (sort of) and get the URL back from the S3 server, but when I download it I am unable to open it - the format is unsupported and I can't use the img tag to show it on the webpage. I guess it has something to do with the conversion to base64, but I can't figure out why it is not working.
The frontend (React) is:
const uploadImageToBucket = async (image) => {
  console.log("fff", image);
  let image_location;
  try {
    const response = axios.post("http://localhost:5000/user/blogmanage/uploadimage", image);
    image_location = response.then((response) => response.data.body);
    console.log("img loc", image_location);
    return image_location;
  } catch (error) {
  }
};
The backend (Node.js) is:
router.post("/blogmanage/uploadimage", async (req,res)=>{
const s3 = new AWS.S3({
accessKeyId: process.env["AWS_ACCESS_KEY_ID"],
secretAccessKey: process.env["AWS_SECRET_KEY"],
region: process.env['AWS_REGION']
});
const BUCKET_NAME = "mrandmrseatmedia";
var base64data = new Buffer.from( 'binary',req.body);
const params = {
Bucket: BUCKET_NAME,
Key: "test/test2.jpg",
Body: base64data
}
s3.upload(params, function (err,data){
if (err){
console.log(err)
res.status(404).json({msg:err});
}
else{
const image_location = `${data.Location}`;
console.log(`File uploaded successfully. ${data.Location}`);
res.status(200).json({body:image_location});
}
})
});
Thanks!
After a lot of testing, retesting, and rewriting, using this repo as an example:
https://github.com/Jerga99/bwm-ng/blob/master/server/services/image-upload.js
it works.
The use of base64 is wrong in this case; it corrupts the file in some way. The Multer library fixes it.
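For reference, a minimal sketch of the Multer approach that repo demonstrates, using the multer and multer-s3 packages. The 'image' field name and the key pattern are assumptions, and the client must send the file as multipart/form-data (e.g. a FormData body) rather than base64:

const multer = require('multer');
const multerS3 = require('multer-s3');
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_KEY,
  region: process.env.AWS_REGION
});

// multer-s3 streams the multipart file to S3 untouched, so there is
// no base64 round-trip to corrupt it.
const upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'mrandmrseatmedia',
    contentType: multerS3.AUTO_CONTENT_TYPE,
    key: (req, file, cb) => cb(null, `test/${Date.now()}-${file.originalname}`) // hypothetical key pattern
  })
});

// 'image' is the assumed multipart field name the client sends.
router.post('/blogmanage/uploadimage', upload.single('image'), (req, res) => {
  // multer-s3 sets req.file.location to the uploaded object's URL
  res.status(200).json({ body: req.file.location });
});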

How to Upload files to Amazon AWS S3 with NodeJS?

I'm trying to upload files from a MERN application I'm working on. I'm almost done with the NodeJS back-end part.
The application will allow users to upload images (jpg, jpeg, png, gifs, etc.) to an Amazon AWS S3 bucket that I created.
Well, let's put it this way. I created a helper:
const aws = require('aws-sdk');
const fs = require('fs');

// Enter copied or downloaded access ID and secret key here
const ID = process.env.AWS_ACCESS_KEY_ID;
const SECRET = process.env.AWS_SECRET_ACCESS_KEY;

// The name of the bucket that you have created
const BUCKET_NAME = process.env.AWS_BUCKET_NAME;

const s3 = new aws.S3({
  accessKeyId: ID,
  secretAccessKey: SECRET
});

const uploadFile = async images => {
  // Read content from the file
  const fileContent = fs.readFileSync(images);
  // Setting up S3 upload parameters
  const params = {
    Bucket: BUCKET_NAME,
    // Key: 'cat.jpg', // File name you want to save as in S3
    Body: fileContent
  };
  // Uploading files to the bucket
  s3.upload(params, function(err, data) {
    if (err) {
      throw err;
    }
    console.log(`File uploaded successfully. ${data.Location}`);
  });
};

module.exports = uploadFile;
That helper takes three of my environment variables: the name of the bucket, the key ID, and the secret key.
When adding files from the form (which will eventually be added on the front end), the user will be able to send more than one file.
Right now my current post route looks exactly like this:
req.body.user = req.user.id;
req.body.images = req.body.images.split(',').map(image => image.trim());
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });
That right there works great, but it takes req.body.images as a string with each image separated by a comma. What would the right approach be to upload the many files selected from the Windows directory pop-up to AWS S3? I tried doing this, but it did not work:
// Add user to req.body
req.body.user = req.user.id;
uploadFile(req.body.images);
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });
Thanks, and hopefully you guys can help me out with this one. Right now I'm testing it with Postman, but later on the files will be sent via a form.
Well, you could just call uploadFile multiple times, once for each file:
try {
  const promises = [];
  for (const img of images) {
    promises.push(uploadFile(img));
  }
  await Promise.all(promises);
  // rest of logic
} catch (err) {
  // handle err
}
On a side note, you should wrap s3.upload in a promise:
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});

module.exports = ({ params }) => {
  return new Promise((resolve, reject) => {
    s3.upload(params, function (s3Err, data) {
      if (s3Err) return reject(s3Err);
      console.log(`File uploaded successfully at ${data.Location}`);
      return resolve(data);
    });
  });
};
Bonus: if you wish to avoid having your backend handle uploads, you can use S3 signed URLs and let the client browser handle the upload, saving your server resources.
One more thing: your Post object should only contain the URLs of the media, not the media itself.
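Tying the two suggestions together, a hedged sketch of what the post route could look like; the '/posts' path, the helper location, and the synchronous file read (assuming req.body.images holds local paths, as in the question's helper) are all assumptions:

const fs = require('fs');
const uploadToS3 = require('./helpers/uploadToS3'); // the promisified wrapper above; path is hypothetical

router.post('/posts', async (req, res, next) => {
  try {
    req.body.user = req.user.id;
    const images = req.body.images.split(',').map(image => image.trim());
    // Upload every file in parallel, then keep only the resulting URLs.
    const results = await Promise.all(images.map(image => uploadToS3({
      params: {
        Bucket: process.env.AWS_BUCKET_NAME,
        Key: image,
        Body: fs.readFileSync(image)
      }
    })));
    req.body.images = results.map(result => result.Location); // store URLs, not the media
    const post = await Post.create(req.body);
    res.status(201).json({ success: true, data: post });
  } catch (err) {
    next(err);
  }
});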
// Setting up S3 upload parameters
const params = {
  Bucket: bucket, // bucket name
  Key: fileName, // file name you want to save as in S3
  Body: Buffer.from(imageStr, 'binary'), // image must be in a buffer
  ACL: 'public-read', // allow file to be read by anyone
  ContentType: 'image/png', // content type header so the browser can render the image
  CacheControl: 'max-age=31536000, public' // caching header for the browser
};

// Uploading files to the bucket
try {
  const result = await s3.upload(params).promise();
  return result.Location;
} catch (err) {
  console.log('upload error', err);
  throw err;
}

Upload image to Cloudinary via Lambda using node.js

I'm working on a serverless web app (Node.js) and have come across a problem which has been blocking me for two days now.
Scenario:
- User comes to the site, uploads an image directly to our S3 bucket.
- Event is fired and is added to the AWS SQS queue ready for Lambda to pick up
- Lambda picks up the image key
- Lambda then connects to Cloudinary (via SDK)
- Lambda then tries to upload via the SDK command cloudinary.v2.uploader.upload('s3://bucket-name/file-name', (error, result) => { console.log(error, result)});
Problem:
Nothing gets uploaded to Cloudinary, no errors appear in CloudWatch, and the console.log from the callback is empty.
The S3 bucket is whitelisted, and I've created a file in my S3 bucket at .wellknown/cloudinary/cloud_name.
index.js
import ImageTransform from "./imageTransform";

exports.imageFromQueue = async event => {
  event.Records.forEach(async record => {
    const body = JSON.parse(record.body).Records[0];
    const { eventName, s3 } = body;
    if (eventName === "ObjectCreated:Post") {
      const { key } = s3.object;
      console.log("here", key); // --> This gets printed out
      const data = await ImageTransform(key);
    }
  });
};
imageTransform.js
const cloudinary = require("cloudinary").v2;

cloudinary.config({
  cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
  api_key: process.env.CLOUDINARY_API_KEY,
  api_secret: process.env.CLOUDINARY_API_SECRET
});

const ImageTransform = async imagePath => {
  try {
    const data = await cloudinary.uploader.upload(
      `s3://${process.env.IMAGE_BUCKET}/${imagePath}`
    );
    console.log(data);
    return data;
  } catch (error) {
    console.log("cloudinary - major error", error);
    return { majorError: error };
  }
};

export default ImageTransform;
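One hedged observation on the handler above, since it mixes forEach with async callbacks: Array.prototype.forEach does not await its callbacks, so the Lambda can return before ImageTransform resolves, which would match the silent behaviour described (the first console.log fires, the upload's never does). A minimal sketch of an awaited variant:

// Sketch: map the records to promises and await them all, so the
// Lambda stays alive until every Cloudinary upload settles.
exports.imageFromQueue = async event => {
  await Promise.all(
    event.Records.map(async record => {
      const body = JSON.parse(record.body).Records[0];
      const { eventName, s3 } = body;
      if (eventName === "ObjectCreated:Post") {
        const { key } = s3.object;
        return ImageTransform(key);
      }
    })
  );
};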

Unit test AWS S3 functions with Mocha in Node

How can I mock S3 calls when unit testing in Node? I want to make sure the function is unit tested without making actual calls to S3. I would like to test what happens both when everything goes as expected and when there are errors. I think Sinon is the tool of choice, but I'm not sure how.
My s3 file is:
const AWS = require('aws-sdk');
AWS.config.region = 'ap-southeast-2';
const s3 = new AWS.S3();
const { S3_BUCKET } = process.env;
const propertyCheck = require('./utils/property-check');

module.exports.uploadS3 = (binary, folderName, fileName) => new Promise((resolve, reject) => {
  if (!propertyCheck.valid(binary) ||
      !propertyCheck.validString(folderName) ||
      !propertyCheck.validString(fileName)) {
    const error = '[uploadS3] Couldn\'t upload to S3 because of validation errors.';
    console.error(error);
    return reject(new Error(error));
  }
  const finalUrl = `${encodeURIComponent(folderName)}/${encodeURIComponent(fileName)}`;
  s3.putObject({
    Body: binary,
    Key: finalUrl,
    Bucket: S3_BUCKET,
    ContentType: 'application/pdf',
    ContentDisposition: 'inline',
    ACL: 'public-read'
  }, (error, data) => {
    if (error) {
      console.error(error);
      return reject(new Error(`[uploadS3] ${error}`));
    }
    resolve(`https://${S3_BUCKET}.s3.amazonaws.com/${finalUrl}`);
  });
});
Using Sinon is a great choice.
You could also use aws-sdk-mock, as it takes out a little bit of the work involved with setting up mocks, but you will probably find yourself using both.
Link to aws-sdk-mock: https://www.npmjs.com/package/aws-sdk-mock
As an aside, you can replace your manual Promise creation with the .promise() method that is on most of the aws-sdk API.
Documentation link: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Request.html#promise-property
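A minimal Mocha sketch using aws-sdk-mock, assuming the module above lives at ./s3.js; note that aws-sdk-mock has to stub S3 before the module under test calls new AWS.S3(), which is why the require happens after the mock is installed:

const assert = require('assert');
const AWSMock = require('aws-sdk-mock');

describe('uploadS3', () => {
  let uploadS3;

  before(() => {
    process.env.S3_BUCKET = 'test-bucket'; // hypothetical bucket for the test
    // Stub S3.putObject before ./s3.js instantiates its client.
    AWSMock.mock('S3', 'putObject', (params, callback) => callback(null, {}));
    ({ uploadS3 } = require('./s3'));
  });

  after(() => AWSMock.restore('S3'));

  it('resolves with the public URL when putObject succeeds', async () => {
    const url = await uploadS3(Buffer.from('%PDF-'), 'invoices', 'a.pdf');
    assert.strictEqual(url, 'https://test-bucket.s3.amazonaws.com/invoices/a.pdf');
  });
});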

Can't upload images in nodejs using aws-sdk

I've tried using aws-sdk and knox, and I get status code 301 when trying to upload images: "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint." This works in PHP.
AWS.config.loadFromPath(__dirname + '/config/config.json');

fs.readFile(source, function (err, data) {
  var s3 = new AWS.S3();
  s3.client.createBucket({ Bucket: 'mystuff' }, function() {
    var d = {
      Bucket: 'mystuff',
      Key: 'img/test.jpg',
      Body: data,
      ACL: 'public-read'
    };
    s3.client.putObject(d, function(err, res) {
      if (err) {
        console.log("Error uploading data: ", err);
        callback(err);
      } else {
        console.log("Successfully uploaded data to myBucket/myKey");
        callback(res);
      }
    });
  });
});
I actually solved this problem. In your config you have to have a region key; since my bucket was "US Standard", I left the region blank and it worked.
config.json:
{ "accessKeyId": "secretKey", "secretAccessKey": "secretAccessKey", "region": ""}
Go to the S3 management console, select one of your files, and click on Properties -> look at the file link.
US Standard:
https://s3.amazonaws.com/yourbucket/
host in your console window: yourbucket.s3.amazonaws.com/
us-west-1:
https://s3-us-west-1.amazonaws.com/yourbucket/
host in your console window: yourbucket.s3-us-west-1.amazonaws.com/
Did you try .send()? I can upload to S3 with the code below.
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/AWSRequest.html
var s3object = { Bucket: 'mystuff', Key: name, Body: data['data'] };
s3.client.putObject(s3object).done(function(resp) {
  console.log("Successfully uploaded data");
}).fail(function(resp) {
  console.log(resp);
}).send();
I had the same problem with the new SDK and solved it by setting the endpoint option explicitly.
Reference: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor_details
Snippet:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ endpoint: 'https://s3-your-region-varies.amazonaws.com' }),
    myBucket = 'your-bucket-name';

var params = { Bucket: myBucket, Key: 'myUpload', Body: "Test" };
s3.putObject(params, function(err, data) {
  if (err) {
    console.log(err);
  } else {
    console.log("Successfully uploaded data to " + myBucket + "/myUpload");
  }
});
Alternatively, you can solve this by setting the region in your config; you just have to be precise with your region name.
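For example, a minimal sketch (the region value is a placeholder for wherever your bucket actually lives):

// Matching the client's region to the bucket's region avoids the
// 301 redirect without hard-coding an endpoint.
var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-west-1' });
var s3 = new AWS.S3();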
