How would you upload a file through mocha/chai for testing? - node.js

I'm using the method where the client sends a request to the server to upload a file to an s3 bucket, and then the server sends back a signed request to allow the client to do this. I'm following this tutorial -
https://devcenter.heroku.com/articles/s3-upload-node
Does anyone know how I can write an API endpoint test for this? I'm not writing the client-side code since it's an iPhone app, but I still want to cover the endpoint in my tests.

Based on code like this, from your link:
app.get('/sign-s3', (req, res) => {
  const s3 = new aws.S3();
  const fileName = req.query['file-name'];
  const fileType = req.query['file-type'];
  const s3Params = {
    Bucket: S3_BUCKET,
    Key: fileName,
    Expires: 60,
    ContentType: fileType,
    ACL: 'public-read'
  };
  s3.getSignedUrl('putObject', s3Params, (err, data) => {
    if (err) {
      console.log(err);
      return res.end();
    }
    const returnData = {
      signedRequest: data,
      url: `https://${S3_BUCKET}.s3.amazonaws.com/${fileName}`
    };
    res.write(JSON.stringify(returnData));
    res.end();
  });
});
I would write a unit test, since a full integration test would depend on your AWS account in your test environment. For that I would mock req and s3.getSignedUrl and assert that getSignedUrl is called with the correct parameters. I would also add a test, still with mocks, to make sure the correct JSON is returned; a sketch of this approach follows below.
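A rough sketch of that kind of test with Mocha, Chai (chai-http) and Sinon could look like the following. It assumes your Express app is exported from app.js and that S3_BUCKET is set in the test environment; treat it as a starting point rather than a drop-in test.

const chai = require('chai');
const chaiHttp = require('chai-http');
const sinon = require('sinon');
const aws = require('aws-sdk');
const app = require('../app'); // assumed path to your Express app

chai.use(chaiHttp);
const { expect } = chai;

describe('GET /sign-s3', () => {
  let getSignedUrlStub;

  beforeEach(() => {
    // Stub the SDK so no real call to AWS is made.
    getSignedUrlStub = sinon
      .stub(aws.S3.prototype, 'getSignedUrl')
      .callsFake((operation, params, callback) => callback(null, 'https://signed.example.com'));
  });

  afterEach(() => getSignedUrlStub.restore());

  it('signs the request with the right parameters and returns JSON', async () => {
    const res = await chai
      .request(app)
      .get('/sign-s3')
      .query({ 'file-name': 'photo.png', 'file-type': 'image/png' });

    // getSignedUrl was called with the expected operation and params
    const [operation, params] = getSignedUrlStub.firstCall.args;
    expect(operation).to.equal('putObject');
    expect(params).to.include({ Key: 'photo.png', ContentType: 'image/png' });

    // and the endpoint returned the expected JSON
    const body = JSON.parse(res.text);
    expect(body.signedRequest).to.equal('https://signed.example.com');
    expect(body.url).to.match(/photo\.png$/);
  });
});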

Related

How to upload files to AWS S3? I get an error when doing it (using private buckets)

I am trying to upload files to AWS S3 using getSignedUrlPromise() to obtain the access link, since the bucket is completely private; I want it to only be accessible through the links that the server generates with getSignedUrlPromise().
The problem comes when I try to make a PUT request to the link I obtain: the request fails with an error.
Here is the code for configuring AWS in Node.js:
import AWS from 'aws-sdk';

const bucketName = 'atlasfitness-progress';
const region = process.env.AWS_REGION;
const accessKeyId = process.env.AWS_ACCESS_KEY;
const secretAccessKey = process.env.AWS_SECRET_KEY;
const URL_EXPIRATION_TIME = 60; // in seconds

const s3 = new AWS.S3({
  region,
  accessKeyId,
  secretAccessKey,
  signatureVersion: 'v4'
});
export const generatePreSignedPutUrl = async (fileName, fileType) => {
  const params = {
    Bucket: bucketName,
    Key: fileName,
    Expires: 60
  };
  const url = await s3.getSignedUrlPromise('putObject', params);
  return url;
};
And then I have an Express controller to send the link when it's requested:
routerProgress.post('/prepare_s3', verifyJWT, async (req, res) => {
  res.send({ url: await generatePreSignedPutUrl(req.body.fileName, req.body.fileType) });
});

export { routerProgress };
But the problem comes in the frontend. Here is the function that first asks for the link and then tries to upload the file to S3:
const upload = async (e) => {
  e.preventDefault();
  await JWT.checkJWT();
  const requestObject = {
    fileName: frontPhoto.name,
    fileType: frontPhoto.type,
    token: JWT.getToken()
  };
  const url = (await axiosReq.post(`${serverPath}/prepare_s3`, requestObject)).data.url;
  // The following function is the one that doesn't work
  const response = await fetch(url, {
    method: "PUT",
    headers: {
      "Content-Type": "multipart/form-data"
    },
    body: frontPhoto
  });
  console.log(response);
};
And with that, everything is in place. I'm a newbie to AWS, so it's quite possible I've made a fairly serious mistake without realizing it, but I've been stuck here for many days and I'm starting to get desperate. So if anyone can spot the error or knows how I can make it work, I would be very grateful for your help.
The first thing I note about your code is that you await on async operations but do not provide for exceptions. This is very bad practice as it hides possible failures. The rule of thumb is: whenever you need to await for a result, wrap your call in a try/catch block.
In your server-side code above, you have two awaits which can fail, and if they do, any error they generate is lost.
A better strategy would be:
export const generatePreSignedPutUrl = async (fileName, fileType) => {
  const params = {
    Bucket: bucketName,
    Key: fileName,
    Expires: 60
  };
  let url;
  try {
    url = await s3.getSignedUrlPromise('putObject', params);
  } catch (err) {
    // do something with the error here
    // and abort the operation.
    return;
  }
  return url;
};
And in your POST route:
routerProgress.post('/prepare_s3', verifyJWT, async (req, res) => {
  let url;
  try {
    url = await generatePreSignedPutUrl(req.body.fileName, req.body.fileType);
  } catch (err) {
    res.status(500).send({ ok: false, error: `failed to get url: ${err}` });
    return;
  }
  res.send({ url });
});
And in your client-side code, follow the same strategy. At the very least, this will give you a far better idea of where your code is failing.
Two things to keep in mind:
Functions declared using the async keyword do not return the value of the expected result; they return a Promise of the expected result, and like all Promises, can be chained to both .catch() and .then() clauses.
When calling async functions from within another async function, you must do something with any exceptions you encounter because, due to their nature, Promises do not share any surrounding runtime context which would allow you to capture any exceptions at a higher level.
So you can use either Promise "thenable" chaining or try/catch blocks within async functions to trap errors, but if you choose not to do either, you run the risk of losing any errors generated within your code.
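For instance, a minimal sketch of the two styles using the generatePreSignedPutUrl function from above (the wrapper name here is just illustrative):

// Style 1: try/catch inside an async function
async function getUploadUrl(fileName, fileType) {
  try {
    return await generatePreSignedPutUrl(fileName, fileType);
  } catch (err) {
    console.error('failed to sign URL:', err);
    throw err; // re-throw so the caller can decide what to do
  }
}

// Style 2: Promise chaining with .then()/.catch()
generatePreSignedPutUrl('photo.png', 'image/png')
  .then((url) => console.log('signed URL:', url))
  .catch((err) => console.error('failed to sign URL:', err));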
Here's an example of how to create a pre-signed URL that can be used to PUT an MP4 file.
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  apiVersion: '2010-12-01',
  signatureVersion: 'v4',
  region: process.env.AWS_DEFAULT_REGION || 'us-east-1',
});

const params = {
  Bucket: 'mybucket',
  Key: 'videos/sample.mp4',
  Expires: 1000,
  ContentType: 'video/mp4',
};

const url = s3.getSignedUrl('putObject', params);
console.log(url);
The resulting URL will look something like this:
https://mybucket.s3.amazonaws.com/videos/sample.mp4?
Content-Type=video%2Fmp4&
X-Amz-Algorithm=AWS4-HMAC-SHA256&
X-Amz-Credential=AKIASAMPLESAMPLE%2F20200303%2Fus-east-1%2Fs3%2Faws4_request&
X-Amz-Date=20211011T090807Z&
X-Amz-Expires=1000&
X-Amz-Signature=long-sig-here&
X-Amz-SignedHeaders=host
You can test this URL by uploading sample.mp4 with curl as follows:
curl -X PUT -T sample.mp4 -H "Content-Type: video/mp4" "<signed url>"
A few notes:
- Hopefully you can use this code to work out where your problem lies.
- Pre-signed URLs are created locally by the SDK, so there's no need to go async.
- I'd advise creating the pre-signed URL and then testing the PUT with curl before testing your browser client, to ensure that curl works OK. That way you will know whether to focus your attention on the production of the pre-signed URL or on the use of the pre-signed URL within your client.
If your attempt to upload via curl fails with Access Denied, then check that:
- the pre-signed URL has not expired (they have time limits)
- the AWS credentials you used to sign the URL allow PutObject to that S3 bucket
- the S3 bucket policy does not explicitly deny your request

How to Upload files to Amazon AWS3 with NodeJS?

I'm trying to upload files from a MERN application I'm working on. I'm almost done with the NodeJS back end part.
Said application will allow users to upload images (jpg, jpeg, png, gif, etc.) to an Amazon AWS S3 bucket that I created.
Well, let's put it this way. I created a helper:
const aws = require('aws-sdk');
const fs = require('fs');

// Enter copied or downloaded access ID and secret key here
const ID = process.env.AWS_ACCESS_KEY_ID;
const SECRET = process.env.AWS_SECRET_ACCESS_KEY;

// The name of the bucket that you have created
const BUCKET_NAME = process.env.AWS_BUCKET_NAME;

const s3 = new aws.S3({
  accessKeyId: ID,
  secretAccessKey: SECRET
});

const uploadFile = async images => {
  // Read content from the file
  const fileContent = fs.readFileSync(images);

  // Setting up S3 upload parameters
  const params = {
    Bucket: BUCKET_NAME,
    // Key: 'cat.jpg', // File name you want to save as in S3
    Body: fileContent
  };

  // Uploading files to the bucket
  s3.upload(params, function(err, data) {
    if (err) {
      throw err;
    }
    console.log(`File uploaded successfully. ${data.Location}`);
  });
};

module.exports = uploadFile;
That helper takes three of my environment variables: the name of the bucket, the key ID, and the secret key.
When adding files from the form (which will eventually be added in the front end), the user will be able to send more than one file.
Right now my current post route looks exactly like this:
req.body.user = req.user.id;
req.body.images = req.body.images.split(',').map(image => image.trim());
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });
That right there works great, but it takes req.body.images as a string with each image separated by a comma. What would the right approach be to upload (to AWS S3) the many files selected from the Windows directory pop-up? I tried doing this, but it did not work:
// Add user to req.body
req.body.user = req.user.id;
uploadFile(req.body.images);
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });
Thanks, and hopefully you guys can help me out with this one. Right now I'm testing it with Postman, but later on the files will be sent via a form.
Well, you could just call uploadFile once for each file:
try {
  const promises = [];
  for (const img of images) {
    promises.push(uploadFile(img));
  }
  await Promise.all(promises);
  // rest of logic
} catch (err) {
  // handle err
}
On a side note, you should wrap S3.upload in a promise:
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});

module.exports = ({ params }) => {
  return new Promise((resolve, reject) => {
    s3.upload(params, function (s3Err, data) {
      if (s3Err) return reject(s3Err);
      console.log(`File uploaded successfully at ${data.Location}`);
      return resolve(data);
    });
  });
};
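A quick usage sketch, assuming the wrapper above lives in s3Upload.js (the key and body shown are illustrative; build them however suits your app):

const s3Upload = require('./s3Upload'); // assumed path to the promise wrapper above

const uploadFile = (img) =>
  s3Upload({
    params: {
      Bucket: process.env.AWS_BUCKET_NAME,
      Key: img.name,   // illustrative: the object key you want in S3
      Body: img.data   // illustrative: the file contents (Buffer or stream)
    }
  });

Each call then resolves with the data object returned by S3, so Promise.all gives you all the uploaded locations at once.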
Bonus: if you wish to avoid having your backend handle uploads, you can use AWS S3 signed URLs and let the client browser handle that, thus saving your server resources.
One more thing: your Post object should only contain the URLs of the media, not the media itself.
// Setting up S3 upload parameters
const params = {
  Bucket: bucket,                          // bucket name
  Key: fileName,                           // file name you want to save as in S3
  Body: Buffer.from(imageStr, 'binary'),   // image must be in a buffer
  ACL: 'public-read',                      // allow file to be read by anyone
  ContentType: 'image/png',                // image header so the browser can render the image
  CacheControl: 'max-age=31536000, public' // caching header for the browser
};

// Uploading files to the bucket
try {
  const result = await s3.upload(params).promise();
  return result.Location;
} catch (err) {
  console.log('upload error', err);
  throw err;
}

How to write to an existing file in a S3 bucket based on the pre signed URL?

I've been searching for a way to write to a JSON file in an S3 bucket from a pre-signed URL. From my research it appears it can be done, but these examples are not in Node:
http PUT a file to S3 presigned URLs using ruby
PUT file to S3 with presigned URL
Uploading a file to a S3 Presigned URL
Write to a AWS S3 pre-signed url using Ruby
How to create and read .txt file with fs.writeFile to AWS Lambda
Not finding a Node solution from my searches, and using a 3rd-party API, I'm trying to write the callback to a JSON file that is in an S3 bucket. I can generate the pre-signed URL with no issues, but when I try to write dummy text to the pre-signed URL I get:
Error: ENOENT: no such file or directory, open
'https://path-to-file-with-signed-url'
When I try to use writeFile:
fs.writeFile(testURL, `This is a write test: ${Date.now()}`, function(err) {
  if (err) return err;
  console.log("File written to");
});
My understanding of the documentation for the file argument is that I can pass a URL. I'm starting to believe this might be a permissions issue, but I'm not having any luck with the documentation.
After implementing node-fetch I still get an error (403 Forbidden) when writing to a file in S3 via the pre-signed URL. Here is the full code from the module I've written:
const aws = require('aws-sdk');
const config = require('../config.json');
const fetch = require('node-fetch');

const expireStamp = 604800; // 7 days
const existsModule = require('./existsModule');

module.exports = async function(toSignFile) {
  let checkJSON = await existsModule(`${toSignFile}.json`);
  if (checkJSON == true) {
    let testURL = await s3signing(`${toSignFile}.json`);
    fetch(testURL, {
      method: 'PUT',
      body: JSON.stringify(`This is a write test: ${Date.now()}`),
    }).then((res) => {
      console.log(res);
    }).catch((err) => {
      console.log(`Fetch issue: ${err}`);
    });
  }
};

async function s3signing(signFile) {
  const s3 = new aws.S3();
  aws.config.update({
    accessKeyId: config.aws.accessKey,
    secretAccessKey: config.aws.secretKey,
    region: config.aws.region,
  });
  params = {
    Bucket: config.aws.bucket,
    Key: signFile,
    Expires: expireStamp
  };
  try {
    // let signedURL = await s3.getSignedUrl('getObject', params)
    let signedURL = await s3.getSignedUrl('putObject', params);
    console.log('\x1b[36m%s\x1b[0m', `Signed URL: ${signedURL}`);
    return signedURL;
  } catch (err) {
    return err;
  }
}
Reviewing the permissions, I see no issues with uploading, and write access has been set in the permissions. In Node, how can I write to a file in the S3 bucket using that file's pre-signed URL as the path?
fs is the filesystem module. You can't use it as an HTTP client.
You can use the built-in https module, but I think you'll find it easier to use node-fetch.
fetch('your signed URL here', {
  method: 'PUT',
  body: JSON.stringify(data),
  // more options and request headers and such here
}).then((res) => {
  // do something
}).catch((e) => {
  // do something else
});
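If you would rather stay with the built-in https module, a rough sketch of the same PUT could look like this (untested, for illustration only):

const https = require('https');

function putToSignedUrl(signedUrl, data) {
  return new Promise((resolve, reject) => {
    const body = JSON.stringify(data);
    const req = https.request(signedUrl, {
      method: 'PUT',
      headers: { 'Content-Length': Buffer.byteLength(body) },
    }, (res) => {
      let responseBody = '';
      res.on('data', (chunk) => (responseBody += chunk));
      res.on('end', () => resolve({ status: res.statusCode, body: responseBody }));
    });
    req.on('error', reject);
    req.write(body);
    req.end();
  });
}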
I was looking for an elegant way to transfer an S3 file to an S3 signed URL using PUT. Most examples I found were using PUT({ body: data }). I came across one suggestion to read the data into a readable stream and then pipe it to the PUT. However, I still didn't like the notion of loading large files into memory and then assigning them to the PUT stream. Piping read to write is always better for memory and performance. Since s3.getObject(params).createReadStream() returns a readable stream, which supports pipe, all we need to do is pipe it correctly to the PUT request, which exposes a write stream.
Get object function
async function GetFileReadStream(key) {
  return new Promise(async (resolve, reject) => {
    var params = {
      Bucket: bucket,
      Key: key
    };
    var fileSize = await s3.headObject(params)
      .promise()
      .then(res => res.ContentLength);
    resolve({ stream: s3.getObject(params).createReadStream(), fileSize });
  });
}
Put object function
const request = require('request');

async function putStream(presignedUrl, readStream) {
  return new Promise((resolve, reject) => {
    var putRequestWriteStream = request.put({
      url: presignedUrl,
      headers: {
        'Content-Type': 'application/octet-stream',
        'Content-Length': readStream.fileSize
      }
    });
    putRequestWriteStream.on('response', function(response) {
      var etag = response.headers['etag'];
      resolve(etag);
    })
    .on('end', () => console.log("put done"));
    readStream.stream.pipe(putRequestWriteStream);
  });
}
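A minimal usage sketch combining the two (the source key and pre-signed URL are placeholders):

async function copyViaSignedUrl(sourceKey, presignedUrl) {
  const source = await GetFileReadStream(sourceKey); // { stream, fileSize }
  const etag = await putStream(presignedUrl, source);
  console.log('uploaded, etag:', etag);
}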
This works great with a very small memory footprint. Enjoy.

node js get image url

I want to:
1- choose an image from my filesystem and upload it to the server/local storage
2- get its URL back using a Node.js service
I managed to do step 1, and now I want to get the image URL instead of the success message in res.end.
Here is my code:
app.post("/api/Upload", function(req, res) {
upload(req, res, function(err) {
if (err) {
return res.end("Something went wrong!");
}
return res.end("File uploaded sucessfully!.");
});
});
I'm using multer to upload the image.
You can do something like this using AWS S3; it returns the URL of the uploaded image:
const AWS = require('aws-sdk');

AWS.config.update({
  accessKeyId: <AWS_ACCESS_KEY>,
  secretAccessKey: <AWS_SECRET>
});

const uploadImage = file => {
  const replaceFile = file.data_uri.replace(/^data:image\/\w+;base64,/, '');
  const buf = Buffer.from(replaceFile, 'base64'); // Buffer.from instead of the deprecated new Buffer()
  const s3 = new AWS.S3();
  s3.upload({
    Bucket: <YOUR_BUCKET>,
    Key: <NAME_TO_SAVE>,
    Body: buf,
    ACL: 'public-read'
  }, (err, data) => {
    if (err) throw err;
    return data.Location; // this is the URL
  });
};
Also, you can check this Express generator, which has a route to upload images to AWS S3: https://www.npmjs.com/package/speedbe
I am assuming that you are saving the image on the server filesystem and not in a storage solution like AWS S3 or Google Cloud Storage, where you get the URL back after upload.
Since you are storing it on the filesystem, you can rename the file with a unique identifier like a uuid or something else.
Then you can make a GET route that takes that ID as a query or path parameter, reads the file having that ID as its name, and sends it back; a sketch of this idea follows below.
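A minimal sketch of that idea with multer, storing uploads in an uploads/ directory (the paths, field names and routes here are illustrative, not your exact setup):

const express = require('express');
const multer = require('multer');
const path = require('path');
const { v4: uuidv4 } = require('uuid');

const app = express();

// Store each upload under a unique id, keeping the original extension.
const storage = multer.diskStorage({
  destination: 'uploads/',
  filename: (req, file, cb) => cb(null, `${uuidv4()}${path.extname(file.originalname)}`),
});
const upload = multer({ storage });

// Upload route: respond with a URL that points at the GET route below.
app.post('/api/Upload', upload.single('image'), (req, res) => {
  res.json({ url: `/api/images/${req.file.filename}` });
});

// GET route: serve the file whose id is in the path.
app.get('/api/images/:id', (req, res) => {
  res.sendFile(path.resolve('uploads', req.params.id));
});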

Unit test AWS S3 functions with Mocha in Node

How can I mock S3 calls when unit testing in Node? I want to make sure the function is unit tested without making actual calls to S3. I would like to test what happens when everything goes as expected and when there are errors. I think Sinon is the tool of choice, but I'm not sure how to use it.
My s3 file is:
const AWS = require('aws-sdk');
AWS.config.region = 'ap-southeast-2';
const s3 = new AWS.S3();
const { S3_BUCKET } = process.env;
const propertyCheck = require('./utils/property-check');

module.exports.uploadS3 = (binary, folderName, fileName) => new Promise((resolve, reject) => {
  if (!propertyCheck.valid(binary) ||
      !propertyCheck.validString(folderName) ||
      !propertyCheck.validString(fileName)) {
    const error = '[uploadS3] Couldn\'t upload to S3 because of validation errors.';
    console.error(error);
    return reject(new Error(error));
  }
  const finalUrl = `${encodeURIComponent(folderName)}/${encodeURIComponent(fileName)}`;
  s3.putObject({
    Body: binary,
    Key: finalUrl,
    Bucket: S3_BUCKET,
    ContentType: 'application/pdf',
    ContentDisposition: 'inline',
    ACL: 'public-read'
  }, (error, data) => {
    if (error) {
      console.error(error);
      return reject(new Error(`[uploadS3] ${error}`));
    }
    resolve(`https://${S3_BUCKET}.s3.amazonaws.com/${finalUrl}`);
  });
});
Using Sinon is a great choice.
You could use aws-sdk-mock, as it takes a little of the work out of setting up mocks, but you will probably find yourself using both.
Link to aws-sdk-mock: https://www.npmjs.com/package/aws-sdk-mock
As an aside, you can replace your manual Promise creation with the .promise() method that is available on most of the aws-sdk API.
Documentation Link: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Request.html#promise-property
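For example, a minimal Mocha/Chai sketch using aws-sdk-mock might look like this. It assumes the module above is saved as s3.js next to the test, that its ./utils/property-check helper is available, and that S3_BUCKET is set before the module is required; treat it as a starting point, not a finished test.

const AWSMock = require('aws-sdk-mock');
const AWS = require('aws-sdk');
const { expect } = require('chai');

describe('uploadS3', () => {
  before(() => {
    process.env.S3_BUCKET = 'test-bucket';
    AWSMock.setSDKInstance(AWS);
    // Replace S3.putObject with a stub that always succeeds.
    AWSMock.mock('S3', 'putObject', (params, callback) => callback(null, {}));
  });

  after(() => AWSMock.restore('S3'));

  it('resolves with the public URL on success', async () => {
    const { uploadS3 } = require('./s3'); // require after the mock is in place
    const url = await uploadS3(Buffer.from('pdf bytes'), 'folder', 'file.pdf');
    expect(url).to.equal('https://test-bucket.s3.amazonaws.com/folder/file.pdf');
  });
});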
