Download files from AWS S3 in a Node.js app

I have code for uploading files to an AWS S3 bucket:

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'mybucketname',
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  }),
  fileFilter: myfilefiltergeshere...
})
I want to download the uploaded files, but I don't know how to identify a file on S3. Is it the key field from the upload that identifies it, or is it something else I have to specify?

Yes, the Key is what identifies the object, so for the download you pass the same value you generated at upload time to getObject:

import AWS from 'aws-sdk'

AWS.config.update({
  accessKeyId: '....',
  secretAccessKey: '...',
  region: '...'
})

const s3 = new AWS.S3()

async function download (filename) {
  const { Body } = await s3.getObject({
    Key: filename,
    Bucket: 'mybucketname'
  }).promise()
  return Body
}

const dataFiles = await Promise.all(files.map(file => download(file)))

I have the filenames in an array, which is why I used files.map, but the code should give you the general idea. For more detail you can read the AWS SDK documentation for getObject.
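Since the Key is the only identifier, you normally persist it somewhere (e.g. in your database) at upload time. As a sketch, here is a hypothetical key-building helper of my own (the sanitising convention is an assumption, not part of multer-s3):

```javascript
// Build a unique, S3-safe object key from the original filename.
// The "timestamp-name" convention here is just an example.
function buildKey(originalName, timestamp) {
  const safeName = originalName.replace(/[^a-zA-Z0-9._-]/g, '_');
  return timestamp + '-' + safeName;
}

// In the multer-s3 config above you could then use:
// key: (req, file, cb) => cb(null, buildKey(file.originalname, Date.now()))
```

Storing the returned key alongside your own record is what lets you call getObject later.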

Related

Is there a way to stream large data directly to S3 files instead saving locally first?

My Node.js app produces data that is too big to save into a file before streaming, so my question is: is there a way to stream this data directly to AWS S3?
You can use Upload from @aws-sdk/lib-storage, which allows you to upload buffers, blobs, or streams.
For example, if you have a stream you can pass it as Body:
const { S3Client } = require('@aws-sdk/client-s3');
const { Upload } = require('@aws-sdk/lib-storage');
async function upload(stream, fileName, bucketName, contentType) {
  const s3Client = new S3Client({ region: "us-east-1" });
  const parallelUpload = new Upload({
    client: s3Client,
    params: {
      Bucket: bucketName,
      Key: fileName,
      Body: stream,
      ContentType: contentType,
    }
  });
  return await parallelUpload.done();
}

How do I restrict file types to images only using multerS3?

I have been able to connect my S3 bucket to my web application using multerS3, however, I'm not sure how to go about restricting the files that can be uploaded to images only. Before I was using S3, I used multerFilter to check the mime type, and that worked. However, now, using the same code, it seems like any type of file will be uploaded to my S3 bucket. What's the best way to restrict file types to images with multerS3? Thank you.
const multerFilter = (req, file, cb) => {
  if (file.mimetype.startsWith('image')) {
    cb(null, true);
  } else {
    cb(new AppError('Not an image! Please upload images only.', 400), false);
  }
};

const upload = multer({
  storage: multerS3({
    s3: s3,
    acl: 'public-read',
    bucket: 'BUCKET NAME',
    contentType: multerS3.AUTO_CONTENT_TYPE,
    fileFilter: multerFilter,
  }),
});
I see the error in my code and have fixed it: I should not have put fileFilter inside the storage setting, since it is an option for multer itself, not for multerS3. This is the fixed code:
const upload = multer({
  storage: multerS3({
    s3: s3,
    acl: 'public-read',
    bucket: 'BUCKET NAME',
    contentType: multerS3.AUTO_CONTENT_TYPE,
  }),
  fileFilter: multerFilter,
});
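The filter logic itself can be unit-tested in isolation. Here is a standalone version of the check; note I use the stricter prefix 'image/' and a plain Error in place of the AppError class from the original, both of which are my own assumptions:

```javascript
// Returns true only for image mimetypes such as 'image/png' or 'image/jpeg'.
function isImage(mimetype) {
  return typeof mimetype === 'string' && mimetype.startsWith('image/');
}

// The multer fileFilter is then a thin wrapper around the pure check:
const imageOnlyFilter = (req, file, cb) => {
  if (isImage(file.mimetype)) {
    cb(null, true);
  } else {
    cb(new Error('Not an image! Please upload images only.'), false);
  }
};
```

Keeping the predicate separate from the multer callback makes the accept/reject rule easy to verify without spinning up a request.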

Upload image to S3 from a URL

I am trying to upload an image, for which I have a URL, into S3.
I want to do this without downloading the image to local storage first.
filePath = imageURL;
let params = {
  Bucket: 'bucketname',
  Body: fs.createReadStream(filePath),
  Key: "folder/" + id + "originalImage"
};
s3.upload(params, function (err, data) {
  if (err) console.log(err);
  if (data) console.log("original image success");
});
I'm expecting a success but get the following error (myURL is a publicly accessible https URL):

{ [Error: ENOENT: no such file or directory, open '<myURL HERE>']
  errno: -2,
  code: 'ENOENT',
  syscall: 'open',
  path: '<myURL HERE>' }
There are two ways to place files into an Amazon S3 bucket:
- Upload the contents via PutObject(), or
- Copy an object that is already in Amazon S3.
So, in the case of your "web accessible Cloudinary image", it will need to be retrieved, then uploaded. You could either fully download the image and then upload it, or you could do some fancy stuff with streaming bodies where the source image is read into a buffer and then written to S3. Either way, the file will need to be "read" from Cloudinary then "written" to S3.
As for the "web accessible Amazon S3 link", you could use the CopyObject() command to instruct S3 to directly copy the object from the source bucket to the destination bucket. (It also works within a bucket.)
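For the S3-to-S3 case, CopyObject takes a CopySource of the form "sourceBucket/sourceKey", and the key portion must be URL-encoded. A small hypothetical helper of my own for building the parameters (bucket and key names are placeholders):

```javascript
// Build CopyObject parameters. CopySource is "bucket/key" with the
// key URL-encoded; S3 accepts %2F for slashes inside the key.
function copyParams(srcBucket, srcKey, destBucket, destKey) {
  return {
    Bucket: destBucket,
    Key: destKey,
    CopySource: srcBucket + '/' + encodeURIComponent(srcKey),
  };
}

// With the v2 SDK the call would then look roughly like:
// s3.copyObject(copyParams('src-bucket', 'a b.jpg', 'dst-bucket', 'copy.jpg'), cb)
```

The copy happens entirely server-side, so the object's bytes never pass through your application.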
This makes a GET request to the image URL and passes the response stream directly to s3.upload as the Body, without saving the image to the local file system (note that the request package is deprecated; a readable stream from another HTTP client works the same way):

const request = require('request');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const params = {
  Bucket: 'bucketname',
  Key: "folder/" + id + "originalImage",
  // request() returns a readable stream, which s3.upload accepts as Body
  Body: request(imageURL)
};
s3.upload(params, function (err, data) {
  if (err) console.log(err);
  if (data) console.log("original image success");
});
You can use the following snippet to upload a file to S3 using the file's URL.

import axios from 'axios';
import awsSDK from 'aws-sdk';

const uploadImageToS3 = () => {
  const url = 'www.abc.com/img.jpg';
  axios({
    method: 'get',
    url: url,
    responseType: 'arraybuffer',
  })
    .then(function (response) {
      console.log('res', response.data);
      const arrayBuffer = response.data;
      if (arrayBuffer) {
        awsSDK.config.update({
          accessKeyId: 'aws_access_key_id',
          secretAccessKey: 'aws_secret_access_key',
          region: 'aws_region',
        });
        const s3Bucket = new awsSDK.S3({
          apiVersion: '2006-03-01',
          params: {
            Bucket: 'aws_bucket_name'
          }
        });
        const data = {
          Body: arrayBuffer,
          Key: 'unique_key.fileExtension',
          ACL: 'public-read'
        };
        s3Bucket.upload(data, (err, res) => {
          if (err) {
            console.log('s3 err:', err)
          }
          if (res) {
            console.log('s3 res:', res)
          }
        })
      }
    });
}

uploadImageToS3();

How to stop an image downloading instead of displaying, with AWS S3 and Node.js

I have uploaded an image using Node.js to an AWS S3 bucket, and the upload succeeds, but when I open the file it downloads instead of displaying in the browser. I used the following code to upload the image to S3:
var AWS = require('aws-sdk');
var config = require('../../server/config');

AWS.config.update({
  accessKeyId: config.aws.accessKeyId,
  secretAccessKey: config.aws.secretAccessKey,
  region: config.aws.region
});

var s3 = new AWS.S3();
var Busboy = require('busboy');
var busboyBodyParser = require('busboy-body-parser');
app.use(busboyBodyParser());

app.post('/upload', function(req, res) {
  var directory = req.body.directory;
  console.log(req.files.file);
  var image = req.files.file.name;
  var contenttype = req.files.file.mimetype;
  if (req.body.directory) {
    var file = directory + '/' + image;
  } else {
    var file = image;
  }
  var data = req.files.file.data;
  var keys = {
    Bucket: req.body.bucket,
    Key: file,
    Body: data,
    ACL: 'public-read',
    contentType: contenttype
  };
  s3.upload(keys, function(err, result) {
    if (err) {
      res.send({
        isError: true,
        status: 400,
        message: "File Not Uploaded",
        data: err
      });
    } else {
      var data = {
        Location: result.Location,
        key: result.key,
        Bucket: result.Bucket
      };
      res.send({
        isError: false,
        status: 200,
        message: "File Uploaded",
        data: data
      });
    }
  });
});
I was stuck with this as well, but the following works:
let params = {
  ACL: 'public-read',
  Bucket: process.env.BUCKET_NAME,
  Body: fs.createReadStream(req.file.path),
  ContentType: req.file.mimetype,
  Key: `avatar/${req.file.originalname}`
};

req.file.mimetype is what fixed it. It is essentially the same as ContentType: 'image/jpeg', but it picks up the type of whatever file the user actually uploaded, rather than hardcoding image/jpeg or image/png.
I hope your issue is fixed though.
I have found the answer:
use ContentType: 'image/jpeg' (or a variable holding the mimetype) in the keys to upload an image to AWS S3. Note the capital C in ContentType.

How can I upload a file from an HTTP response body to S3 using putObject() from the AWS SDK (in Node.js)?

I'm trying to save a PDF file into S3 with the AWS SDK.
I'm getting the PDF through the body of a POST request (Application/PDF).
When saving the file to the local disk with fs.writeFile, the file looks OK. But when uploading it to S3, the file is corrupted (it's just a single-page PDF).
Any help or hint would be greatly appreciated!
var data = body; // body from a POST request.
var fileName = "test.pdf";

fs.writeFile(fileName, data, {encoding: "binary"}, function(err, data) {
  console.log('saved'); // File is OK!
});

s3.putObject({ Bucket: "bucketName", Key: fileName, Body: data }, function(err, data) {
  console.log('uploaded'); // File uploads incorrectly.
});
EDIT:
It works if I write the file first, then read it back and upload that:

fs.writeFile(fileName, data, {encoding: "binary"}, function(err, data) {
  fs.readFile(fileName, function(err, fileData) {
    s3.putObject({ Bucket: "bucketName", Key: fileName, Body: fileData }, function(err, data) {
      console.log('uploaded'); // File uploads correctly.
    });
  });
});
Try setting the ContentType and/or ContentEncoding on your put to S3:
ContentType: 'binary', ContentEncoding: 'utf8'
See the linked question "putObject makes object larger on server in Nodejs" for a working example.
I think it is because the data is consumed (i.e. it's a stream).
That would explain why, after writing the data to disk, you send nothing to S3, and why reading the file back gives you a valid PDF to send.
Try and see if it works by sending the data directly to S3 without writing it to disk first.
Yes, you forgot about the callback of the writeFile function, so when you started uploading to Amazon S3 your file wasn't saved completely yet. You shouldn't forget that Node.js is asynchronous: the app won't wait for fs.writeFile to finish its work, it simply runs s3.putObject at the same time.
My code is as below. It uses Promise.promisify from the bluebird library:

global.Promise = require('bluebird');

const aws = require('aws-sdk');
const aswAccessKey = {
  accessKeyId: 'your-accesskey-id',
  secretAccessKey: 'your-secret-access-key'
};
const fs = require('fs');
const path = require('path');
const uuidV4 = require('uuid/v4');

// Create S3 service object
// available apiVersion: '2006-03-01', '2013-04-01'
const s3 = new aws.S3(Object.assign(aswAccessKey, {
  apiVersion: '2013-04-01'
}));

function putObject(bucketName, file) {
  console.log('putObject into ', bucketName);
  // If we don't use a versioned bucket, we must not pass VersionId
  const params = {
    Bucket: bucketName,
    Key: '',
    Body: 'Plain text',
    ACL: 'public-read',
    ContentType: 'binary',
    CacheControl: 'max-age=172800'
  };
  return Promise
    .promisify(fs.readFile, {
      context: fs
    })(file)
    .then((fileData) => {
      console.log(fileData);
      params.Body = fileData;
      params.Key = 'g01/' + uuidV4() + '-' + path.basename(file);
      return Promise
        .promisify(s3.putObject, {
          context: s3
        })(params)
        .then((data) => {
          console.log('successful');
          console.log(data);
        })
        .catch((err) => {
          console.log('Error', err);
        });
    })
    .catch(() => {
    });
}
