File uploading in Amazon S3 with Node.js

I am using aws-sdk to upload files to Amazon S3. It is working and files are uploading, but my problem is that the file name changes after upload. For example, if I upload sample.jpg, it ends up renamed to something like b4c743c8a2332525.jpg. Here is my code.
var AWS = require('aws-sdk');
var fs = require('fs');

AWS.config.update({
    accessKeyId: key,
    secretAccessKey: secret
});
var fileStream = fs.createReadStream(path);
fileStream.on('error', function (err) {
    if (err) { throw err; }
});
fileStream.on('open', function () {
    var s3 = new AWS.S3();
    s3.putObject({
        Bucket: bucket,
        Key: directory + file,
        Body: fileStream
    }, function (err) {
        if (err) { res.send(err); }
        fs.unlinkSync(path);
    });
});
Is it normal for S3 to change the file name after upload, or is there an option to keep the original file name? Thank you.

Neither S3 nor the AWS SDK picks arbitrary file names for the things you upload. The names are set by your own code.

Check the value of directory + file when you set it as the S3 object key. You may be uploading sample.jpg from your browser (so the file is called sample.jpg locally on your disk), but the temporary file name that Node.js uses to identify the file on its disk may be a hash like b4c743c8a2332525.jpg.
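For example, if the upload is handled by something like multer's disk storage (an assumption; the question doesn't say which middleware is in use), both the temp name and the original name are available, and you can choose the original one as the key:
// Hypothetical sketch with multer's disk storage:
// req.file.path is the temp file on disk (often a random hash),
// req.file.originalname is the name the browser sent.
var fileStream = fs.createReadStream(req.file.path);
s3.putObject({
    Bucket: bucket,
    Key: directory + req.file.originalname, // keep the original name
    Body: fileStream
}, function (err) {
    if (err) { return res.send(err); }
    fs.unlinkSync(req.file.path); // clean up the temp file
    res.send('uploaded');
});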

Related

AWS S3 Uploaded Image is Partially Loaded

I am trying to upload a locally stored image from my Node.js project's file structure to my AWS S3 bucket using the aws-sdk package. The upload succeeds, but the uploaded image is only partially rendered: only the top 1% (12 KB) of it is visible when I view the URL AWS creates for the image. I've logged the file to the console and made sure it was what I thought it was, and it is. But for some reason, when I upload it to S3, it's a truncated / cut-off version of the image.
All of the tutorials seem pretty straightforward, but nobody seems to mention this problem. I've been grappling with it for hours and nothing seems to work. I've tried everything I can find online, like:
- Using fs.createReadStream(fileName) instead of just the file buffer, which didn't work (from Image file cut off when uploading to AWS S3 bucket via Django and Boto3)
- Converting the buffer to a base64 string and sending it that way
- Adding the ContentLength param
- Adding the ContentType to be the exact type of the image
Here's the relevant code:
const aws = require("aws-sdk")
const fs = require("fs")
const { infoLogger } = require("./logger")

async function uploadCoverImage() {
  try {
    aws.config.update({
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      region: "us-east-2",
    })
    const s3 = new aws.S3()
    fs.readFile("cover.jpg", (error, image) => {
      if (error) throw error
      const params = {
        Bucket: process.env.BUCKET_NAME,
        Key: "cover.jpg",
        Body: image,
        ACL: "public-read",
        ContentType: "image/jpeg", // "image/jpeg" is the standard MIME type
      }
      s3.upload(params, (error, res) => {
        if (error) throw error
        console.log(`${JSON.stringify(res)}`)
      })
    })
  } catch (error) {
    infoLogger.error(`Error reading cover file: ${JSON.stringify(error)}`)
  }
}

module.exports = uploadCoverImage
I found out that it was uploading before the image had finished downloading via fs.createReadStream() in a different part of my codebase, which is why it was partially loaded in S3. I never noticed because I only ever saw the fully loaded image in my local file system.
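For anyone who hits the same race, the fix is to wait for the local write to finish before starting the upload. A minimal sketch (downloadStream stands in for wherever the image bytes come from):
const fs = require("fs")

const writeStream = fs.createWriteStream("cover.jpg")
downloadStream.pipe(writeStream) // downloadStream is assumed from elsewhere in the codebase
writeStream.on("finish", () => {
    // the file on disk is complete, so the upload will read the full image
    uploadCoverImage()
})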

Return Data After S3 Upload in Node

I finally have file uploads working through Node and the AWS SDK...there's just one thing missing that I haven't been able to crack yet: I need to get the URL of the newly uploaded file on S3 and save it to my database.
const s3 = new AWS.S3();
const fileContent = Buffer.from(req.files.listPDF.data, 'binary');
const params = {
    Bucket: 'my_bucket',
    Key: filename,
    Body: fileContent
};
s3.upload(params, function(err, data) {
    if (err) {
        throw err;
    }
});
I'm guessing it's promise-related, but I haven't had success with await yet. The data parameter in the callback has a Location attribute, which is what I need. Originally, I tried setting a previously declared variable to it, but that did nothing since the upload had not yet completed. If anyone's grappled with this and cracked the code, I'd really appreciate your thoughts!
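A sketch of the await approach, using the v2 SDK's .promise() helper (the surrounding route and the filename variable are assumed from the snippet above):
async function uploadAndSave(req, filename) {
    const fileContent = Buffer.from(req.files.listPDF.data, 'binary');
    const params = { Bucket: 'my_bucket', Key: filename, Body: fileContent };
    const data = await s3.upload(params).promise(); // resolves once the upload finishes
    return data.Location; // URL of the uploaded object, ready to save to the database
}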

Can't figure out file path to download to local machine from AWS through a Node.js server

I feel stupid. I'm trying to create a function to download files from my AWS S3 bucket to the client PC using Node.js. The filePath variable works perfectly when I run the server on localhost, but when the project is deployed I get Error: EISDIR: illegal operation on a directory, open './data/'
On localhost it downloads the documents to the data directory in my project. I would like it to download to the downloads directory on the client PC.
I have no idea how to specify a file path to a local directory on the client. I've tried all sorts of paths, such as ./d/Users/username/Desktop, with the same error each time.
AWS.config.update({
    accessKeyId: "id",
    secretAccessKey: "key"
});
const filePath = './data/' + req.body.file;
const bucketName = 'bucket';
const key = req.body.key;
var s3 = new AWS.S3();
const downloadFile = (filePath, bucketName, key) => {
    var params = {
        Bucket: bucketName,
        Key: key
    };
    s3.getObject(params, (err, data) => {
        if (err) console.log(err);
        fs.writeFileSync(filePath, data.Body);
        console.log(`${filePath} has been created!`);
        res.send("File Downloaded");
    });
};
downloadFile(filePath, bucketName, key);
});
I'm sure this is simple, but I can't find any specific examples online. Any help would be highly appreciated.
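One way to think about it: the Node server can't write into a directory on the client's machine. To land the file in the user's downloads folder, send it back in the HTTP response and let the browser save it. A rough sketch (assuming an Express route handler):
const params = { Bucket: bucketName, Key: key };
// Content-Disposition tells the browser to save the body as a file
res.attachment(req.body.file);
s3.getObject(params)
    .createReadStream()
    .on('error', err => res.status(500).send(err.message))
    .pipe(res);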

Handling file uploads in Node.js with AWS

I have a server in Node.js and say I have a POST request that uploads a multipart file to my server and then I upload it to AWS S3.
The issue is, with multer, I have to save the file to disk first.
If I deploy my server to EC2, how will file uploading work, since there won't be a destination to temporarily store the file?
Thanks!
You can use streams with busboy. I don't have experience with the AWS Node SDK, but here's the general idea:
req.busboy.on('file', function (fieldname, file, filename) {
    // `file` is a readable stream; S3 consumes it without it ever touching disk
    const params = { Bucket: 'bucket', Key: 'key', Body: file };
    s3.upload(params, (err, data) => {
        console.log(err, data);
    });
});
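For completeness, a rough sketch of how that might be wired up with the connect-busboy middleware (the route and bucket names are made up, and the string filename argument assumes an older busboy; in busboy 1.x the third argument is an info object):
const express = require('express');
const busboy = require('connect-busboy');
const AWS = require('aws-sdk');

const app = express();
const s3 = new AWS.S3();
app.use(busboy());

app.post('/upload', (req, res) => {
    req.pipe(req.busboy); // feed the request into busboy
    req.busboy.on('file', (fieldname, file, filename) => {
        // stream the file straight through to S3
        s3.upload({ Bucket: 'bucket', Key: filename, Body: file }, (err, data) => {
            if (err) return res.status(500).send(err.message);
            res.json({ location: data.Location });
        });
    });
});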

How to create folder or key on s3 using AWS SDK for Node.js?

I'm using the AWS SDK for Node.js to create a folder or key on S3. I searched on Google but found nothing. Does anybody know how I can create a folder under my bucket with the AWS SDK for Node.js?
And how can you check whether this folder already exists in your bucket?
If you use console.aws.amazon.com, you can create a folder in your bucket easily, but I can't figure out how to create one with the AWS SDK for Node.js.
S3 is not your typical file system. It's an object store. It has buckets and objects. Buckets are used to store objects, and objects comprise data (basically a file) and metadata (information about the file). When compared to a traditional file system, it's more natural to think of an S3 bucket as a drive rather than as a folder.
You don't need to pre-create a folder structure in an S3 bucket. You can simply put an object with the key cars/ford/focus.png even if cars/ford/ does not exist.
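For instance, a minimal sketch (the bucket name and body are illustrative):
// No pre-created folders required: the "folders" are implied by the key.
s3.putObject({
    Bucket: 'mybucket',
    Key: 'cars/ford/focus.png',
    Body: imageBuffer // assumed: a Buffer holding the image bytes
}, function (err, data) {
    if (err) console.error(err);
});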
It's valuable to understand what happens at the API level in this case:
- the putObject call will create an object at cars/ford/focus.png, but it will not create anything representing the intermediate folder structure of cars/ or cars/ford/.
- the actual folder structure does not exist, but is implied through delimiter=/ when you call listObjects, returning folders in CommonPrefixes and files in Contents.
- you will not be able to test for the ford sub-folder using headObject because cars/ford/ does not actually exist (it is not an object). Instead, you have two options to see if it (logically) exists (see the sketch after this list):
  - call listObjects with prefix=cars/ford/ and find it in Contents
  - call listObjects with prefix=cars/, delimiter=/ and find it in CommonPrefixes
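A minimal sketch of both checks (the bucket name is illustrative):
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Option 1: any object under the prefix shows up in Contents.
s3.listObjects({ Bucket: 'mybucket', Prefix: 'cars/ford/', MaxKeys: 1 }, function (err, data) {
    if (err) return console.error(err);
    console.log('ford exists:', data.Contents.length > 0);
});

// Option 2: sub-folders of cars/ appear in CommonPrefixes.
s3.listObjects({ Bucket: 'mybucket', Prefix: 'cars/', Delimiter: '/' }, function (err, data) {
    if (err) return console.error(err);
    var folders = data.CommonPrefixes.map(function (p) { return p.Prefix; });
    console.log('ford exists:', folders.indexOf('cars/ford/') !== -1);
});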
It is possible to create an S3 object that represents a folder, if you really want to. The AWS S3 console does this, for example. To create myfolder in a bucket named mybucket, you can issue a putObject call with bucket=mybucket, key=myfolder/, and size 0. Note the trailing forward slash.
Here's an example of creating a folder-like object using the awscli:
aws s3api put-object --bucket mybucket --key cars/ --content-length 0
In this case:
- the folder is actually a zero-sized object whose key ends in /. Note that if you leave off the trailing /, you will get a zero-sized object that appears to be a file rather than a folder.
- you are now able to test for the presence of cars/ in mybucket by issuing a headObject call with bucket=mybucket and key=cars/.
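For example (a sketch; NotFound is the error code the v2 SDK reports for a missing key):
s3.headObject({ Bucket: 'mybucket', Key: 'cars/' }, function (err, data) {
    if (err && err.code === 'NotFound') {
        console.log('cars/ does not exist');
    } else if (err) {
        console.error(err);
    } else {
        console.log('cars/ exists');
    }
});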
Finally, note that your folder delimiter can be anything you like, for example +, because it is simply part of the key and is not actually a folder separator (there are no folders). You can vary your folder delimiter from listObjects call to call if you like.
The code from #user2837831 doesn't seem to work anymore, probably with the new version of the JavaScript SDK. So I am adding here the version of the code that I am using to create a folder inside a bucket with Node.js. This works with the 2.1.31 SDK. What is important is the '/' at the end of the Key value in params: with that, S3 treats it as a folder you are creating, not a file.
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';
var s3Client = new AWS.S3();

var params = {
    Bucket: 'your_bucket_goes_here',
    Key: 'folderInBucket/',
    ACL: 'public-read',
    Body: 'body does not matter'
};

s3Client.upload(params, function (err, data) {
    if (err) {
        console.log("Error creating the folder: ", err);
    } else {
        console.log("Successfully created a folder on S3");
    }
});
This is really straightforward; you can do it with the following. Just remember the trailing slash.
var AWS = require("aws-sdk");
var s3 = new AWS.S3();
var params = {
Bucket: "mybucket",
Key: "mykey/"
};
s3.putObject(params).promise();
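Note that this fires the request but ignores the returned promise; in real code you would await it or attach a .catch handler so a failed request isn't silently swallowed.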
I find that we do not need an explicit directory creation call anymore.
The following works for me and automatically creates the directory hierarchy I need.
var userFolder = 'your_bucket_name' + '/' + variableWithDir1Name + '/' + variableWithDir2Name;
// IMPORTANT: No trailing '/' at the end of the last directory name

AWS.config.region = 'us-east-1';
AWS.config.update({
    accessKeyId: 'YOUR_KEY_HERE',
    secretAccessKey: 'your_secret_access_key_here'
});

var bucket = new AWS.S3({
    params: {
        Bucket: userFolder
    }
});

var contentToPost = {
    Key: <<your_filename_here>>,
    Body: <<your_file_here>>,
    ContentEncoding: 'base64',
    ContentType: <<your_file_content_type>>,
    ServerSideEncryption: 'AES256'
};

bucket.putObject(contentToPost, function (error, data) {
    if (error) {
        console.log("Error in posting Content [" + error + "]");
        return false;
    } /* end if error */
    else {
        console.log("Successfully posted Content");
    } /* end else error */
})
.on('httpUploadProgress', function (progress) {
    // Log progress information
    console.log(Math.round(progress.loaded / progress.total * 100) + '% done');
});
In the console output, the first link logged is the path of the created bucket and the second is the folder structure.
var AWS = require("aws-sdk");
var path = require('path')
// Set the region
AWS.config.update({
region: "us-east-2",
accessKeyId: "your aws acces id ",
secretAccessKey: "your secret access key"
});
s3 = new AWS.S3();
var bucketParams = {
Bucket: "imageurrllll",
ACL: "public-read"
};
s3.createBucket(bucketParams, function(err, data) {
if (err) {
console.log("Error", err);
} else {
console.log("Success", data.Location);
var folder_name = 'root_folder'
//this is for local folder data path
var filePath = "./public/stylesheets/user.png"
//var child_folder='child'
var date = Date.now()
var imgData = `${folder_name}_${date}/` +
path.basename(filePath);
var params = {
Bucket: 'imageurrllll',
Body: '', //here you can give image data url from your local directory
Key: imgData,
ACL: 'public-read'
};
//in this section we are creating the folder structre
s3.upload(params, async function(err, aws_uploaded_url) {
//handle error
if (err) {
console.log("Error", err);
}
//success
else {
console.log("Data Uploaded in:", aws_uploaded_url.Location)
}
})
}
});
