I am very new to Node.js. I use this code to upload files to an Amazon S3 bucket:
s3.putObject({
Bucket: bucketName,
Key: key,
Body: content
}, (res) => {
console.log("One file added");
});
How can I handle an error if the upload of one file fails?
This is covered in the official documentation (http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property): the callback receives the error as its first argument.
s3.putObject({
Bucket : bucketName,
Key : key,
Body : content
}, (err, res) => {
if (err) {
return console.error(err);
}
console.log("One file added");
});
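If you prefer async/await, aws-sdk v2 request objects also expose a .promise() method. A minimal sketch (the wrapper name and the injected s3 client are illustrative, not part of the SDK):

```javascript
// Wrap the upload so callers can handle failures with try/catch.
// `s3` is any client exposing putObject(params).promise(), as in aws-sdk v2.
async function uploadObject(s3, bucketName, key, content) {
  try {
    await s3.putObject({ Bucket: bucketName, Key: key, Body: content }).promise();
    console.log("One file added");
    return true;
  } catch (err) {
    console.error("Upload failed:", err.message);
    return false;
  }
}
```

This keeps the error handling in one place instead of repeating the `if (err)` check at every call site.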
I have a Program model, and the program has an image attribute which I upload with multer-s3 when creating the Program.
The challenge I am facing is that when I delete the program, everything gets deleted in my database, but the file (image) still exists in my AWS S3 console. How do I delete the file both from my database and from Amazon S3?
Here are my Program routes
This is how I delete my Program
router.delete("/:id/delete", function (req, res) {
const ObjectId = mongoose.Types.ObjectId;
let query = { _id: new ObjectId(req.params.id) };
Program.deleteOne(query, function (err) {
if (err) {
console.log(err);
}
res.send("Success");
});
});
And this is how I create my program:
router.post("/create", upload.single("cover"), async (req, res, next) => {
const fileName = req.file != null ? req.file.filename : null;
const program = new Program({
programtype: req.body.programtype,
title: req.body.title,
description: req.body.description,
programImage: req.file.location,
});
try {
console.log(program);
const programs = await program.save();
res.redirect("/programs");
} catch {
if (program.programImage != null) {
removeprogramImage(program.programImage);
}
res.render("programs/new");
}
});
Looking through the multer-s3 repo, I can't find anything that mentions deleting from S3. There is this function in the source code, but I can't figure out how to use it.
You could try using the AWS SDK directly via deleteObject:
const s3 = new aws.S3({
accessKeyId: 'access-key-id',
secretAccessKey: 'access-key',
Bucket: 'bucket-name',
});
s3.deleteObject({ Bucket: 'bucket-name', Key: 'image.jpg' }, (err, data) => {
  if (err) return console.error(err);
  console.log(data);
});
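A small promise wrapper makes the delete awaitable and surfaces failures to the route that removes the database record. A sketch (the function name is illustrative; `s3` is an aws-sdk v2 style client):

```javascript
// Delete one object and surface failures to the caller.
// `s3` is any client exposing deleteObject(params, callback), as in aws-sdk v2.
function deleteS3Object(s3, bucket, key) {
  return new Promise((resolve, reject) => {
    s3.deleteObject({ Bucket: bucket, Key: key }, (err, data) => {
      if (err) return reject(err);
      resolve(data);
    });
  });
}
```

In the delete route you would call this after `Program.deleteOne` succeeds, passing the key you stored when the image was uploaded.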
I had exactly the same problem ("the file (image) still exists in my AWS S3 console"). It can happen because you pass the image location instead of the image name (key).
When uploading the image to AWS, this is the response:
{
fieldname: 'name',
originalname: 'apple.png',
encoding: '7bit',
mimetype: 'image/png',
size: 59654,
bucket: 'my-bucket-name',
key: 'apple-1426277135446.png', // => this is what needs to be passed as the Key
acl: 'public-read',
contentType: 'application/octet-stream',
contentDisposition: null,
storageClass: 'STANDARD',
serverSideEncryption: null,
metadata: null,
location: 'https://my-bucket-name.Xx.xu-eXst-3.amazonaws.com/apple-1426277135446.png', // => this is what I was passing to deleteObject as the Key
etag: '"CXXFE*#&SHFLSKKSXX"',
versionId: undefined
}
My problem was that I was passing the image location instead of the image name to the deleteObject function:
s3.deleteObject({ Bucket: 'bucket-name', Key: 'image.jpg' }, (err, data) => {
  // The Key has to be the filename with its extension, without the URL
  // prefix like https://my-bucket-name.s3.ff-North-1.amazonaws.com/
  if (err) return console.error(err);
  console.log(data);
});
So eventually I extracted the name of the file (image) with its extension and passed it to the function above.
Here is the helper I used, from this answer:
function parseUrlFilename(url, defaultFilename = null) {
  // e.g. 'https://my-bucket-name.Xx.xu-eXst-3.amazonaws.com/apple-1426277135446.png'
  // No need to change "https://example.com"; it is only present to allow
  // processing relative URLs.
  let filename = new URL(url, "https://example.com").href
    .split("#").shift()
    .split("?").shift()
    .split("/").pop();
  if (!filename) {
    if (defaultFilename) {
      filename = defaultFilename;
    } else {
      // No default filename provided; use a pseudorandom string.
      filename = Math.random().toString(36).substr(2, 10);
    }
  }
  // result: 'apple-1426277135446.png'
  return filename;
}
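For absolute S3 URLs, Node's built-in URL class can extract the key more directly; a sketch (the example URLs are illustrative, modeled on the response above):

```javascript
// Extract the S3 object key from an absolute object URL.
function keyFromS3Url(url) {
  // pathname is e.g. "/apple-1426277135446.png"; strip the leading slash
  // and decode percent-escapes so the key matches what S3 stores.
  // Keys with "folders" (slashes) are preserved as-is.
  return decodeURIComponent(new URL(url).pathname.slice(1));
}
```

Unlike the split-based helper, this keeps any prefix "folders" in the key, which is what deleteObject expects for nested keys.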
I had exactly the same problem and fixed it with the following code:
s3.deleteObjects(
{
Bucket: 'uploads-images',
Delete: {
Objects: [{ Key: 'product-images/slider-image.jpg' }],
Quiet: false,
},
},
function (err, data) {
  if (err) return console.error('err ==>', err);
  console.log('deleted successfully', data);
  return res.status(200).json(data);
}
);
This works exactly for me.
Example of deleting a file by its URL (file location) on the Amazon server.
This code extracts the fileKey from the URL.
First you need to install urldecode:
npm i urldecode
public async deleteFile(location: string) {
// decoder comes from the urldecode package: const decoder = require('urldecode')
let fileKey = decoder(location);
const parts = fileKey.split('amazonaws.com/');
fileKey = parts.pop();
const params = {
Bucket: 'Your Bucket',
Key: fileKey,
};
await this.AWS_S3.deleteObject(params).promise();
}
I successfully upload my files to the AWS S3 bucket, but I cannot get the file's location back to store it in my DB.
Here is my function:
const uploadFile = (filename, key) => {
return new Promise((resolve, reject)=> {
fs.readFile(filename, (err, data) => {
if (err) {
return reject(err);
}
const params = {
Bucket: "BUCKET_NAME",
Key: `student_${key}`, // File name you want to save as in S3
Body: data,
ACL: 'public-read'
};
s3.upload(params, function(err, data){
if (err) {
return reject(err);
}
resolve(data.Location);
});
});
})
};
My router :
uploadFile.uploadFile(request.file.path, request.file.originalname).then((addr) => {
student_photo = addr;
})
Eventually I get an empty string (when I console.log this).
The solution I found was to have uploadFile return a Promise, which makes it "thenable"; in the .then() part I make the query that stores the info in my SQL database.
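To illustrate: the location must be read after await (or inside .then()), not from an outer variable that the callback sets later. A sketch with a stand-in uploadFile (the URL it resolves with is purely illustrative):

```javascript
// Stand-in for the real S3 upload: resolves with the location,
// like resolve(data.Location) in the function above.
const uploadFile = (filename, key) =>
  Promise.resolve(`https://bucket.s3.amazonaws.com/student_${key}`);

async function handleUpload(path, originalname) {
  // The location only exists after the await completes; reading an outer
  // variable assigned inside .then() right away yields the old (empty) value.
  const studentPhoto = await uploadFile(path, originalname);
  return studentPhoto; // now safe to store in the database
}
```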
I need to upload a PDF file from the UI (written in JavaScript) to Amazon S3, but when I upload the file to S3 I get some Unicode-looking text, and when I copy that text to Notepad, or any other text editor, I can see the human-readable text.
I am using pdfmake to get the content of the file and upload it using the getBuffer method.
var content = generatePDF(base64Img);
pdfMake.createPdf(content).getBuffer(function (data) { /* code */ });
The code that I used to upload the file to S3:
var params = {
Bucket: bucketName,
Key: file_name,
Body: data.toString(),
ContentType: 'application/pdf'
}
s3.upload(params, function (err, data) {
if (err) {
// code
} else {
// code
}
});
The file is uploaded successfully, but its content looks like:
!
" #$%&!' ()*')+,
!
!
!
!
But when I paste it into another text editor, I get:
Date: 04/20/19
I solved the above problem by wrapping the data from getBuffer in a Buffer before passing it to S3:
var data = Buffer.from(event.data, 'binary');
Then I uploaded the data to S3:
var params = {
Bucket: bucketName,
Key: file_name,
Body: data,
ContentType: 'application/pdf'
}
s3.upload(params, function (err, data) {
if (err) {
// code
} else {
// code
}
});
To upload a file from the client end directly to an S3 bucket you can use multer-s3.
FROM CLIENT END:
axios.post(url, data, {
onUploadProgress: ProgressEvent => {
this.setState({
loaded: (ProgressEvent.loaded / ProgressEvent.total * 100),
})
},
})
.then(res => { // then print response status
toast.success('Upload Success!')
})
.catch(err => { // then print response status
toast.error('Upload Failed!')
})
SERVER SIDE:
const upload = multer({
storage: multerS3({
s3: s3,
acl: 'public-read',
bucket: BUCKET_NAME,
key: function (req, file, cb) {
UPLOADED_FILE_NAME = Date.now() + '-' + file.originalname;
cb(null, UPLOADED_FILE_NAME);
}
})
}).array('file');
app.post('/upload', function (req, res) {
upload(req, res, function (err) {
if (err instanceof multer.MulterError) {
return res.status(500).json(err)
// A Multer error occurred when uploading.
} else if (err) {
return res.status(500).json(err)
// An unknown error occurred when uploading.
}
console.log('REQUEST FILE IS', UPLOADED_FILE_NAME)
return res.status(200).send(UPLOADED_FILE_NAME)
// Everything went fine.
})
});
Hi, I'm new to AWS Lambda and S3. I'm trying to create an API that will allow me to upload an image. I have the following Lambda code to upload the file. After upload I see that the file size is correct, but the file is corrupted.
let encodedImage = event.body;
console.log(encodedImage);
let decodedImage = Buffer.from(encodedImage, "binary");
console.log(decodedImage.length);
const filePath = `${Date.now()}.jpg`;
const params = {
Bucket: "manufacturer-theme-assets",
Key: filePath,
Body: decodedImage,
ContentType: "image/jpeg",
ACL: "public-read"
};
s3.putObject(params, (err, data) => {
if (err) {
callback(err, null);
} else {
let response = {
statusCode: 200,
body: JSON.stringify(data),
isBase64Encoded: false
};
callback(null, response);
}
});
Make sure you are using the relevant content type for the image, and please share the corrupted image's link from S3, or the error you get while opening the file.
Otherwise, try this first and check:
const filePath = `${Date.now()}.jpg`;
var params = {
ACL: "public-read",
Body: decodedImage,
Bucket: "manufacturer-theme-assets",
Key: filePath
};
s3.putObject(params, function (err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
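A related gotcha: when the request arrives via API Gateway's Lambda proxy integration, binary bodies are usually base64-encoded and event.isBase64Encoded is set, so decoding with 'base64' rather than 'binary' is often the actual fix for corrupted images. A sketch (the helper name is illustrative; the field names follow the Lambda proxy event shape):

```javascript
// Decode a Lambda proxy event body into the raw image bytes.
function decodeBody(event) {
  return event.isBase64Encoded
    ? Buffer.from(event.body, "base64")
    : Buffer.from(event.body, "binary");
}
```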
I need to download tar.gz files from an S3 bucket, extract them, and upload the result to another bucket. The files download properly, and the result is returned as binary data by s3.getObject.
I need to pass that binary data to s3.putObject to upload it to the other bucket.
But I don't know what name to give the Key parameter in s3.putObject when pushing the extracted files. Kindly help me.
This is my code
var bucketName = "my.new.Bucket-Latest";
var fileKey = "Employee.tar.gz";
var params = { Bucket: bucketName, Key: fileKey };
s3.getObject(params, function (err, data) {
if (err) {
return console.log(err);
}
else {
zlib.gunzip(data.Body, function (err, result) {
if (err) {
console.log(err);
} else {
var extractedData = result;
s3.putObject({
Bucket: "bucketName",
Key: " ",
Body: extractedData,
ContentType: 'content-type'
}, function (err) {
console.log('uploaded file: ' + err);
});
}
});
}
});
Well, even your own code says that the Key is the filename. Just generate a filename and assign it to the Key property. It is like a filesystem: when you create a new file, you give it a name.
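For example, since the code above only gunzips the archive (it does not untar it), the payload is still a tar file, so one natural Key is the source key with the .gz suffix stripped; a sketch (the helper name is illustrative):

```javascript
// Derive an object key for the gunzipped payload from the archive's key.
function extractedKey(fileKey) {
  return fileKey.replace(/\.gz$/i, "");
}
```

So the gunzipped "Employee.tar.gz" would be stored as "Employee.tar".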