OVERVIEW: I'm using Amazon S3 to let users upload images. I've been working on an edit route so that users can edit their images after they've uploaded them. I'm doing this via a PUT route. This is my current solution, and it works as far as overwriting the current images.
PROBLEM: If a user selects zero new images, it overwrites the old images with blank file paths, i.e. no images show up after editing. The same thing happens if a user edits only one image: that one image updates, but the others are overwritten with blank file paths, so no images show.
QUESTION: What is the correct way to let a user edit an image using Amazon S3 and Multer-S3?
Thanks for any help! :)
app.put("/:id", function(req, res){
    upload(req, res, function(err) {
        if (err) {
            console.log(err);
            return res.redirect('/'); // return so we don't keep running after an error
        }
        var filepath;
        var filepath2;
        var filepath3;
        if (req.files[0]) {
            filepath = req.files[0].key;
        }
        if (req.files[1]) {
            filepath2 = req.files[1].key;
        }
        if (req.files[2]) {
            filepath3 = req.files[2].key;
        }
        var newData = { image: filepath, image2: filepath2, image3: filepath3 };
        Rental.findByIdAndUpdate(req.params.id, { $set: newData }, function(err, rental){
            if (err) {
                req.flash("error", err.message);
                return res.redirect("back");
            }
            req.flash("success", "Successfully Updated!");
            res.redirect("/rentals/" + rental._id);
        });
    });
});
You should validate the user input before calling the downstream updates.
You should not need to call Rental.findByIdAndUpdate if the values have not been populated correctly (as there is nothing to update).
I can't offer more help without seeing the code for Rental.
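To keep the untouched slots intact, one option (a sketch assuming the same `req.files` array and the `image`/`image2`/`image3` field names from the question) is to put a key into the `$set` object only when a replacement file was actually uploaded, so the update never contains an `undefined` value for the other slots:

```javascript
// Build a $set object containing only the slots that actually received a
// replacement upload. fieldNames and the index-to-slot mapping mirror the
// route above; this is a sketch, not a drop-in for every schema.
function buildImageUpdate(files) {
  var fieldNames = ['image', 'image2', 'image3'];
  var update = {};
  fieldNames.forEach(function (name, i) {
    if (files[i] && files[i].key) {
      update[name] = files[i].key; // only overwrite slots with a new upload
    }
  });
  return update;
}

// In the route:
// Rental.findByIdAndUpdate(req.params.id, { $set: buildImageUpdate(req.files) }, ...)
```

Note that with a single `upload.array(...)` the index-to-slot mapping is still ambiguous when only some images are replaced; multer's `upload.fields([{ name: 'image' }, { name: 'image2' }, { name: 'image3' }])` with three named inputs would make each slot explicit.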
I'm developing a feature that allows users to upload images to MongoDB with Node.js.
My problem:
Take the image file from the user's request and do two tasks: store the original image in MongoDB in a collection named "Origin_image", and resize it and store the thumbnail in a collection named "Thumbnail_image".
My solution so far:
I can only store the original image successfully, using multer and multer-gridfs-storage as in the code below:
const multer = require('multer');
const GridFsStorage = require('multer-gridfs-storage');
const crypto = require('crypto'); // needed for randomBytes below

let storageFS = new GridFsStorage({
    db: app.get("mongodb"),
    file: (req, file) => {
        return new Promise((resolve, reject) => {
            crypto.randomBytes(16, (err, buf) => {
                if (err) {
                    return reject(err);
                }
                // buf is unused here; the original name is kept as the filename
                const filename = file.originalname;
                const fileInfo = {
                    filename: filename,
                    bucketName: 'images'
                };
                resolve(fileInfo);
            });
        });
    }
});
var upload = multer({ storage: storageFS }).single('image');
exports.uploadImage = async function (req, res) {
    try {
        upload(req, res, function (err) {
            if (err) {
                return res.send(err);
            }
            res.json({
                status: true,
                filePath: req.file.originalname
            });
        });
    } catch (error) {
        res.send(error);
    }
};
Does anyone have an idea how to solve this? Thanks!
If you are using Angular on your frontend, let the end user handle the image resizing so that your server does not have to deal with the overhead. I am currently using ng2-img-max to resize images; you can initiate the resize on file change.
I also wanted to have thumbnails alongside the originals, but resizing both caused a huge performance hit, and linking them was awkward because GridFS stores the files before you can do anything with them, leaving you only the response. So save yourself some time: resize only once, to your size limit for the user, and for displaying thumbnails use sharp with custom query params to serve the size you want.
Good luck and happy coding.
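A minimal sketch of the query-param idea: a tiny helper parses `?w=` and `?h=` into sharp's resize arguments (the route path, query names, and upload directory here are illustrative, not from the question):

```javascript
// Parse ?w= and ?h= query params into sharp resize arguments.
// Missing or invalid values become null, which sharp treats as "auto".
function parseDims(query) {
  const w = parseInt(query.w, 10);
  const h = parseInt(query.h, 10);
  return { width: isNaN(w) ? null : w, height: isNaN(h) ? null : h };
}

// Illustrative Express route using the helper (sharp and the path are
// assumptions about your setup):
// app.get('/images/:name', (req, res) => {
//   const { width, height } = parseDims(req.query);
//   sharp('./public/uploads/' + req.params.name)
//     .resize(width, height)
//     .toBuffer()
//     .then(buf => res.type('jpeg').send(buf))
//     .catch(() => res.sendStatus(404));
// });
```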
I need to verify that a mongodb document exists before continuing with Azure file upload.
The form consists of a file and a text field, the logic that is needed is the following:
1. Form submission
2. Get the text field
3. Search MongoDB for a document matching the text field data
4. If the document exists, continue with the file upload to Azure; otherwise return
5. Upload the file to Azure
6. Save the URL of the uploaded file in the MongoDB document found in step 3
The problem I'm facing is that I can't access the field data inside form.on('part'), and I can't get it to work by using form.parse first.
This is my code; I'm willing to change libraries and do whatever it takes to get it working.
var form = new multiparty.Form();
var formField = "";
form.parse(req, function(err, fields, files) {
    formField = fields.fieldinform[0];
});
console.log(formField); // empty -- async?
model
    .findOne({ name: formField })
    .then(obj => {
        form.on("part", function(part) {
            if (!part.filename) return;
            var size = part.byteCount;
            var name = part.filename;
            var container = "test";
            blobService.createBlockBlobFromStream(
                container,
                name,
                part,
                size,
                function(error) {
                    if (error) {
                        console.log("failed");
                    }
                }
            );
        });
    })
    .catch(e => {
        // do not continue
    });
Help would be highly appreciated!
After a lot of searching without finding a proper answer, I decided to go with jQuery that changes the form's action URL to /upload/textintextfield before submission, and then grab that value with req.params.textfield in Node.
<script>
    $('#fileUploadForm').submit(function() {
        $('#fileUploadForm').attr('action', '/upload/addvideo/' + $('#textfield').val());
        return true;
    });
</script>
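A server-side alternative, since multiparty's `form.parse(req, cb)` callback only fires after the whole request body has been read (files are buffered to temp paths), is to parse first and only then decide whether to upload. Below is a sketch with the collaborators passed in as arguments so the ordering is explicit; the `file` field name is an assumption, the `test` container and `fieldinform` field come from the question:

```javascript
// Run the Azure upload only after the form has been fully parsed and the
// MongoDB document is confirmed to exist.
function uploadIfDocumentExists(fields, files, model, blobService, done) {
  var name = fields.fieldinform[0];
  model.findOne({ name: name }).then(function (doc) {
    if (!doc) { return done(null, null); } // no document: skip the upload
    var file = files.file[0]; // multiparty buffered the upload to a temp path
    blobService.createBlockBlobFromLocalFile(
      "test", file.originalFilename, file.path,
      function (err, result) { done(err, result); }
    );
  }, done);
}

// In the route:
// form.parse(req, function (err, fields, files) {
//   if (err) return res.status(400).end();
//   uploadIfDocumentExists(fields, files, model, blobService, function () { res.end(); });
// });
```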
I have an Express application that gets an image from a user via a form. There are several things I need to do with the image, and as it gets more complex, I'm not sure how to handle it. It is a message board post where there are some required text fields and an optional image upload. I need to:
Find the orientation of the image from EXIF data and reorient it if needed
Save a copy of the original image to the server (done)
Create a thumbnail of the image and save it to the server (done)
Save the record to the database, whether or not there's an uploaded image (done)
I'm concerned about the order in which I'm doing things, wondering if there's a more efficient way. I know I can call upload inside the route instead of passing it in, but I'd like to not repeat myself when I save the record to the database, since I need to save it whether there's an image or not.
I have code that's working for the final 3 steps, but am open to suggestions on how to improve it. For the first step, I'm stumped at how to go about getting the orientation of the original and rotating it if needed. Is this something I need to do client-side instead? And how do I work it into the existing code?
Here's the code:
Setup
var multer = require('multer');
var storage = multer.diskStorage({
    destination: function (req, file, cb) {
        cb(null, './public/uploads');
    },
    filename: function (req, file, cb) {
        var fileExt = file.mimetype.split('/')[1];
        if (fileExt == 'jpeg') { fileExt = 'jpg'; }
        cb(null, req.user.username + '-' + Date.now() + '.' + fileExt);
    }
});
var restrictImgType = function(req, file, cb) {
    var allowedTypes = ['image/jpeg', 'image/gif', 'image/png'];
    // Check file.mimetype (the argument), not req.file.mimetype,
    // which is not set yet while the filter is running.
    if (allowedTypes.indexOf(file.mimetype) !== -1) {
        cb(null, true); // to accept the file pass `true`
    } else {
        cb(null, false); // to reject this file pass `false`
        //cb(new Error('File type not allowed'));// How to pass an error?
    }
};
// fileFilter is a top-level multer option, not part of limits
var upload = multer({ storage: storage, limits: { fileSize: 3000000 }, fileFilter: restrictImgType });
In Route
router.post('/new', upload.single('photo'), function(req, res){
    var photo = null;
    if (req.file){
        photo = '/uploads/' + req.file.filename;
        // save thumbnail -- should this part go elsewhere?
        im.crop({
            srcPath: './public/uploads/' + req.file.filename,
            dstPath: './public/uploads/thumbs/100x100/' + req.file.filename,
            width: 100,
            height: 100
        }, function(err, stdout, stderr){
            if (err) { return console.error(err); } // throwing in an async callback would crash the server
            console.log('100x100 thumbnail created');
        });
        // I can get orientation here,
        // but the image has already been saved
        im.readMetadata('./public/uploads/' + req.file.filename, function(err, metadata){
            if (err) { return console.error(err); }
            console.log("exif orientation: " + metadata.exif.orientation);
        });
    }
    // Save it
    new Post({
        username: req.user.username,
        title: req.body.title,
        body: req.body.messagebody,
        photo: photo
    }).save(function(err){
        if (err) { console.log(err); }
        res.redirect('/messageboard');
    });
});
Thanks for your help
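For the first step: the `metadata.exif.orientation` value read above maps to a fixed correction. A small lookup (values per the EXIF spec; a library-independent sketch, where `rotate` is degrees clockwise followed by an optional horizontal mirror) shows what has to happen; note that the gm module's `autoOrient()` applies this same correction in one call, if switching wrappers is an option:

```javascript
// Map the EXIF Orientation tag (1-8) to the correction needed.
// 1 means "already upright"; unknown values fall back to no change.
function orientationCorrection(orientation) {
  var table = {
    1: { rotate: 0,   flip: false },
    2: { rotate: 0,   flip: true  },
    3: { rotate: 180, flip: false },
    4: { rotate: 180, flip: true  },
    5: { rotate: 90,  flip: true  },
    6: { rotate: 90,  flip: false },
    7: { rotate: 270, flip: true  },
    8: { rotate: 270, flip: false }
  };
  return table[orientation] || table[1];
}

// e.g. a phone photo taken in portrait is often tagged 6:
// orientationCorrection(6) -> rotate 90 degrees clockwise, no mirror
```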
I'm connecting to the Riot API with the Request package to obtain images from their Data Dragon service. Once Request gives me a response, I save the image to disk. Saving champion thumbnails is a cinch; no problems. However, item thumbnails have been failing 95% of the time, and only occasionally work on my local machine.
I've deployed the app to Heroku and the images loaded successfully all 4 times I deployed. Unfortunately, as stated earlier, when I run the server locally my internet connection actually hangs, and it only happens when I try to fetch the item thumbnails.
Here is a snippet:
function retrieveAndProcessImage(imageURL, callback) {
    "use strict";
    console.log(imageURL);
    request(imageURL, {encoding: 'binary'}, function (req_err, req_res) {
        // Extract the image name from the very end of the path.
        var pathArray = imageURL.split('/');
        var imageName = pathArray[pathArray.length - 1];
        if (req_err) {
            console.log("Couldn't retrieve " + imageName);
        } else {
            callback(imageName, req_res['body']);
        }
    });
}
The above snippet obtains the image from the API. When it occasionally works, the proper image is saved so I at least know that the URLs are all correct. The callback is just an fs.writeFile that writes the file locally so I don't have to send another request.
function retrieveAndProcessJson(dataURL, callback) {
    "use strict";
    request(dataURL, function (req_err, req_res) {
        // Try to parse the JSON.
        try {
            var bodyJSON = JSON.parse(req_res['body']);
        } catch (err) {
            _errorMessage(dataURL);
            return;
        }
        // If the parsing or the retrieval failed, the connection failed.
        if (bodyJSON['status'] !== undefined) {
            _errorMessage(dataURL);
            return;
        }
        // Otherwise report the success and hand off the body.
        console.log("Connected to " + dataURL);
        callback(req_res['body']);
    });
}
The above obtains the raw JSON and passes the result of it to a callback.
fileFuncs.retrieveAndProcessJson(dataURL, function (raw_body) {
    var bodyJSON = JSON.parse(raw_body);
    _saveData(raw_body);
    _saveImages(bodyJSON['data']);
});
and save images is:
function _saveImages(dataJSON) {
    var itemKey;
    var item;
    var imageURL;
    var imageName;
    // Make sure the destination folder exists, creating it if needed.
    fileFuncs.checkFolder(api_constants.itemThumbnailPath);
    // For each item in the item JSON, save the thumbnail.
    for (itemKey in dataJSON) {
        if (dataJSON.hasOwnProperty(itemKey)) {
            item = dataJSON[itemKey];
            // The images are saved as #{id}.png
            imageName = item['id'] + '.png';
            imageURL = api_constants.itemThumbnailURL + imageName;
            // Retrieve the file from the API.
            fileFuncs.retrieveAndProcessImage(imageURL, function (fileName, image) {
                // declare filePath inside the callback so concurrent
                // callbacks don't share one variable
                var filePath = api_constants.itemThumbnailPath + fileName;
                fs.writeFile(filePath, image, { encoding: 'binary' }, function (err) {
                    if (err) {
                        console.log("\t" + fileName + " could not be written.");
                        return;
                    }
                    console.log("\tThumbnail " + fileName + " was saved.");
                });
            });
        }
    }
}
I'd rather not have to deploy to Heroku every time I want to see what my code changes look like. Are there any solutions that would let me put some time between each request? Alternatively, something that would let me make a large number of requests without hanging my connection.
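One simple way to put time between requests is to walk the URL list sequentially with a delay, instead of firing every request at once inside the `for` loop. A sketch (the 200 ms delay is a guess to tune; `fetchOne` is whatever performs a single download, e.g. the `retrieveAndProcessImage` above):

```javascript
// Fetch URLs one at a time with a pause between requests, instead of
// firing hundreds of requests simultaneously.
function downloadSequentially(urls, fetchOne, done, delayMs) {
  delayMs = delayMs || 200; // pause between requests; tune as needed
  if (urls.length === 0) { return done(); }
  fetchOne(urls[0], function () {
    setTimeout(function () {
      downloadSequentially(urls.slice(1), fetchOne, done, delayMs);
    }, delayMs);
  });
}

// Usage with the question's helper:
// downloadSequentially(imageURLs, function (url, next) {
//   fileFuncs.retrieveAndProcessImage(url, function (name, img) { next(); });
// }, function () { console.log('all thumbnails fetched'); });
```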
I am trying to write an import script in Node.js that pulls data from the web, formats it, and then sends it to my API.
Part of that includes pulling artist data from LastFM, fetching the images for each artist and sending them off to my API to resize and save.
The import script is just ran in terminal.
The part of the import script that is responsible for pulling the images down and sending off to my API looks like:
_.forEach(artist.images, function(image){
    console.log('uploading image to server ' + image.url);
    request.get(image.url)
        .pipe(request.post('http://MyAPI/files/upload', function(err, files){
            if (err) {
                console.log(err);
            }
            console.log('back from upload');
            console.log(files);
        }));
});
And the files.upload action looks like:
upload: function(req, res){
    console.log('saving image upload');
    console.log(req.file('image'));
    res.setTimeout(0);
    var sizes = [
        ['avatar', '280x210'],
        ['medium', '640x640'],
        ['large', '1024x768'],
        ['square', '100x100'],
        ['smallsquare', '50x50'],
        ['avatarsquare', '32x32']
    ];
    // resize to the set dimensions
    // for each dimension - save the output to gridfs
    _.forEach(sizes, function(bucket){
        // Split the "WxH" string; the original bucket[1, 0] used the comma
        // operator and never produced the intended numbers.
        var dims = bucket[1].split('x');
        var width = dims[0], height = dims[1];
        // Let's create a custom receiver
        var receiver = new Writable({objectMode: true});
        receiver._write = function(file, enc, cb) {
            gm(file).resize(width, height).upload({
                adapter: require('skipper-gridfs'),
                uri: 'mongodb://localhost:27017/sonatribe.' + bucket[0]
            }, function (err, uploadedFiles) {
                if (err){
                    return res.serverError(err);
                } else {
                    return res.json({
                        files: uploadedFiles,
                        textParams: req.params.all()
                    });
                }
            });
            cb();
        };
        /* req.file('image').upload(receiver, function(err, files){
            if (err) console.log(err);
            console.log('returning files');
            return files;
        }); */
    });
}
However, console.log(req.file('image')) is not what I'd hoped for, probably because this code expects the image to arrive as part of a multipart form upload with a field named image, which it does not.
I'm trying to figure out how the file will end up inside my action but my google foo is completely out of action today and I'm fairly (very) new to Node.
Anyone able to offer some pointers?
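One pointer: the request library's `formData` option sends a proper multipart/form-data body, and it accepts a readable stream as a field value, so the streamed GET can arrive at the API under the field name it expects. A sketch (the `image` field name is an assumption based on the `req.file('image')` call above):

```javascript
// Build the options for a multipart upload with the request library.
// Putting a readable stream under formData.image makes request send a
// multipart/form-data body, so the API sees a file field named "image".
function buildUploadOptions(apiUrl, imageStream) {
  return {
    url: apiUrl,
    formData: { image: imageStream }
  };
}

// Usage inside the forEach from the import script:
// request.post(
//   buildUploadOptions('http://MyAPI/files/upload', request.get(image.url)),
//   function (err, res, body) { console.log('back from upload', body); }
// );
```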