How can I display files that have been saved as a Buffer? - node.js

I am saving files as Buffers in my MongoDB database (using Mongoose, Node.js, and Electron). For now, I'm keeping it simple with text-only files. I read a file in using:
fs.readFile(filePath, function(err, data) {
  if (err) { console.log(err); }
  typeof callback == "function" && callback(data);
});
Then I create a new file in my database using the data variable. And now I have something that looks like BinData(0,"SGVsbG8gV29ybGQK") stored in my MongoDB. All is fine so far.
Now, what if I wanted to display that file in the UI? In this case, in Electron? I think there are two steps.
Step 1: The first step is bringing this variable out of the DB and into the front-end. FYI: the model is called File and the field that stores the file contents is called content.
So, I've tried File.content.toString() which gives me Object {type: "Buffer", data: Array[7]} which is not the string I'm expecting. What is happening here? I read here that this should work.
Step 2: Display this file. Now, since I'm only using text files right now, I can just display the string I get once Step 1 is working. But is there a good way to do this for more complex files? Images and GIFs and such?

You should save the file's MIME type along with it.
Then set the MIME type on the response header:
response.setHeader("Content-Type", "image/jpeg");
response.write(binary);
response.end();
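For Step 1: once the Buffer leaves Mongoose and gets serialized to JSON (for example, when it is sent from Electron's main process to the renderer or over an HTTP API), it arrives as a plain object of the shape {type: "Buffer", data: [...]}, which is why toString() doesn't give you the original text. A minimal sketch of rebuilding it on the front-end, assuming file.content is that serialized object and file.mimeType is a hypothetical field where you stored the MIME type as suggested above:
// Rebuild a real Buffer from the serialized {type: 'Buffer', data: [...]} object
var buf = Buffer.from(file.content.data);

// Text files: just turn it back into a string
var text = buf.toString('utf8');

// Images, GIFs, etc.: build a data URL and point an <img> at it
var dataUrl = 'data:' + file.mimeType + ';base64,' + buf.toString('base64');
// document.getElementById('preview').src = dataUrl;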

Related

Save an image file into a database with node/request/sequelize/mysql

I'm trying to save a remote image file into a database, but I'm having some issues with it since I've never done it before.
I need to download the image and pass it along (with node-request) with a few other properties to another Node API that saves it into a MySQL database (using Sequelize). I've managed to get some data to save, but when I download it manually and try to open it, it's not really usable and no image shows up.
I've tried a few things: getting the image with node-request, converting it to a base64 string (read about that somewhere) and passing it along in a JSON payload, but that didn't work. Tried sending it as multipart, but that didn't work either. I haven't worked with streams/buffers/multipart all that much before, and never in Node. I've tried looking into node-request pipes, but I couldn't really figure out how to apply them to this context.
Here's what I currently have (it's part of an ES6 class, so there are no 'function' keywords; also, request is promisified):
function getImageData(imageUrl) {
  return request({
    url: imageUrl,
    encoding: null,
    json: false
  });
}
function createEntry(entry) {
  return getImageData(entry.image)
    .then((imageData) => {
      entry.image_src = imageData.toString('base64');
      var requestObject = {
        url: 'http://localhost:3000/api/entry',
        method: 'post',
        json: false,
        formData: entry
      };
      return request(requestObject);
    });
}
I'm almost 100% certain the problem is in this part, because the API just takes what it gets and gives it to Sequelize to put into the table, but I could be wrong. The image field is set as LONGBLOB.
I'm sure it's something simple once I figure it out, but so far I'm stumped.
This is not a direct answer to your question, but it is rarely necessary to actually store an image in the database. What is usually done is storing the image on storage like S3, a CDN like CloudFront, or even just the file system of a static file server, and then storing only the file name or some ID of the image in the actual database.
If there is any chance that you are going to serve those images to some clients then serving them from the database instead of a CDN or file system will be very inefficient. If you're not going to serve those images then there is still very little reason to actually put them in the database. It's not like you're going to query the database for specific contents of the image or sort the results on the particular serialization of an image format that you use.
The simplest thing you can do is save the images with a unique filename (either a random string, UUID or a key from your database) and keep the ID or filename in the database with other data that you need. If you need to serve it efficiently then consider using S3 or some CDN for that.
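A minimal sketch of that approach, reusing your getImageData() and promisified request() from above; the uploads/ directory, the .jpg extension, and the idea that image_src now holds just a filename are assumptions here:
var fs = require('fs');
var path = require('path');
var crypto = require('crypto');

function createEntry(entry) {
  return getImageData(entry.image)
    .then((imageData) => {
      // write the raw image buffer to disk under a random name
      var filename = crypto.randomBytes(16).toString('hex') + '.jpg';
      fs.writeFileSync(path.join('uploads', filename), imageData);
      // store only the reference in the database
      entry.image_src = filename;
      return request({
        url: 'http://localhost:3000/api/entry',
        method: 'post',
        json: true,
        body: entry
      });
    });
}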

How to save the ID of a saved file using GridFS?

I am trying to save the ID of the file that I send via GridFS to my MongoDB (working with Mongoose). However, I can't seem to find out how to get the created ID in fs.files from code.
var writestream = gfs.createWriteStream({
  filename: req.file.originalname
});
fs.createReadStream(req.file.path).pipe(writestream);
writestream.on('close', function (file) {
  // do something with `file`
  console.log(file.filename + ' written to DB');
});
I can't seem to save the ID of the file that I wrote via the writestream.
The file is created in the collections etc., but how do I get hold of its ID so that I can save it in one of my other MongoDB documents?
This is old and I think you already solved your problem.
If I understood correctly, you were trying to get the ID of the saved file.
You can simply do this:
console.log(writestream.id);
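If you then want to reference that ID from another document, you can grab it inside the close handler. A minimal sketch, assuming a hypothetical Mongoose model Post with a fileId field:
writestream.on('close', function (file) {
  // writestream.id is the ObjectId of the entry created in fs.files
  Post.create({ fileId: writestream.id }, function (err, post) {
    if (err) return console.error(err);
    console.log('Saved GridFS id', writestream.id, 'on post', post._id);
  });
});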

Express res.download() not actually downloading file

I'm attempting to return generated files to the front end through Express's res.download function. I'm using Chrome, but whenever I call the API that executes the following code, all I get back is the same response that the Express res.sendFile() function would return.
I know that res.download uses res.sendFile, but I would like the download function to actually save to the file system instead of just returning the file in the body of the response.
This is my code.
exports.download = function(req, res) {
  var filePath = // somefile that I want to download
  res.download(filePath, 'response.txt', function(err) {
    if (err) throw err;
  });
};
I know that the above code at least partly works because I'm getting back, in the response, the contents of the file. However, I want it to be saved onto the file system.
Am I misunderstanding what the download function is supposed to do? Do I just need to take the response data and write it to the file system manually?
res.download adds headers that suggest to the browser that the file should be downloaded rather than opened. However, there's no way to force the browser to do this; it's typically the user's choice whether a particular file gets saved.
If you're triggering this request with AJAX, that's not going to cause a download, because the response data is handed to your JavaScript rather than to the browser's download machinery.
Do I just need to take the response data and write it to the file system manually?
You don't have file system access in browser-side JavaScript. I'm not sure how you intend to do this.
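If the goal is simply to have the browser save the file, skip the AJAX call and let the browser make the request itself, so the Content-Disposition header set by res.download can take effect. A minimal sketch, assuming the route above is mounted at a hypothetical /api/download path:
// Option 1: navigate straight to the route
window.location.href = '/api/download';

// Option 2: create a temporary link and click it
var a = document.createElement('a');
a.href = '/api/download';
document.body.appendChild(a);
a.click();
a.remove();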

Need to validate file inputs in Node.js

I'm using Node + Express + MongoDB to develop my application. In my application there are many file upload forms. The uploading process works perfectly. But my problem is that currently there is no validation for the file inputs. I need to validate these file inputs, like:
Only .jpeg and .png files should be allowed.
File size must be less than 1MB.
But I don't know how to solve this. I used Mongoose's model schemas to validate strings and numbers. I searched a lot for a file validation solution, but failed. If anyone knows how to handle this problem, please help.
You can use the npm module 'mmmagic' to check the file type:
var mmm = require('mmmagic'),
    Magic = mmm.Magic;
var magic = new Magic(mmm.MAGIC_MIME_TYPE | mmm.MAGIC_MIME_ENCODING);
// the above flags can also be shortened down to just: mmm.MAGIC_MIME_TYPE
magic.detectFile('path/to/your/file.jpg', function(err, result) {
  if (err) throw err;
  console.log(result); // result contains the detected file type
});
You can check the file size using the 'fs' node module.
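A minimal sketch of the size check with fs.stat, assuming the upload has already been written to a temporary path by your upload middleware:
var fs = require('fs');

var MAX_SIZE = 1024 * 1024; // 1 MB

fs.stat('path/to/your/file.jpg', function (err, stats) {
  if (err) throw err;
  if (stats.size > MAX_SIZE) {
    console.log('File too large: ' + stats.size + ' bytes');
  }
});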

How can you persist user data for command line tools?

I'm a front-end dev just venturing into Node.js, particularly using it to create small command-line tools.
My question: how do you persist data with command line tools? For example, if I want to keep track of the size of certain files over time, I'd need to keep a running record of changes (additions and deletions) to those files, and relevant date/time information.
On the web, you store that sort of data in a database on a server, and then query the database when you need it again. But how do you do it when you're creating a Node module that's meant to be used as a command line tool?
Some generic direction is all I'm after. I don't even know what to Google at this point.
It really depends on what you're doing, but a simple approach is to just save the data that you want to persist to a file and, since we're talking node, store it in JSON format.
Let's say you have some data like:
var data = [ { file: 'foo.bar', size: 1234, date: '2014-07-31 00:00:00.000'}, ...]
(it actually doesn't matter what it is, as long as it can be run through JSON.stringify())
You can just save it with:
fs.writeFile(filename, JSON.stringify(data), {encoding: 'utf8'}, function(err) { ... });
And load it again with:
fs.readFile(filename, {encoding: 'utf8'}, function(err, contents) {
  data = JSON.parse(contents);
});
You'll probably want to give the user the ability to specify the name of the file you're going to persist the data to via an argument like:
node myscript.js <data_file>
You can get that passed in parameter with process.argv:
var filename = process.argv[2]; // Be sure to check process.argv.length and have a default
Using something like minimist can be really helpful if you want to get more complex, like:
node myscript.js --output <data_file>
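For example, a minimal sketch with minimist, assuming a hypothetical data.json default:
var argv = require('minimist')(process.argv.slice(2));
var filename = argv.output || 'data.json'; // node myscript.js --output my-data.json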
You can also store files in a temporary directory, for example the /tmp directory on Linux, and give the user the option to change the directory.
To get the path to the temporary directory you can use the os module in Node.js:
const os = require('os');
const tmp = os.tmpdir();
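For example (a minimal sketch, reusing tmp from above; the mytool-data.json name is just a placeholder):
const path = require('path');
// fall back to a file in the temp directory unless the user passes a path
const dataFile = process.argv[2] || path.join(tmp, 'mytool-data.json');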
