Upload .xlsx files to express/node and mongoDB - node.js

I'm trying to build a web application where a manager can upload a weekly work schedule and have the application parse the data into HTML format.
My question is: how do I convert the Excel file into JSON while it is being uploaded? I'm new to this, so I'm not sure how to upload the file to Mongo and convert it to JSON at the same time.
Edit - in my app.js:
var xlsxj = require("xlsx-to-json");
xlsxj({
  input: "sample.xlsx",
  output: "output.json"
}, function(err, result) {
  if (err) {
    console.error(err);
  } else {
    console.log(result);
  }
});
app.post('/upload', function(req, res) {
  // ??? What to put here ???
});
My input, where the user selects the file to upload:
.div(style={position: 'absolute', right: '50px', top: '75px'})
  unless user.email == "cca#gmail.com"
    p Upload New Schedule
    #uploadNew
      form(action="...", method="post", enctype="multipart/form-data")
        input(type="file", name="displayImage")
I'm wondering how I go from there to having the input converted to JSON and stored in the DB.

I recommend using js-xlsx to parse the .xlsx file on Node.js. This is a highly tested library that parses the data into a JSON structure you can iterate over to extract data. It doesn't parse charts or handle macros, but you likely do not need those features.
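For example, a minimal sketch of reading a workbook with js-xlsx (the file path is a placeholder and the sheet_to_json options are left at defaults):

var XLSX = require('xlsx');

// Read the workbook, then convert the first sheet to an array of row objects.
var workbook = XLSX.readFile('sample.xlsx');
var firstSheet = workbook.Sheets[workbook.SheetNames[0]];
var rows = XLSX.utils.sheet_to_json(firstSheet);
console.log(rows); // an array of objects keyed by the header row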
You may or may not choose to store the data in MongoDB. Mongo is nice for many reasons, but you have to be sure that no attribute name contains an embedded $ or ., as these are reserved and will generate an error. Another approach you may consider is to store the XLSX file, or the stringified JSON, on S3 using the aws-sdk.

var xlsxj = require("xlsx-to-json");
app.post('/upload', function(req, res) {
  /* upload script here */
  xlsxj({
    input: "sample.xlsx", // change this name based on the uploaded file name
    output: "output.json"
  }, function(err, result) {
    if (err) {
      console.error(err);
    } else {
      console.log(result);
    }
  });
});
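For the upload step that the comment above leaves out, here is a rough sketch using multer to receive the multipart form from the question (the multer middleware and disk destination are assumptions, not part of the original answer; the field name comes from the question's form):

var multer = require('multer');
var xlsxj = require('xlsx-to-json');

// Store the uploaded file on disk; multer exposes its temp path as req.file.path.
var upload = multer({ dest: 'uploads/' });

app.post('/upload', upload.single('displayImage'), function(req, res) {
  xlsxj({
    input: req.file.path,   // the uploaded .xlsx file
    output: "output.json"   // the parsed rows are also passed to the callback
  }, function(err, result) {
    if (err) {
      return res.status(500).send(String(err));
    }
    // 'result' is an array of row objects; insert it into your Mongo collection here.
    res.json(result);
  });
});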

There are several npm packages which can convert xlsx data to JSON format; you can search for them on the npm website.
Use one of those packages: once the xlsx file is uploaded to the server, convert it to JSON and save that to the database.

DevTools failed to load SourceMap

I'm trying to create pages that take a user's information and save it to the database. The user information is {name, age, ..., picture}. When I submit the information without a picture it works fine and the data is saved to the database, but when I include the picture it gives me the error shown in the screenshot.
I'm sorry for the picture quality.
Can anyone help me with this?
I'm using Node.js and React.
Thanks :)
You would need to send the picture as Base64 in your object, as follows:
var data = {
  name: 'John',
  age: 27,
  picture: 'data:image/png;base64,R0lGODlhPQBEAJos...',
};
In NodeJS, if using Express, the picture will be req.body.picture. So, all you need to do is store the file, then use the temp path to do what you need.
You can store a base64 file like this:
var filePath = './tmp/myPicture.png';
fs.writeFile(filePath, req.body.picture, 'base64', (err) => {
  if (err) {
    res.json({ err: 'Error while creating temp file from base64.' });
  } else {
    // Your file was uploaded, so you can read your file here.
  }
});
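One caveat with the snippet above: if the client sends a full data URL (as in the earlier data example), the data:image/png;base64, prefix must be stripped before writing, or the decoded file will be corrupt. A minimal sketch:

// Strip the data URL prefix, if present, before decoding the base64 payload.
var base64Data = req.body.picture.replace(/^data:image\/\w+;base64,/, '');
fs.writeFile(filePath, base64Data, 'base64', (err) => {
  if (err) {
    return res.json({ err: 'Error while creating temp file from base64.' });
  }
  // File written correctly; read or move it from here.
});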

Converting Adobe InDesign to pptx (is it even possible?)

I'm struggling to find a solution. I have a bulk of Adobe InDesign files that I'm trying to convert over to PPTX via PDF.
I know you can export from InDesign -> PDF, then from Acrobat, PDF -> PPTX. This would work well if it were just one or two files, but I don't want to keep doing this over and over. I've tried using pdf-powerpoint; the only issue with that is it exports each slide as a PNG, and I would still like to be able to edit them afterward. I've seen that it is possible to use JavaScript to automate Adobe products but, after combing through their documentation, I'm not sure if it's possible to pipe data into other Adobe products. Any suggestions?
You want to convert a PDF file to a Microsoft PowerPoint file (pptx).
You want to achieve this using Node.js.
If my understanding is correct, how about this workaround? It uses an external API, ConvertAPI. The pptx file converted by this API can be edited in Microsoft PowerPoint. To try it out, you can use the "Free Package": sign up for it and retrieve your Secret key.
Sample script:
const fs = require('fs');
const request = require('request');

const pdfFile = "### PDF file ###"; // Please set the PDF filename, including the path.
const url = "https://v2.convertapi.com/convert/pdf/to/pptx?Secret=#####"; // Please set your Secret key.

const options = {
  url: url,
  method: 'POST',
  formData: {File: fs.createReadStream(pdfFile)},
};

request(options, function(err, res, body) {
  if (err) {
    console.log(err);
    return;
  }
  const obj = JSON.parse(body);
  obj.Files.forEach(function(e) {
    const file = Buffer.from(e.FileData, "base64"); // Buffer.from() replaces the deprecated new Buffer()
    fs.writeFile(e.FileName, file, function(err) {
      if (err) {
        console.log(err);
        return;
      }
      console.log("Done.");
    });
  });
});
Note:
Before you run this script, please retrieve your secret key.
In this script, a PDF file is uploaded, converted to a pptx file, downloaded, and saved locally as a pptx file.
This is a simple sample script, so please modify it for your situation.
Reference:
PDF to PPTX API of ConvertAPI
If this workaround was not what you want, I'm sorry.

Google Cloud Platform file to node server via gcloud

I have a bucket on Google Cloud Platform where part of my application adds small text files with unique names (no extension).
A second app needs to retrieve individual text files (only one at a time) for insertion into a template.
I cannot find the correct API call for this.
Configuration is as required:
var gcloud = require('gcloud');
var gcs = gcloud.storage({
  projectId: settings.bucketName,
  keyFilename: settings.bucketKeyfile
});
var textBucket = gcs.bucket(settings.bucketTitle);
Saving to the bucket works well:
textBucket.upload(fileLocation, function(err, file) {
  if (err) {
    console.log("File not uploaded: " + err);
  } else {
    // console.log("File uploaded: " + file);
  }
});
The following seems logical, but returns only metadata and not the actual file for use in the callback:
textBucket.get(fileName, function(err, file) {
  if (err) {
    console.log("File not retrieved: " + err);
  } else {
    callback(file);
  }
});
Probably no surprise this doesn't work, since it's not actually in the official documentation; then again, neither is a simple async function that returns the document you ask for.
The method get on a Bucket object is documented here: https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.29.0/storage/bucket?method=get
If you want to simply download the file into memory, try the method download on a File object: https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.29.0/storage/file?method=download. You can also use createReadStream if using a stream workflow.
If you have ideas for improving the docs, it would be great if you opened an issue on https://github.com/googlecloudplatform/gcloud-node so we can make it easier for the next person.
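For reference, a minimal sketch of the download approach with the bucket configured above (the error handling and callback wiring are assumptions):

// Get a File object by name, then pull its contents into memory.
var file = textBucket.file(fileName);
file.download(function(err, contents) {
  if (err) {
    console.log("File not retrieved: " + err);
  } else {
    callback(contents.toString()); // 'contents' is a Buffer
  }
});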

Prompt a csv file to download as pop up using node.js and node-csv-parser (node module)

Recently I have started working with node.js. For a requirement in one of my projects, I need to write some data to a csv file dynamically and prompt the user to download it as a popup (with the usual save and cancel options). After googling for some time I decided to use the csv npm module https://github.com/wdavidw/node-csv-parser. I am able to write data into a file and save it using this module, but I want to prompt a popup to save this file, with or without saving the file on the server.
My code looks something like this:
// Sample data
var data = [["id", "subject1", "subject2", "subject3"], ["jack", 85, 90, 68], ["sam", 77, 89, 69]];

// Server-side code
var csv = require('../../node_modules/csv');
var fs = require('fs');

createCSV = function(data, callback) {
  csv().from(data).to(fs.createWriteStream('D:/test.csv')); // writing to a file
};

// Client-side call sample
$("#exportToCSV").click(function() {
  callToServer.createCSV(data);
  return false;
});
This is working well as far as writing the csv file is concerned.
I want this file to be offered to users as a download immediately.
If this can be done without saving the file on the server, that would be great.
How can I set content-type and content-disposition, as we do in PHP?
Any help is greatly appreciated.
-Thanks
Manish Kumar's answer is spot on - just wanted to include an Express 4 syntax variant to accomplish this:
function(req, res) {
  var csv = GET_CSV_DATA; // Not included in this example.
  res.setHeader('Content-disposition', 'attachment; filename=testing.csv');
  res.set('Content-Type', 'text/csv');
  res.status(200).send(csv);
}
I did it something like this:
http.createServer(function(request, response) {
  response.setHeader('Content-disposition', 'attachment; filename=testing.csv');
  response.writeHead(200, {
    'Content-Type': 'text/csv'
  });
  csv().from(data).to(response);
}).listen(3000);
The following solution is for Express.
Express has evolved; instead of setting the attachment and content-type headers manually, use the attachment API directly: http://expressjs.com/4x/api.html#res.attachment
Note: attachment() doesn't transfer the file, it just sets the filename in the header.
response.attachment('testing.csv');
csv().from(data).to(response);
Express-csv is a great module for writing csv contents as a stream from a node.js server, sent as a response to the client (and downloaded as a file). Very easy to use:
app.get('/', function(req, res) {
  res.csv([
    ["a", "b", "c"],
    ["d", "e", "f"]
  ]);
});
The docs: https://www.npmjs.com/package/express-csv
When you pass an object, you need to prepend the headers explicitly (if you want them). Here's my example using npm mysql:
router.route('/api/report')
  .get(function(req, res) {
    var query = connection.query('select * from table where table_id=1;', function(err, rows, fields) {
      if (err) {
        return res.send(err); // return so we don't also try to send the csv below
      }
      var headers = {};
      for (var key in rows[0]) {
        headers[key] = key;
      }
      rows.unshift(headers);
      res.csv(rows);
    });
  });
Check out this answer: Nodejs send file in response
Basically, you don't have to save the file to the hard drive. Instead, try sending it directly to the response. If you're using something like Express then it would look something like this:
var csv = require('csv');
app.get('/getCsv', function (req, res) {
csv().from(req.body).to(res);
});

Storing some small (under 1MB) files with MongoDB in NodeJS WITHOUT GridFS

I run a website with a backend of Node.js + MongoDB. Right now, I'm implementing a system to store some icons (small image files) that need to be in the database.
From my understanding, it makes more sense NOT to use GridFS, since that seems to be tailored to either large files or large numbers of files. Since every file I need to save will be well under the 16 MB BSON maximum document size, I should be able to save them directly into a regular document.
I have 2 questions:
1) Is my reasoning correct? Is it ok to save image files within a regular mongo collection, as opposed to with GridFS? Is there anything I'm not considering here that I should be?
2) If my thought process is sound, how do I go about doing this? Could I do something like the following:
// assume 'things' is a mongoDB collection created properly using node-mongodb-driver
fs.readFile(pathToIconImage, function(err, image) {
  things.insert({'image': image}, function(err, doc) {
    if (err) console.log('you have an error! ' + err);
  });
});
I'm guessing that there's probably a better way to do this, since MongoDB uses BSON and here I'm trying to save a file in JSON before sending it off to the database. I also don't know if this code will work (I haven't tried it).
UPDATE - New Question
If I have a document within a collection that has three pieces of information saved: 1) a name, 2) a date, and 3) an image file (the above icon), and I want to send this document to a client in order to display all three, would this be possible? If not, I guess I'd need to use GridFS and save the fileID in place of the image itself. Thoughts/suggestions?
Best, and thanks for any responses, Sami
If your images truly are small enough not to be a problem with document size, and you don't mind a little extra processing, then it's probably fine to store them directly in your collection. To do that, base64 encode the image, then store it using Mongo's BinData type. As I understand it, that will save it as a BSON bit array rather than storing the base64 string itself, so the size won't grow larger than your original binary image.
It will display in JSON queries as a base64 string, which you can use to get the binary image back.
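For instance, a sketch of the round trip described above, reusing the 'things' collection from the question (the field names and image type are assumptions):

// Fetch a document holding a name, a date, and a BinData image,
// then rebuild a data URL the client can drop into an <img src=...>.
things.findOne({ name: 'home-icon' }, function(err, doc) {
  if (err) return console.error(err);
  var dataUrl = 'data:image/png;base64,' + doc.image.buffer.toString('base64');
  // send { name: doc.name, date: doc.date, image: dataUrl } to the client
});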
I have been looking for the same thing.
I know this post is old, but perhaps I can help someone out there.
var fs = require('fs');
var mongo = require('mongodb').MongoClient;
var Binary = require('mongodb').Binary;

var archivobin = fs.readFileSync("vc.exe");
// print it out so you can check that the file is loaded correctly
console.log("Loading file");
console.log(archivobin);

var invoice = {};
invoice.bin = new Binary(archivobin);
console.log("length of invoice.bin = " + invoice.bin.length());

// set an ID for the document for easy retrieval
invoice._id = 12345;

mongo.connect('mongodb://localhost:27017/nogrid', function(err, db) {
  if (err) console.log(err);
  db.collection('invoices').insert(invoice, function(err, doc) {
    if (err) console.log(err);
    // check the inserted document
    console.log("Inserting file");
    console.log(doc);
    db.collection('invoices').findOne({_id: 12345}, function(err, doc) {
      if (err) {
        console.error(err);
      }
      fs.writeFile('vcout.exe', doc.bin.buffer, function(err) {
        if (err) throw err;
        console.log('Successfully saved!');
      });
    });
  });
});
