Excel sheet upload to Mongo - Node.js

I am using the MEAN.js stack. I need to upload a Microsoft Excel document (.xls) and parse through the items. With that information I would like to create a new object from a schema and save it to a MongoDB database. I'm not sure where to even begin.
I already have the Mongoose schema made. I really need help with the parsing and then the saving to Mongo. Any guides or suggested node packages would be greatly appreciated. Thank you.

First you have to convert your data to JSON. You can use an npm library like excel-to-json. Then, once you have converted your data, you can bulk insert it into MongoDB with Mongoose via your model.
var excel2json = require('excel-to-json'); // or whichever converter you chose
excel2json(configuration, function(err, result) {
  if (err) {
    console.error(err);
  } else {
    // result is an array of plain objects, one per row
    Model.create(result, function(err, docs) {
      if (!err) {
        console.log('inserted');
      }
    });
  }
});
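Whatever converter you choose, the key is that the array handed to Model.create() must contain plain objects whose keys match your schema paths. A minimal, dependency-free sketch of that shaping step (the row data and helper name here are made up for illustration):

```javascript
// Turn a header row plus data rows into the array of plain objects
// that Model.create() expects.
function rowsToDocs(rows) {
  const [header, ...data] = rows;
  return data.map(function (row) {
    const doc = {};
    header.forEach(function (key, i) { doc[key] = row[i]; });
    return doc;
  });
}

const docs = rowsToDocs([
  ['name', 'price'],
  ['pencil', 1.5],
  ['notebook', 3]
]);
console.log(docs);
// [ { name: 'pencil', price: 1.5 }, { name: 'notebook', price: 3 } ]
```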

Related

Passing REST URL parameter as query condition in Mongoose

Newbie in Node.js/MongoDB here. I've tried, with no success, to find the answer to my problem before posting here.
I'm creating a simple Node REST API with GET services using Mongoose. I'm trying to pass a field value of the collection to retrieve a specific document,
like this in the browser: http://localhost:3000/infrakpi/fieldvalue
I've written the following piece of code.
app.get('/infrakpi/:system', (req, res) => {
  Infrakpi.getInfrakpiBySystem(req.params.system, function(err, infrakpibysystem) {
    if (err) {
      throw err;
    }
    res.json(infrakpibysystem);
  });
});
I have defined the get method in my Mongoose model as below.
//Get Infrakpi by System
module.exports.getInfrakpiBySystem = function (system, callback) {
  Infrakpi.find({system: 'system'}, callback);
};
When system is passed as fieldvalue in the REST URL, I want to retrieve the specific document in the collection.
I understand this may be a very basic question. I do get a result when I use findById with the _id field, but the client will only call with the specific field.
Appreciate your help.
Not sure if I can call it Stack Overflow luck: I had overlooked the quotes in the get method in the model. Once I removed them, it worked.
//Get Infrakpi by System
module.exports.getInfrakpiBySystem = function (system, callback) {
  Infrakpi.find({system: system}, callback);
};
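The difference is easy to see if you compare the query object each version builds (plain Node, no Mongoose needed; the function names are illustrative):

```javascript
// Quoting the value turns a variable reference into a string literal.
function buildQuery(system) {
  return { system: system };   // correct: uses the value of the variable
}
function buggyQuery(system) {
  return { system: 'system' }; // bug: always the literal string 'system'
}

console.log(buildQuery('fieldvalue')); // { system: 'fieldvalue' }
console.log(buggyQuery('fieldvalue')); // { system: 'system' }
```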
I'm leaving the question here without deleting it, in case it helps someone else in the future.

Stripe - storing multiple sources on a user

I am trying to store multiple sources (cards) on a single customer object.
Let's say I have a customer that already has one source stored.
With a new source token, I then do the following:
stripe.customers.update(customer, {source: call.request.source}, function(err, updatedCustomer) {
  if (err) {
    return console.log(err);
  }
  console.log(updatedCustomer.sources.data);
});
When I do this, the customer's existing source is lost and the new one is stored.
How can I store multiple sources on the same customer?
Using createSource rather than update did the trick.
stripe.customers.createSource(customer, {source: call.request.source}, function(err, updatedCustomer) {
  if (err) {
    return console.log(err);
  }
  console.log(updatedCustomer.sources.data);
});
This will work for you (using the Ruby library):
customer = Stripe::Customer.retrieve(stripe_customer_id)
customer.sources.create(source: stripeToken)
The Stripe token is generated using Stripe.js.

How to access manually created index in pouchdb find

I am pretty new to PouchDB and CouchDB. I've been trying to use pouchdb-find but am having some problems.
I have created a view "test" with this map function:
function(doc) {
  emit(doc.name, doc.occupation);
}
and when I run this:
localDB.query('test/test').then(function (res) {
  console.log(res);
}).catch(function (err) {
  console.log(err);
});
Everything works as expected.
But when I try pouchdb-find:
localDB.find({
  selector: {name: 'kittens'}
}).then(function (result) {
  console.log(result);
}).catch(function (err) {
  console.log(err);
});
I get the following error:
Error: couldn't find a usable index. try creating an index on: name.
If I create an index with
localDB.createIndex({
  index: {
    fields: ['name']
  }
});
only then does the pouchdb-find code work. But when I create an index manually (through the CouchDB dashboard), it doesn't.
Any help is appreciated. Thanks in advance.
pouchdb-find uses the new "Mango" query language, which is different from map/reduce. Mango is only supported in CouchDB 2.0+ and PouchDB Server, not CouchDB 1.x.
So at this time you will need to either use CouchDB 2.0 or PouchDB Server with pouchdb-find if you want it to work on both the client and the server, or you will need to use regular map/reduce instead and avoid pouchdb-find.
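As a rough, in-memory illustration (not PouchDB's actual implementation) of what the Mango selector above expresses, plain field equality, versus a view, which only knows what emit() emitted:

```javascript
// A Mango selector like { name: 'kittens' } matches documents where
// every selector field equals the document's field value.
function matchesSelector(doc, selector) {
  return Object.keys(selector).every(function (field) {
    return doc[field] === selector[field];
  });
}

const docs = [
  { _id: '1', name: 'kittens', occupation: 'cat' },
  { _id: '2', name: 'rex', occupation: 'dog' }
];
console.log(docs.filter(d => matchesSelector(d, { name: 'kittens' })));
// [ { _id: '1', name: 'kittens', occupation: 'cat' } ]
```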

Upload .xlsx files to express/node and mongoDB

I'm trying to make a web application where a manager can upload a weekly work schedule, and then have the application parse the data into HTML format.
My question is how to have the application convert the Excel file to JSON while it is being uploaded. I'm new, so I'm not sure how to simultaneously upload the file to Mongo while converting it to JSON.
Edit - in my app.js:
var xlsxj = require("xlsx-to-json");
xlsxj({
  input: "sample.xlsx",
  output: "output.json"
}, function(err, result) {
  if (err) {
    console.error(err);
  } else {
    console.log(result);
  }
});
app.post('/upload', function(req, res) {
  // ??? What to put here ???
});
My input where the user selects the file to upload (Jade/Pug template):
div(style={position: 'absolute', right: '50px', top: '75px'})
  unless user.email == "cca#gmail.com"
    p Upload New Schedule
    #uploadNew
      form(action="...", method="post", enctype="multipart/form-data")
        input(type="file", name="displayImage")
I'm wondering how to go from there to having the input converted to JSON and stored in the DB.
I recommend using js-xlsx to parse the .xlsx file in Node.js. It is a highly tested library that parses the data into a JSON structure you can iterate over to extract data. It doesn't parse charts or handle macros, but you likely don't need that.
You may or may not choose to store the data in MongoDB. Mongo is nice for many reasons, but you have to be sure that no attribute name has an embedded $ or . as these are reserved and will generate an error. Another approach you may consider is to store the XLSX file, or the stringified JSON, on S3 using the aws-sdk.
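If you do go with Mongo, a hypothetical key-sanitizing helper like the following (the underscore replacement scheme is my own assumption, not something from js-xlsx or Mongo) can clean spreadsheet headers before insert:

```javascript
// Replace '$' and '.' in field names, since Mongo reserves both
// and will reject documents whose keys contain them.
function sanitizeKeys(doc) {
  const out = {};
  Object.keys(doc).forEach(function (key) {
    out[key.replace(/\$/g, '_').replace(/\./g, '_')] = doc[key];
  });
  return out;
}

console.log(sanitizeKeys({ 'price.usd': 10, '$total': 12 }));
// { price_usd: 10, _total: 12 }
```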
var xlsxj = require("xlsx-to-json");
app.post('/upload', function(req, res) {
  /* upload script here */
  xlsxj({
    input: "sample.xlsx", // change this name based on the uploaded file name
    output: "output.json"
  }, function(err, result) {
    if (err) {
      console.error(err);
    } else {
      console.log(result);
    }
  });
});
There are several npm packages which can convert xlsx data to JSON format; you can search for them on the npm website.
Once the xlsx file is uploaded, convert it to JSON and save that to the database.

Storing some small (under 1MB) files with MongoDB in NodeJS WITHOUT GridFS

I run a website on a backend of Node.js + MongoDB. Right now, I'm implementing a system to store some icons (small image files) that need to be in the database.
From my understanding, it makes more sense NOT to use GridFS, since that seems tailored to large files or large numbers of files. Since every file I need to save will be well under the BSON maximum document size, I should be able to save them directly in a regular document.
I have 2 questions:
1) Is my reasoning correct? Is it OK to save image files in a regular Mongo collection, as opposed to GridFS? Is there anything I'm not considering here that I should be?
2) If my thought process is sound, how do I go about it? Could I do something like the following:
//assume 'things' is a mongoDB collection created properly using node-mongodb-driver
fs.readFile(pathToIconImage, function(err, image) {
  things.insert({'image': image}, function(err, doc) {
    if (err) console.log('you have an error! ' + err);
  });
});
I'm guessing there's probably a better way to do this, since MongoDB uses BSON and here I'm trying to save a file as JSON before sending it off to the database. I also don't know whether this code will work (I haven't tried it).
UPDATE - New Question
If I have a document within a collection that holds three pieces of information: 1) a name, 2) a date, and 3) an image file (the above icon), and I want to send this document to a client in order to display all three, would this be possible? If not, I guess I'd need to use GridFS and save the file ID in place of the image itself. Thoughts/suggestions?
Best, and thanks for any responses, Sami
If your images truly are small enough to not be a problem with document size and you don't mind a little amount of extra processing, then it's probably fine to just store it directly in your collection. To do that you'll want to base64 encode the image, then store it using mongo's BinData type. As I understand it, that will then save it as a BSON bit array, not actually store the base64 string, so the size won't grow larger than your original binary image.
It will display in json queries as a base64 string, which you can use to get the binary image back.
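The round trip described above can be sketched with Node's Buffer alone, no Mongo involved (the bytes here are made up for illustration):

```javascript
// binary -> base64 string (what you'd see in JSON) -> binary again
const original = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // fake image bytes
const base64 = original.toString('base64');
const restored = Buffer.from(base64, 'base64');

console.log(base64);                    // 'iVBORw=='
console.log(restored.equals(original)); // true
```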
I have been looking for the same thing.
I know this post is old, but perhaps I can help someone out there.
var fs = require('fs');
var mongo = require('mongodb').MongoClient;
var Binary = require('mongodb').Binary;

var archivobin = fs.readFileSync("vc.exe");
// print it out so you can check that the file is loaded correctly
console.log("Loading file");
console.log(archivobin);

var invoice = {};
invoice.bin = Binary(archivobin);
console.log("length of invoice.bin = " + invoice.bin.length());

// set an ID for the document for easy retrieval
invoice._id = 12345;

mongo.connect('mongodb://localhost:27017/nogrid', function(err, db) {
  if (err) console.log(err);
  db.collection('invoices').insert(invoice, function(err, doc) {
    if (err) console.log(err);
    // check the inserted document
    console.log("Inserting file");
    console.log(doc);
    db.collection('invoices').findOne({_id: 12345}, function(err, doc) {
      if (err) {
        console.error(err);
      }
      // write the binary back out to disk to verify the round trip
      fs.writeFile('vcout.exe', doc.bin.buffer, function(err) {
        if (err) throw err;
        console.log('Successfully saved!');
      });
    });
  });
});
