Bluebird, node-mysql, pooling, and disposing - node.js

I'm currently trying to implement a different approach to connecting to my database using promises and pooling. This is what I have at the moment:
// databaseConnection.js
var configDB = require('./database.js');
var mysql = require('promise-mysql');

var pool = mysql.createPool(configDB.connectionData);

function getSqlConnection() {
    return pool.getConnection(configDB.connectionData).disposer(function(connection) {
        connection.release();
    });
}

module.exports = getSqlConnection;
Then I use the query like this:
// sqlQuery.js
var Promise = require("bluebird");
var getSqlConnection = require('./databaseConnection');

Promise.using(getSqlConnection(), function(connection) {
    return connection.query("SELECT * FROM EXAMPLE_TABLE").then(function(rows) {
        return process(rows);
    });
});
I'm using this library, which is just node-mysql wrapped with Bluebird promises. With that, I wanted to take advantage of Bluebird's using/disposer capability so that I would only hold a connection while I actually need it.
Currently, though, I'm getting an error from mysql's Connection.js stating: cb is not a function. Based on this question I have some idea of what I'm doing wrong, but I'm not sure how to apply it with Bluebird's using/disposer paradigm. Thanks in advance to anyone who can help!

Huge oversight on my part. The following line:
return pool.getConnection(configDB.connectionData).disposer...
should be:
return pool.getConnection().disposer...
Sorry about that. I'm still getting an error about connection.release not being a function, which is strange, but at least I can move forward with debugging that.
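For reference, a disposer that releases through the pool rather than through the connection is a common variant with promise-mysql; the sketch below assumes the pool object exposes releaseConnection (as the underlying node-mysql pool does):

// Sketch: release the connection back to the pool inside the disposer.
function getSqlConnection() {
    return pool.getConnection().disposer(function(connection) {
        pool.releaseConnection(connection);
    });
}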

Related

How to read a huge JSON file and know when all data has been received?

I am having a problem with the asynchronous nature of Node.js.
For example, I have the following code, which reads a huge JSON file:
var fs = require('fs');
var JSONStream = require('JSONStream');

var json_spot_parser = function(path){
    this.count = 0;
    var self = this;
    let jsonStream = JSONStream.parse('*');
    let fileStream = fs.createReadStream(path);
    jsonStream.on('data', (item) => {
        // console.log(item) // which correctly logged each json object in the file
        self.count++; // 134,000
    });
    jsonStream.on('end', function () {
        // I know it ends here
    });
    fileStream.pipe(jsonStream);
};

json_spot_parser.prototype.print_count = function(){
    console.log(this.count);
};

module.exports = json_spot_parser;
In another module I use it as:
var m_path = path.join(__dirname, '../..', this.pathes.spots);
this.spot_parser = new json_spot_parser(m_path);
this.spot_parser.print_count();
I want to read all the JSON objects and process them, but the asynchronous flow is my problem. I am not familiar with that kind of programming; I used to program sequentially in languages such as C and C++.
Since I don't know when the program finishes reading the JSON objects, I don't know when/where to process them.
After
this.spot_parser = new json_spot_parser(m_path);
I expect to deal with the JSON objects, but as I said I can't.
I want someone to explain how to write a Node.js program in a case like this; I want to know the standard practice. So far I have read some posts, but I believe most of them are short-term fixes.
So, my question is: how does a Node.js programmer handle this kind of problem?
Please tell me the standard way; I want to get good at Node.js.
Thanks!
You can use callbacks as @paqash suggested, but returning a promise would be a better solution.
First, return a new Promise from json_spot_parser:
var json_spot_parser = function(path){
    return new Promise(function(resolve, reject) {
        var count = 0;
        let jsonStream = JSONStream.parse('*');
        let fileStream = fs.createReadStream(path);
        jsonStream.on('data', (item) => {
            // console.log(item) // each json object in the file
            count++; // 134,000
        });
        jsonStream.on('end', function () {
            resolve(count);
        });
        fileStream.on('error', reject);
        fileStream.pipe(jsonStream);
    });
};

module.exports = json_spot_parser;
In another module
var m_path = path.join(__dirname, '../..', this.pathes.spots);
json_spot_parser(m_path).then(function(count) {
    console.log(count);
});
As you mentioned, Node.js has an async mechanism and you should learn how to think that way. It's required if you would like to be good at Node.js. If I may suggest, you should start with this article:
Understanding Async Programming in Node.js
PS: Try to use camelCase variable names and follow the Airbnb JS style guide.
You should process them in the callbacks. Your code above looks pretty good; what exactly are you trying to do that you're unable to?
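For illustration, a callback-based version of the parser might look roughly like this (a sketch; onDone is a hypothetical caller-supplied callback, not part of the original code):

// Sketch: invoke a callback with the count once the stream ends.
var json_spot_parser = function(path, onDone) {
    var count = 0;
    var jsonStream = JSONStream.parse('*');
    var fileStream = fs.createReadStream(path);
    jsonStream.on('data', function(item) {
        count++; // process each item here
    });
    jsonStream.on('end', function() {
        onDone(null, count); // all data has been received at this point
    });
    fileStream.on('error', onDone);
    fileStream.pipe(jsonStream);
};

// Usage:
json_spot_parser(m_path, function(err, count) {
    if (err) throw err;
    console.log(count);
});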

Best practices of db connection pool handling in a node js app?

I'm referring to node-postgres package below, but I guess this question is rather generic.
There is this trivial example where you 1) acquire (connect) a client from the pool in the top-level HTTP request handler, 2) do all the business inside that handler, and 3) release it back to the pool after you're done.
I guess it works fine for that example, but as soon as your app gets somewhat bigger this quickly becomes painful.
I'm thinking of these two options, but I'm not quite sure...
do the "get client + work + release client" approach everywhere I need to talk to db.
This seems like a good choice, but will it not lead to eating up more than one connection/client per the top http request (there are parallel async db calls in many places in my project)?
try to assign a globaly shared reference to one client/connection accessible via require()
Is this a good idea and actually reasonably doable? Is it possible to nicely handle the "back to the pool release" in all ugly cases (errors in parallel async stuff for example)?
Thank you.
Well, I lost some time trying to figure that out. In the end, after some consideration and influenced by John Papa's code, I decided to use a database module like this:
var Q = require('q');
var MongoClient = require('mongodb').MongoClient;

module.exports.getDb = getDb;

var db = null;

function getDb() {
    return Q.promise(theDb);

    function theDb(resolve, reject, notify) {
        if (db) {
            resolve(db);
        } else {
            // mongourl and mongoOptions are defined elsewhere in the module
            MongoClient.connect(mongourl, mongoOptions, function(err, theDb) {
                if (err) {
                    reject(err);
                } else {
                    db = theDb;
                    resolve(db);
                }
            });
        }
    }
}
So, when I need to perform a query:
getDb().then(function(db) {
    // perform query here
});
At least for MongoDB this is good practice, as seen here.
The best advice depends on the type of database and the underlying framework/driver that represents it.
In the case of Postgres, the basic framework/driver is node-postgres, which has built-in support for connection pooling. That support is, however, low-level.
For high-level access see pg-promise, which provides automatic connection management, support for tasks, transactions and much more.
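For illustration, a minimal pg-promise sketch might look like this (the connection string is just a placeholder; the library manages acquiring and releasing pooled connections for each query):

var pgp = require('pg-promise')();
var db = pgp('postgres://localhost/postgres');

// Each query transparently checks a connection out of the pool and returns it.
db.any('SELECT version()')
    .then(function(rows) {
        console.log(rows);
    })
    .catch(function(error) {
        console.error(error);
    });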
Here is what has worked well for me.
var pg = require('pg');
var config = { pg: 'postgres://localhost/postgres' };

pg.connect(config.pg, function(err, client, done) {
    if (err) { return console.error(err); }
    client.query('SELECT version();', function (err, results) {
        done(); // release the client back to the pool
        // do something with results.rows
    });
});

Why isn't my Q promise working?

I'm new to promises and Q, and I'm trying to convert a route that uses query in node-mysql. Here are excerpts from my code:
var Q = require('q');

// connection defined elsewhere
router.get('/', function(req, res) {
    var query = Q.denodeify(connection.query);
    var promise = query("-- query omitted --", [req.user.id]);
    promise.then(console.log, console.error);
});
I'm trying to convert this from an existing set up that doesn't use promises, so I know the connection is set up properly and the query is valid. Whenever I try to request this route, I get the same message in stderr:
[TypeError: Cannot read property 'connectionConfig' of undefined]
I don't know where it's coming from, so I'm not sure how to find the full stack trace. I also tried an alternate version of this code where I used my own functions instead of console.log and console.error, but those functions were never called and the same error appeared.
Updated answer:
You're probably losing the binding to connection (its this context) when you denodeify the query method.
Looking at the Q documentation it says this can be an issue:
If you are working with methods, instead of simple functions, you can easily run in to the usual problems where passing a method to another function—like Q.nfcall—"un-binds" the method from its owner.
To fix this, try using Q.nbind instead:
var query = Q.nbind(connection.query, connection);
var promise = query("-- query omitted --", [req.user.id]);
promise.then(console.log, console.error);
Original answer:
Looking at the node-mysql source, the only place connectionConfig is accessed is in Pool.js. So my guess would be that you've got an issue with how the pool is being configured.

Unit testing with Bookshelf.js and knex.js

I'm relatively new to Node and am working on a project using knex and bookshelf. I'm having a little bit of trouble unit testing my code and I'm not sure what I'm doing wrong.
Basically I have a model (called VorcuProduct) that looks like this:
var VorcuProduct = bs.Model.extend({
    tableName: 'vorcu_products'
});

module.exports.VorcuProduct = VorcuProduct;
And a function that saves a VorcuProduct if it does not exist on the DB. Quite simple. The function doing this looks like this:
function subscribeToUpdates(productInformation, callback) {
    model.VorcuProduct
        .where({product_id: productInformation.product_id, store_id: productInformation.store_id})
        .fetch()
        .then(function(existing_model) {
            if (existing_model == undefined) {
                new model.VorcuProduct(productInformation)
                    .save()
                    .then(function(new_model) { callback(null, new_model); })
                    .catch(callback);
            } else {
                callback(null, existing_model);
            }
        });
}
What is the correct way to test this without hitting the DB? Do I need to mock fetch to return a model or undefined (depending on the test) and then do the same with save? Should I use rewire for this?
As you can see I'm a little bit lost, so any help will be appreciated.
Thanks!
I have been using in-memory Sqlite3 databases for automated testing with great success. My tests take 10 to 15 minutes to run against MySQL, but only 30 seconds or so with an in-memory sqlite3 database. Use :memory: for your connection string to utilize this technique.
A note about unit testing - this is not true unit testing, since we're still running a query against a database. This is technically integration testing; however, it runs within a reasonable time period, and if you have a query-heavy application (like mine) then this technique is going to prove more effective at catching bugs than unit testing anyway.
Gotchas - Knex/Bookshelf initializes the connection at the start of the application, which means that you keep the context between tests. I would recommend writing a schema create/destroy script so that you can build and destroy the tables for each test. Also, SQLite3 is less strict about foreign key constraints than MySQL or PostgreSQL, so make sure you run your app against one of those every now and then to ensure that your constraints will work properly.
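For reference, a knex/bookshelf setup pointing at an in-memory SQLite database might look roughly like this (a sketch; the table definition is only an illustrative example):

var knex = require('knex')({
    client: 'sqlite3',
    connection: { filename: ':memory:' },
    useNullAsDefault: true
});
var bookshelf = require('bookshelf')(knex);

// Build the schema before each test and tear it down afterwards.
function createSchema() {
    return knex.schema.createTable('vorcu_products', function(table) {
        table.increments('id');
        table.integer('product_id');
        table.integer('store_id');
    });
}

function destroySchema() {
    return knex.schema.dropTableIfExists('vorcu_products');
}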
This is actually a great question which brings up both the value and limitations of unit testing.
In this particular case the non-stubbed logic is pretty simple -- just a simple if block -- so it's arguable whether this is worth the unit-testing effort; the accepted answer is a good one and points out the value of small-scale integration testing.
On the other hand the exercise of doing unit testing is still valuable in that it points out opportunities for code improvements. In general if the tests are too complicated, the underlying code can probably use some refactoring. In this case a doesProductExist function can likely be refactored out. Returning the promises from knex/bookshelf instead of converting to callbacks would also be a helpful simplification.
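For illustration, a promise-returning version of subscribeToUpdates might look roughly like this (a sketch, not the original code; it relies on fetch resolving with null when no row matches):

function subscribeToUpdates(productInformation) {
    return model.VorcuProduct
        .where({
            product_id: productInformation.product_id,
            store_id: productInformation.store_id
        })
        .fetch()
        .then(function(existingModel) {
            // Save a new product only when nothing was found.
            return existingModel || new model.VorcuProduct(productInformation).save();
        });
}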
But for comparison here's my take on what true unit-testing of the existing code would look like:
var rewire = require('rewire');
var sinon = require('sinon');
var chai = require('chai');
var sinonChai = require('sinon-chai');
chai.use(sinonChai);
var expect = chai.expect;
var Promise = require('bluebird');

var subscribeToUpdatesModule = rewire('./service/subscribe_to_updates_module');
var subscribeToUpdates = subscribeToUpdatesModule.__get__('subscribeToUpdates');

describe('subscribeToUpdates', function () {
    beforeEach(function () {
        var self = this;
        this.sandbox = sinon.sandbox.create();
        var VorcuProduct = subscribeToUpdatesModule.__get__('model').VorcuProduct;
        // Resolve the stubbed results lazily so each it() block can set them first.
        this.saveStub = this.sandbox.stub(VorcuProduct.prototype, 'save', function () {
            return self.saveResultPromise;
        });
        this.fetchStub = this.sandbox.spy(function () {
            return self.fetchResultPromise;
        });
        this.sandbox.stub(VorcuProduct, 'where', function () {
            return { fetch: self.fetchStub };
        });
    });
    afterEach(function () {
        this.sandbox.restore();
    });
    it('calls save when fetch finds no existing model', function (done) {
        var self = this;
        this.fetchResultPromise = Promise.resolve(null);
        this.saveResultPromise = Promise.resolve('save result');
        var callback = function (err, result) {
            expect(err).to.be.null;
            expect(self.saveStub).to.be.called;
            expect(result).to.equal('save result');
            done();
        };
        subscribeToUpdates({}, callback);
    });
    // ... more it(...) blocks
});

The smallest layer around node-mongodb-native

I wrote what is possibly the smallest wrapper around the node-mongodb-native driver. But I feel that it needs improving.
It's so small it fits here comfortably:
var mongo = require('mongodb');

function MongoWrapper() {
    this.db = null;
}

var mongoWrapper;
module.exports = exports = mongoWrapper = new MongoWrapper();

// This means that you can do `new include('mongoWrapper').MongoWrapper()`
mongoWrapper.MongoWrapper = MongoWrapper;

// ObjectId is the most handy method of all. This will work with
// native BSON or pure BSON.
mongoWrapper.ObjectId = function() {
    if (!mongo.BSONNative || !mongo.BSONNative.ObjectID) {
        return function(id) {
            return mongo.BSONPure.ObjectID.createFromHexString(id);
        };
    }
    return function(id) {
        return new mongo.BSONNative.ObjectID(id);
    };
}();

MongoWrapper.prototype.connect = function(url, options, cb) {
    var that = this;
    var MongoClient = mongo.MongoClient;
    MongoClient.connect(url, options, function(err, db) {
        if (err) {
            console.log(err);
        } else {
            that.db = db;
        }
        cb(err, db);
    });
};
Now... The "problem" with this is that I need to wrap my whole server in a callback:
mw.connect('mongodb://localhost/hotplate', {}, function( err, db ){
    app.configure(function(){
        app.set('port', process.env.PORT || 3000);
        app.set('views', __dirname + '/views');
        ...
        app.use(express.session({
            // secret: settings.cookie_secret,
            secret: 'woodchucks are nasty animals',
            store: new MongoStore({
                // db: settings.db
                // db: hotplate.get('db').client
                db: db
            })
        }));
Other drivers (like Mongoose, or even mongojs) manage not to force you to use the callback. I looked at their code and... well, I didn't quite get it. Mongojs in particular seems to use a library for promises, but I am having trouble understanding it.
Note that express.session, for example, wants a fully working connection as a parameter (which is what I pass here). Without wrapping the setup in the callback, you cannot actually be sure that the connection will be set.
So: what's the easiest way to get rid of the need for a callback?
The basic idea, I suppose, would be to "clone" the mongodb API calls, wrapping them with code to handle the possibility that the "db" variable is not set. But... how would that work?
Any help would be greatly appreciated!
Merc.
Eventually, you'll hit a situation where you absolutely need to wait for the connection to complete before continuing, as it is async. And without a callback, it won't work (the MongoClient requires a callback).
You could use an Event to wrap it -- but that's just a different type of callback really (conceptually). That's what Mongoose does -- it raises an event when the connection is ready, open.
Using Node.js, there isn't a solution that doesn't involve either an event or callback, somewhere (that's an intentional design choice of Node and the MongoDB driver). It's an async connection in the driver. You just need to delay some of the express setup until after the connection is opened. It only needs to happen at app startup.
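For illustration, one common pattern is to keep the single startup callback but confine it to bootstrapping (a sketch; configureApp is a hypothetical helper that would contain the express setup shown above):

mw.connect('mongodb://localhost/hotplate', {}, function(err, db) {
    if (err) throw err;          // fail fast if the DB is unreachable at startup
    configureApp(db);            // hypothetical helper: runs the express/session setup
    app.listen(app.get('port')); // start accepting requests only once the db is ready
});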
I realize this question is a little old, but I use this tiny little wrapper to do the "lifting" and for a small amount of sugar so my db code is a little less verbose. Things like findById without having to wrap the ObjectId, and findArray without having to toArray() a query. Check it out:
https://github.com/dmcaulay/mongo-wrapper
