Mongoose connection/models: Need to always run on open? - node.js

I am using Mongoose 3, and the most obvious way to connect to the database is
conn = mongoose.createConnection(...)
conn.on("open", ...)
Question is, do I need to define all my models in the open callback? If that is so, I will have to create an initMongoose.coffee that looks like
# initMongoose.coffee
mongoose = require "mongoose"
module.exports = mongoose.createConnection ...
# modelExample.coffee
conn = require "./initMongoose"
conn.on "open", ->
  # ... define the model here?
  module.exports = model # I think this does not work?
I think I read somewhere in Node docs that modules cannot be defined in a callback like that?
Since I am only using 1 connection, I think I can use
mongoose.connect ...
which doesn't accept a callback, so I suppose it is synchronous? Can I define all my models, and thus run queries, right after connect()? It works at the moment, but that might just be because the connection is fast enough.

Mongoose buffers commands until it has finished connecting, so you can treat it as if it were synchronous: define your models and start using the library whenever you want. Only when you actually want to insert or retrieve data does the connection need to be established.
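For example, here is a minimal JavaScript sketch (assuming the default connection via mongoose.connect() and a hypothetical Cat model) showing that models can be defined and queried before the connection is open:
// models/cat.js
var mongoose = require('mongoose');

// Defining the model before connecting is fine; Mongoose buffers commands.
module.exports = mongoose.model('Cat', new mongoose.Schema({ name: String }));

// app.js
var mongoose = require('mongoose');
var Cat = require('./models/cat');

mongoose.connect('mongodb://localhost/test');

// This query is buffered and only runs once the connection is open.
Cat.find({}, function (err, cats) {
  console.log(err || cats);
});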

Related

Jest mock knex fn.now(), but keep rest of implementation

I am writing tests for a service that uses knex; however, since the knex calls have several uses of knex.fn.now(), my tests will produce varied results over time. I'm wondering if it's possible to mock/spy/hijack the inner calls to knex.fn.now() with something I can control, while letting the rest of the code stay in its 'real' implementation. I can only find examples of mocking knex completely, which would defeat the purpose of my testing.
So I'm wondering if it's possible to have Jest listen for a specific function call and insert another value in its stead.
You can mock the knex package by creating a manual mock file at __mocks__/knex/index.js.
Inside this file you can require the real knex implementation, change it, and export it.
It should look something like this:
// __mocks__/knex/index.js
// Require the real knex implementation, then override knex.fn.now
// so that it always returns the same timestamp.
const knex = require('knex');
const fixedTime = new Date();
knex.fn.now = () => fixedTime;
module.exports = knex;
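For node modules, Jest picks up manual mocks in __mocks__ automatically, so a test just requires knex as usual. As a rough usage sketch (the test below only checks the mocked helper itself, not a real service):
// someService.test.js
const knex = require('knex'); // resolves to __mocks__/knex/index.js

test('knex.fn.now() is stable across calls', () => {
  expect(knex.fn.now()).toBe(knex.fn.now());
});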

Express4: Storing db instance

In Express 4, is it bad practice to store the db instance in app.locals, or to store it using app.set()? I was thinking about it because I will need it throughout my app, and this would make it easier to access.
It should work just fine and no, I don't think it's bad practice (at least not horrible) - after all, app.locals is intended to provide you a safe place to put your global values.
However, using Express to store miscellaneous global values like this does tie your application tightly to Express. If you ever decide that you want to remove Express and replace it with something else, you're going to have to hunt down and change all those references to app.locals that are now scattered throughout your code.
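For reference, the app.locals approach from the question looks roughly like this (a sketch assuming a hypothetical createDbConnection() helper and a callback-style query() method):
// app.js
const express = require('express');
const app = express();

// createDbConnection() stands in for whatever driver call opens the connection.
app.locals.db = createDbConnection();

// Anywhere you have a request object, the instance is reachable via req.app.locals.
app.get('/users', (req, res) => {
  req.app.locals.db.query('SELECT * FROM users', (err, rows) => {
    res.json(rows);
  });
});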
If you want to avoid this, one simple pattern is to create a module exporting the value you want - this allows you to keep all the associated code in one place and import it whenever you need it. For example:
// modules/database.js
// initialize the database
const db = initializeDatabase();
// export a "getter" for the database instance
export const get = () => db;
Then, when you want to use the database instance:
// index.js
// import the database "getter"
import { get } from './modules/database';
// perform a query
const rows = get().query('SELECT * FROM table');
Just import modules/database anywhere you want to use the database.

"Global" module object in Node.js

I have a module for connecting to my DB and perform actions.
Then in my main script (app.js) I instantiate it like
var DBConn = require('./common/DBConn');
And from that script it works fine. But then I've got some other scripts that handle routes, and I want to perform some DB operations in those, but if I use DBConn there I get an error saying "DBConn is not defined".
I could just instantiate another DBConn in each of these other js files, but that would mean creating a connection per file, right? What I want is for these other scripts to use the DBConn object from app.js, so that I'm not constantly establishing a connection to the DB and then closing it... (unless that is a good idea, but to me it makes more sense to have just one "global" object handling the connection for the whole app and that's it).
(BTW: I'm using Express)
You want to require() your module in each file. Node will cache the module.
Typically, the context of a DB connection is abstracted away behind stores or repositories and your other modules interact with those. In cases where people are directly requiring modules like mongoose, they'll require mongoose everywhere but only call the connection code within their main application entry point (app.js/server.js/whatever).
https://nodejs.org/api/modules.html#modules_caching
Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.
Multiple calls to require('foo') may not cause the module code to be executed multiple times. This is an important feature. With it, "partially done" objects can be returned, thus allowing transitive dependencies to be loaded even when they would cause cycles.
If you want to have a module execute code multiple times, then export a function, and call that function.
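In practice, that caching is all you need for a shared connection: put the connection in a module and require it wherever it's needed. A minimal sketch, assuming a hypothetical MakeDbConnection() helper:
// common/DBConn.js
// This top-level code runs only once, on the first require().
// MakeDbConnection() stands in for whatever driver call opens the connection.
var connection = MakeDbConnection();
module.exports = connection;

// routes/users.js
var DBConn = require('../common/DBConn'); // same cached connection as in app.js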
You could use a singleton to solve this issue for you. Please remember, however, that singletons come with their own set of problems (a good discussion of singletons can be found here: What is so bad about singletons?).
That said, once you take into consideration the pros and cons of singletons, they can be a fantastic solution to exactly this problem.
Example:
// database.js
// MakeDbConnection() stands in for whatever driver call opens the connection.
var singleton = function singleton() {
    this.DbConnection = {};
    // Expose setup() on the instance so callers can initialise the connection.
    this.setup = function setup(database, username, password) {
        this.DbConnection = MakeDbConnection(database, username, password);
    };
};

singleton.instance = null;

singleton.getInstance = function () {
    if (this.instance === null) {
        this.instance = new singleton();
    }
    return this.instance;
};

module.exports = singleton.getInstance();
This can then be called elsewhere in your application like so...
// main.js
var db = require("./database");
db.setup('db', 'root', 'pass');
// db.DbConnection now holds the live connection.
Once you have called db.setup() the first time, the db connection is available anywhere else with just the following:
// elsewhere.js
var db = require("./database");
var conn = db.DbConnection; // the same connection object; no new connection is made

Node.js requiring a script but not running it

In Node.js, when you do
var otherscript = require('otherscript');
it runs the script upon the require
I am wondering if there is a way to "require" a script without running it, so that you can run it later when you want to.
Is there any good reason why not?
If you can edit 'otherscript' (i.e., no one else is using that script), then you can simply enclose the whole code inside a function and export it.
Example:
otherscript:
module.exports = function () {
    // original code goes here
};
Then use as:
var otherscript = require('otherscript');
var obj = otherscript();
When you require a file or module, its exports are cached. In other words, the module code is really only executed once, and subsequent calls to require() of that file/module will just return a reference to the exact same object.
A common example of this is the Mongoose package, where calling require('mongoose') will return an instance of mongoose, on which you can call connect() to connect to the database. Calling require('mongoose') again in a different part of your program will return the same instance with the same database connection made available.
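A small sketch of that behaviour, using a hypothetical counter module:
// counter.js
// This top-level code runs only once, on the first require().
console.log('counter module loaded');
var count = 0;
module.exports = function () { return ++count; };

// main.js
var counterA = require('./counter'); // logs "counter module loaded"
var counterB = require('./counter'); // no log: the cached exports are returned
console.log(counterA === counterB);  // true - same function object
console.log(counterA());             // 1
console.log(counterB());             // 2 - shared state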

Mongoose and commander

I'm writing some scripts for some command-line manipulation of Mongoose models with commander.js (eventually, I'd like to run these tools using Cron).
Now, I've written several scripts with commander and they all work fine, but if I connect to the MongoDB database using mongoose, the script just hangs after it's done. Now, I figured the database connection is keeping node alive, so I added a mongoose.disconnect() line and it still hangs.
The only thing I found that allows me to shutdown is to use process.exit(), but I'm reluctant to just terminate the process. Is there something in particular that I should do to trigger a graceful shutdown?
My reading of the API docs implies that .disconnect() must be given a callback function. It looks like the callback is called for each connection that's disconnected and may be passed an error.
There is a check in the code to make sure the callback isn't called if it doesn't exist when things work out, but that check isn't run on errors, so if Mongoose received an error from the MongoDB client, it may be leaving a connection open, and that's why it's not stopping execution.
If you're only opening a single connection to the database, you may just want to call [Connection object].close(), since that function correctly inserts a no-op "callback" if no callback is given, and it looks like it will correctly tear everything down.
(The more I look into Mongoose, the more I want to just write a thin wrapper around the MongoDB client so I don't have to deal with Mongoose's "help.")
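A minimal sketch of the close() suggestion above, assuming a single default connection opened with mongoose.connect():
// Once all the work is done:
mongoose.connection.close(function (err) {
  if (err) console.error(err);
  // With the connection closed, the process can exit on its own.
});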
I use the async library's series() to perform the operations and then call mongoose.connection.close() on completion. It prevents callback hell and allows you to neatly perform operations either one at a time or in parallel, followed by a function that runs once all the other methods have completed. I use it all the time for scripts that require mongoose but are meant to terminate after all the mongoose operations are finished.
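A sketch of that pattern, assuming the async library and a hypothetical User model:
var async = require('async');
var mongoose = require('mongoose');
var User = require('./models/user'); // hypothetical model

mongoose.connect('mongodb://localhost/test');

async.series([
  function (done) { User.create({ name: 'alice' }, done); },
  function (done) { User.count(done); }
], function (err, results) {
  if (err) console.error(err);
  // Close the connection so the script can terminate.
  mongoose.connection.close();
});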
Shutting down the node program directly is hiding the symptoms, not fixing the problem!
I finally isolated the problem and found it to be with the Mongoose schema definitions. If you try to shut down the connection too soon after the Mongoose schemas are defined [1], the application hangs and eventually produces some weird MongoDB-related error.
Adding a small timeout before running the program.parse(argv) line to run the commander application fixes the problem. Just wrap the code like so:
var program = require('commander')
  , mongoose = require('mongoose')
  , models = null
  ;

// Define command line syntax.
program
  .command(...)
  ;

mongoose.connect(
  ..., // connection parameters.
  function() {
    // Connected to the database; define the schemas.
    models = require('./models');

    // Wait 1 second before running the application code.
    setTimeout(function(){
      program.parse(process.argv);
    }, 1000);
  }
);
[1]: This is my initial interpretation; I have not (yet) extensively tested this theory. However, removing the Mongoose schema definitions from the application does successfully prevent it from hanging.
Actually, just using process.nextTick() instead of the setTimeout() call fixes the situation nicely!
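That is, the setTimeout() wrapper above can be replaced with:
process.nextTick(function () {
  program.parse(process.argv);
});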
