Express4: Storing db instance - node.js

In Express 4, is it bad practice to store the db instance in app.locals, or to store it using app.set? I was thinking about it because I will need the db throughout my app, and this would make it easier to access.

It should work just fine and no, I don't think it's bad practice (at least not horrible) - after all, app.locals is intended to provide you a safe place to put your global values.
However, using Express to store miscellaneous global values like this does leave your application tightly bound to Express. If you ever decide to remove Express and replace it with something else, you're going to have to hunt down and change all those references to app.locals that are now scattered throughout your code.
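For reference, here is a minimal sketch of the app.locals approach under discussion (initializeDatabase and the query call are placeholders, not a real library API):
// index.js -- minimal sketch of the app.locals approach
const express = require('express');
const app = express();
// store the db instance once during startup
app.locals.db = initializeDatabase(); // hypothetical helper that returns a connected client
// any route handler can then reach it via req.app.locals
app.get('/users', (req, res) => {
  req.app.locals.db.query('SELECT * FROM users', (err, rows) => {
    if (err) return res.sendStatus(500);
    res.json(rows);
  });
});
app.listen(3000);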
If you want to avoid this, one simple pattern is to create a module exporting the value you want - this allows you to keep all the associated code in one place and import it whenever you need it. For example:
// modules/database.js
// initialize the database
const db = initializeDatabase();
// export a "getter" for the database instance
export const get = () => db;
Then, when you want to use the database instance:
// index.js
// import the database "getter"
import { get } from './modules/database';
// perform a query
const rows = get().query('SELECT * FROM table');
Just import modules/database anywhere you want to use the database.

Related

Expressjs higher order router vs appending to request

Let's say I want to pass an object to an ExpressJS route callback.
I know I can append it to app:
// router.js
const getFoo = (req, res) => res.json(req.app.foo);
// index.js
const app = express();
app.foo = {};
app.get('/foo', getFoo);
or I can use a higher order function:
// router.js
const getFoo = foo => (req, res) => res.json(foo);
// index.js
const app = express();
const foo = {};
app.get('/foo', getFoo(foo));
Both are easy to write, extend and test.
But I don't know the implications of each solution, or whether one is better.
Does anyone know the real differences between the two approaches?
I think the second solution is more correct; here's why.
Imagine you get used to the first solution, and one day you need to store something called post or get, or anything else that shares a name with an existing app property, and you forget that the property already exists. You override the original property without even realizing it, and when you call app.post() the program will crash.
Believe me, you don't want to waste hours of research on something like that, only to realize you simply overrode an original method.
Also, in my opinion, it's always a bad idea to mutate an object you didn't create.
As @vahe-yavrumian mentioned, it is not a good idea to mutate the state of an object created by a third-party library.
By the way, you can also use the app.set() and app.get() methods to pass data to the other routers in the queue (it seems those methods are there for exactly this purpose).
More information at https://expressjs.com/en/api.html.
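For illustration, a minimal sketch of that app.set()/app.get() approach might look like this (the setting name and route are placeholders):
// index.js -- sketch of storing a value in the app settings
const express = require('express');
const app = express();
app.set('foo', { greeting: 'hello' }); // store the value in the app settings
app.get('/foo', (req, res) => {
  // app.get(name) with a single string argument reads a setting;
  // with a path and a handler (as on the outer call) it defines a route
  res.json(req.app.get('foo'));
});
app.listen(3000);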
The second solution easily allows you to pass different value for foo on different routes, if you ever found a need to do that.
The first solution essentially puts the value on the app singleton, which has all the implications of using singletons. (And, as mentioned by @Anees, for Express specifically the app settings accessed with get and set are the proper place to store this, not a custom property.)

What is the best way to separate your code in a Node.js application?

I'm working on a MEAN (Mongo Express.js Angular Node.js) CRUD application. I have it working but everything is in one .js file. The single source code file is quite large. I want to refactor the code so CRUD functionality is in different source code files. Reading through other posts, I've got a working model but am not sure it is the right way in Node using Mongo to get it done.
Here's the code so far:
var express = require('express');
var bodyParser = require('body-parser');
var app = express();
var path = require('path');
var db;
var connect = 'mongodb://<<mongodb connect string>>';
const MongoClient = require('mongodb').MongoClient;
var ObjectID = require("mongodb").ObjectID;

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(express.static(__dirname + '/'));

// viewed at http://localhost:<<port referenced in app.listen>>
app.get('/', (req, res) => {
  res.sendFile(path.join(__dirname + '/index.html'));
});

MongoClient.connect(connect, (err, database) => {
  if (err) return console.log(err);
  db = database;
  app.listen(3000, () => {
    console.log('listening on 3000 ' + Date());
    // Here's the require for the search function in another source code file.
    var searchroute = require('./serverSearch')(app, db);
  });
});
// Handlers
The rest of the CRUD application's functions use app.post, app.get, and so on. These are the other functions I want to move into different source code files, like serverSearch.js.
The code I have separated out so far is the search functionality, which lives inside the MongoClient.connect callback. That callback has to complete successfully so that the variable db is valid before both app and db are passed to the search function built out in serverSearch.js.
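For context, a serverSearch.js written in that style might look roughly like this (the route, collection name, and query are placeholders):
// serverSearch.js -- rough sketch of the export-a-function pattern described above
module.exports = function (app, db) {
  app.get('/search', (req, res) => {
    db.collection('items')
      .find({ name: req.query.q })
      .toArray((err, results) => {
        if (err) return res.status(500).send(err);
        res.json(results);
      });
  });
};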
I could now build out my other CRUD functions in separate files and put them in the same area as var searchroute = require('./serverSearch')(app, db);.
Is this the best way to separate code in a MEAN application where the main app and db variables need to be instantiated and then passed to functions in other source code files?
What you are basically describing is modular coding, heading towards "services" or perhaps even micro-services. There are a few factors to keep in mind for your system (and I have no doubt there are many other approaches to this, by the way). Basically, in most NodeJS systems I have worked on (not all), I try to apply the following architecture in development and then bring over as much of it as possible into production.
Create a directory under the main one. I usually give it some type of name that points to the term "functions". In this directory I maintain function and/or class files divided into categories: function wrappers for the DB are held in a DB functions file, and that file only contains functions for the DB. Security functions go in another file, helper functions in another, time manipulation in another. I am sure you get the idea. These are all wrapped in module exports.
Now, in any file in my project where, say, I need the DB and the helpers, I start with:
let nhelpers = require("helpfuncs");
let ndb = require("dbfuncs");
Obviously names are different.
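As a rough illustration of that layout (directory, file, and function names are only placeholders), a DB functions file might look like:
// functions/dbfuncs.js -- illustrative sketch only
const { MongoClient } = require('mongodb');
let db;
// connect once at startup and cache the handle
async function connect(uri, dbName) {
  const client = await MongoClient.connect(uri);
  db = client.db(dbName);
  return db;
}
// every other DB wrapper in this file uses the cached handle
function findAll(collection) {
  return db.collection(collection).find({}).toArray();
}
module.exports = { connect, findAll };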
And btw I divide all the NPM packages in the same way under an environment directory.
Maintaining that kind of structure allows you to maintain sane order over the code, logical chaining in any decent IDE, and having relevant methods show up in your IDE without having to remember every function name and all the methods within.
It also allows you to write an orderly system of micro-services, making sure each part does exactly what you want, and allows for sane debugging.
It took me awhile to settle on this method and refine it.
It paid off for me. Hope this helps.
Edit to clarify for the OP:
When it comes to the process.env variables I became a great fan of dotenv https://www.npmjs.com/package/dotenv
This little package has saved me an incredible amount of headaches. Of course, you will have to decide whether to include it in production or not. I have seen arguments for both, but in a well set up AWS, Google, or Azure environment (or in Docker, of course) I am of the opinion it can safely be used.
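For reference, loading it is essentially a one-liner; a minimal sketch (the path and variable name are placeholders):
// load the .env file from a non-root location (see the caveats below)
require('dotenv').config({ path: './environment/.env' });
// values from the file are now available on process.env
const dbHost = process.env.DB_HOST; // e.g. DB_HOST=localhost in the .env file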
A couple of caveats.
Do not leave your dotenv file in the root. Move it somewhere else in your directory structure. It is simple and I actually put it in the same directory as all my environment files and helper files.
Remember it is simply a text file. So an IDE will not pick up your specific env variables in chaining. (Unless someone knows of a trick which I would love to hear about)
If you put env variables like access info to your DB system or other sensitive stuff, HASH THEM FIRST, put the hash in your env and have a function in your code which specifically just does the hash to the string. Do not under any conditions leave sensitive information in your environment file without hashing it first.
The final gotcha: these are not PHP magic globals which cannot be overwritten. If you lose track and overwrite one of those process.env variables in your code, it will keep the new value until you restart your Node app and it reads from the dotenv file again. (But then again, that is the rule for all environment variables, not only user defined ones.)
Excuse any typos above; this was written from my phone on the train.

NodeJS - Store Global variable

I'm currently trying to get a list of all of the members that are signed in. I'm using socket.js for this and using event handlers.
I have the following event:
eventEmitter.on('userSignedIn', function(socket, user) {
  socket.emit("userJoined", {currentUsers: user.first_name});
});
Then, in JQuery, I am doing the following:
socket.on('userJoined', function (message) {
  newItem = $("<li>Item " + message.currentUsers + "</li>").show();
  $('.users-list').prepend(newItem);
});
This, however, is not working and is not even showing the correct username(s).
Basically, what I want to do is: when the userSignedIn event fires, append the user that has just signed in to a users array and then send that array through the socket. But I'm struggling to see how I can store this inside the global app.
One way to store a global value in Node is through process.env.
If you need to store this as a global variable in Node, look at the dotenv module, install it (it is fairly simple to use), and use it to hold the original value of your counter. In your .env file you would have, say, counter=1. Then you can access it in your code as process.env.counter and increment it from there.
Remember: if your process stops running or goes down, the counter will return to its original state of 1 when you bring it back up.
If you wish, you can skip the dotenv module and simply create and access a process.env variable directly in your code. However, dotenv is very flexible, and you may want to use it for other purposes.
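A minimal sketch of that counter idea (assuming a .env file containing counter=1):
// note that process.env values are always strings
require('dotenv').config(); // loads counter=1 from the .env file
let counter = parseInt(process.env.counter, 10) || 1;
counter += 1;
process.env.counter = String(counter); // now visible anywhere in this process
console.log('current counter:', process.env.counter);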
Hope this helps

Attaching object to Node.js process

I am using the environment variable and arguments parsing module called nconf for my node.js Express web server.
https://github.com/indexzero/nconf
I decided that the best way to make the nconf data global was to simply attach it to the process object (as with process.env). Is this a good idea or a bad idea? Will it slow down execution by weighing down "process"?
Here is my code:
var nconf = require('nconf');
nconf.argv()
  .env()
  .file({ file: './config/config.json' });

nconf.defaults({
  'http': {
    'port': 3000
  }
});
process.nconf = nconf;
//now I can retrieve config settings anywhere like so process.nconf.get('key');
Frankly, I kind of like this solution. Now I can retrieve the config data anywhere without having to require a module. But there may be downsides to this... it could quite possibly be a very bad idea. I don't know.
It won't slow down the execution, but feels "smelly". It's hard to discover, and it will be difficult to test, if you ever decide you need to.
A better solution would be to attach settings to a module and use require() to import it wherever needed.
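For example, such a settings module could be as small as this (the values are placeholders):
// settings.js -- sketch of the module approach
module.exports = {
  http: { port: 3000 }
};

// elsewhere in the app
var settings = require('./settings');
console.log(settings.http.port); // 3000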
The best solution would be to just pass your settings object to the classes or modules that need it. Either directly, or as part of some kind of "global context".
Eg.
var global = {
  settings: {
    port: 8080
  }
};

//...

global.api = new Api(global);

//...

function Api(global) {
  var port = global.settings.port;
}
UPDATE: more info on why the original pattern is bad:
1) Discoverability
You attach your settings to process.settings and go off to a different project. A year later, someone else takes over or you need to update things. Will you remember you attached your settings to process.nconf? Or was it process.settings?
Now imagine you have 10 different global things, attached under different names, on different places.
It's not as bad as attaching directly to the global context, but it's certainly better to clearly see where the stuff you're using is coming from (constructor or module).
2) Testing
You decide you need to test your module. So now you need to tweak your settings for each test instead of loading them from a file or argv. How do you do that?
In case of the global process.nconf or require("settings") patterns, you need to do something like this:
function canOpenAPIOnTheConfiguredPort(done) {
  var nconfSaveApiPort = process.nconf.api.port;
  process.nconf.api.port = '1234';

  var api = new Api();
  test.assertEqual(api.port, '1234');

  process.nconf.api.port = nconfSaveApiPort;
  done();
}
As your application grows, this quickly becomes annoying (eg. imagine having to mock 10 things). In comparison, here's how you do it using the dependency injection (constructor) pattern.
function canOpenAPIOnTheConfiguredPort(done) {
  var api = new Api({
    port: '1234'
  });

  test.assertEqual(api.port, '1234');
  done();
}
Notice that nconf is a singleton.
I usually configure it at the very beginning of the program, and then when I need a setting in another file I do:
var nconf = require('nconf');
nconf.get('x');

Best way to reuse a large translation file within Node / Express

I'm new to Node but I figured I'd jump right in and start converting a PHP app into Node/Express. It's a bilingual app that uses gettext with PO/MO files. I found a Node module called node-gettext. I'd rather not convert the PO files into another format right now, so it seems this library is my only option.
So my concern is that right now, before every page render, I'm doing something like this:
exports.home_index = function(req, res)
{
  var gettext = require('node-gettext'),
      gt = new gettext();
  var fs = require('fs');

  gt.textdomain('de');
  var fileContents = fs.readFileSync('./locale/de.mo');
  gt.addTextdomain('de', fileContents);

  res.render(
    'home/index.ejs',
    { gt: gt }
  );
};
I'll also be using the translations in classes, so with how it's set up now I'd have to load the entire translation file again every time I want to translate something in another place.
The translation file is about 50k and I really don't like having to do file operations like this on every page load. In Node/Express, what would be the most efficient way to handle this (aside from a database)? Usually a user won't even be changing their language after the first time (if they're changing it from English).
EDIT:
Ok, I have no idea if this is a good approach, but it at least lets me reuse the translation file in other parts of the app without reloading it everywhere I need to get translated text.
In app.js:
var express = require('express'),
    app = express(),
    ...
    gettext = require('node-gettext'),
    gt = new gettext();
Then, also in app.js, I create the variable app.locals.gt to contain the gettext/translation object, and I include my middleware function:
app.locals.gt = gt;
app.use(locale());
In my middleware file I have this:
module.exports = function locale()
{
  return function(req, res, next)
  {
    // do stuff here to populate lang variable
    var fs = require('fs');

    req.app.locals.gt.textdomain(lang);
    var fileContents = fs.readFileSync('./locales/' + lang + '.mo');
    req.app.locals.gt.addTextdomain(lang, fileContents);

    next();
  };
};
It doesn't seem like a good idea to assign the loaded translation file to app, since depending on the current request that file will be one of two languages. If I assigned the loaded translation file to app instead of to a request variable, could that mix up users' languages?
Anyway, I know there's got to be a better way of doing this.
The simplest option would be to do the following:
Add this in app.js:
var languageDomains = {};
Then modify your Middleware:
module.exports = function locale()
{
  return function(req, res, next)
  {
    // do stuff here to populate lang variable
    if ( !req.app.locals.languageDomains[lang] ) {
      var fs = require('fs');
      var fileContents = fs.readFileSync('./locales/' + lang + '.mo');
      req.app.locals.languageDomains[lang] = true;
      req.app.locals.gt.addTextdomain(lang, fileContents);
    }

    req.textdomain = req.app.locals.gt.textdomain(lang);
    next();
  };
};
By checking whether the file has already been loaded, you prevent the load from happening multiple times, and the domain data stays resident in the server's memory. The downside to the simplicity of this solution is that if you ever change the contents of your .mo files whilst the server is running, the changes won't be taken into account. However, this code could be extended to keep an eye on the mtime of the files and reload accordingly, or make use of fs.watchFile, if required:
if ( !req.app.locals.languageDomains[lang] ) {
  var fs = require('fs'), filename = './locales/' + lang + '.mo';
  var fileContents = fs.readFileSync(filename);

  fs.watchFile(filename, function (curr, prev) {
    req.app.locals.gt.addTextdomain(lang, fs.readFileSync(filename));
  });

  req.app.locals.languageDomains[lang] = true;
  req.app.locals.gt.addTextdomain(lang, fileContents);
}
Warning: It should also be noted that using sync versions of functions outside of server initialisation is not a good idea because it can freeze the thread. You'd be better off changing your sync loading to the async equivalent.
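For illustration, the synchronous read in the middleware could be swapped for its async counterpart along these lines (a sketch only; it assumes next() is not called anywhere else on this branch):
fs.readFile('./locales/' + lang + '.mo', function (err, fileContents) {
  if (err) return next(err);
  req.app.locals.gt.addTextdomain(lang, fileContents);
  req.app.locals.languageDomains[lang] = true;
  next();
});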
After the above changes, rather than passing gt to your template, you should be able to use req.textdomain instead. It seems that the gettext library supports a number of methods directly on each domain object, which means you hopefully don't need to refer to the global gt object on a per-request basis (which would be changing its default domain on each request):
Each domain supports:
getTranslation
getComment
setComment
setTranslation
deleteTranslation
compilePO
compileMO
Taken from here:
https://github.com/andris9/node-gettext/blob/e193c67fdee439ab9710441ffd9dd96d027317b9/lib/domain.js
update
A little bit of further clarity.
Once the server has loaded the file into memory the first time, it should remain there for all subsequent connections it receives (for any visitor/request), because it is stored globally and won't be garbage collected — unless you remove all references to the data, which would mean gettext would need to have some kind of unload/forget domain method.
Node is different to PHP in that its environment is shared and wraps its own HTTP server (if you are using something like Express), which means it is very easy to remember data globally as it has a constant environment that all the code is executed within. PHP is always executed after the HTTP server has received and dealt with the request (e.g. Apache). Each PHP response is then executed in its own separate run-time, which means you have to rely on databases, sessions and cache stores to share even simple information and most resources.
further optimisations
Obviously, with the above you are still running translations on each page load, which means the gettext library will be using the translation data resident in memory, and that will take up processing time. To get around this, it would be best to make sure your URLs have something that makes them unique for each language, i.e. my-page/en/ or my.page.fr or even jp.domain.co.uk/my-page, and then enable some kind of full page caching using something like memcached or express-view-cache. However, once you start caching pages you need to make certain there aren't any regions that are user specific; if so, you need to start implementing more complicated systems that are sensitive to these areas.
Remember the golden rule of optimisation: don't do it before you need to. Basically, I wouldn't worry about page caching until you know it's going to be an issue, but it is always worth bearing in mind what your options are, as they should shape your code design.
update 2
Just to illustrate a bit further how a server running in JavaScript behaves: the global behaviour is not just a property of req.app, but of any object that is further up the scope chain.
So, as an example, instead of adding var languageDomains = {}; to your app.js, you could instantiate it further up the scope of wherever your middleware is placed. It's best to keep your global entities in one place however, so app.js is the better place, but this is just for illustration.
var languageDomains = {};

module.exports = function locale()
{
  /// you can still access languageDomains here, and it will behave
  /// globally for the entire server.
  languageDomains[lang]
}
So basically, whereas with PHP the entire code-base is re-executed on each request — so languageDomains would be instantiated anew each time — in Node the only part of the code to be re-executed is the code within locale() (because it is triggered as part of a new request). This function will still have a reference to the already existing and defined languageDomains via the scope chain. Because languageDomains is never reset (on a per-request basis) it will behave globally.
Concurrent users
Node.js is single threaded. This means that in order for it to be concurrent i.e. handle multiple requests at the "same" time, you have to code your app in such a way that each little part can be executed very quickly and then slip into a waiting state, whilst another part of another request is dealt with.
This is the reason for the asynchronous and callback nature of Node, and the reason to avoid Sync calls whilst your app is running. Any one Sync request could halt or freeze execution of the thread and delay handling for all other requests. The reason why I state this is to give you a better idea of how multiple users might interact with your code (and global objects).
Basically, once a request is being dealt with by your server, it is its only focus until that particular execution cycle ends, i.e. your request handler stops calling other code that needs to run synchronously. Once that happens, the next queued item is dealt with (a callback or something); this could be part of another request, or it could be the next part of the current request.
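As a toy illustration of that difference (the file name is a placeholder):
var fs = require('fs');

// blocks the whole process: no other request is handled until the read finishes
var data = fs.readFileSync('./locales/de.mo');

// yields back to the event loop: other requests can be handled while the read is pending
fs.readFile('./locales/de.mo', function (err, asyncData) {
  // continue handling this particular request here
});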

Resources