How to require different classes depending on env in Node.js

In Node, module imports (aka require()) are hard-coded in every file (aka module) that requires them. This can mean tens, or in our case hundreds, of duplicate imports. What we want to "require" in a dynamic way are mainly services, e.g. a "playerService" with find, update, get methods etc., but it could also be domain objects or persistence libraries.
The crux is that we have 3 versions of this "playerService" js file: one which does everything locally (in memory) for development, one which does everything with a local database (test), and one which does everything against an external system via an API (live). The switch in this case is on environment (dev, test or live).
It is worth noting we use classes everywhere we can, because we find functions which return functions of functions etc. to be unreadable/unmaintainable (we are Java developers really struggling with JS).
We are also exclusively using web sockets in our node app - there is no HTTP code.
So our services look like this:
const Player = require("./player")

class PlayerService {
  constructor(timeout) {
    this.timeout = 3000 // TODO: move the default into a config file
    if (timeout != null) { this.timeout = timeout }
  }

  updatePlayer(player) {
    // logic to look the player up in a local array and change it, for the dev version.
    // the test version would look the player up in the DB and update it.
  }
}

module.exports = PlayerService
We are familiar with dependency injection from Grails and Spring, but haven't found anything comprehensible (see below) for Node. Unfortunately we are not JavaScript or Node gurus, despite extensive reading.
Currently we are considering one of these options, but would like to hear any better suggestions:
Option 1:
Hard-code the "dev" requires, e.g. require("./dev/playerService").
Have a Jenkins build server rewrite the source code in every file to require("./test/playerService").
Option 2:
Hard-code the "dev" requires, e.g. require("./playerService").
Have a Jenkins build server swap the file ./test/playerService.js in for ./playerService.js.
Obviously both of these make it hard for developers to run the test or prod versions on their local machines without hacking the source.
Option 3:
1. Put the required module paths in a single config file.
2. Swap out just the config file per environment. E.g.:
let Config = require("./config")
let PlayerService = require(Config.playerService)
We have tried to make this dependent on env, with a global config that the development, test and prod configs override, but have not found an elegant way to do it. One way might be to duplicate this code at the top of every module:
let env = process.env.NODE_ENV || 'development'
let config = require('./config.' + env)
let PlayerService = require("./" + config.playerService)
Then in config.development.js:
var config = require("./config.global")
config.playerService = "devPlayerService"
module.exports = config
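For completeness, a config.global.js along these lines would fit this pattern (a sketch; the question references the file but never shows it):

// config.global.js (sketch)
var config = {};
config.timeout = 3000;
config.playerService = "playerService"; // default, overridden per environment
module.exports = config;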
Option 4:
Perhaps something like this would work:
let env = process.env.NODE_ENV || 'development'
let PlayerService = require("./" + env + "/playerService")
All the above solutions suffer from a lack of singletons - services are stateless. We are guessing that Node is going to construct a new copy of each service for each request (or web socket message in our case). Is there a way to minimise this?
Obviously some simple, readable, and officially maintained form of dependency injection would be nice, with some way to switch which set of classes gets injected.
We have read the following posts:
https://blog.risingstack.com/dependency-injection-in-node-js/ - the resulting code is unreadable (for us at least). The example being so contrived doesn't help: team is just some sort of proxy wrapper around User, not a service or anything useful. What is options? Why options?
https://medium.com/@Jeffijoe/dependency-injection-in-node-js-2016-edition-f2a88efdd427
But we found them incomprehensible. E.g. the examples have keywords which come from thin air - they don't seem to be JavaScript or Node commands, and the documentation never explains where they come from.
And looked at these projects:
https://github.com/jaredhanson/electrolyte
https://www.npmjs.com/package/node-dependency-injection
https://www.npmjs.com/package/di
but they seemed to be either abandoned (di), unmaintained, or something we just can't figure out (electrolyte).
Is there some standard or simple DI solution that many people are using, ideally documented for mortals and with an example that doesn't depend on Express?
UPDATE 1
It seems the pattern I am using to create my services creates a new instance every time it is used/called. Services should be singletons. The simple solution is to add this to the bottom of my services:
let playerService = new PlayerService();
module.exports = playerService;
Apparently, this only creates one instance of the object, no matter how many times require("./playerService") is called.
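This works because Node caches a module's exports after the first require, so every subsequent require of the same file returns the same object. A quick way to convince yourself:

// both requires hit Node's module cache and return the same instance
const a = require("./playerService");
const b = require("./playerService");
console.log(a === b); // true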

For keeping the configuration per env, the right way is probably (similar to what you suggested) keeping a config/env directory with a file per env, i.e. development.js, test.js etc., and putting the right values in each of them. E.g.:
module.exports = {
  playerService: 'dev/PlayerService'
}
and require it:
let env = process.env.NODE_ENV || 'development'
  , envConfig = require("./config/" + env)
  , playerService = require(envConfig.playerService)
You can also have it all in one file, like this:
config.js:
module.exports = {
  development: {
    playerService: '....'
  },
  test: {
    playerService: '....'
  }
}
and require it:
let env = process.env.NODE_ENV || 'development'
  , config = require("./config")
  , playerService = require(config[env].playerService)
This is a common use-case.
Or, if you have all services in directories per env, i.e. one directory for dev, one for test etc., you don't need the config; you can require like this:
let env = process.env.NODE_ENV || 'development'
  , playerService = require('./' + env + '/playerService')
Making the services singletons in Node.js should be simple; have a look at the following:
https://blog.risingstack.com/fundamental-node-js-design-patterns/
https://www.sitepoint.com/javascript-design-patterns-singleton/
Hope this helps.

Related

How to use `index.js` in a Node.js when creating an Express service?

Hi, I am structuring my Node.js project based on this, like so:
Root
  productName/
    index.js (contains requires for the product and the main export)
    productName.js (contains the application logic)
  test/
    test1.js
    test2.js
    ...
Now I have two questions:
1. What should logically go in index.js? At the moment I have this (would this be a good way to do things, and what else might I include in index.js?):
// index.js
var myServer = require('./myServer.js'); // "product name" = "myServer"

module.exports = {
  run: myServer.listen
}
2. Does it matter what I call the object key in module.exports (currently "run")? And why does the server always run when I execute index.js with $ node index.js - how does it automatically know to run myServer.listen?
P.S.: I am aware of web structure auto-generation tools; I just wish to understand the logical reason for this suggested structure (the idea of not having any logic in index.js).
As you mentioned, this is an Express service. If it is only handling the backend of some application, or more specifically is a backend-only application, I would suggest you rename index.js to server.js (thus explicitly stating that it processes all service requests).
But if not, then even index.js is fine.
Now for 1:
What you've put is absolutely fine. Apart from this, you could require all modules and routes (or controllers, whatever you name them) here, so that index.js serves as the entry point to your application. Try not to put any logic in here.
As for 2:
Actually, the server runs because Node executes the script in the file called index.js, and that script references myServer.listen (the require of ./myServer.js is what actually kicks the server off). If you had written console.log("Hello World") and used $ node index.js, it would've printed Hello World instead.
Node just executes whatever script is in index.js; in your case, that starts the server.
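To make the mechanics concrete, here is one plausible myServer.js that would produce exactly this behavior (a hypothetical sketch; the question doesn't show this file):

// myServer.js (hypothetical sketch)
var express = require('express');
var app = express();

// this call runs as a side effect the moment index.js requires this file,
// which is why "$ node index.js" starts the server
var server = app.listen(3000, function () {
  console.log('listening on 3000');
});

module.exports = server; // exposes server.listen, which index.js re-exports as "run"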
About why not to put anything else in index.js: for me, the reasoning I consider good enough is that it provides abstraction. As it is the entry point, I don't want index.js to worry about things like what to do with the data. I believe it should just provide a base to set up the server, thus following single responsibility to some extent. Also, I won't have to touch it for the project's lifetime unless some major change occurs, e.g. I decide to shift from Express to something else.
EDIT
Why have a key called run?
You seem to have answered this yourself (in the comments). A more proper description would be: you're attaching an object to module.exports, and since it is an object (similar to JSON), it is supposed to have keys, which could be anything, not necessarily run; it could've been hii. Now if you don't want to use a key and want to export only one thing, i.e. server.listen, then you could write module.exports = myServer.listen; instead of
module.exports = {
  hii: myServer.listen
}
Note that you could export more modules the way you did. For more details about module.exports, refer to this, or better, google about it, as this link might expire anytime and does not seem an ideal thing to put on SO.

What is the best way to separate your code in a Node.js application?

I'm working on a MEAN (MongoDB, Express.js, Angular, Node.js) CRUD application. I have it working, but everything is in one .js file, and that single source file is quite large. I want to refactor the code so the CRUD functionality lives in different source files. Reading through other posts, I've got a working model, but am not sure it is the right way to do this in Node with Mongo.
Here's the code so far:
var express = require('express');
var bodyParser = require('body-parser');
var app = express();
var path = require('path');
var db;
var connect = 'mongodb://<<mongodb connect string>>';
const MongoClient = require('mongodb').MongoClient;
var ObjectID = require("mongodb").ObjectID;

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(express.static(__dirname + '/'));

// viewed at http://localhost:<<port referenced in app.listen>>
app.get('/', (req, res) => {
  res.sendFile(path.join(__dirname + '/index.html'));
});

MongoClient.connect(connect, (err, database) => {
  if (err) return console.log(err)
  db = database
  app.listen(3000, () => {
    console.log('listening on 3000 ' + Date());
    // Here's the require for the search function in another source code file.
    var searchroute = require('./serverSearch')(app, db);
  })
})
// Handlers: the rest of the CRUD functions (app.post, app.get, etc.) go here.

The rest of the CRUD application consists of functions using app.post, app.get and so on. These are the other functions I want to move into different source code files, like serverSearch.js.
The code I separated out so far is the search functionality, which sits inside the MongoClient.connect callback. That callback has to execute successfully to make sure the variable 'db' is valid before passing both 'app' and 'db' to the search function built out in the source code file serverSearch.js.
I could now build out my other CRUD functions in separate files and put them in the same area as var searchroute = require('./serverSearch')(app, db);
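For reference, serverSearch.js in this pattern exports a function that receives app and db and registers its routes (a sketch; the route path and collection name are illustrative):

// serverSearch.js (sketch)
module.exports = function (app, db) {
  app.get('/search', function (req, res) {
    db.collection('players')
      .find({ name: req.query.name })
      .toArray(function (err, docs) {
        if (err) return res.status(500).send(err);
        res.json(docs);
      });
  });
};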
Is this the best way to separate code in a MEAN application where the main app and db vars need to be instantiated then passed to functions in other source code files?
What you are basically describing is modular coding, heading towards "services", perhaps even micro-services. There are a few factors to keep in mind for your system (I have no doubt that there are many other approaches to this, btw). Basically, in most Node.js systems I have worked on (not all), I try to apply the following architecture in development and then bring over as much as possible into production.
Create a directory under the main one. I usually use some type of name that points to the term functions. In this directory I maintain function and/or class files divided into categories: function wrappers for the DB are held in a DB-functions file, which contains only functions for the DB; security functions in another file; helper functions in another; time manipulation in another. I am sure you get the idea. These are all wrapped in module exports.
Now in any file in my project where, say, I need the DB and helpers, I start it with:
let nhelpers = require("helpfuncs");
let ndb = require("dbfuncs");
Obviously names are different.
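As an illustration, a minimal DB-functions file in this style might look like the following (the names and the MongoDB calls are hypothetical, just to show the wrapper idea):

// dbfuncs.js (hypothetical sketch)
let db; // set once at startup

function init(database) {
  db = database;
}

function findOne(collection, query, callback) {
  db.collection(collection).findOne(query, callback);
}

function insertOne(collection, doc, callback) {
  db.collection(collection).insertOne(doc, callback);
}

module.exports = { init, findOne, insertOne };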
And btw I divide all the NPM packages in the same way under an environment directory.
Maintaining that kind of structure allows you to keep sane order over the code, get logical chaining in any decent IDE, and have relevant methods show up in your IDE without having to remember every function name and all the methods within.
It also allows you to write an orderly system of micro-services, making sure each part does exactly what you want, and allows for sane debugging.
It took me a while to settle on this method and refine it.
It paid off for me. Hope this helps.
Edit to clarify for the OP:
When it comes to the process.env variables I became a great fan of dotenv https://www.npmjs.com/package/dotenv
This little package has saved me an incredible amount of headaches. Of course you will have to decide whether to include it in production or not. I have seen arguments for both, but I think that in a well-set-up AWS, Google or Azure environment (or in Docker, of course) it can safely be used.
A couple of caveats.
Do not leave your dotenv file in the root. Move it somewhere else in your directory structure. It is simple and I actually put it in the same directory as all my environment files and helper files.
Remember it is simply a text file. So an IDE will not pick up your specific env variables in chaining. (Unless someone knows of a trick which I would love to hear about)
If you put env variables like access info for your DB system or other sensitive stuff in there, HASH THEM FIRST, put the hash in your env, and have a function in your code which specifically just applies the hash to the string. Do not, under any conditions, leave sensitive information in your environment file without hashing it first.
The final gotcha: these are not PHP magic globals which cannot be overwritten. If you lose track and overwrite one of those process.env variables in your code, it will keep the new value until you restart your Node app and it reads from the dotenv file again. (But then again, that is the rule with all environment variables, not only user-defined ones.)
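A minimal usage sketch tying those caveats together (the path and variable name here are made up):

// load the .env file from a non-root location, as suggested above
require('dotenv').config({ path: './environment/.env' });

// values arrive as plain strings on process.env
const dbHost = process.env.DB_HOST;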
Excuse any typos above; written from my cell on the train.

Webpack Aliases in Node JS Server code

I'm building an isomorphic React/React-Router/Redux/Webpack application and I'm attempting to implement server side rendering.
My directory looks like:
/client
  /actions
  /components
  /containers
/server
  /server.js
In my webpack config, I have aliases set up for all the folders inside client:
var path_base = path.resolve(__dirname, '..');
const resolve = path.resolve;

const base = function () {
  var args = [path_base];
  args.push.apply(args, arguments);
  return resolve.apply(resolve, args);
};

const resolve_alias = base.bind(null, 'src/client');

const aliases = [
  'actions',
  'components',
  'constants',
  'containers',
  'middleware',
  'reducers',
  'routes',
  'store',
  'styles',
  'utils',
  'validation'
];
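(The snippet stops before the part that feeds these names into webpack; presumably it builds resolve.alias from them, roughly like this reconstructed sketch:)

// map each name to <root>/src/client/<name>
const alias = aliases.reduce(function (map, name) {
  map[name] = resolve_alias(name);
  return map;
}, {});

module.exports = {
  resolve: { alias: alias }
  // ...rest of the webpack config
};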
so that inside the code that gets bundled by webpack, I can do:
import { Widget } from 'components';
and that import gets resolved by webpack.
Now in my server code, in order to do the rendering, I have to import some of my client files, like routes/index.js. The problem I'm running into is that when I import my routes file, it uses a webpack alias to another file, say components or containers, so naturally the Node.js require system can't resolve it.
How do I fix something like that? I looked at this question and it talks about essentially setting up the same aliases that exist in webpack with mock-require. But then the issue becomes that my routes file imports all my components which then all import things like stylesheets, images, etc. Should I then be using something like webpack-isomorphic-tools?
The guides I've been looking at (this for example) are all great at showing how server side rendering is accomplished but none of them really talk about how to resolve all the requires and whatnot.
After battling with this issue for 2 days I settled on babel-plugin-webpack-alias.
What you need to do to resolve paths with that is:
1. $ npm install --save-dev babel-plugin-webpack-alias
2. Add the plugin to your .babelrc.
3. Add the aliases to your webpack.config (make sure you use path.join()).
4. Refer to this post if you have problems loading styles.
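For step 2, the .babelrc entry should look roughly like this (a sketch; check the plugin's README for the exact option names):

{
  "plugins": [
    ["babel-plugin-webpack-alias", { "config": "./webpack.config.js" }]
  ]
}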
The other option I tried was universal-webpack but I found it to be a bit verbose. If you want to see roughly how the whole server-side loading works, you can check out this video.
If you really want the aliases, run your server-side code through Babel and use this plugin: https://www.npmjs.com/package/babel-plugin-module-alias which will let you do the same thing as webpack.
Edit: this one works a lot better: https://github.com/jagrem/babel-resolve-relative-module - it allows multiple paths.
Try using NODE_PATH. Node will always look for a module in this path during require calls. It allows you to shortcut your relative paths as you want.
// turn this
import {Widget} from '../../components';
// into this
import {Widget} from 'components';
See Node.js docs for more information.
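For example, with the directory layout from the question, launching the server might look like this (a sketch):

$ NODE_PATH=./src/client node server/server.js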
P.S.: this thing is very sensitive, so use it carefully. Your code now tightly depends on the environment and may break somewhere.
If you use webpack-isomorphic-tools then it'll take your webpack config into account for your server side which will make all your aliases work.
https://www.npmjs.com/package/webpack-isomorphic-tools

Attaching object to Node.js process

I am using nconf, an environment variable and argument parsing module, for my Node.js Express web server.
https://github.com/indexzero/nconf
I decided that the best way to make the nconf data global was to simply attach it to the process variable (as with process.env). Is this a good idea or a bad idea? Will it slow down execution by weighing down "process"?
Here is my code:
var nconf = require('nconf');

nconf.argv()
  .env()
  .file({ file: './config/config.json' });

nconf.defaults({
  'http': {
    'port': 3000
  }
});

process.nconf = nconf;

// now I can retrieve config settings anywhere, like so: process.nconf.get('key');
Frankly, I kind of like this solution. Now I can retrieve the config data anywhere, without having to require a module. But there may be downsides to this... and it could quite possibly be a very bad idea. IDK.
It won't slow down execution, but it feels "smelly". It's hard to discover, and it will be difficult to test if you ever decide you need to.
A better solution would be to attach settings to a module and use require() to import it wherever needed.
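A sketch of that module approach, reusing the nconf setup from the question:

// settings.js
var nconf = require('nconf');

nconf.argv()
  .env()
  .file({ file: './config/config.json' });

module.exports = nconf;

// anywhere else: explicit and discoverable, no patching of process
var settings = require('./settings');
var port = settings.get('http:port');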
The best solution would be to just pass your settings object to the classes or modules that need it. Either directly, or as part of some kind of "global context".
Eg.
// note: this "global" is just a local variable name, shadowing
// Node's built-in global object within this module
var global = {
  settings: {
    port: 8080
  }
};

//...
global.api = new Api(global);

//...
function Api(global) {
  var port = global.settings.port;
}
UPDATE: more info on why the original pattern is bad:
1) Discoverability
You attach your settings to process.settings and go off to a different project. A year later, someone else takes over or you need to update things. Will you remember you attached your settings to process.nconf? Or was it process.settings?
Now imagine you have 10 different global things, attached under different names, on different places.
It's not as bad as attaching directly to the global context, but it's certainly better to clearly see where the stuff you're using is coming from (constructor or module).
2) Testing
You decide you need to test your module. So now you need to tweak your settings for each test instead of loading them from a file or argv. How do you do that?
In the case of the global process.nconf or require("settings") patterns, you need to do something like this:
function canOpenAPIOnTheConfiguredPort(done) {
  var nconfSaveApiPort = process.nconf.api.port;
  process.nconf.api.port = '1234';

  var api = new Api();
  test.assertEqual(api.port, '1234');

  process.nconf.api.port = nconfSaveApiPort;
  done();
}
As your application grows, this quickly becomes annoying (e.g. imagine having to mock 10 things). In comparison, here's how you do it using the dependency injection (constructor) pattern:
function canOpenAPIOnTheConfiguredPort(done) {
  var api = new Api({
    port: '1234'
  });
  test.assertEqual(api.port, '1234');
  done();
}
Notice that nconf is a singleton.
I usually configure it at the very beginning of the program, and then when I need a setting in another file I do:
var nconf = require('nconf');
nconf.get('x');

How to set process.env before a browserified script is run?

The initial HTML comes from the back-end. The server has a defined process.env.NODE_ENV (as well as other environment variables). The browserified code is built once and runs on multiple environments (staging, production, etc.), so it isn't possible to inline the environment variables into the browserified script (via envify, for example). I'd like to be able to write out the environment variables in the rendered HTML and have the browserified code use those variables. Is that possible?
Here's how I imagine that being done:
<html>
  <head>
    <script>window.process = {env: {NODE_ENV: 'production'}};</script>
    <script src="/build/browserified_app.js"></script>
  </head>
</html>
Instead of hardcoding environment variables here and there, use the envify plugin.
npm install envify
This plugin automatically replaces process.env.VARIABLE_HERE with what you passed as an argument to envify.
For example:
browserify index.js -t [ envify --DEBUG app:* --NODE_ENV production --FOO bar ] > bundle.js
In your application, process.env.DEBUG will be replaced by "app:*", process.env.NODE_ENV will be replaced by "production", and so on. This is a clean && elegant way to deal with this, in my opinion.
You can change your entry point file to basically do this setup and then require the original main file:
process.env.NODE_ENV = 'production';
require('./app.js');
The other way (imo much cleaner) is to use a transform like envify, which replaces NODE_ENV in your code with the string value directly.
Option 1
I think your approach should generally work, but I wouldn't write directly to process.env, since I am pretty sure it gets overwritten in the bundle. Instead you can make a global variable like __env and then, in the actual bundle code, assign it to process.env in your entry file. This is an untested solution, but I believe it should work.
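A sketch of that option (untested, as noted; __env is simply a name chosen here):

<!-- rendered into the page by the server -->
<script>window.__env = { NODE_ENV: 'production' };</script>

// entry file of the browserified bundle
process.env = window.__env || {};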
Option 2
Use localStorage and let your main script read variables from there upon initialization. You can set the variables in localStorage manually, or you can even let the server provide them if you have them there. A developer would just open the console and type something like loadEnv('production'); it would do an XHR and store the result in localStorage. Even with the manual approach, there is still the advantage that these don't need to be hard-coded in the HTML.
If manual doesn't sound good enough and the server is a dead end too, you could just include all variables from all environments (if you have them somewhere) in the bundle and then use a switch statement to choose the correct ones based on some condition (e.g. localhost vs. production host).
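That last idea might look like this in the bundle (a sketch; the hostnames and values are made up):

// all environments are baked in; the host decides which one applies
var env;
switch (window.location.hostname) {
  case 'localhost':
    env = { NODE_ENV: 'development' };
    break;
  case 'app.example.com': // hypothetical production host
    env = { NODE_ENV: 'production' };
    break;
}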
Thinking about this, you are definitely outside the scope of Browserify with your needs. It can make the bundle for you, but if you don't want this information in the bundle, you are on your own.
So I've decided it's the web server's job to insert the environment variables. My scenario required different loggers per environment (e.g. 'local', 'test', 'prod').
code:
var express = require('express')
  , replace = require('replace');
...
var app = express();
var fileToReplace = <your browserified js here>;

replace({
  regex: 'ENV_ENVIRONMENT'
  , replacement: process.env.ENVIRONMENT
  , paths: [fileToReplace]
});
...
app.listen(process.env.PORT);
I hardcoded 'ENV_ENVIRONMENT', but you could create an object in your package.json and make it configurable.
This certainly works, and it makes sense because it's possibly the only server entry point you have.
I had success writing to a JSON file first, then importing that JSON file anywhere that needed to read the environment.
So in my gulp file:
import settings from './settings';
import fs from 'fs';
...
fs.writeFileSync('./settings.json', JSON.stringify(settings));
In the settings.js file:
if (process.env.NODE_ENV) {
  console.log('Starting ' + process.env.NODE_ENV + ' environment...');
} else {
  console.log('No environment variable set.');
  process.exit();
}

export default (() => {
  let settings;
  switch (process.env.NODE_ENV) {
    case 'development':
      settings = {
        baseUrl: '...'
      };
      break;
    case 'production':
      settings = {
        baseUrl: 'some other url...'
      };
      break;
  }
  return settings;
})();
Then you can import the settings.json file in any other file, and it will be static but contain your current environment:
import settings from './settings.json';
...
console.log(settings.baseUrl);
I came here looking for a cleaner solution...good luck!
I ran into this problem building isomorphic React apps. I use the following (OK, it's a little hacky) solution:
I assign the env to the window object; of course I don't expose all env vars, only the ones that may be public (no secret keys or passwords and such).
// code...
const expose = ["ROOT_PATH", "ENDPOINT"];
const exposeEnv = expose.reduce(
  (exposeEnv, key) => Object.assign(exposeEnv, { [key]: env[key] }), {}
);
// code...
res.send(`<!DOCTYPE html>
  <!-- html... -->
  <script>window.env = ${JSON.stringify(exposeEnv)}</script>
  <!-- html... -->
`);
// code...
Then, in my application client's entry point (oh yeah, you have to have a single entry point) I do this:
process.env = window.env;
YMMV AKA WFM!
