Scenario:
I want to keep a clean structure, something like
/org/myapp/events/Events.js
/org/myapp/events/EventDispatcher.js
/org/myapp/game/Game.js
/org/myapp/characters/Worker.js
/org/myapp/characters/Warrior.js
/org/myapp/characters/Elite.js
etc etc
Currently I'm using require and module.exports. It works, but I'm curious what the community is doing beyond what I've been reading.
Question:
What is a good way to organize a medium to large scale js application (30+ classes) while keeping a good handle on performance as well as overall organization?
Current Implementation
Based on my example above I would be doing
Events.js
class Events {
...
}
module.exports = Events;
Then for every class that uses Events
const Events = require("/org/myapp/events/Events.js");
What you're doing works fine, but personally I prefer to have each directory (e.g. /org/myapp/events) have an index.js file which exports the classes within the folder, like so:
// /org/myapp/events/index.js
module.exports = {
EventDispatcher: require('./EventDispatcher.js')
}
Then consuming code can require the whole directory once and then access the parts as needed:
const events = require('./events');
const dispatcher = new events.EventDispatcher();
Related
I currently have a project in a loose ES6 module format, and my database connection is hard-coded. I want to turn this into an npm module and am now facing the issue of how to best allow the end user to configure the code. My first attempt was to rewrite it as classes to be instantiated, but that made using the code more convoluted than before, so I am looking at alternatives. I am exploring my configuration options. It looks like writing to the process env would be the way to go, but I am pondering potential issues, no-nos and other options I have not considered.
Is having the user write config to process env an acceptable method of configuring an npm module? It's a bit like a global write, so I am dealing with namespace considerations, for one. I have also considered using package.json, but that's not going to work for things like credentials. Likewise, using an rc file is cumbersome. I have not found any docs on the proper methodology, if any.
process.env['MY_COOL_MODULE_DB'] = ...
There are basically 5ish options as I see it:
hardcode - not an option
create a configured scope such as classes - what I have now and bleh
use a config such as node-config - not really a user friendly option for npm
store as globals/env. As suggested in a comment, I can wrap that process in an exported function and thereby ensure that I have a distinct, non-colliding namespace while abstracting it from the end user (see the sketch after this list)
Ask user to create some .rc file - I would if I was big time like AWS but not in this case.
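For option 4, a minimal sketch of what such a wrapper might look like (the MY_COOL_MODULE_DB key is the one from the snippet above; setDb/getDb are hypothetical names):
// config.js - sketch: wrap the env write in exported functions so the
// namespaced variable name stays an internal detail of the module
const ENV_KEY = 'MY_COOL_MODULE_DB';

export function setDb(connectionString) {
  process.env[ENV_KEY] = connectionString;
}

export function getDb() {
  return process.env[ENV_KEY];
}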
I mention this npm use case, but this really applies to the general challenge of configuring code that is exported as functions. I have use cases for classes, but when the only need is creating a configured scope at the expense (in my case) of more complex code, I am not sure it's worth it.
Update: I realize this is a bit of a discussion question, but it has helped me wrap my brain around the options. I think something like this:
// options.js
let options = {}
export function setOptions(o) { options = o }
export function getOptions() { return options }
Then have the user call setOptions() and call getOptions() internally. I realize that since Node only loads the module once, my options object will stay configured as I pass it around.
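For instance, usage might look roughly like this (my-module is a placeholder for whatever the package ends up being called):
// consumer code, once at startup
import { setOptions } from 'my-module';
setOptions({ db: 'mongodb://localhost:27017/test' });

// inside the module, wherever the configuration is needed
import { getOptions } from './options';

export function connect() {
  const { db } = getOptions(); // whatever the consumer set earlier
  // ... open the connection using db
}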
NPM modules should IMO be agnostic as to where configuration is stored. That should be left up to the developer, and they may pick their favorite method (env vars, rc files, JSON files, whatever).
The configuration can be passed to your module in various ways. A common way is to export a function that takes an options object:
export default options => {
let db = database.connect(options.database);
...
}
From there, it really depends on what exactly your module provides. If it's just a bunch of loosely coupled functions, you can just return an object:
export default options => {
let db = database.connect(options.database);
return {
getUsers() { return db.getUsers() }
}
}
If you want to allow multiple versions of that object to exist simultaneously, you can use classes:
class MyClass {
constructor(options) {
...
}
...
}
export default options => {
return new MyClass(options)
}
Or export the entire class itself.
If the number of configuration options is limited (say three or fewer), you can also allow them to be passed as separate arguments, instead of passing an object.
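A rough sketch of that variant (host, port and name are made-up option names; database is the same placeholder as in the examples above):
export default (host, port, name) => {
  let db = database.connect({ host, port, name });
  // ... return whatever the module provides, as above
}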
I'm having trouble deciding/determining what the standard/best approach for passing dependencies over to my Express routes would be. The problem I'm encountering is that some of my libraries have bindings associated with them, e.g. my session manager is hooked up to my Redis queue client, and my logs have configuration settings mapping them accordingly, etc.
So here are my routes:
// Include routes and endpoints
app.use(require('./routes/account')(queue, models, log));
app.use(require('./routes/message')(queue, models, log));
app.use(require('./routes/admin')(queue, models, log));
As you can see, I'm passing in my dependencies as parameters. I haven't started writing my unit tests yet, but I suspect this will become more painful when I get there.
The other thing I've thought of doing is attaching the libraries to my request object as such:
req.log = log;
req.nconf = nconf;
req.sessions = sessions;
What I'm unsure of is: a) what the best/standard practice is, b) do any of these methods impact performance/memory usage, c) how will this impact my future unit testing.
Any insights on this would be great!
Thanks.
I would set them at the app level:
app.set('queue', queue)
Then pass your app to your routing functions:
app.use(require('./routes/admin')(app))
Then retrieve:
queue = app.get('queue')
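A rough sketch of what routes/admin.js could look like with this approach (the route path and the logger's info() method are just illustrative):
// routes/admin.js - export a function that takes the app and pulls its
// dependencies back out with app.get()
var express = require('express');

module.exports = function (app) {
  var router = express.Router();
  var queue = app.get('queue');
  var log = app.get('log');

  router.get('/admin/ping', function (req, res) {
    log.info('admin ping'); // use the injected logger
    res.send('pong');
  });

  return router;
};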
You might also consider requiring the libraries at the top of the file as needed.
So in the example with the log it would be something like:
var log = require('myLogger');
// or require('../path/to/logger') if it's your own module
Then you can go on to use the log in that file.
In the case of models, it's clearer to require just the one you need.
Ex:
// messageHandler.js
var Message = require('../models/message');
function sendMessage(params){
// do some logic on params
Message.send(params);
}
This works well for unit testing, because you can require things as needed, and also stub things by just changing which file you're requiring.
I hope this helps!
I have a Node top-level myapp variable that contains some key application state: loggers, db handlers and some other data. The modules downstream in the directory hierarchy need access to this data. How can I set up a key/value system in Node to do that?
A highly upvoted and accepted answer to "Express: How to pass app-instance to routes from a different file?" suggests using, in a lower-level module:
//in routes/index.js
var app = require("../app");
But this injects hard-coded knowledge of the directory structure and file names, which should be a bigger no-no IMHO. Is there some other method, like something native in JavaScript? Nor do I relish the idea of declaring variables without var.
What is the node way of making a value available to objects created in lower scopes? (I am very much new to node and all-things-node aren't yet obvious to me)
Thanks a lot.
Since using Node's global (docs here) seems to be the solution the OP used, I thought I'd add it as an official answer to collect my valuable points.
I strongly suggest that you namespace your variables, so something like
global.myApp.logger = { /* logger info here */ }
global.myApp.db = {
url: 'mongodb://localhost:27017/test',
connectOptions : {}
}
If you are in app.js and just want to allow access to it
global.myApp = this;
As always, use globals with care...
This is not really related to Node but rather to general software architecture decisions.
When you have client and server modules/packages/classes (call them whichever way you like), one way is to define routines on the server module that take as arguments whichever state data your client keeps in the 'global' scope, complete their tasks and report back to the client with results.
This way, it is perfectly decoupled and you have strict control of what data goes where.
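A minimal sketch of that idea (processOrder and the db/logger shapes are purely illustrative, and myapp is the state object from the question):
// server.js - the server routine receives the state it needs as arguments
// instead of reaching for globals
module.exports.processOrder = function (order, db, logger, callback) {
  logger.info('processing order');
  db.save(order, function (err, result) {
    callback(err, result); // report back to the caller with the outcome
  });
};

// client.js - the caller passes its own state in and gets results back
var server = require('./server');
var order = { id: 1 };
server.processOrder(order, myapp.db, myapp.logger, function (err, result) {
  // handle the result here
});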
Hope this helps :)
One way to do this is with an anonymous function: instead of returning an object with module.exports, return a function that returns an appropriate value.
So, let's say we want to pass var1 down to our two modules, ./module1.js and ./module2.js. This is how the module code would look:
module.exports = function(var1) {
return {
doSomething: function() { return var1; }
};
}
Then, we can call it like so:
var downstream = require('./module1')('This is var1');
Giving you exactly what you want.
I just created an empty module and installed it under node_modules as appglobals:
// index.js
module.exports = {};
// package.json too is barebones
{ "name": "appGlobals" }
And then strut it around without fearing refactoring in the future:
var g = require("appglobals");
g.foo = "bar";
I wish it came built in with setters/getters, but the flexibility has to be admired.
(Now I only need to figure out how to package it for production)
I'm starting out a long term project, based on Node.js, and so I'm looking to build upon a solid dependency injection (DI) system.
Although Node.js at its core implies using simple module require()s for wiring components, I find this approach not best suited for a large project (e.g. requiring modules in each file is not that maintainable, testable or dynamic).
Now, I'd done my bits of research before posting this question and I've found out some interesting DI libraries for Node.js (see wire.js and dependable.js).
However, for maximal simplicity and minimal repetition I've come up with my own proposition of implementing DI:
You have a module, di.js, which acts as the container and is initialized by pointing to a JSON file storing a map of dependency names and their respective .js files.
This already provides a dynamic nature to the DI, as you may easily swap test/development dependencies.
The container can return dependencies by using an inject() function, which finds the dependency mapping and calls require() with it.
For simplicity, the module is assigned to a global variable, i.e. global.$di, so that any file in the project may use the container/injector by calling $di.inject().
Here's the gist of the implementation:
File di.js
module.exports = function(path) {
    var deps = require(path);
    return {
        inject: function(name) {
            if (!deps[name])
                throw new Error('dependency "' + name + '" isn\'t registered');
            return require(deps[name]);
        }
    };
};
Dependency map JSON file
{
"vehicle": "lib/jetpack",
"fuel": "lib/benzine",
"octane": "lib/octane98"
}
Initialize the $di in the main JavaScript file, according to development/test mode:
var path = 'dep-map-' + process.env.NODE_ENV + '.json';
$di = require('di')(path);
Use it in some file:
var vehicle = $di.inject('vehicle');
vehicle.go();
So far, the only problem I could think of using this approach is the global variable $di.
Supposedly, global variables are a bad practice, but it seems to me like I'm saving a lot of repetition for the cost of a single global variable.
What can be suggested against my proposal?
Overall this approach sounds fine to me.
The way global variables work in Node.js is that when you declare a variable without the var keyword, it gets added to the global object, which is shared between all modules. You can also explicitly use global.varname. Example:
vehicle = "jetpack"
fuel = "benzine"
console.log(vehicle) // "jetpack"
console.log(global.fuel) // "benzine"
Variables declared with var will only be local to the module.
var vehicle = "car"
console.log(vehicle) // "car"
console.log(global.vehicle) // "jetpack"
So in your code if you are doing $di = require('di')(path) (without var), then you should be able to use it in other modules without any issues. Using global.$di might make the code more readable.
Your approach is a clear and simple one which is good. Whether you have a global variable or require your module every time is not important.
Regarding testability, it allows you to replace your modules with mocks. For unit testing, you should add a function that makes it easy to apply different mocks for each test: something that extends your dependency map temporarily.
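For example, the container sketched in the question could expose an override() helper that tests use to temporarily swap a mapping (override/restore are hypothetical names, not part of any library):
// di.js - a possible test-friendly extension of the container above
module.exports = function(path) {
    var deps = require(path);
    var overrides = {};
    return {
        inject: function(name) {
            if (overrides[name]) return overrides[name]; // a mock wins if present
            if (!deps[name])
                throw new Error('dependency "' + name + '" isn\'t registered');
            return require(deps[name]);
        },
        // tests can install a mock object directly...
        override: function(name, mock) { overrides[name] = mock; },
        // ...and remove it again when they are done
        restore: function(name) { delete overrides[name]; }
    };
};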
For further reading I can recommend a great blog article on dependency injection in Node.js as well as a talk on the future dependency injector of angular.js which is designed by some serious masterminds.
BTW, you might be interested in Fire Up! which is a dependency injection container I implemented.
Say I create a library in ./libname which contains one main file, main.js, and multiple optional library files that are occasionally used with the main object: a.js and b.js.
I create an index.js file with the following:
exports.MainClass = require('./main.js').MainClass; // shortcut
exports.a = require('./a');
exports.b = require('./b');
And now I can use the library as follows:
var lib = require('./libname');
lib.MainClass;
lib.a.Something; // Here I need the optional utility object
lib.b.SomeOtherThing;
However, that means I always load a.js and b.js, not only when I really need them.
Sure I can manually load the optional modules with require('./libname/a.js'), but then I lose the pretty lib.a dot-notation :)
Is there a way to implement on-demand loading with some kind of index file? Maybe, some package.json magic can play here well?
This may be possible if you have the "MainClass" dynamically load the additional modules on demand. But I suspect this would also mean an adjustment in the API used to access the modules.
I am guessing your motivation is to avoid the extra processing used by modules that aren't required.
But remember Node is single-threaded, so the memory footprint of loading a module is not per-connection, it's per-process. Loading a module is a one-off to get it into memory.
In other words, the modules are only ever loaded when you start your server, not every time someone makes a request.
I think you're looking at this from client-side programming, where it's advantageous to load scripts only when they are required, to save on both processing and HTTP requests.
On the server, the most you will be saving is a few extra bytes in memory.
Looks like the only way is to use getters. In short, like this:
module.exports = {
    MainClass: require('./main.js').MainClass,
    get a() { return require('./a.js'); },
    get b() { return require('./b.js'); }
};
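With that in place, the consuming code keeps the dot-notation but only pays the require() cost on first access, e.g.:
var lib = require('./libname'); // only main.js is loaded here
lib.MainClass;                  // already loaded
lib.a.Something;                // a.js is require()d on this first access
                                // (and cached by Node for later uses)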