I'm working on a refactor of my application. I'm using require at the top of a service class to get my sequelize models -- I have about 15 models.
For some reason, the models are an empty object unless I require them further down in my flow (moving the require statement inside a function call works, for example).
So for example, when the require is at the top, a statement like models.Foo.findOne() throws Cannot read property Foo of undefined.
Figured out I had a circular dependency -- essentially a model ended up dependent on a file that required models.
Sometimes I use code like the following to work around the problem:
(() => models.Foo)().findOne()
or
(() => Bar.sequelize.models.Foo)().findOne()
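For illustration, here's a minimal sketch of the lazy-require workaround mentioned above (the file and model names are placeholders, not my actual code):

// service.js -- sketch only; './models' and Foo are hypothetical names
function findFoo(id) {
  // Requiring here instead of at the top of the file means the module cache
  // entry for './models' is fully populated by the time this function runs,
  // so the circular dependency no longer hands us a partial (empty) object.
  const models = require('./models');
  return models.Foo.findOne({ where: { id: id } });
}

module.exports = { findFoo: findFoo };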
I'm working on this Ensime package for Atom.io (https://github.com/ensime/ensime-atom) and I've been thinking about the possibility of using Scala.js instead of writing CoffeeScript.
Atom is a web-based editor that is scripted with JavaScript and built on Node.js. A plugin/package defines its main entry point by pointing out a JavaScript object with a few specific properties.
I figured I should start out simple and try using Scala.js to replace the simplest CoffeeScript file I have:
{View} = require 'atom-space-pen-views'

# View for the little status messages down there where messages from Ensime server can be shown
module.exports =
class StatusbarView extends View
  @content: ->
    @div class: 'ensime-status inline-block'

  initialize: ->

  serialize: ->

  init: ->
    @attach()

  attach: =>
    statusbar = document.querySelector('status-bar')
    statusbar?.addLeftTile {item: this}

  setText: (text) =>
    @text("Ensime: #{text}").show()

  destroy: ->
    @detach()
As you can see, this exports a module, and the exported class extends a class that is itself fetched with require.
Sooo.
I'm thinking I'd just use js.Dynamic for the require dependency, as I've seen on SO in "How to invoke nodejs modules from scala.js?":
import js.Dynamic.{global => g}
import js.DynamicImplicits._

private[views] object SpacePen {
  private val spacePenViews = require("atom-space-pen-views")
  val view = spacePenViews.view
}
But if I wanted to type the super-class, could I just make a facade-trait and do asInstanceOf?
Secondly, I wonder how I can export my class as a node module. I found this:
https://github.com/rockymadden/scala-node/blob/master/main/src/main/coffeescript/example.coffee
Is this the right way? Do I need to do the sandboxing? Couldn't I just get module from the global scope and write module.exports = _some_scala_object_?
I'm also wondering how I could extend existing js classes. The same problem as asked here, but I don't really understand the answer:
https://groups.google.com/forum/#!topic/scala-js/l0gSOSiqubs
My code so far:
private[views] object SpacePen {
  private val spacePenViews = js.Dynamic.global.require("atom-space-pen-views")
  type View = spacePenViews.view
}

class StatusBarView extends SpacePen.View {
  override def content =
    super.div()
}
This gives me compile errors saying that I can't extend the sealed trait Dynamic. Of course.
Any pointers highly appreciated!
I'm not particularly expert in Node per se, but to answer your first question, yes -- if you have a pointer to a JS object, and you know the details of its type, you can pretty much always define a facade trait and asInstanceOf to use it. That ought to work.
As for the last bit, you basically can't extend JS classes in Scala.js -- it just doesn't work. The way most of us get around that is by defining implicit classes, or using implicit def's, to get the appearance of extending without actually doing so.
For example, given JS class Foo, I can write
implicit class RichFoo(foo: Foo) {
  def method1() = { ... }
}
This is actually a wrapper around Foo, but calling code can simply call foo.method1() without worrying about that detail.
You can see this approach in action very heavily in jquery-facade, particularly in the relationship between JQuery (the pure facade), JQueryTyped (some tweaked methods over JQuery to make them work better in Scala), and JQueryExtensions (some higher-level functions built around JQuery). These are held together using implicit def's in package.scala. As far as calling code is concerned, all of these simply look like methods on JQuery.
How do you require a file within itself in node.js? E.g. api.js:
var api = require('./api.js');
What is the best practice for doing this?
You can totally do it. Try this, for instance (in a file named a.js):
exports.foo = 'foo';
var a = require('./a');
console.log(a);
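// logs { foo: 'foo' } (only foo has been assigned at this point)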
exports.bar = 'bar';
console.log(a);
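// logs { foo: 'foo', bar: 'bar' } (same object, now with bar added)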
At the point where require executes, it will return module a as it exists at that moment, so the field foo will be defined but not bar.
There's no point to doing this though. You use require to bring into your current scope an object which would otherwise not be available (namely, a module). But you don't need to do this to access the module you are currently in: it is already fully available.
The code above works because Node has rules to handle cyclic dependencies. And here you have a module which is cyclically dependent on itself. Rather than go into an infinite loop of requires, Node has require return the module as built up to that point: a partial module. And the modules in a cyclic dependency have to be designed to handle the fact that they may get partial modules.
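To make the partial-module behavior concrete, here is a minimal two-file sketch (file names are illustrative):

// a.js (the entry point)
exports.fromA = 'A';
const b = require('./b');   // b.js runs to completion here
console.log(b.fromB);       // 'B'

// b.js
const a = require('./a');   // a.js is still mid-execution, so this is the
console.log(a.fromA);       // partial module: only fromA exists yet -> 'A'
exports.fromB = 'B';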
Cyclic dependencies are to be avoided as much as possible. Most of the time this means refactoring the modules to avoid the mutual dependency by moving functionality into one or more new modules.
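As a sketch of that kind of refactor (module and function names are made up), the shared piece moves into a third module that both files can require without forming a cycle:

// shared.js -- the functionality both modules needed from each other
exports.formatName = function (user) {
  return user.first + ' ' + user.last;
};

// a.js
const { formatName } = require('./shared');

// b.js
const { formatName } = require('./shared');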
So, again, the best practice is to not do this in the first place.
I'm starting out a long term project, based on Node.js, and so I'm looking to build upon a solid dependency injection (DI) system.
Although Node.js at its core implies using simple module require()s for wiring components, I find this approach not best suited for a large project (e.g. requiring modules in each file is not that maintainable, testable or dynamic).
Now, I'd done my bits of research before posting this question and I've found out some interesting DI libraries for Node.js (see wire.js and dependable.js).
However, for maximal simplicity and minimal repetition I've come up with my own proposal for implementing DI:
You have a module, di.js, which acts as the container and is initialized by pointing to a JSON file storing a map of dependency names and their respective .js files.
This already provides a dynamic nature to the DI, as you may easily swap test/development dependencies.
The container can return dependencies by using an inject() function, which finds the dependency mapping and calls require() with it.
For simplicity, the module is assigned to a global variable, i.e. global.$di, so that any file in the project may use the container/injector by calling $di.inject().
Here's the gist of the implementation:
File di.js
module.exports = function(path) {
  // keep the dependency map in the closure so inject() can see it
  var deps = require(path);
  return {
    inject: function(name) {
      if (!deps[name])
        throw new Error('dependency "' + name + '" isn\'t registered');
      return require(deps[name]);
    }
  };
};
Dependency map JSON file
{
  "vehicle": "lib/jetpack",
  "fuel": "lib/benzine",
  "octane": "lib/octane98"
}
Initialize the $di in the main JavaScript file, according to development/test mode:
var path = 'dep-map-' + process.env.NODE_ENV + '.json';
$di = require('./di')(path);
Use it in some file:
var vehicle = $di.inject('vehicle');
vehicle.go();
So far, the only problem I can think of with this approach is the global variable $di.
Supposedly, global variables are a bad practice, but it seems to me like I'm saving a lot of repetition for the cost of a single global variable.
What can be suggested against my proposal?
Overall this approach sounds fine to me.
The way global variables work in Node.js is that when you declare a variable without the var keyword, it gets added to the global object, which is shared between all modules. You can also explicitly use global.varname. Example:
vehicle = "jetpack"
fuel = "benzine"
console.log(vehicle) // "jetpack"
console.log(global.fuel) // "benzine"
Variables declared with var will only be local to the module.
var vehicle = "car"
console.log(vehicle) // "car"
console.log(global.vehicle) // "jetpack"
So in your code if you are doing $di = require('di')(path) (without var), then you should be able to use it in other modules without any issues. Using global.$di might make the code more readable.
Your approach is a clear and simple one which is good. Whether you have a global variable or require your module every time is not important.
Regarding testability, it allows you to replace your modules with mocks. For unit testing you should add a function that makes it easy to apply different mocks for each test, something that extends your dependency map temporarily.
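For example, here is a minimal sketch of such a helper built on the di.js container above; the override and restore functions are hypothetical additions, not part of the original code:

module.exports = function(path) {
  var deps = require(path);
  var overrides = {};   // test doubles registered per test
  return {
    inject: function(name) {
      if (overrides[name]) return overrides[name];
      if (!deps[name])
        throw new Error('dependency "' + name + '" isn\'t registered');
      return require(deps[name]);
    },
    // temporarily replace a dependency with a mock for the current test
    override: function(name, mock) {
      overrides[name] = mock;
    },
    // put the real mapping back after the test
    restore: function(name) {
      delete overrides[name];
    }
  };
};

// in a test:
// $di.override('fuel', { octaneRating: function() { return 98; } });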
For further reading I can recommend a great blog article on dependency injection in Node.js as well as a talk on the future dependency injector of angular.js which is designed by some serious masterminds.
BTW, you might be interested in Fire Up! which is a dependency injection container I implemented.
I'm refactoring a large javascript codebase to use RequireJS. Unfortunately, many of the files I'm dealing with are not object-oriented, and cannot return an object without significant modification. Is there a more efficient way to give 'dependent' modules access to the functions and variables contained in a module (without returning an object) ?
I have read about using the exports syntax for defining modules, but it is very unclear whether that would be a valid solution for this situation.
In a defined module, the exports object is what gets exported from the module and passed to whatever module requires it.
Consider this:
define(["exports"], function(exports) {
  exports.myCustomFunction = function() {};
  exports.myCustomObject = {};
  exports.myCustomVariable = true;
});
This module will place all the disparate functions and/or objects that you want made public onto the exports object.
At this point RequireJS will use that exports object to pass to a module that requires it:
require(["nameOfCustomModule|filename"], function(myCustomModule) {
  // evaluates to true
  console.log(myCustomModule.myCustomVariable);
});
Here's a simple fiddle. Just bring up your console and you will see the value logged there. http://jsfiddle.net/xeucv/
Hope this clears it up a bit!
I've recently started working on a non-trivial project in CoffeeScript and I'm struggling with how best to deal with registering exports etc. I'm writing it in a very 'pythonesque' manner, with individual files effectively being 'modules' of related classes and functions. What I'm looking for is the best way to define classes and functions locally AND in exports/window with as little repetition as possible.
At the moment, I'm using the following in every file, to save writing exports.X = X for everything in the file:
class module
  # All classes/functions to be included in exports should be defined with `@`
  # E.g.
  class @DatClass

exports[name] = item for own name, item of module
I've also looked at the possibility of using a function (say, publish) that puts the passed class in exports/window depending on its name:
publish = (f) ->
  throw new Error 'publish only works with named functions' unless f.name?
  ((exports ? window).namespace ?= {})[f.name] = f
publish class A
# A is now available in the local scope and in `exports.namespace`
# or `window.namespace`
This, however, does not work with plain functions because, as far as I know, they cannot be 'named' in CoffeeScript (e.g. f.name is always ''), so publish cannot determine the correct name.
Is there any method that works like publish but works with functions? Or any alternative ways of handling this?
It's an ugly hack, but you can use the following:
class module.exports
  class @foo
    @bar = 3
And then:
require(...).foo.bar // 3
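In plain JavaScript the trick above amounts to roughly the following sketch (the class names here are illustrative):

// roughly what the nested-class CoffeeScript compiles down to
class Exported {}
Exported.foo = class {};   // `class @foo` becomes a static property holding a class
Exported.foo.bar = 3;      // `@bar = 3` becomes a static property on foo

module.exports = Exported;

// consumer:
// require('./thisFile').foo.bar  // 3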
The old
(function (exports) {
  // my code
  exports.someLib = ...
})(typeof exports === "undefined" ? window : exports);
Is a neat trick that should do what you want.
If writing that wrapper boilerplate is a pain then automate it with a build script.
What I'm looking for is the best way to define classes and functions locally AND in exports/window with as little repetition as possible.
It's impossible to do something like
exports.x = var x = ...;
without writing x twice in JavaScript (without resorting to black magicks, i.e. eval), and the same goes for CoffeeScript. Bummer, I know, but that's how it is.
My advice would be to not get too hung up on it; that kind of repetition is common. But do ask yourself: "Do I really need to export this function or variable and make it locally available?" Cleanly decoupled code doesn't usually work that way.
There's an exception to the "no named functions" rule: classes. This works: http://jsfiddle.net/PxBgn/
exported = (clas) ->
  console.log clas.name
  window[clas.name] = clas

...

exported class Snake extends Animal
  move: ->
    alert "Slithering..."
    super 5