Creating a yieldable Node Module/Object

I am trying to create a Node module (using harmony) that, upon loading by another module/application, has to be yielded to so that things in its constructor can be executed and loaded before any of its exposed functions can be called.
The issue I am having is that I cannot seem to yield to the internal function that is being executed via module.exports. An example will help:
module.exports = function* (s_id) {
    console.log('loading the module lets it execute up till here');
    if (!(this instanceof Tester)) return yield new Tester();
}

function* Tester() {
    console.log('but we never execute this generator function');
}

Tester.prototype = {
    model: function* () {
        // other functions
    }
}
It's been stumping me for hours now! I feel like the solution is super simple, but I cannot seem to wrap my head around it. I have tried simply making the Tester() function the export, but I still have the same issue. Why can't I seem to yield to the Tester() function?
Also, what may an alternative be to this approach? I want to maintain the Object nature of the module so that the module can be loaded with different inputs, such as the s_id variable/object in the example above.

a Node module (using harmony) that, upon loading by another module/application, has to be yielded to so that things in its constructor can be executed and loaded before any of its exposed functions can be called
Don't do that. Generators are not made for asynchrony, and yield doesn't do what you want here. You don't "yield to" a module to wait for something in it to load. yield is magic, but not async magic.
If you must use an asynchronous module loading process, export a promise for your actual module. That is a standard interface for something to await, and can be consumed using a standardized approach that does not rely on internals of your module.
You still can use yield syntax for constructing that promise, just use your favorite coroutine library.
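For instance, here is a minimal sketch of that pattern, assuming the co library as the coroutine library (loadConfig is a hypothetical async setup step):

var co = require('co');

// co.wrap turns a generator function into a function that returns a promise.
module.exports = co.wrap(function* (s_id) {
    // Run any asynchronous setup here, before the API is exposed.
    // var config = yield loadConfig(s_id); // loadConfig is hypothetical
    return {
        model: function () { /* safe to call: setup has finished */ }
    };
});

A consumer then awaits the promise instead of trying to yield the module itself:

require('./tester')('some-id').then(function (tester) {
    tester.model(); // the module is fully loaded here
});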
return yield new Tester();
…
function* Tester(){…}
Uh oh. Well yes, apparently it is possible to call generator functions as constructors. But believe me, it is not what you want. A constructor for an arbitrary object should return that object, not an iterator (just like it shouldn't return a promise). If some of your object methods are generator methods, that's fine, but your constructor should not be one.
If you really want to use a generator function in your code like that (and it's not meant to be a constructor), you:
- will need to yield* the iterator you've created (tester()) instead of yielding it, as shown in the sketch below
- must not overwrite its .prototype property like you did, as that causes the iterator to malfunction. (Of course you should never do that at all, even though most of the time it works.)
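A minimal sketch of that yield* delegation, with Tester renamed to tester to reflect that it is not a constructor:

function* tester() {
    console.log('now this line actually runs');
}

module.exports = function* (s_id) {
    // Delegate to the inner generator instead of constructing an instance.
    yield* tester();
};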

Related

synchronous require so entire Node.js script isn't in callback

When using WebAssembly, there is a callback for onRuntimeInitialized(). You basically can't do anything until it happens.
So if you have a library that is implemented in it, you have to say:
var mylib = require('mylib')
mylib.onRuntimeInitialized = function() {
    ...
    // Anything that wants to use *anything* from mylib
    // (doesn't matter if it's synchronous or asynchronous)
    ...
}
On the plus side, you're not making Node wait to do any initialization that might not rely on mylib... so other modules can be doing fetches or whatever they need. On the negative side, it's pretty bad ergonomics, especially if everything you're doing depends on this library.
One possibility might seem to be to fold the initialization waiting into a promise, and then wait on it:
var mylib = require('mylib')
await mylib.Startup()
But people apparently write about how much they don't like the idea of top-level await. And my opinion on it is fairly irrelevant either way, as it's not allowed. :-/
So is there really no way to hold up code at the top level besides wrapping the whole app in a callback?
One thing to note with Node is that require() returns the same object no matter which file it is called from. Order does matter, but it will be the same object in all files.
So in your main index.js you could do something like
var myLib = require('mylib')
myLib.libby = myLib.initialize()
and then in another file, doesStuff.js, you can do:
const libby = require('mylib').libby

module.exports = function doStuff() {
    /* do stuff with initialized libby object */
}
Typically the way this works is that the function in doesStuff.js is not called until everything is initialized, say when a web route is handled. So your server is already running, and libby will be initialized and ready to use by the time it's called.
If you have something that absolutely cannot fail (say, the server must not run unless the DB connection succeeds), then waiting is appropriate; in that case, yes, you'd need to wrap everything (at least the core of your actions) in a callback so your server knows when it's safe to start.
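A minimal sketch of that gating, with hypothetical names (mylib.doSomething stands in for whatever the library actually exposes):

var mylib = require('mylib');
var http = require('http');

mylib.onRuntimeInitialized = function () {
    // Everything that depends on mylib lives inside this callback.
    http.createServer(function (req, res) {
        res.end(mylib.doSomething()); // safe: the runtime is initialized
    }).listen(3000);
};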

Jest how to assert that function is not called

In Jest there are functions like toBeCalled or toBeCalledWith to check if a particular function is called.
Is there any way to check that a function is not called?
Just use not.
expect(mockFn).not.toHaveBeenCalled()
See the Jest documentation.
not did not work for me, throwing an Invalid Chai property: toHaveBeenCalled error.
But using toHaveBeenCalledTimes with zero does the trick:
expect(mock).toHaveBeenCalledTimes(0)
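For example, a minimal self-contained test combining both assertions:

test('callback is not called for an empty array', () => {
    const callback = jest.fn();
    [].forEach(callback);
    expect(callback).not.toHaveBeenCalled();
    expect(callback).toHaveBeenCalledTimes(0); // equivalent assertion
});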
Recent versions of Jest (22.x and onwards) collect quite decent statistics of mock function calls; just check out their docs.
The mock property gives you the number of calls, the arguments passed to the mock on each call, the results it returned, and so on. You can access it directly, as a property of the mock (e.g. in the way @Christian Bonzelet suggested in his answer):
// The function was called exactly once
expect(someMockFunction.mock.calls.length).toBe(1);
// The first arg of the first call to the function was 'first arg'
expect(someMockFunction.mock.calls[0][0]).toBe('first arg');
// The second arg of the first call to the function was 'second arg'
expect(someMockFunction.mock.calls[0][1]).toBe('second arg');
I personally prefer this way, as it gives you more flexibility and keeps the code cleaner in case you test for different inputs that produce a different number of calls.
However, you can also use the shorthand aliases for Jest's expect that were added recently (see the spy matchers aliases PR). I guess .toHaveBeenCalledTimes would suit fine here:
test('drinkEach drinks each drink', () => {
    const drink = jest.fn();
    drinkEach(drink, ['lemon', 'octopus']);
    expect(drink).toHaveBeenCalledTimes(2); // or check for 0 if needed
});
In rare cases, you might even want to consider writing your own fixture that'd do the counting. It could be useful if you're heavy on conditioning or working with state, for example.
Hope this helps!
Please follow the documentation from Jest:
https://jestjs.io/docs/en/mock-functions#mock-property
All mock functions have this special .mock property, which is where data about how the function has been called and what the function returned is kept. The .mock property also tracks the value of this for each call, so it is possible to inspect this as well: [...]
These mock members are very useful in tests to assert how these functions get called, instantiated, or what they returned:
// The function was called exactly once
expect(someMockFunction.mock.calls.length).toBe(1);
Or...
// The function was not called
expect(someMockFunction.mock.calls.length).toBe(0);

Is it ever OK to write an NPM module that sets module.exports to a generator function?

If you want to publish a module that has sequenced IO, is it ever OK to write,
./sequenced_actions.js
module.exports = function * () {}
Thereby permitting something like,
co(function* () {
    yield require('./sequenced_actions');
})();
If you want your modules to reach the largest audience possible, just write them with promises. Hopefully Node v0.12 will have native promises, which will make things easier.
Yes, it's okay to do that.
A generator function is just an ordinary function under the hood. And since Node.js allows an arbitrary value to be the exports object of a module, you can export whatever you want there.
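A quick sketch illustrating the point:

// A generator function is an ordinary function value, so it can be
// assigned to module.exports like any other value:
module.exports = function* sequencedActions() {
    yield Promise.resolve('step one');
    yield Promise.resolve('step two');
};

console.log(typeof module.exports); // 'function'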

Node.js + Lazy: Iterate file lines

I'm trying to lazily read a file; however, I can't use each().
I want to read the first line of a file, then the first of another file, and so on.
I'm trying to use the Iterator, but with no success.
This is my code:
var Lazy = require('lazy.js');

var it = Lazy.readFile("log.txt")
    .lines()
    .getIterator();

while (it.moveNext()) {
    console.log(it.current());
}
Lazy.readFile("log.txt").lines().size() returns 0.
However, this works fine:
Lazy.readFile("log.txt")
.lines()
.each(function(line){
console.log(line);
});
This is a part of Lazy.js that I admittedly haven't done a great job of explaining. Let me copy a snippet from the current documentation for the getIterator method here:
This method is used when asynchronously iterating over sequences. Any
type inheriting from Sequence must implement this method or it can't
support asynchronous iteration.
Note that this method is not intended to be used directly by
application code. Rather, it is intended as a means for implementors
to potentially define custom sequence types that support either
synchronous or asynchronous iteration.
The issue you've run into here is that Lazy.readFile returns an asynchronous sequence. So getIterator will not work, because the Iterator type only exposes a synchronous interface.
I've actually updated Lazy.js since you posted this question; as of 0.3.2, calling getIterator on an async sequence will throw an exception.
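If the underlying goal is to read lines from two files in alternation, one workaround outside Lazy.js is to buffer each file's lines asynchronously and interleave them once both reads finish. A minimal sketch using Node's readline module ('other.txt' is a hypothetical second file):

var fs = require('fs');
var readline = require('readline');

// Read every line of a file into an array, then invoke the callback.
function readLines(path, callback) {
    var lines = [];
    var rl = readline.createInterface({ input: fs.createReadStream(path) });
    rl.on('line', function (line) { lines.push(line); });
    rl.on('close', function () { callback(lines); });
}

readLines('log.txt', function (a) {
    readLines('other.txt', function (b) {
        for (var i = 0; i < Math.max(a.length, b.length); i++) {
            if (i < a.length) console.log(a[i]);
            if (i < b.length) console.log(b[i]);
        }
    });
});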

Models with dependency injection in Nodejs

What is the best practice for injecting dependencies into models? And especially, what if their getters are asynchronous, as with mongodb.getCollection()?
The point is to inject dependencies once with
var model = require('./model')({dep1: foo, dep2: bar});
and then call all member methods without having to pass the dependencies as arguments. Nor do I want each method to begin with a waterfall of async getters.
I ended up with a dedicated exports wrapper that proxies all calls and passes the async dependencies along.
However, this creates a lot of overhead, it's quite repetitive, and I generally do not like it.
var Entity = require('./entity');

function findById(id, callback, collection) {
    // ...
    // callback(null, Entity(...));
}

module.exports = function(di) {
    function getCollection(callback) {
        di.database.collection('users', callback);
    }

    return {
        findById: function(id, callback) {
            getCollection(function(err, collection) {
                findById(id, callback, collection);
            });
        },
        // ... more methods, all expecting `collection`
    };
};
What is the best practice for injecting dependencies, especially those with async getters?
If your need is to support unit testing, dependency injection in a dynamic language like JavaScript is probably more trouble than it's worth. Note that just about none of the modules you require from others are likely to use the DI patterns you see in Java, .NET, and other statically compiled languages.
If you want to mock out behavior in order to isolate specific units of code for testing, see the sinon module (http://sinonjs.org/). It allows you to dynamically swap in/out interceptors that can either spy on method calls or replace them altogether. In practice, you would write a mocha test where you require your module, then require a module that's leveraged in your code. Use sinon to spy on or stub a method of that module, and as a result you can isolate your code.
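A minimal sketch of that pattern, with hypothetical module names (./db is the dependency, ./users the code under test):

var sinon = require('sinon');
var db = require('./db');       // hypothetical dependency module
var users = require('./users'); // hypothetical module under test

describe('users.findById', function () {
    it('delegates to db.query', function (done) {
        // Replace db.query so the test never touches a real database.
        var stub = sinon.stub(db, 'query').yields(null, { id: 1 });
        users.findById(1, function (err, user) {
            sinon.assert.calledOnce(stub);
            stub.restore();
            done();
        });
    });
});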
There is one scenario where I've not been able to completely isolate 3rd party code with sinon, and this is when the act of require()ing a module executes some behavior that you don't want to run in your test. For that scenario, I made a super simple module called 'mockrequire' https://github.com/mateodelnorte/mockrequire that allows you to provide an inline mock to be required instead of the actual module. You can provide a mock that uses spy or stub from sinon and have the same syntax and patterns as all the rest of your tests.
Hopefully this answers the underlying question from your post. ;)
In very simple situations, you could simply export a function that modifies objects in your file scope and returns your actual exports object, but if you want to inject more variably (i.e. for more than one use from your app) it's generally better to create a wrapper object like you have done.
You can reduce some overhead and indentation in some situations by using a wrapper class instead of a function returning an object.
For instance
function findById(id, callback, collection) {
    // ...
    // callback(null, Entity(...));
}

function Wrapper(di) {
    this.di = di;
}

module.exports = Wrapper; // or do 'new' usage in a function if preferred

Wrapper.prototype.findById = function (id, callback) {
    // use this.di to call findById and getCollection
};
// etc.
Other than that, there's not a whole lot you can do to improve things. I like this approach, though: it keeps the state (di) explicit and separate from the function body of findById, and by using a class you reduce the nesting of indentation a little bit at least.
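Consumption then looks roughly like this (db here is a hypothetical database handle):

var Model = require('./model'); // the Wrapper class above
var model = new Model({ database: db });

model.findById(42, function (err, entity) {
    // use the entity
});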
