Is there a way to test functions with Jasmine without exposing them? - node.js

I have a utils.js file with some generic utility functions. I use module.exports to expose some of the functions while keeping the rest hidden.
module.exports = {
  utils_1: function_1,
  utils_2: function_2
};
Suppose there are other functions, namely function_3 and function_4, that I would not like to expose but still want to test with Jasmine. The only way I can currently do this is by exposing them with the rest of the functions and writing my test suites as usual.
Is there a way to test some functions via Jasmine while not exposing them?
The only thing I can think of is creating a separate JavaScript file that contains (and exposes) function_3 and function_4, requiring that file in my utils.js, and testing the new file with Jasmine, but this would split my original file in two (and the same would apply to all other similar files).

function_3 and function_4 are used inside function_1 and function_2, right? So do you really need to unit test them? I would not, but other people might think the contrary.
Here is my example:
// utils.js
function function_3(obj) {
  obj.a = 3;
  return obj;
}

function function_4(obj) {
  obj = function_3(obj); // invoke function_3
  obj.b = 4;
  return obj;
}

module.exports = { utils_4: function_4 };
The test:
const { utils_4 } = require('./utils');

describe('test 1', () => {
  it('should pass', () => {
    const obj = utils_4({});
    expect(obj.a).toBeDefined();
    expect(obj.a).toEqual(3);
    expect(obj.b).toBeDefined();
    expect(obj.b).toEqual(4);
  });
});
It should pass, right? See that I only test function_4, but if I change the implementation of function_3 in a way that breaks my test, the test will detect it.
If we change function_3 to
function function_3(obj) {
  obj.c = 3;
  return obj;
}
The test will fail, because obj.a is no longer defined.
This is an awful example, but it is just illustrative - take the good part of it. Again, other people might think the contrary and go with an approach that tests function_3 directly. Use a coverage library to see what is covered by your tests.
Hope it helps

Your private / unexposed functions are used within your exported functions, are they not? That way, they are tested implicitly as soon as you write all the test cases for your public / exposed functions.
Exposing functions or writing tests on private functions is missing the goal of unit tests: Testing the public interface.
Think of private / unexposed methods as implementation details. You should not write tests for those, as the private methods are abstracting the implementation detail. It becomes tedious and over-complicated to have tests break because some internal behavior changed slightly.
You should be able to refactor or rewrite private functions at a whim; as long as the public interface is fulfilled, your tests shall remain green.
It may make sense to create a new module for your function_3 and function_4, depending on what they are doing for you. If I discover that I want to test something internally, that's a sign to make it into its own module, with its own public interface.
So I'd say that your intention of moving the functions into a different JavaScript file is actually the right idea. (Yet you may realize that here only function_3 needs to be exposed, whereas function_4 can still be hidden).
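As a rough sketch of that split (the file name helpers.js is just illustrative), the internal functions get their own module with their own public interface, and utils.js consumes it:
// helpers.js - formerly internal to utils.js, now its own module
function function_3(obj) { /* ... */ return obj; }
function function_4(obj) { /* ... */ return obj; }
module.exports = { function_3: function_3, function_4: function_4 };

// utils.js - still exposes only utils_1 and utils_2
const helpers = require('./helpers');
function function_1(obj) { return helpers.function_3(obj); }
function function_2(obj) { return helpers.function_4(obj); }
module.exports = { utils_1: function_1, utils_2: function_2 };
A spec file can then require ./helpers directly and test function_3 and function_4 through that module's public interface, while the exports of utils.js stay unchanged.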

Related

how to pass a shared variable to downstream modules?

I have a node top-level myapp variable that contains some key application state - loggers, db handlers and some other data. The modules downstream in the directory hierarchy need access to this data. How can I set up a key/value system in node to do that?
A highly upvoted and accepted answer in Express: How to pass app-instance to routes from a different file? suggests using, in a lower-level module
//in routes/index.js
var app = require("../app");
But this injects hard-coded knowledge of the directory structure and file names, which should be a bigger no-no IMHO. Is there some other method, like something native in JavaScript? Nor do I relish the idea of declaring variables without var.
What is the node way of making a value available to objects created in lower scopes? (I am very much new to node and all-things-node aren't yet obvious to me)
Thanks a lot.
Since using node's global (docs here) seems to be the solution the OP used, I thought I'd add it as an official answer to collect my valuable points.
I strongly suggest that you namespace your variables, so something like
global.myApp = global.myApp || {}; // ensure the namespace object exists
global.myApp.logger = console;     // or whatever logger instance you use
global.myApp.db = {
  url: 'mongodb://localhost:27017/test',
  connectOptions: {}
};
If you are in app.js and just want to allow access to it
global.myApp = this;
As always, use globals with care...
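Any module loaded after that assignment can then read the shared state without knowing where app.js lives, for example (file name is just illustrative):
// routes/index.js - runs after app.js has populated global.myApp
var logger = global.myApp.logger;
var db = global.myApp.db;
logger.info('connecting to ' + db.url);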
This is not really related to node but rather general software architecture decisions.
When you have client and server modules/packages/classes (call them whatever you like), one way is to define routines on the server module that take as arguments whichever state data your client keeps in the 'global' scope, complete their tasks, and report back to the client with the results.
This way it is perfectly decoupled, and you have strict control over what data goes where.
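A minimal sketch of that idea (all names here are made up for illustration): the server module exports a routine that receives the state it needs and reports back through a callback, instead of reaching into any global scope:
// server.js
module.exports.runTask = function (logger, db, callback) {
  logger.info('working against ' + db.url);
  callback(null, { done: true });
};

// client.js
var server = require('./server');
server.runTask(appState.logger, appState.db, function (err, result) {
  // handle the result here
});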
Hope this helps :)
One way to do this is with a factory function - i.e. instead of assigning an object to module.exports, export a function that returns an appropriate value.
So, let's say we want to pass var1 down to our two modules, ./module1.js and ./module2.js. This is how the module code would look:
module.exports = function(var1) {
  return {
    doSomething: function() { return var1; }
  };
};
Then, we can call it like so:
var downstream = require('./module1')('This is var1');
Giving you exactly what you want.
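The same factory pattern covers the second module mentioned above; each downstream module receives var1 explicitly when it is required (assuming ./module2.js exports a factory of the same shape):
var module1 = require('./module1')('This is var1');
var module2 = require('./module2')('This is var1');
console.log(module1.doSomething()); // 'This is var1'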
I just created an empty module and installed it under node_modules as appglobals.js
// index.js
module.exports = {};
// package.json too is barebones
{ "name": "appGlobals" }
And then strut it around without fearing refactoring in the future:
var g = require("appglobals");
g.foo = "bar";
I wish it came built in as setter/getter, but the flexibility has to be admired.
(Now I only need to figure out how to package it for production)
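Because Node caches modules, every file that requires the package gets the same object back, so a value set in one place is visible everywhere else (assuming the assignment above has already run):
// some other module, loaded later
var g = require("appglobals");
console.log(g.foo); // 'bar'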

Mockito isNotNull passes null

Thanks in advance for the help -
I am new to mockito but have spent the last day looking at examples and the documentation but haven't been able to find a solution to my problem, so hopefully this is not too dumb of a question.
I want to verify that deleteLogs() calls deleteLog(Path) NUM_LOGS_TO_DELETE number of times, per path marked for delete. I don't care what the path is in the mock (since I don't want to go to the file system, cluster, etc. for the test) so I verify that deleteLog was called NUM_LOGS_TO_DELETE times with any non-null Path as a parameter. When I step through the execution however, deleteLog gets passed a null argument - this results in a NullPointerException (based on the behavior of the code I inherited).
Maybe I am doing something wrong, but verify and the use of isNotNull seem pretty straightforward... Here is my code:
MonitoringController mockController = mock(MonitoringController.class);
// Call the function whose behavior I want to verify
mockController.deleteLogs();
// Verify that mockController called deleteLog the appropriate number of times
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(isNotNull(Path.class));
Thanks again
I've never used isNotNull for arguments, so I can't really say what's going wrong with your code - I always use an ArgumentCaptor. Basically you tell it what type of arguments to look for, it captures them, and then after the call you can assert the values you were looking for. Give the below code a try:
ArgumentCaptor<Path> pathCaptor = ArgumentCaptor.forClass(Path.class);
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(pathCaptor.capture());

for (Path path : pathCaptor.getAllValues()) {
    assertNotNull(path);
}
As it turns out, isNotNull is a method that returns null, and that's deliberate. Mockito matchers work via side effects, so it's more-or-less expected for all matchers to return dummy values like null or 0 and instead record their expectations on a stack within the Mockito framework.
The unexpected part of this is that your MonitoringController.deleteLog is actually calling your code, rather than calling Mockito's verification code. Typically this happens because deleteLog is final: Mockito works through subclasses (actually dynamic proxies), and because final prohibits subclassing, the compiler basically skips the virtual method lookup and inlines a call directly to the implementation instead of Mockito's mock. Double-check that methods you're trying to stub or verify are not final, because you're counting on them not behaving as final in your test.
It's almost never correct to call a method on a mock directly in your test; if this is a MonitoringControllerTest, you should be using a real MonitoringController and mocking its dependencies. I hope your mockController.deleteLogs() is just meant to stand in for your actual test code, where you exercise some other component that depends on and interacts with MonitoringController.
Most tests don't need mocking at all. Let's say you have this class:
class MonitoringController {
    private List<Log> logs = new ArrayList<>();

    public void deleteLogs() {
        logs.clear();
    }

    public int getLogCount() {
        return logs.size();
    }
}
Then this would be a valid test that doesn't use Mockito:
@Test public void deleteLogsShouldReturnZeroLogCount() {
    MonitoringController controllerUnderTest = new MonitoringController();
    controllerUnderTest.logSomeStuff(); // presumably you've tested elsewhere
                                        // that this works
    controllerUnderTest.deleteLogs();
    assertEquals(0, controllerUnderTest.getLogCount());
}
But your monitoring controller could also look like this:
class MonitoringController {
    private final LogRepository logRepository;

    public MonitoringController(LogRepository logRepository) {
        // By passing in your dependency, you have made the creator of your class
        // responsible. This is called "Inversion-of-Control" (IoC), and is a key
        // tenet of dependency injection.
        this.logRepository = logRepository;
    }

    public void deleteLogs() {
        logRepository.delete(RecordMatcher.ALL);
    }

    public int getLogCount() {
        return logRepository.count(RecordMatcher.ALL);
    }
}
Suddenly it may not be so easy to test your code, because it doesn't keep state of its own. To use the same test as the above one, you would need a working LogRepository. You could write a FakeLogRepository that keeps things in memory, which is a great strategy, or you could use Mockito to make a mock for you:
@Test public void deleteLogsShouldCallRepositoryDelete() {
    LogRepository mockLogRepository = Mockito.mock(LogRepository.class);
    MonitoringController controllerUnderTest =
        new MonitoringController(mockLogRepository);
    controllerUnderTest.deleteLogs();
    // Now you can check that your REAL MonitoringController calls
    // the right method on your MOCK dependency.
    Mockito.verify(mockLogRepository).delete(Mockito.eq(RecordMatcher.ALL));
}
This shows some of the benefits and limitations of Mockito:
You don't need the implementation to keep state any more. You don't even need getLogCount to exist.
You can also skip creating the logs, because you're testing the interaction, not the state.
You're more tightly-bound to the implementation of MonitoringController: You can't simply test that it's holding to its general contract.
Mockito can stub individual interactions, but getting them consistent is hard. If you want your LogRepository.count to return 2 until you call delete, then return 0, that would be difficult to express in Mockito. This is why it may make sense to write fake implementations to represent stateful objects and leave Mockito mocks for stateless service interfaces.

testing a function with jasmine in node

I'm still fairly new to testing, and am trying to figure out how to test a function with Jasmine in node. Up to now I've only tested methods on my object.
I have a node.js module which only exports two of my functions, but I need to be able to test the other functions without exporting them, as I don't want them to be public.
This is some example code, as the real code isn't important
function initialize(item) {
  // do some initializing
  return item;
}

function update(x) {
  if (!this.initialized) {
    initialize(this);
  }
  this.value = x;
  return this;
}

module.exports.update = update;
How would I write a test for the initialize function, without using the update method?
Is there a way to do this? Or does everything have to be a part of an object, and then I only export the parts that I need?
By design, functions you don't export are not accessible from outside; tests that require your module only see its exports, so you can test the initialize function only by calling the update function. Since the module initializes just once, you may clear the require cache before each test to require the module again in a fresh / uninitialized state. Clearing the require cache is officially allowed / not a hack.
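A minimal sketch of that approach (the module path is hypothetical): delete the cached entry in a beforeEach so every spec re-requires the module in a fresh state:
describe('update', () => {
  let update;

  beforeEach(() => {
    // drop the cached module so the next require re-evaluates the module code
    delete require.cache[require.resolve('./mymodule')];
    update = require('./mymodule').update;
  });

  it('initializes and stores the value on first call', () => {
    const result = update.call({}, 42); // nothing is initialized yet, so initialize runs
    expect(result.value).toEqual(42);
  });
});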

Best practices for callbacks within a scope

Typically, in "constructor" you subscribe to events with lambda-functions:
function Something() {
  this.on('message', function() { ... });
}
util.inherits(Something, events.EventEmitter);
This works well but does not extend well. Methods play better with inheritance:
function Something() {
  this.on('message', this._onMessage);
}
util.inherits(Something, events.EventEmitter);

Something.prototype._onMessage = function() { ... };
What are the best practices to keep these event handler functions?
If I understood the question correctly, I think it depends on how open to changes you are willing to be.
Your second example opens the option for subclasses (or, actually, any class) to override the handler's code, which isn't necessarily a good thing.
The first example prevents overriding, but at the cost of having anonymous functions (sometimes containing a lot of code) inside your constructor. However, this code can be extracted to another private function (not on the prototype, just a regular function inside the module's file).
The open-closed principle deals with this kind of question.
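A small sketch of that extraction (purely illustrative): the handler lives as a regular function in the module's file, so it is neither an anonymous blob inside the constructor nor an overridable prototype method:
var util = require('util');
var events = require('events');

function Something() {
  events.EventEmitter.call(this);
  this.on('message', onMessage.bind(this));
}
util.inherits(Something, events.EventEmitter);

// module-private handler: not exported, not on the prototype
function onMessage(message) {
  // handle the message; `this` is the Something instance thanks to bind above
}

module.exports = Something;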

Models with dependency injection in Nodejs

What is the best practice for injecting dependencies into models? And especially, what if their getter are asynchronous, as with mongodb.getCollection()?
The point is to inject dependencies once with
var model = require('./model')({dep1: foo, dep2: bar});
and call all member methods without having to pass them as arguments. Neither do I want each method to begin with a waterfall of async getters.
I ended up with a dedicated exports wrapper that proxies all calls and passes the async dependencies.
However, this creates a lot of overhead, is quite repetitive, and I generally do not like it.
var Entity = require('./entity');

function findById(id, callback, collection) {
  // ...
  // callback(null, Entity(...));
}

module.exports = function(di) {
  function getCollection(callback) {
    di.database.collection('users', callback);
  }

  return {
    findById: function(id, callback) {
      getCollection(function(err, collection) {
        findById(id, callback, collection);
      });
    },
    // ... more methods, all expecting `collection`
  };
};
What is the best practice for injecting dependencies, especially those with async getters?
If your need is to support unit testing, dependency injection in a dynamic language like javascript is probably more trouble than it's worth. Note that just about none of the modules you require from others are likely to use the patterns for DI you see in Java, .NET, and with other statically compiled languages.
If you want to mock out behavior in order to isolate specific units of code for testing, see the 'sinon' module http://sinonjs.org/. It allows you to dynamically swap in/out interceptors that can either spy on method calls or replace them altogether. In practice, you would write a mocha test where you require your module, then require a module that's leveraged in your code. Use sinon to spy or stub a method on that module and as a result, you can isolate your code.
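As a rough illustration of that pattern (module and method names are made up), a mocha test can stub a dependency's method with sinon and restore it afterwards:
var sinon = require('sinon');
var mailer = require('./mailer');   // dependency used by the code under test
var signup = require('./signup');   // code under test

describe('signup', function () {
  it('sends a welcome mail', function () {
    var sendStub = sinon.stub(mailer, 'send'); // intercept mailer.send
    signup.register('bob@example.com');
    sinon.assert.calledOnce(sendStub);
    sendStub.restore();                        // put the real method back
  });
});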
There is one scenario where I've not been able to completely isolate 3rd party code with sinon, and this is when the act of require()ing a module executes some behavior that you don't want to run in your test. For that scenario, I made a super simple module called 'mockrequire' https://github.com/mateodelnorte/mockrequire that allows you to provide an inline mock to be required instead of the actual module. You can provide a mock that uses spy or stub from sinon and have the same syntax and patterns as all the rest of your tests.
Hopefully this answers the underlying question from your post. ;)
In very simple situations, you could simply export a function that modifies objects in your file scope and returns your actual exports object, but if you want to inject more variably (i.e. for more than one use from your app) it's generally better to create a wrapper object like you have done.
You can reduce some overhead and indentation in some situations by using a wrapper class instead of a function returning an object.
For instance
function findById(id, callback, collection) {
  // ...
  // callback(null, Entity(...));
}

function Wrapper(di) {
  this.di = di;
}

module.exports = Wrapper; // or do 'new' usage in a function if preferred

Wrapper.prototype.findById = function (id, callback) {
  // use this.di to call findById and getCollection
}; // etc
Other than that, there's not a whole lot you can do to improve things. I like this approach, though: it keeps the di state explicit and separate from the function body of findById, and by using a class you reduce the nesting of indentation a little bit at least.
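For illustration, usage of the wrapper class then looks something like this (the shape of the di object is whatever your app already passes in):
var Model = require('./model');
var model = new Model({ database: db }); // db being your connected database handle

model.findById('42', function (err, entity) {
  // ...
});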
