Should I switch from Vows to Mocha? - node.js

I'm trying to decide whether to switch from Vows to Mocha for a large Node app.
I've enjoyed almost all of the Vows experience - but there is just something strange about the argument passing. I always have to scratch my head to remember how topics work, and that interferes with the basics of getting the tests written. It is particularly problematic on deeply nested asynchronous tests. Though I find that combining Vows with async.js can help a little.
So Mocha seems more flexible in its reporting. I like the freedom to choose the testing style & importantly it runs in the browser too, which will be very useful. But I'm worried that it still doesn't solve the readability problem for deeply nested asynchronous tests.
Does anyone have any practical advice - can Mocha make deeply nested tests readable? Am I missing something?

Mocha is ace. It provides a done callback, rather than the waitsFor that Jasmine provides. I can't speak to migrating from Vows, but coming from Jasmine it was straightforward. Inside your Mocha test function you can use async if you want (or Seq etc. if you want to be legacy), though if you need nested callbacks at that point it's an integration test, which might make you think about the granularity of your tests.
OT: 100% test coverage rarely delivers any value.

Deeply nested tests are solved by using flow control in your unit test suite.
Vows does not make this easy, because its exports style requires a flow control library written specifically to support it.
Either write a flow control library for vows or switch to mocha and re-use an existing flow control library.

Related

What is role of the suite function in Mocha?

I read the book Web Development with Node.js and Express, and it uses the function suite().
var assert = require('chai').assert;
suite('tests', function () {
  // set of tests
});
I don't understand where it comes from. I can't find any documentation about this function.
It seems to look and behave just like the describe() function in Mocha.
Mocha supports several different ways of writing tests (interfaces) so that you can choose a style that suits your methodology. describe() and suite() essentially do the same thing: they let you label and group together a set of tests; the grouped tests are organised under a common label in the output and can use common setup and teardown functions.
The choice of which function to use depends on whether you are using a Behaviour Driven Development (BDD) methodology (where you describe() the behaviour you want it() to do), or Test Driven Development (TDD), where you define a suite() of test()s you want your code to pass. You should choose whichever style you feel makes your code more readable.
Here's a blog explaining the Difference Between TDD and BDD with regard to test design.
Documentation can be found on the mocha website: https://mochajs.org/#tdd
suite is the TDD version of describe. You generally use it to describe and isolate the functionality/features/behaviour that you are going to test.

How to avoid code redundancy in large amounts of Node.JS BDD tests

For the last few months, I was working on the backend (REST API) of a quite big project that we started from scratch. We were following BDD (behavior-driven development) standards, so now we have a large number of tests (~1000). The tests were written using chai - a BDD-style assertion library for Node.JS - but I think this question can be expanded to general good practices for writing tests.
At first, we tried to avoid code redundancy as much as possible, and it went quite well. As the number of lines of code and people working on the project grew, it became more and more chaotic, though still readable. Sometimes minor changes to the code that could be applied in 15 minutes meant having to change e.g. mock data and methods in 30+ files, which meant 6 hours of changes and re-running tests (an extreme example).
TL;DR
We want to refactor now these BDD tests. As an example we have such a function:
function RegisterUserAndGetJWTToken(user_data: any, next: any) {
  chai.request(server).post(REGISTER_URL).send(user_data).end((err: any, res: any) => {
    const token = res.body.token; // was an undeclared global in our first version
    next(token);
  });
}
This function is used in most of our test files. Does it make sense to create something like a test-suite that would contain this kind of functions or are there better ways to avoid redundancy when writing tests? Then we could use imports like these:
import {RegisterUserAndGetJWTToken} from "./test-suite";
import {user_data} from "./test-mock-data";
Do you have any good practices that you can share?
Are there any npm packages that could be useful (or packages for other programming languages)?
Do you think that this approach also has downsides (like chaos when there are multiple imports)?
Maybe there is a way to inject or inherit the test-suite for each file, to avoid imports and have it by default in each file?
EDIT: Forgot to mention - I mean integration tests.
Thanks in advance!
Refactoring current test suite
Your principle should be raising the level of abstraction in the tests themselves. This means that a test should consist of high-level method calls, expressed in domain language. For example:
registerUser('John', 'john@smith.com')
lastEmail = getLastEmailSent()
lastEmail.recipient.should.be 'john@smith.com'
lastEmail.contents.should.contain 'Dear John'
Now in the implementation of those methods, there could be a lot of things happening. In particular, the registerUser function could do a post request (like in your example). The getLastEmailSent could read from a message queue or a fake SMTP server. The thing is you hide the details behind an API.
If you follow this principle, you end up creating an Automation Layer - a domain-oriented, programmatic API to your system. When creating this layer, you follow all the good design principles, like DRY.
The benefit is that when a change in the code happens, there will be only one place to change in the test code - the Automation Layer, not the tests themselves.
I see that what you propose (extracting the RegisterUserAndGetJWTToken and test data) is a good step towards creating an automation layer. I wouldn't worry about the require calls. I don't see any reason for not being explicit about what our test depends on. Maybe at a later stage some of those could be gathered in larger modules (registration, emailing etc.).
Good practices towards a maintainable test suite
Automate at the right level.
Sometimes it's better to go through the UI or REST, but often a direct call to a function will be more sensible. For example, if you write a test for calculating taxes on an invoice, going through the whole application for each of the test cases would be overkill. It's much better to leave one end-to-end test to see if all the pieces act together, and automate all the specific cases at the lowest possible level. That way we get good coverage, as well as speed and robustness of the test suite.
The guiding principle when writing a test is readability.
You can refer to this discussion for a good explanation.
Treat your test helper code / Automation Layer with the same care as you treat your production code.
This means you should refactor it with great care and attention, following all the good design principles.

Node.js "should" library assertion, how does it work?

Our Mocha test suite has this line:
model.getResourceDependencies.should.be.a.Function;
the test code uses the should library
As you can see, the above expression is neither an assignment nor an invocation - or is it?
How does this work? Is there some sort of underlying mechanism onPropertyRead() or something like that so that the should library can execute something even if no function is explicitly called?
By the way, it's damn near impossible to remember any of the should or chai APIs.
should.js uses an ES5 getter.
https://github.com/shouldjs/should.js/blob/9.0.2/lib/should.js#L105
chai uses it too.
https://github.com/chaijs/chai/blob/3.5.0/lib/chai/interface/should.js#L35
In general, such behavior is possible with an ES5 getter or an ES6 Proxy (and Object.prototype.__noSuchMethod__ in the old days).

In vows, is there a `beforeEach` / `setup` feature?

Vows has an undocumented teardown feature, but I cannot see any way to setup stuff before each test (a.k.a. beforeEach).
One would think it would be possible to cheat and use the topic, but a topic is only run once (like teardown), whereas I would like this to be run before each test. Can this not be done in vows?
You can create a topic that does the setup, and the tests come after that. If you want it to run multiple times, create a function and have multiple topics that call that function.
It is a bit convoluted because it is not explicit. You should definitely consider mocha, not only because it is actively maintained, but because it makes tests easier to read than what you end up with when using vows.

Node.js programming workflow - Tests, Code, Tests

Before you start developing something useful in Node.js, what's your process? Do you create tests with VowJS or Expresso? Do you use Selenium tests? When?
I'm interested in gaining a nice workflow to develop all my node.js applications similar to Rails (Cucumber, Rspec, Code).
Sorry for the amount of questions.
Let me know how it works out with you.
The first thing I do is write some documentation or do some wireframes. It helps me visualize what I want to implement.
Then I code the interface/skeleton of my module/application, without implementations.
Then I add specs and tests using testosterone (although vows and expresso are more popular options) and I make them pass by implementing them.
If you find that a private method needs to be tested (it deals with I/O, has complex logic ...) move it to another class and test it independently.
Stub your I/O calls as much as you can. Tests will run faster and you will not have to deal with side effects. I recommend gently.
My testing methodology isn't up to snuff compared with what I had in, for example, Java/JUnit, and I should really work on improving it. I should really practice TDD more.
I played a little bit with expresso and liked the fact that you could generate code coverage reports. What I thought was missing was something like @Before, @BeforeClass and @After, which you can find in Java.
I also played a bit with nodeunit, which does have setup/teardown. I'd still like to play a little more with this framework.
I don't like the vowjs syntax, but it is a very popular BDD framework, so maybe I should use it (more) and get sold on it like a lot of other users. But for now I am going to dismiss vowjs.
I also played with zombie.js a little bit, which is also pretty cool. Lately I also saw another cool testing framework whose name I can't remember, but luckily there are enough options for testing in node.js.
The only thing I don't like is that the IDE integration is not up to snuff, in my opinion. The IDE I had for Java cannot be compared with what I have found for node.js, but I think with a little bit of effort I can make a more useful programming environment. I will try to keep you guys informed about this progress.
P.S.: But what I do like a lot is the npm package manager. When you compare it to, for example, maven, you just say wow. It still has some minor bugs because it is still a young project. But npm is very good in my opinion!
