Coverage drops when using babel - node.js

I put off the decision to use Babel, but found that it is necessary for writing better code.
Before Babel I used Mocha and Chai; I started testing my code and reached up to 100% coverage. But since using Babel, my code coverage drops significantly (of course), as I am only covering the resulting ES5 output.
So my question is: how can I test my source code without a huge drop in my statistics?

Generally, the core issue is that Babel has to insert extra code to handle all of the edge cases of the spec, and that generated code may not matter from the standpoint of coverage calculation.
The best approach currently is to use https://github.com/istanbuljs/babel-plugin-istanbul to add the coverage-tracking metadata to your original ES6 code, which means that even though Babel eventually converts it to ES5, the coverage will be reported against the ES6 code.
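As a rough sketch (assuming a Mocha + nyc setup, and that you only want instrumentation while testing), the plugin can be enabled just for the test environment in your Babel config:

// babel.config.js - enable Istanbul instrumentation only when NODE_ENV=test
module.exports = {
  presets: ['@babel/preset-env'],
  env: {
    test: {
      plugins: ['istanbul'],
    },
  },
};

A typical companion setup is to run the tests through nyc with instrumentation and source maps turned off in the nyc config (instrument: false, sourceMap: false), so the Babel plugin is the only thing instrumenting the code and the numbers refer to your original source.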

Related

Is there a way to include a secondary coverage report during a Jest run through?

Using Jest I have created a series of tests. Some of the tests run through Puppeteer; others are just basic tests. However, they all run in one test run (for reasons I have).
Coverage for the basic tests is being collected. I am also getting a semi-coherent coverage report using puppeteer-to-istanbul. I have to clean it up, but it's workable at the moment.
I would like to find a way, as part of that single run, to combine the coverage data coming from Puppeteer into the coverage report generated by Jest and its coverage provider, and have them reported in a single step at the end of the test run.
I have found various methods of possibly combining the reports after the test run, but, not to bog this question down, I will just say that this is not an ideal solution.
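For reference, the post-run merging mentioned above typically looks something like the following hedged sketch using istanbul-lib-coverage; the input file paths (Jest's coverage-final.json and the .nyc_output/out.json written by puppeteer-to-istanbul) and the output directory are assumptions:

// merge-coverage.ts - hypothetical post-run merge of Jest and Puppeteer coverage
import * as fs from 'fs';
import { createCoverageMap } from 'istanbul-lib-coverage';
import { createContext } from 'istanbul-lib-report';
import { create } from 'istanbul-reports';

// Paths are assumptions for illustration; adjust to where your reports actually land.
const jestCoverage = JSON.parse(fs.readFileSync('coverage/coverage-final.json', 'utf8'));
const puppeteerCoverage = JSON.parse(fs.readFileSync('.nyc_output/out.json', 'utf8'));

// Merge both coverage objects into a single map.
const map = createCoverageMap(jestCoverage);
map.merge(puppeteerCoverage);

// Emit one combined report.
const context = createContext({ dir: 'coverage/combined', coverageMap: map });
create('html').execute(context);
create('text').execute(context);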

Test both source code and bundled code with jest

Let's say I am developing an NPM module.
I am using Jest for testing, Webpack for bundling, and TypeScript in general.
When I test the source code, everything is fine, with very good code coverage and all of that. But I think that is not enough. It is possible that something breaks after the Webpack bundle is generated, for instance a dynamic import (a require with a variable instead of a fixed path) that becomes incorrect after bundling, or other scenarios.
How should I write tests that also cover the bundle? Should I test against both the source code (so that I get good coverage) and the bundle? Usually I import things directly from specific files (e.g. /utils/myutil.ts), but with the bundle this would be impossible. How do I handle this?
I do test against the bundle for some of my projects, specifically for some libraries published to npm.
To do this I create some code that imports the bundle and write tests against that code, as sketched below. I don't care about coverage in this case; I just want to verify that my library does what it's supposed to do.
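A minimal sketch of what such a smoke test might look like, assuming the bundle is emitted to dist/index.js and exposes a hypothetical add helper (both names are made up for illustration):

// bundle.spec.ts - smoke test against the built output, not the source
// dist/index.js and the exported add() are assumptions, not part of the original question.
import { describe, it, expect } from '@jest/globals';
import { add } from '../dist/index.js';

describe('bundled library', () => {
  it('still exposes the public API after bundling', () => {
    // Only observable behaviour matters here; coverage of the bundle is not the goal.
    expect(add(2, 3)).toBe(5);
  });
});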
In another case (not a library) I'm also testing against the bundle, but there I'm running more integration/e2e tests.
Don't worry about coverage that much unless every function (or most of them) in your code is going to be used by the final user. You should test something the way it is used. 100% coverage is nice to see, but it is very impractical to achieve when projects get big, and in any case it's a waste of time. Of course, some people will disagree :)

How to avoid code redundancy in large amounts of Node.JS BDD tests

For the last few months, I have been working on the backend (REST API) of a quite big project that we started from scratch. We were following BDD (behavior-driven development) standards, so now we have a large number of tests (~1000). The tests were written using Chai - a BDD-style assertion library for Node.js - but I think that this question can be expanded to general good practices when writing tests.
At first, we tried to avoid code redundancy as much as possible, and it went quite well. As the number of lines of code and the number of people working on the project grew, it became more and more chaotic, but still readable. Sometimes minor changes in the code that could be applied in 15 minutes caused the need to change, e.g., mock data and methods in 30+ files, which meant 6 hours of changes and running tests (an extreme example).
TL;DR
We now want to refactor these BDD tests. As an example, we have a function like this:
function RegisterUserAndGetJWTToken(user_data: any, next: (token: string) => void) {
  chai.request(server)
    .post(REGISTER_URL)
    .send(user_data)
    .end((err: any, res: any) => {
      const token = res.body.token;
      next(token);
    });
}
This function is used in most of our test files. Does it make sense to create something like a test suite that would contain these kinds of functions, or are there better ways to avoid redundancy when writing tests? Then we could use imports like these:
import {RegisterUserAndGetJWTToken} from "./test-suite";
import {user_data} from "./test-mock-data";
Do you have any good practices that you can share?
Are there any npm packages that could be useful (or packages for other programming languages)?
Do you think that this approach also has downsides (like chaos when there would be multiple imports)?
Maybe there is a way to inject or inherit the test suite for each file, to avoid imports and have it by default in each file?
EDIT: Forgot to mention - I mean integration tests.
Thanks in advance!
Refactoring current test suite
Your principle should be raising the level of abstraction in the tests themselves. This means that a test should consist of high-level method calls, expressed in domain language. For example:
registerUser('John', 'john@smith.com');
const lastEmail = getLastEmailSent();
lastEmail.recipient.should.equal('john@smith.com');
lastEmail.contents.should.contain('Dear John');
Now in the implementation of those methods, there could be a lot of things happening. In particular, the registerUser function could do a POST request (like in your example). The getLastEmailSent function could read from a message queue or a fake SMTP server. The point is that you hide the details behind an API.
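As a hedged sketch of what those helpers might look like, built on the same chai-http setup as in the question (server, REGISTER_URL and the fake SMTP client are assumptions for illustration):

// automation-layer.ts - a hypothetical domain-oriented API over the system under test.
// server, REGISTER_URL and fakeSmtp are assumptions based on the question's setup.
import chai from 'chai';
import chaiHttp from 'chai-http';
import { server } from './server';
import { fakeSmtp } from './fake-smtp';

chai.use(chaiHttp);

const REGISTER_URL = '/api/register';

export async function registerUser(name: string, email: string): Promise<string> {
  // Hides the HTTP details behind a domain-level call; returns the JWT token.
  const res = await chai.request(server).post(REGISTER_URL).send({ name, email });
  return res.body.token;
}

export async function getLastEmailSent() {
  // Reads from whatever test double captures outgoing mail (a fake SMTP server here).
  return fakeSmtp.lastMessage();
}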
If you follow this principle, you end up creating an Automation Layer - a domain-oriented, programmatic API to your system. When creating this layer, you follow all the good design principles, like DRY.
The benefit is that when a change in the code happens, there will be only one place to change in the test code - the Automation Layer - and not the tests themselves.
I see that what you propose (extracting the RegisterUserAndGetJWTToken and test data) is a good step towards creating an automation layer. I wouldn't worry about the require calls. I don't see any reason for not being explicit about what our test depends on. Maybe at a later stage some of those could be gathered in larger modules (registration, emailing etc.).
Good practices towards a maintainable test suite
Automate at the right level.
Sometimes it's better to go through the UI or the REST API, but often a direct call to a function will be more sensible. For example, if you write a test for calculating taxes on an invoice, going through the whole application for each of the test cases would be overkill. It's much better to have one end-to-end test that checks that all the pieces work together, and to automate all the specific cases at the lowest possible level. That way you get good coverage as well as speed and robustness of the test suite.
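For illustration only (calculateTax and its signature are made up), the specific cases might be plain unit tests that call the function directly, with a single end-to-end test elsewhere covering the full flow:

// tax.spec.ts - specific cases automated at the lowest level: a direct function call.
// calculateTax is a hypothetical pure function; adjust to your real domain logic.
import { expect } from 'chai';
import { calculateTax } from '../src/invoicing/calculate-tax';

describe('calculateTax', () => {
  it('applies the standard rate to the net amount', () => {
    // closeTo avoids strict floating-point equality.
    expect(calculateTax({ net: 100, rate: 0.23 })).to.be.closeTo(23, 0.001);
  });

  it('charges nothing on a zero-value invoice', () => {
    expect(calculateTax({ net: 0, rate: 0.23 })).to.equal(0);
  });
});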
The guiding principle when writing a test is readability.
You can refer to this discussion for a good explanation.
Treat your test helper code / Automation Layer with the same care as you treat your production code.
This means you should refactor it with great care and attention, following all the good design principles.

What is the best and fastest way to clean up traces of JHipster?

When you build with a code generator, sometimes you would like to remove traces, if you will, of anything in the source that clearly shows that you have used a generator to bootstrap your project.
This is something that possibly can be done manually and by refactoring, with some efforts and knowledge of the code and its structure.
Of course, this is what you would want to do when your code is stable and you know that you won't be using the generator again.
The Question:
What is the best and fastest way to clean up traces of JHipster, and are there scripts out there to do this?

Should I switch from Vows to Mocha?

I'm trying to decide whether to switch from Vows to Mocha for a large Node app.
I've enjoyed almost all of the Vows experience - but there is just something strange about the argument passing. I always have to scratch my head to remember how topics work, and that interferes with the basics of getting the tests written. It is particularly problematic on deeply nested asynchronous tests. Though I find that combining Vows with async.js can help a little.
Mocha, on the other hand, seems more flexible in its reporting. I like the freedom to choose the testing style and, importantly, it runs in the browser too, which will be very useful. But I'm worried that it still doesn't solve the readability problem for deeply nested asynchronous tests.
Does anyone have any practical advice - can Mocha make deeply nested tests readable? Am I missing something?
Mocha is ace. It provides a done callback, rather than the waitsFor that Jasmine provides. I can't speak about migration from Vows, but from Jasmine it was straightforward. Inside your Mocha test function you can use async if you want (or Seq etc. if you want to be legacy), though if you need nested callbacks at that point it's really an integration test, which might make you think about the granularity of your tests.
OT: 100% test coverage rarely delivers any value.
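To illustrate the done-callback style mentioned above (fetchUser is a made-up async helper with a Node-style callback):

// done-callback.spec.ts - how Mocha signals completion of an asynchronous test.
// fetchUser is hypothetical; any (err, result) callback-style function works the same way.
import { expect } from 'chai';
import { fetchUser } from './helpers';

describe('fetchUser', () => {
  it('resolves the user by id', (done) => {
    fetchUser('42', (err, user) => {
      if (err) return done(err);       // fail the test with the error
      expect(user.id).to.equal('42');
      done();                          // tell Mocha the async work has finished
    });
  });
});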
Deeply nested tests are solved by using flow control in your unit test suite.
Vows does not allow this easily, because its exports style requires flow-control libraries that specifically support it.
Either write a flow-control library for Vows, or switch to Mocha and re-use an existing flow-control library.
