How to get full coverage of a CLI Node.js app with Istanbul?

I am using this stack: Istanbul/Mocha/Chai/supertest (for HTTP tests)/Sinon (for timer tests), but I am having some problems with testing CLI tools.
My question is simple: how can I test my CLI program and achieve 100% code coverage with Istanbul at the same time? No matter what tool you are using, I would like to understand how you do it!
I found this article, which was very helpful at the beginning, but:
It was written in 2014
The module mock-utf8-stream does not seem standard
It does not explain clearly the code architecture
cheers

This will be done in 2 steps:
Make sure your test suite is set up to correctly spawn the CLI execution
Set up nyc (reason for switching from istanbul to nyc explained below) to go through the script files behind your CLI tool
Setting up your tests to spawn subprocesses
I had to set up some CLI tests a few months ago on Fulky (the project is paused right now, but that's temporary) and wrote my test suite as follows:
const expect = require('chai').expect;
const spawnSync = require('child_process').spawnSync;

describe('Executing my CLI tool', function () {
  // If your CLI tool is taking some expected time to start up / tear down, you
  // might want to set this to avoid slowness warnings.
  this.slow(600);

  it('should pass given 2 arguments', () => {
    const result = spawnSync(
      './my-CLI-tool',
      ['argument1', 'argument2'],
      { encoding: 'utf-8' }
    );

    expect(result.status).to.equal(0);
    expect(result.stdout).to.include('Something from the output');
  });
});
You can see an example here, but bear in mind that this is a test file run with Mocha that itself runs Mocha in a spawned process.
A bit of an Inception situation, so it might be confusing; it's testing a Mocha plugin, hence the added brain game. The approach should still apply to your use case if you set that complexity aside.
Setting up coverage
You will then want to install nyc (npm i nyc --save-dev), the current CLI tool for Istanbul, because unlike the previous CLI (istanbul itself), it supports coverage for applications that spawn subprocesses.
The good thing is that it's still the same tool, same team, etc. behind nyc, so the switch is really trivial (for example, see this transition).
In your package.json, add to your scripts:
"scripts": {
"coverage": "nyc mocha"
}
You will then get a report with npm run coverage (you will probably have to set the reporter option in .nycrc) that goes through your CLI scripts as well.
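For reference, here is a minimal .nycrc sketch; the include globs are assumptions about your project layout, and the reporters are just common choices:

{
  "all": true,
  "include": ["src/**/*.js", "bin/**/*.js"],
  "reporter": ["text", "html"]
}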
I haven't set up this coverage part with the project mentioned above, but I have just applied these steps locally and it works as expected so I invite you to try it out on your end.

Related

Express +Jest. Test files are running sequentially instead of in parallel

I have an Express.JS server which uses jest and supertest as a testing framework.
It has been working excellently.
When I call my test npm script, it runs npx jest and all of my test files run in parallel.
However, when I ran my tests recently they ran sequentially, which takes a very long time, and they have done so ever since.
I haven't changed any jest or npm configuration, nor have I changed my test files themselves.
Has anyone experienced this? Or is it possible that something in my configuration is incorrect?
jest.config
export default {
  setupFilesAfterEnv: ['./__tests__/jest.setup.js'],
}
jest.setup.js
import { connectToDatabase } from '/conn'
// Override the dotenv to use different .env file
require('dotenv').config({
path: '.env.test',
})
beforeAll(() => {
connectToDatabase()
})
test('', () => {
// just a dummy test case
})
EDIT: Immediately after posting the question, I re-ran the tests and they ran in parallel, without me changing anything. If anyone has any knowledge about this, I'd be interested to get a second opinion.
After intermittent switching between parallel and sequential runs for unknown reasons, I have found it works consistently after adding the --no-cache arg to the npx jest call.
See below where I found the answer:
GitHub -> jest not always running in parallel
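For example, a minimal package.json scripts entry with the flag applied (assuming your test script simply wraps npx jest, as described in the question):

"scripts": {
  "test": "npx jest --no-cache"
}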

How to collect code coverage with Istanbul when executing HTTP endpoints via Postman or Karate

I have a JS project that provides a set of endpoints leveraging Express with a typical express/router pattern.
const express = require('express');
const router = new express.Router();
router.post('/', async (req, res, next) => { });
router.get('/:abc', async (req, res, next) => { });
module.exports = router;
I can successfully start the server with npm start, which calls node ./src/index.js and makes the endpoints available at https://localhost:8080.
I can also successfully test these endpoints utilizing a tool like Postman or automation like Karate.
The problem I'm having is that I can't seem to collect code coverage using Istanbul when exercising the product source JS through http://localhost:8080.
I've tried npm start followed by nyc --all src/**/*.js gradlew test, the latter being automation that tests the endpoints. This resulted in 0% coverage, which I'm assuming was because nyc was not running with npm start.
Next I tried nyc --all src/**/*.js npm start and noticed some coverage, but this was just coverage from starting the Express server.
Next I tried nyc --all src/**/*.js npm start followed by gradlew test and noticed the code coverage results were the same as when no endpoint tests were run.
Next I tried putting the prior two commands into a single JS script (myscript.js), running each asynchronously so that the Express server was started before the gradle tests ran, and ran nyc --all src/**/*.js myscript.js. The results were the same as my previous trial, wherein only npm start received code coverage.
Next I tried nyc --all src/**/*.js npm start followed by nyc --all src/**/*.js --no-clean gradlew test and noticed the code coverage results were the same as when no endpoint tests were run.
Next I tried all of the attempts above wrapped into package.json scripts, running npm run <scriptName>, and got the same exact behavior.
Finally I tried nyc instrument src instrumented/src --compact=false followed by npm run start:coverage, wherein the start:coverage script calls the instrumented index.js via node ./instrumented/src/index.js, followed by gradlew test, followed by nyc report --reporter=lcov. This attempt also failed to produce any additional code coverage from the gradlew endpoint tests.
Doing some research online I came across this post
How do I setup code coverage on my Express based API?
And thought this looks eerily similar to my problem, i.e. Istanbul doesn't know how to cover code when it is exercised through endpoint execution.
I decided to still post this, as the above post is quite old, and I wanted to get opinions and see if there is a better solution than
https://github.com/gotwarlost/istanbul-middleware
EDIT
Adding more specifics about how we start the Express server and run automation without Istanbul today. Just to clarify what we're working with and automation tools we're invested in. (Mainly Karate and Java)
/*
calls --> node -r dotenv/config src/index.js
*/
npm start
/*
calls --> gradlew clean test
this effectively calls a tool called Karate
Karate's base url is pointed to: https://localhost:8080
Karate tests execute endpoints on that base url
This would be akin to using Postman however Karate has quite a bit of configuration options
https://github.com/intuit/karate
*/
npm test
Through many hours of investigation we've managed to solve this. The project posted earlier by @balexandre has been updated to illustrate how to do this:
https://github.com/kirksl/karate-istanbul
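The gist of that approach, as a rough sketch (the script names and flags below are my assumptions; see the repository for the real setup): run the server under nyc with --silent so coverage is collected but not reported yet, execute the endpoint tests against it, shut the server down gracefully so nyc can flush its coverage data, then generate the report.

"scripts": {
  "start:coverage": "nyc --silent node -r dotenv/config src/index.js",
  "report:coverage": "nyc report --reporter=lcov --reporter=text"
}

Note that the server process must exit normally (not be killed with SIGKILL), otherwise nyc never gets the chance to write out its coverage data.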
As said in the comments, you never start your server to run the tests... the tests point at your server when you require the server file.
In my example, I'm running Mocha with Chai, and the chai-http package helps to call the server.
server.js
const app = require("express")();
// everything else ...
exports.server = app;
In your end-to-end tests, you can then easily have:
const chai = require('chai');
const chaiHttp = require('chai-http');
chai.use(chaiHttp);

const server = require("./server.js").server;

...

  it("should calculate the circumference", done => {
    chai
      .request(server) // <-- attach your server here
      .get('/v1/circumference/10')
      .end((err, res) => {
        expect(res.status).to.be.eql(200);
        expect(res.type).to.be.eql('application/json');
        expect(res.body.result).to.eql(62.83185307179586);
        done();
      });
  });
});
I've made a very simple project and pushed it to GitHub so you can check it out and run everything, in order to see how it all works together.
GitHub Project
Added
I've added a route so it can show the coverage report (I used the HTML report) and created a static route to it.
When you run the coverage script (npm run coverage), it will generate the report inside the ./report folder, and a simple Express route pointed at that folder lets one view it as an endpoint.
commit info for such change
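A minimal sketch of such a static route (the route path is my assumption; the commit above has the actual change):

const express = require('express');
const app = express();

// Serve the Istanbul HTML report, generated into ./report by `npm run coverage`,
// as a browsable endpoint.
app.use('/coverage', express.static('report'));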

How to setup mocha to run unit tests first and then start a server and run integration tests

I have unit tests and integration tests that need to be run with one 'npm run mocha' command. Before my integration tests, the suite was set up and working properly, with mocha.opts including the before step like so:
test/beforeTests.js (sets up some important variables)
test/*.x.js (bulk of the tests, all the files ending in .x.js are run)
When including my integration tests, a unit test fails because the running server clashes with some stubbed functions. Therefore I wanted to run the unit tests first, then start the server, run the integration tests and close the server, which I tried by changing mocha.opts to:
test/beforeTests.js (sets up some important variables)
test/*.x.js (bulk of the tests, all the files ending in .x.js are run)
test/startServer.js (start the server)
test/integration/*.x.js (run integration tests)
test/afterTests.js (close the server)
beforeTests.js:

before(function (done) {
  // do something
});

startServer.js:

before(function () {
  return runServer();
});
Both beforeTests and startServer work; however, they are both executed at the beginning of the test run instead of:
beforeTest > do unit tests > start server > do integration tests > stop server
I cannot merge the integration tests into one file, nor the unit tests, as there are too many. Is there a way to set up mocha.opts to do what I want? I've looked around and nothing really fits.
One technique I have used for this is to have two npm commands, one for unit and one for integration tests, then combine them into a single npm command if I want to run them in one execution.
"test": "npm run test:unit && npm run test:integration"
"test:unit": "mocha test/unit",
"test:integration": "mocha test/integration"
You can start the server as part of the integration tests, as sketched below.
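For instance, a hedged sketch of a hooks file picked up only by the integration run (the file name and the runServer()/close() API are assumptions based on the question):

// test/integration/hooks.js (hypothetical file, loaded only when running `mocha test/integration`)
const { runServer } = require('../../src/server'); // assumed to resolve with the server instance

let server;

before(async function () {
  // Start the server once, before all integration tests.
  server = await runServer();
});

after(function (done) {
  // Tear the server down afterwards so the process can exit cleanly.
  server.close(done);
});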
I suggest using test (npm test) instead of mocha (npm run mocha), because it is the more standard test command name in a Node project.
Hope it helps

How to get coverage report for external APIs?

I'm trying to get a coverage report for the API code. I have all the test cases running perfectly in mocha. My problem is that my API server and the test cases live in separate repositories.
I start my Node API server on localhost, on a particular port, and then, using supertest in mocha, hit the localhost URL to test the server's response.
Can you suggest the best way to generate a coverage report for those APIs?
Testing env
If you want to get coverage, supertest should be able to bootstrap the app server, like in the express example.
The drawback is that you must not run your tests against a running server, like
var api = request('http://127.0.0.1:8080');
but you must require your app entry point so that supertest can start it, like
var app = require('../yourapp');
var api = request(app);
Of course, this may (or may not) result in a bit of refactoring on your app bootstrap process.
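A common shape for that refactoring, sketched here with assumed file names: export the Express app from one module and keep the listen() call in a separate entry point, so that tests can require the app without binding a port.

// app.js -- exports the Express app without calling listen() (file names are assumptions)
const express = require('express');
const app = express();

// example route, just to have something to hit in tests
app.get('/ping', (req, res) => res.json({ pong: true }));

module.exports = app;

// server.js -- the only file that binds a port; this is what `npm start` runs:
// const app = require('./app');
// app.listen(8080);

With that split, var api = request(require('./app')); works, and supertest manages the listening socket itself.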
As other options, you can use node CLI debug capabilities or use node-inspector.
Coverage setup
Supposing you are willing to install istanbul alongside mocha to get coverage:
npm install -g istanbul
then
istanbul cover mocha --root <path> -- --recursive <test-path>
cover is the command used to generate code coverage
mocha is the executable js file used to run tests
--root <path> the root path to look for files to instrument (aka the "source files")
-- is used to pass arguments to your test runner
--recursive <test-path> the root path to look for test files
You can then add --include-all-sources to get cover info on all your source files.
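Putting it together, a sketch with assumed paths (note that on many setups you have to point istanbul at the _mocha executable rather than the mocha wrapper, because the wrapper spawns a child process that istanbul cannot follow):

istanbul cover --root src --include-all-sources node_modules/mocha/bin/_mocha -- --recursive test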
In addition you can get more help running
istanbul help cover

Run "node test" as part of Visual Studio Team Services build task with results in "tests" tab

I have a project that contains tests that I am running with Mocha from the command line. I have set up a test script in my package.json, which looks as follows:
"test": "mocha ./**/*.spec.js --reporter dot --require jsdom-global/register"
I currently have a simple task set up in Visual Studio Team Services which just runs the npm test command; this runs Mocha in a console and continues/fails the build depending on whether the tests pass.
What I'd like to be able to do is have the results of my tests populate the "tests" tab in the build definition after it's run. In the same way that I can get this tab populated if I'm running tests on C# code.
I've tried using Chutzpah for this, but it's overly complicated and seems to require jumping through all sorts of hoops, including changing my tests and writing long config files. I already have loads of tests written, so I really don't want to do that. When it did finally discover any of my tests, it complained about require and other things related to Node modules.
Is what I'm asking for actually possible? Is there a simple way of achieving this that's compatible with running my tests in Node?
I've found a good way of doing it that requires no third-party adapter (e.g. Chutzpah). It involves getting Mocha to output its report in an XML format and setting up Visual Studio Team Services to publish the results in an extra step of the build definition.
I installed mocha-junit-reporter (https://www.npmjs.com/package/mocha-junit-reporter) and altered my test script to the following:
"test": "mocha ./temp/**/*.spec.js --reporter mocha-junit-reporter --require jsdom-global/register"
I then created a new step in my build definition using the "Publish Test Results" task. I set the result format to "JUnit" and added the correct path for the outputted test-results.xml file created by the reporter.
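If the XML needs to land in a specific place, mocha-junit-reporter also accepts a mochaFile reporter option; the path below is only an example:

"test": "mocha ./temp/**/*.spec.js --reporter mocha-junit-reporter --reporter-options mochaFile=./test-results/test-results.xml --require jsdom-global/register"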
It is worth noting that although Mocha comes with an "XUnit" reporter, that format does not appear to work correctly with VSTS, even though it's listed as an option.
The results of npm test now show up in the "tests" tab alongside any other tests from MSTest etc.
I'm using Karma and got this to work in the same way as @dylan-parry suggested. Some excerpts below in case it helps others:
package.json
"scripts": {
"test": "cross-env NODE_ENV=test karma start"
}
karma.conf.js
const webpackCfg = require('./webpack.config')('test');

module.exports = function karmaConfig(config) {
  config.set({
    reporters: ['mocha', 'coverage', 'junit'],
    junitReporter: {
      outputDir: 'coverage',
      outputFile: 'junit-result.xml',
      useBrowserName: false
    }
  })
...
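Note that the 'junit' reporter above comes from the karma-junit-reporter plugin, which needs to be installed separately:

npm install --save-dev karma-junit-reporter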
TFS
It may also be worth adding that I'm using branch policies on my git branch to prevent PRs from being merged if the tests fail; info via this link:
https://www.visualstudio.com/en-us/docs/git/branch-policies
Here's the output in TFS:
[screenshot of the test results in TFS]
Next step is to get the coverage working too!
