How to get coverage report for external APIs? - node.js

I'm trying to get coverage report for the API code. I have all the test cases running perfectly in mocha. My problem is that my API server and the test cases are written in separate repositories.
I start my node API server on localhost, on a particular port, and then use supertest in mocha to hit the localhost URL and test the server's response.
Can you suggest the best way to generate a coverage report for those APIs?

Testing env
If you want to get coverage, supertest should be able to bootstrap the app server, like in the express example.
The drawback is that you cannot run your tests against an already-running server, like
var api = request('http://127.0.0.1:8080');
but you must include your app entrypoint to allow supertest to start it like
var app = require('../yourapp');
var api = request(app);
Of course, this may (or may not) result in a bit of refactoring on your app bootstrap process.
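For example, a common split (a sketch only; the file names and the /health route are illustrative, not your actual code) is to export the Express app from one module and only bind the port in a separate entrypoint, so the tests can require the app directly:
// app.js - exports the Express app without listening on a port
var express = require('express');
var app = express();

app.get('/health', function (req, res) {
  res.json({ status: 'ok' });
});

module.exports = app;

// server.js - the entrypoint used in production (npm start)
var app = require('./app');
app.listen(process.env.PORT || 8080);

// in the tests, supertest mounts the app itself
var request = require('supertest');
var api = request(require('../app'));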
As other options, you can use the node CLI debug capabilities or node-inspector.
Coverage setup
Suppose you are willing to install istanbul alongside mocha to get coverage.
npm install -g istanbul
then
istanbul cover mocha --root <path> -- --recursive <test-path>
cover is the command used to generate code coverage
mocha is the executable js file used to run tests
--root <path> the root path to look for files to instrument (aka the "source files")
-- is used to pass arguments to your test runner
--recursive <test-path> the root path to look for test files
You can then add --include-all-sources to get coverage info on all your source files.
In addition you can get more help running
istanbul help cover
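Wired into package.json scripts, this could look roughly like the following sketch (the lib and test paths are illustrative; with older mocha versions istanbul must be pointed at the underlying _mocha executable, because the mocha wrapper spawns a child process that istanbul cannot instrument):
{
  "scripts": {
    "test": "mocha --recursive test",
    "coverage": "istanbul cover --root lib --include-all-sources node_modules/mocha/bin/_mocha -- --recursive test"
  }
}
You would then run npm run coverage and find the report under ./coverage by default.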

Related

How to collect code coverage with Istanbul when executing HTTP endpoints via Postman or Karate

I have a JS project that provides a set of endpoints leveraging Express with a typical express/router pattern.
const express = require('express');
const router = new express.Router();
router.post('/', async (req, res, next) => { });
router.get('/:abc', async (req, res, next) => { });
module.exports = router;
I can successfully start the server with npm start which calls node ./src/index.js and makes the endpoints available at https://localhost:8080
I can also successfully test these endpoints utilizing a tool like Postman or automation like Karate.
The problem I'm having is that I can't seem to collect code coverage using Istanbul when exercising the product source JS through http://localhost:8080.
I've tried npm start followed by nyc --all src/**/*.js gradlew test, the latter being the automation that tests the endpoints. This resulted in 0% coverage, which I'm assuming was due to not running nyc with npm start.
Next I tried nyc --all src/**/*.js npm start and noticed some coverage, but this was just coverage from starting the Express server.
Next I tried nyc --all src/**/*.js npm start followed by gradlew test and noticed the code coverage results were the same as when no endpoint tests were run.
Next I tried putting the prior two commands into a single JS script (myscript.js), running each asynchronously so that the Express server was started before the Gradle tests, and ran nyc --all src/**/*.js myscript.js. The results were the same as my previous trial, wherein only npm start received code coverage.
Next I tried nyc --all src/**/*.js npm start followed by nyc --all src/**/*.js --no-clean gradlew test and noticed the code coverage results were the same as when no endpoint tests were run.
Next I tried all of the attempts above by wrapping them into package.json scripts and running npm run <scriptName> getting the same exact behavior.
Finally I tried nyc instrument src instrumented/src --compact=false followed by npm run start:coverage, where this start:coverage script calls the instrumented index.js via node ./instrumented/src/index.js, followed by gradlew test and then nyc report --reporter=lcov. This attempt also failed to produce any additional code coverage from the gradlew endpoint tests.
Doing some research online I came across this post
How do I setup code coverage on my Express based API?
and thought it looked eerily similar to my problem, i.e. Istanbul doesn't know how to cover code when it is exercised by executing endpoints.
I decided to still post this, as the above post is quite old, to get opinions and see if there is a better solution than
https://github.com/gotwarlost/istanbul-middleware
EDIT
Adding more specifics about how we start the Express server and run automation without Istanbul today, just to clarify what we're working with and the automation tools we're invested in (mainly Karate and Java).
/*
calls --> node -r dotenv/config src/index.js
*/
npm start
/*
calls --> gradlew clean test
this effectively calls a tool called Karate
Karate's base url is pointed to: https://localhost:8080
Karate tests execute endpoints on that base url
This would be akin to using Postman however Karate has quite a bit of configuration options
https://github.com/intuit/karate
*/
npm test
Through many hours of investigation we've managed to solve this. The project posted earlier by @balexandre has been updated to illustrate how to do this.
https://github.com/kirksl/karate-istanbul
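For readers who don't want to dig through the repo, the general shape of such a setup (a sketch of the approach, not necessarily what the linked project does; script names and globs are illustrative) is to run the server under nyc, exercise it with the external Karate/Gradle tests, then stop the server gracefully so the instrumented process can write its coverage data on exit, and finally generate the report:
// package.json scripts (sketch)
"start:coverage": "nyc --all --include src/**/*.js node -r dotenv/config src/index.js",
"report:coverage": "nyc report --reporter=lcov"

// src/index.js: exit cleanly on SIGTERM so nyc can flush coverage data
// (the signal handler is an assumption about your app)
const server = app.listen(process.env.PORT || 8080);
process.on('SIGTERM', () => server.close(() => process.exit(0)));

// workflow: npm run start:coverage, then gradlew test against localhost:8080,
// then send SIGTERM to the server process, then npm run report:coverage
Killing the server forcibly tends to lose the coverage data, which is why the graceful shutdown matters.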
As said in the comments, you never start your server to run the tests... the tests will point at your server when you require the server file.
In my example, I'm running mocha with chai, and the chai-http package helps to call the server.
server.js
const app = require("express")();
// everything else ...
exports.server = app;
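A fuller sketch of that server.js (the route and port are illustrative and only match the test below; the require.main check is an assumption, so that requiring the file from tests doesn't bind a second listener):
const express = require("express");
const app = express();

// example route matching the test below
app.get("/v1/circumference/:radius", (req, res) => {
  res.json({ result: 2 * Math.PI * Number(req.params.radius) });
});

// listen only when started via node server.js / npm start;
// the tests just require the exported app
if (require.main === module) {
  app.listen(process.env.PORT || 8080);
}

exports.server = app;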
In your end-to-end tests, you can easily have:
const chai = require('chai');
const chaiHttp = require('chai-http');
const expect = chai.expect;

chai.use(chaiHttp);

const server = require("./server.js").server;

describe('circumference endpoint', () => {
  it("should calculate the circumference", done => {
    chai
      .request(server) // <-- attach your server here
      .get('/v1/circumference/10')
      .end((err, res) => {
        expect(res.status).to.be.eql(200);
        expect(res.type).to.be.eql('application/json');
        expect(res.body.result).to.eql(62.83185307179586);
        done();
      });
  });
});
I've made a very simple project and pushed it to GitHub so you can check it out and run everything, in order to see how it all works together.
GitHub Project
Added
I've added a route so it can show the coverage report (I used the HTML report) and created a static route to it...
When you run the coverage with npm run coverage it will generate the report inside the ./report folder, and a simple express route pointed at that folder lets you see it as an endpoint.
commit info for such change
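That static route can be as small as the following sketch (the /coverage mount point is an assumption; ./report matches the folder mentioned above):
// serve the generated coverage HTML report as an endpoint
const path = require("path");
app.use("/coverage", require("express").static(path.join(__dirname, "report")));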

How to setup mocha to run unit tests first and then start a server and run integration tests

I have unit tests and integration tests that need to be run with one 'npm run mocha' command. The suite before my integration tests was set up and working properly with mocha.opts including the before step like so:
test/beforeTests.js (sets up some important variables)
test/*.x.js (bulk of the tests, all the files ending in .x.js are run)
When including my integration tests, a unit test fails because the running server clashes with some stubbed functions. Therefore I wanted to run the unit tests first, then start the server, run the integration tests and close the server. Which I have tried by changing mocha.opts to:
test/beforeTests.js (sets up some important variables)
test/*.x.js (bulk of the tests, all the files ending in .x.js are run)
test/startServer.js (start the server)
test/integration/*.x.js (run integration tests)
test/afterTests.js (close the server)
beforeTests.js:
before(function(done) {
  // do something
});
startServer.js:
before(function() {
  return runServer();
});
Both beforeTests and startServer work; however, they are both executed at the beginning of the test run instead of:
beforeTest > do unit tests > start server > do integration tests > stop server
I cannot merge the integration tests into one file, or the unit tests, as there are too many. Is there a way to set up mocha.opts to do what I want? I've looked around and nothing really fits what I want to do.
One technique I have used before for this is to have two npm commands for unit and integration tests, then combine them into one npm command if I want to run them in a single execution.
"test": "npm run test:unit && npm run test:integration"
"test:unit": "mocha test/unit",
"test:integration": "mocha test/integration"
You can start the server as part of the integration tests, as sketched below.
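For example, a minimal sketch of a setup file inside the integration folder that starts the server once before those specs and stops it afterwards (the file name, require path and runServer/closeServer API are assumptions about your app):
// test/integration/00.setup.x.js
const { runServer, closeServer } = require('../../server');

before(async function () {
  this.timeout(10000);   // give the server time to boot
  await runServer();
});

after(async function () {
  await closeServer();
});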
I suggest using test (npm test) instead of mocha (npm run mocha) because it is the more standard test command name in a node project.
Hope it helps

How to check whether a test is running under AVA or in a normal env

I'm using AVA to run my tests. I use a config.js file at the root and I'd like it to be different when running unit tests. For example, the path to the database: ~/.app/data.db in production would be /tmp/<randomid>/data.db while testing. With mocha I'd just check for its existence, but here I can't.
I am using the NODE_ENV variable to load different configs for production, staging and development. Works like a charm.
Read this article to get you started with a nice setup: http://eng.datafox.co/nodejs/2014/09/28/nodejs-config-best-practices/
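A minimal config.js sketch along those lines (the 'test' branch, the temp-dir naming and the default paths are illustrative):
const fs = require('fs');
const os = require('os');
const path = require('path');

const env = process.env.NODE_ENV || 'development';

// use a throwaway database while testing, the real one otherwise
const dbPath = env === 'test'
  ? path.join(fs.mkdtempSync(path.join(os.tmpdir(), 'app-')), 'data.db')
  : path.join(os.homedir(), '.app', 'data.db');

module.exports = { env, dbPath };
Run the tests with NODE_ENV=test ava (or set NODE_ENV in your npm test script, if your runner doesn't set it for you) so the test branch is picked up.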

Separation of unit tests and integration tests

I'm interested in creating fully mocked unit tests, as well as integration tests that check if some async operation has returned correctly. I'd like one command for the unit tests and one for the integration tests, that way I can run them separately in my CI tools. What's the best way to do this? Tools like mocha and jest only seem to focus on one way of doing things.
The only option I see is using mocha and having two folders in a directory.
Something like:
__unit__
__integration__
Then I'd need some way of telling mocha to run all the __unit__ tests in the src directory, and another to tell it to run all the __integration__ tests.
Thoughts?
Mocha supports directories, file globbing and test name grepping which can be used to create "groups" of tests.
Directories
test/unit/whatever_spec.js
test/int/whatever_spec.js
Then run tests against all js files in a directory with
mocha test/unit
mocha test/int
mocha test/unit test/int
File Prefix
test/unit_whatever_spec.js
test/int_whatever_spec.js
Then run mocha against specific files with
mocha test/unit_*_spec.js
mocha test/int_*_spec.js
mocha
Test Names
Create outer blocks in mocha that describe the test type and class/subject.
describe('Unit::Whatever', function(){})
describe('Integration::Whatever', function(){})
Then run the named blocks with mocha's "grep" argument --grep/-g
mocha -g ^Unit::
mocha -g ^Integration::
mocha
It is still useful to keep the file or directory separation when using test names so you can easily differentiate the source file of a failing test.
package.json
Store each test command in your package.json scripts section so it's easy to run with something like yarn test:int or npm run test:int.
{
  "scripts": {
    "test": "mocha test/unit test/int",
    "test:unit": "mocha test/unit",
    "test:int": "mocha test/int"
  }
}
mocha does not support labels or categories. You understand correctly.
You must create two folders, unit and integration, and call mocha like this
mocha unit
mocha integration

Hooking up protractor E2E tests with node-replay

I've been messing around with node-replay (https://github.com/assaf/node-replay) to see if there is a way I can hook it up with my protractor tests to get my tests to run with recorded data (so they run quicker and not so damn slow).
I installed node-replay as instructed on the GitHub page. Then in my test file I include some node-replay code as follows:
describe('E2E: Checking Initial Content', function(){
  'use strict';
  var ptor;
  var Replay = require('replay');
  Replay.localhost('127.0.0.1:9000/');

  // keep track of the protractor instance
  beforeEach(function(){
    browser.get('http://127.0.0.1:9000/');
    ptor = protractor.getInstance();
  });
and my config file looks like this:
exports.config = {
  seleniumAddress: 'http://0.0.0.0:4444/wd/hub',

  // Capabilities to be passed to the webdriver instance.
  capabilities: {
    'browserName': 'chrome'
  },

  // Spec patterns are relative to the current working directory when
  // protractor is called.
  specs: ['test/e2e/**/*.spec.js'],

  // Options to be passed to Jasmine-node.
  jasmineNodeOpts: {
    showColors: true,
    defaultTimeoutInterval: 300000
  }
};
Then I try to run my tests with grunt by saying
REPLAY=record grunt protractor
But I get tons of failures. grunt protractor was running all of my tests fine with no failures before I added node-replay, so maybe my logic is flawed in how to connect these two together. Any suggestions as to what I'm missing?
1) E2E: Sample test 1
Message:
UnknownError:
Stacktrace:
UnknownError:
at <anonymous>
The problem is that HTTP requests to 127.0.0.1:9000 are made by the browser, not within your NodeJS Protractor code, so replay won't work in this infrastructure scenario.
There is ongoing discussion on Protractor Tests without a Backend here, and some folks rely on mocking the backend client-side with Protractor's addMockModule, in a similar way to what they already do for Karma unit tests.
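For reference, that client-side mocking approach looks roughly like this sketch (it uses ngMockE2E, so angular-mocks must be loadable in the app under test; the module name and endpoints are illustrative):
// registered in a spec or in onPrepare; Protractor injects it into the page
browser.addMockModule('httpMocks', function () {
  angular.module('httpMocks', ['ngMockE2E']).run(function ($httpBackend) {
    // stub the API calls the page makes
    $httpBackend.whenGET(/\/api\/items/).respond(200, [{ id: 1, name: 'stub' }]);
    // let everything else (templates, assets) pass through
    $httpBackend.whenGET(/.*/).passThrough();
  });
});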
Personally I don't agree with mocking for e2e since the whole point of end-to-end was to test the whole real app.
HTTP replay may not be such a bad idea to make things go faster.
Ideally what I hoped to find was a tool that works like this:
Run a proxy capture http server the first time for later replay:
capture 127.0.0.1:9000 --into-port 3333
Run your e2e tests against baseUrl = '127.0.0.1:3333'. All requests/responses will be cached/saved.
Serve the cached content from now on:
replay --at-port 3333
Run your e2e tests again, still on baseUrl port 3333. This time it should run faster since it's serving cached content.
Couldn't find it, let me know if you have better luck!
