I'm a newbie with Node.js. I have to set up some tests in my application, and I'm getting really frustrated trying to generate a back-end code coverage report with mocha and istanbul in my loopback application.
Searching through thousands of badly explained articles on GitHub, I found some good ones, and then I figured out that I had to use something like this:
istanbul cover _mocha -- [path/to/test/files] -R spec
I was happy because one of them said: "What you are essentially doing is passing the command to run your tests to Istanbul which, in turn, will run those tests on your behalf." However, every time I try to run Istanbul, I get this error:
No coverage information was collected, exit without writing coverage information
C:\...\proj-name\node_modules\.bin\_mocha:2
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
^^^^^^^
SyntaxError: missing ) after argument list
My working test file is:
var userService = require('../TestBusinessLogic.js');
var should = require('chai').should();
describe('API Utenti', function() {
    it('should throw Exception on missing UserName', function() {
        (function() {
            userService({ Name: 'Pippo', Surname: 'Baudo' });
        }).should.Throw(Error);
    });
});
Is this the right command to use? If not, could someone please explain to me how to generate a coverage report using istanbul with mocha?
I figured out that I was running node_modules\.bin\_mocha instead of node_modules\mocha\bin\_mocha; switching to the latter solved my problem.
When running istanbul from the command line, you need to run it from the root of your project directory; by default it looks for the files to report coverage on relative to that directory.
Additionally, make sure the path to your test folder is relative to your project directory.
So navigate to your project directory using cd, and once inside it, run
istanbul cover _mocha -- ./path-to/test.js -R spec
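If you run this often, one option is to wrap it in your package.json scripts. A minimal sketch (the test path is a placeholder, and node_modules/mocha/bin/_mocha is spelled out to sidestep the .bin shim problem mentioned above):
{
  "scripts": {
    "test": "mocha ./path-to/test.js -R spec",
    "coverage": "istanbul cover node_modules/mocha/bin/_mocha -- ./path-to/test.js -R spec"
  }
}
You can then run npm run coverage; istanbul writes its report to ./coverage by default.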
Related
I have a JS project that provides a set of endpoints leveraging Express with a typical express/router pattern.
const express = require('express');
const router = new express.Router();
router.post('/', async (req, res, next) => { });
router.get('/:abc', async (req, res, next) => { });
module.exports = router;
I can successfully start the server with npm start which calls node ./src/index.js and makes the endpoints available at https://localhost:8080
I can also successfully test these endpoints utilizing a tool like Postman or automation like Karate.
The problem I'm having is that I can't seem to collect code coverage using Istanbul when exercising the product source JS through http://localhost:8080.
I've tried npm start followed by nyc --all src/**/*.js gradlew test, the latter being automation that tests the endpoints. This resulted in 0% coverage, which I'm assuming was due to not running nyc with npm start.
Next I tried nyc --all src/**/*.js npm start and noticed some coverage, but this was just coverage from starting the Express server.
Next I tried nyc --all src/**/*.js npm start followed by gradlew test and noticed the code coverage results were the same as when no endpoint tests were run.
Next I tried putting the prior two commands into a single JS script (myscript.js), running each asynchronously so that the Express server was started before the gradle tests began, and ran nyc --all src/**/*.js myscript.js. The results were the same as in my previous trial, where only npm start received code coverage.
Next I tried nyc --all src/**/*.js npm start followed by nyc --all src/**/*.js --no-clean gradlew test and noticed the code coverage results were the same as when no endpoint tests were run.
Next I tried all of the attempts above wrapped in package.json scripts, running them with npm run <scriptName>, and got exactly the same behavior.
Finally I tried nyc instrument src instrumented/src --compact=false followed by npm run start:coverage (a script that runs the instrumented entrypoint via node ./instrumented/src/index.js), then gradlew test, then nyc report --reporter=lcov. This attempt also failed to produce any additional code coverage from the gradlew endpoint tests.
Doing some research online I came across this post
How do I setup code coverage on my Express based API?
It looked eerily similar to my problem: Istanbul doesn't know how to cover code when it is exercised by calling endpoints over HTTP.
I decided to post this anyway, since the above post is quite old, and I wanted to get opinions and see if there is a better solution than
https://github.com/gotwarlost/istanbul-middleware
EDIT
Adding more specifics about how we start the Express server and run automation without Istanbul today, just to clarify what we're working with and which automation tools we're invested in (mainly Karate and Java).
/*
calls --> node -r dotenv/config src/index.js
*/
npm start
/*
calls --> gradlew clean test
this effectively calls a tool called Karate
Karate's base url is pointed to: https://localhost:8080
Karate tests execute endpoints on that base url
This would be akin to using Postman however Karate has quite a bit of configuration options
https://github.com/intuit/karate
*/
npm test
Through many hours of investigation we've managed to solve this. The project posted earlier by @balexandre has been updated to illustrate how to do this.
https://github.com/kirksl/karate-istanbul
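For anyone who wants the gist without digging through that repo, the general approach is roughly the following (a rough sketch only, assuming a unix-like shell and control over the server entrypoint; the port, paths, and pid handling are placeholders, not the exact scripts from that project): start the server under nyc, run the external tests against it, shut the server down gracefully so nyc can flush its data, then build the report.
# 1. start the app under nyc; coverage is only written out when this process exits
nyc --silent node src/index.js &
SERVER_PID=$!

# 2. exercise the endpoints from the outside (Karate via gradle, Postman, etc.)
gradlew test

# 3. stop the server gracefully so nyc can write coverage to .nyc_output
kill -SIGTERM $SERVER_PID

# 4. build the report from the collected data
nyc report --reporter=lcov
For this to work the server has to exit cleanly; a handler along these lines (an assumption about your entrypoint, not code from the repo) is enough:
// in src/index.js -- a clean exit on SIGTERM lets nyc record the coverage it collected
const server = app.listen(8080);
process.on('SIGTERM', () => server.close(() => process.exit(0)));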
As said in the comments, you never start your server to run the tests... the tests will point at your server when you require the server file.
In my example, I'm running mocha with chai, and the chai-http package helps to call the server.
server.js
const app = require("express")();
// everything else ...
exports.server = app;
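The file that actually binds a port can live separately, so the tests only ever require the app without opening a listener. A minimal sketch (the file name index.js and port 3000 are assumptions):
// index.js - the only place that actually listens on a port
const server = require("./server.js").server;

server.listen(3000, () => console.log("listening on 3000"));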
In your end-to-end tests, you can easily have:
const chai = require('chai');
const chaiHttp = require('chai-http');
chai.use(chaiHttp);
const server = require("./server.js").server;
...
it("should calculate the circumference", done => {
chai
.request(server) // <-- attach your server here
.get('/v1/circumference/10')
.end((err, res) => {
expect(res.status).to.be.eql(200);
expect(res.type).to.be.eql('application/json');
expect(res.body.result).to.eql(62.83185307179586);
done();
});
});
});
I've made a very simple project and pushed it to GitHub so you can check out and run everything, in order to see how it all works together.
GitHub Project
Added
I've added a route so it can show the coverage report (I used the HTML report) and created a static route to it ...
When you run the coverage script, npm run coverage, it will generate the report inside the ./report folder, and a simple Express static route pointed at that folder lets you view it as an endpoint.
commit info for that change
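For reference, the shape of that setup is roughly as follows (a sketch only; the script name, report folder, and route path are assumptions rather than the exact code in that commit):
// package.json scripts (assumption: nyc + mocha, writing an HTML report to ./report)
//   "coverage": "nyc --reporter=html --report-dir=./report mocha"

// server.js - serve the generated report folder as a static endpoint
const express = require("express");
const app = express();

app.use("/coverage", express.static("./report")); // browse the HTML report at /coverage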
I'm trying to get a coverage report for the API code. I have all the test cases running perfectly in mocha. My problem is that my API server and the test cases live in separate repositories.
I start my node API server on localhost on a particular port, and then, using supertest in mocha, hit the localhost URL to test the server's response.
Can you suggest the best way to generate a coverage report for those APIs?
Testing env
If you want to get coverage, supertest should be able to bootstrap the app server, like in the express example.
The drawback is that you must not run your tests against a running server, like
var api = request('http://127.0.0.1:8080');
but you must include your app entrypoint to allow supertest to start it like
var app = require('../yourapp');
var api = request(app);
Of course, this may (or may not) require a bit of refactoring of your app bootstrap process.
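A common shape for that refactor (a sketch, assuming your entrypoint currently calls app.listen unconditionally; the file name and port are placeholders) is to export the app and only listen when the file is run directly:
// yourapp.js
var express = require('express');
var app = express();

// routes, middleware, etc.

if (require.main === module) {
    // started with `node yourapp.js` -- bind the port as before
    app.listen(8080);
}

// required by supertest in the tests; no port is opened in that case
module.exports = app;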
As other options, you can use node's CLI debugging capabilities or node-inspector.
Coverage setup
This assumes you are willing to install istanbul alongside mocha to get coverage:
npm install -g istanbul
then
istanbul cover mocha --root <path> -- --recursive <test-path>
cover is the command used to generate code coverage
mocha is the executable js file used to run tests
--root <path> the root path to look for files to instrument (aka the "source files")
-- is used to pass arguments to your test runner
--recursive <test-path> the root path to look for test files
You can then add --include-all-sources to get cover info on all your source files.
In addition you can get more help running
istanbul help cover
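For example, with source files under lib/ and tests under test/ (placeholder paths, not a fixed convention), the full call might look like:
istanbul cover mocha --root lib --include-all-sources -- --recursive test
The coverage report is written to ./coverage by default.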
I'm interested in creating fully mocked unit tests, as well as integration tests that check whether some async operation has returned correctly. I'd like one command for the unit tests and one for the integration tests, so I can run them separately in my CI tools. What's the best way to do this? Tools like mocha and jest only seem to focus on one way of doing things.
The only option I see is using mocha and having two folders in a directory.
Something like:
__unit__
__integration__
Then I'd need some way of telling mocha to run all the __unit__ tests in the src directory, and another to tell it to run all the __integration__ tests.
Thoughts?
Mocha supports directories, file globbing, and test-name grepping, which can be used to create "groups" of tests.
Directories
test/unit/whatever_spec.js
test/int/whatever_spec.js
Then run tests against all js files in a directory with
mocha test/unit
mocha test/int
mocha test/unit test/int
File Prefix
test/unit_whatever_spec.js
test/int_whatever_spec.js
Then run mocha against specific files with
mocha test/unit_*_spec.js
mocha test/int_*_spec.js
mocha
Test Names
Create outer blocks in mocha that describe the test type and class/subject.
describe('Unit::Whatever', function(){})
describe('Integration::Whatever', function(){})
Then run the named blocks with mocha's "grep" argument, --grep/-g
mocha -g ^Unit::
mocha -g ^Integration::
mocha
It is still useful to keep the file or directory separation when using test names so you can easily differentiate the source file of a failing test.
package.json
Store each test command in your package.json scripts section so it's easy to run with something like yarn test:int or npm run test:int.
{
"scripts": {
"test": "mocha test/unit test/int",
"test:unit": "mocha test/unit",
"test:int": "mocha test/int"
}
}
Mocha does not support labels or categories; you understand correctly.
You must create two folders, unit and integration, and call mocha like this:
mocha unit
mocha integration
I followed the sails.js testing example at http://sailsjs.org/#!/documentation/concepts/Testing. I got it to run, but it only runs one test. The command in package.json scripts.test is:
mocha test/bootstrap.test.js test/unit/**/*.test.js
"test/unit/**/*.test.js" should be catching 2 tests, UsersControllers.test.js and Users.test.js . It is only running Users.test.js . And yes, both tests are in the test/unit/ directory.
What am I doing wrong here ?
After copying the documented test cases from the sails.js docs, make sure to change the following line in the Users.test.js file
describe.only('UsersModel', function() {
to
describe('UsersModel', function() {
then you should see the controller test as well.
I'm trying to do something simple, but it's not working... I must be doing something dumb.
I am using Istanbul with Mocha for code coverage + unit testing.
The code being tested uses functions from modules that are require'd, and I want those imported modules to be included in the code coverage - but they're not.
I am explicitly including a library via require with a full path to it (it is not in the same dir as the test case):
var d = require(srcroot + '/scripting/wf_daemon/daemon_lib');
And then later, the test case makes a call to a function in that module, startWorkflow:
d.startWorkflow(workflow, function (msg) { /* do something */ })
However, Istanbul does not go into the referenced function startWorkflow; it only gives me coverage for the test file.
What I need is code coverage to extend into all functions from the modules require'd by the test case.
I am calling Istanbul like this:
istanbul cover --include-all-source --dir C:\Build\buildarea --print none "C:\Program Files\nodejs\node_modules\mocha/bin/_mocha" -- --reporter mocha-teamcity-reporter ./test.js
Is there any way to get Istanbul to instrument the files which are not in the directory (or subdirectories) where the test case resides? What simple mistake am I making?
Cheers!
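One thing worth trying, based on the --root and --include-all-sources options described in the earlier answer on this page (a sketch only, reusing the paths from the question; pointing --root at your source root is the key part), would be:
istanbul cover --root <srcroot> --include-all-sources --dir C:\Build\buildarea --print none "C:\Program Files\nodejs\node_modules\mocha/bin/_mocha" -- --reporter mocha-teamcity-reporter ./test.js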