Mocha/chai coverage shows "Unknown" - node.js

Hi all, I am trying to get a unit test coverage report for my Mocha/Chai tests.
The test results come back as passed.
I also generated the coverage report as HTML, but it shows "Unknown" instead of my files.
In package.json I added the below config to scripts:
"scripts": {
"test": "mocha",
"test-with-coverage": "nyc --reporter=html mocha"
},
And I use the below command to run the tests.
npm run test-with-coverage
Edit:
When I change the reporter to text, I get the below report in the terminal.
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
----------|---------|----------|---------|---------|-------------------

Please try the below command:
nyc -x "**/tests/**" --reporter=cobertura --reporter=html mocha 'your test folder path'
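If every file still reports 0%, the usual cause is that nyc never instruments your source files at all, so there is nothing to attribute coverage to. As a minimal sketch (assuming your sources live under lib/; adjust the globs to your own layout), an explicit .nycrc.json makes the include/exclude rules visible:

{
  "all": true,
  "include": ["lib/**/*.js"],
  "exclude": ["**/tests/**"],
  "reporter": ["text", "html"]
}

With "all": true, nyc also lists files that no test ever loads, which makes it obvious whether the problem is instrumentation or the tests simply never requiring the code.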

Related

istanbul nyc fails to detect any files when running ava test suite

I started trying to integrate nyc/istanbul into my ava test suites:
./node_modules/.bin/nyc --include modules/todo.js --es-modules ./node_modules/.bin/ava
1 tests passed
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
----------|---------|----------|---------|---------|-------------------
The output fails to list any of the files!
If I add the --all flag I get the following even though the tests cover both functions:
ERROR: Coverage for lines (0%) does not meet global threshold (90%)
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
todo.js | 0 | 0 | 0 | 0 | 6-30
----------|---------|----------|---------|---------|-------------------
So it looks as if the output from the ava testing tool is not being picked up by the nyc coverage tool.
The simple setup is as follows:
/* todo.js (uses ES6 modules) */
'use strict'

let data = []

function clear() {
  data = []
}

function add(item, qty = 1) {
  qty = Number(qty)
  if (isNaN(qty)) throw new Error('qty must be a number')
  data.push({item: item, qty: qty})
  return true
}

export { add, clear }
/* todo.spec.js */
import { add, clear } from './modules/todo.js'
import test from 'ava'

test('add a single item', test => {
  clear()
  const ok = add('bread')
  test.truthy(ok)
})
I tried creating a .nycrc.json file but this didn't help:
{
  "check-coverage": true,
  "include": [
    "modules/todo.js"
  ],
  "reporter": ["text", "html"]
}
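No answer was posted for this one, but the symptom is consistent with how nyc works: it can only attribute coverage to files that pass through its module-loading hook, and ava both runs tests in separate worker processes and loads ES modules through its own pipeline, so todo.js may never pass through nyc at all. The --all output (todo.js at 0%, lines 6-30 uncovered) supports that reading: nyc knows the file exists but never sees it being loaded. One commonly suggested setup from that era (an assumption, not verified against this exact project) is to let ava load the esm shim so the module comes in through a require-based path that nyc can instrument:

// package.json (fragment); the esm package and these keys are assumptions
"ava": {
  "require": ["esm"]
},
"nyc": {
  "include": ["modules/todo.js"]
}

and then run the suite with ./node_modules/.bin/nyc ./node_modules/.bin/ava instead of passing --include on the command line.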

Getting an error when releasing a beta version with fastlane

When I release a beta of my React Native project with fastlane's build_app, I have this problem.
platform :ios do
  desc "Push a new beta build to TestFlight"
  lane :beta do
    increment_build_number(xcodeproj: "TestApp.xcodeproj")
    build_app(
      workspace: "TestApp.xcworkspace",
      scheme: "TestApp",
      include_bitcode: true)
  end
end
[14:13:21]: Error packaging up the application
+------+------------------------+-------------+
| fastlane summary |
+------+------------------------+-------------+
| Step | Action | Time (in s) |
+------+------------------------+-------------+
| 1 | default_platform | 0 |
| 2 | increment_build_number | 1 |
| 💥 | build_app | 366 |
+------+------------------------+-------------+
[14:13:21]: fastlane finished with errors
[!] Error packaging up the application
Could someone help me with this one?
In my case, I had added a new distribution certificate while the old one was still in my keychain. I removed the duplicate distribution certificate and the build worked. It may not be your solution, but it is worth checking.
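If the certificates are not the culprit, it also helps to run the lane with fastlane beta --verbose and to read the raw xcodebuild log whose path gym prints when packaging fails (by default under ~/Library/Logs/gym); the underlying signing or export error there is usually more specific than "Error packaging up the application".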

Applying jest coverageThreshold to aggregated report

Problem
I have a Node project with 2 different jest runs that I perform through different npm tasks. They both use the same jest.config.js, which has a declaration for coverageThreshold. I want to apply that threshold to the combined output of the jest runs.
How can I apply a jest coverage threshold to the combined output of multiple test runs?
Background
For reference, see the code for the sample project described below.
Sample project
To illustrate the problem, assume the project has a single file that looks like this:
Code under test
// src/index.js
function run(cond) {
  if (cond) {
    return "foo";
  }
  return "bar";
}

module.exports = run;
There are also 2 tests in the project:
Test 1
// src/__tests__/index.group1.test.js
var run = require("../index");

describe("when cond is true", () => {
  it("should return 'foo'", () => {
    expect(run(true)).toEqual("foo");
  });
});
Test 2
// src/__tests__/index.group2.test.js
var run = require("../index");

describe("when cond is false", () => {
  it("should return 'bar'", () => {
    expect(run(false)).toEqual("bar");
  });
});
The test coverage npm scripts in package.json look like this:
"test:group1:coverage": "jest --testPathPattern=group1 --coverage",
"test:group2:coverage": "jest --testPathPattern=group2 --coverage"
Running npm run test:group1:coverage generates:
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 75 | 50 | 100 | 75 | |
index.js | 75 | 50 | 100 | 75 | 5 |
----------|----------|----------|----------|----------|-------------------|
Running npm run test:group2:coverage generates:
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 75 | 50 | 100 | 75 | |
index.js | 75 | 50 | 100 | 75 | 3 |
----------|----------|----------|----------|----------|-------------------|
Combining coverage reports
Based on this post, I can have a separate npm task like this ...
"test:coverage": "npm run test:group1:coverage && mv ./coverage/coverage-final.json ./coverage/coverage-group1-final.json && npm run test:group2:coverage && mv ./coverage/coverage-final.json ./coverage/coverage-group2-final.json && node ./scripts/map-coverage.js"
... to generate a combined coverage report that looks like:
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 100 | 100 | 100 | 100 | |
index.js | 100 | 100 | 100 | 100 | |
----------|----------|----------|----------|----------|-------------------|
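For context, the merge script itself is not shown in this post. A minimal sketch of what ./scripts/map-coverage.js might look like, assuming the istanbul-lib-coverage, istanbul-lib-report, and istanbul-reports packages (v3-style APIs) and the renamed files from the npm task above:

// scripts/map-coverage.js; an illustrative sketch, not the script from the linked post
const fs = require("fs");
const libCoverage = require("istanbul-lib-coverage");
const libReport = require("istanbul-lib-report");
const reports = require("istanbul-reports");

// Merge the per-run JSON coverage maps into a single map.
const map = libCoverage.createCoverageMap({});
[
  "./coverage/coverage-group1-final.json",
  "./coverage/coverage-group2-final.json",
].forEach((file) => map.merge(JSON.parse(fs.readFileSync(file, "utf8"))));

// Render the combined map with the standard text reporter.
const context = libReport.createContext({ dir: "./coverage", coverageMap: map });
reports.create("text").execute(context);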
It's all good until this point. The problem arises when trying to apply a coverage threshold to the combined run.
Issue
Now, if I add coverageThreshold to the jest.config.js like this ...
module.exports = {
  coverageThreshold: {
    global: {
      branches: 90,
      functions: 90,
      lines: 90,
      statements: 90,
    },
  },
};
... and run npm run test:coverage, the tests fail ...
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 75 | 50 | 100 | 75 | |
index.js | 75 | 50 | 100 | 75 | 5 |
----------|----------|----------|----------|----------|-------------------|
Jest: "global" coverage threshold for statements (90%) not met: 75%
Jest: "global" coverage threshold for branches (90%) not met: 50%
Jest: "global" coverage threshold for lines (90%) not met: 75%
... because it first runs the group1 tests, which on their own fail to meet the coverage threshold set in jest.config.js. I want to ignore this threshold for the individual runs and only apply it to the combined report.
Note
I know I could split the jest.config.js file into separate ones and apply individual coverage thresholds per test group, but I don't want to manage thresholds individually per test group. I want to measure/apply it to the project as a whole, after all individual test runs have been completed and combined.
Instead of combining the tasks, combine only the test path pattern.
You can create a single task for group1 and group2, something like below:
"test:commongroup:coverage": "jest --testPathPattern=commonGroup --coverage",
where commonGroup is a pattern that matches both group1 and group2.
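Since --testPathPattern takes a regular expression, the two existing groups can also be matched directly, without renaming any test files. A minimal sketch (the script name is illustrative):

"test:combined:coverage": "jest --testPathPattern='group(1|2)' --coverage"

Because this is a single jest process, coverage is aggregated across both groups before coverageThreshold is checked, which gives exactly the combined-report behavior asked for.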

SCons does not find file it should build itself

I have a simple SConstruct file to build the google test library with MinGW:
env = Environment(platform='posix') # necessary to use gcc and not MS
env.Append(CPPPATH=['googletest/'])
env.Append(CCFLAGS=[('-isystem', 'googletest/include/'), '-pthread'])
obj = env.Object(source='googletest/src/gtest-all.cc')
# linking skipped due to error search
# env.Append(LINKFLAGS=['-rv'])
# bin = env.StaticLibrary(target='libgtest', source=[obj])
The script resides in the main googletest\ folder. When running it - with or without linking - the output is this:
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
g++ -o googletest\src\gtest-all.o -c -isystem googletest/include/ -pthread -Igoogletest googletest\src\gtest-all.cc
scons: *** [googletest\src\gtest-all.o] The system cannot find the file specified
+-.
+-googletest
| +-googletest\src
| +-googletest\src\gtest-all.cc
| +-googletest\src\gtest-all.o
| | +-googletest\src\gtest-all.cc
| | +-googletest\src\gtest-death-test.cc
| | +-googletest\src\gtest-filepath.cc
| | +-googletest\src\gtest-port.cc
| | +-googletest\src\gtest-printers.cc
| | +-googletest\src\gtest-test-part.cc
| | +-googletest\src\gtest-typed-test.cc
| | +-googletest\src\gtest.cc
| | +-googletest\src\gtest-internal-inl.h
| +-googletest\src\gtest-death-test.cc
| +-googletest\src\gtest-filepath.cc
| +-googletest\src\gtest-internal-inl.h
| +-googletest\src\gtest-port.cc
| +-googletest\src\gtest-printers.cc
| +-googletest\src\gtest-test-part.cc
| +-googletest\src\gtest-typed-test.cc
| +-googletest\src\gtest.cc
| +-googletest\src\libgtest-all.a
| +-googletest\src\gtest-all.o
| +-googletest\src\gtest-all.cc
| +-googletest\src\gtest-death-test.cc
| +-googletest\src\gtest-filepath.cc
| +-googletest\src\gtest-port.cc
| +-googletest\src\gtest-printers.cc
| +-googletest\src\gtest-test-part.cc
| +-googletest\src\gtest-typed-test.cc
| +-googletest\src\gtest.cc
| +-googletest\src\gtest-internal-inl.h
+-SConstruct
scons: building terminated because of errors.
I also tried to build the library in one line: env.StaticLibrary(source='googletest/src/gtest-all.cc') - the result is the same.
Just executing the actual g++ call by hand gives me the object file I want.
What confuses me is that SCons should see the object file as an artifact it creates itself. I am wondering why it tries to use the file before it is finished. So what am I missing here? How can I make SCons wait until the compiling is done?
BTW: I only have some experience in using SCons and have tweaked a script once in a while, but I do not really have profound knowledge of it.
Versions used: SCons 3.0.1, Python 3.6.3, MinGW 7.3.0
Does this work?
env = Environment(tools=['mingw','gnulink','ar']) # You should specify the tools
env.Append(CPPPATH=['googletest/'])
env.Append(CCFLAGS=[('-isystem', 'googletest/include/'), '-pthread'])
obj = env.Object(source='googletest/src/gtest-all.cc')
# linking skipped due to error search
# env.Append(LINKFLAGS=['-rv'])
# bin = env.StaticLibrary(target='libgtest', source=[obj])
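If it does, my suspicion is that platform='posix' makes SCons spawn commands and track build artifacts using POSIX conventions even though everything actually runs through the Windows shell, so SCons loses track of the object file g++ just wrote. Passing tools=['mingw', 'gnulink', 'ar'] keeps the native Windows platform while still selecting gcc/g++ for compiling and linking.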

How to include modules for code coverage for unit testing?

My assumption is that any module tested using Intern will automatically be covered by Istanbul's code coverage. For reasons unknown to me, my module is not being included.
I am:
running Intern 1.6.2 (installed with npm locally)
testing NodeJS code
using callbacks, not promises
using CommonJS modules, not AMD modules
Directory Structure (only showing relevant files):
plister
|
|--libraries
| |--file-type-support.js
|
|--tests
| |--intern.js
| |--unit
| |--file-type-support.js
|
|--node_modules
|--intern
plister/tests/intern.js
define({
  useLoader: {
    'host-node': 'dojo/dojo'
  },
  loader: {
    packages: [
      {name: 'libraries', location: 'libraries'}
    ]
  },
  reporters: ['console'],
  suites: ['tests/unit/file-type-support'],
  functionalSuites: [],
  excludeInstrumentation: /^(tests|node_modules)\//
});
plister/tests/unit/file-type-support.js
define([
  'intern!bdd',
  'intern/chai!expect',
  'intern/dojo/node!fs',
  'intern/dojo/node!path',
  'intern/dojo/node!stream-equal',
  'intern/dojo/node!../../libraries/file-type-support'
], function (bdd, expect, fs, path, streamEqual, fileTypeSupport) {
  'use strict';

  bdd.describe('file-type-support', function doTest() {
    bdd.it('should show that the example output.plist matches the ' +
        'temp.plist generated by the module', function () {
      var deferred = this.async(),
        input = path.normalize('tests/resources/input.plist'),
        output = path.normalize('tests/resources/output.plist'),
        temporary = path.normalize('tests/resources/temp.plist');

      // Test the deactivate function by checking the output produced by
      // the function against the expected test output.
      fileTypeSupport.deactivate(fs.createReadStream(input),
        fs.createWriteStream(temporary),
        deferred.rejectOnError(function onFinish() {
          streamEqual(fs.createReadStream(output),
            fs.createReadStream(temporary),
            deferred.callback(function checkEqual(error, equal) {
              expect(equal).to.be.true;
            }));
        }));
    });
  });
});
Output:
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (29ms)
1/1 tests passed
1/1 tests passed
Output (on failure):
FAIL: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (30ms)
AssertionError: expected true to be false
AssertionError: expected true to be false
0/1 tests passed
0/1 tests passed
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0
Output (after removing excludeInstrumentation):
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (25ms)
1/1 tests passed
1/1 tests passed
------------------------------------------+-----------+-----------+-----------+-----------+
File | % Stmts |% Branches | % Funcs | % Lines |
------------------------------------------+-----------+-----------+-----------+-----------+
node_modules/intern/ | 70 | 50 | 100 | 70 |
chai.js | 70 | 50 | 100 | 70 |
node_modules/intern/lib/ | 79.71 | 42.86 | 72.22 | 79.71 |
Test.js | 79.71 | 42.86 | 72.22 | 79.71 |
node_modules/intern/lib/interfaces/ | 80 | 50 | 63.64 | 80 |
bdd.js | 100 | 100 | 100 | 100 |
tdd.js | 76.19 | 50 | 55.56 | 76.19 |
node_modules/intern/lib/reporters/ | 56.52 | 35 | 57.14 | 56.52 |
console.js | 56.52 | 35 | 57.14 | 56.52 |
node_modules/intern/node_modules/chai/ | 37.9 | 8.73 | 26.38 | 39.34 |
chai.js | 37.9 | 8.73 | 26.38 | 39.34 |
tests/unit/ | 100 | 100 | 100 | 100 |
file-type-support.js | 100 | 100 | 100 | 100 |
------------------------------------------+-----------+-----------+-----------+-----------+
All files | 42.14 | 11.35 | 33.45 | 43.63 |
------------------------------------------+-----------+-----------+-----------+-----------+
My module passes the test and I can make it fail too. It just will not show up in the code coverage. I worked through the tutorial hosted on GitHub without any problems.
I tried dissecting the Istanbul and Intern dependencies. I placed a console.log at the point where the files to be covered seem to pass through, but my module never comes past it. I have also tried every variation of deferred.callback and deferred.rejectOnError with no difference to the code coverage.
Also, any feedback on my use of deferred.callback and deferred.rejectOnError will be greatly appreciated. I am still a little uncertain about their usage.
Thanks!
As of Intern 1.6, only require('vm').runInThisContext is hooked to add code coverage data, not require. Instrumentation of require was added in Intern 2.0.
The use of callback/rejectOnError in the above code is correct.
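That matches the coverage table above: the AMD test module (which the Dojo loader evaluates via runInThisContext) shows up at 100%, while the library pulled in through intern/dojo/node!, which delegates to Node's require, is never instrumented. Upgrading to Intern 2.0, where require is hooked as well, is the most direct fix.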
