istanbul/nyc fails to detect any files when running ava test suite - node.js

I have started trying to integrate nyc/istanbul into my ava test suite:
./node_modules/.bin/nyc --include modules/todo.js --es-modules ./node_modules/.bin/ava
1 tests passed
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
----------|---------|----------|---------|---------|-------------------
The output fails to list any of the files!
If I add the --all flag I get the following even though the tests cover both functions:
ERROR: Coverage for lines (0%) does not meet global threshold (90%)
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
todo.js | 0 | 0 | 0 | 0 | 6-30
----------|---------|----------|---------|---------|-------------------
So it looks as if the coverage data from the ava test run is not being picked up by the nyc coverage tool.
The simple setup is as follows:
/* todo.js (uses ES6 modules) */
'use strict'

let data = []

function clear() {
  data = []
}

function add(item, qty = 1) {
  qty = Number(qty)
  if (isNaN(qty)) throw new Error('qty must be a number')
  data.push({item: item, qty: qty})
  return true
}

export { add, clear }
/* todo.spec.js */
import { add, clear } from './modules/todo.js'
import test from 'ava'

test('add a single item', t => {
  clear()
  const ok = add('bread')
  t.truthy(ok)
})
I tried creating a .nycrc.json file but this didn't help:
{
  "check-coverage": true,
  "include": [
    "modules/todo.js"
  ],
  "reporter": ["text", "html"]
}
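For what it's worth, the commonly documented recipe for covering Babel-compiled ES modules (an assumption that it applies to this ava setup) is to let babel-plugin-istanbul do the instrumentation and tell nyc to skip its own, roughly:

{
  "check-coverage": true,
  "include": ["modules/todo.js"],
  "reporter": ["text", "html"],
  "instrument": false,
  "sourceMap": false
}

with babel-plugin-istanbul added to the Babel plugins applied to the code under test. Alternatively, c8 collects V8's built-in coverage and tends to handle ES modules without a separate instrumentation step.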

Related

Mocha-chai coverage shows Unknown

Hi all, I am trying to get the mocha/chai unit test coverage report.
The tests pass.
I also generated the coverage report as HTML, but it shows Unknown. Please see the image below.
In package.json I added the below config to scripts:
"scripts": {
"test": "mocha",
"test-with-coverage": "nyc --reporter=html mocha"
},
And I am using the below command to run the tests:
npm run test-with-coverage
Edit:
When I change the reporter to text, I get the below report in the terminal:
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
----------|---------|----------|---------|---------|-------------------
Please try the below command:
nyc -x "**/tests/**" --reporter=cobertura --reporter=html mocha 'your test folder path'
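A report that stays at 0% for all files usually means nyc never matched any source files to the coverage it collected. One thing worth trying (an assumption about the project layout, with sources under src/) is to name the sources explicitly and include unrequired files with --all:

nyc --all --include 'src/**/*.js' --reporter=text mocha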

Unable to connect with the pymqi client, facing FAILED: MQRC_ENVIRONMENT_ERROR

I am getting the below error while connecting to IBM MQ using the pymqi library.
It's a clustered MQ channel.
Traceback (most recent call last):
File "postToQueue.py", line 432, in <module>
qmgr = pymqi.connect(queue_manager, channel, conn_info)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 2608, in connect
qmgr.connect_tcp_client(queue_manager or '', CD(), channel, conn_info, user, password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1441, in connect_tcp_client
self.connect_with_options(name, cd, user=user, password=password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1423, in connect_with_options
raise MQMIError(rv[1], rv[2])
pymqi.MQMIError: MQI Error. Comp: 2, Reason 2012: FAILED: MQRC_ENVIRONMENT_ERROR
Please see my code below.
queue_manager = 'queue manager name here'
channel = 'channel name here'
host ='host-name here'
port = '2333'
queue_name = 'queue name here'
message = 'my message here'
conn_info = '%s(%s)' % (host, port)
print(conn_info)
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
queue.put(message)
print("message sent")
queue.close()
qmgr.disconnect()
I am getting the error at the line below:
qmgr = pymqi.connect(queue_manager, channel, conn_info)
I added the IBM client to the Scripts folder as well. I am using Windows 10, Python 3.8.1, and the IBM MQ 9.1 Windows client installation image. Below is the FDC header:
+-----------------------------------------------------------------------------+
| |
| WebSphere MQ First Failure Symptom Report |
| ========================================= |
| |
| Date/Time :- Tue January 28 2020 16:27:51 Eastern Standard Time |
| UTC Time :- 1580246871.853000 |
| UTC Time Offset :- -300 (Eastern Standard Time) |
| Host Name :- CA-LDLD0SQ2 |
| Operating System :- Windows 10 Enterprise x64 Edition, Build 17763 |
| PIDS :- 5724H7251 |
| LVLS :- 8.0.0.11 |
| Product Long Name :- IBM MQ for Windows (x64 platform) |
| Vendor :- IBM |
| O/S Registered :- 0 |
| Data Path :- C:\Python\Scripts\IBM |
| Installation Path :- C:\Python |
| Installation Name :- MQNI08000011 (126) |
| License Type :- Unknown |
| Probe Id :- XC207013 |
| Application Name :- MQM |
| Component :- xxxInitialize |
| SCCS Info :- F:\build\slot1\p800_P\src\lib\cs\amqxeida.c, |
| Line Number :- 5085 |
| Build Date :- Dec 12 2018 |
| Build Level :- p800-011-181212.1 |
| Build Type :- IKAP - (Production) |
| UserID :- alekhya.machiraju |
| Process Name :- C:\Python\python.exe |
| Arguments :- |
| Addressing mode :- 32-bit |
| Process :- 00010908 |
| Thread :- 00000001 |
| Session :- 00000001 |
| UserApp :- TRUE |
| Last HQC :- 0.0.0-0 |
| Last HSHMEMB :- 0.0.0-0 |
| Last ObjectName :- |
| Major Errorcode :- xecF_E_UNEXPECTED_SYSTEM_RC |
| Minor Errorcode :- OK |
| Probe Type :- INCORROUT |
| Probe Severity :- 2 |
| Probe Description :- AMQ6090: MQM could not display the text for error |
| 536895781. |
| FDCSequenceNumber :- 0 |
| Comment1 :- WinNT error 1082155270 from Open ccsid.tbl. |
| |
+-----------------------------------------------------------------------------+

Applying jest coverageThreshold to aggregated report

Problem
I have a Node project with 2 different jest runs that I perform through different npm tasks. They both use the same jest.config.js, which has a declaration for coverageThreshold. I want to apply that threshold to the combined output of the jest runs.
How can I apply a jest coverage threshold to the combined output of multiple test runs?
Background
For reference, see the code for the sample project described below here
Sample project
To illustrate the problem, assume the project has a single file that looks like this:
Code under test
// src/index.js
function run(cond) {
  if (cond) {
    return "foo";
  }
  return "bar";
}

module.exports = run;
There are also 2 tests in the project:
Test 1
// src/__tests__/index.group1.test.js
var run = require("../index");

describe("when cond is true", () => {
  it("should return 'foo'", () => {
    expect(run(true)).toEqual("foo");
  });
});
Test 2
// src/__tests__/index.group2.test.js
var run = require("../index");

describe("when cond is false", () => {
  it("should return 'bar'", () => {
    expect(run(false)).toEqual("bar");
  });
});
The test coverage npm scripts in package.json look like this:
"test:group1:coverage": "jest --testPathPattern=group1 --coverage",
"test:group2:coverage": "jest --testPathPattern=group2 --coverage"
Running npm run test:group1:coverage generates:
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 75 | 50 | 100 | 75 | |
index.js | 75 | 50 | 100 | 75 | 5 |
----------|----------|----------|----------|----------|-------------------|
Running npm run test:group2:coverage generates:
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 75 | 50 | 100 | 75 | |
index.js | 75 | 50 | 100 | 75 | 3 |
----------|----------|----------|----------|----------|-------------------|
Combining coverage reports
Based on this post, I can have a separate npm task like this ...
"test:coverage": "npm run test:group1:coverage && mv ./coverage/coverage-final.json ./coverage/coverage-group1-final.json && npm run test:group2:coverage && mv ./coverage/coverage-final.json ./coverage/coverage-group2-final.json && node ./scripts/map-coverage.js"
... to generate a combined coverage report that looks like:
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 100 | 100 | 100 | 100 | |
index.js | 100 | 100 | 100 | 100 | |
----------|----------|----------|----------|----------|-------------------|
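For reference, the map-coverage.js step can be a small script over istanbul's published libraries. A sketch, assuming the linked post does roughly this (the file names match the mv commands above):

// scripts/map-coverage.js - merge per-group coverage JSON and print a text report
const fs = require("fs");
const libCoverage = require("istanbul-lib-coverage");
const libReport = require("istanbul-lib-report");
const reports = require("istanbul-reports");

// Merge each group's coverage-final JSON into one coverage map.
const map = libCoverage.createCoverageMap({});
["coverage-group1-final.json", "coverage-group2-final.json"].forEach((file) => {
  map.merge(JSON.parse(fs.readFileSync(`./coverage/${file}`, "utf8")));
});

// Render the merged map with the standard text reporter.
const context = libReport.createContext({ dir: "./coverage", coverageMap: map });
reports.create("text").execute(context);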
It's all good until this point. The problem arises when trying to apply a coverage threshold to the combined run.
Issue
Now, if I add coverageThreshold to the jest.config.js like this ...
module.exports = {
  coverageThreshold: {
    global: {
      branches: 90,
      functions: 90,
      lines: 90,
      statements: 90,
    },
  },
};
... and run npm run test:coverage, the tests fail ...
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 75 | 50 | 100 | 75 | |
index.js | 75 | 50 | 100 | 75 | 5 |
----------|----------|----------|----------|----------|-------------------|
Jest: "global" coverage threshold for statements (90%) not met: 75%
Jest: "global" coverage threshold for branches (90%) not met: 50%
Jest: "global" coverage threshold for lines (90%) not met: 75%
... because it first runs the group1 tests, which on their own fail to get over the coverage threshold set in jest.config.js. I want to ignore this threshold for individual runs, and only apply it to the combined report.
Note
I know I could split the jest.config.js file into separate ones and apply individual coverage thresholds per test group, but I don't want to manage thresholds individually per test group. I want to measure/apply it to the project as a whole, after all individual test runs have been completed and combined.
Instead of combining the tasks, combine only the test path patterns.
You can try creating a single task for group1 and group2, something like below:
"test:commongroup:coverage": "jest --testPathPattern=commonGroup --coverage",
where commonGroup is a pattern that matches both group1 and group2.
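Since --testPathPattern accepts a regular expression, the combined task can be spelled out concretely (my illustration, assuming the file names from the question):

"test:commongroup:coverage": "jest --testPathPattern='(group1|group2)' --coverage"

If the separate per-group scripts need to stay, the jest CLI can also override the shared config per run: passing --coverageThreshold='{}' to the individual runs neutralizes the threshold there, leaving it to be enforced only on the combined step.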

Why does sorting take so long?

I am currently trying to learn the syntax of Rust by solving little tasks. I compare the execution times as a sanity check that I am using the language the right way.
One task is:
Create an array of 10000000 random integers in the range 0 - 1000000000
Sort it and measure the time
Print the time for sorting it
I got the following results:
| # | Language | Speed | LOCs |
| --- | -------------------- | ------ | ---- |
| 1 | C++ (with -O3) | 1.36s | 1 |
| 2 | Python (with PyPy) | 3.14s | 1 |
| 3 | Ruby | 5.04s | 1 |
| 4 | Go | 6.17s | 1 |
| 5 | C++ | 7.95s | 1 |
| 6 | Python (with Cython) | 11.51s | 1 |
| 7 | PHP | 36.28s | 1 |
Now I wrote the following Rust code:
rust.rs
extern crate rand;
extern crate time;

use rand::Rng;
use time::PreciseTime;

fn main() {
    let n = 10000000;
    let mut array = Vec::new();
    let mut rng = rand::thread_rng();

    for _ in 0..n {
        //array[i] = rng.gen::<i32>();
        array.push(rng.gen::<i32>());
    }

    // Sort
    let start = PreciseTime::now();
    array.sort();
    let end = PreciseTime::now();

    println!("{} seconds for sorting {} integers.", start.to(end), n);
}
with the following Cargo.toml:
[package]
name = "hello_world" # the name of the package
version = "0.0.1"    # the current version, obeying semver
authors = ["you@example.com"]

[[bin]]
name = "rust"
path = "rust.rs"

[dependencies]
rand = "*" # Or a specific version
time = "*"
I compiled it with cargo run rust.rs and ran the binary. It outputs
PT18.207168155S seconds for sorting 10000000 integers.
Note that this is much slower than Python. I guess I am doing something wrong. (The complete code for Rust and the other languages is here if you are interested.)
Why does it take so long to sort with Rust? How can I make it faster?
I tried your code on my computer; running it with cargo run gives:
PT11.634640178S seconds for sorting 10000000 integers.
And with cargo run --release (turning on optimizations) gives:
PT1.004434739S seconds for sorting 10000000 integers.

How to include modules for code coverage for unit testing?

My assumption is that any module tested using Intern will automatically be covered by Istanbul's code coverage. For reasons unknown to me, my module is not being included.
I am:
running Intern 1.6.2 (installed with npm locally)
testing NodeJS code
using callbacks, not promises
using CommonJS modules, not AMD modules
Directory Structure (only showing relevant files):
plister
|
|--libraries
| |--file-type-support.js
|
|--tests
| |--intern.js
| |--unit
| |--file-type-support.js
|
|--node_modules
|--intern
plister/tests/intern.js
define({
    useLoader: {
        'host-node': 'dojo/dojo'
    },
    loader: {
        packages: [
            { name: 'libraries', location: 'libraries' }
        ]
    },
    reporters: ['console'],
    suites: ['tests/unit/file-type-support'],
    functionalSuites: [],
    excludeInstrumentation: /^(tests|node_modules)\//
});
plister/tests/unit/file-type-support.js
define([
    'intern!bdd',
    'intern/chai!expect',
    'intern/dojo/node!fs',
    'intern/dojo/node!path',
    'intern/dojo/node!stream-equal',
    'intern/dojo/node!../../libraries/file-type-support'
], function (bdd, expect, fs, path, streamEqual, fileTypeSupport) {
    'use strict';

    bdd.describe('file-type-support', function doTest() {
        bdd.it('should show that the example output.plist matches the ' +
               'temp.plist generated by the module', function () {
            var deferred = this.async(),
                input = path.normalize('tests/resources/input.plist'),
                output = path.normalize('tests/resources/output.plist'),
                temporary = path.normalize('tests/resources/temp.plist');

            // Test the deactivate function by checking the output it
            // produces against the expected test output.
            fileTypeSupport.deactivate(
                fs.createReadStream(input),
                fs.createWriteStream(temporary),
                deferred.rejectOnError(function onFinish() {
                    streamEqual(
                        fs.createReadStream(output),
                        fs.createReadStream(temporary),
                        deferred.callback(function checkEqual(error, equal) {
                            expect(equal).to.be.true;
                        }));
                }));
        });
    });
});
Output:
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (29ms)
1/1 tests passed
1/1 tests passed
Output (on failure):
FAIL: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (30ms)
AssertionError: expected true to be false
AssertionError: expected true to be false
0/1 tests passed
0/1 tests passed
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0
Output (after removing excludeInstrumentation):
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (25ms)
1/1 tests passed
1/1 tests passed
------------------------------------------+-----------+-----------+-----------+-----------+
File | % Stmts |% Branches | % Funcs | % Lines |
------------------------------------------+-----------+-----------+-----------+-----------+
node_modules/intern/ | 70 | 50 | 100 | 70 |
chai.js | 70 | 50 | 100 | 70 |
node_modules/intern/lib/ | 79.71 | 42.86 | 72.22 | 79.71 |
Test.js | 79.71 | 42.86 | 72.22 | 79.71 |
node_modules/intern/lib/interfaces/ | 80 | 50 | 63.64 | 80 |
bdd.js | 100 | 100 | 100 | 100 |
tdd.js | 76.19 | 50 | 55.56 | 76.19 |
node_modules/intern/lib/reporters/ | 56.52 | 35 | 57.14 | 56.52 |
console.js | 56.52 | 35 | 57.14 | 56.52 |
node_modules/intern/node_modules/chai/ | 37.9 | 8.73 | 26.38 | 39.34 |
chai.js | 37.9 | 8.73 | 26.38 | 39.34 |
tests/unit/ | 100 | 100 | 100 | 100 |
file-type-support.js | 100 | 100 | 100 | 100 |
------------------------------------------+-----------+-----------+-----------+-----------+
All files | 42.14 | 11.35 | 33.45 | 43.63 |
------------------------------------------+-----------+-----------+-----------+-----------+
My module passes the test, and I can make it fail too. It just will not show up in the code coverage. I have done the tutorial hosted on GitHub without any problems.
I tried dissecting the Istanbul and Intern dependencies. I placed a console.log where the files to be covered seem to pass through, but my module never comes through. I have tried every variation of deferred.callback and deferred.rejectOnError with no difference to the code coverage.
Also, any feedback on my use of deferred.callback and deferred.rejectOnError would be greatly appreciated. I am still a little uncertain about their usage.
Thanks!
As of Intern 1.6, only require('vm').runInThisContext is hooked to add code coverage data, not require. Instrumentation of require was added in Intern 2.0.
The use of callback/rejectOnError in the above code is correct.
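For background, a rough sketch of what "hooking require('vm').runInThisContext" means. This is an illustration of the mechanism, not Intern's or Istanbul's actual source; the instrument function here is a stand-in that only logs:

// Illustration: a coverage tool can wrap vm.runInThisContext so that any
// code evaluated through it is rewritten (instrumented) before it runs.
var vm = require('vm');
var originalRun = vm.runInThisContext;

function instrument(code, filename) {
    // A real instrumenter rewrites the source to increment counters;
    // this stub just records which files pass through the hook.
    console.log('instrumenting', filename);
    return code;
}

vm.runInThisContext = function (code, filename) {
    return originalRun.call(vm, instrument(code, filename), filename);
};

Modules pulled in with a plain require never pass through this hook, which is why they were invisible to coverage before Intern 2.0 started intercepting require as well.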
