How to include modules in code coverage for unit testing? - intern

My assumption is that any module tested using Intern will automatically be covered by Istanbul's code coverage. For reasons unknown to me, my module is not being included.
I am:
running Intern 1.6.2 (installed with npm locally)
testing NodeJS code
using callbacks, not promises
using CommonJS modules, not AMD modules
Directory Structure (only showing relevant files):
plister
|
|--libraries
|  |--file-type-support.js
|
|--tests
|  |--intern.js
|  |--unit
|     |--file-type-support.js
|
|--node_modules
   |--intern
plister/tests/intern.js
define({
    useLoader: {
        'host-node': 'dojo/dojo'
    },
    loader: {
        packages: [
            { name: 'libraries', location: 'libraries' }
        ]
    },
    reporters: ['console'],
    suites: ['tests/unit/file-type-support'],
    functionalSuites: [],
    excludeInstrumentation: /^(tests|node_modules)\//
});
plister/tests/unit/file-type-support.js
define([
    'intern!bdd',
    'intern/chai!expect',
    'intern/dojo/node!fs',
    'intern/dojo/node!path',
    'intern/dojo/node!stream-equal',
    'intern/dojo/node!../../libraries/file-type-support'
], function (bdd, expect, fs, path, streamEqual, fileTypeSupport) {
    'use strict';

    bdd.describe('file-type-support', function doTest() {
        bdd.it('should show that the example output.plist matches the ' +
                'temp.plist generated by the module', function () {
            var deferred = this.async(),
                input = path.normalize('tests/resources/input.plist'),
                output = path.normalize('tests/resources/output.plist'),
                temporary = path.normalize('tests/resources/temp.plist');

            // Test the deactivate function by comparing the output it
            // produces against the expected test output.
            fileTypeSupport.deactivate(fs.createReadStream(input),
                fs.createWriteStream(temporary),
                deferred.rejectOnError(function onFinish() {
                    streamEqual(fs.createReadStream(output),
                        fs.createReadStream(temporary),
                        deferred.callback(function checkEqual(error, equal) {
                            expect(equal).to.be.true;
                        }));
                }));
        });
    });
});
Output:
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (29ms)
1/1 tests passed
1/1 tests passed
Output (on failure):
FAIL: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (30ms)
AssertionError: expected true to be false
AssertionError: expected true to be false
0/1 tests passed
0/1 tests passed
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0
Output (after removing excludeInstrumentation):
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (25ms)
1/1 tests passed
1/1 tests passed
------------------------------------------+-----------+-----------+-----------+-----------+
File | % Stmts |% Branches | % Funcs | % Lines |
------------------------------------------+-----------+-----------+-----------+-----------+
node_modules/intern/ | 70 | 50 | 100 | 70 |
chai.js | 70 | 50 | 100 | 70 |
node_modules/intern/lib/ | 79.71 | 42.86 | 72.22 | 79.71 |
Test.js | 79.71 | 42.86 | 72.22 | 79.71 |
node_modules/intern/lib/interfaces/ | 80 | 50 | 63.64 | 80 |
bdd.js | 100 | 100 | 100 | 100 |
tdd.js | 76.19 | 50 | 55.56 | 76.19 |
node_modules/intern/lib/reporters/ | 56.52 | 35 | 57.14 | 56.52 |
console.js | 56.52 | 35 | 57.14 | 56.52 |
node_modules/intern/node_modules/chai/ | 37.9 | 8.73 | 26.38 | 39.34 |
chai.js | 37.9 | 8.73 | 26.38 | 39.34 |
tests/unit/ | 100 | 100 | 100 | 100 |
file-type-support.js | 100 | 100 | 100 | 100 |
------------------------------------------+-----------+-----------+-----------+-----------+
All files | 42.14 | 11.35 | 33.45 | 43.63 |
------------------------------------------+-----------+-----------+-----------+-----------+
My module passes the test, and I can make it fail too. It just will not show up in the code coverage. I have worked through the Intern tutorial hosted on GitHub without any problems.
I tried dissecting the Istanbul and Intern dependencies. I placed a console.log at the point where the files to be covered seem to pass through, but my module never comes through it. I have tried every variation of deferred.callback and deferred.rejectOnError with no difference to the code coverage.
Also, any feedback on my use of deferred.callback and deferred.rejectOnError would be greatly appreciated. I am still a little uncertain about their usage.
Thanks!

As of Intern 1.6, only require('vm').runInThisContext is hooked to add code coverage data, not require itself. Your module is loaded with a plain CommonJS require (through the intern/dojo/node! plugin), so it never passes through the instrumenter, which is why it is missing from the report. Instrumentation of require was added in Intern 2.0.
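For illustration, this is roughly the technique a require hook uses, shown here with istanbul 0.x's hook API (a sketch of the mechanism, not Intern's actual implementation):

// Sketch: instrumenting CommonJS modules as they are require()d,
// using istanbul 0.x. This is NOT something Intern 1.6 does for you.
var istanbul = require('istanbul');
var instrumenter = new istanbul.Instrumenter();

istanbul.hook.hookRequire(
    // matcher: decide which files should be instrumented
    function (filename) {
        return /[\\/]libraries[\\/]/.test(filename);
    },
    // transformer: return the instrumented source for a matching module
    function (code, filename) {
        return instrumenter.instrumentSync(code, filename);
    }
);

// Any matching module loaded after this point records its execution in
// the global __coverage__ object that the coverage reporters read.

Intern 2.0 hooks require in a comparable way out of the box, so upgrading is the most direct fix.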
The use of callback/rejectOnError in the above code is correct.
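For the record, the intended division of labor is: rejectOnError wraps intermediate callbacks, where a thrown assertion should fail the test but the test is not finished yet, while callback wraps the final callback and additionally resolves the test when the function returns without throwing. A rough sketch of the semantics, written against a generic deferred object (illustrative only, not Intern's source):

// rejectOnError: forward errors to the test, but never finish it.
function rejectOnError(deferred, fn) {
    return function () {
        try {
            return fn.apply(this, arguments);
        } catch (error) {
            deferred.reject(error);
        }
    };
}

// callback: forward errors AND resolve the test on success.
function callback(deferred, fn) {
    return function () {
        try {
            var result = fn.apply(this, arguments);
            deferred.resolve();
            return result;
        } catch (error) {
            deferred.reject(error);
        }
    };
}

In the test above, onFinish is correctly wrapped with rejectOnError (the streams still have to be compared) and checkEqual with callback (its assertion is the last step, so the test can end).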

Related

rust-sqlx: Lazy instance has previously been poisoned

I'm trying to run cargo fix on a project that uses sqlx and am getting the following error:
error: proc macro panicked
--> src/twitter/domain/user.rs:54:5
|
54 | / sqlx::query!(
55 | | r#"
56 | | INSERT INTO users
57 | | (id, created_at,
... |
84 | | user["public_metrics"]["tweet_count"].as_i64(),
85 | | )
| |_____^
|
= help: message: Lazy instance has previously been poisoned
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
...for every instance of the sqlx macro in my code. The weird thing is that it used to work just fine, but for some reason it doesn't anymore.
The only mention of the error on Google that I found is here, but I don't think it's relevant.
What might be wrong?

istanbul nyc fails to detect any files when running an ava test suite

I started trying to integrate nyc/istanbul into my ava test suites:
./node_modules/.bin/nyc --include modules/todo.js --es-modules ./node_modules/.bin/ava
1 tests passed
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
----------|---------|----------|---------|---------|-------------------
The output fails to list any of the files!
If I add the --all flag I get the following even though the tests cover both functions:
ERROR: Coverage for lines (0%) does not meet global threshold (90%)
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
todo.js | 0 | 0 | 0 | 0 | 6-30
----------|---------|----------|---------|---------|-------------------
So it looks as if the output from the ava testing tool is not being picked up by the nyc coverage tool.
The simple setup is as follows:
/* todo.js (uses ES6 modules) */
'use strict'

let data = []

function clear() {
    data = []
}

function add(item, qty = 1) {
    qty = Number(qty)
    if (isNaN(qty)) throw new Error('qty must be a number')
    data.push({ item: item, qty: qty })
    return true
}

export { add, clear }
/* todo.spec.js */
import { add, clear } from './modules/todo.js'
import test from 'ava'

test('add a single item', test => {
    clear()
    const ok = add('bread')
    test.truthy(ok)
})
I tried creating a .nycrc.json file, but this didn't help:
{
    "check-coverage": true,
    "include": [
        "modules/todo.js"
    ],
    "reporter": ["text", "html"]
}
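A plausible explanation (an assumption on my part, not something the output above confirms): nyc instruments CommonJS modules by hooking require(), while native ES modules go through Node's ESM loader and bypass that hook, so todo.js is never instrumented. One way to sanity-check this is to point the same test at a hypothetical CommonJS twin of the module, which the require() hook can see:

/* modules/todo.cjs : hypothetical CommonJS twin of todo.js, used only to
   check whether the ESM loader is what bypasses nyc's instrumentation */
'use strict'

let data = []

function clear() {
    data = []
}

function add(item, qty = 1) {
    qty = Number(qty)
    if (isNaN(qty)) throw new Error('qty must be a number')
    data.push({ item: item, qty: qty })
    return true
}

module.exports = { add, clear }

If coverage shows up for the .cjs file but not the original, the fix is to instrument the ES modules during a transform step (for example with babel-plugin-istanbul) instead of relying on nyc's require hook.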

Unable to connect to IBM MQ with the pymqi client: FAILED: MQRC_ENVIRONMENT_ERROR

I am getting the error below while connecting to IBM MQ using the pymqi library.
It is a clustered MQ channel.
Traceback (most recent call last):
File "postToQueue.py", line 432, in <module>
qmgr = pymqi.connect(queue_manager, channel, conn_info)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 2608, in connect
qmgr.connect_tcp_client(queue_manager or '', CD(), channel, conn_info, user, password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1441, in connect_tcp_client
self.connect_with_options(name, cd, user=user, password=password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1423, in connect_with_options
raise MQMIError(rv[1], rv[2])
pymqi.MQMIError: MQI Error. Comp: 2, Reason 2012: FAILED: MQRC_ENVIRONMENT_ERROR
Please see my code below.
queue_manager = 'quename here'
channel = 'channel name here'
host ='host-name here'
port = '2333'
queue_name = 'queue name here'
message = 'my message here'
conn_info = '%s(%s)' % (host, port)
print(conn_info)
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
queue.put(message)
print("message sent")
queue.close()
qmgr.disconnect()
I am getting the error at the line below:
qmgr = pymqi.connect(queue_manager, channel, conn_info)
I added the IBM client to the Scripts folder as well. I am using Windows 10, Python 3.8.1, and the IBM MQ 9.1 Windows client installation image. Below is the header of the First Failure Symptom Report that is generated:
+-----------------------------------------------------------------------------+
| |
| WebSphere MQ First Failure Symptom Report |
| ========================================= |
| |
| Date/Time :- Tue January 28 2020 16:27:51 Eastern Standard Time |
| UTC Time :- 1580246871.853000 |
| UTC Time Offset :- -300 (Eastern Standard Time) |
| Host Name :- CA-LDLD0SQ2 |
| Operating System :- Windows 10 Enterprise x64 Edition, Build 17763 |
| PIDS :- 5724H7251 |
| LVLS :- 8.0.0.11 |
| Product Long Name :- IBM MQ for Windows (x64 platform) |
| Vendor :- IBM |
| O/S Registered :- 0 |
| Data Path :- C:\Python\Scripts\IBM |
| Installation Path :- C:\Python |
| Installation Name :- MQNI08000011 (126) |
| License Type :- Unknown |
| Probe Id :- XC207013 |
| Application Name :- MQM |
| Component :- xxxInitialize |
| SCCS Info :- F:\build\slot1\p800_P\src\lib\cs\amqxeida.c, |
| Line Number :- 5085 |
| Build Date :- Dec 12 2018 |
| Build Level :- p800-011-181212.1 |
| Build Type :- IKAP - (Production) |
| UserID :- alekhya.machiraju |
| Process Name :- C:\Python\python.exe |
| Arguments :- |
| Addressing mode :- 32-bit |
| Process :- 00010908 |
| Thread :- 00000001 |
| Session :- 00000001 |
| UserApp :- TRUE |
| Last HQC :- 0.0.0-0 |
| Last HSHMEMB :- 0.0.0-0 |
| Last ObjectName :- |
| Major Errorcode :- xecF_E_UNEXPECTED_SYSTEM_RC |
| Minor Errorcode :- OK |
| Probe Type :- INCORROUT |
| Probe Severity :- 2 |
| Probe Description :- AMQ6090: MQM could not display the text for error |
| 536895781. |
| FDCSequenceNumber :- 0 |
| Comment1 :- WinNT error 1082155270 from Open ccsid.tbl. |
| |
+-----------------------------------------------------------------------------+

Error when building a beta version with fastlane

When I build a beta release with fastlane's build_app in a React Native project, I get this error.
platform :ios do
  desc "Push a new beta build to TestFlight"
  lane :beta do
    increment_build_number(xcodeproj: "TestApp.xcodeproj")
    build_app(
      workspace: "TestApp.xcworkspace",
      scheme: "TestApp",
      include_bitcode: true)
  end
end
[14:13:21]: Error packaging up the application
+------+------------------------+-------------+
| fastlane summary |
+------+------------------------+-------------+
| Step | Action | Time (in s) |
+------+------------------------+-------------+
| 1 | default_platform | 0 |
| 2 | increment_build_number | 1 |
| 💥 | build_app | 366 |
+------+------------------------+-------------+
[14:13:21]: fastlane finished with errors
[!] Error packaging up the application
Could someone help me with this one?

In my case, I had added a new distribution certificate while the old one was still in my keychain. I removed the old distribution certificate and it worked. It may not be the solution for you, but it is worth checking.

Applying jest coverageThreshold to aggregated report

Problem
I have a Node project with 2 different jest runs that I perform through different npm tasks. They both use the same jest.config.js, which has a declaration for coverageThreshold. I want to apply that threshold to the combined output of the jest runs.
How can I apply a jest coverage threshold to the combined output of multiple test runs?
Background
For reference, see the code for the sample project described below here
Sample project
To illustrate the problem, assume the project has a single file that looks like this:
Code under test
// src/index.js
function run(cond) {
    if (cond) {
        return "foo";
    }
    return "bar";
}

module.exports = run;
There are also 2 tests in the project:
Test 1
// src/__tests__/index.group1.test.js
var run = require("../index");

describe("when cond is true", () => {
    it("should return 'foo'", () => {
        expect(run(true)).toEqual("foo");
    });
});
Test 2
// src/__tests__/index.group2.test.js
var run = require("../index");

describe("when cond is false", () => {
    it("should return 'bar'", () => {
        expect(run(false)).toEqual("bar");
    });
});
The test coverage npm scripts in package.json look like this:
"test:group1:coverage": "jest --testPathPattern=group1 --coverage",
"test:group2:coverage": "jest --testPathPattern=group2 --coverage"
Running npm run test:group1:coverage generates:
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 75 | 50 | 100 | 75 | |
index.js | 75 | 50 | 100 | 75 | 5 |
----------|----------|----------|----------|----------|-------------------|
Running npm run test:group2:coverage generates:
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 75 | 50 | 100 | 75 | |
index.js | 75 | 50 | 100 | 75 | 3 |
----------|----------|----------|----------|----------|-------------------|
Combining coverage reports
Based on this post, I can have a separate npm task like this ...
"test:coverage": "npm run test:group1:coverage && mv ./coverage/coverage-final.json ./coverage/coverage-group1-final.json && npm run test:group2:coverage && mv ./coverage/coverage-final.json ./coverage/coverage-group2-final.json && node ./scripts/map-coverage.js"
... to generate a combined coverage report that looks like:
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 100 | 100 | 100 | 100 | |
index.js | 100 | 100 | 100 | 100 | |
----------|----------|----------|----------|----------|-------------------|
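For context, here is a minimal sketch of what a map-coverage.js merge script can look like, assuming the istanbul-lib-coverage, istanbul-lib-report, and istanbul-reports packages (the script in the linked post may differ):

// scripts/map-coverage.js : hypothetical merge script (sketch)
const fs = require('fs');
const path = require('path');
const libCoverage = require('istanbul-lib-coverage');
const libReport = require('istanbul-lib-report');
const reports = require('istanbul-reports');

// Merge the per-run JSON coverage files into a single coverage map.
const map = libCoverage.createCoverageMap({});
['coverage-group1-final.json', 'coverage-group2-final.json'].forEach((file) => {
    const raw = fs.readFileSync(path.join('coverage', file), 'utf8');
    map.merge(JSON.parse(raw));
});

// Render the combined map with the standard text reporter.
const context = libReport.createContext({ dir: 'coverage', coverageMap: map });
reports.create('text').execute(context);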
It's all good until this point. The problem arises when trying to apply a coverage threshold to the combined run.
Issue
Now, if I add coverageThreshold to the jest.config.js like this ...
module.exports = {
    coverageThreshold: {
        global: {
            branches: 90,
            functions: 90,
            lines: 90,
            statements: 90,
        },
    },
};
... and run npm run test:coverage, the run fails ...
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 75 | 50 | 100 | 75 | |
index.js | 75 | 50 | 100 | 75 | 5 |
----------|----------|----------|----------|----------|-------------------|
Jest: "global" coverage threshold for statements (90%) not met: 75%
Jest: "global" coverage threshold for branches (90%) not met: 50%
Jest: "global" coverage threshold for lines (90%) not met: 75%
... because it first runs the group1 tests, which on their own fail to clear the coverage threshold set in jest.config.js. I want to ignore this threshold for individual runs and apply it only to the combined report.
Note
I know I could split the jest.config.js file into separate ones and apply individual coverage thresholds per test group, but I don't want to manage thresholds individually per test group. I want to measure/apply it to the project as a whole, after all individual test runs have been completed and combined.
Instead of combining the tasks, combine only the test path patterns.
You can create a single task that runs both group1 and group2, something like this:
"test:commongroup:coverage": "jest --testPathPattern=commonGroup --coverage",
where commonGroup is a pattern that matches both group1 and group2.
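Since --testPathPattern accepts a regular expression, one concrete way to write such a task (a sketch based on the file names above; the task name is just an example) is:

"test:combined:coverage": "jest --testPathPattern='group(1|2)' --coverage"

A single jest process then collects coverage for both test groups, so the global coverageThreshold from jest.config.js is checked once against the combined totals.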
