During development we occasionally use skip or only to debug a particular test or test suite. Sometimes we forget to revert those changes before pushing the code for a PR. I am looking for a way to either detect skip/only tests or automatically run all tests, including skipped and only ones, in our CI pipeline (using GitHub Actions). Either of the following would work:
Fail the test run when there are skip or only tests.
Run all tests, even those marked skip or only.
Very much appreciate any help.
I came up with a solution for the second part of the question: running all tests even for skip and only. I don't think it's an elegant solution, but it works and is easy to implement.
First of all, you need to change the test runner to jest-circus if you work with a Jest version below 27.x. We need it so that our custom test environment can use the handleTestEvent function to watch for setup events. To do so, install jest-circus with npm i jest-circus and then set the testRunner property in your jest.config.js:
//jest.config.js
module.exports = {
  testRunner: 'jest-circus/runner',
  ...
}
Since Jest 27.0 the default test runner is jest-circus, so you can skip this step if you are on that version or higher.
Then you have to write a custom test environment. I suggest basing it on jsdom so that, for example, we also have access to the window object in tests. Run npm i jest-environment-jsdom in the terminal, then create the custom environment like so:
//custom-jsdom-environment.js
const JsDomEnvironment = require('jest-environment-jsdom')

class CustomJsDomEnvironment extends JsDomEnvironment {
  async handleTestEvent(event, state) {
    if(process.env.IS_CI === 'true' && event.name === 'setup') {
      this.global.describe.only = this.global.describe
      this.global.describe.skip = this.global.describe
      this.global.fdescribe = this.global.describe
      this.global.xdescribe = this.global.describe

      this.global.it.only = this.global.it
      this.global.it.skip = this.global.it
      this.global.fit = this.global.it
      this.global.xit = this.global.it

      this.global.test.only = this.global.test
      this.global.test.skip = this.global.test
      this.global.ftest = this.global.test
      this.global.xtest = this.global.test
    }
  }
}

module.exports = CustomJsDomEnvironment
And tell Jest to use it:
//jest.config.js
module.exports = {
  testRunner: 'jest-circus/runner',
  testEnvironment: 'path/to/custom/jsdom/environment.js',
  ...
}
Then you just have to set the IS_CI environment variable in your CI pipeline, and from then on all your skipped tests will run.
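In GitHub Actions that can be done with an env: block on the job or step (e.g. IS_CI: 'true'), or by prefixing the command on a Linux runner: IS_CI=true npx jest.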
Also, in the custom test environment you could watch for skipped tests and throw an error when the runner finds a skip/only. Unfortunately, throwing an error in this place won't fail the test; you would need to find a way to fail a test from outside of a test.
//custom-jsdom-environment.js
const JsDomEnvironment = require('jest-environment-jsdom')
const path = require('path')

class CustomJsDomEnvironment extends JsDomEnvironment {
  constructor(config, context) {
    super(config, context)
    const testPath = context.testPath
    this.testFile = path.basename(testPath)
  }

  async handleTestEvent(event, state) {
    if(process.env.IS_CI === 'true' && event.name === 'add_test') {
      if(event.mode === 'skip' || event.mode === 'only') {
        const msg = `Run ${event.mode} test: '${event.testName}' in ${this.testFile}`
        throw new Error(msg)
      }
    }
  }
}

module.exports = CustomJsDomEnvironment
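For the first part of the question (failing the run when a stray skip/only slips in), an alternative outside the test environment, not from the original answer, is a lint rule: eslint-plugin-jest ships no-focused-tests and no-disabled-tests rules that can fail CI before Jest even runs. A minimal sketch, assuming the plugin is installed:

//.eslintrc.js
module.exports = {
  plugins: ['jest'],
  rules: {
    'jest/no-focused-tests': 'error', // catches .only, fit, fdescribe
    'jest/no-disabled-tests': 'error', // catches .skip, xit, xdescribe
  },
}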
I am using Jest for testing. How do I access the filename or file path of the current test being run?
I need a conditional statement that runs different lines of code based on whether it is a unit test file or an integration test file.
Here is an example of what I am trying to achieve:
beforeAll(() => {
  if (integration_test_file) {
    // run this code
  } else if (unit_test_file) {
    // run this code instead
  }
})
This information is available in the Jest environment. Here is how to expose it from a custom environment:
const Environment = require('jest-environment-node'); // or jest-environment-jsdom

module.exports = class MyEnvironment extends Environment {
  constructor(config, context) {
    super(config, context);
    this.testPath = context.testPath;
  }

  async setup() {
    await super.setup();
    this.global.IS_INTEGRATION = /match integration/.test(this.testPath);
  }
}
The environment is instantiated for each test suite; testPath contains the full path to the current test file.
The IS_INTEGRATION global variable will be available in setupFilesAfterEnv and in the tests themselves. If the code needs to run for all tests, it may belong in the environment's setup and teardown methods.
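With that in place, the beforeAll from the question becomes (a sketch; IS_INTEGRATION is the global exposed by the environment above):

beforeAll(() => {
  if (IS_INTEGRATION) {
    // run integration-specific setup
  } else {
    // run unit-test setup
  }
})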
You can access the file path of the test being run in Jest via its global variables. The file path can be found under the global variable jasmine.testPath or global.jasmine.testPath.
This answer only applies if you're using Jest with its default test runner, "jasmine" or "jasmine2". Results will differ based on the test runner you use; see
https://jestjs.io/docs/en/configuration#testrunner-string
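For example:

// inside a test file, with the default jasmine runner
console.log(global.jasmine.testPath) // full path of the current test file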
__dirname works a treat (ref: https://www.geeksforgeeks.org/how-to-get-the-path-of-current-script-using-node-js/)
We're building a Node.js framework, MidwayJS, and we recommend that our users use Jest for their test cases. We modified the Jest environment like this:
'use strict';
const NodeEnvironment = require('jest-environment-node');

/* eslint-disable no-useless-constructor */
class JestEnvironment extends NodeEnvironment {
  constructor(config) {
    super(config);
  }

  async setup() {
    require('ts-node/register');
    this.global.process.env.MIDWAY_TS_MODE = 'true';
    this.global.process.env.MIDWAY_JEST_MODE = 'true';
    this.global.setTimeout(3000)
    await super.setup();
  }

  async teardown() {
    await super.teardown();
  }

  runScript(script) {
    return super.runScript(script);
  }
}

module.exports = JestEnvironment;
With the configuration above it works well, but we ran into a problem:
Some users create huge project directories, and our framework scans the whole directory before the application starts. It does the same when running test cases, which in some situations takes more than the 3000ms our setup allows for an async callback, so if the scanning doesn't finish in time the test crashes.
We solved this easily with a single line in jest.setup.js: jest.setTimeout(30000). But now we want to solve it in the environment file shown above, and I didn't find a way to modify the setup config there.
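For reference, the one-line workaround is wired in like this (a sketch; the testTimeout option in jest.config.js is an alternative in newer Jest versions):

//jest.setup.js
jest.setTimeout(30000)

//jest.config.js
module.exports = {
  testEnvironment: 'path/to/the/environment/above.js',
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
}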
I'd appreciate it greatly if you can help me figure it out.
How do I use jest.run() or jest.runCLI() to run all tests programmatically? What am I supposed to feed in as arguments?
I tried to find documentation on them but failed.
And if the above functions don't work, what am I supposed to call if I want to run Jest programmatically?
Jest is not supposed to be run programmatically. Maybe it will be in the future.
Try running the following:
const jest = require("jest");

const options = {
  projects: [__dirname],
  silent: true,
};

jest
  .runCLI(options, options.projects)
  .then((success) => {
    console.log(success);
  })
  .catch((failure) => {
    console.error(failure);
  });
The success value passed to the then callback is an object containing globalConfig and results keys. Have a look at them; maybe they will help you.
From what I have experienced so far, using run() requires you to define a static config and then pass arguments to Jest much like you would normally via the Jest CLI.
Using runCLI() allows you to dynamically create a config and provide it to Jest.
I opted for the former because I wanted to expose only a few of the Jest CLI options for a global configuration:
import jest from "jest";
import { configPaths } from "../_paths";
import { Logger } from "../_utils";

process.env.BABEL_ENV = "test";
process.env.NODE_ENV = "test";

const defaultArgs = ["--config", configPaths.jestConfig];
const log = new Logger();

const resolveTestArgs = async args => {
  let resolvedArgs = [];

  if (args.file || args.f) {
    return [args.file || args.f, ...defaultArgs];
  }

  // updates the snapshots
  if (args.update || args.u) {
    resolvedArgs = [...resolvedArgs, "--updateSnapshot"];
  }

  // tests the coverage
  if (args.coverage || args.cov) {
    resolvedArgs = [...resolvedArgs, "--coverage"];
  }

  // runs the watcher
  if (args.watch || args.w) {
    resolvedArgs = [...resolvedArgs, "--watch"];
  }

  // ci arg to update default snapshot feature
  if (args.ci) {
    resolvedArgs = [...resolvedArgs, "--ci"];
  }

  // tests only tests that have changed
  if (args.changed || args.ch) {
    resolvedArgs = [...resolvedArgs, "--onlyChanged"];
  }

  return [...defaultArgs, ...resolvedArgs];
};

export const test = async cliArgs => {
  try {
    const jestArgs = await resolveTestArgs(cliArgs);
    jest.run(jestArgs);
  } catch (error) {
    log.error(error);
    process.exit(1);
  }
};
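Usage is then just a matter of forwarding parsed CLI arguments, for example (hypothetical wiring; any argv parser will do, yargs-parser shown here):

// run-tests.js (hypothetical entry point)
import parser from "yargs-parser";
import { test } from "./test";

test(parser(process.argv.slice(2)));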
Here's an example from my post on How to run Jest programmatically in node.js (Jest JavaScript API), this time using TypeScript.
Install the dependencies
npm i -S jest-cli
npm i -D @types/jest-cli @types/jest
Make a call
import {runCLI} from 'jest-cli';
import ProjectConfig = jest.ProjectConfig;

const projectRootPath = '/path/to/project/root';

// Add any Jest configuration options here
const jestConfig: ProjectConfig = {
  roots: ['./dist/tests'],
  testRegex: '\\.spec\\.js$'
};

// Run Jest asynchronously (note: await must appear inside an async function)
const result = await runCLI(jestConfig as any, [projectRootPath]);

// Analyze the results
// (see typings for result format)
if (result.results.success) {
  console.log(`Tests completed`);
} else {
  console.error(`Tests failed`);
}
Also, regarding @PeterDanis's answer, I'm not sure Jest will reject the promise in case of failed tests. In my experience it will resolve with result.results.success === false.
If all your configs are in jest.config.js, you can just do this:
const jest = require('jest')
jest.run([])
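The array takes ordinary Jest CLI flags, so for example:

const jest = require('jest')
// equivalent to running `jest --ci --coverage` on the command line
jest.run(['--ci', '--coverage'])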
I would like to use test.before() to bootstrap my tests. The setup I have tried does not work:
// bootstrap.js
const test = require('ava')

test.before(t => {
  // do this exactly once for all tests
})

module.exports = { test }
// test1.js
const { test } = require('../bootstrap')

test(t => { ... })
AVA will run the before() function before each test file. I could add a check within the before call to see whether it has already been called, but I'd like to find a cleaner approach. I have tried using the require parameter with:
"ava": {
"require": [
"./test/run.js"
]
}
With:
// bootstrap.js
const test = require('ava')
module.exports = { test }

// run.js
const { test } = require('./bootstrap')
test.before(t => { })

// test1.js
const { test } = require('../bootstrap')
test(t => { ... })
But that just breaks with worker.setRunner is not a function. I'm not sure what it expects there.
AVA runs each test file in its own process. test.before() should be used to set up fixtures that are used just by the process it's called in.
It sounds like you want to do setup that is reused across your test files / processes. Ideally that's avoided since you can end up creating hard-to-detect dependencies between the execution of different tests.
Still, if this is what you need then I'd suggest using a pretest npm script, which is run automatically when you do npm test.
In your package.json you could run a setup script first...
"scripts": {
"test": "node setup-test-database.js && ava '*.test.js'"
}
Then...
Have that setup-test-database.js file do all your bootstrappy needs, and save a test-config.json file with whatever you need to pass to the tests.
In each test you just need to add const config = require('./test-config.json'); and you'll have access to the data you need.
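A minimal sketch of such a setup script (the actual database bootstrap is whatever your project needs):

// setup-test-database.js (a sketch)
const fs = require('fs')

// ... create and seed the test database here ...

// save whatever the tests need to read back
fs.writeFileSync('./test-config.json', JSON.stringify({
  // e.g. connection details for the test database
}))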
I'd like to override some values at test time, specifically setting the retries for an HTTP service to 1 (immediate failure, no retries). Our project uses node-config. According to the docs, I can override with the NODE_CONFIG env variable:
node myapp.js --NODE_CONFIG='{"Customer":{"dbConfig":{"host":"customerdb.prod"}}}'
Well I would prefer to do this in my test, but not for all tests. The code says that you can allow config mutations by setting ALLOW_CONFIG_MUTATIONS.
process.env.ALLOW_CONFIG_MUTATIONS = "true";
const importFresh = require('import-fresh');
importFresh("config");
process.env.NODE_CONFIG = JSON.stringify({httpServices:{integration:{enrich: {retryInterval: 1, retries: 1}}}});
expect(process.env.NODE_CONFIG, 'NODE_CONFIG not set').to.exist();
expect(process.env.NODE_CONFIG, 'NODE_CONFIG not set').to.match(/retryInterval/);
expect(process.env.ALLOW_CONFIG_MUTATIONS, 'ALLOW_CONFIG_MUTATIONS not set').to.equal("true");
const testConfig = require("config");
console.dir(testConfig.get("httpServices.integration.enrich"));
expect(testConfig.get("httpServices.integration.enrich.retryInterval"), 'config value not set to 1').to.equal(1);
Result:
{ url: 'https://internal-**********',
  retryInterval: 5000,
  retries: 5 }
`Error: config value not set to 1: Expected 5000 to equal specified value: 1`
How do I get this override to work?
(expect is from Hapi.js Code library)
I'm one of the maintainers of node-config. Your bug is that you used require the second time, when you should have used importFresh again.
Your first use of importFresh() does nothing different from require(), because it is the first use of require().
After setting some variables, you call require(), which returns the copy of config already generated and cached, ignoring the effects of the environment variables you set.
You only needed to use importFresh() once, where you currently use require(). This will cause a "fresh" copy of the config object to be returned, as you expected.
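In other words, the corrected order is (a sketch based on the explanation above):

process.env.ALLOW_CONFIG_MUTATIONS = "true";
process.env.NODE_CONFIG = JSON.stringify({httpServices:{integration:{enrich: {retryInterval: 1, retries: 1}}}});

// a single fresh load, after the environment variables are set
const importFresh = require('import-fresh');
const testConfig = importFresh("config");

testConfig.get("httpServices.integration.enrich.retries"); // 1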
Simply changing config's property worked for me.
For example:
const config = require( 'config' );
config.httpServices.integration.enrich.retryInterval = 1;
// Do your tests...
UPD: Make sure the overrides are done before anyone calls config.get() for the first time, because the config object is made immutable as soon as any client reads a value via get().
Joining late, but the other answers did not fit the testing standard in my project, so here is what I came up with.
TL;DR
Use mocks.
Detailed Answer
node-config uses a get function to read configuration values.
By mocking get you can easily modify any configuration value you see fit.
My personal favorite mocking library is sinon.
Here is an implementation of a mock with sinon:
const config = require('config');
const sinon = require('sinon');

class MockConfig {
  constructor () {
    this.params = { confValues: {} };
    // sinon.createSandbox() replaces the older sinon.sandbox.create()
    this.sandbox = sinon.createSandbox();
  }

  withConfValue (confKey, confValue) {
    this.params.confValues[confKey] = confValue;
    return this;
  }

  reset () {
    this.params.confValues = {};
    return this;
  }

  restore () {
    this.sandbox.restore();
  }

  apply () {
    this.restore(); // avoid duplicate wrapping
    this.sandbox.stub(config, 'get').callsFake((configKey) => {
      if (this.params.confValues.hasOwnProperty(configKey)) {
        return this.params.confValues[configKey];
      }
      // not ideal.. however `wrappedMethod` approach did not work for me
      // https://stackoverflow.com/a/57017971/1068746
      return configKey
        .split('.')
        .reduce((result, item) => result[item], config);
    });
  }
}

const instance = new MockConfig();
MockConfig.instance = () => instance;

module.exports = MockConfig;
Usage would be
const mockConfig = require('./mock_config').instance();
...
beforeEach(function () {
mockConfig.reset().apply();
})
afterEach(function () {
mockConfig.reset().clear();
})
it('should do something') {
mockConfig.withConfValue('some_topic.some_field.property', someValue);
... rest of the test ...
}
Assumptions
The only assumption this approach makes is that you adhere to the node-config way of reading configuration (using the get function) and do not bypass it by accessing fields directly.
It's better to create development.json, production.json and test.json files in your config folder; node-config will use them for your app configuration.
You just need to set NODE_ENV to use the specific file.
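For example, a config/test.json that node-config picks up when NODE_ENV=test (a sketch; the contents depend on your app):

//config/test.json
{
  "httpServices": {
    "integration": {
      "enrich": { "retryInterval": 1, "retries": 1 }
    }
  }
}

Then run the tests with NODE_ENV=test npm test.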
Hope it helps :)