Jest: initialize shared objects once for all test suites and use them across test cases - jestjs

I want to use shared resources between Jest test suites. I read on the internet that a custom test environment could be the solution, but the setup is invoked once per test file.
I have two test files, links.test.js and subscriptions.test.js, and I usually run them with a single jest command.
The problem is that the setup function of my custom environment, custom-environment.js:
const NodeEnvironment = require('jest-environment-node');
const MySql = require('../../lib/databases/myslq/db');

class CustomEnvironment extends NodeEnvironment {
  constructor(config) {
    super(config);
  }

  async setup() {
    await super.setup();
    console.log(`Global Setup !!!!!!!!!`);
    this.global.gObject = "I am global object";
    this.global.liveUsers = await new MySql("Live Users");
    this.global.stageUsers = await new MySql("Stage Users");
  }

  async teardown() {
    console.log(`Global teardown !!!!!!!!!`);
    await super.teardown();
    this.global.gObject = "I am destroyed";
    this.global.liveUsers.closeConnection();
    this.global.stageUsers.closeConnection();
  }

  runScript(script) {
    return super.runScript(script);
  }
}

module.exports = CustomEnvironment;
is called once per test file, so with two test files it runs twice:
Global Setup !!!!!!!!!
Global Setup !!!!!!!!!
ERROR>>> Error: listen EADDRINUSE: address already in use 127.0.0.1:3306
So it tries to establish a second connection to the same port, while it could simply reuse the existing one.
The way it works now seems no different from defining
beforeAll(async () => {
});
afterAll(() => {
});
hooks.
So to wrap up, the question is: using the jest command (thus running all test suites), how can I invoke the setup function once for all tests and share global objects across them?

setup and teardown are indeed executed for each test suite, similarly to top-level beforeAll and afterAll.
Test suites run in separate processes. The test environment is initialized once per test suite; for example, the jsdom environment provides a fresh fake DOM instance for each suite, so suites cannot cross-contaminate each other.
As the documentation states,
Note: TestEnvironment is sandboxed. Each test suite will trigger setup/teardown in their own TestEnvironment.
The environment isn't suitable for global setup and teardown; globalSetup and globalTeardown should be used for that. They are appropriate for setting up and shutting down server instances, which is what the documentation example shows:
// setup.js
module.exports = async () => {
  // ...
  // Set reference to mongod in order to close the server during teardown.
  global.__MONGOD__ = mongod;
};

// teardown.js
module.exports = async function () {
  await global.__MONGOD__.stop();
};
Since this happens in the parent process, __MONGOD__ is not available inside the test suites themselves.
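A minimal sketch of how the pieces fit together, reusing the MySql wrapper from the question (the file names, and passing connection details through process.env, are assumptions):

// jest.config.js
module.exports = {
  globalSetup: "./global-setup.js",
  globalTeardown: "./global-teardown.js"
};

// global-setup.js
const MySql = require('./lib/databases/myslq/db'); // hypothetical path, mirrors the question

module.exports = async () => {
  // Runs once in the parent process, before any test suite starts.
  global.__LIVE_USERS__ = await new MySql("Live Users");
  // Objects created here are NOT visible inside test suites,
  // so expose whatever the suites need via process.env (or a temp file).
  process.env.LIVE_USERS_DB_NAME = "Live Users";
};

// global-teardown.js
module.exports = async () => {
  // Runs once after all test suites have finished.
  await global.__LIVE_USERS__.closeConnection();
};

Test suites would then open their own lightweight handle using process.env.LIVE_USERS_DB_NAME (or talk to the already-running server) rather than re-creating the server itself.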

Related

Jest test fail: "● default root route"

I'm trying to write Jest tests for a Fastify project. But I'm stuck with the example code failing with an ambiguous error: "● default root route".
// root.test.ts
import { build } from '../helper'

const app = build()

test('default root route', async () => {
  const res = await app.inject({
    url: '/'
  })
  expect(res.json()).toEqual({ root: true })
})

// helper.ts
import Fastify from "fastify"
import fp from "fastify-plugin"
import App from "../src/app"

export function build() {
  const app = Fastify()

  beforeAll(async () => {
    void app.register(fp(App))
    await app.ready()
  })

  afterAll(() => app.close())

  return app
}
// console error:
FAIL test/routes/root.test.ts (8.547 s)
● default root route
A worker process has failed to exit gracefully and has been force exited. This is likely caused by tests leaking due to improper teardown. Try running with --detectOpenHandles to find leaks. Active timers can also cause this, ensure that .unref() was called on them.
What am I doing wrong?
After running with --detectOpenHandles, Jest reported that open ioredis connections were timing out.
I hooked the ioredis instances into the Fastify lifecycle with fastify-redis and the test passed.
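For reference, a sketch of the fix: register the Redis client as a Fastify plugin so that afterAll(() => app.close()) also closes the connection (the fastify-redis registration and the options shown here are assumptions based on that plugin's usual usage):

// helper.js (sketch)
const Fastify = require("fastify")
const fastifyRedis = require("fastify-redis")

function build() {
  const app = Fastify()
  // The plugin owns the ioredis client, so closing the app
  // also closes the connection and the Jest worker can exit cleanly.
  app.register(fastifyRedis, { host: "127.0.0.1", port: 6379 })
  return app
}

module.exports = { build }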

Mock WebAPI interface using ts-mockito

I'm writing a unit test for a class which uses a browser Web API interface.
I use ts-mockito to mock the interface (a WebGL2RenderingContext in my case).
When I run the test, Node throws ReferenceError: WebGL2RenderingContext is not defined,
which is understandable, because the test runs in a Node.js environment, not a browser, so the class/interface doesn't exist.
Is there any way to make the Node.js environment aware of the Web API interfaces, so that they can be mocked?
NOTE: Since it's a unit test, it should NOT be run on a real browser.
jsdom seems to be a possible solution, but I have no idea how to mock it with ts-mockito.
The following snippet illustrates what I'm trying to do:
import { mock, instance, verify } from 'ts-mockito'

// ========== CLASS ==========
class DummyClass {
  dummyMethod() : void {}
}

class TestedClass {
  static testDummy(dummy : DummyClass) : void {
    dummy.dummyMethod();
  }
  static testGlCtx(glCtx : WebGL2RenderingContext) : void {
    glCtx.flush();
  }
}

// ========== TEST ==========
describe('DummyClass', () => {
  // This test passed successfully
  it('works fine', () => {
    const mockDummy = mock(DummyClass);
    TestedClass.testDummy( instance(mockDummy) );
    verify( mockDummy.dummyMethod() ).once();
  });
});

describe('WebGL interface', () => {
  it('works fine', () => {
    // This line failed with 'ReferenceError: WebGL2RenderingContext is not defined'
    const mockGLCtx = mock(WebGL2RenderingContext);
    TestedClass.testGlCtx( instance(mockGLCtx) );
    verify( mockGLCtx.flush() ).once();
  });
});
Run using mocha with the command mocha --require ts-node/register 'test.ts'.
There are two solutions: one for common DOM APIs, and one for generic interface mocking.
For common DOM APIs
As detailed in this StackOverflow answer, jsdom can be used to bring DOM APIs into NodeJS runtime environment.
Run npm install --save-dev jsdom global-jsdom
and change Mocha's command to
mocha --require ts-node/register --require global-jsdom/register 'test.ts'
NOTE: global-jsdom is the newer & updated version of jsdom-global.
This solution works for common DOM APIs (such as HTMLElement, SVGElement, File),
but it doesn't work for more specialized APIs (WebGL, Crypto, audio & video).
For generic interface mocking
It turns out ts-mockito has a way to mock interfaces, including DOM and any other browser Web API.
So the above test code can be changed to:
describe('WebGL interface', () => {
  it('works fine', () => {
    const mockGLCtx = mock<WebGL2RenderingContext>();
    TestedClass.testGlCtx( instance(mockGLCtx) );
    verify( mockGLCtx.flush() ).once();
  });
});
and the test will run successfully.

Mocha how to use utils function stackTraceFilter()

I'm trying to use the Mocha utils stackTraceFilter() function,
but I cannot find an example that explains how to use it in one's own tests. I found the official tests here: link
But how can I implement it in my tests, which look something like this:
import { expect } from 'chai'
import 'mocha'
import { main, main2 } from './'

describe.only('index.ts', async () => {
  it('should start a job', async () => {
    // const R_RUN_MAIN = await main()
    await main2()
    // TEST
    expect(1).to.equal(1) // fails
  })
})
In the official tests I can see the line
expect(filter(stack.join('\n')), 'to be', stack.slice(0, 3).join('\n'));
But how do I get the stack for my test?
expect(1).to.equal(1) // fails
Or, in general, how do I get the stack and initialize the filter function for the whole file when, for example, code from an imported file is already failing and creating a long stack trace?
UPDATE (2018.08.15)
So I got Mocha running programmatically:
export {}
import * as MOCHA from 'mocha'

async function run() {
  const mocha = new MOCHA({
    reporter: 'progress',
    reporterOptions: {
      verbose: true,
    },
  })
  mocha.addFile(`./src/utils/mocha/index.spec.ts`)
  const R = mocha.run((failures) => {
    process.on('exit', () => {
      process.exit(failures)
    })
  })
}

run()
I don't know where to add and run the filter function:
const filter = MOCHA.utils.stackTraceFilter
The stackTraceFilter() function in mocha isn't meant to filter your code, but rather the mocha internals that in theory shouldn't be relevant to your tests. You can view the source code, but to sum it up it just filters out 'mocha' and 'node' lines from the stack, depending on the environment you're in.
I think what you're looking for could be accomplished through the package StackTraceJS, which allows you to grab a stack from anywhere, and do what you want with it. We created a custom reporter for mocha which uses it, and it works quite well.
So, using the example from their site:
const StackTrace = require('stacktrace-js');

StackTrace.get()
  .then(function (stack) {
    // you now have a stack, and can filter as you wish
  })
  .catch(function (err) {});
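If you do want Mocha's own filtering on top of that, a rough sketch of combining the two could look like this (the frame-to-string formatting is an assumption; stackTraceFilter() returns a function that takes a stack string, as the official test shows):

const StackTrace = require('stacktrace-js')
const { utils } = require('mocha')

// stackTraceFilter() returns a function that strips mocha/node internals from a stack string
const filterStack = utils.stackTraceFilter()

StackTrace.get()
  .then((frames) => {
    // Join the frames into a single stack string, then filter it
    const filtered = filterStack(frames.map((frame) => frame.toString()).join('\n'))
    console.log(filtered)
  })
  .catch((err) => console.error(err))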

Jest beforeAll() share between multiple test files

I have a Node.js project that I'm testing using Jest. I have several test files that have the same setup requirement. Previously, all these tests were in one file, so I just had a beforeAll(...) that performed the common setup. Now, with the tests split into multiple files, it seems like I have to copy/paste that beforeAll(...) code into each of the files. That seems inelegant - is there a better way to do this, ideally where I can just write my beforeAll(...)/setup logic once, and "require" it from multiple test files? Note that there are other tests in my test suite that don't require this setup functionality, so I don't want to make all my tests run this setup (just a particular subset of test files).
If you're using Jest >=20, you might want to look into creating a custom jest-environment for the tests that require this common setup. This would be a module that extends either jest-environment-node or jest-environment-jsdom, and implements async setup(), async teardown(), and async runScript() to do this setup work.
You can then add a @jest-environment my-custom-env docblock directive to those files that require this setup.
See the Jest config docs for testEnvironment for details on how to set this up; there's a simple example there.
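For example, a test file opts into the environment with a docblock at the very top (the path below is a placeholder; depending on your Jest version this can be a relative path or an installed module name):

/**
 * @jest-environment ./test/custom-environment.js
 */

test('can use objects prepared by the custom environment', () => {
  // gObject mirrors the property set in the question's setup(); adjust to your own globals
  expect(global.gObject).toBeDefined()
})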
I am using a simple "test hooks" pattern for this:
// This function wraps beforeAll and afterAll into a single RAII-like call.
// That makes the describe code further down easier to read and makes
// sure you don't forget the afterAll part. Can easily be shared between tests.
function useFakeServer() {
  let server;
  beforeAll(() => server = sinon.fakeServer.create());
  afterAll(() => server.restore());
  return () => server;
}

describe('Some scenario', () => {
  const getServer = useFakeServer();

  it('accesses the server', () => {
    const server = getServer();
    // Test as you normally would..
    expect(server.requests[0]. /* ... */);
  });
});
If you need a script to run before all your test files, you can use globalSetup
This option allows the use of a custom global setup module which exports an async function that is triggered once before all test suites.
In your jest.config.js:
// jest.config.js
module.exports = {
  ...
  testTimeout: 20000,
  globalSetup: "./setup.js"
};
Then create a file named setup.js:
// setup.js
module.exports = async () => {
  console.log("I'll be called first before any test cases run");
  // add in what you need to do here
};
Docs
You can move your beforeAll logic into one file and reference it in the setupFilesAfterEnv section of jest.config.js:
module.exports = {
  ...
  setupFilesAfterEnv: ['<rootDir>/testHelper.ts'],
  ...
}
https://jestjs.io/docs/en/configuration#setupfilesafterenv-array
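A sketch of what such a shared file might contain (the file name matches the config above; the hook bodies are placeholders):

// testHelper.ts
beforeAll(async () => {
  // shared setup; this module is loaded for every test file Jest runs,
  // so the hook still executes once per file, before that file's tests
});

afterAll(async () => {
  // shared cleanup for each test file
});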
Create a function somewhere like so:
export function setupBeforeAndAfter(putParamsHereIfYouHaveAny) {
  beforeAll(() => { /* shared before-all code */ });
  afterAll(() => { /* shared after-all code */ });
  beforeEach(() => { /* shared before-each code */ });
  afterEach(() => { /* shared after-each code */ });
}
Then just call it wherever you would otherwise have manually written these functions:
describe('My test', () => {
  setupBeforeAndAfter(putParamsHereIfYouHaveAny)

  it('is amazing', () => {
    // Stuff in setupBeforeAndAfter() will run before/after this test as appropriate
  })
})

Is there a jest config that will fail tests on console.warn?

How do I configure jest tests to fail on warnings?
console.warn('stuff');
// fail test
You can use this simple override:
let error = console.error

console.error = function (message) {
  error.apply(console, arguments) // keep default behaviour
  throw (message instanceof Error ? message : new Error(message))
}
You can make it available across all tests using Jest setupFiles.
In package.json:
"jest": {
"setupFiles": [
"./tests/jest.overrides.js"
]
}
Then put the snippet into jest.overrides.js
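Since the question asks about console.warn specifically, the same override pattern can be applied to it as well (a sketch mirroring the snippet above):

let warn = console.warn

console.warn = function (message) {
  warn.apply(console, arguments) // keep default behaviour
  throw (message instanceof Error ? message : new Error(message))
}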
For those using create-react-app, not wanting to run npm run eject, you can add the following code to ./src/setupTests.js:
global.console.warn = (message) => {
  throw message
}

global.console.error = (message) => {
  throw message
}
Now, jest will fail when messages are passed to console.warn or console.error.
create-react-app Docs - Initializing Test Environment
I implemented this recently using jest.spyOn, introduced in v19.0.0, to mock the warn method of console (which is accessed via the global context/object).
You can then expect that the mocked warn was not called, as shown below.
describe('A function that does something', () => {
  it('Should not trigger a warning', () => {
    var warn = jest.spyOn(global.console, 'warn');

    // Do something that may trigger warning via `console.warn`
    doSomething();
    // ... i.e.
    console.warn('stuff');

    // Check that warn was not called (fail on warning)
    expect(warn).not.toHaveBeenCalled();

    // Cleanup
    warn.mockReset();
    warn.mockRestore();
  });
});
There is a useful npm package that helps you to achieve that: jest-fail-on-console
It's easily configurable.
Install:
npm i -D jest-fail-on-console
Configure:
In a file used in the setupFilesAfterEnv option of Jest, add this code:
import failOnConsole from 'jest-fail-on-console'
failOnConsole()
// or with options:
failOnConsole({ shouldFailOnWarn: false })
I decided to post a full example based on user1823021's answer.
describe('#perform', () => {
  var api
  // the global.fetch is set to a jest.fn() object globally
  global.fetch = jest.fn()
  var warn = jest.spyOn(global.console, 'warn');

  beforeEach(function() {
    // before every test, all mocks need to be reset
    api = new Api()
    global.fetch.mockReset()
    warn.mockReset()
  });

  it('triggers a console.warn if fetch fails', function() {
    // In this test the fetch mock throws an error
    global.fetch.mockImplementationOnce(() => {
      throw 'error triggered'
    })

    // I run the test
    api.perform()

    // I verify that the warn spy has been triggered
    expect(warn).toHaveBeenCalledTimes(1);
    expect(warn).toBeCalledWith("api call failed with error: ", "error triggered")
  });

  it('calls fetch function', function() {
    // I create 2 more mock objects to verify the fetch parameters
    const url = jest.fn()
    const config = jest.fn()
    api.url = url
    api.config = config

    // I run the test
    api.perform()

    // I verify that fetch has been called with the url and config mocks
    expect(global.fetch).toHaveBeenCalledTimes(1)
    expect(global.fetch).toBeCalledWith(url, config)
    expect(warn).toHaveBeenCalledTimes(0)
  });
})
The #perform method I am testing:
class Api {
  constructor(auth) {
    this._credentials = auth
  }

  perform = async () => {
    try {
      return await fetch(this.url, this.config)
    } catch (error) {
      console.warn('api call failed with error: ', error)
    }
  }
}
You can set the environment variable CI=true before running jest which will cause it to fail tests on warnings in addition to errors.
Example which runs all test files in the test folder:
CI=true jest ./test
Automated CI/CD pipelines such as GitHub Actions set CI to true by default, which can be one reason why a unit test passes on your local machine when warnings are thrown but fails in the pipeline.
(Here is the Github Actions documentation on default environment variables: https://docs.github.com/en/actions/learn-github-actions/environment-variables#default-environment-variables)
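If you want to reproduce the pipeline behaviour locally, one option is to set the variable in your package.json test script; the cross-env package used here is an assumption, chosen so the command also works on Windows:

// package.json (scripts section), using the hypothetical script name "test:ci"
"scripts": {
  "test:ci": "cross-env CI=true jest ./test"
}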
