I am running Jest and am trying to log the start and end timestamp for each of my tests. I am trying to stick my timestamp logging inside the beforeEach() and afterEach() blocks. How would I log the name of my Jest test within the beforeEach() and afterEach() block?
Also, is there a more global way of logging test name and timestamp before and after all the tests without using beforeEach() and afterEach()?
You can access the name of the current test in jest like this:
expect.getState().currentTestName
This also works inside beforeEach / afterEach.
The only downside is that the value also contains the name of the enclosing describe block (which may be fine, depending on what you are trying to do).
It does not, by itself, give you the timing information you asked for.
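For example, combined with Date.now() that is already enough for per-test start/end logging (a minimal sketch; the exact log format is my own assumption):
let startedAt;

beforeEach(() => {
  startedAt = Date.now();
  console.log(`START ${expect.getState().currentTestName} at ${new Date(startedAt).toISOString()}`);
});

afterEach(() => {
  const endedAt = Date.now();
  console.log(`END ${expect.getState().currentTestName} after ${endedAt - startedAt} ms`);
});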
The information on the currently running test is unavailable in beforeEach. As in Jasmine, the suite object is available in Jest as the this context of the describe callback, so it is possible to patch spec definitions to expose the data you need. A more trivial way would be to define a custom wrapper around the global it that intercepts the test name.
A custom reporter is a better way to do this. The reporter interface is self-documented, and the necessary data is available in testResult.
Performance measurements are already available:
module.exports = class TimeReporter {
  onTestResult(test, testResult, aggregatedResult) {
    for (let { title, duration } of testResult.testResults)
      console.log(`test '${title}': ${duration} ms`);
  }
}
It can be enabled in the Jest config like this:
reporters: ['default', '<rootDir>/time-reporter.js']
As noted, there are also beforeAll and afterAll; they run once per describe test group.
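For the "before and after all the tests" part of the question, a minimal sketch with beforeAll / afterAll (placed at the top level of a test file they run once per file; the log format is again my own assumption):
let suiteStartedAt;

beforeAll(() => {
  suiteStartedAt = Date.now();
  console.log(`Suite started at ${new Date(suiteStartedAt).toISOString()}`);
});

afterAll(() => {
  console.log(`Suite finished after ${Date.now() - suiteStartedAt} ms`);
});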
You can set up a test environment and either log times directly or write the name and timing info into a global variable that is only available inside the tests in question:
./tests/testEnvironment.js
const NodeEnvironment = require('jest-environment-node');

class TestEnvironment extends NodeEnvironment {
  constructor(config, context) {
    super(config, context);
  }

  async setup() {
    await super.setup();
  }

  async teardown() {
    await super.teardown();
  }

  async handleTestEvent(event, state) {
    if (event.name === 'test_start') {
      // Log things when the test starts
    } else if (event.name === 'test_done') {
      console.log(event.test.name);
      console.log(event.test.startedAt);
      console.log(event.test.duration);
      this.global.someVar = 'set up vars that are available as globals inside the tests';
    }
  }
}

module.exports = TestEnvironment;
To use this environment, each test suite needs the following docblock comment at the top of the file:
/**
 * @jest-environment ./tests/testEnvironment
 */
Also see https://jestjs.io/docs/configuration#testenvironment-string
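If you would rather not repeat the docblock in every file, the same environment can be applied globally via the Jest config instead (a sketch, assuming a standard jest.config.js):
module.exports = {
  testEnvironment: '<rootDir>/tests/testEnvironment.js',
};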
Related
I want to test my service methods, which depend on the external module azure-devops-extension-api.
Currently I have created __mocks__\azure-devops-extension-sdk.ts and provided the custom implementation that needs to be mocked. It works fine, but there are some issues with unit testing, as asked in Methods called on jest mocked response fails with error.
However, I don't have the flexibility to change values for each unit test. The values that need to be provided by the mock will vary per unit test case.
For example, below is my service method, which uses the getService() method from the SDK of the azure-devops-extension-api module.
import * as SDK from 'azure-devops-extension-sdk';
// more stmts
const projectService = await SDK.getService<IProjectPageService>(CommonServiceIds.ProjectPageService);
const project = await projectService.getProject();
For the first unit test I want to mock the getProject() method to return a valid value, and for the second unit test I want it to return null.
The most common solution is the approach below, but as mentioned, I want the flexibility to mock differently for each unit test.
jest.mock('azure-devops-extension-sdk', () => {
  return { getService: (client: any) => ({ id: 1, name: 'project' }) };
});
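To illustrate, this is roughly the per-test flexibility I am after (projectService.ts and getCurrentProject are placeholders for my real module and method, and I assume getCurrentProject just returns whatever getProject resolves to):
import * as SDK from 'azure-devops-extension-sdk';
import { getCurrentProject } from './projectService';

jest.mock('azure-devops-extension-sdk');

it('returns the project when the SDK provides one', async () => {
  (SDK.getService as jest.Mock).mockResolvedValueOnce({
    getProject: jest.fn().mockResolvedValue({ id: 1, name: 'project' }),
  });
  await expect(getCurrentProject()).resolves.toEqual({ id: 1, name: 'project' });
});

it('returns null when the SDK has no project', async () => {
  (SDK.getService as jest.Mock).mockResolvedValueOnce({
    getProject: jest.fn().mockResolvedValue(null),
  });
  await expect(getCurrentProject()).resolves.toBeNull();
});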
In a Node/Express server, we use a repository that needs to be unit-tested using Jest.
//Private things
let products;

function loadProducts() {
  if (!products)
    products = fetchProductsFromSomeDbOrServiceOrWhatever()
}

function saveProducts() {
  persistPrivateProductsToADbOrServiceOrWhatever()
}

// Exported/public things
export function read() {
  loadProducts();
  return products;
}

export function add(product) {
  loadProducts();
  products.push(product);
  saveProducts();
}
We want to unit test like this:
import { read, add } from './productRepo';
it('can read products', () => {
  expect(read().length).toBe(5);
});

it('can add a product', () => {
  const oldNum = read().length;
  add({ id: 0, name: 'test prod', moreProps });
  expect(read().length).toBe(oldNum + 1)
});
You get the idea. It's not a class so we can't mess with the prototype.
Problem: How do I mock the private products and/or loadProducts and/or saveProducts so that it isn't reading from the actual data source?
Presumably these private functions call out to other pieces of functionality you've written yourself or imported from libraries.
function loadProducts() {
  if (!products)
    products = fetchProductsFromSomeDbOrServiceOrWhatever()
}

function saveProducts() {
  persistPrivateProductsToADbOrServiceOrWhatever()
}
Let's take fetchProductsFromSomeDbOrServiceOrWhatever as the example. One basic architectural consideration to make the code properly encapsulated and testable is to put this functionality in a separate module. So I would expect an import at the head of the file:
import fetchProductsFromSomeDbOrServiceOrWhatever from './fetchProductsFromSomeDbOrServiceOrWhatever'
So in this case just mock it in your test file:
jest.mock('./fetchProductsFromSomeDbOrServiceOrWhatever');
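A minimal sketch of how that could look (assuming the loader is the module's default export and read comes from the productRepo shown in the question):
import fetchProducts from './fetchProductsFromSomeDbOrServiceOrWhatever';
import { read } from './productRepo';

jest.mock('./fetchProductsFromSomeDbOrServiceOrWhatever');

it('can read products without hitting the real data source', () => {
  // Automocking turns the default export into a jest.fn() we can program per test.
  fetchProducts.mockReturnValue([{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }]);
  expect(read().length).toBe(5);
});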
If the functionality is not extracted into a separate module this makes your code less testable; this on its own is a good reason to refactor.
Note: the other replies on this thread are correct when they say that private functions of classes should not be tested, but I think that is a slightly different issue from the one you are asking.
First, initialize products as an empty array; otherwise the tests are doomed to fail on the undefined value. Adjust the emptiness check accordingly.
Then parametrize your loader and saver functions so that the exported functions become testable. Finally, write tests for your loader and saver functions outside of this repo module.
// assumed imports
const fetchProductsFromSomeDbOrServiceOrWhatever = () => {}
const persistPrivateProductsToADbOrServiceOrWhatever = () => {}

//Private things
let products = [];

function loadProducts(loader) {
  loader = loader || fetchProductsFromSomeDbOrServiceOrWhatever
  if (products.length == 0)
    products = loader()
}

function saveProducts(saver) {
  saver = saver || persistPrivateProductsToADbOrServiceOrWhatever
  saver()
}

// Exported/public things
export function read(loader) {
  loadProducts(loader);
  return products;
}

export function add(product, loader, saver) {
  loadProducts(loader);
  products.push(product);
  saveProducts(saver);
}
Both exported functions can now use the fetch/persist functions either via the imports or as arguments.
What remains is mocking the loader and saver functions. The saver does not change anything, so it can be null or empty, but if you want to check that it is called, you need to mock it.
import { jest } from '@jest/globals'
import { read, add } from './productRepo';

it('can read products', () => {
  const loader = jest.fn().mockReturnValue([{ id: 7 }, { id: 42 }])
  expect(read(loader).length).toBe(2);
  expect(loader).toBeCalledTimes(1)
});

it('can add a product', () => {
  const loader = jest.fn().mockReturnValue([{ id: 7 }, { id: 42 }])
  const saver = jest.fn()
  const oldNum = read(loader).length;
  add({ id: 0, name: 'test prod' }, loader, saver);
  expect(read(loader).length).toBe(oldNum + 1)
  expect(loader).toBeCalledTimes(0)
  expect(saver).toBeCalledTimes(1)
});
There is a "gotcha" here. Since productRepo is imported once, loader is called in the first test but will not be called again in the second test since the first has already changed the products. Thus subsequent tests must take this into account when using non-class packages.
You should not access private properties or methods directly anyway.
Instead, you can provide setters and getters for your properties.
For methods, I believe you can split them into private parts (for your actual data source) and public parts that can also be used in tests.
I suggest implementing an initialize method on productRepo.js.
export function init(data) {
  products = data
}
Then, you can init products with mocked data.
Also, if you can't change the file, you could use the rewire library, which lets you access non-exported functions and variables.
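A small sketch of the rewire approach (assuming ./productRepo.js is consumable as CommonJS; __set__ injects the private products variable directly so the real data source is never touched):
const rewire = require('rewire');

it('can read products without touching the real data source', () => {
  const productRepo = rewire('./productRepo');
  productRepo.__set__('products', [{ id: 7 }, { id: 42 }]);
  expect(productRepo.read().length).toBe(2);
});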
Preparation
Hi, I am using CasperJS in combination with grunt-casper (github.com/iamchrismiller/grunt-casper) to run automated functional and regression tests for verification in our GUI development process.
We use it like this, casper runner in gruntfile.js:
casper: {
  componentTests: {
    options: {
      args: ['--ssl-protocol=any', '--ignore-ssl-errors=true', '--web-security=no'],
      test: true,
      includes: ['tests/testutils/testutils.js']
    },
    files: {
      'tests/testruns/logfiles/<%= grunt.template.today("yyyy-mm-dd-hhMMss") %>/componenttests/concat-testresults.xml': [
        'tests/functionaltests/componenttests/componentTestController.js']
    }
  },
So, as you can see here, we run the casper tests with SSL params and call only ONE controller class instead of listing the individual tests (this is one of the roots of my problem). grunt-casper delivers the object that is in charge of testing, and inside every controller class the tests are included and concatenated...
...now the componentTestController.js looks like the following:
var config = require('../../../testconfiguration');
var urls = config.test_styleguide_components.urls;
var viewportSizes = config.test_styleguide_components.viewportSizes;
var testfiles = config.test_styleguide_components.testfiles;
var tempCaptureFolder = 'tests/testruns/temprun/';
var testutils = new testutils();
var x = require('casper').selectXPath;

casper.test.begin('COMPONENT TEST CONTROLLER', function(test) {
  casper.start();

  /* Run tests for all given URLs */
  casper.each(urls, function(self, url, i) {
    casper.thenOpen(url, function() {
      /* Test different viewport resolutions for every URL */
      casper.each(viewportSizes, function(self, actViewport, j) {
        /* Reset the viewport */
        casper.then(function() {
          casper.viewport(actViewport[0], actViewport[1]);
        });
        /* Run the respective tests */
        casper.then(function() {
          /* Single tests for every resolution and link */
          casper.each(testfiles, function(self, actTest, k) {
            casper.then(function() {
              require('.' + actTest);
            });
          });
        });
      });
    });
  });

  casper.run(function() {
    test.done();
  });
});
Here you can see that we are running a three-level loop, testing:
ALL URLs given in a JSON config file, contained in an ARRAY of strings ["url1.com", "url2.com", ..., "urln.com"]
ALL VIEWPORT SIZES, so that every URL is tested in our desired viewport resolutions to verify the correct responsive behaviour of the components
ALL TESTFILES; each testfile only contains a TEST STUB, meaning there is no start, begin or anything else, it all runs inside one large test surrounding.
Maybe this is already messy and can be done in a better way; if that is the case I would be glad to hear proposals, but don't forget that grunt-casper is involved as the runner.
Question
So far, so good: the tool generally works fine and the construction we built behaves as we desired. The problem is that, because all testfiles run in one large shared context, one failing component fails the whole suite.
Normally this is behaviour I would support, but in our circumstances I do not see any better option than logging the error / failing the single test component and moving on.
Example:
I run a test, which is setUp like described above and in this part:
/* Single tests for every resolution and link */
casper.each(testfiles, function(self, actTest, k) {
  casper.then(function() {
    require('.' + actTest);
  });
});
we include 2 testfiles looking like the following:
Included testfile1.js
casper.then(function () {
  casper.waitForSelector(x("//a[normalize-space(text())='Atoms']"),
    function success() {
      casper.test.assertExists(x("//a[normalize-space(text())='Atoms']"));
      casper.click(x("//a[normalize-space(text())='Atoms']"));
    },
    function fail() {
      casper.test.assertExists(x("//a[normalize-space(text())='Atoms']"));
    });
});
Included testfile2.js
casper.then(function () {
  casper.waitForSelector(x("//a[normalize-space(text())='Buttons']"),
    function success() {
      casper.test.assertExists(x("//a[normalize-space(text())='Buttons']"));
      casper.click(x("//a[normalize-space(text())='Buttons']"));
    },
    function fail() {
      testutils.createErrorScreenshot('#menu > li.active > ul > li:nth-child(7)', tempCaptureFolder, casper, 'BUTTONGROUPS#2-buttons-menu-does-not-exist.png');
      casper.test.assertExists(x("//a[normalize-space(text())='Buttons']"));
    });
});
So if the assert in testfile1.js fails, everything fails. How can I move on to testfile2.js even if the first one fails? Is this possible to configure? Can it be encapsulated somehow?
FYI, this did not work:
https://groups.google.com/d/msg/casperjs/3jlBIx96Tb8/RRPA9X8v6w4J
Almost similar problems
My problem is almost the same as this one:
https://stackoverflow.com/a/27755205/4353553
And this person takes yet another approach that I tried, but I ran into his problems too, because multiple test suites run in a loop cause issues:
groups.google.com/forum/#!topic/casperjs/VrtkdGQl3FA
Many thanks in advance.
Hopefully I understood what you were asking: is there a way to suppress the failed assertion's exception-throwing behaviour?
The Tester's assert method actually allows for overriding the default behavior of throwing an exception on a failed assertion:
var message = "This test will always fail, but never throw an exception and fail the whole suite.";
test.assert(false, message, { doThrow: false });
Not all assert helpers have this option though and the only other solution I can think of is catching the exception:
var message = "This test will always fail, but never throw an exception and fail the whole suite.";
try {
  test.assertEquals(true, false, message);
} catch (e) { /* Ignore thrown exception. */ }
Both of these approaches are far from ideal, since the requirement in our case is not to throw for all assertions.
Another short-term solution, which requires overriding the Tester instance's core assert method (and is still quite hacky):
// Override the default assert method. Hopefully the test
// casper property doesn't change between running suites.
casper.test.assert =
casper.test.assertTrue = (function () {
  // Save original assert.
  var __assert = casper.test.assert;
  return function (subject, message, context) {
    casper.log('Custom assert called!', 'debug');
    try {
      return __assert.apply(casper.test, arguments);
    }
    catch (error) {
      casper.test.processAssertionResult(error.result);
      return false;
    }
  };
})();
That said, I'm currently looking for a non-intrusive solution to this "issue" (not being able to set a default value for doThrow) as well. The above is relative to Casper JS 1.1-beta3 which I'm currently using.
The documentation at the official Mocha site contains this example:
describe('User', function(){
  describe('#save()', function(){
    it('should save without error', function(done){
      var user = new User('Luna');
      user.save(function(err){
        if (err) throw err;
        done();
      });
    })
  })
})
I want to know when I should nest my tests in the describe function and what the basic purpose of describe is. Can I compare the first argument passed to describe to comments in a programming language? Nothing from describe is shown in the console output. Is it only for readability purposes, or is there some other use for this function?
Is there anything wrong if I use it like this?
describe('User', function(){
  describe('#save()', function(){
    var user = new User('Luna');
    user.save(function(err){
      if (err) throw err;
      done();
    })
  })
})
If I do it this way, the test still passes.
The it call identifies each individual test, but by itself it does not tell Mocha anything about how your test suite is structured. How you use the describe call is what gives your suite its structure. The sections below cover some of the things that structuring your test suite with describe does for you. Here's an example of a test suite, simplified for the purpose of discussion:
function Foo() {
}

describe("Foo", function () {
  var foo;

  beforeEach(function () {
    foo = new Foo();
  });

  describe("#clone", function () {
    beforeEach(function () {
      // Some other hook
    });

    it("clones the object", function () {
    });
  });

  describe("#equals", function () {
    it("returns true when the object passed is the same", function () {
    });

    it("returns false, when...", function () {
    });
  });

  afterEach(function () {
    // Destroy the foo that was created.
    // foo.destroy();
  });
});

function Bar() {
}

describe("Bar", function () {
  describe("#clone", function () {
    it("clones the object", function () {
    });
  });
});
Imagine that Foo and Bar are full-fledged classes. Foo has clone and equals methods. Bar has clone. The structure I have above is one possible way to structure tests for these classes.
(The # notation is used by some systems (like for instance, jsdoc) to indicate an instance field. So when used with a method name, it indicates a method called on an instance of the class (rather than a class method, which is called on the class itself). The test suite would run just as well without the presence of #.)
Provide Banners
Some of Mocha's reporters show the names you give to describe in the reports they produce. For instance, the spec reporter (which you can use by running $ mocha -R spec), would report:
Foo
  #clone
    ✓ clones the object
  #equals
    ✓ returns true when the object passed is the same
    ✓ returns false, when...

Bar
  #clone
    ✓ clones the object

4 passing (4ms)
Help Select Parts to Run
If you want to run only some of the tests, you can use the --grep option. So if you care only about the Bar class, you can do $ mocha -R spec --grep Bar, and get the output:
Bar
  #clone
    ✓ clones the object

1 passing (4ms)
Or if you care only about the clone methods of all classes, then $ mocha -R spec --grep '\bclone\b' and get the output:
Foo
  #clone
    ✓ clones the object

Bar
  #clone
    ✓ clones the object

2 passing (5ms)
The value given to --grep is interpreted as a regex so when I pass \bclone\b I'm asking only for the word clone, and not things like clones or cloned.
Provide Hooks
In the example above the beforeEach and afterEach calls are hooks. Each hook affects the it calls that are inside the describe call which is the parent of the hook. The various hooks are:
beforeEach which runs before each individual it inside the describe call.
afterEach which runs after each individual it inside the describe call.
before which runs once before any of the individual it inside the describe call is run.
after which runs once after all the individual it inside the describe call are run.
These hooks can be used to acquire resources or create data structures needed for the tests and then release resources or destroy these structures (if needed) after the tests are done.
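For instance, a small sketch of that acquire/release pattern (connectToTestDatabase and its methods are made-up helpers):
var assert = require('assert');

describe("ProductStore", function () {
  var db;

  before(function () {
    // Acquire an expensive resource once for the whole group.
    db = connectToTestDatabase();
  });

  beforeEach(function () {
    // Reset state before every individual test.
    db.clear();
  });

  after(function () {
    // Release the resource after the whole group has run.
    db.close();
  });

  it("starts out empty", function () {
    assert.equal(db.count(), 0);
  });
});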
The snippet you show at the end of your question won't generate an error but it does not actually contain any test, because tests are defined by it.
It's hard to add to Louis' excellent answer. There are a couple of advantages of the describe block that he didn't mention which are the skip and only functions.
describe.skip('...', function() {
  // ...
});
will skip this describe block and all of its nested describe and it calls, while:
describe.only('...', function() {
  // ...
});
will execute only that describe block and its nested describe and it calls. The .skip and .only modifiers can also be applied to it().
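For example (a quick sketch):
describe("Foo", function () {
  it.skip("is skipped and reported as pending", function () {
    // Never runs.
  });

  it("runs normally", function () {
  });
});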
To my knowledge, describe is really just there for humans... So we can see different areas of the app. You can nest describe n levels deep.
describe('user', function(){
  describe('create', function(){})
});
describe is used both to make the purpose of the tests understandable and to logically group tests. Say you are testing database APIs: all the database tests could go under an outer describe, so the outer describe logically groups everything database-related. If there are 10 database-related APIs to test, each inner describe then states what those tests cover.
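A quick sketch of that grouping (the API names here are made up):
describe("Database APIs", function () {
  describe("insert()", function () {
    it("stores a new record", function () { /* ... */ });
  });

  describe("find()", function () {
    it("returns matching records", function () { /* ... */ });
  });
});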
The particular role of describe is to indicate which component is being tested and which method of that component is being tested.
For example, let's say we have a User constructor:
var User = function() {
  const self = this;

  function setName(name) {
    self.name = name
  }

  function getName(name) {
    return self.name;
  }

  return { setName, getName };
}

module.exports = User;
And it needs to be tested, so a spec file is created for the unit test:
var assert = require('assert');
var User = require("../controllers/user.controller");

describe("User", function() {
  describe('setName', function() {
    it("should set the name on user", function() {
      const pedro = new User();
      const name = "Pedro";
      pedro.setName(name);
      assert.equal(pedro.getName(), name);
    });
  });
});
It is easy to see that the purpose of the outer describe is to indicate the component to be tested, while the nested describe indicates which method is to be tested.
I'm trying to unit test a CouchDB design doc (written using couchapp.js), example:
var ddoc = {
  _id: '_design/example',
  views: {
    example: {
      map: function(doc) {
        emit(doc.owner.id, doc);
      }
    }
  }
}

module.exports = ddoc
I can then require this file into a mocha test very easily.
The problem is CouchDB exposes a few global functions that the map functions use ("emit" function above) which are unavailable outside of CouchDB (i.e. in these unit tests).
I attempted to declare a global function in each test, for example:
var ddoc = require('../example.js')

describe('views', function() {
  describe('example', function() {
    it('should return the id and same doc', function() {
      var doc = {
        owner: {
          id: 'a123456789'
        }
      }

      // Globally-scoped mock of the unavailable couchdb 'emit' function
      emit = function(id, value) {
        assert.equal(doc.owner.id, id);
        assert.equal(doc, value);
      }

      ddoc.views.example.map(doc);
    })
  })
})
But Mocha fails, complaining about a global leak.
All of this together started to "smell wrong", so I'm wondering whether there is a better/simpler approach via any library, even outside of Mocha?
Basically I'd like to make mock implementations available per test which I can call asserts from.
Any ideas?
I'd use sinon to stub and spy the tests. http://sinonjs.org/ and https://github.com/domenic/sinon-chai
Globals are, well, undesirable, but hard to eliminate. I'm doing some jQuery-related testing right now and have to use --globals window,document,navigator,jQuery,$ at the end of my mocha command line, so... yeah.
You aren't testing CouchDb's emit, so you should stub it since a) you assume that it works and b) you know what it will return
global.emit = sinon.stub().returns(42);
// run your tests etc
// assert that the emit was called
This part of the sinon docs might be helpful:
it("makes a GET request for todo items", function () {
sinon.stub(jQuery, "ajax");
getTodos(42, sinon.spy());
assert(jQuery.ajax.calledWithMatch({ url: "/todo/42/items" }));
});
Hope that helps.