JestJS: see progress of test execution

I have a dynamic test that can run for 20 minutes, but I am only getting output at the end. Is there any way to see progress?
My code:
describe('scenarios', () => {
  function doTest(path) {
    it(`should pass ${path}`, () => {
      // ...
    });
  }
  for (var i = 0; i < 100; i++) {
    doTest(i);
  }
});

Unfortunately, there's no such possibility.
But since Jest treats each file as a test suite, you could split your test across separate files, name them accordingly, and let the CLI run them. I know it can get cumbersome, but at least it's an option.
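For example, a minimal sketch of that idea (the file name and the 25-scenarios-per-file split are assumptions, not from the question): each file covers a slice of the scenarios, so Jest reports a result as soon as each file finishes instead of only at the very end:
// scenarios-0.test.js: covers the first slice of scenarios (split size is assumed)
describe('scenarios 0-24', () => {
  for (let i = 0; i < 25; i++) {
    it(`should pass ${i}`, () => {
      // placeholder body; the real dynamic scenario from the question goes here
      expect(true).toBe(true);
    });
  }
});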
You can also get some inspiration from Prettier on how to handle loads of tests with customised functions:
https://github.com/prettier/prettier/blob/master/tests_config/run_spec.js

Related

Jest test won't pass, even if the first test is commented

I was writing tests for some authentication endpoints when I came across a bug-like issue that I cannot seem to figure out.
This test won't pass:
it("Testing /auth/signup/lol", () => {
const test = true;
expect(test).toBe(true);
console.log("finished test?");
});
The only way I can reproduce this issue is by taking all the setup and teardown code and moving it to another file, along with the test that's troubling me (as I have done; this test was copied from auth.test.js).
I've spent the morning trying to figure out the issue; the only time the test has passed is when I removed the setup code. However, on notebooks.test.js and auth.test.js (excluding the last two or three tests at the bottom of the auth.test.js script), the setup code works as intended.
What's wrong with this code?
Steps to reproduce:
Clone this repository
Switch to the develop branch
Go to the backend directory
Install all packages
Run 'npm test tests/endpoints/auth2.test.js'
I would post a small program reproducing the issue, but all attempts to do so failed.
There's a timeout in the afterEach() method because collections.users.drop() returns an undefined value (result is undefined).
At a minimum, add an else clause when testing the result value so you can exit the method before reaching the 10s timeout.
Maybe some additional code is required, but I don't know the logic of your code and tests, so perhaps afterEach should do more.
Here is the working code:
afterEach(done => {
  server.getTokenManager().empty()
  let collections = server.getClient()?.connection.collections
  if (collections === undefined) {
    done()
    return
  }
  if (collections.users !== undefined) {
    collections.users.drop((err: any, result: any) => {
      // drop() may call back with result === undefined, so signal
      // completion on both paths to avoid hitting the 10s timeout
      if (result) {
        done()
      } else {
        done()
      }
    })
  } else {
    // nothing to drop: finish immediately so the hook cannot time out
    done()
  }
}, 10000)
The rest of the code is ok.

Catch error when creating a custom expect in Jest

I created testing helper functions as globals in Jest so that we can use them throughout our application in every single one of our test files.
I cut this out of our testing file and moved it into the setup-global.js file, where I define global.loopExpect for the looping case:
global.loopExpect = (actuals) => {
  return {
    toContainKeys: (keys) => {
      if (actuals.length > 0) {
        for (var i = 0; i < actuals.length && i < 5; i++) {
          expect(actuals[i]).toContainKeys(keys)
        }
      }
    }
  }
}
get_array.test.js
test('check contains Book property', async () => {
  loopExpect([{title: 'SOLID Principles', author: 'Uncle Bob Martin'}])
    .toContainKeys(['year'])
})
When the test is run, I should get an error because the actuals object does not contain the year property, right?
I do get an error report in my terminal, but it shows the error coming from Object.toContainKeys in the file setup-global.js, even though I want the error message on the failed test to point at the test file get_array.test.js.
How can the test file catch the error (or the global.loopExpect() function throw it) so that Jest thinks the error is coming from the test file?
First, I don't think you need loopExpect, because there is a simpler and more durable way to implement this:
test('check contains Book property', async () =>
  [{title: 'SOLID Principles', author: 'Uncle Bob Martin'}].forEach(book =>
    expect(book).toContainKeys(['year'])
  )
)
But if you really want a helper function and you need to know where the test failed, one way is to look at the stack trace. And if you want the error to have a different stack trace, with the line from your test file on top, you can wrap your test in a try block and throw a new error from the catch block.
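A minimal sketch of that suggestion (assuming the loopExpect global from the question's setup-global.js):
test('check contains Book property', () => {
  try {
    loopExpect([{title: 'SOLID Principles', author: 'Uncle Bob Martin'}])
      .toContainKeys(['year'])
  } catch (error) {
    // Throwing a fresh Error here makes the top stack frame point at this
    // test file instead of setup-global.js
    throw new Error(error.message)
  }
})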

How to set a particular test file as the first when running Mocha?

Is there a way to set a particular test file to run first in Mocha, so that the rest of the test files can execute in any order?
One technique is to include a number in the test filename, such as:
01-first-test.js
02-second-test.js
03-third-test.js
With this naming, the tests will be executed in order from the first file to the third.
No. There is no guarantee your tests will run in any particular order. If you need to do some setup for the tests inside a given describe block, try using the before hook, as in the sketch below.
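A minimal sketch of the before hook (the suite and test names are placeholders):
describe('suite that needs setup', function () {
  before(function () {
    // runs once, before any test in this describe block
  });
  it('runs only after the setup in before() has completed', function () {
    // ...
  });
});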
There is no direct way, but there is certainly a solution to this. Wrap your describe block in function and call function accordingly.
firstFile.js
function first() {
  describe("first test", function () {
    it("should run first", function () {
      // your code
    });
  });
}
module.exports = {
  first
}
secondFile.js
function second() {
  describe("second test", function () {
    it("should run after first", function () {
      // your code
    });
  });
}
module.exports = {
  second
}
Then create one main file and import modules.
main.spec.js
const firstThis = require('./firstFile.js')
const secondSecond = require('./secondFile.js')
firstThis.first();
secondSecond.second();
In this way you can use core JavaScript features and play around with Mocha as well.
This is the solution I have been using for a long time. I would highly appreciate it if anyone came up with a better approach.

How can I build my test suite asynchronously?

I'm trying to create Mocha tests for my controllers using a config that has to be loaded asynchronously. Below is my code. However, when the Mocha test is run, it doesn't run any tests, displaying 0 passing. The console.logs are never even called. I tried doing before(next => config.build().then(next)) inside of the describe, but even though the tests run, before is never called. Is there a way to have the config be loaded one time before any tests are run?
'use strict';
const common = require('./common');
const config = require('../config');
config
  .build()
  .then(test);
function test() {
  console.log(1);
  describe('Unit Testing', () => {
    console.log(2);
    require('./auth');
  });
}
You should run Mocha with the --delay option, and then use run() once you are done building your test suite. Here is an example derived from the code you show in the question:
'use strict';
function test() {
  console.log(1);
  describe('Unit Testing', () => {
    console.log(2);
    it("test", () => {
      console.log(3);
    });
  });
  // You must use --delay for `run()` to be available to you.
  run();
}
setTimeout(test, 1000);
I'm using setTimeout to simulate an asynchronous operation. Using --delay and run() allows you to build a suite that is the result of an asynchronous computation. Note, however, that the suite must be built in one shot. (You cannot have an asynchronous process inside describe that will make calls to it. This won't work.)
One thing you should definitely not do is what rob3c suggests: calling describe or it (or both) from inside a hook. This is a mistake people make every now and then, so it is worth addressing in detail. The problem is that it is just not supported by Mocha, and therefore there are no established semantics associated with calling describe or it from inside a hook. It is possible to write simple examples that work as one might expect, but:
When the suite becomes more complex, the suite's behavior no longer corresponds to anything sensible.
Since there are no semantics associated with this approach, newer Mocha releases may handle the erroneous usage differently and break your suite.
Consider this simple example:
const assert = require("assert");
const p = Promise.resolve(["foo", "bar", "baz"]);
describe("top", () => {
  let flag;
  before(() => {
    flag = true;
    return p.then((names) => {
      describe("embedded", () => {
        for (const name of names) {
          it(name, () => {
            assert(flag);
          });
        }
      });
    });
  });
  after(() => {
    flag = false;
  });
  it("regular test", () => {
    assert(flag);
  });
});
When we run it, we get:
top
  ✓ regular test
embedded
  1) foo
  2) bar
  3) baz
1 passing (32ms)
3 failing
// [stack traces omitted for brevity]
What's going on here? Shouldn't all the tests pass? We set flag to true in the before hook for the top describe. All tests we create in it should see flag as true, no? The clue is in the output above: when we create tests inside a hook, Mocha will put the tests somewhere, but it may not be in a location that reflects the structure of the describe blocks in the code. What happens in this case is that Mocha just appends the tests created in the hook to the very end of the suite, outside the top describe, so the after hook runs before the dynamically created tests, and we get a counter-intuitive result.
Using --delay and run(), we can write a suite that behaves in a way concordant with intuition:
const assert = require("assert");
const p = Promise.resolve(["foo", "bar", "baz"]).then((names) => {
  describe("top", () => {
    let flag;
    before(() => {
      flag = true;
    });
    after(() => {
      flag = false;
    });
    describe("embedded", () => {
      for (const name of names) {
        it(name, () => {
          assert(flag);
        });
      }
    });
    it("regular test", () => {
      assert(flag);
    });
  });
  run();
});
Output:
top
  ✓ regular test
  embedded
    ✓ foo
    ✓ bar
    ✓ baz
4 passing (19ms)
In modern environments, you can use top-level await to fetch your data up front. This is a documented approach for Mocha: https://mochajs.org/#dynamically-generating-tests
Slightly adapting the example from the Mocha docs to show the general idea:
function fetchData() {
  return new Promise((resolve) => setTimeout(resolve, 5000, [1, 2, 3]));
}
// top-level await: Node >= v14.8.0 with ESM test file
const data = await fetchData();
describe("dynamic tests", function () {
  data.forEach((value) => {
    it(`can use async data: ${value}`, function () {
      // do something with data here
    });
  });
});
This is nice as it is on a per-file basis, and doesn't involve you taking on management responsibility of the test runner as you do with --delay.
The problem with using the --delay command-line flag and the run() callback that @Louis mentioned in his accepted answer is that run() is a single global hook that delays the root test suite. Therefore, you have to build all your suites at once (as he mentioned), which can make organizing tests a hassle (to say the least).
However, I prefer to avoid magic flags whenever possible, and I certainly don't want to have to manage my entire test suite in a single global run() callback. Fortunately, there's a way to dynamically create the tests on a per-file basis, and it doesn't require any special flags, either :-)
To dynamically create it() tests in any test source file using data obtained asynchronously, you can (ab)use the before() hook with a placeholder it() test to ensure Mocha waits until before() is run. Here's the example from my answer to a related question, for convenience:
before(function () {
  console.log('Let the abuse begin...');
  return promiseFn().then(function (testSuite) {
    describe('here are some dynamic It() tests', function () {
      testSuite.specs.forEach(function (spec) {
        it(spec.description, function () {
          var actualResult = runMyTest(spec);
          assert.equal(actualResult, spec.expectedResult);
        });
      });
    });
  });
});
it('This is a required placeholder to allow before() to work', function () {
  console.log('Mocha should not require this hack IMHO');
});

CasperJS - grunt-casper: running multiple test suites in a loop

Preparation
Hi, I am using CasperJS in combination with grunt-casper (github.com/iamchrismiller/grunt-casper) to run automated functional and regression tests for verification in our GUI development process.
We use it like this, with the casper runner in gruntfile.js:
casper: {
  componentTests: {
    options: {
      args: ['--ssl-protocol=any', '--ignore-ssl-errors=true', '--web-security=no'],
      test: true,
      includes: ['tests/testutils/testutils.js']
    },
    files: {
      'tests/testruns/logfiles/<%= grunt.template.today("yyyy-mm-dd-hhMMss") %>/componenttests/concat-testresults.xml': [
        'tests/functionaltests/componenttests/componentTestController.js']
    }
  },
As can be seen here, we just run the Casper tests normally with the SSL params, calling only ONE controller class instead of listing the individual tests (this is one of the roots of my problem). grunt-casper delivers the object that is in charge of testing, and inside every controller class the tests are included and concatenated....
...now componentTestController.js looks like the following:
var config = require('../../../testconfiguration');
var urls = config.test_styleguide_components.urls;
var viewportSizes = config.test_styleguide_components.viewportSizes;
var testfiles = config.test_styleguide_components.testfiles;
var tempCaptureFolder = 'tests/testruns/temprun/';
var testutils = new testutils();
var x = require('casper').selectXPath;
casper.test.begin('COMPONENT TEST CONTROLLER', function(test) {
  casper.start();
  /* Run tests for all given URLs */
  casper.each(urls, function(self, url, i) {
    casper.thenOpen(url, function() {
      /* Test different viewport resolutions for every URL */
      casper.each(viewportSizes, function(self, actViewport, j) {
        /* Reset the viewport */
        casper.then(function() {
          casper.viewport(actViewport[0], actViewport[1]);
        });
        /* Run the respective tests */
        casper.then(function() {
          /* Single tests for every resolution and link */
          casper.each(testfiles, function(self, actTest, k) {
            casper.then(function() {
              require('.'+actTest);
            });
          });
        });
      });
    });
  });
  casper.run(function() {
    test.done();
  });
});
Here you can see that we are running a three-level loop, testing:
ALL URLs given in a JSON config file, contained in an ARRAY of strings ["url1.com","url2.com"....."urln.com"]
ALL VIEWPORT SIZES, so that every URL is tested in our desired viewport resolutions to verify the correct responsive behaviour of the components
ALL TESTFILES; each testfile only includes a TEST STUB, which means no start, begin, or anything else; it all runs inside one large test surrounding.
MAYBE this is already messy and can be done in a better way; if that is the case, I would be glad if someone had proposals here, but don't forget that grunt-casper is involved as the runner.
Question
So far, so good: the tool in general works fine and the construction we built works as we desired. But the problem is that, because all testfiles are run in one large single context, one failing component fails the whole suite.
In normal cases this is behaviour I would support, but in our circumstances I do not see any better solution than to log the error / fail the single test component and carry on.
Example:
I run a test, which is set up as described above, and in this part:
/* Single tests for every resolution and link */
casper.each(testfiles, function(self, actTest, k) {
  casper.then(function() {
    require('.'+actTest);
  });
});
we include two testfiles that look like the following:
Included testfile1.js
casper.then(function () {
  casper.waitForSelector(x("//a[normalize-space(text())='Atoms']"),
    function success() {
      casper.test.assertExists(x("//a[normalize-space(text())='Atoms']"));
      casper.click(x("//a[normalize-space(text())='Atoms']"));
    },
    function fail() {
      casper.test.assertExists(x("//a[normalize-space(text())='Atoms']"));
    });
});
Included testfile2.js
casper.then(function () {
  casper.waitForSelector(x("//a[normalize-space(text())='Buttons']"),
    function success() {
      casper.test.assertExists(x("//a[normalize-space(text())='Buttons']"));
      casper.click(x("//a[normalize-space(text())='Buttons']"));
    },
    function fail() {
      testutils.createErrorScreenshot('#menu > li.active > ul > li:nth-child(7)', tempCaptureFolder, casper, 'BUTTONGROUPS#2-buttons-menu-does-not-exist.png');
      casper.test.assertExists(x("//a[normalize-space(text())='Buttons']"));
    });
});
So if the assert in testfile1.js fails, everything fails. How can I move on to testfile2.js even if the first one fails? Is this possible to configure? Can it be encapsulated somehow?
FYI, this did not work:
https://groups.google.com/d/msg/casperjs/3jlBIx96Tb8/RRPA9X8v6w4J
Almost similar problems
My problem is almost the same as this one:
https://stackoverflow.com/a/27755205/4353553
And this person has a slightly different approach, which I also tried, but ran into the same problems because multiple test suites run in a loop cause issues:
groups.google.com/forum/#!topic/casperjs/VrtkdGQl3FA
MANY THANKS IN ADVANCE
Hopefully I understood what you were asking: is there a way to suppress the failed assertion's exception-throwing behavior?
The Tester's assert method actually allows for overriding the default behavior of throwing an exception on a failed assertion:
var message = "This test will always fail, but never throw an exception and fail the whole suite.";
test.assert(false, message, { doThrow: false });
Not all assert helpers have this option though and the only other solution I can think of is catching the exception:
var message = "This test will always fail, but never throw an exception and fail the whole suite.";
try {
  test.assertEquals(true, false, message);
} catch (e) { /* Ignore thrown exception. */ }
Both of these approaches are far from ideal, since the requirement in our case is to not throw for any assertion.
Another short-term solution, which requires overriding the Tester instance's core assert method (and is still quite hacky), is:
// Override the default assert method. Hopefully the test
// casper property doesn't change between running suites.
casper.test.assert =
casper.test.assertTrue = (function () {
  // Save original assert.
  var __assert = casper.test.assert;
  return function (subject, message, context) {
    casper.log('Custom assert called!', 'debug');
    try {
      return __assert.apply(casper.test, arguments);
    }
    catch (error) {
      casper.test.processAssertionResult(error.result);
      return false;
    }
  };
})();
That said, I'm currently looking for a non-intrusive solution to this "issue" (not being able to set a default value for doThrow) as well. The above is relative to CasperJS 1.1-beta3, which I'm currently using.
