How to run tests related to changed files in Cypress

I'm using Cypress to set up E2E tests.
But I'm facing some trouble: every time I implement a new feature or refactor some code, I need to run all my tests to check that my modifications didn't break something in my application.
In Jest we have the --findRelatedTests flag, which runs only the test files related to the modified files.
I wonder if there's a way to do the same in Cypress.

Are you looking for the plugin cypress-watch-and-reload?
// cypress.json
{
  "cypress-watch-and-reload": {
    // watch source and page-object (helper) files
    "watch": ["page/*", "src/*.js"]
  }
}
YouTube - Re-run Cypress Tests When Application Files Change
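For reference, wiring the plugin in involves registering it from your plugins and support files. The require paths below follow my reading of the plugin's README, so treat this as a sketch and verify against the current docs:
// cypress/plugins/index.js -- register the plugin (sketch; verify path and signature)
module.exports = (on, config) => {
  require('cypress-watch-and-reload/plugins')(config);
  return config;
};

// cypress/support/index.js -- load the client-side watcher (sketch)
require('cypress-watch-and-reload/support');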

If you are running tests locally, there are a few options.
Skip Specific Tests
If you want to skip a specific test group or test case, one approach is to add .skip to the context or it block, i.e. context.skip() or it.skip().
context.skip('Test group', () => {
  // This whole test group will be skipped
  it('Test case 1', () => {
    // This test case will not run because the test group is skipped
  });
});

context('Test group', () => {
  it.skip('test case1', () => {
    // Test case one will be skipped
  });
  it('test case2', () => {
    // Detail for test case two
    // This will execute
  });
});
Run Specific/Modified Tests Only
To run only certain tests, add .only to the context or it block, i.e. context.only() or it.only().
// Only the 1st test group will run
context.only('test group1', () => {
  it('test case', () => {
    // Test case one will run
  });
  it('test case2', () => {
    // Test case two will run
  });
});

// The 2nd test group will not run
context('test group2', () => {
  it('test case3', () => {
    // Test case three will be skipped
  });
  it('test case4', () => {
    // Test case four will be skipped
  });
});

context('test group', () => {
  it.only('test case1', () => {
    // Test case one will run
  });
  it('test case2', () => {
    // Test case two will be skipped
  });
});
Run Tests Conditionally by Using cypress.json
Another way to run tests conditionally is the cypress.json file.
If you only want to run the tests in the test.spec.js file, just add its path to testFiles in cypress.json:
{
  "testFiles": "**/test.spec.js"
}
To run multiple test files:
{
  "testFiles": ["**/test-1.spec.js", "**/test-2.spec.js"]
}
To ignore specific tests from your test runs:
{
  "ignoreTestFiles": "**/*.ignore.js"
}
Run Tests Conditionally by Using the Command Line
To run a single spec file:
npx cypress run --spec "cypress/integration/test.spec.js"
To run all the tests in a specific folder whose file names end with .spec.js:
npx cypress run --spec "cypress/integration/testgroup-directory-name/*.spec.js"
If you are using CI/CD, this article can help you get an idea:
Faster test execution with cypress-grep
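For example, after registering the plugin in your support file, you can tag tests and run only the tagged subset. This follows the @cypress/grep README as I understand it; verify the exact setup for your Cypress version:
// In a spec file: tag the test via its config object.
it('logs the user in', { tags: '@smoke' }, () => {
  // ...test body...
});

// Then, from the CLI:
//   npx cypress run --env grepTags=@smoke      (run by tag)
//   npx cypress run --env grep="logs the user"  (run by title substring)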

Related

Jest extremely slow on failed tests

I have a jest test file that looks like this:
import Client from "socket.io-client"
describe("my awesome project", () => {
let clientSocket;
beforeAll((done) => {
clientSocket = new Client(`http://localhost:3000`);
clientSocket.on("connect", done);
});
afterAll(() => {
clientSocket.close();
});
it("should work", (done) => {
clientSocket.on("redacted", (message) => {
expect(2 + 2).toBe(56);
//expect(message === "foobar").toEqual(true);
done();
});
clientSocket.emit("redacted", "world");
});
});
This is a POC and is currently the entire implementation.
The jest.config looks like this:
export default {
  // Automatically clear mock calls, instances, contexts and results before every test
  clearMocks: true,
  // Indicates whether the coverage information should be collected while executing the test
  collectCoverage: true,
  // The directory where Jest should output its coverage files
  coverageDirectory: "coverage",
  // Indicates which provider should be used to instrument code for coverage
  coverageProvider: "v8",
  // A preset that is used as a base for Jest's configuration
  preset: "ts-jest",
};
This is just the file the --init command generated.
The core of my problem is that any expect that results in a failed test, no matter how trivial, takes an absurd amount of time to complete. I accidentally left it running as above overnight and it eventually completed in 14 hours.
But with a passing test Jest is absolutely fine and completes rapidly; expect(2 + 2).toBe(4); for example runs perfectly. On the failed tests I see the data come back from the socket as expected, in the time expected. It's only when the expect is hit that it stalls out, so I don't believe the problem is in the socket setup or some sort of communication problem.
I've tried the config-based solutions I've found, to no effect, e.g. Jest - Simple tests are slow.
This is being run on my local Windows machine, from a terminal I'm fully starting and stopping for each test run in my IDE.
OK, so now I see the problem: I needed a try/catch.
I can't quite believe that none of the examples or docs I looked at so much as hinted this is necessary to handle something so basic.
test("should work", (done) => {
clientSocket.on("redacted", (message: string) => {
try {
expect(2 + 2).toBe(56);
//expect(message === "foobar").toEqual(true);
done();
} catch (err) {
done(err)
}
});
clientSocket.emit("redacted", "world");
});
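If you'd rather not wrap every assertion, one alternative (a sketch assuming the same clientSocket setup as above) is to expose the socket event as a Promise and use an async test, so a failing expect rejects the test immediately instead of hanging:
test("should work", async () => {
  const message = await new Promise((resolve) => {
    clientSocket.on("redacted", resolve);
    clientSocket.emit("redacted", "world");
  });
  // A failing expect here rejects the async test right away; no try/catch needed.
  expect(message).toEqual("foobar");
});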

How to stop a Jest describe without failing the rest of the tests

I want to stop a whole describe in Jest without throwing an error or stopping the other describes.
I'm writing E2E tests for my app with Jest and Puppeteer. I write the tests so that every describe is a flow through a path and every it is a step. Inside an it, I want to stop the flow if the page doesn't match some conditions.
describe('Book a Room', () => {
  it('enter on main page', async () => await mainPage.navigateToMainPage());
  it('go to book room page', async () => await bookRoomPage.navigateToBookRoomPage());
  // The function is inside the "bookRoomPage"
  it('check if the user can book room', () => {
    if (!page.userCanOpenARoom()) {
      // DON'T EXECUTE THE NEXT IT, BUT CONTINUE WITH THE OTHER DESCRIBES
    }
  });
  it('go to book preview...', async () => bookRoomPreviewPage.navigateToBookRoomPreviewPage());
  // REMAINING FLOW
});
I already tried process.exit(0), but that exits the whole process.
You can try out what this blog says here; it's about sharing specs in your test suites, which is pretty handy. But for your case specifically, you could extract your page cases into separate suites and then dynamically include the test case at runtime if a condition is met.
Something like:
Include Spec function, shared_specs/index.js:
const includeSpec = (sharedExampleName, args) => {
  require(`./${sharedExampleName}`)(args);
};
exports.includeSpec = includeSpec;
Test A, shared_specs/test_a.js:
describe('some_page', () => {
  it...
});
Test B, shared_specs/test_b.js:
describe('some_other_page', () => {
  it...
});
and then in your test case:
// Something like this would be your path, I guess
import { includeSpec } from '../shared_specs/index.js';

describe('Book a Room', () => {
  if (page.userCanOpenARoom()) {
    includeSpec('test_a', page);
  } else {
    includeSpec('test_b', page); // Or don't do anything
  }
});
Just make sure that you check the paths, since
require(`./${sharedExampleName}`)(args);
will load the spec dynamically at runtime, and use includeSpec in your describe blocks, not in it blocks. You should be able to split up your test suites pretty nicely with this.
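A related pattern worth knowing (a hedged sketch, not from the answer above): if the condition can be evaluated while Jest is collecting the suite, you can choose between describe and describe.skip at load time, so the whole flow is reported as skipped instead of silently ignored.
// Sketch: pick describe vs. describe.skip at collection time.
// Assumes page.userCanOpenARoom() can run before the test run starts.
const canBook = page.userCanOpenARoom();

(canBook ? describe : describe.skip)('Book a Room', () => {
  it('go to book preview...', async () => {
    await bookRoomPreviewPage.navigateToBookRoomPreviewPage();
  });
  // remaining flow
});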

Mocha ignores some tests although they should be run

I'm currently working on refactoring my clone of the express-decorator NPM package. This includes refactoring the unit tests, which were previously written with AVA. I decided to rewrite them using Mocha and Chai because I like the way they define tests a lot more.
So, what is my issue? Take a look at this code (I broke it down to illustrate the problem):
test('express', (t) => {
  @web.basePath('/test')
  class Test {
    @web.get('/foo/:id')
    foo(request, response) {
      /* The test in question. */
      t.is(parseInt(request.params.id), 5);
      response.send();
    }
  }

  let app = express();
  let controller = new Test();
  web.register(app, controller);

  t.plan(1);
  return supertest(app)
    .get('/test/foo/5')
    .expect(200);
});
This code works.
Here's (basically) the same code, now using Mocha and Chai and multiple tests:
describe('The test express server', () => {
  @web.basePath('/test')
  class Test {
    @web.get('/foo/:id')
    foo(request, response) {
      /* The test in question. */
      it('should pass TEST #1',
        () => expect(toInteger(request.params.id)).to.equal(5));
      response.send();
    }
  }

  const app = express();
  const controller = new Test();
  web.register(app, controller);

  it('should pass TEST #2', (done) => {
    chai.request(app)
      .get('/test/foo/5')
      .end((err, res) => {
        expect(err).to.be.null;
        expect(res).to.have.status(200);
        done();
      });
  });
});
The problem is that TEST #1 is ignored by Mocha, although that part of the code is run during the tests. I tried to console.log something there and it appeared in the Mocha log where I expected it to.
So how do I get that test to work? My idea would be to somehow pass down the context (the test suite) to the it function, but that's not possible with Mocha, or is it?
It looks like you are moving from tape or some similar test runner to Mocha. You're going to need to significantly change your approach, because Mocha works quite differently.
tape and similar runners don't need to know ahead of time what tests exist in the suite. They discover tests as they go along executing your test code, and a test can contain another test. Mocha, on the other hand, requires that the entire suite be discoverable before running any test. It needs to know each and every test that will exist in your suite. This has some disadvantages: you cannot add tests while Mocha is running the suite. You could not, for instance, have a before hook run a query against a database and create tests from the results; you'd have to perform the query before the suite has started. However, this way of doing things also has advantages. You can use the --grep option to select only a subset of tests and Mocha will do it without any trouble. You can also use it.only to select a single test without trouble. Last I checked, tape and its siblings have trouble doing this.
So the reason your Mocha code is not working is that you are creating a test after Mocha has started running the tests. Mocha won't outright crash on you, but when you do this the behavior you get is undefined. I've seen cases where Mocha would ignore the new test, and I've seen cases where it executed it in an unexpected order.
If this were my test, what I'd do is:
1. Remove the call to it from foo.
2. Modify foo to simply record the request parameters I care about on the controller instance:
foo(request, response) {
  // Remember to initialize this.requests in the constructor...
  this.requests.push(request);
  response.send();
}
Have the test it("should pass test #2" check the requests recorded on the controller:
it('should pass TEST #2', (done) => {
  chai.request(app)
    .get('/test/foo/5')
    .end((err, res) => {
      expect(err).to.be.null;
      expect(res).to.have.status(200);
      expect(controller.requests).to.have.lengthOf(1);
      // etc...
      done();
    });
});
And I would use a beforeEach hook to reset the controller between tests so that they are isolated.
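A minimal sketch of that reset, assuming controller is in scope of the describe block:
beforeEach(() => {
  // start each test with no recorded requests so tests stay isolated
  controller.requests = [];
});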

How can I build my test suite asynchronously?

I'm trying to create Mocha tests for my controllers using a config that has to be loaded asynchronously. Below is my code. However, when the Mocha test is run, it doesn't run any tests, displaying 0 passing. The console.logs are never even called. I tried doing before(next => config.build().then(next)) inside the describe, but even though the tests run, before is never called. Is there a way to have the config be loaded one time before any tests are run?
'use strict';
const common = require('./common');
const config = require('../config');

config
  .build()
  .then(test);

function test() {
  console.log(1);
  describe('Unit Testing', () => {
    console.log(2);
    require('./auth');
  });
}
You should run Mocha with the --delay option, and then use run() once you are done building your test suite. Here is an example derived from the code you show in the question:
'use strict';

function test() {
  console.log(1);
  describe('Unit Testing', () => {
    console.log(2);
    it("test", () => {
      console.log(3);
    });
  });

  // You must use --delay for `run()` to be available to you.
  run();
}

setTimeout(test, 1000);
I'm using setTimeout to simulate an asynchronous operation. Using --delay (for example, npx mocha --delay test/*.js) and run() allows you to build a suite that is the result of an asynchronous computation. Note, however, that the suite must be built in one shot. (You cannot have an asynchronous process inside describe that will make calls to it. This won't work.)
One thing you should definitely not do is what rob3c suggests: calling describe or it (or both) from inside a hook. This is a mistake people make every now and then, so it is worth addressing in detail. The problem is that it is just not supported by Mocha, and therefore there are no established semantics associated with calling describe or it from inside a hook. It is possible to write simple examples that work as one might expect, but:
When the suite becomes more complex, the suite's behavior no longer corresponds to anything sensible.
Since there are no semantics associated with this approach, newer Mocha releases may handle the erroneous usage differently and break your suite.
Consider this simple example:
const assert = require("assert");

const p = Promise.resolve(["foo", "bar", "baz"]);

describe("top", () => {
  let flag;

  before(() => {
    flag = true;
    return p.then((names) => {
      describe("embedded", () => {
        for (const name of names) {
          it(name, () => {
            assert(flag);
          });
        }
      });
    });
  });

  after(() => {
    flag = false;
  });

  it("regular test", () => {
    assert(flag);
  });
});
When we run it, we get:
top
✓ regular test
embedded
1) foo
2) bar
3) baz
1 passing (32ms)
3 failing
// [stack traces omitted for brevity]
What's going on here? Shouldn't all the tests pass? We set flag to true in the before hook of the top describe, so all tests we create in it should see flag as true, no? The clue is in the output above: when we create tests inside a hook, Mocha will put the tests somewhere, but it may not be in a location that reflects the structure of the describe blocks in the code. What happens in this case is that Mocha just appends the tests created in the hook to the very end of the suite, outside the top describe, so the after hook runs before the dynamically created tests, and we get a counter-intuitive result.
Using --delay and run(), we can write a suite that behaves in a way concordant with intuition:
const assert = require("assert");

const p = Promise.resolve(["foo", "bar", "baz"]).then((names) => {
  describe("top", () => {
    let flag;

    before(() => {
      flag = true;
    });

    after(() => {
      flag = false;
    });

    describe("embedded", () => {
      for (const name of names) {
        it(name, () => {
          assert(flag);
        });
      }
    });

    it("regular test", () => {
      assert(flag);
    });
  });

  run();
});
Output:
top
✓ regular test
embedded
✓ foo
✓ bar
✓ baz
4 passing (19ms)
In modern environments, you can use top-level await to fetch your data up front. This is a documented approach for Mocha: https://mochajs.org/#dynamically-generating-tests
Slightly adapting the example from the Mocha docs to show the general idea:
function fetchData() {
  return new Promise((resolve) => setTimeout(resolve, 5000, [1, 2, 3]));
}

// top-level await: Node >= v14.8.0 with ESM test file
const data = await fetchData();

describe("dynamic tests", function () {
  data.forEach((value) => {
    it(`can use async data: ${value}`, function () {
      // do something with data here
    });
  });
});
This is nice as it works on a per-file basis, and it doesn't involve taking on responsibility for managing the test runner, as you do with --delay.
The problem with using the --delay command-line flag and the run() callback that @Louis mentioned in his accepted answer is that run() is a single global hook that delays the root test suite. Therefore, you have to build everything at once (as he mentioned), which can make organizing tests a hassle (to say the least).
However, I prefer to avoid magic flags whenever possible, and I certainly don't want to have to manage my entire test suite in a single global run() callback. Fortunately, there's a way to dynamically create the tests on a per-file basis, and it doesn't require any special flags, either :-)
To dynamically create it() tests in any test source file using data obtained asynchronously, you can (ab)use the before() hook with a placeholder it() test to ensure Mocha waits until before() has run. Here's the example from my answer to a related question, for convenience:
before(function () {
  console.log('Let the abuse begin...');
  return promiseFn()
    .then(function (testSuite) {
      describe('here are some dynamic It() tests', function () {
        testSuite.specs.forEach(function (spec) {
          it(spec.description, function () {
            var actualResult = runMyTest(spec);
            assert.equal(actualResult, spec.expectedResult);
          });
        });
      });
    });
});

it('This is a required placeholder to allow before() to work', function () {
  console.log('Mocha should not require this hack IMHO');
});

Cleaning out test database before running tests

What is the best way to clean out a database before running a test suite? (Is there an npm library or a recommended method of doing this?)
I know about the before() function.
I'm using Node/Express, Mocha, and Sequelize.
The before function is about as good as it gets for cleaning out your database. If you only need to clean out the database once, i.e. before you run all your tests, you can have a global before function in a separate file:
globalBefore.js
before(function(done) {
  // remove database data here
  done();
});

single-test-1.js
require('./globalBefore');
// actual test 1 here

single-test-2.js
require('./globalBefore');
// actual test 2 here

Note that globalBefore will only run once even though it has been required twice.
Testing Principles
Try to limit the use of external dependencies, such as databases, in your tests. The fewer external dependencies, the easier the testing. You want to be able to run all your unit tests in parallel, and a shared resource such as a database makes this difficult.
Take a look at this Google Tech Talk about writing testable JavaScript:
http://www.youtube.com/watch?v=JjqKQ8ezwKQ
Also take a look at the rewire module. It works quite well for stubbing out functions.
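For illustration, a small rewire sketch (the module path and the db binding are hypothetical; __set__ is rewire's real API for replacing private bindings):
const rewire = require('rewire');

// Load the module under test with rewire so its private bindings can be replaced.
const userService = rewire('../src/userService'); // hypothetical path

// Swap the module's internal `db` binding for an in-memory fake.
userService.__set__('db', {
  findUser: () => Promise.resolve({ id: 1, name: 'Test User' })
});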
I usually do it like this (say, for a User model):
describe('User', function() {
  before(function(done) {
    User.sync({ force: true }) // drops the table and re-creates it
      .success(function() {    // old Sequelize promise API; newer versions return standard promises
        done(null);
      })
      .error(function(error) {
        done(error);
      });
  });

  describe('#create', function() {
    ...
  });
});
There's also sequelize.sync({ force: true }), which will drop and re-create all tables (.sync() is described here).
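A minimal global-setup sketch using it, assuming sequelize is your configured instance and a Sequelize version where .sync() returns a standard promise:
before(async () => {
  // drop and re-create every table so the suite starts from an empty schema
  await sequelize.sync({ force: true });
});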
Add this to your test file. It will undo all the migrations, run them again, and seed the database, letting you test against the seeded data.
const SCRIPT_TO_TRUNCATE_AND_SEED_DATABASE =
  'cd apps/backend && npx sequelize-cli db:migrate:undo:all && npx sequelize-cli db:migrate && cd ../.. && npx sequelize-cli db:seed:all';

test(
  'TRUNCATE_AND_SEED_DATABASE',
  done => {
    exec(SCRIPT_TO_TRUNCATE_AND_SEED_DATABASE, (err, out) => {
      try {
        console.log(out);
        expect(err).toBe(null);
        done();
      } catch (e) {
        done(e);
      }
    });
  },
  TIME_CONSTANT.ONE_MINUTE,
);
I made this lib to clean your database and import fixtures for your tests.
This way, you can import fixtures, run your tests, and then clean your database.
Take a look at the following:
before(function (done) {
  prepare.start(['people'], function () {
    done();
  });
});

after(function () {
  prepare.end();
});
https://github.com/diogolmenezes/test_prepare
