Cypress: interrupt all tests on first failure - Semaphore

How to interrupt all Cypress tests on the first test failure?
We are using Semaphore to launch the complete e2e test suite with Cypress for each PR, but it takes too much time.
I'd like to interrupt all tests on the first test failure.
Reviewing the complete list of errors is each developer's responsibility during development. I just want to be informed ASAP if anything is wrong prior to deploy, without having to wait for the full suite to complete.
So far, the only solution I came up with was interrupting the tests in the current spec file with Cypress:

afterEach(function () {
  // Must be a regular function (not an arrow) so Mocha binds `this.currentTest`
  if (this.currentTest.state === 'failed') {
    Cypress.runner.end();
  }
});
But this is not enough, since it only interrupts the tests in that one spec file, not all the other files. I've searched this topic extensively today, and it doesn't seem to be something Cypress supports.
So I'm trying other solutions.
1: with Semaphore

fail_fast:
  stop:
    when: "true"

It is supposed to interrupt the job on error, but it doesn't work: the tests keep running after a failure. My guess is that Cypress only exits with a non-zero code once the whole run is complete, so Semaphore never sees a failure mid-run.
2: maybe with the script launching Cypress, but I'm out of ideas.
Right now, here are my scripts:
"cy:run": "npx cypress run",
"cy:run:dev": "CYPRESS_env=dev npx cypress run",
"cy:test": "start-server-and-test start http-get://localhost:4202 cy:run"

EDIT: It seems this feature has since been introduced, but it requires a paid Cypress plan (Business plan). More about it: Docs, comment in the thread
Original answer:
This has been a long-requested feature in Cypress that, for some reason, still has not been introduced. There are some workarounds proposed by the community, but they are not guaranteed to work. Check this thread on Cypress' GitHub for more details; maybe you will find a workaround that works for your case.

The solution by @user3504541 is excellent! Thanks a ton. I had already started giving up on Cypress since these issues keep popping up. But in any case, here's my config:
support/index.ts
declare global {
  // eslint-disable-next-line
  namespace Cypress {
    interface Chainable {
      interrupt: () => void
    }
  }
}

function abortEarly() {
  // Regular function (not an arrow) so Mocha binds `this.currentTest`
  if (this.currentTest.state === 'failed') {
    return cy.task('shouldSkip', true)
  }
  cy.task('shouldSkip').then(value => {
    if (value) return cy.interrupt()
  })
}

// Wire the check up to every test; without these registrations abortEarly never runs
beforeEach(abortEarly)
afterEach(abortEarly)
commands/index.ts

Cypress.Commands.add('interrupt', () => {
  // Click the test runner's "stop" button to halt the entire run
  eval("window.top.document.body.querySelector('header button.stop').click()")
})
In my case the Cypress tests were left hanging indefinitely on CI (a GitHub Actions workflow), but with this fix they interrupt properly.
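Note that the cy.task('shouldSkip') calls above rely on a task that persists a flag in the plugins process, which survives across spec files. The answer doesn't include it, so here is a minimal sketch of how it might be registered (the task name matches the calls above; the rest is an assumption):

// plugins/index.js (hypothetical sketch, not from the original answer)
let shouldSkip = false

module.exports = (on) => {
  on('task', {
    // One task acts as both setter and getter: pass `true` after a failure,
    // call with no argument to read the flag back from later specs.
    shouldSkip: (value) => {
      if (value != null) shouldSkip = value
      return shouldSkip
    },
  })
}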

A little hack that worked for me:

Cypress.Commands.add('interrupt', () => {
  eval("window.top.document.body.querySelector('header button.stop').click()");
});

This is available as the Auto Cancellation feature, which is part of Smart Orchestration, but it is only available on the Business plan. From the Auto Cancellation docs:
Continuous Integration (CI) pipelines are typically costly processes that can demand significant compute time. When a test failure occurs in CI, it often does not make sense to continue running the remainder of a test suite since the process has to start again upon merging of subsequent fixes and other code changes. When Auto Cancellation is enabled, once the number of failed tests goes over a preset threshold, the entire test run is canceled. Note that any in-progress specs will continue to run to completion.
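If your plan includes the feature, newer Cypress versions also expose it as a CLI flag on recorded runs; per the Cypress docs, --auto-cancel-after-failures arrived around Cypress 12.6 (treat the exact version as an assumption):

npx cypress run --record --key <record-key> --auto-cancel-after-failures 1

Setting the threshold to 1 cancels the whole recorded run as soon as a single test fails, which matches the behavior asked for in the question.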

Related

BeforeEach step is repeated with cy.session using cypress-cucumber-preprocessor

I have a Cypress project where I use the Cypress session API to maintain a session across features.
Now I'm trying to switch from the deprecated Klaveness Cypress Cucumber Preprocessor to its replacement, Badeball's Cypress Cucumber Preprocessor. But I'm running into an issue: the beforeEach() step where my authentication takes place is repeated several times before the tests start. Eventually Cypress "snaps out of it" and starts running the actual tests, but obviously this is very resource- and time-intensive; something is going wrong.
My setup:
Dependencies:
"cypress": "^9.6.1",
"#badeball/cypress-cucumber-preprocessor": "^9.1.3",
index.ts:
beforeEach(() => {
  let isAuthInitialized = false;

  function spyOnAuthInitialized(window: Window) {
    window.addEventListener('react:authIsInitialized', () => {
      isAuthInitialized = true;
    });
  }

  login();
  cy.visit('/', { onBeforeLoad: spyOnAuthInitialized });
  cy.waitUntil(() => isAuthInitialized, { timeout: 30000 });
});
login() function:
export function login() {
  cy.session('auth', () => {
    cy.authenticate();
  });
}
As far as I can see, I follow the docs for cy.session almost literally.
My authenticate command contains only application-specific steps, but it does include a cy.visit('/'), after which my application is redirected to a login service (on a different domain) and then continues.
The problem
cy.session works OK: it creates a session on the first try, then each subsequent time it logs a successful restore of a valid session. But this happens a number of times; it seems to get stuck in a loop.
Screenshot:
It looks to me like cy.visit() is somehow triggering the beforeEach() again. Perhaps it clears some session data (localStorage?) that causes my authentication redirect to happen again, or somehow makes Cypress think the test is starting fresh. But of course beforeEach() should only happen once per feature.
I am looking at a diff of my code changes, and the only differences apart from the preprocessor change are:
my .cypress-cucumber-preprocessorrc.json (which I set up according to the docs)
typing changes, since this preprocessor is stricter about typings
my plugins/index.ts file, also set up according to the docs
Am I looking at a bug in the preprocessor? Did I make a mistake? Or something else?
There are two aspects of Cypress + Cucumber with the preprocessor that make this potentially confusing.
Cypress "Run all specs" behaviour (pre-Cypress 10)
As demonstrated in Gleb Bahmutov PhD's great blog post, if you don't configure Cypress to do otherwise, "Run all specs" concatenates every spec file, so each file's root-level hooks run before each test. His proposed solution is to not use the "Run all specs" button at all, which I find excessive, because there are ways around this; see below for a working solution with the Cucumber preprocessor.
Note: as of Cypress 10, "Run all specs" is no longer supported (for reasons related to exactly this confusion).
Cucumber preprocessor config
The Cypress Cucumber preprocessor recommends not using the config option nonGlobalStepDefinitions, and instead configuring specific paths, like so (source):

{
  "stepDefinitions": [
    "cypress/integration/[filepath]/**/*.{js,ts}",
    "cypress/integration/[filepath].{js,ts}",
    "cypress/support/step_definitions/**/*.{js,ts}"
  ]
}
What it doesn't explicitly state, though, is that the file containing your hooks (in my case index.ts) should be excluded from these paths if you don't want them to run for every test! I can see how one might think this is obvious, but it's easy to accidentally include your hooks file in this filepath config.
TLDR: If I exclude my index.ts file (which contains my hooks) from my stepDefinitions config, I can use "Run all specs" as intended, with beforeEach() running only once per test. See the sketch below.
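For illustration, a config along these lines keeps the hooks file out of every glob (the paths are hypothetical; the point is only that the file containing beforeEach() is matched by none of them):

{
  "stepDefinitions": [
    "cypress/integration/[filepath]/**/*.{js,ts}",
    "cypress/support/step_definitions/**/*.{js,ts}"
  ]
}

Here the hooks would live in, say, cypress/support/index.ts, which neither glob matches.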

Is there a good way to print the time after each run with `mocha -w`?

I like letting mocha -w run in a terminal while I work on tests so I get immediate feedback, but when the status doesn't change I can't always tell at a glance whether it actually ran: did it run, or did it get stuck (it's happened)?
I'd like a way to append a timestamp to the end of each test run, but ideally only in watch mode; if I'm running it manually, of course I know whether it ran.
For now, I'm appending an asynchronous console log to the last test that runs:

it('description', function () {
  // real test
  parts.should.test.things();
  // Trick - schedule the time to be printed to the log - so I can see when it was run last
  setTimeout(() => console.log(new Date().toDateString() + " # " + new Date().toTimeString()), 5);
});
Obviously this is ugly and bad for several reasons:
It's manually added to the last test, so I have to know which one that is
It is added every time that test is run, but never others: if I run a different file or test, there's no log; if I run only that test manually, there is a log
It's just kind of an affront to the purpose of the tests, subverting them to serve my will
I have seen some references to mocha adding a global.it object with the command-line args, which could be searched for the -w flag, but that is even uglier, and still doesn't solve most of the problems.
Is there some other mocha add-on module which provides this? Or perhaps I've overlooked something in the options? Or perhaps I really shouldn't need this and I'm doing it all wrong to begin with?
Mocha supports root-level hooks. If you place an after hook (for example) outside any describe block, it will run at the end of all tests. It won't run only in watch mode, of course, but should otherwise be fit for purpose.
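A minimal sketch of that idea, assuming it lives in a file Mocha already loads (a test file or a --require'd setup file):

// Root-level hook: declared outside any describe block, so it runs once
// after the entire run, stamping the output with the finish time.
after(function () {
  console.log('Run finished at ' + new Date().toISOString());
});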

Jest toMatchSnapshot not throwing an exception

Most of Jest's expect(arg1).xxxx() methods will throw an exception if the comparison fails to match expectations. One exception to this pattern seems to be the toMatchSnapshot() method: it seems never to throw, instead storing the failure information for later Jest code to process.
How can we cause toMatchSnapshot() to throw an exception? If that's not possible, is there another way that our tests can detect when the snapshot comparison failed?
This will work! After running your toMatchSnapshot assertion, check the global state:
expect(global[GLOBAL_STATE].state.snapshotState.matched).toEqual(1);
I just spent the last hour figuring this out for our own tests. It doesn't feel hacky to me either, though a Jest maintainer may be able to say whether accessing Symbol.for('$$jest-matchers-object') is a good idea or not. Here's a full code snippet for context:
const GLOBAL_STATE = Symbol.for('$$jest-matchers-object');

describe('Describe test', () => {
  it('should test something', () => {
    try {
      expect({}).toMatchSnapshot(); // replace with whatever you're trying to test
      expect(global[GLOBAL_STATE].state.snapshotState.matched).toEqual(1);
    } catch (e) {
      console.log(`\x1b[31mWARNING!!! Catch snapshot failure here and print some message about it...`);
      throw e;
    }
  });
});
If you run a test (e.g. /Foobar.test.js) which contains a toMatchSnapshot matcher, Jest will by default create a snapshot file on the first run (e.g. /__snapshots__/Foobar.test.js.snap).
This first run, which creates the snapshot, will pass.
If you want the test to be able to fail, you need to commit the snapshot alongside your test.
Subsequent test runs will compare your changes against the committed snapshot, and if they differ the test will fail.
Here is the official link to the documentation on Snapshot Testing with Jest.
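To make that lifecycle concrete, a minimal example (file and test names are illustrative):

// greeting.test.js -- the first run writes __snapshots__/greeting.test.js.snap;
// once that file is committed, any change to the object below fails the test.
test('greeting shape is stable', () => {
  expect({ text: 'hello', lang: 'en' }).toMatchSnapshot();
});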
One (less than ideal) way to cause toMatchSnapshot to throw an exception on a snapshot mismatch is to edit the implementation of toMatchSnapshot. Experienced Node developers will consider this bad practice, but if you are very strongly motivated to have that method throw, this approach is easy and, depending on how you periodically update your tooling, only somewhat error-prone.
The file of interest will be named something like "node_modules/jest-snapshot/build/index.js".
The line of interest is the first line in the method:

const toMatchSnapshot = function (received, testName) {
  this.dontThrow && this.dontThrow();
  const currentTestName = ....
You'll want to split that first line and omit the call to this.dontThrow(). The resulting code should look similar to this:

const toMatchSnapshot = function (received, testName) {
  // this.dontThrow && this.dontThrow();
  const currentTestName = ....
A final step you might want to take is to send a feature request to the Jest team, or to support an existing feature request to your liking, like the following: link

Kicking off mocha describes in parallel

I want all my describe statements in Mocha to be kicked off in parallel. Can someone help me figure out how to do that?
You can't do this directly with mocha because it creates a list of it() callbacks and invokes them in order.
mocha-parallel-tests can do this if you're willing to move your describes into separate .js files. To convince yourself, install it somewhere and invoke it with a short --slow so it reports each test's time:
laptop:/tmp/delme$ npm install mocha-parallel-tests
laptop:/tmp/delme$ cd node_modules/mocha-parallel-tests
laptop:/tmp/delme/node_modules/mocha-parallel-tests$ ./bin/mocha-parallel-tests test/parallel/tests --timeout 10000 --slow 100
You will see that it runs three (very simple) test suites in the time it takes the longest one to run.
If your tests don't depend on side effects of earlier tests, you can make them all asynchronous.
A simple way to do this is to initiate the work that takes a while before the describe, and use the regular mocha apparatus to evaluate it. Here, I create a bunch of promises that take a while to resolve, then iterate through the tests again, examining their results in a .then() function:
var expect = require("chai").expect;

var SlowTests = [
  { name: "a", time: 250 },
  { name: "b", time: 500 },
  { name: "c", time: 750 },
  { name: "d", time: 1000 },
  { name: "e", time: 1250 },
  { name: "f", time: 1500 }
];

SlowTests.forEach(function (test) {
  test.promise = takeAWhile(test.time);
});

describe("SlowTests", function () {
  // mocha defaults to 2s timeout. change to 5s with: this.timeout(5000);
  SlowTests.forEach(function (test) {
    it("should pass '" + test.name + "' in around " + test.time + " mseconds.",
      function (done) {
        test.promise.then(function (res) {
          expect(res).to.be.equal(test.time);
          done();
        }).catch(function (err) {
          done(err);
        });
      });
  });
});

function takeAWhile(time) {
  return new Promise(function (resolve, reject) {
    setTimeout(function () {
      resolve(time);
    }, time);
  });
}
(Save this as foo.js and invoke with mocha foo.js.)
Meta: I disagree with the assertion that tests should primarily be synchronous. before and after pragmas are easier, but it's rare that one test invalidates all remaining tests. All discouraging asynchronous tests does is discourage extensive testing of network tasks.
Mocha does not support what you are trying to do out of the box. It runs tests sequentially. This has a big advantage when dealing with an unhandled exception: Mocha can be sure that it happened in the test that it is currently running. So it ascribes the exception to the current test. It is certainly possible to support parallel testing but it would complicate Mocha quite a bit.
And I tend to agree with David's comment; I would not do it. At the level at which Mocha usually operates, parallelism does not seem to me particularly desirable. Where I have used test parallelism before is at the level of running end-to-end suites: for instance, running a suite against Firefox on Windows 8.1 while at the same time running the same suite against Chrome on Linux.
Just to update this question: Mocha version 8+ now natively supports parallel runs. You can use the --parallel flag to run your tests in parallel.
Parallel tests should work out of the box for many use cases, but you must be aware of some important implications of the behavior.
One thing to note: some reporters don't currently support parallel execution (mocha-junit-reporter, for example).
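For example (--jobs caps the number of worker processes; the glob is illustrative):

mocha --parallel --jobs 4 "test/**/*.spec.js"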
If you are using karma to start your tests, you can use karma-parallel to split your tests across multiple browser instances. It runs specs in different browser instances in parallel, and is simple to install:
npm i karma-parallel
and then add 'parallel' to the frameworks list in karma.conf.js:

module.exports = function(config) {
  config.set({
    frameworks: ['parallel', 'mocha']
  });
};
karma-parallel
Disclosure: I am the author

How to Completely End a Test in Node Mocha Without Continuing

How do I force a Mocha.js test run to end completely, without continuing on to the next tests? A scenario: the environment was accidentally set to production, and I need to prevent the tests from continuing.
I've tried throwing errors, but those don't stop the entire run because the tests are running asynchronously.
The kind of "test" you are talking about --- namely checking whether the environment is properly set for the test suite to run --- should be done in a before hook. (Or perhaps in a beforeEach hook but before seems more appropriate to do what you are describing.)
However, it would be better to use this before hook to set an isolated environment to run your test suite with. It would take the form:
describe("suite", function () {
before(function () {
// Set the environment for testing here.
// e.g. Connect to a test database, etc.
});
it("blah", ...
});
If there is some overriding reason why you cannot create a test environment in a hook and you must perform a check instead, you could do it like this:

describe("suite", function () {
  before(function () {
    if (production_environment)
      throw new Error("production environment! Aborting!");
  });

  it("blah", ...
});
A failure in the before hook will prevent the execution of any callbacks given to it(). At most, Mocha will run the after hook (if you specify one) to perform cleanup after the failure.
Note that whether the before hook is asynchronous or not does not matter (nor does it matter whether your tests are asynchronous). If you write it correctly (and call done when you are done, if it is asynchronous), Mocha will detect that an error occurred in the hook and won't execute the tests.
And the fact that Mocha continues testing after a failure in a test (in a callback to it()) does not depend on whether the tests are asynchronous. Mocha does not interpret the failure of one test as a reason to stop the whole suite; it will keep executing tests even if an earlier test has failed. (As I said above, a failure in a hook is a different matter.)
I generally agree with Glen, but since you have a decent use case, you should be able to make Node exit with process.exit(). See http://nodejs.org/api/process.html#process_process_exit_code. You can use it like so:
process.exit(1);
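For instance, combined with the before-hook guard described above (the environment check is an assumption about how your setup flags production):

before(function () {
  if (process.env.NODE_ENV === 'production') {
    console.error('Refusing to run tests against production; aborting.');
    // Ends the Node process immediately; no further hooks or tests run.
    process.exit(1);
  }
});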
Per the Mocha documentation, you can add the --exit flag when executing your tests.
It makes Mocha exit once all the tests have run, whether they passed or not.
ex:
mocha **/*.spec.js --exit
