I'm using Mocha in Node.js with BDD-style specs.
Is it possible to bail a sub-suite after the first error but continue its parent/sibling suites?
Say I test different routes to access an API; I want to abort a specific route's suite if its connection fails, because there's no use hammering it with calls if the first action failed, but the run can still attempt to check other things.
If a high-level test sees that a server is completely down or misconfigured, then I could abort instead of having to wait for all the failing tests to time out and fill the report with unnecessary mayhem.
I saw the following answer, but that's not what I want: it bails everything, which is too much. I want something that only bails a branch of the spec tree if an assertion fails.
Skip subsequent Mocha tests from spec if one fails
If you want Mocha to continue processing other test files after failing on one, you could use find to run a separate instance of Mocha on each file:
find test/ -name "*.js" -exec mocha {} \;
It sounds like mocha-steps may work for this:
Global step() function, as a drop-in replacement for it(). Any failing step will abort the parent describe immediately. This is handy for BDD-like scenarios, or smoke tests that need to run through specific steps.
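For illustration, here is a minimal sketch of how that might look, assuming mocha-steps is installed and Mocha is run with --require mocha-steps (connectTo and fetchFrom are hypothetical helpers, not part of any library):

// routes.spec.js - run with: mocha --require mocha-steps routes.spec.js
const assert = require('assert');

describe('route A', function () {
  step('connects', async function () {
    // if this step fails, the remaining steps in this describe are skipped
    assert.ok(await connectTo('routeA')); // connectTo is a hypothetical helper
  });

  step('fetches data', async function () {
    // only runs if the step above passed
    assert.ok(await fetchFrom('routeA')); // fetchFrom is a hypothetical helper
  });
});

describe('route B', function () {
  step('connects', async function () {
    // still runs even if the 'route A' suite aborted
    assert.ok(await connectTo('routeB'));
  });
});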
I am trying to figure out how to use Jest properly. The interaction of Jest and console.log is very poorly documented (and, judging by lots of tickets, apparently very inconsistent).
The Jest documentation is terrible.
When is console.log output supposed to show up? Always? Sometimes?
It seems to me the desired behavior would be for Jest to show console.log output for the tests that failed, but I don't see anything about that in the documentation. What I do see is that console.log output seems to correlate somehow with appearing either before or after the tests.
Can you explain?
By default, Jest prints all console.log output to the terminal from which the tests are executed. Cases where users do not see their logs are usually caused by the silent property being set to true in their config file, or the --silent flag being added to their CLI invocation; neither is default Jest behaviour.
The --verbose flag causes the results of all individual it unit tests to be printed in the terminal, whereas by default the terminal would only print the names of the test suites that passed, plus the individual (it) tests that failed. Additionally, each individual test will display how long it took to execute if it takes longer than 1ms. Another interesting fact is that --verbose will also print the output of console.log calls that run during the evaluation of a module (when the module is imported, e.g. a console statement that simply sits in the body of a JavaScript file or inside an implicitly invoked function), even if that module ends up unused (e.g. the tests that use it are not executed because of an xdescribe on their test suite).
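For reference, a minimal jest.config.js sketch covering the two settings discussed above (both are documented Jest options):

// jest.config.js - a minimal sketch
module.exports = {
  silent: false,  // true would suppress console.log output (not the default)
  verbose: true,  // print each individual it() result, like the --verbose flag
};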
The ability to print console.log results exclusively for failed tests is not present in Jest.
I have been seeing some weird behaviour in Cypress lately.
When I debug, the issue is that the test starts with the url account.domain.com;
as the test goes on, it naturally moves to app.domain.com.
All is good for the first test.
The second one uses the same logic and starts with account.domain.com ...
Since the first test ended on the url app.domain.com, it seems that Cypress is unable to load account.domain.com in the next test; it doesn't show any error, it just keeps loading.
Do you have any solution for this, please?
I'm using Cucumber, by the way.
I'm quite new to Node.js, and I have been working on a test script that takes a screenshot whenever a test fails, and I'm trying to do this without the use of a Jasmine reporter. I tried the approach from Check if test failed in 'afterEach' of Jest without jasmine instead; however, I'm working with multiple files: fail_test.spec.js, which is used as my main file, and test_fail1.js as another test script file.
Here is what is happening: the tests in fail_test.spec.js work fine with the afterEach, just like in the link; it gives me a "true" value if the test passed and a "false" value when the test fails, and it takes the screenshot. The problem is that test_fail1.js is also being checked by the afterEach, and it constantly gives a "false" value even when the test passed. I do intend to use afterEach with test_fail1.js and with other tests in the future. So my questions are:
Why does test_fail1.js give a constant "false" value?
Is there any workaround for this? I only need to know the status of the test in every test script, within the same file or across other files (e.g. fail_test1.js, fail_test2.js, and so on).
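For reference, the linked approach is typically implemented with a jest-circus custom test environment that records each test's status in a global, which an afterEach in any spec file can then read. A minimal sketch, assuming jest-circus is the test runner and with a hypothetical file name:

// screenshot-environment.js - hypothetical file name
// enable with: jest --env ./screenshot-environment.js (requires jest-circus)
const NodeEnvironment = require('jest-environment-node');

class ScreenshotEnvironment extends NodeEnvironment {
  async handleTestEvent(event) {
    if (event.name === 'test_start') {
      // reset the flag for every test so a previous failure doesn't leak through
      this.global.currentTestFailed = false;
    }
    if (event.name === 'test_fn_failure') {
      this.global.currentTestFailed = true;
    }
  }
}

module.exports = ScreenshotEnvironment;

An afterEach in any spec file can then check global.currentTestFailed and take the screenshot only when it is true.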
Is there any way I can stop the whole Robot Framework test execution with a PASS status?
For some specific reasons, I need to stop the whole test run but still get a GREEN report.
Currently I am using a FATAL ERROR, which raises an assertion error and returns FAIL to the report.
I was trying to create a user keyword to do this, but I am not really familiar with the Robot Framework error-handling process; could anyone help?
There's an attribute ROBOT_EXIT_ON_FAILURE in BuiltIn.py, and I am thinking about creating another attribute like ROBOT_EXIT_ON_SUCCESS, but I have no idea how.
Environment: robotframework==3.0.2 with Python 3.6.5
There is nothing built-in to support this. By design, a fatal error will cause all remaining tests and suites to have a FAIL status.
Just about your only choice is to write a keyword that sets a global variable, and then have every test include a setup that uses Pass Execution If to skip the test if the flag is set.
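A minimal sketch of that pattern in Robot Framework syntax (the variable and keyword names are assumptions; Pass Execution, Pass Execution If, and Set Global Variable are BuiltIn keywords):

*** Settings ***
Test Setup    Skip If Stopped

*** Variables ***
${STOP_WITH_PASS}    ${FALSE}

*** Keywords ***
Skip If Stopped
    # skip this test with PASS status once the flag has been raised
    Pass Execution If    ${STOP_WITH_PASS}    Execution stopped early with PASS

Stop Remaining Tests With Pass
    # call this keyword from a test to end the whole run green
    Set Global Variable    ${STOP_WITH_PASS}    ${TRUE}
    Pass Execution    Stopping remaining tests with PASS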
If I understood you correctly, you need to forcefully pass the test execution and return a green status for that test, is that right? There is a built-in keyword Pass Execution for that. Did you try using it?
I have a simple Node.js application and am using Istanbul with Mocha to generate code coverage reports. This is working fine.
If I write a new function, but do not create any tests for it (or even create a test file) is it possible to check for this?
My ultimate goal is for any code which has no tests at all to be picked up by our continuous integration process and for it to fail that build.
Is this possible?
One way you could achieve this is by using code coverage.
"check-coverage": "istanbul check-coverage --root coverage --lines 98 --functions 98 --statements 98 --branches 98"
Just add this to your package.json file and change the thresholds if needed. If code is written but has no tests, the coverage will go down.
I'm not sure if this is the correct way to solve the problem, but running the cover command first with the --include-all-sources parameter made Istanbul report on any code without a test file and add it to the coverage.json file it generated.
Then check-coverage would fail, which is what I'm after. In my CI process I would run cover first, then check-coverage.
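Putting both steps together, the scripts section of package.json might look something like this (the script names, test directory, and mocha path are assumptions; adjust to your layout):

{
  "scripts": {
    "cover": "istanbul cover --include-all-sources node_modules/mocha/bin/_mocha -- --recursive test/",
    "check-coverage": "istanbul check-coverage --root coverage --lines 98 --functions 98 --statements 98 --branches 98",
    "test": "npm run cover && npm run check-coverage"
  }
}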
Personally I find the documentation on Istanbul a little bit confusing/unclear, which is why I didn't see this at first!