I'm quite new to Node.js, and I have been working on a test script that takes screenshots whenever a test fails. I'm trying to do this without using a Jasmine reporter, so I followed the approach from Check if test failed in 'afterEach' of Jest without jasmine instead (a simplified sketch of it is included below my questions). However, I'm working with multiple files: fail_test.spec.js is my main file, and test_fail1.js is another test script file.
Here is what's happening: the tests in fail_test.spec.js work fine with the afterEach check, just like in the link; it gives me a "true" value when the test passes and a "false" value when the test fails, and it takes a screenshot. The problem is that test_fail1.js is also being checked by the afterEach, and it constantly gives a "false" value even when the test passes. I do intend to use afterEach with test_fail1.js and with other tests in the future. So my questions are:
Why does test_fail1.js only give a constant "false" value?
Is there any workaround for this? I only need to know the status of each test in every test script, whether within this file or in other files (e.g. fail_test1.js, fail_test2.js, and so on).
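For reference, the linked approach boils down to something like this (a simplified sketch, not my exact code; takeScreenshot() is a placeholder for whatever driver call actually captures the screen, and it assumes promise/async style tests rather than done callbacks). It lives in a setup file that Jest loads before each test file:

    let lastTestPassed = true;

    // wrap every it() so the pass/fail outcome is recorded before afterEach runs
    const realIt = global.it;
    global.it = (name, fn, timeout) =>
      realIt(name, async (...args) => {
        try {
          await fn(...args);
          lastTestPassed = true;
        } catch (err) {
          lastTestPassed = false;
          throw err; // rethrow so Jest still marks the test as failed
        }
      }, timeout);
    Object.assign(global.it, realIt); // keep it.only / it.skip / it.each available (unwrapped)

    afterEach(async () => {
      if (!lastTestPassed) {
        await takeScreenshot(); // placeholder for the real screenshot call
      }
    });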
I am trying to figure out how to use Jest properly. The interaction of Jest and console.log is very poorly documented (and apparently very inconsistent according to lots of tickets).
The Jest documentation is terrible.
When is console.log output supposed to show up? Always? Sometimes?
It seems to me the desired behavior would be for Jest to show console.log output for the tests that failed. I don't see anything about that in the documentation. What I do see is that console.log output seems to correlate somehow with appearing either before or after the tests.
Can you explain?
By default, Jest prints all console.log output to the terminal from which the tests are executed. When users do not see their logs, it is usually because the silent property is set to true in their config file or the --silent flag is added to their CLI invocation, neither of which is default Jest behaviour.
The --verbose flag causes the results of all individual it unit tests to be printed in the terminal (whereas by default the terminal only prints the names of the test suites that have passed and the individual unit tests (it) that have failed). Additionally, each individual test will display the length of time it took to execute if it takes longer than 1ms. Another interesting fact is that --verbose will also print the console.log output produced while a module is being evaluated (when the module is imported, e.g. a console statement that simply sits in the body of a JavaScript file or in an implicitly invoked function), even if the module is never used (e.g. because the tests that use it are not executed due to the xdescribe keyword in their test suite).
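For reference, both behaviours can also be controlled from the config file rather than the CLI; a minimal sketch of the two options discussed above:

    // jest.config.js (sketch): the config-file equivalents of the two CLI flags
    module.exports = {
      silent: false,  // true suppresses console.log output, same effect as --silent
      verbose: true,  // same effect as --verbose
    };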
The ability to print console log results exclusively for failed tests is not present in Jest.
I have been seeing weird behaviour in Cypress lately.
When I debug, I see that the test starts with the URL account.domain.com,
and as the test goes on, it naturally moves to app.domain.com.
All is good for the first test.
The second one uses the same logic and starts with account.domain.com ...
Since the first test ended on the URL app.domain.com, it seems that Cypress is unable to load account.domain.com in the next test. It doesn't show any error; it just keeps loading.
Do you have any solution for this, please?
I'm using Cucumber, by the way.
I'm looking for a way to produce a single boolean output from my Mocha tests instead of the standard test list. If all tests passed it should return true, otherwise false. It's fine if it's written to a separate file on the server, and we don't even need to suppress the logs, but I couldn't find a way to use the after() function to get a list of all the tests that were run and their results.
Any ideas?
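One possible direction (a sketch only, and it leans on Mocha internals that aren't formally documented: inside a root-level after() hook, this.test.parent points at the root suite, so you can walk every test that ran; the output file name is arbitrary):

    const fs = require('fs');

    after(function () {
      // recursively collect every test from the root suite downwards
      const collect = (suite) => suite.tests.concat(...suite.suites.map(collect));

      const tests = collect(this.test.parent);
      const allPassed = tests.every((t) => t.state !== 'failed');

      fs.writeFileSync('test-result.txt', String(allPassed)); // single boolean output
    });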
I am writing snapshot tests using Jest for a Node.js and React app and have installed the snapshot-tools extension in VS Code.
Some of my tests are displaying this warning in the editor:
[snapshot-tools] The snapshot is redunant
(Presumably it is supposed to say redundant)
What does this warning mean? I am wondering how I can fix it.
I was having the same problem, so I took a look at the "snapshot-tools" code. It marks a snapshot section as redundant if it doesn't see a corresponding test in the test file that has a matching name and that calls "expect().toMatchSnapshot()" or something similar.
The problem is (as the "Limitations" section of the plugin's marketplace page says) that it does a static analysis of the test file to find the tests that use snapshots, and that static analysis cannot detect tests that have dynamically generated names or that don't directly call "expect().toMatchSnapshot()" in the test body.
For example, I was getting false-positive "redundant" warnings because I had some tests that called "expect().toMatchSnapshot()" in their "afterEach()" function rather than directly in the test body.
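To illustrate (a made-up example; the test name and renderWidget() helper are placeholders), a pattern like this gets flagged as redundant even though the snapshot is written at runtime:

    let rendered;

    afterEach(() => {
      // the snapshot assertion lives in the hook, not in the test body,
      // so the extension's static analysis can't link it to the test below
      expect(rendered).toMatchSnapshot();
    });

    it('renders the widget', () => {
      rendered = renderWidget(); // placeholder for whatever the test really renders
    });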
This could indicate that the snapshot is no longer linked to a valid test. Have you changed your describe/it strings without updating the snapshots? Try running the tests with -- -u appended (e.g. npm test -- -u). If that doesn't work, have a look at your snapshots file and compare the titles to your test descriptions.
I have a set of 60 test cases in a SoapUI project that I want to run concurrently. Each test case needs a value to work with. The values are stored in an external file (spreadsheet or text file), and each test case needs to get a value from this file and use it. However, when I run the test suite, multiple tests pick up the same value, even though a value can only be used by one test at a time (the same value cannot be used in more than one test simultaneously). I would like the external file to be accessed by one test case at a time in SoapUI. Does this involve locking or some sort of queueing system, or what Groovy script could I use? Thanks.
I can't figure out how to get this to work with your external file, but I can think of another way using only SoapUI. Here's my suggestion for a solution:
1. Create a new TestCase containing only a DataGen TestStep.
2. Configure it so that it generates the numbers you want.
3. Change its mode to "READ", so that it will generate a new value every time the test step is run.
Now, wherever you want one of these values, instead of accessing your external file, add a Run TestCase TestStep to run your new DataGen test case, and make sure to return the generated number as a property. Use it where you need the generated number.
As I'm typing this, I just realized this only works with the Pro version of SoapUI. If you don't have a license, you can get a trial from the website.