Using TDD, I'd like to write some new tests that create data in slightly different ways and verify that the test data gets sanitized down to the same data as in a previous test.
So after writing Test 1 and generating a snapshot, Test 2/3/4 should generate the same snapshot as Test 1.
How can I make that happen? Jest appears to prepend the test name to custom snapshot names so I can't use .match(test1name).
(Using all-new identical snapshots for each test bloats the snapshots file and seems far from ideal.)
You could do something like:
test('equivalent inputs sanitize to the same data', () => {
  // Snapshot the first result once...
  const r1 = fn1()
  expect(r1).toMatchSnapshot()
  // ...then assert that data built a different way is deep-equal to it
  const r2 = fn2()
  expect(r2).toEqual(r1)
})
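This way only the first test writes a snapshot; the later tests just assert deep equality against the same result, so the snapshot file stays small.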
I'm quite new to Node.js, and I have been working on a test script that takes screenshots whenever a test fails. I'm trying to do this without using a Jasmine reporter, so I tried the approach from Check if test failed in 'afterEach' of Jest without jasmine. However, I'm working with different files: fail_test.spec.js is my main file, and test_fail1.js is another test script file. Here is what's happening: the test in fail_test.spec.js works fine with afterEach, just like in the link; it gives me a "true" value when the test passes and a "false" value when the test fails, and then it takes a screenshot. The problem is that test_fail1.js is also being checked by the afterEach, and it constantly gives a "false" value even when the test passes. I do intend to use afterEach with test_fail1.js and with other tests in the future. So my questions are:
Why does test_fail1.js give a constant "false" value?
Is there any workaround for this? I just need to know the status of every test in each test script, within one file or across other files (e.g. fail_test1.js, fail_test2.js, and so on)
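Not a full answer, but here is a minimal sketch of one way to share a per-test pass/fail flag across test files without Jasmine, using a custom jest-circus environment (the file name, class name, and global flag name here are my own inventions, and this assumes the jest-circus runner):

// failure-aware-environment.js (hypothetical file name)
// On Jest 28+ the class is require('jest-environment-node').TestEnvironment
const NodeEnvironment = require('jest-environment-node');

class FailureAwareEnvironment extends NodeEnvironment {
  async handleTestEvent(event) {
    if (event.name === 'test_start') {
      this.global.currentTestFailed = false; // reset before every test
    } else if (event.name === 'test_fn_failure') {
      this.global.currentTestFailed = true; // the test body threw
    }
  }
}

module.exports = FailureAwareEnvironment;

With testEnvironment pointed at that file in the Jest config, every test file (fail_test.spec.js, test_fail1.js, ...) can then share the same afterEach:

afterEach(async () => {
  if (global.currentTestFailed) {
    await takeScreenshot(); // hypothetical helper: use whatever screenshot call your setup has
  }
});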
I am writing snapshot tests using Jest for a node.js and React app and have installed snapshot-tools extension in VS code.
Some of my tests are displaying this warning in the editor:
[snapshot-tools] The snapshot is redunant
(Presumably it is supposed to say redundant)
What does this warning mean, and how can I fix it?
I was having the same problem, so I took a look at the "snapshot-tools" code. It marks a snapshot section as redundant if it doesn't see a corresponding test in the test file that has a matching name and that calls "expect().toMatchSnapshot()" or something similar.
The problem is (as it says on the "Limitations" section of the plugin's marketplace page), it does a static analysis of the test file to find those tests that use snapshots. And the static analysis cannot detect tests that have dynamically generated names, or that don't directly call "expect().toMatchSnapshot()" in the test's body.
For example, I was getting false positive "redundant" warnings, because I had some tests that were doing "expect().toMatchSnapshot()" in their "afterEach()" function, rather than directly in the test body.
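For illustration, a pattern like this (names invented) triggers the false "redundant" warning, because the snapshot call is not in the test body where the static analysis looks for it:

let rendered;

afterEach(() => {
  // The plugin's static analysis doesn't see this snapshot call
  expect(rendered).toMatchSnapshot();
});

test('renders the widget', () => {
  rendered = renderWidget(); // hypothetical function under test
});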
This could indicate that the snapshot is no longer linked to a valid test - have you changed your describe/it strings without updating the snapshots? Try running the tests with -- -u appended (eg: npm test -- -u). If that doesn't work, have a look at your snapshots file and compare the titles to your test descriptions.
I have a simple Node JS application and am using Istanbul with Mocha to generate code coverage reports. This is working fine.
If I write a new function but do not create any tests for it (or even create a test file), is it possible to check for this?
My ultimate goal is for any code which has no tests at all to be picked up by our continuous integration process and for it to fail that build.
Is this possible?
One way you could achieve this is by using code coverage.
"check-coverage": "istanbul check-coverage --root coverage --lines 98 --functions 98 --statements 98 --branches 98"
Just add this to your package.json file, changing the thresholds if needed. If code is written but not tested, the coverage will go down.
I'm not sure if this is the correct way to solve the problem, but running the cover command first with the --include-all-sources parameter made Istanbul report on any code without a test file and add it to the coverage.json file it generates.
Then running check-coverage would fail, which is what I'm after. In my CI process I would run cover first, then check-coverage.
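For reference, the relevant package.json scripts might look something like this (the mocha invocation and test path are assumptions about your setup):
"scripts": {
  "cover": "istanbul cover --include-all-sources node_modules/.bin/_mocha -- test/",
  "check-coverage": "istanbul check-coverage --root coverage --lines 98 --functions 98 --statements 98 --branches 98"
}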
Personally I find the documentation on Istanbul a little confusing/unclear, which is why I didn't see this at first!
I have a Cucumber feature file 'A' that serves to set up the environment (data cleanup and initialization). I want it executed before all other feature files run.
It's kind of like a @Before hook as in http://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/. However, that does not work because my feature file 'A' contains hundreds of Cucumber steps and it is not as simple as:
@Before
public void beforeScenario() {
    tomcat.start();
    tomcat.deploy("munger");
    browser = new FirefoxDriver();
}
Instead, it would be better to be able to run 'A' as a whole feature file.
I've searched around but did not find an answer. I am surprised that no one has had this type of requirement before.
The closest thing I found is 'Background'. But that would mean having one huge feature file with the content of 'A' as the Background at the top, and the rest of my tests in the same file. I really do not want to do that.
Any suggestions?
By default, Cucumber features are run single-threaded, in order:
Alphabetically by feature file directory
Alphabetically by feature file name within directory
Scenario execution is then by order within the feature file.
So have your initialization feature in the first directory (alphabetically) with a file name that sorts first (alphabetically) in that directory.
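For example, a layout along these lines (directory and file names are just an illustration) would make the setup feature run first:

features/
  00_setup/
    000_init.feature      (your feature 'A')
  10_main/
    checkout.feature
    login.feature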
That being said, it is generally bad practice to require an execution order for your feature files. We run our feature files in parallel, so order is meaningless. For Jenkins or TeamCity you could add a build step that executes the one feature file, followed by a second build step that executes the rest of your feature files.
I also have a project where we have a single feature file containing a very long scenario called Scenario: Test data, with a lot of very long steps like this:
Given the system knows about the following employees
  | uuid | user-key   | name | nickname |
  | 1    | 0101140000 | Anna | annie    |
... hundreds of lines like this follow ...
We see these long "the system knows..." scenarios as quite valuable, so that our testers, Product Owner and developers have a baseline of what data is in the system. Our domain is quite complex, and we need this baseline of reference data for everyone to be able to understand the tests.
(These reference data become almost like well-known personas, and are a shared team metaphor.)
In the beginning, we were relying on the alphabetical naming convention to have AAA.feature run first.
Later, we discovered that this setup was brittle, and decided to use the following trick, inspired by the PageObject pattern:
Add a Background with the single line Given(~'^I set test data for all feature files$')
In the step definition, have a factory create the test data, and make sure inside the factory method that it is only created once, like testFactory.createTestData()
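As a rough sketch of that run-once guard (written here with cucumber-js syntax for illustration; the project above used a Groovy-style step, and test-factory is a hypothetical module):

const { Given } = require('@cucumber/cucumber');
const testFactory = require('./test-factory'); // hypothetical factory module

let testDataCreated = false; // module-level flag survives across scenarios in one run

Given('I set test data for all feature files', function () {
  if (!testDataCreated) {
    testFactory.createTestData(); // expensive reference-data setup runs only once
    testDataCreated = true;
  }
});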
In this way, you have both the convenience of expressing reference setup as a scenario, that enhances team communication, but you also have a stable test setup.
Hope this is helpful!
Agata
I have a set of 60 test cases in a SoapUI project that I want to run concurrently. Each test case needs a value to work. The values are stored in an external file (spreadsheet or text file), and each test case needs to get a value from this file and use it. However, when I run the test suite, multiple tests pick up the same value, even though a value can only be used by one test at a time. I would like the external file to be accessed by one test case at a time in SoapUI. Does this involve locking or some sort of queueing system, and what Groovy script could I use? Thanks
I can't figure out how to get this to work with your external file, but I can think of another way using only SoapUI. Here's my suggestion for a solution:
Create a new TestCase containing only a DataGen TestStep.
Configure it so that it generates the numbers you want.
Change its mode to "READ", so that it will generate a new value every time the test step is run.
Now, wherever you want one of these values, instead of accessing your external file, add a Run TestCase TestStep to run your new DataGen test case, and make sure to return the generated number as a property. Use it where you need the generated number.
As I'm typing this, I just realized this only works with the Pro version of SoapUI. If you don't have a license, you can get a trial from the website.