Change coverage combination method for GitLab from different sources - jestjs

In my project I mainly use Playwright to run tests and collect coverage; around 50 pages are covered this way. Now I have added Jest to test some util functions which do not directly show up in the frontend.
I am now trying to combine the coverage for Playwright and Jest. My problem is that Jest only covers two or three files while Playwright covers around 50, so GitLab's default behaviour of displaying the average of the two percentage values does not reflect the coverage accurately. E.g. Playwright has a coverage of 80% over 50 files while Jest currently covers only 10% of a single file, and the overall coverage is then reported as 45%.
I tried to use the coverage regex described in the GitLab documentation in my gitlab.yml for both jobs, which is what makes the coverage show up in the first place.
I also tried to move the final coverage collection to a new job where I use
npx cobertura-merge -o ./coverage/combined-coverage.xml package1=./coverage/cobertura-coverage.xml package2=./coverage/cobertura-jest-coverage.xml
to create a combined coverage report to use in the reporting system. But if I put the regex in this job instead of the other two, I see no coverage value at all.
I would expect a coverage value that reflects the files actually covered, i.e. 80% for the Playwright-covered files plus whatever the one file covered by Jest contributes, rather than a 50:50 split between Playwright and Jest.
In the MR where I add Jest, the overall coverage is reported as dropping by 34.90%, which is a bit misleading.

I solved this by adding the -p flag to my merge command, which prints the merged result to the console. Then I adapted the coverage regex to match the value I wanted to report:
coverage: '/Total line Coverage: \d+\.\d+/'
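
For reference, a minimal sketch of what that merge job could look like in the CI config; the job name, stage and the dependency on the two test jobs are assumptions, so adapt them to your pipeline:

# hypothetical job/stage names and artifact paths; only this job defines a coverage regex,
# so GitLab reports the single combined value instead of averaging the two test jobs
combine-coverage:
  stage: report
  needs: ["playwright-tests", "jest-tests"]   # placeholder names for the jobs that produce the two Cobertura files
  script:
    # -p prints the merged totals so the coverage regex below has output to match
    - npx cobertura-merge -p -o ./coverage/combined-coverage.xml package1=./coverage/cobertura-coverage.xml package2=./coverage/cobertura-jest-coverage.xml
  coverage: '/Total line Coverage: \d+\.\d+/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/combined-coverage.xml

The artifacts:reports:coverage_report part is only needed if you also want the merged Cobertura file used for merge request diff annotations.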

Related

How can I merge coverage reports from jest and playwright using nyc without getting wrong results for functions?

In the project I use Playwright to test most of the code. I added Jest to test some specific util code directly. Playwright and Jest each generate a JSON coverage report, and I want to combine them to show the total coverage as an HTML report. I managed to merge the two reports and the coverage values seem to be correct, but the function coverage is off: before adding Jest it showed 16/16 functions covered (100%); after adding the Jest tests it shows 17/34 (50%).
I use the following commands to merge my reports:
npx nyc merge ./coverageReports/ merged-output/result.json
npx nyc report --reporter=html --reporter=text -t merged-output/
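
For context, the merge flow assumed here is roughly the following; the source directories and file names are placeholders for wherever Playwright and Jest are configured to write their Istanbul-style JSON output:

# placeholders: copy each tool's JSON coverage into one folder, then merge and report
mkdir -p coverageReports merged-output
cp playwright-coverage/coverage-final.json coverageReports/playwright.json
cp coverage/coverage-final.json coverageReports/jest.json
npx nyc merge ./coverageReports/ merged-output/result.json           # merges every JSON report in the folder
npx nyc report --reporter=html --reporter=text -t merged-output/     # builds the combined HTML/text report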
I would expect the HTML report to show the functions as fully covered, but the merged report instead shows the doubled function count and halved percentage described above.
The rest of the report looks correct, and for files where I only have tests from one of the tools it works as expected. Utilities is currently the only file with both Playwright and Jest tests covering functions, but this will change when more Jest tests are added. Is there a way to get correct numbers in this case as well?

Can I include only some aspects in the coverage report of Jest/Enzyme

I am using the Jest coverage report after my tests run to check the coverage of my code. The report contains these 4 values:
1. Statements
2. Branches
3. Functions
4. Lines
Is there any way I can display only Statements and not the other 3 metrics in the coverage report, or is it always generated with all 4?
If there is a way, how do I do that?
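
Not an authoritative answer, but as an illustration of where those four values come from: they are Istanbul's standard metrics, and Jest's default text reporter always prints all four columns. One hedged option is to add a machine-readable reporter and only read the statements figure from it; the sketch below assumes a jest.config.js, and the reporter list and threshold are example values:

// jest.config.js (sketch; reporter list and threshold are example values, not the only option)
module.exports = {
  collectCoverage: true,
  // "json-summary" writes coverage/coverage-summary.json, from which only the
  // statements percentage can be read; "text" still prints all four columns
  coverageReporters: ['text', 'json-summary'],
  // thresholds can also be limited to statements, even though the report shows all metrics
  coverageThreshold: {
    global: { statements: 80 },
  },
};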

Jest snapshot is redundant

I am writing snapshot tests using Jest for a node.js and React app and have installed snapshot-tools extension in VS code.
Some of my tests are displaying this warning in the editor:
[snapshot-tools] The snapshot is redunant
(Presumably it is supposed to say redundant)
What does this warning mean? I am wondering how I can fix it.
I was having the same problem, so I took a look at the "snapshot-tools" code. It marks a snapshot section as redundant if it doesn't see a corresponding test in the test file that has a matching name and that calls "expect().toMatchSnapshot()" or something similar.
The problem is (as the "Limitations" section of the plugin's marketplace page notes) that it does a static analysis of the test file to find the tests that use snapshots, and that static analysis cannot detect tests with dynamically generated names or tests that don't call "expect().toMatchSnapshot()" directly in their body.
For example, I was getting false-positive "redundant" warnings because I had some tests that called "expect().toMatchSnapshot()" in their "afterEach()" function rather than directly in the test body.
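
As a minimal illustration of that false positive (the file and test names here are made up), the static analysis does not see the snapshot call because it lives in afterEach rather than in the test body:

// example.test.js -- hypothetical test that triggers a false "redundant" warning
let rendered;

afterEach(() => {
  // snapshot-tools does not associate this call with the test below,
  // so the stored snapshot gets flagged as redundant
  expect(rendered).toMatchSnapshot();
});

test('renders the widget', () => {
  rendered = { widget: 'rendered' };   // stand-in for real render output
});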
This could indicate that the snapshot is no longer linked to a valid test: have you changed your describe/it strings without updating the snapshots? Try running the tests with -- -u appended (e.g. npm test -- -u). If that doesn't work, have a look at your snapshots file and compare the titles to your test descriptions.

Fail code coverage if no tests exist for code

I have a simple Node.js application and am using Istanbul with Mocha to generate code coverage reports. This is working fine.
If I write a new function but do not create any tests for it (or even create a test file), is it possible to check for this?
My ultimate goal is for any code which has no tests at all to be picked up by our continuous integration process and for it to fail that build.
Is this possible?
One way you could achieve this is by enforcing a code coverage threshold:
"check-coverage": "istanbul check-coverage --root coverage --lines 98 --functions 98 --statements 98 --branches 98"
Just add this to your package.json file and change the thresholds if needed. If code is written without tests, the coverage will drop below the threshold and the check will fail.
I'm not sure if this is the correct way to solve the problem, but running the cover command with the --include-all-sources parameter made Istanbul report on code that has no test file and include it in the coverage.json it generates.
Running check-coverage afterwards then fails, which is what I'm after. In my CI process I run cover first, then check-coverage.
Personally I find the Istanbul documentation a little bit confusing/unclear, which is why I didn't see this at first!
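
Putting the two parts together, the package.json scripts could look roughly like the following; the mocha invocation, test path and thresholds are examples rather than a definitive setup, and CI would run cover before check-coverage (e.g. npm run cover && npm run check-coverage):

{
  "scripts": {
    "cover": "istanbul cover --include-all-sources node_modules/.bin/_mocha -- --recursive test/",
    "check-coverage": "istanbul check-coverage --root coverage --lines 98 --functions 98 --statements 98 --branches 98"
  }
}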

Using cucumber to run different tagged features sequentially

I'm attempting to run tagged features in the order that they are submitted.
example:
I have tests that I'd like to run in a specific order (#test1, #test2, #test3). After looking at the Cucumber documentation it looks like I can only combine tags with and/or logic, so to control the order I have to run them one at a time like
cucumber features/*.feature --t #test1; cucumber features/*.feature --t #test2; cucumber features/*.feature --t #test3;
but this prevents me from having a single report which contains all of the results.
Is there any way I can run these tests in their respective order and have all of the results contained in a single report?
If you put the tests that have to run in a specific order together in one feature file, Cucumber will run them in the order they are written. As this happens within your normal test run, they will all show up in the same report.
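
For example (a sketch with made-up feature and scenario names; note that Gherkin tags are written with @), one feature file keeps the ordered scenarios together, and a single run such as cucumber features/ordered.feature produces one report:

# ordered.feature -- scenarios run top to bottom within the file
Feature: Ordered flow

  @test1
  Scenario: First step
    Given the system is reset

  @test2
  Scenario: Second step
    Given the first step has completed

  @test3
  Scenario: Third step
    Given the second step has completed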
That said, it might be worth looking into why your tests depend on each other and whether there is a way to remove that dependency, as this kind of coupling is generally bad practice.

Resources