Is there a way to include a secondary coverage report during a Jest run through? - jestjs

Using Jest I have created a series of tests. Some of the tests run through Puppeteer; others are just basic tests. However, they all run in one test run (for reasons I have).
The coverage for the basic tests is being collected. I am also getting a semi-coherent coverage report using puppeteer-to-istanbul. I have to clean it up, but it's workable at the moment.
I would like to find a way, as part of that single run, to combine the coverage data coming from Puppeteer with the coverage report being generated by Jest and its coverage provider, and have them reported in a single step at the end of the test run.
I have found various methods of combining the reports after the test run, but, not to bog this question down, I will just say that this is not an ideal solution.
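For context, here is a minimal sketch of the Puppeteer side of such a setup (the file name, URL, and test body are purely illustrative; the pti.write call follows the puppeteer-to-istanbul pattern of dumping raw Istanbul data into .nyc_output/):

// app.puppeteer.test.js - illustrative Jest test that collects browser
// coverage with Puppeteer and hands it to puppeteer-to-istanbul.
const puppeteer = require('puppeteer');
const pti = require('puppeteer-to-istanbul');

describe('page under test', () => {
  let browser;
  let page;

  beforeAll(async () => {
    browser = await puppeteer.launch();
    page = await browser.newPage();
    // Start collecting JS coverage before the page loads.
    await page.coverage.startJSCoverage();
    await page.goto('http://localhost:3000'); // assumed local test server
  });

  afterAll(async () => {
    const jsCoverage = await page.coverage.stopJSCoverage();
    // Writes raw Istanbul data to ./.nyc_output, separate from the report
    // Jest's own coverage provider produces.
    pti.write(jsCoverage, { includeHostname: true, storagePath: './.nyc_output' });
    await browser.close();
  });

  test('page has a title', async () => {
    await expect(page.title()).resolves.toBeTruthy();
  });
});

The question, then, is how to get the data written by pti.write into the same report Jest produces at the end of this run, rather than merging the two afterwards.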

Related

GitLab CI external test for coverage "fails"

I am using GitLab and its CI for a project.
I used to measure coverage with some CI jobs until those scripts stopped working ("keyword cobertura not valid").
Around the same time I found that the CI had added some "external" jobs that automatically handle coverage (see screenshot).
I don't know why they appeared; maybe it is because I have linked the project with the external Codecov site.
This was a pleasant surprise at the time because I didn't have to maintain a special script for coverage.
However, these external coverage checks are now failing and I can't merge my changes because of it.
The worst part is that these are not normal scripts, so I can't see what is wrong with them. There isn't even a Retry button (see screenshot, on the right).
I don't want to throw away my otherwise perfectly working merge request.
How can I see what is wrong about this part of the CI?
Clicking on the failed check sends me to the Codecov website, and I don't see anything wrong with it there.
Here is the link to the pipeline: https://gitlab.com/correaa/boost-multi/-/pipelines/540520025
I think I solved the problem: it could have been that the coverage percentage decreased (by 0.01%!) and that was interpreted by "the system" as a failure.
I added tests to cover some uncovered lines and the problem was solved.
If this is the right interpretation, it is indeed nice, but also scary, because some big changes require taking a hit in coverage.
In my particular case, what happened is that I simplified code and the total number of lines went down, making the covered fraction lower than before (for example, with one uncovered line, 9,999 covered lines out of 10,000 is 99.99%, but after deleting 1,000 fully covered lines, 8,999 out of 9,000 is about 99.9889%, which rounds down to 99.98%).
I think this error might have something to do with the coverage range you have declared.
Looking at your .codecov.yml file:
coverage:
  precision: 2
  round: down
  range: "99...100"
You're excluding 100% when using three dots in the range, and you have achieved 100% coverage with this branch. I feel like this shouldn't matter, but you could be hitting an edge case with codecov. Maybe file a bug report with them.
Try changing the range to 99..100. Quotes should be unnecessary.
https://docs.codecov.com/docs/coverage-configuration
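For reference, with that suggestion applied the relevant part of .codecov.yml would look roughly like this (the rest of the file left unchanged):

coverage:
  precision: 2
  round: down
  range: 99..100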

How can I run only integration tests

Is there a way to only run integration tests, but not unit-tests?
I've tried:
cargo test --tests: runs unit + integration tests
cargo test --test test_name: runs one specified test
Is it currently not possible to only run integration tests or am I missing something?
You can run ONLY the integration tests by:
cargo test --test '*'
Please note that only '*' will work; neither * nor "*" works.
Reference: https://github.com/rust-lang/cargo/issues/8396
Thing is, Cargo doesn't really distinguish between integration tests and unit tests, since there isn't a real difference between the two in terms of how you manage and implement them; the difference is purely semantic. Not all codebases even have that separation. The book and the reference call them unit tests and integration tests for simplicity and to avoid confusion, but technically there is no such distinction.
Instead of separating tests into two logical categories, Cargo has a flexible filtering system, which allows you to run only the tests whose names match a certain pattern. The book has a section dedicated to this system. If you'd like to exclude certain tests because they take a long time to run or are otherwise undesirable to run along with all the others, annotate them with #[ignore]. Otherwise, use a consistent naming scheme for the tests so that you can filter them by name.
The Cargo reference page also mentions that you can use the target options in the Cargo.toml manifest to control what runs when you use --tests.
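As a rough sketch of that manifest option (assuming the conventional layout with unit tests inside src/ and integration tests under tests/; check the Cargo reference for the exact semantics before relying on it):

[lib]
# With this flag the library is no longer built as a test target, so
# `cargo test --tests` should only pick up the integration tests in
# tests/, whose targets default to test = true. Note this also affects
# a plain `cargo test`.
test = false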

How to compare WebdriverIO reports and generate the difference?

We are using WebdriverIO for our automated tests, and at the end we generate HTML reports with Mochawesome based on the resulting JSON files.
We now have a lot of tests implemented and we want to get the difference between two test runs as quickly as possible. It would therefore be great to have a way to compare the results of two test runs with each other and to generate an HTML report containing only the differences.
Maybe there is an existing implementation/package that does this? Of course it is possible to compare the two JSON result files by hand, but I would prefer an existing solution to save the effort.
How would you do the comparison in my case?
Thanks,
Martin
You could set up a job in a CI tool like Jenkins.
There it always compares the latest results with the previous build and tells you whether each test is a new failure, a regression, or a fixed script.
Regression indicates that the test passed in the previous build but is failing in the new build.
Failed indicates that it has been failing for the past couple of builds.
Fixed indicates that it was failing in the previous build but is now passing in the latest build.
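If you do end up diffing the JSON yourself, a small Node script along these lines is one way to do it. The field names (results, suites, tests, fullTitle, state) are assumptions about the Mochawesome JSON layout, so adjust them to whatever your report version actually emits:

// diff-runs.js - hypothetical comparison of two Mochawesome JSON result files.
const fs = require('fs');

// Recursively collect a { fullTitle: state } map from a report object.
// The traversed keys (results, suites, tests) and the fields fullTitle/state
// are assumptions; rename them to match your Mochawesome version.
function collect(node, out = {}) {
  if (!node || typeof node !== 'object') return out;
  for (const test of node.tests || []) out[test.fullTitle] = test.state;
  for (const child of [...(node.results || []), ...(node.suites || [])]) {
    collect(child, out);
  }
  return out;
}

const previous = collect(JSON.parse(fs.readFileSync(process.argv[2], 'utf8')));
const latest = collect(JSON.parse(fs.readFileSync(process.argv[3], 'utf8')));

for (const [title, state] of Object.entries(latest)) {
  const before = previous[title];
  if (before === undefined) console.log(`NEW (${state}): ${title}`);
  else if (before !== 'failed' && state === 'failed') console.log(`REGRESSED: ${title}`);
  else if (before === 'failed' && state !== 'failed') console.log(`FIXED: ${title}`);
  else if (state === 'failed') console.log(`STILL FAILING: ${title}`);
}

Run it as node diff-runs.js previous.json latest.json; what it prints as REGRESSED or FIXED corresponds to the Jenkins-style categories above.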

Coverage drops when using babel

I put off the decision to use Babel, but found that it is necessary in order to write better code.
Before Babel I used Mocha and Chai; I started to test my code and reached 100% coverage. But since using Babel, my code coverage has dropped significantly (of course), as I am only covering the resulting ES5 output.
So my question is: how can I test my source code without a huge drop in my statistics?
Generally, the core issue is that Babel has to insert code to cover all of the edge cases of the spec, and that inserted code may not matter from the standpoint of coverage calculation.
The best approach currently is to use https://github.com/istanbuljs/babel-plugin-istanbul to add the coverage-tracking metadata to your original ES6 code, which means that even though Babel eventually converts it to ES5, the coverage is reported against the ES6 code.
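A common way to wire that plugin up (following the pattern in its README; your Babel setup may differ) is to enable it only under a test environment in .babelrc, alongside whatever presets you already use, and then run the tests with NODE_ENV=test (or BABEL_ENV=test) under nyc so the instrumentation only applies during testing:

{
  "env": {
    "test": {
      "plugins": ["istanbul"]
    }
  }
}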

Check test isolation tool

I have some tests written with Mocha.
There is probably something wrong with test isolation.
When I run all the tests, everything is fine.
When I pick only some describe blocks, some tests fail.
Does anyone know of any tools to check test isolation? Maybe some tool that automatically runs tests/blocks in a different order multiple times, or something else?
rocha is one answer.
Found it in a Mocha GitHub discussion.

Resources