I can see Jest coverage for defined scopes, i.e. functions and classes. However, it often does not provide line-by-line coverage tracking for a file. Am I missing or misunderstanding something?
I am particularly interested in the test coverage of this project.
I am using GitLab and its CI for a project.
I used to measure coverage with some CI jobs until those scripts stopped working ("keyword cobertura not valid").
Simultaneously, I found that the CI had added some "external" jobs that automatically handle coverage (see screenshot).
I don't know why this appeared; maybe it is because I have linked the project with the external Codecov site.
This was a pleasant surprise at the time because I didn't have to maintain a special script for coverage.
However, these external coverage tests are now failing, and I can't merge my changes because of it.
The worst part is that these are not normal scripts, so I can't see what is wrong with them, and there isn't even a Retry button (see screenshot, on the right).
I don't want to throw away my otherwise perfectly working merge request.
How can I see what is wrong about this part of the CI?
Clicking on the failed test sends me to the Codecov website, and I don't see anything wrong there.
Here is the link to the pipeline: https://gitlab.com/correaa/boost-multi/-/pipelines/540520025
I think I solved the problem: it could have been that the coverage percentage decreased (by 0.01%!) and "the system" interpreted that as a failure.
I added tests to cover some uncovered lines and the problem went away.
If this is the right interpretation, this is indeed nice, but also scary, because some big changes sometimes require a hit in coverage.
In my particular example, what happened is that I simplified code and the total number of lines went down, making the covered fraction lower than before.
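If a small, expected dip like this ever needs to be tolerated, Codecov's status checks accept a threshold setting; a minimal sketch, assuming the failing check is the default project status:

    # .codecov.yml -- sketch: let the project status check pass as long as
    # coverage does not drop by more than 0.5% relative to the base commit
    coverage:
      status:
        project:
          default:
            threshold: 0.5%

Without a threshold, any drop at all can fail the check, which would explain a 0.01% decrease being reported as a failure.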
I think this error might have something to do with the coverage range you have declared.
Looking at your .codecov.yml file:
    coverage:
      precision: 2
      round: down
      range: "99...100"
You're excluding 100% when using three dots in the range, and you have achieved 100% coverage with this branch. I feel like this shouldn't matter, but you could be hitting an edge case with codecov. Maybe file a bug report with them.
Try changing the range to 99..100. Quotes should be unnecessary.
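That is, something along the lines of:

    coverage:
      precision: 2
      round: down
      range: 99..100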
https://docs.codecov.com/docs/coverage-configuration
Let's say I am developing an NPM module.
I am using Jest for testing, Webpack for bundling, and TypeScript in general.
When I test the source code, everything is fine, with very good code coverage and all of that. But I think that it is not enough. It is possible that something breaks after the Webpack bundle is generated, for instance a dynamic import (a require with a variable instead of a fixed path) that becomes incorrect after bundling, or other possible scenarios.
How should I write tests that also cover the bundle? Should I test against both the source code (so that I get good coverage) and the bundle? Usually I import things directly from specific files (e.g. /utils/myutil.ts), but with the bundle this would be impossible. How should I handle this?
I do test against the bundle for some of my projects, mostly npm libraries.
To do this I create some code that imports the bundle and write tests against that code. I don't care about coverage in this case; I just want to verify that my library does what it's supposed to do.
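As a rough sketch (the dist/index.js path and the myUtil export are made up; adjust them to whatever your Webpack config actually emits):

    // bundle.test.js -- smoke test against the built output, not the source.
    // Assumes `webpack` has already been run so that dist/index.js exists.
    const lib = require('../dist/index.js');

    describe('bundled library', () => {
      test('exposes the expected public API', () => {
        // myUtil is a placeholder for whatever your entry point exports
        expect(typeof lib.myUtil).toBe('function');
      });

      test('still works after bundling', () => {
        // placeholder call; the point is to exercise a path that goes through
        // the dynamic-require logic mentioned in the question
        expect(() => lib.myUtil('some-input')).not.toThrow();
      });
    });

You can keep these in their own directory (or a separate Jest config) so they only run after a build step.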
In another case (not a library) I'm testing against the bundle but I'm running more integration/e2e tests.
Don't worry about coverage that much unless every function (or most of them) in your code is going to be used by the final user. You should test something the way it is used. 100% coverage is nice to see, but it is very impractical to achieve when projects get big, and in any case it's a waste of time. Of course, some people will disagree :)
Suppose I have a feature file listing a number of scenarios (the actual contents of the feature file are irrelevant).
I would like to reuse the same feature file and provide a number of implementations:
Unit test suite implementation.
This implementation would mock away external aspects such as the DB/repositories.
This implementation would be fast to run.
Acceptance integration test suite implementation.
This would be run in a staging environment and would not mock anything (except perhaps external services where appropriate).
This implementation would be slow to run (because it requires all infrastructure to be up and running).
I have tried:
Placing the feature files in their own sub-project in a mono-repo.
Having other sub-projects depend on the feature files.
Implementing the tests.
Although this works, I can no longer jump from the feature file to the step definitions in IntelliJ (because they are in a different module), which lessens the appeal.
Has anyone else had any experience of doing something similar? Or would you recommend against doing this?
We have done something similar.
What you could do is specify two different runners, RunCucumberTest and RunCucumberIT: the first would run the unit tests and point to the step definitions for the unit tests, and the second would run the integration tests and point to the step definitions for the integration tests. In @CucumberOptions you can specify which step definitions (glue) the runner should use; just make sure to keep the "unit test" step definitions and the "integration test" step definitions in separate files/directories.
If there are any step definitions that don't depend on the distinction between unit and integration test, those could be in a "shared" step definitions file/directory, and called by both runners.
Files ending in *Test are picked up by Surefire during the unit-testing (test) phase.
Files ending in *IT are picked up by Failsafe during the integration-test phase.
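A minimal sketch of the two runners, assuming a Cucumber-JVM version that ships the JUnit 4 runner in io.cucumber.junit (the package, glue, and feature paths are made up):

    // src/test/java/com/example/RunCucumberTest.java  (matched by Surefire)
    package com.example;

    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;
    import org.junit.runner.RunWith;

    @RunWith(Cucumber.class)
    @CucumberOptions(
        features = "classpath:features",
        glue = {"com.example.steps.unit", "com.example.steps.shared"})
    public class RunCucumberTest {
    }

    // src/test/java/com/example/RunCucumberIT.java  (matched by Failsafe)
    // Same package and imports as above; only the glue changes.
    @RunWith(Cucumber.class)
    @CucumberOptions(
        features = "classpath:features",
        glue = {"com.example.steps.integration", "com.example.steps.shared"})
    public class RunCucumberIT {
    }

With the default naming conventions, mvn test then runs only RunCucumberTest, while mvn verify also runs RunCucumberIT (provided the Failsafe plugin is configured in the pom).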
Hope this helps.
I have hundreds of test specifications written in Spock. All of these are functional tests and can be run independently. But I have come across a situation where I need to run a specific test before running some other test.
This was very easy to achieve using a JUnit test suite, and it was very straightforward in Eclipse. But since all my tests are Spock tests (written in Groovy), there is no easy way to create a test suite for them in Spring IDE.
Can someone please share some ideas on how to create a test suite, run some specific tests, and also define the order of the tests?
Any help would be much appreciated.
Spock specifications are valid JUnit tests (or suites) as well; that's why they are recognized by tools such as STS. You should be able to add them to test suites just like any other JUnit test.
On the other hand, it doesn't sound like good practice for your tests to depend on execution order.
If certain tasks need to be performed before test execution, they should be placed in the setup() method. If that logic is common to more than one test, consider extracting it to a parent class.
If all you need is sequential execution of methods within a spec, have a look at @spock.lang.Stepwise, which is handy for testing workflows. Otherwise, you have the same possibilities as with plain JUnit: you can use JUnit (4) test suites, model test suites in your build tool of choice (which might not help within STS), or define test suites via Eclipse run configurations. I don't know how far support for the latter goes, but at the very least it should allow you to run all tests in a package.
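For instance, with Spock 1.x (which runs on JUnit 4 under the hood) a plain JUnit suite can list Spock specifications next to ordinary JUnit classes; a sketch with made-up class names:

    // Hypothetical suite -- FirstSpec, SecondSpec, and LegacyJUnitTest stand in
    // for your own test classes.
    import org.junit.runner.RunWith
    import org.junit.runners.Suite

    @RunWith(Suite)
    @Suite.SuiteClasses([FirstSpec, SecondSpec, LegacyJUnitTest])
    class FunctionalTestSuite {
    }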
Although I think that it won't allow you to specify the order of the tests, you could use Spock's Runner configuration or the @IgnoreIf/@Requires built-in extensions. Have a look at my response to a similar question. It's probably also worth having a look at the RunnerConfiguration javadoc, as it shows that you can include classes directly instead of using annotations.
If the tests you want to run in a specific order are part of the same Spock Specification, then you can use the @Stepwise annotation to direct that the tests (feature methods) are executed in the order they appear in the Specification class.
As others mentioned, it's best to avoid this dependency if you can because of the complexity it introduces. For example, what happens if the first test fails? Does that leave the system in an undefined state for the subsequent tests? So it would be better to prevent the intra-test dependencies with setup() and cleanup() methods (or setupSpec() and cleanupSpec()).
Another option is to combine two dependent tests into a single multi-stage test with multiple when:/then: block pairs in sequence.
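A small sketch of both suggestions (the spec and its contents are invented):

    import spock.lang.Shared
    import spock.lang.Specification
    import spock.lang.Stepwise

    // @Stepwise executes the feature methods in declaration order and skips the
    // remaining ones if an earlier one fails.
    @Stepwise
    class OrderWorkflowSpec extends Specification {

        // @Shared state survives across feature methods within this spec
        @Shared
        List<String> order = []

        def "an item can be added to the order"() {
            when:
            order << "book"

            then:
            order.size() == 1
        }

        def "the order can then be submitted"() {
            // Alternatively, fold dependent steps into a single feature method
            // with several when:/then: pairs instead of relying on @Stepwise.
            when:
            def submitted = !order.isEmpty()

            then:
            submitted
        }
    }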
I'm using mocha to test my node.js application.
I notice that my spec files are getting bigger and bigger over time. Is there any pattern for organizing the test files (e.g. one spec file per test)? Are there other frameworks on top of Mocha that help me structure the tests? Or do you prefer other test frameworks for that reason?
Large test/spec files tend to mean the code under test might be doing too much. This is not always the case, though; often your test code will outweigh the code under test, but if you are finding the files hard to manage, this might be a sign.
I tend to group tests based on functionality. Imagine we have example.js; I would expect example.tests.js to begin with.
Rather than one spec called ExampleSpec, I tend to have many specs/tests based around different contexts. For example, I might have EmptyExample, ErrorExample, and DefaultExample, which have different pre-conditions. If these become too large, you either have missing abstractions or should then think about splitting the files out. So you could end up with a directory structure such as:
    specs/
      Example/
        EmptyExample.js
        ErrorExample.js
        DefaultExample.js
To begin with though, one test/spec file per production file should be the starting point. Only separate if needs be.
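As a rough sketch, one of those context files might look like this (Example and its API are invented for illustration):

    // specs/Example/EmptyExample.js -- the "empty" context for Example
    const assert = require('assert');
    const Example = require('../../src/example'); // adjust to your layout

    describe('Example when empty', () => {
      let example;

      beforeEach(() => {
        example = new Example();
      });

      it('reports a size of zero', () => {
        assert.strictEqual(example.size(), 0);
      });

      it('has no errors recorded', () => {
        assert.deepStrictEqual(example.errors(), []);
      });
    });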