I have a multimodule Maven project with modules such as moduleA, moduleB, and moduleC. I also have an entirely separate moduleTest that contains integration tests run by the Failsafe plugin.
I want a report generated by Cobertura (or any other Maven plugin) that can tell me which lines in all of moduleA, B, and C are covered by my integration tests.
I don't think http://jira.codehaus.org/browse/MCOBERTURA-65 helps me. Is there an easy way to achieve this?
One possible solution is to use Emma. Set up code instrumentation in your source-code modules using the instrument goal:
http://mojo.codehaus.org/emma-maven-plugin/instrument-mojo.html
After successful compilation and instrumentation, running the tests will generate coverage data. You can then run the standalone Emma tool to generate a report from it:
java emma report -r txt,xml,html -in coverageA.em,coverageB.em,coverageC.em,coverage.ec -sp srcA/,srcB,srcC
Replace the coverage*.em entries with the paths to the metadata files Emma generates in the source-code modules; coverage.ec is the path to the coverage file generated in the test module, and the src* entries should be replaced with the paths to your source directories. Detailed documentation is here:
http://emma.sourceforge.net/reference/ch02s04s03.html
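For the Maven side, binding the instrument goal from the plugin linked above in each source-code module's pom.xml could look roughly like this; both the plugin version and the phase binding shown here are assumptions, so adjust them to the emma-maven-plugin release you actually use:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>emma-maven-plugin</artifactId>
  <version>1.0-alpha-3</version>
  <executions>
    <execution>
      <phase>process-classes</phase>
      <goals>
        <goal>instrument</goal>
      </goals>
    </execution>
  </executions>
</plugin>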
You can also do this with JaCoCo, though in a similarly tricky way.
I've tried to look at the source code but I am a little confused. Does the Jest changedSince option also test related files?
Yes. To be more specific, it does an inverse resolution of dependents.
When it comes to actually reporting coverage, it will run tests as mentioned, but it will only report the coverage of the changed file(s).
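For example, assuming the comparison branch is main (the branch name here is just an illustration), a coverage run might look like this:

jest --changedSince=main --coverage

This runs the tests related to the files changed since main and, as noted above, reports coverage only for those changed files.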
I created a simple hierarchical C++ project to help me learn SCons, as I want to get away from CMake and qmake. I have registered it in a GitHub repository at https://github.com/pleopard777/SConsEx . The project is organized into two primary subdirs: packages contains two libraries and testing contains two apps. The packages dir needs to be built first, and when it is complete the testing dir needs to be built. Under the packages dir the core library must be compiled first and the numerics library second; numerics depends on core. Under the testing dir, the core_tests app depends on the core library and the numerics_tests app depends on both core and numerics.
I am struggling with what seems to be limited documentation and examples for SCons, so I am posting this here in search of some guidance. Here are some of the initial problems I am having; any guidance will be greatly appreciated:
1) [Edit/FIXED]
2) In the packages/numerics/ dir the source files depend on the core library. The file numerics_config.h requires the file ../core/core_config.h; however, when building, that core header cannot be found. The following SConstruct lines don't help:
[code]
include = '../../packages'
env = Environment(CPPPATH=include)
[/code]
Again, this is just a start to the project and I am using it to learn SCons. Any guidance will be appreciated ... I'm sure I will be asking lots more questions as this project progresses.
Thanks!
P
Fixed in a pull request to your repo.
Note that you had some C++ issues as well; I've fixed those too.
See:
https://github.com/pleopard777/SConsEx/pull/1
(Please don't delete your repo so others can find the solution as well)
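For readers who cannot open the pull request, here is a minimal sketch of the usual way to handle the include path and the library dependencies for a layout like this. File and target names below are assumptions based on the question, not the actual contents of the PR; the idea is to anchor CPPPATH at the project root with '#' (so numerics_config.h can include "core/core_config.h") and to pass the library nodes down to the test programs.

# SConstruct (top level); '#' means "relative to the directory containing SConstruct"
env = Environment(CPPPATH=['#/packages'])

core = SConscript('packages/core/SConscript', exports='env')
numerics = SConscript('packages/numerics/SConscript', exports=['env', 'core'])
SConscript('testing/SConscript', exports=['env', 'core', 'numerics'])

# packages/core/SConscript (numerics is analogous)
Import('env')
core = env.StaticLibrary('core', Glob('*.cpp'))
Return('core')

# testing/core_tests/SConscript
Import('env', 'core')
env.Program('core_tests', Glob('*.cpp'), LIBS=core)

Because the test programs depend on the library nodes, SCons builds packages before testing without any explicit ordering.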
In my package.json I define a yarn test script that only calls the nightwatch command. As it is, it runs both the scenarios found in the features folder and any tests that are not written with Cucumber (plain Nightwatch tests under the tests/ folder).
Is there a way to distinguish the "cucumber+nightwatch" tests from the plain Nightwatch ones, so I can filter and run only one of the two sets?
The author of nightwatch-cucumber suggested this approach via e-mail:
Use an environment variable in the Nightwatch configuration (the JS-based one). This environment variable can decide the source of the tests.
One note: this package will be deprecated when Nightwatch 1.x comes out, so I suggest not investing too much time in it and instead using the new nightwatch-api package.
A quick workaround to filter the plain Nightwatch tests out of the cucumber+nightwatch run was to set src_folders in the nightwatch.json file to an empty array, like this:
{
  "src_folders" : [],
  ...
}
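Combining the two ideas, here is a sketch of a JS-based config (nightwatch.conf.js) that uses an environment variable to decide whether the plain Nightwatch tests are included. The variable name TEST_TYPE is made up for illustration, and the exact keys depend on how nightwatch-cucumber is wired into your setup:

// nightwatch.conf.js
const onlyCucumber = process.env.TEST_TYPE === 'cucumber';

module.exports = {
  // With an empty src_folders only the Cucumber scenarios run;
  // otherwise include the plain Nightwatch tests as well.
  src_folders: onlyCucumber ? [] : ['tests'],
  // ...the rest of your existing configuration
};

Then, for example, TEST_TYPE=cucumber yarn test runs only the Cucumber scenarios, while a plain yarn test runs the Nightwatch tests under tests/.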
I've recently started getting into unit testing for my Node projects with the help of Mocha. Things are going great so far and I've found that my code has improved significantly now that I'm thinking about all the angles to cover in my tests.
Now, I'd like to share my experience with the rest of my team and get them going with their own tests. Part of the information I'd like to share is how much of my code is actually covered.
Below is a sample of my application structure, which I've separated into different components, or modules. In order to promote reuse I'm trying to keep all dependencies to a minimum and isolated to the component folder. This includes keeping the tests isolated as well, rather than in the default test/ folder in the project root.
| app/
| - component/
| -- index.js
| -- test/
| ---- index.js
Currently my package.json looks like this. I'm toying around with Istanbul, but I'm in no way tied to it. I have also tried using Blanket with similar levels of success.
{
  "scripts": {
    "test": "clear && mocha app/ app/**/test/*.js",
    "test-cov": "clear && istanbul cover npm test"
  }
}
If I run my test-cov command as it is, I get the following error from Istanbul (which is not helpful):
No coverage information was collected, exit without writing coverage information
So my question would be this: Given my current application structure and environment, how can I correctly report on my code coverage using Istanbul (or another tool)?
TL;DR
How can I report on my code coverage using Node, Mocha, and my current application structure?
EDIT
To be clear, Mocha is running the tests correctly in the current state; it's the code coverage report that I'm struggling to get working.
EDIT 2
I received a notification that another question may have already answered mine. It only suggested installing Istanbul and running the cover command, which I have already done. Another suggestion recommends running the test commands with _mocha; from the research I have done, that is meant to prevent Istanbul from swallowing the flags intended for Mocha, and it is not necessary in newer versions of Mocha.
You should try running your tests like this:
istanbul cover _mocha test/**/*.js
You need a .istanbul.yml file. I don't see any reference to one; it's hard to say more without knowing its contents.
I don't think there's quite enough information to solve this in the question. I'll update this answer if you update the question, especially before the bounty expires, eh?
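For reference, here is a minimal .istanbul.yml sketch. These are the commonly used keys (instrumentation root/excludes and reporting), but check them against your Istanbul version's documentation and adjust the paths to the layout in the question:

instrumentation:
  root: ./app
  excludes: ['**/test/**']
reporting:
  reports: ['lcov', 'text-summary']
  dir: ./coverage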
This is how I get code coverage on all my JS projects (it looks like the one from Sachacr):
istanbul cover _mocha -- -R spec --recursive test
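Adapted to the layout in the question, the scripts block could look roughly like this (the paths are taken from the question; whether the ** glob expands depends on your shell, and everything after -- is passed to _mocha rather than consumed by Istanbul):

{
  "scripts": {
    "test": "mocha app/ app/**/test/*.js",
    "test-cov": "istanbul cover _mocha -- app/ app/**/test/*.js"
  }
}

The key change from the original test-cov script is invoking _mocha directly with the test paths instead of wrapping npm test.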
My team is creating a build system based on SCons. We have created a bunch of helper classes in our own site_scons/site_tools folder.
My task is to create and run tests on our code, using pyunit. The test code would probably live in a subfolder, with the directory layout looking something like:
SConstruct
our_source_code/
Sconscript
site_scons/
site_tools/
a.py
b.py
c.py
tests/
test_a.py
test_b.py
test_c.py
Now my question is: what is the best way to invoke our tests, given that they will probably require the correct SCons environment to be set up? (That is, a.py uses SCons.Environment.)
Do I add a Builder or a Command? Or something else?
I think the best approach would be to use the test setup code from SCons itself. This requires an SVN checkout of SCons, as the test files are not shipped with the regular SCons tarballs. This is probably workable, as not everyone on your team will be writing tools and running tests on them.
For example, this is the test for javac. Basically, you write out the files that you want, run an SConstruct, and then check that the results are what you expected. You can mock tools with Python scripts to ensure they are really called with the flags and files that you expect. For example:
import TestSCons

test = TestSCons.TestSCons()
# Write a minimal SConstruct that loads the tool under test and invokes it.
test.write('SConstruct', '''env = Environment(tools = ["yourtool"])
env.RunYourTool()''')
test.write('sourcefile.x', 'Content goes here')
# Run the build; stderr=None means stderr is not checked against an expected value.
test.run(arguments = '.', stderr = None)
test.must_match('outputfile', 'desired contents')
test.pass_test()
There are also more instructions on writing SCons tools tests on the swtoolkit wiki, which is a seemingly-defunct SCons extension from Google. The info on the wiki is still useful, and there are some good examples on how to write tests for custom SCons tools.
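If you prefer to stay with plain pyunit, as the question mentions, one possible shape is a test that imports a tool module directly and exercises it against an SCons Environment. This is only a sketch: the generate/exists functions below are the usual SCons tool convention, not necessarily what a.py provides, and the sys.path adjustment assumes the tests directory sits next to site_scons; substitute your real layout and helper names.

# tests/test_a.py
import os
import sys
import unittest

# Make site_scons/site_tools importable; adjust the relative path to your layout.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'site_scons', 'site_tools'))

import a  # the tool under test

import SCons.Environment


class ToolATests(unittest.TestCase):
    def test_tool_configures_environment(self):
        env = SCons.Environment.Environment(tools=[])  # bare environment, no default tools
        a.generate(env)                                # conventional SCons tool entry point
        self.assertTrue(a.exists(env))


if __name__ == '__main__':
    unittest.main()

Run it with python -m unittest discover tests from the directory containing SConstruct; SCons itself must be importable in that Python environment.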