I want to exclude cobertura from instrumenting code if test=true - cobertura

I want to exclude cobertura from instrumenting code if test=true.
Right now I have all my require statements at the top:
require 'buildr/java/cobertura'
require 'buildr/scala'
Then each time the build runs I get this:
Instrumenting classes with cobertura data file
C:/usr/git_workspaces/reports/cobertura.ser
Which means that my production code has cobertura instrumentation in it.
This is the next section of my buildfile:
compile.with CORE, SLF4J, LOG4J, WS_CLIENTS, JODA_TIME,
[omitted for brevity]
compile.options.other = %w(-encoding UTF-8)
cobertura.exclude '[omitted for brevity]'
resources.filter.using *RESOURCES_FILTER
test.using :junit
# need this because of forking. It does not appear to use the environmental variables defined above.
test.using :java_args => ["-XX:MaxPermSize=128M"]
test.with JUNIT, SCALATEST, MOCKITO, POWERMOCK, HAMCREST, SPRING.test
# Package is below here but the code has already been instrumented...
Is compile.with the place where compilation actually occurs? Could I wrap it in an if test and only add cobertura there?

If you do not require "buildr/java/cobertura" then there will be no instrumenting.
We 'solved' it like this (at least one command-line parameter has to include "cobertura", otherwise the classes are not instrumented):
ARGV.each do |a|
  if a.include?("cobertura")
    require "buildr/java/cobertura"
    break
  end
end
You could do this:
require_cb = true
ARGV.each do |a|
  if a.include?("test=yes")
    require_cb = false
    break
  end
end
require "buildr/java/cobertura" if require_cb
hth

Related

Tarpaulin reports log::info() lines as uncovered. How to ignore trace call in code coverage?

Running code coverage in Rust with tarpaulin.
Tarpaulin executes the same test cases as cargo test, which does not print debug traces such as log::info() and log::warn() in the original code. Since the traces are turned off, code is not generated for them and those lines never get covered by the tests. The metric we are looking for is how well the code is covered by our test cases.
I know it is possible to use #[cfg_attr(tarpaulin, ignore)] to ignore some code, but prepending every log call with this would not do readability any good.
How to turn off tarpaulin code coverage for log::info()?

Jest snapshot is redundant

I am writing snapshot tests using Jest for a Node.js and React app and have installed the snapshot-tools extension in VS Code.
Some of my tests are displaying this warning in the editor:
[snapshot-tools] The snapshot is redunant
(Presumably it is supposed to say redundant)
What does this warning mean? I am wondering how I can fix it.
I was having the same problem, so I took a look at the "snapshot-tools" code. It marks a snapshot section as redundant if it doesn't see a corresponding test in the test file that has a matching name and that calls "expect().toMatchSnapshot()" or something similar.
The problem is (as it says in the "Limitations" section of the plugin's marketplace page) that it does a static analysis of the test file to find the tests that use snapshots, and that static analysis cannot detect tests that have dynamically generated names, or that don't directly call "expect().toMatchSnapshot()" in the test's body.
For example, I was getting false-positive "redundant" warnings because I had some tests that were doing "expect().toMatchSnapshot()" in their "afterEach()" function, rather than directly in the test body.
This could indicate that the snapshot is no longer linked to a valid test - have you changed your describe/it strings without updating the snapshots? Try running the tests with -- -u appended (e.g. npm test -- -u). If that doesn't work, have a look at your snapshots file and compare the titles to your test descriptions.

apps and plugin revisions, making an Origen app production ready

Our development app, which has ~12 homegrown plugins and many Origen gems, needs to move from development to production-level quality. I see many miscellaneous topics starting here, but I don't see much on app/plugin versioning, Origen modes (debug and production), and items such as reference files and examples. I know there is the 'origen specs' command which runs rspec, but what about 'origen test' and other related items?
thx
Yes, documentation on this topic is a bit thin on the ground; here is some info to remedy that:
Runtime mode
Yes, Origen does have the concept of debug and production modes - http://origen-sdk.org/origen/guides/runtime/mode/
Running in production mode does give you some rudimentary protection against local edits finding their way into production builds, but in reality I think there are many projects in production today which just use debug mode all the time.
I think the main benefit of production mode is that you can hook into it to implement domain specific checks that you care about. For example, in one of my apps I have a before_generate callback handler which checks that the product firmware we are using is the latest released version when we are building for production.
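As a rough illustration, such a check might look something like this (a sketch only; Origen.mode.production? is assumed to be the runtime mode query, and the firmware-version helper is a hypothetical application-specific method):
# In a callback handler object registered with the application
def before_generate(options = {})
  if Origen.mode.production?
    # latest_released_firmware? is a hypothetical helper implemented by the app
    fail 'Production builds must use the latest released firmware!' unless latest_released_firmware?
  end
end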
Another useful technique is to embed the mode in things like the program, pattern or test names generated by your application. That makes it very clear if someone has used a debug build for something that they want to release to production.
Versioning
Company-internal plugins should be version-tagged and released in the same way as Origen and the open-source plugins are.
Normally everyone does that by running the origen rc tag command, which will tag the plugin, maintain a release history, and build and publish the gem to your gem server.
Plugins should generally be careful not to lock to specific versions of their dependencies in their .gemspec file, and should instead specify a minimum version. Something like '~> 1', '> 1.2.3' is good, which means any version 1 of the dependency that is greater than 1.2.3. See here for more on how to specify gem versions: http://guides.rubygems.org/patterns/#declaring-dependencies
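In a .gemspec that could look something like this (the plugin and dependency names here are placeholders):
# my_company_plugin.gemspec
Gem::Specification.new do |spec|
  spec.name    = 'my_company_plugin'
  spec.version = '2.1.0'
  spec.summary = 'Example company-internal Origen plugin'
  # Any version 1 of the dependency that is greater than 1.2.3
  spec.add_runtime_dependency 'origen_helpers', '~> 1', '> 1.2.3'
end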
Top-level applications, on the other hand, generally want to lock to specific versions of gems within their Gemfile so that their builds are reproducible. In theory you get that for free by checking in the Gemfile.lock file, which means that when Bundler builds the gem bundle in a new workspace it will use the exact versions specified in that file, even though the rules in the Gemfile may allow a range of possible versions.
In practice, many engineers prefer to take a stricter/more declarative approach by specifying absolute versions within their Gemfile.
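For example, a top-level application's Gemfile might pin its plugins like this (gem names and versions are placeholders):
# Gemfile
source 'https://rubygems.org'
gem 'origen', '0.7.24'
gem 'my_company_plugin', '2.1.0'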
Unit Testing
Unit testing is almost always done via the rspec tool and by convention launched via the origen specs command.
Unit tests are good because they can target very specific features and they are therefore relatively easy to debug when they fail.
These tests are often very useful for doing test-driven development, particularly when creating new APIs. The technique is to use the act of writing the test to define the API you wish you had, then make the changes required to make the test pass.
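For instance, a spec written first to sketch out a wished-for API might look like this (the Timer class and its methods are hypothetical, purely to illustrate the style):
# spec/timer_spec.rb
require 'spec_helper'

describe 'Timer' do
  it 'converts a period in microseconds into clock cycles' do
    timer = Timer.new(clock_hz: 1_000_000)
    expect(timer.cycles_for(period_in_us: 10)).to eq(10)
  end
end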
The downside to these tests is that they take time to write and this often gets dropped when under time pressure.
There is also an art to writing them and sometimes it can be difficult for less experienced engineers to know how to write a test to target a specific feature.
For this reason, unit tests tend to be used more in plugins, particularly those which provide APIs to parent applications.
Top-level applications, particularly those concerned with generating patterns or test programs, tend to instead rely on diff-based testing...
Diff-based Testing
Diff-based or acceptance testing means that someone blesses a particular output (i.e. a pattern or a test program file) as 'good', and then tests can be created which simply say that as long as the current output matches some previously known-good output, the test should pass.
If a diff is encountered, the test will fail and an engineer has to review the changes and decide whether the change is unwanted and highlighting a genuine problem, or whether it is acceptable/expected, in which case the new output should become the new known-good reference.
The advantage to this style of testing is that it doesn't take any time to write the test and yet it can provide extremely high test coverage. The downside is that it can sometimes be hard to track down the source of an unwanted diff since the test covers a large surface area.
Another issue can be that unwanted diffs can get lost in the noise if you make changes that globally affect all output. For that reason it is best to make such changes in isolation.
Many plugins, and Origen itself, implement this style of testing via a command called origen examples, and the known-good output is checked into the approved directory.
Some applications also implement a command called origen test, which is simply a wrapper that combines origen specs and origen examples into a single command; this is useful for creating a combined test coverage analysis (more on that below).
You should refer to some of the open source repositories for examples of how to create these commands, but all new Origen application shells should come with example code commented out in config/commands.rb.
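As a very rough sketch, the wiring for an origen test command in config/commands.rb tends to look something like the following (illustrative only; follow the commented-out example code in your own app shell for the exact structure):
# config/commands.rb
case @command
when 'test'
  # Combine the unit tests and the diff-based tests into one command
  status = 0
  status += 1 unless system('origen specs')
  status += 1 unless system('origen examples')
  exit status
end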
Checking in the approved output is OK when its footprint is relatively small, but for large production applications which may generate thousands of patterns or files and/or support many different targets, it is not so practical to check all that in.
Also in that case, it sometimes becomes more useful to say "what has changed in the output compared to version X of the application?", rather than always comparing to the latest known good version.
You can manually run such tests by checking out a specific version of your application, generating the output and then use the origen save all command to locally save the approved output. Then checkout the latest version of your application, run the same generation command again and see if there are any changes.
That workflow can become tedious after a while and so Origen provides a helper for it via the Regression Manager: http://origen-sdk.org/origen/api/Origen/RegressionManager.html
It is common for applications to integrate that as a command called origen regression.
A complete implementation of an origen regression command from one of my apps is included below to give an example of how to create it.
In this case we are using Origen's LSF API to parallelize the commands, but if you don't use that, just replace Origen.lsf.submit_origen_job cmd with system cmd to run your generation commands locally.
The regression manager will take care of replicating the same commands in the current and previous version of the application and providing you with the results of the diff.
Note that you can also supply build_reference: false if you want to re-run the regression against the same previous version of the app that you ran last time.
Test Coverage
Running any Origen command with the -c or --coverage switches will enable the generation of a test coverage report which you can review to see how well your test suite is doing.
Here is an example of a coverage report - http://origen-sdk.org/jtag/coverage/#_AllFiles
# commands/regression.rb
require 'optparse'

options = {}
default_targets = %w(product_a.rb product_b.rb)
short_targets = %w(product_a.rb)

opt_parser = OptionParser.new do |opts|
  opts.banner = 'Usage: origen regression [options]'
  opts.on('-t', '--target NAME1,NAME2,NAME3', Array, 'Override the default target, NAME can be a full path or a fragment of a target file name') { |t| options[:target] = t }
  opts.on('-e', '--environment ENV', String, 'Override the default environment (tester).') { |e| options[:environment] = e }
  opts.separator " (default targets: #{default_targets})"
  opts.on('-s', '--short', 'Run a short regression (a single target)') { options[:short] = true }
  opts.separator " (default short target: #{short_targets})"
  opts.on('-f', '--full', 'Run a full regression (all production targets)') { options[:full] = true }
  opts.on('-a', '--all', 'An alias for --full') { options[:full] = true }
  opts.on('-c', '--ci', 'Build for bamboo CI') { options[:ci] = true } # avoids problematic targets for continuous integration
  opts.on('-n', '--no_reference', 'Skip the reference build (careful!). Use when re-running the same regression back-back.') { options[:build_reference] = false }
  opts.on('--email', 'Send yourself an email with the results when complete') { options[:send_email] = true }
  opts.on('--email_all', 'Send the results email to all developers') { options[:email_all_developers] = true; options[:send_email] = true }
  # Regression types -- saying no is easier to define the logic
  opts.on('--no-patterns', 'Skip the vector-based patterns in the regression test') { options[:no_patterns] = true }
  opts.on('--no-programs', 'Skip the programs in the regression test') { options[:no_programs] = true }
  # Regression type only -- have to omit all other regression types
  opts.on('--programs-only', 'Only do programs in the regression test') do
    options[:no_patterns] = true
  end
  opts.separator ' (NOTE: must run program-based regression first to get pattern list prior to pattern regressions)'
  opts.on('--patterns-only', 'Only do vector-based patterns in the regression test') do
    options[:no_programs] = true
  end
  opts.on('-v', '--version type', String, 'Version for the reference workspace, latest, last, tag (ex: v1.0.0) or commit') { |v| options[:version] = v }
  opts.on('--service_account', 'This option is set true only when running regressions through the Bamboo CI, a normal user should never have to use it') { options[:service_account] = true }
  opts.on('--reference_workspace location', String, 'Reference workspace location') { |ref| options[:reference_workspace] = ref }
  opts.separator ''
  opts.on('-h', '--help', 'Show this message') { puts opts; exit }
end
opt_parser.parse! ARGV

if options[:version]
  v = options[:version]
end
if options[:reference_workspace]
  ref = options[:reference_workspace]
end

if options[:target]
  t = options[:target]
  t[0].sub!(/target\//, '') if t.length == 1 # remove path if there -- causes probs below
elsif options[:short]
  t = short_targets
elsif options[:full]
  t = Origen.target.production_targets.flatten
else
  t = default_targets
end

if options[:environment]
  e = options[:environment]
  e.sub!(/environment\//, '') # remove path if there -- causes probs below
else
  e = 'v93k.rb' # default environment
end
options[:target] = t
options[:environment] = e

def highlight(msg)
  Origen.log.info '######################################################'
  Origen.log.info msg
  Origen.log.info '######################################################'
end

# Required to put the reference workspace in debug mode since the regression.rb file is modified,
# in future Origen should take care of this
Origen.environment.temporary = "#{options[:environment]}"

Origen.regression_manager.run(options) do |options|
  unless options[:no_programs]
    highlight 'Generating test programs...'
    Origen.target.loop(options) do |options|
      cmd = "program program/full.list -t #{options[:target]} --list #{options[:target]}.list -o #{Origen.root}/output/#{Origen.target.name} --environment #{options[:environment]} --mode debug --regression"
      Origen.lsf.submit_origen_job cmd
    end
    highlight 'Waiting for test programs to complete...'
    Origen.lsf.wait_for_completion
  end

  unless options[:no_patterns]
    highlight 'Generating test patterns...'
    Origen.target.loop(options) do |options|
      # Generate the patterns required for the test program
      Origen.file_handler.expand_list("#{options[:target]}.list").each do |pattern|
        Origen.lsf.submit_origen_job "generate #{pattern} -t #{options[:target]} -o #{Origen.root}/output/#{Origen.target.name} --environment #{options[:environment]} --regression"
      end
    end
  end
end

Skip JHipster integration tests using gradle

I am using JHipster 3.4.0 with gradle.
Excuse me for newbie questions.
There are times when I don't trust hot reloads and want to do full clean build.
However, executing the 'build' task always leads to running the integration tests.
Doing something like
test {
  // include '**/*UnitTest*'
  // include '**/*IntTest*'
  // ignoreFailures true
  // reports.html.enabled = false
}
in build.gradle doesn't help.
So how do I skip integration tests for full clean build?
And just to confirm, the task to do a full clean build is 'build', right?
Thanks in advance,
Sam
To partially answer my own question: I just found out that the command line
gradle build -x test
will do the trick. But I don't think that answers my question of why commenting out the test task contents above doesn't work.
The reason why the tests are still running when you comment out the includes is that the test task has default values (if you don't overwrite them), so all classes under src/test/java are used as test classes. Your approach of passing a command-line parameter is the way to go.

Cleanup steps for Cucumber scenarios

Is there a way to define the cleanup steps for all of the scenarios for a feature in Cucumber? I know that Background is used to define the setup steps for each scenario that follows it, but is there a way to define something like that to happen at the end of each scenario?
You should also note that 'Before' and 'After' are global hooks, i.e. those hooks are run for every scenario in your feature files.
If you want the setup and teardown to be run for just a few test cases (grouped by tags) then you need to use tagged hooks, where the syntax is:
Before('@cucumis, @sativus') do
  # This will only run before scenarios tagged
  # with @cucumis OR @sativus.
end

AfterStep('@cucumis', '@sativus') do
  # This will only run after steps within scenarios tagged
  # with @cucumis AND @sativus.
end
For more info: https://github.com/cucumber/cucumber/wiki/Hooks
You can use an After hook that will run after each scenario:
After do
  ## teardown code
end
There's also a Before hook that will allow you to set up state and/or test data prior to the scenario:
Before do
  ## setup code
end
The Before and After hooks provide the functionality of setup and teardown from Test::Unit, and they are generally located in hooks.rb in the features/support directory.
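So for the per-feature cleanup in the question, one option is to tag the feature and pair that tag with an After hook (a sketch; the @billing tag and the cleanup call are placeholders):
# features/support/hooks.rb
After('@billing') do
  # Runs after every scenario in features tagged with @billing
  cleanup_test_data # hypothetical helper defined elsewhere in your support code
end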
