Custom JaCoCo Gradle Plugin - exclude files in report - groovy

I'm new to Gradle and Groovy.
I have something like this in my build.gradle file:
jacocoTestReport {
    afterEvaluate {
        classDirectories.setFrom(files(classDirectories.files.collect {
            fileTree(dir: it, exclude: ['aaa/bbb.*', 'ccc/ddd/*'])
        }))
    }
}
Now I need the same thing in my custom plugin, but I'm not sure how to apply these excludes there.
I have something like this:
project.afterEvaluate { p ->
    def reportTask = project.tasks.findByName('jacocoTestReport') as JacocoReport
    reportTask.classDirectories.setFrom(reportTask.classDirectories.files.) // Now what?
}
And I'm stuck here. Can you help me sort this out, please?
EDIT1:
I tried to solve it using a PatternSet.
println '----------------BEFORE REPORT----------------------'
def patternSet = new PatternSet()
def filesAsString = reportTask.classDirectories.files.collect { file ->
    file.path
}
patternSet.setIncludes(filesAsString)
patternSet.setExcludes(['aaa/bbb.*', 'ccc/ddd/*'])
def excludeFiles = reportTask.classDirectories.asFileTree.matching(patternSet).files
println filesAsString
println excludeFiles
def files = reportTask.classDirectories.asFileTree.minus(excludeFiles)
reportTask.classDirectories.setFrom(files)
println '----------------AFTER REPORT----------------------'
println reportTask.classDirectories.files
Result is not what I expected:
----------------BEFORE REPORT----------------------
[aaa\bbb\build\classes\java\main, aaa\bbb\build\classes\groovy\main, aaa\bbb\build\resources\main]
[]
----------------AFTER REPORT----------------------
[]
The patterns matched nothing for exclusion, and the class directories ended up empty.
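A likely explanation, hedged: PatternSet includes and excludes are Ant-style patterns evaluated relative to each tree root, so setting the includes to absolute file paths matches nothing; resolving asFileTree eagerly, before the classes have been compiled, can also yield an empty tree. A sketch with relative patterns and a lazy setFrom may behave better:

import org.gradle.api.tasks.util.PatternSet

def patternSet = new PatternSet().exclude(['aaa/bbb.*', 'ccc/ddd/*'])
// matching() returns a live, lazily evaluated view of the tree
reportTask.classDirectories.setFrom(
    reportTask.classDirectories.asFileTree.matching(patternSet)
)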

So, I somehow solved it.
def ex = ['aaa/bbb/*', 'ccc/ddd.*', 'eee/fff/*']
def ft = reportTask.classDirectories.asFileTree.matching {
    ex.each {
        exclude it
    }
}
Don't ask me how Gradle wraps this, since it isn't plain Groovy syntax, but it works.
My guess is that Gradle uses Ant-style pattern matching here, since the closure delegates to the 'PatternFilterable' interface and the documentation mentions Ant. I tried to solve it in pure Groovy using PatternSet, which is another class implementing PatternFilterable, but I haven't succeeded.
The simple exclude 'aaa/bbb/*' works fine, though. It doesn't take a list, so I had to call it in an each loop. Maybe there is a simpler, more elegant way and I'll keep playing with it, but this solution is good enough.
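Putting it all together inside the plugin might look like this (a sketch using the question's own names; it assumes the JaCoCo plugin has already been applied, and that PatternFilterable's exclude(Iterable) overload can replace the each loop):

import org.gradle.testing.jacoco.tasks.JacocoReport

project.afterEvaluate {
    def reportTask = project.tasks.findByName('jacocoTestReport') as JacocoReport
    def excludes = ['aaa/bbb/*', 'ccc/ddd.*', 'eee/fff/*']
    reportTask.classDirectories.setFrom(
        reportTask.classDirectories.asFileTree.matching {
            // hedged: exclude(Iterable<String>) is declared on PatternFilterable
            exclude excludes
        }
    )
}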

Related

Gradle - Dynamically building multiple publishes

I have come on to a project which has a Gradle script that builds tasks dynamically based on a list. This is so that when people fork the repository, they can just edit the list. It boils down to something like this:
def files = ['file1', 'file2', 'file3']
files.each { f ->
    task(f, type: Exec) {
        executable 'touch'
        args f
    }
}
What I would like to do is to be able to publish the outputs of the tasks, ideally as multiple publications. I gave it a stab with the snippet below, but Gradle seems to really hate the fact that it ends up calling java.lang.String.call() in place of a publication name!
publishing {
    publications {
        files.each { f ->
            f (IvyPublication) {
                module f
                artifact (file(f))
            }
        }
    }
}
Is what I am trying to do possible, or should I completely change tack?
[edit] I think I can do this with IvyPublication.loadClass(f), but struggling with the details. Any help much appreciated
Aha! I have found the solution. It's not as exciting as using the class loader, but it does work:
publishing {
    files.each { f ->
        publications.create(f, IvyPublication) {
            module f
            artifact (file(f))
        }
    }
}
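As an aside, the name-based DSL can also be driven from a variable by quoting it, since Groovy allows calling a method whose name is given by a GString. Hedged: this should be equivalent to publications.create:

publishing {
    publications {
        files.each { f ->
            "$f"(IvyPublication) {  // method name supplied dynamically
                module f
                artifact file(f)
            }
        }
    }
}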

Gradle plugin best practices for tasks that depend on extension objects

I would like feedback on the best practices for defining plugin tasks that depend on external state (i.e. defined in the build.gradle that referenced the plugin). I'm using extension objects and closures to defer accessing those settings until they're needed and available. I'm also interested in sharing state between tasks, e.g. configuring the outputs of one task to be the inputs of another.
The code uses "project.afterEvaluate" to define the tasks once the required settings have been configured through the extension object. This seems more complex than it should be. If I move the code out of the "afterEvaluate", compileFlag comes back null rather than the externally configured value. If the code is changed to use the << or doLast syntax instead, it does see the external flag... but then it no longer works with type: Exec and other similarly helpful task types.
I feel that I'm fighting Gradle in some ways, which means I don't understand better how to work well with it. The following is a simplified pseudo-code of what I'm using. This works but I'm looking to see if this can be simplified, or indeed what the best practices are. Also, the exception shouldn't be thrown unless the tasks are being executed.
apply plugin: MyPlugin

class MyPluginExtension {
    String compileFlag = null
}

class MyPlugin implements Plugin<Project> {
    void apply(Project project) {
        project.extensions.create("myPluginConfig", MyPluginExtension)
        project.afterEvaluate {
            // Closure delays getting and checking flag until strictly needed
            def compileFlag = {
                if (project.myPluginConfig.compileFlag == null) {
                    throw new InvalidUserDataException(
                        "Must set compileFlag: myPluginConfig { compileFlag = '-flag' }")
                }
                return project.myPluginConfig.compileFlag
            }
            // Inputs for translateTask
            def javaInputs = {
                project.files(project.fileTree(
                    dir: project.projectDir, includes: ['**/*.java']))
            }
            // This is the output of the first task and input to the second
            def translatedOutputs = {
                project.files(javaInputs().collect { file ->
                    return file.path.replace('src/', 'build/dir/')
                })
            }
            // Translates all java files into 'translatedOutputs'
            project.tasks.create(name: 'translateTask', type: Exec) {
                inputs.files javaInputs()
                outputs.files translatedOutputs()
                executable '/bin/echo'
                inputs.files.each { file ->
                    args file.path
                }
            }
            // Compiles 'translatedOutputs' to binary
            project.tasks.create(name: 'compileTask', type: Exec, dependsOn: 'translateTask') {
                inputs.files translatedOutputs()
                outputs.file project.file(project.buildDir.path + '/compiledBinary')
                executable '/bin/echo'
                args compileFlag()
                translatedOutputs().each { file ->
                    args file.path
                }
            }
        }
    }
}
I'd look at this problem another way. It seems like what you want to put in your extension is really owned by each of your tasks. If you had something that was a "global" plugin configuration option, would it be treated as an input necessarily?
Another way of doing this would have been to use your own SourceSets and wire those into your custom tasks. That's not quite easy enough yet, IMO. We're still pulling together the JVM and native representations of sources.
I'd recommend extracting your Exec tasks as custom tasks with a @TaskAction that does the heavy lifting (even if it just calls project.exec {}). You can then annotate your inputs with @Input, @InputFiles, etc. and your outputs with @OutputFiles, @OutputDirectory, etc. Those annotations will help auto-wire your dependencies and inputs/outputs (I think that's where some of the fighting is coming from).
Another thing you're missing: if the compileFlag affects the final output, you'd want to detect changes to it and force a rebuild (but not a re-translate).
I simplified the body of the plugin class by using the Groovy .with method.
I'm not completely happy with this (I think the translatedFiles could be done differently), but I hope it shows you some of the best practices. I made this a working example (as long as you have a src/something.java) by implementing the translate as a copy/rename and the compile as something that just creates an 'executable' file (the contents are just the list of inputs). I've also left your extension class in place to demonstrate the "global" plugin config. Also take a look at what happens when compileFlag is not set (I wish the error were a little better).
The translateTask isn't going to be incremental (although, I think you could probably figure out a way to do that). So you'd probably need to delete the output directory each time. I wouldn't mix other output into that directory if you want to keep that simple.
HTH
apply plugin: 'base'
apply plugin: MyPlugin

class MyTranslateTask extends DefaultTask {
    @InputFiles FileCollection srcFiles
    @OutputDirectory File translatedDir

    @TaskAction
    public void translate() {
        // println "toolhome is ${project.myPluginConfig.toolHome}"
        // translate java files by renaming them
        project.copy {
            includeEmptyDirs = false
            from(srcFiles)
            into(translatedDir)
            rename '(.+).java', '$1.m'
        }
    }
}

class MyCompileTask extends DefaultTask {
    @Input String compileFlag
    @InputFiles FileCollection translatedFiles
    @OutputDirectory File outputDir

    @TaskAction
    public void compile() {
        // write inputs to the executable file
        project.file("$outputDir/executable") << "${project.myPluginConfig.toolHome} $compileFlag ${translatedFiles.collect { it.path }}"
    }
}

class MyPluginExtension {
    File toolHome = new File("/some/sane/default")
}

class MyPlugin implements Plugin<Project> {
    void apply(Project project) {
        project.with {
            extensions.create("myPluginConfig", MyPluginExtension)
            tasks.create(name: 'translateTask', type: MyTranslateTask) {
                description = "Translates all java files into translatedDir"
                srcFiles = fileTree(dir: projectDir, includes: ['**/*.java'])
                translatedDir = file("${buildDir}/dir")
            }
            tasks.create(name: 'compileTask', type: MyCompileTask) {
                description = "Compiles translated files into outputDir"
                translatedFiles = fileTree(tasks.translateTask.outputs.files.singleFile) {
                    include '**/*.m'
                    builtBy tasks.translateTask
                }
                outputDir = file("${buildDir}/compiledBinary")
            }
        }
    }
}
myPluginConfig {
    toolHome = file("/some/custom/path")
}

compileTask {
    compileFlag = '-flag'
}

Scope of Groovy's metaClass?

I have an application which can run scripts to automate certain tasks. I'd like to use meta programming in these scripts to optimize code size and readability. So instead of:
def res
try {
    res = factory.allocate()
    ... do something with res ...
} finally {
    res?.close()
}
I'd like to
Factory.metaClass.withResource = { c ->
    def res
    try {
        res = delegate.allocate()   // 'delegate' is the Factory instance
        c(res)
    } finally {
        res?.close()
    }
}
so the script writers can write:
factory.withResource { res ->
    ... do something with res ...
}
(and I could do proper error handling, etc).
Now I wonder when and how I can/should implement this. I could attach the manipulation of the meta class in a header which I prepend to every script but I'm worried what would happen if two users ran the script at the same time (concurrent access to the meta class).
What is the scope of the meta class? The compiler? The script environment? The Java VM? The classloader which loaded Groovy?
My reasoning is that if Groovy meta classes have VM scope, then I could run a setup script once during startup.
Metaclasses exist per classloader [citation needed]:
File /tmp/Qwop.groovy:
class Qwop { }
File /tmp/Loader.groovy:
Qwop.metaClass.bar = { }
qwop1 = new Qwop()
assert qwop1.respondsTo('bar')
loader = new GroovyClassLoader()
clazz = loader.parseClass new File("/tmp/Qwop.groovy")
clazz.metaClass.brap = { 'oh my' }
qwop = clazz.newInstance()
assert !qwop.respondsTo('bar')
assert qwop1.respondsTo('bar')
assert qwop.brap() == "oh my"
assert !qwop1.respondsTo('brap')
// here be custom classloaders
new GroovyShell(loader).evaluate new File('/tmp/QwopTest.groovy')
And a script to test the scoped metaclass (/tmp/QwopTest.groovy):
assert !new Qwop().respondsTo('bar')
assert new Qwop().respondsTo('brap')
Execution:
$ groovy Loader.groovy
$
If you have a set of well defined classes you could apply metaprogramming on top of the classes loaded by your classloader, as per the brap method added.
Another option for this sort of thing which is better for a lot of scenarios is to use an extension module.
package demo

class FactoryExtension {
    static withResource(Factory instance, Closure c) {
        def res
        try {
            res = instance.allocate()
            c(res)
        } finally {
            res?.close()
        }
    }
}
Compile that and put it in a jar file which contains a file at META-INF/services/org.codehaus.groovy.runtime.ExtensionModule that contains something like this...
moduleName=factory-extension-module
moduleVersion=1.0
extensionClasses=demo.FactoryExtension
Then in order for someone to use your extension they just need to put that jar file on their CLASSPATH. With all of that in place, a user could do something like this...
factoryInstance.withResource { res ->
... do something with res ...
}
More information on extension modules is available at http://docs.groovy-lang.org/docs/groovy-2.3.6/html/documentation/#_extension_modules.

How to Report Results to Sauce Labs using Geb/Spock?

I want to use the Sauce Labs Java REST API to send Pass/Fail status back to the Sauce Labs dashboard. I am using Geb+Spock, and my Gradle build creates a test results directory where results are output in XML. My problem is that the results XML file doesn't seem to be generated until after the Spock specification's cleanupSpec() exits. This causes my code to report the results of the previous test run, rather than the current one. Clearly not what I want!
Is there some way to get to the results from within cleanupSpec() without relying on the XML? Or a way to get the results to file earlier? Or some alternative that will be much better than either of those?
Some code:
In build.gradle, I specify the testResultsDir. This is where the XML file is written after the Spock specifications exit:
drivers.each { driver ->
    task "${driver}Test"(type: Test) {
        cleanTest
        systemProperty "geb.env", driver
        testResultsDir = file("$buildDir/test-results/${driver}")
        systemProperty "proj.test.resultsDir", testResultsDir
    }
}
Here is the setupSpec() and cleanupSpec() in my LoginSpec class:
class LoginSpec extends GebSpec {
    @Shared def SauceREST client = new SauceREST("redactedName", "redactedKey")
    @Shared def sauceJobID
    @Shared def allSpecsPass = true

    def setupSpec() {
        sauceJobID = driver.getSessionId().toString()
    }

    def cleanupSpec() {
        def String specResultsDir = System.getProperty("proj.test.resultsDir") ?: "./build/test-results"
        def String specResultsFile = this.getClass().getName()
        def String specResultsXML = "${specResultsDir}/TEST-${specResultsFile}.xml"
        def testsuiteResults = new XmlSlurper().parse(new File(specResultsXML))
        // read error and failure counts from the XML
        def errors = testsuiteResults.@errors.text()?.toInteger()
        def failures = testsuiteResults.@failures.text()?.toInteger()
        if ((errors + failures) > 0) { allSpecsPass = false }
        if (allSpecsPass) {
            client.jobPassed(sauceJobID)
        } else {
            client.jobFailed(sauceJobID)
        }
    }
}
The rest of this class contains login specifications that do not interact with SauceLabs. When I read the XML, it turns out that it was written at the end of the previous LoginSpec run. I need a way to get to the values of the current run.
Thanks!
Test reports are generated after a Specification has finished execution, and the generation is performed by the build system, in your case Gradle. Spock has no knowledge of that, so you are unable to get that information from within the test.
You can, on the other hand, quite easily get that information from Gradle. The Test task has two methods that might be of interest to you here: addTestListener() and afterSuite(). The cleaner solution is to use the first method: implement a test listener and put your logic in the listener's afterSuite() (and not in the task configuration). You would probably need to put that listener implementation in buildSrc, as it looks like you have a dependency on SauceREST, and the listener class must be built and compiled before it can be used as an argument to addTestListener() in your project's build.gradle.
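A minimal sketch of that approach (hedged; shown on a single Test task, with the Sauce calls left as comments since the job id and SauceREST client have to come from your own plumbing):

import org.gradle.api.tasks.testing.*

test.addTestListener(new TestListener() {
    void beforeSuite(TestDescriptor suite) {}
    void beforeTest(TestDescriptor testDescriptor) {}
    void afterTest(TestDescriptor testDescriptor, TestResult result) {}
    void afterSuite(TestDescriptor suite, TestResult result) {
        if (suite.parent == null) {  // the root suite aggregates the whole run
            boolean passed = result.resultType == TestResult.ResultType.SUCCESS
            // e.g. passed ? client.jobPassed(jobId) : client.jobFailed(jobId)
        }
    }
})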
Following on from erdi's suggestion, I've created a Sauce Gradle helper library, which provides a Test Listener that parses the test XML output and invokes the Sauce REST API to set the pass/fail status.
The library can be included by adding the following to your build.gradle file:
import com.saucelabs.gradle.SauceListener

buildscript {
    repositories {
        mavenCentral()
        maven {
            url "https://repository-saucelabs.forge.cloudbees.com/release"
        }
    }
    dependencies {
        classpath group: 'com.saucelabs', name: 'saucerest', version: '1.0.2'
        classpath group: 'com.saucelabs', name: 'sauce_java_common', version: '1.0.14'
        classpath group: 'com.saucelabs.gradle', name: 'sauce-gradle-plugin', version: '0.0.1'
    }
}

gradle.addListener(new SauceListener("YOUR_SAUCE_USERNAME", "YOUR_SAUCE_ACCESS_KEY"))
You will also need to output the Selenium session id for each test, so that the SauceListener can associate the Sauce job with the pass/fail status. To do this, write the following line to stdout:
SauceOnDemandSessionID=SELENIUM_SESSION_ID
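In a Geb spec that could be as simple as printing the marker from setupSpec() (a sketch; it assumes the underlying driver is a RemoteWebDriver, as in the question's code, so getSessionId() is available):

def setupSpec() {
    // marker line that SauceListener scans for in stdout
    println "SauceOnDemandSessionID=${driver.sessionId}"
}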

How to Parse multiple classes from groovy file

Is there a way to parse all the classes in a Groovy script?
To parse ONE class right now:
java.lang.Class clazz = groovyClassLoader.parseClass(new File("MainApp.groovy"))
MainApp.groovy:
class MainApp {
    def doIt() {}
}

class OtherMainApp {
    def doTheRest() {}
}
This will return only MainApp.
I would like something like this:
java.lang.Class[] clazz = groovyClassLoader.parseClass(new File("MainApp.groovy"))
where clazz will contain both the MainApp and OtherMainApp classes.
Basically I want to be able to extract all the declared classes in a script.
Because of the nature of the app that I'm building, the groovyc command won't help.
Thanks,
Federico
No can do:
https://issues.apache.org/jira/browse/GROOVY-3793
You could do it yourself, though: parse the file (just count the {} pairs), dump each class out to its own file, and away you go. Ugly? Yes. Painful? Very. Possible? Maybe. Better solution? Not until Groovy fixes the bug.
It's been 12 years since this question was asked and answered, but it arrived at the top of my Google results when I was trying to figure out how to do the same thing.
This ended up working for me:
def loader = new GroovyClassLoader()
loader.parseClass(new File("Company.groovy"))
Class Location = loader.loadClass("tk.vallerance.spacecompanies.models.Location")
Class Event = loader.loadClass("tk.vallerance.spacecompanies.models.Event")
Class Company = loader.loadClass("tk.vallerance.spacecompanies.models.Company")
println Location
println Event
println Company
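If the class names aren't known up front, GroovyClassLoader#getLoadedClasses() may help: parseClass compiles every class in the file into the loader, and on a fresh loader getLoadedClasses() returns exactly those classes (hedged sketch):

def loader = new GroovyClassLoader()
loader.parseClass(new File("MainApp.groovy"))  // compiles every class in the file
Class[] classes = loader.loadedClasses         // expect [MainApp, OtherMainApp]
classes.each { println it.name }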
