Gradle plugin best practices for tasks that depend on extension objects

I would like feedback on best practices for defining plugin tasks that depend on external state (i.e. state defined in the build.gradle that applies the plugin). I'm using extension objects and closures to defer accessing those settings until they're needed and available. I'm also interested in sharing state between tasks, e.g. configuring the outputs of one task to be the inputs of another.
The code uses project.afterEvaluate to define the tasks once the required settings have been configured through the extension object. This seems more complex than it should be. If I move the code out of afterEvaluate, compileFlag comes back null rather than the externally configured value. If the code is changed to use the << or doLast syntax, it does see the external flag... but then it no longer works with type: Exec and other similarly helpful task types.
I feel that I'm fighting Gradle in some ways, which suggests I don't yet understand how to work well with it. The following is simplified pseudo-code of what I'm using. It works, but I'd like to know whether it can be simplified, and what the best practices are. Also, the exception shouldn't be thrown unless the tasks are actually executed.
apply plugin: MyPlugin

class MyPluginExtension {
    String compileFlag = null
}

class MyPlugin implements Plugin<Project> {
    void apply(Project project) {
        project.extensions.create("myPluginConfig", MyPluginExtension)
        project.afterEvaluate {
            // Closure delays getting and checking flag until strictly needed
            def compileFlag = {
                if (project.myPluginConfig.compileFlag == null) {
                    throw new InvalidUserDataException(
                        "Must set compileFlag: myPluginConfig { compileFlag = '-flag' }")
                }
                return project.myPluginConfig.compileFlag
            }

            // Inputs for translateTask
            def javaInputs = {
                project.files(project.fileTree(
                    dir: project.projectDir, includes: ['**/*.java']))
            }

            // This is the output of the first task and input to the second
            def translatedOutputs = {
                project.files(javaInputs().collect { file ->
                    return file.path.replace('src/', 'build/dir/')
                })
            }

            // Translates all java files into 'translatedOutputs'
            project.tasks.create(name: 'translateTask', type: Exec) {
                inputs.files javaInputs()
                outputs.files translatedOutputs()
                executable '/bin/echo'
                inputs.files.each { file ->
                    args file.path
                }
            }

            // Compiles 'translatedOutputs' to binary
            project.tasks.create(name: 'compileTask', type: Exec, dependsOn: 'translateTask') {
                inputs.files translatedOutputs()
                outputs.file project.file(project.buildDir.path + '/compiledBinary')
                executable '/bin/echo'
                args compileFlag()
                translatedOutputs().each { file ->
                    args file.path
                }
            }
        }
    }
}

I'd look at this problem another way. It seems like what you want to put in your extension is really owned by each of your tasks. If you had something that was truly a "global" plugin configuration option, would it necessarily be treated as a task input?
Another way of doing this would have been to use your own SourceSets and wire those into your custom tasks. That's not quite easy enough yet, IMO. We're still pulling together the JVM and native representations of sources.
I'd recommend extracting your Exec tasks as custom tasks with a @TaskAction that does the heavy lifting (even if it just calls project.exec {}). You can then annotate your inputs with @Input, @InputFiles, etc. and your outputs with @OutputFiles, @OutputDirectory, etc. Those annotations will help auto-wire your dependencies and inputs/outputs (I think that's where some of the fighting is coming from).
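For instance, a minimal sketch of that extraction (the task and property names here are illustrative, not from the original build):

class MyExecTask extends DefaultTask {
    // annotated inputs drive up-to-date checks and auto-wiring
    @InputFiles FileCollection sources

    @TaskAction
    void run() {
        // the action can still shell out, just as an Exec task would
        project.exec {
            executable '/bin/echo'
            args sources.collect { it.path }
        }
    }
}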
Another thing that you're missing is that if the compileFlag affects the final output, you'd want to detect changes to it and force a rebuild (but not a re-translate).
I simplified the body of the plugin class by using the Groovy .with method.
I'm not completely happy with this (I think the translatedFiles could be done differently), but I hope it shows you some of the best practices. I made this a working example (as long as you have a src/something.java) by implementing the translate step as a copy/rename and the compile step as something that just creates an 'executable' file (its contents are just the list of inputs). I've also left your extension class in place to demonstrate the "global" plugin config. Also take a look at what happens when compileFlag is not set (I wish the error were a little better).
The translateTask isn't going to be incremental (although I think you could probably figure out a way to do that), so you'd probably need to delete the output directory each time. I wouldn't mix other output into that directory if you want to keep that simple.
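One way to do that (a sketch, using the property names from the example below) is to clear the directory at the start of the task action:

@TaskAction
public void translate() {
    project.delete(translatedDir)   // start from a clean slate each run
    project.copy {
        includeEmptyDirs = false
        from(srcFiles)
        into(translatedDir)
        rename '(.+).java', '$1.m'
    }
}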
HTH
apply plugin: 'base'
apply plugin: MyPlugin

class MyTranslateTask extends DefaultTask {
    @InputFiles FileCollection srcFiles
    @OutputDirectory File translatedDir

    @TaskAction
    public void translate() {
        // println "toolhome is ${project.myPluginConfig.toolHome}"
        // translate java files by renaming them
        project.copy {
            includeEmptyDirs = false
            from(srcFiles)
            into(translatedDir)
            rename '(.+).java', '$1.m'
        }
    }
}

class MyCompileTask extends DefaultTask {
    @Input String compileFlag
    @InputFiles FileCollection translatedFiles
    @OutputDirectory File outputDir

    @TaskAction
    public void compile() {
        // write inputs to the executable file
        project.file("$outputDir/executable") << "${project.myPluginConfig.toolHome} $compileFlag ${translatedFiles.collect { it.path }}"
    }
}

class MyPluginExtension {
    File toolHome = new File("/some/sane/default")
}

class MyPlugin implements Plugin<Project> {
    void apply(Project project) {
        project.with {
            extensions.create("myPluginConfig", MyPluginExtension)
            tasks.create(name: 'translateTask', type: MyTranslateTask) {
                description = "Translates all java files into translatedDir"
                srcFiles = fileTree(dir: projectDir, includes: [ '**/*.java' ])
                translatedDir = file("${buildDir}/dir")
            }
            tasks.create(name: 'compileTask', type: MyCompileTask) {
                description = "Compiles translated files into outputDir"
                translatedFiles = fileTree(tasks.translateTask.outputs.files.singleFile) {
                    include '**/*.m'
                    builtBy tasks.translateTask
                }
                outputDir = file("${buildDir}/compiledBinary")
            }
        }
    }
}

myPluginConfig {
    toolHome = file("/some/custom/path")
}

compileTask {
    compileFlag = '-flag'
}

Related

File or module level 'feature' possible?

Some optimizations/algorithms make code considerably less readable, so it's useful to keep the ability to disable the complex-and-unwieldy functionality within a file/module, so that any errors introduced when modifying this code can be quickly tested against the simple code.
Currently, using const USE_SOME_FEATURE: bool = true; seems a reasonable way, but it makes the code read a little strangely, since USE_SOME_FEATURE is being used like an ifdef in C.
For instance, clippy wants you to write:
if foo {
    { ..other code.. }
} else {
    // final case
    if USE_SOME_FEATURE {
        { ..fancy_code.. }
    } else {
        { ..simple_code.. }
    }
}
As:
if foo {
    { ..other code.. }
} else if USE_SOME_FEATURE {
    // final case
    { ..fancy_code.. }
} else {
    // final case
    { ..simple_code.. }
}
Which IMHO hurts readability, and can be ignored - but is caused by using a boolean where a feature might make more sense.
Is there a way to expose a feature within a file without having it listed in the crate? (This is only for internal debugging and testing changes to code.)
You can use a build script to create new cfg conditions. Use println!("cargo:rustc-cfg=whatever") in the build script, and then you can use #[cfg(whatever)] on your functions and statements.
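For example, a minimal sketch (the cfg name fancy_algo is just illustrative):

// build.rs
fn main() {
    // Emit a custom cfg flag; gate this on an env var if you want to
    // toggle it without editing the build script.
    println!("cargo:rustc-cfg=fancy_algo");
}

// src/lib.rs (or wherever the gated code lives)
#[cfg(fancy_algo)]
fn process() {
    // ..fancy_code..
}

#[cfg(not(fancy_algo))]
fn process() {
    // ..simple_code..
}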

How to extend the behavior of a Gradle task for a new task type?

I would like to set a few things for a few test tasks. More specifically, I would like to add a few environment variables and a few system properties, maybe a few other things such as "dependencies" or "workingDir". With the regular Test task I can do this,
task test1(type: Test, dependsOn: [testPrep, testPrep1]) {
    workingDir testWorkingPath
    systemProperty 'property', 'abs'
    environment.find { it.key ==~ /(?i)PATH/ }.value += (System.properties['path.separator'] + myLibPath)
    environment.LD_LIBRARY_PATH = "/usr/lib64:/lib64:${myLibPath}:" + environment.LD_LIBRARY_PATH
}

task test2(type: Test, dependsOn: [testPrep]) {
    workingDir testWorkingPath
    systemProperty 'property', 'abs'
    environment.find { it.key ==~ /(?i)PATH/ }.value += (System.properties['path.separator'] + myLibPath)
    environment.LD_LIBRARY_PATH = "/usr/lib64:/lib64:${myLibPath}:" + environment.LD_LIBRARY_PATH
    systemProperty 'newProperty', 'fdsjfkd'
}
It would be nice to have a new task type MyTestType extending the regular Test task type, where the common configuration is defined:
task test1(type: MyTestType) {
    dependsOn testPrep1
}
task test2(type: MyTestType) {
    systemProperty 'newProperty', 'fdsjfkd'
}
What would be the best way to do this? It seems that the execute() method is final and cannot be overridden. It looks like I would need to do something like doFirst to set those properties. Should I add all the extra values in the constructor? Is there any other hook I can use? Thanks.
In general you can extend the Test task and implement your customizations:
task test1(type: MyTestType) {
}
task test2(type: MyTestType) {
    systemProperty 'newProperty', 'fdsjfkd'
}

class MyTestType extends Test {
    public MyTestType() {
        systemProperty 'property', 'abs'
    }
}
Alternatively you can configure all tasks of type Test with less boilerplate:
// will apply to all tasks of type Test,
// regardless of whether the task was created before or after this snippet
tasks.withType(Test) {
    systemProperty 'newProperty', 'fdsjfkd'
}
It is also possible to build on a particular superclass setting. Say, for example, you want to centralize the environment.find block but allow the library path to be overridden per task, like this:
task test1(type: MyTestType) {
}
task test2(type: MyTestType) {
    libPath = '/foo/bar'
}
You could do that by overriding the configure method:
class MyTestType extends Test {
    @Input String libPath

    @Override
    public Task configure(Closure configureClosure) {
        return super.configure(configureClosure >> {
            environment.find { it.key ==~ /(?i)PATH/ }.value += (System.properties['path.separator'] + (libPath ?: myLibPath))
        })
    }
}
Here we use the closure composition operator >> to combine the passed-in closure with our overridden behavior. The user-specified configureClosure runs first, possibly setting the libPath property, and then our environment.find block runs afterwards. This can also be combined with soft defaults in the constructor, as in Rene Groeschke's answer.
Note that this particular use case might break if you configure the task more than once, since the environment.find statement transforms existing state instead of replacing it.
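If repeated configuration is a concern, an alternative (an untested sketch) is to defer the environment tweak to execution time with doFirst, so it runs exactly once per build no matter how many times the task is configured:

class MyTestType extends Test {
    @Input String libPath

    public MyTestType() {
        // deferred until just before execution, after all configuration has run
        doFirst {
            environment.find { it.key ==~ /(?i)PATH/ }.value +=
                (System.properties['path.separator'] + libPath)
        }
    }
}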
You can also do it the following way:
task TestExt {
    Test {
        systemProperty 'newProperty', 'fdsjfkd'
    }
}
Note: this just adds to the configure method of Test.
source: https://docs.gradle.org/current/dsl/org.gradle.api.Task.html#N18D18

Scope of Groovy's metaClass?

I have an application which can run scripts to automate certain tasks. I'd like to use metaprogramming in these scripts to optimize code size and readability. So instead of:
def res
try {
    res = factory.allocate()
    ... do something with res ...
} finally {
    res?.close()
}
I'd like to
Factory.metaClass.withResource = { c ->
    def res
    try {
        res = delegate.allocate()   // 'delegate' is the Factory instance the method was called on
        c(res)
    } finally {
        res?.close()
    }
}
so the script writers can write:
factory.withResource { res ->
    ... do something with res ...
}
(and I could do proper error handling, etc).
Now I wonder when and how I can/should implement this. I could attach the manipulation of the meta class in a header which I prepend to every script, but I'm worried about what would happen if two users ran the script at the same time (concurrent access to the meta class).
What is the scope of the meta class? The compiler? The script environment? The Java VM? The classloader which loaded Groovy?
My reasoning is that if Groovy meta classes have VM scope, then I could run a setup script once during startup.
Metaclasses exist per classloader [citation needed]:
File /tmp/Qwop.groovy:
class Qwop { }
File /tmp/Loader.groovy:
Qwop.metaClass.bar = { }
qwop1 = new Qwop()
assert qwop1.respondsTo('bar')
loader = new GroovyClassLoader()
clazz = loader.parseClass new File("/tmp/Qwop.groovy")
clazz.metaClass.brap = { 'oh my' }
qwop = clazz.newInstance()
assert !qwop.respondsTo('bar')
assert qwop1.respondsTo('bar')
assert qwop.brap() == "oh my"
assert !qwop1.respondsTo('brap')
// here be custom classloaders
new GroovyShell(loader).evaluate new File('/tmp/QwopTest.groovy')
And a script to test the scoped metaclass (/tmp/QwopTest.groovy):
assert !new Qwop().respondsTo('bar')
assert new Qwop().respondsTo('brap')
Execution:
$ groovy Loader.groovy
$
If you have a set of well-defined classes, you could apply metaprogramming on top of the classes loaded by your classloader, as per the brap method added.
Another option for this sort of thing, which is better for a lot of scenarios, is to use an extension module.
package demo

class FactoryExtension {
    static withResource(Factory instance, Closure c) {
        def res
        try {
            res = instance.allocate()
            c(res)
        } finally {
            res?.close()
        }
    }
}
Compile that and put it in a jar file which contains a file at META-INF/services/org.codehaus.groovy.runtime.ExtensionModule that contains something like this...
moduleName=factory-extension-module
moduleVersion=1.0
extensionClasses=demo.FactoryExtension
Then in order for someone to use your extension they just need to put that jar file on their CLASSPATH. With all of that in place, a user could do something like this...
factoryInstance.withResource { res ->
    ... do something with res ...
}
More information on extension modules is available at http://docs.groovy-lang.org/docs/groovy-2.3.6/html/documentation/#_extension_modules.

publish artifact overwrite other artifact in Gradle

I am experimenting with Gradle to build a few jars. Rather than maintain a list of the classes that hold EJBs so that I can deploy them separately, I thought it might be neat to scan the classes when making the jar.
Rather than load the classes and use reflection to get the annotations, I thought it may be simpler to scan the classes with ASM, hence the chunky ClassReader in one of the tasks.
I don't think this is the issue, so it can be ignored. Basically, I have two tasks that I use to define the contents of the jars; both report that different content is going into them via the eachFile printout. However, when I look in the publish repository location, both files and their associated sha1s are identical.
Either Gradle is broken or, more likely, I've done something crazy but can't see what it is. Can anyone help?
By the way, if I disable the publish of either of the jar files, the one that does get created is correct, so I think it's something wrong with the publish rather than the jarring up. But I could be wrong.
// ASM is used to interpret the class files; this avoids having to load all classes in the VM and use reflection
import org.objectweb.asm.*

task ejbJar(type: Jar) {
    //outputs.upToDateWhen { false }
    from "${project.buildDir}/classes/main"
    eachFile { println "EJB server: ${name}" }
    include getEjbClassFiles(project.buildDir)
}

task clientEjbJar(type: Jar) {
    //outputs.upToDateWhen { false }
    from "${project.buildDir}/classes/main/com/company/core/versioner"
    eachFile { println "Client EJB ${name}" }
    include '**/*'
}

artifacts {
    archives clientEjbJar
    archives ejbJar
}

String[] getEjbClassFiles(base) {
    def includedFiles = []
    def baseDir = project.file("${base}/classes/main")
    def parentPath = baseDir.toPath()
    if (baseDir.isDirectory()) {
        baseDir.eachFileRecurse(groovy.io.FileType.FILES) { file ->
            if (file.name.endsWith('.class')) {
                //get hold of annotations in there --- org.objectweb.asm.Opcodes.ASM4
                new ClassReader(file.bytes).accept(
                    new ClassVisitor(Opcodes.ASM4) {
                        public AnnotationVisitor visitAnnotation(String desc, boolean visible) {
                            if (desc.equals("Ljavax/ejb/Stateless;") ||
                                desc.equals("Ljavax/ejb/Stateful;")) {
                                includedFiles += parentPath.relativize(file.toPath())
                            }
                            return null //no interest in actually visiting the annotation values
                        }
                    },
                    ClassReader.SKIP_DEBUG | ClassReader.EXPAND_FRAMES | ClassReader.SKIP_FRAMES | ClassReader.SKIP_CODE
                )
            }
        }
    }
    return includedFiles
}

publishing {
    publications {
        mypub(IvyPublication) {
            artifact(ejbJar) {
                name 'ejb'
            }
            artifact(clientEjbJar) {
                name 'client-ejb'
            }
        }
    }
    repositories {
        ivy {
            name 'personal'
            url "${ant['developer.repository']}/"
            layout 'pattern', {
                artifact "[organisation]/[module]/[artifact]/[revision]/[type]/[artifact]-[revision].[ext]"
                ivy "[organisation]/[module]/[type]/[revision]/[type]/[type]-[revision].[ext]"
            }
        }
    }
}
I did break the thing down into a simpler form as I thought it may be a Gradle bug.
The simplified form was:
apply plugin: 'java'
apply plugin: 'ivy-publish'

task bigJar(type: Jar) {
    from "${rootDir}/src/main/resources"
    include '**/*'
}

task smallJar(type: Jar) {
    from "${rootDir}/src/main/resources/A/B"
    include '**/*'
}

group 'ICantBeEmpty'

artifacts {
    archives bigJar
    archives smallJar
}

publishing {
    publications {
        mypub(IvyPublication) {
            artifact(bigJar) { name 'biggie' }
            artifact(smallJar) { name 'smallie' }
        }
    }
    repositories {
        ivy {
            name 'personal'
            url "c:/temp/gradletest"
            layout 'pattern', {
                artifact "[organisation]/[module]/[artifact]/[revision]/[type]/[artifact]-[revision].[ext]"
                ivy "[organisation]/[module]/[type]/[revision]/[type]/[type]-[revision].[ext]"
            }
        }
    }
}
This results in two files: c:/temp/gradletest/ICantBeEmpty/report-bug/biggie/unspecified/biggie-unspecified.jar and c:/temp/gradletest/ICantBeEmpty/report-bug/smallie/unspecified/smallie-unspecified.jar.
Both of these files are identical; however, I think I know why - see my later answer.
Whilst looking at some configurations I noticed some odd behaviour that led me to a resolution of this issue, and it is a Gradle bug.
In my build I had a scratch task doing
configurations.archives.artifacts.each { println it }
This gave me 5 different lines of output; however, changing it to this
configurations.archives.artifacts.each { println it.file }
produced the same filename 5 times.
It turns out this is related to my issue: although the artifacts are there as separate entities, the name used to uniquely identify them was the same, so the same file was always chosen during a publish. The name of the artifacts is given by ${baseName}-${appendix}-${version}-${classifier}.${extension} by default in the java plugin. This means that if neither appendix nor classifier is specified, then artifacts from different Jar tasks end up with the same name.
I tested this using the above sample code by adding an appendix to each task:
task bigJar(type: Jar) {
    appendix = 'big'
    from "${rootDir}/src/main/resources"
    include '**/*'
}

task smallJar(type: Jar) {
    appendix = 'small'
    from "${rootDir}/src/main/resources/A/B"
    include '**/*'
}
Using this rather than the code from the question produces 2 different jars.
It's not a complete answer but it's a good enough workaround: if I add a new publication definition, I can publish the artifacts that I want to the location that I want. The only downside is that it creates another Gradle task, which isn't ideal.
publications {
    mypub(IvyPublication) {
        artifact(ejbJar) {
            name 'ejb'
        }
    }
    newpub(IvyPublication) {
        artifact(clientEjbJar) {
            name 'client-ejb'
        }
    }
}
The above answer works in the short term; however, it does reveal yet another shortcoming in the Gradle world.
Not sure Gradle is all it could be at the moment, and so far no one has answered my questions so maybe it's not that actively developed!!
I'm no expert in this part of Gradle, but the functionality you are using is marked as "incubating": you are using the new publishing mechanism, which may or may not be complete. Perhaps you should use the old way of doing things. You also seem to be mixing the two approaches by using the artifacts closure.
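For reference, the old-style route would look something like this (a sketch only, reusing the repository settings from the question):

apply plugin: 'java'

// old-style publishing: uploadArchives publishes the 'archives' configuration,
// i.e. everything added via artifacts { archives ... }
uploadArchives {
    repositories {
        ivy {
            url "${ant['developer.repository']}/"
        }
    }
}

Note that this path would presumably hit the same naming collision unless the jars differ in appendix or classifier.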

Asynchronous operation of FileTree files?

Is there a way to easily process the files of a FileTree in parallel in Gradle tasks? I basically need to wait for the processing of all files to finish, much like what you can do with GPars, but how do I do this in Gradle with a FileTree?
task compressJs(dependsOn: [copyJsToBuild]) << {
    println 'Minifying JS'
    fileTree {
        from 'build/js'
        include '**/*.js'
    }.visit { element ->
        if (element.file.isFile()) {
            println "Minifying ${element.relativePath}"
            ant.java(jar: "lib/yuicompressor-2.4.6.jar", fork: true) {
                arg(value: "build/js/${element.relativePath}")
                arg(value: "-o")
                arg(value: "build/js/${element.relativePath}")
            }
        }
    }
}
It would be lovely if I could do something like .visit{}.async(wait:true), but my googling turned up nothing. Is there a way I can easily make this multi-threaded? The processing of one element has no effect on the processing of any other element.
Before thinking about going multi-threaded, I'd try the following:
1. Run everything in the same JVM. Forking a new JVM for each input file is very inefficient.
2. Make the compressJs task incremental so that it only executes if some input file has changed since the previous run.
3. Run the minifier directly rather than via Ant (saves creation of a new class loader for each input file; not sure if it matters).
If this still leaves you unhappy with the performance, and you can't use a more performant minifier, you can still try to go multi-threaded. Gradle won't help you there (yet), but libraries like GPars or the Java Fork/Join framework will.
The GPars solution. Note that the compress() function could be modified to properly accept source dir/target dir/etc, but since all my names are consistent, I'm just using the one argument for now. I was able to cut my build time from 7.3s to 5.4s with only 3 files being minified. I've seen build times spiral out of control, so I'm always wary of performance with this kind of behavior.
import groovyx.gpars.GParsPool

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'org.codehaus.gpars:gpars:0.12'
    }
}

def compress(String type) {
    def elementsToMinify = []
    fileTree {
        from type
        include "**/*.$type"
    }.visit { element ->
        if (element.file.isFile()) {
            elementsToMinify << element
        }
    }
    GParsPool.withPool(8) {
        elementsToMinify.eachParallel { element ->
            println "Minifying ${element.relativePath}"
            def outputFileLocation = "build/$type/${element.relativePath}"
            new File(outputFileLocation).parentFile.mkdirs()
            ant.java(jar: "lib/yuicompressor-2.4.6.jar", fork: true) {
                arg(value: "$type/${element.relativePath}")
                arg(value: "-o")
                arg(value: outputFileLocation)
            }
        }
    }
}

task compressJs {
    inputs.dir new File('js')
    outputs.dir new File('build/js')
    doLast {
        compress('js')
    }
}

task compressCss {
    inputs.dir new File('css')
    outputs.dir new File('build/css')
    doLast {
        compress('css')
    }
}
