Gradle - Can I manage task dependencies using convention properties?

I'm using Gradle as a build system for my project.
What I want is to make task A depend on task B if a given property is set to "true". Is this doable, and if the answer is yes, how can I do that?
Currently, I'm using conventionMapping but this doesn't seem to work. My code looks like this:
MyTask.conventionMapping.propertyName = { MyPluginConvention.propertyName }
if (MyTask.propertyName.equals("true")) {
MyTask.dependsOn ...
}
Thanks in advance,
Marin

Instead of working with task/convention classes, you'll have to work with their instances. Also, you'll have to defer the decision whether to add a task dependency. For example:
def myTask = project.tasks.create("myTask", MyTask)
def otherTask = ...
def myConvention = new MyConvention()
...
myTask.conventionMapping.propertyName = { myConvention.propertyName }
// defer decision whether to depend on 'otherTask'
myTask.dependsOn { myTask.propertyName == "true" ? otherTask : [] }
If there's no task variable in scope, you can also reference existing tasks via project.myTask or project.tasks["myTask"].
PS: Convention objects have been largely replaced by extension objects.
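For illustration, here's a minimal sketch of the extension-based equivalent; the extension name myExt and class MyExtension are made up for this example:
class MyExtension {
String propertyName
}
def ext = project.extensions.create("myExt", MyExtension)
// same deferred decision as above, but reading the property from the extension
myTask.dependsOn { ext.propertyName == "true" ? otherTask : [] }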

Related

What is the difference between passing a Map and using `body.resolveStrategy = Closure.DELEGATE_FIRST`

These two examples of encapsulated pipelines get their pipelineParams via two different methods; however, it is not readily clear why one is preferable to the other.
What are the ramifications of using
def call(body) {
// evaluate the body block, and collect configuration into the object
def pipelineParams= [:]
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = pipelineParams
body()
pipeline {
echo pipelineParams.name
}
}
vs using
def call(Map pipelineParams) {
pipeline {
echo pipelineParams.name
}
}
Example code from https://jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/
The difference is that in the first case, using a pipeline looks like declarative configuration. It's a so-called builder strategy in terms of DSL:
myDeliveryPipeline {
branch = 'master'
scmUrl = 'ssh://git@myScmServer.com/repos/myRepo.git'
...
}
Whereas in the second case, applying a pipeline looks like imperative code, i.e. it's a regular function call:
myDeliveryPipeline(branch: 'master', scmUrl: 'ssh://git@myScmServer.com/repos/myRepo.git', ...)
There is also an explanation in the official Jenkins doc:
There is also a “builder pattern” trick using Groovy’s Closure.DELEGATE_FIRST, which permits Jenkinsfile to look slightly more like a configuration file than a program, but this is more complex and error-prone and is not recommended.
Personally, I wouldn't go so far as to recommend against the DSL approach. The doc advises against it only because it's a bit more complex and can be error-prone.
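To see the mechanics outside Jenkins, here is a standalone Groovy sketch of the builder trick; the function name and configuration keys just mirror the example above:
def myDeliveryPipeline(Closure body) {
// collect the assignments made inside the block into a plain map
def config = [:]
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = config
body()
// the collected map then drives the actual work
println "Building ${config.branch} from ${config.scmUrl}"
}
myDeliveryPipeline {
branch = 'master'
scmUrl = 'ssh://git@myScmServer.com/repos/myRepo.git'
}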

How to reuse a code block that describes similar Ant build logic in Groovy?

If we have build logic implemented with Groovy AntBuilder, like the code below:
ant.someTask(attr1:value1, attr2:value2) {
configuration1(param1:args1, param2:args2){
similarStructure(additionalArgs:aaa){
setting1(param5:value5) {
//...blah blah blah
}
//further settings, which may or may not be the same as in similarStructure below
}
}
configuration2(param3:args3, param4:args4){
similarStructure(additionalArgs:aaa){
setting1(param5:value5) {
//...blah blah blah
}
//further settings, which may or may not be the same as in similarStructure below
}
}
}
Are there any ways to reuse the Groovy AntBuilder code block, so that the statement in configuration2 can be written more concisely?
I've tried to predefine closures and inject them into both configurations,
but it fails with a "property not found" exception while initializing the closure.
I'll provide two answers so you can select which one is more appropriate for your use case and test it. The solutions depend on the level at which you want to share the config.
If you want a more general purpose solution that allows you to share the whole of the similarStructure block, you need to perform some more advanced work. The trick is to ensure that the delegate of the shared configuration closure is set appropriately:
def sharedConfig = {
similarStructure(additionalArgs:aaa) {
setting1(param5:value5) {
//...blah blah blah
}
}
}
ant.someTask(attr1: value1, attr2: value2) {
configuration1(param1:args1, param2:args2){
applySharedConfig(delegate, sharedConfig)
}
configuration2(param3:args3, param4:args4){
applySharedConfig(delegate, sharedConfig)
}
}
void applySharedConfig(builder, config) {
// clone the shared closure so it can be reconfigured and reused safely
def c = config.clone()
c.resolveStrategy = Closure.DELEGATE_FIRST
c.delegate = builder
c.call()
}
Although the applySharedConfig() method seems ugly, it can be used to share multiple configurations across different tasks.
One thing to bear in mind with this solution is that the resolveStrategy of the closure can be very important. I think both DELEGATE_FIRST and OWNER_FIRST (the default) will work fine here. If you run into what appear to be name resolution problems (missing methods or properties) you should try switching the resolution strategy.
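If you want to see the difference in isolation, here is a toy sketch (the class names are made up) showing how the same call resolves under each strategy:
class MyOwner {
String who() { 'owner' }
Closure probe = { who() }
}
class MyDelegate {
String who() { 'delegate' }
}
def c = new MyOwner().probe
c.delegate = new MyDelegate()
// OWNER_FIRST (the default) finds the owner's method first
c.resolveStrategy = Closure.OWNER_FIRST
assert c() == 'owner'
// DELEGATE_FIRST prefers the delegate's method
c.resolveStrategy = Closure.DELEGATE_FIRST
assert c() == 'delegate'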
I'll provide two answers so you can select which one is more appropriate for your use case and test it. The solutions depend on the level at which you want to share the config.
If you are happy to simply share the closure that goes with similarStructure, then the solution is straightforward:
def sharedConfig = {
setting1(param5:value5) {
//...blah blah blah
}
}
ant.someTask(attr1: value1, attr2: value2) {
configuration1(param1:args1, param2:args2) {
similarStructure(additionalArgs:aaa, sharedConfig)
}
configuration2(param3:args3, param4:args4) {
similarStructure(additionalArgs:aaa, sharedConfig)
}
}
The similarStructure method should ensure that the sharedConfig closure is properly configured before it is called. I haven't tested this, so I'm not entirely sure it works. The disadvantage of this approach is that you have to duplicate the similarStructure call with its arguments.

How to extend the behavior of a Gradle task for a new task type?

I would like to set a few things for a few test tasks. More specifically, I would like to add a few environment variables and a few system properties, maybe a few other things such as "dependencies" or "workingDir". With the regular Test task I can do this,
task test1(type:Test, dependsOn:[testPrep,testPrep1]){
workingDir testWorkingPath
systemProperty 'property','abs'
environment.find { it.key ==~ /(?i)PATH/ }.value += (System.properties['path.separator'] + myLibPath)
environment.LD_LIBRARY_PATH = "/usr/lib64:/lib64:${myLibPath}:" + environment.LD_LIBRARY_PATH
}
task test2(type:Test, dependsOn:[testPrep]){
workingDir testWorkingPath
systemProperty 'property','abs'
environment.find { it.key ==~ /(?i)PATH/ }.value += (System.properties['path.separator'] + myLibPath)
environment.LD_LIBRARY_PATH = "/usr/lib64:/lib64:${myLibPath}:" + environment.LD_LIBRARY_PATH
systemProperty 'newProperty','fdsjfkd'
}
It would be nice to have a new task type MyTestType that extends the regular Test task type and holds the common configuration.
task test1(type:MyTestType){
dependsOn testPrep1
}
task test2(type:MyTestType){
systemProperty 'newProperty','fdsjfkd'
}
What would be the best way to do this? It seems that the execute() method is final and cannot be overridden. I would need to do something like doFirst to set those properties. Should I add all the extra values in the constructor? Is there any other hook I can use? Thanks.
In general you can extend the Test task type and implement your customizations:
task test1(type:MyTestType){
}
task test2(type:MyTestType){
systemProperty 'newProperty','fdsjfkd'
}
class MyTestType extends Test {
public MyTestType(){
systemProperty 'property','abs'
}
}
Alternatively you can configure all tasks of type Test with less boilerplate:
// will apply to all tasks of type Test,
// regardless of whether the task was created before or after this snippet
tasks.withType(Test) {
systemProperty 'newProperty','fdsjfkd'
}
It is also possible to put the behavior for a particular setting in the superclass. Say, for example, you want to centralize the environment.find block but allow setting libPath per task, like this:
task test1(type: MyTestType) {
}
task test2(type: MyTestType) {
libPath = '/foo/bar'
}
You could do that by overriding the configure method:
class MyTestType extends Test {
@Input String libPath
@Override
public Task configure(Closure configureClosure) {
return super.configure(configureClosure >> {
environment.find { it.key ==~ /(?i)PATH/ }.value += (System.properties['path.separator'] + (libPath ?: myLibPath))
})
}
}
Here we use the closure composition operator >> to combine the passed-in closure with our overridden behavior. The user-specified configureClosure will run first, possibly setting the libPath property, and the environment.find block will run after it. This can also be combined with soft defaults in the constructor, as in Rene Groeschke's answer.
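As a quick standalone illustration of how >> sequences two closures:
def userConfig = { println 'runs first (user configuration)' }
def extraConfig = { println 'runs second (superclass behavior)' }
def combined = userConfig >> extraConfig
combined()
// prints:
// runs first (user configuration)
// runs second (superclass behavior)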
Note that this particular use case might break if you configure the task more than once, since the environment.find statement transforms existing state instead of replacing it.
You can also do it the following way:
task TestExt {
Test {
systemProperty 'newProperty','fdsjfkd'
}
}
Note: this just adds to the configure method of Test.
source: https://docs.gradle.org/current/dsl/org.gradle.api.Task.html#N18D18

Scope of Groovy's metaClass?

I have an application which can run scripts to automate certain tasks. I'd like to use meta programming in these scripts to optimize code size and readability. So instead of:
def res
try {
res = factory.allocate();
... do something with res ...
} finally {
res.close()
}
I'd like to
Factory.metaClass.withResource = { c ->
def res
try {
res = delegate.allocate()
c(res)
} finally {
res?.close()
}
}
so the script writers can write:
factory.withResource { res ->
... do something with res ...
}
(and I could do proper error handling, etc).
Now I wonder when and how I can/should implement this. I could attach the manipulation of the meta class in a header which I prepend to every script, but I'm worried about what would happen if two users ran the script at the same time (concurrent access to the meta class).
What is the scope of the meta class? The compiler? The script environment? The Java VM? The classloader which loaded Groovy?
My reasoning is that if Groovy meta classes have VM scope, then I could run a setup script once during startup.
Metaclasses exist per classloader [citation needed]:
File /tmp/Qwop.groovy:
class Qwop { }
File /tmp/Loader.groovy:
Qwop.metaClass.bar = { }
qwop1 = new Qwop()
assert qwop1.respondsTo('bar')
loader = new GroovyClassLoader()
clazz = loader.parseClass new File("/tmp/Qwop.groovy")
clazz.metaClass.brap = { 'oh my' }
qwop = clazz.newInstance()
assert !qwop.respondsTo('bar')
assert qwop1.respondsTo('bar')
assert qwop.brap() == "oh my"
assert !qwop1.respondsTo('brap')
// here be custom classloaders
new GroovyShell(loader).evaluate new File('/tmp/QwopTest.groovy')
And a script to test the scoped metaclass (/tmp/QwopTest.groovy):
assert !new Qwop().respondsTo('bar')
assert new Qwop().respondsTo('brap')
Execution:
$ groovy Loader.groovy
$
If you have a set of well defined classes you could apply metaprogramming on top of the classes loaded by your classloader, as per the brap method added.
Another option for this sort of thing, which is better for a lot of scenarios, is to use an extension module.
package demo
class FactoryExtension {
static withResource(Factory instance, Closure c) {
def res
try {
res = instance.allocate()
c(res)
} finally {
res?.close()
}
}
}
Compile that and put it in a jar file which contains a file at META-INF/services/org.codehaus.groovy.runtime.ExtensionModule that contains something like this...
moduleName=factory-extension-module
moduleVersion=1.0
extensionClasses=demo.FactoryExtension
Then in order for someone to use your extension they just need to put that jar file on their CLASSPATH. With all of that in place, a user could do something like this...
factoryInstance.withResource { res ->
... do something with res ...
}
More information on extension modules is available at http://docs.groovy-lang.org/docs/groovy-2.3.6/html/documentation/#_extension_modules.

The Cucumber JVM's (Groovy) "World (Hooks)" object mixin and Intellij navigation

I am developing cucumber scenarios using Cucumber JVM (Groovy) in IntelliJ. Things are much better than doing the same in Eclipse, I must say.
I'd like to resolve one small problem to make things even better for my team. But since I am new to IntelliJ, I need help with the following please:
When I am in a step def file (groovy), IntelliJ can't seem to see variables and methods defined in the cucumber "World" object, so I don't get IDE support (auto-complete, etc.) for those, which is a bit annoying. How can I fix that?
IntelliJ IDEA expects an object inside the World closure. So, you could define the following block in EACH step definitions file:
World {
new MockedTestWorld()
}
MockedTestWorld.groovy:
class MockedTestWorld implements TestWorld {
@Lazy
@Delegate
@SuppressWarnings("GroovyAssignabilityCheck")
private TestWorld delegate = {
throw new UnsupportedOperationException("" +
"Test world mock is used ONLY for syntax highlighting in IDE" +
" and must be overridden by concrete 'InitSteps.groovy' implementation.")
}()
}
To clean up duplicated world definitions, we use last-glue initializers and a bit of copy-paste:
real/InitSteps.groovy
def world
GroovyBackend.instance.@worldClosures.clear()
GroovyBackend.instance.registerWorld {
return world ?: (world = new RealTestWorld1()) // Actual initialization is much longer
}
legacy/InitSteps.groovy
def world
GroovyBackend.instance.@worldClosures.clear()
GroovyBackend.instance.registerWorld {
return world ?: (world = new LegacyTestWorld1())
}
Finally, run configurations would be like this (with different glues):
// Real setup
glue = { "classpath:test/steps", "classpath:test/real", },
// Legacy setup
glue = { "classpath:test/steps", "classpath:test/legacy", },
