Gradle: Can't access configuration defined in one project from another project - Groovy

I'm pretty new to both gradle and groovy.
The Problem
I've got a very simple multi-project structure just like below:
Root project 'gradle_test'
+--- Project ':sub1'
\--- Project ':sub2'
This is what the 'build.gradle' file looks like for the sub1 project:
// build.gradle of sub1 project
task testConfiguration {
    println project(':sub2').configurations.sub2FooConfiguration
}
And finally, this is the 'build.gradle' file of the sub2 project:
// build.gradle of sub2 project
configurations {
    sub2FooConfiguration
}
Very minimal. Now, if I run gradle :sub1:testConfiguration, I get the following error:
A problem occurred evaluating project ':sub1'.
> Could not find property 'sub2FooConfiguration' on configuration container.
However, everything just works if the testConfiguration task in sub1 project is modified like this:
// notice the "<<" (I believe this is calling the 'doLast' method on the task instance)
task testConfiguration << {
    println project(':sub2').configurations.sub2FooConfiguration
}
The Question
My assumption is that the difference between the two versions of the 'testConfiguration' task is that in the first instance a configuration closure is passed to the task, whereas in the modified version a 'normal' closure is passed to the 'doLast' method.
So, first of all, is my assumption correct?
Secondly, why don't I have access to the 'sub2' project in the first instance?
And finally, is it possible to access the 'sub2' project in the first instance (i.e. in the configuration closure)?
[Update] A Further Question
Given the accepted answer provided by "Invisible Arrow", I'd like to ask a further question about the best practice for referencing a configuration of another project (i.e. a task in sub1 needs to use an archive produced by the sub2 project).
Should I declare an evaluation dependency between the two projects?
Or should I only reference sub2's configuration at execution time (e.g. in doLast())?
Or should I create a dependency configuration between the two projects?

Yes, there is a difference between the two.
There are essentially 3 phases to a build which are Initialization, Configuration and Execution. This is described in detail under the Build Lifecycle chapter in the Gradle documentation.
In your case, the first instance falls under the Configuration phase, which is always evaluated irrespective of whether the task is executed or not. That means all statements within the closure are executed when you start a build.
task testConfiguration {
    // This always runs during a build,
    // irrespective of whether the task is executed or not
    println project(':sub2').configurations.sub2FooConfiguration
}
The second instance falls under the Execution phase. Note that << is a shorthand for doLast, and this closure is called when the task is executed.
task testConfiguration << {
    // Called during actual execution of the task,
    // and called only if the task was scheduled to be executed.
    // Note that the Configuration phase for both projects is complete at this point,
    // which is why :sub1 is able to access :sub2's configurations.sub2FooConfiguration
    println project(':sub2').configurations.sub2FooConfiguration
}
Now, as to why the first instance gave the error: the Configuration phase of the sub2 project had not been evaluated yet, so sub2FooConfiguration had not been created.
Why? Because there is no explicit evaluation dependency between sub1 and sub2. In your case, sub1 needs sub2 as an evaluation dependency, so we can add that dependency in sub1's build.gradle before the task declaration, as follows:
evaluationDependsOn(':sub2')

task testConfiguration {
    println project(':sub2').configurations.sub2FooConfiguration
}
This will ensure that sub2 is always evaluated before sub1 (evaluation meaning the Configuration phase for the projects). sub1 will now be able to access configurations.sub2FooConfiguration in the task declaration closure. This is explained in detail in the Multi-project Builds chapter.
In the second instance, configurations.sub2FooConfiguration was accessible, as the call was in the execution block of the task (which is after the Configuration phase for both projects).
PS: Note that if you reversed the names of the projects, then the first instance might actually work, as Gradle configures projects alphabetically if there are no explicit dependencies. But, of course, you should never rely on this and ensure that the dependencies between projects are declared explicitly.
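On the follow-up question, here is one hedged sketch of the usual approach: instead of reaching into sub2's project model, declare a dependency configuration in sub1 that points at sub2's configuration and resolve it only at execution time. The configuration and task names below are illustrative, not from the original post:
// build.gradle of sub1 project (sketch)
configurations {
    sub2Artifacts
}

dependencies {
    // wire sub1 to whatever :sub2 publishes on its sub2FooConfiguration
    sub2Artifacts project(path: ':sub2', configuration: 'sub2FooConfiguration')
}

task useSub2Artifacts {
    doLast {
        // resolution happens here, at execution time
        println configurations.sub2Artifacts.files
    }
}
This avoids the need for an explicit evaluationDependsOn, since the configuration is only resolved after both projects have been configured.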

Related

Output resources using Groovy ASTTransformer

I've written a number of Java annotation processors that write some arbitrary data to text files that will be included in my class directory / jar file. I typically use code that looks like this:
final OutputStream out = processingEnv
        .getFiler()
        .createResource(StandardLocation.CLASS_OUTPUT, "", "myFile")
        .openOutputStream();
I'm trying to do something similar in a groovy ASTTransformation. I've tried adding a new source file but that (expectedly) must be valid groovy. How do I write arbitrary resources from an ASTTransformation? Is it even possible?
As part of implementing your ASTTransformation, you need to implement the void visit(ASTNode[] nodes, SourceUnit source) method. In it you can call source.getConfiguration().getTargetDirectory(), which will return your build output directory (e.g. /Users/skissane/my-groovy-project/build/classes/groovy/main). You can then write your resources there, and whatever is packaging them into the JAR (such as Gradle) should pick them up from that directory.
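A minimal sketch of that idea, with illustrative class and file names (not from the original answer):
import org.codehaus.groovy.ast.ASTNode
import org.codehaus.groovy.control.CompilePhase
import org.codehaus.groovy.control.SourceUnit
import org.codehaus.groovy.transform.ASTTransformation
import org.codehaus.groovy.transform.GroovyASTTransformation

@GroovyASTTransformation(phase = CompilePhase.CANONICALIZATION)
class WriteResourceTransformation implements ASTTransformation {
    @Override
    void visit(ASTNode[] nodes, SourceUnit source) {
        // the compiler's configured output directory, e.g. build/classes/groovy/main
        File targetDir = source.configuration.targetDirectory
        if (targetDir != null) {          // may be null when compiling purely in memory
            File resource = new File(targetDir, 'myFile')
            resource.parentFile.mkdirs()
            resource.text = 'arbitrary data'
        }
    }
}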
In my case, I wanted to delay writing the resources until the OUTPUT phase, since I was creating META-INF/services files and wanted to wait until I'd seen all the annotated classes before writing them (otherwise I'd be repeatedly adding to them for each annotated class). So I also implemented CompilationUnitAware, and in my setCompilationUnit method I call unit.addPhaseOperation() and pass it a method reference to run during OUTPUT. Note that if you are using a local ASTTransformation, setCompilationUnit will be called multiple times (each time on a new instance of your transformation class); to avoid adding the phase operation repeatedly, I used a map in a static field to track whether I'd already seen a given CompilationUnit. The operation registered with addPhaseOperation is called once per output class, so I used a boolean field to make sure I only wrote the resource files out once.
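A rough sketch of that shape, assuming Groovy 3.x (where CompilationUnit exposes the IPrimaryClassNodeOperation functional interface); the class, service, and file names here are illustrative, not the original poster's code:
import org.codehaus.groovy.ast.ASTNode
import org.codehaus.groovy.ast.ClassNode
import org.codehaus.groovy.classgen.GeneratorContext
import org.codehaus.groovy.control.CompilationUnit
import org.codehaus.groovy.control.CompilePhase
import org.codehaus.groovy.control.Phases
import org.codehaus.groovy.control.SourceUnit
import org.codehaus.groovy.transform.ASTTransformation
import org.codehaus.groovy.transform.GroovyASTTransformation
import groovy.transform.CompilationUnitAware

@GroovyASTTransformation(phase = CompilePhase.CANONICALIZATION)
class ServicesFileTransformation implements ASTTransformation, CompilationUnitAware {

    // a local transformation is instantiated repeatedly, so remember which
    // CompilationUnits we have already registered a phase operation with
    private static final Set<CompilationUnit> registered =
            Collections.synchronizedSet(new HashSet<CompilationUnit>())
    private static final Set<String> implementations =
            Collections.synchronizedSet(new HashSet<String>())
    private static volatile boolean written = false

    void setCompilationUnit(CompilationUnit unit) {
        if (registered.add(unit)) {
            unit.addPhaseOperation({ SourceUnit src, GeneratorContext ctx, ClassNode cls ->
                writeServicesFile(src)
            } as CompilationUnit.IPrimaryClassNodeOperation, Phases.OUTPUT)
        }
    }

    void visit(ASTNode[] nodes, SourceUnit source) {
        // nodes[1] is the annotated class for a local transformation
        implementations << ((ClassNode) nodes[1]).name
    }

    private static void writeServicesFile(SourceUnit source) {
        if (written) return            // the OUTPUT operation runs once per class
        written = true
        File dir = new File(source.configuration.targetDirectory, 'META-INF/services')
        dir.mkdirs()
        new File(dir, 'com.example.MyService').text = implementations.join('\n')
    }
}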
Doing this caused a warning to be printed:
> Task :compileGroovy
warning: Implicitly compiled files were not subject to annotation processing.
Use -implicit to specify a policy for implicit compilation.
1 warning
Adding this to build.gradle made the warning go away:
compileGroovy {
    options.compilerArgs += ['-implicit:none']
}

Set the project properties in subclassed gradle task

I am defining a Gradle task "launchIPad2Simulator" that subclasses another, already defined task, "launchIPadSimulator", from the robovm Gradle plugin. The goal is to set the project properties that define which simulator will run.
// Run the IPad2 simulator
task launchIPad2Simulator2(type: org.robovm.gradle.tasks.IPadSimulatorTask) {
    project.setProperty("robovm.device.name", "iPad-2")
    project.setProperty("robovm.arch", "x86")
}
But the problem is that I must first define the properties in the gradle.properties file. They don't even need to have any value assigned. This is the whole content of the gradle.properties file:
robovm.device.name
robovm.arch
I would rather keep the gradle.properties file empty, but if the above task is then run, this error is shown: Error:(112, 0) No such property: robovm.device.name for class: org.gradle.api.internal.project.DefaultProject_Decorated.
Also, if the properties are only defined in the task as follows (with gradle.properties empty), they are ignored:
// Run the IPad2 simulator
task launchIPad2Simulator2(type: org.robovm.gradle.tasks.IPadSimulatorTask) {
    project.properties.put("robovm.device.name", "iPad-2")
    project.properties.put("robovm.arch", "x86")
}
So what is the correct way to dynamically set the project properties in a subclassed task?
=== Edit ===
OK, now I see that setting the project properties is not a good approach anyway, because with multiple tasks they get overwritten. So maybe these shouldn't be project properties in the first place.
The temporary solution for now is to invoke the task from the command line:
// simulator with properties launched from command line
task launchIPad2Simulator1(type: Exec) {
    commandLine 'gradle', '-Probovm.device.name=iPad-2', '-Probovm.arch=x86', 'launchIPadSimulator'
}
Try:
task launchIPad2Simulator2(type: org.robovm.gradle.tasks.IPadSimulatorTask) {
    project.ext."robovm.device.name" = "iPad-2"
    project.ext."robovm.arch" = "x86"
}
This is the Gradle syntax for adding dynamic properties to the project object.
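As an illustrative follow-up (assuming the plugin looks these values up as ordinary project properties, which is not verified here), the values set this way can be read back through the normal property APIs:
// illustrative only: reading the dynamic properties back
println project.ext.'robovm.device.name'     // iPad-2
println project.property('robovm.arch')      // x86
println project.hasProperty('robovm.arch')   // true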

Gradle script execution semantics

I am trying to understand how exactly is the following Gradle script executed:
task loadTestData(dependsOn: ['fakeTask', createSchema])
I assume that:
loadTestData is a method call
dependsOn is a named argument
But on which object is the method called?
Actually, a Task is being executed as part of the Gradle build workflow. Tasks in Gradle take no parameters, but they can operate on the system/environment/build variables.
dependsOn, which is a property of the Task, declares the tasks that the defined task depends on.
In this case you declare that the task loadTestData depends on the tasks fakeTask and createSchema.
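To answer the "which object" part more concretely: unqualified method calls in a build script are delegated to the underlying Project instance (the next answer below touches on this as well), so the declaration is roughly equivalent to this hedged, explicit form:
// roughly the desugared form, calling Project.task(Map, String) explicitly
// (createSchema is assumed to be an existing task object, as in the question)
project.task([dependsOn: ['fakeTask', createSchema]], 'loadTestData')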

Name lookup of the local variable

I have a question about name lookup in Groovy. Consider the following build script:
apply([plugin: 'java'])
def dependenciesClosure = {
    delegate.ext.name = "DelegateName"
    println name
    println delegate.toString()
    project(':api')
}
dependenciesClosure();
dependencies(dependenciesClosure)
The gradle check command produces the following output:
webapp
project ':webapp'
DelegateName
org.gradle.api.internal.artifacts.dsl.dependencies.DefaultDependencyHandler_Decorated@397ef2
Taking that into account, non-local variable name lookup is performed on the delegate object first and, if the name is not found, is performed on the global project object. Is that correct?
Correct, Gradle uses a delegate-first resolve strategy within configuration closures. In this case, the delegate is an instance of DependencyHandler. You can see what any given block delegates to by looking at the Gradle DSL documentation.
Edit: To confirm your last point, yes, the build script itself delegates to an instance of Project.
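A small self-contained Groovy sketch of that delegate-first lookup (the class and property names are made up for illustration):
class Handler {
    String label = "from the delegate"
}

def c = {
    // 'label' is not a local variable, so Groovy consults the closure's
    // delegate (and owner) to resolve it
    println label
}
c.delegate = new Handler()
c.resolveStrategy = Closure.DELEGATE_FIRST
c()   // prints "from the delegate"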

Gradle Incremental Tasks: Adding already generated code to the classpath

I have created a custom Gradle task that generates some Java code. To optimize execution this plugin uses the @InputDirectory and @OutputDirectory annotations, so that the code does not have to be regenerated on each build.
However, I do want this task to add the generated code to the classpath. I am currently doing this as follows:
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.InputDirectory
import org.gradle.api.tasks.OutputDirectory
import org.gradle.api.tasks.TaskAction

class JaxbTask extends DefaultTask {
    @OutputDirectory
    File destdir = project.file("${project.buildDir}/generated-sources/mygen")

    @InputDirectory
    File schemaRoot = project.file("${project.projectDir}/src/main/resources/myschema/")

    @TaskAction
    def main() {
        // ..
        project.sourceSets.main.java.srcDirs += destdir
        // ..
    }
}
The problem is that the TaskAction is not executed, and the source directory is not added to the compile path, when the generated code is up to date. Is there any way to make sure that the modification of the source path is always performed?
A task should never try to configure the build model. Configuration is the responsibility of build scripts and plugins, and needs to happen in the configuration phase (before any task has run).
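A minimal sketch of that advice, with assumed names (a generateJaxb task, wiring done in build.gradle rather than in a plugin): register the generated directory with the source set at configuration time and keep only the code generation itself in the task action:
// build.gradle (sketch): configure the model up front, generate code in the task action
def generatedDir = file("$buildDir/generated-sources/mygen")

task generateJaxb(type: JaxbTask) {
    destdir = generatedDir
}

// configuration-time wiring: always on the compile path, even when the task is up to date
sourceSets.main.java.srcDirs += generatedDir
compileJava.dependsOn generateJaxb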
